Saturday, December 24, 2016

Building Python 3.6 from Source on OpenSUSE like an idiot


Recently I was humbled to find out that I am less good at building things from source than I used to be.  Let me state first that what I am doing below is not something most Python users should ever want to do.  On OpenSUSE and other Linux distributions, we install our software using packages. That's your normal first way of getting python.

Your normal secondary way of getting python, and something a lot of serious python developers do, is to use pyenv, which is similar in nature to the rbenv tool ruby developers use. Both installing your distribution's Python packages and installing your own local python with pyenv are generally recommended ways of getting python on Linux and unix systems.

This way I'm showing is generally NOT recommended.

Enormous things like Python are not normally built from source by most users on most common Linux distros. If you're a user of Arch-style or other source-driven Linux distributions, or a FreeBSD-style ports BSD unix user, you're probably more used to configuring and installing things from source. The instructions below took me a long time to figure out.

First, I have to admit that I did something really stupid: I installed directly into /usr. Never, never do that. That area of the filesystem belongs to "openSUSE package management" land; it's a "system folder". There is no "make uninstall" in python, or in most other classic C and unix "configure + make + make install" source tarballs.

The sensible thing to do is use a prefix like /opt/mypythonjunk, which I have chosen in the example below. Then you can wipe it all out later with rm -rf /opt/mypythonjunk.

After the obvious first steps (download the tarball, unpack it, and cd into the directory containing the source code), here are the configuration and build steps I used.

You need some stuff installed on your computer. I'm not able to make a complete list, but it includes all the developer basics: gcc, make, binutils, etc. You also need the "devel" packages, which include the libraries and headers for things. On openSUSE that means "zypper install ncurses-devel", and similar things. You must install all that stuff before you can build from sources.
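
Here's a rough starting list I would try. The package names are from memory and may differ slightly between openSUSE releases, so treat this as a sketch and use zypper search to confirm any name that fails:

# Hedged starting point: names may vary slightly by openSUSE release.
sudo zypper install gcc make automake binutils \
    ncurses-devel readline-devel zlib-devel \
    libopenssl-devel libbz2-devel libffi-devel \
    sqlite3-devel xz-devel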

I was working from ~/python/Python-3.6.0  but you can work from anywhere you like.



make clean

sudo mkdir -p /opt/mypythonjunk

# --enable-optimizations might be useful for some real world uses.
# --with-pydebug might be useful for some development and test uses.
./configure --with-pydebug --enable-shared --prefix=/opt/mypythonjunk

# Uncomment the chosen optional-module lines in Modules/Setup so they get built:
for m in _struct binascii unicodedata _posixsubprocess math pyexpat _md5 _sha1 _sha256 _sha512 select _random _socket zlib fcntl; do
    sed -i "s/^#\(${m}\)/\1/" Modules/Setup
done

make

sudo make install

sudo ln -s /opt/mypythonjunk/lib64/python3.6/lib-dynload/ /opt/mypythonjunk/lib/python3.6/lib-dynload
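
Once the install and the symlink are done, a quick smoke test confirms the dynamically loaded modules can be found. This is my own sanity check, not an official step, and the paths assume the /opt/mypythonjunk prefix:

# If this prints without a traceback, lib-dynload is being found.
# An --enable-shared build may also need help finding libpython itself:
LD_LIBRARY_PATH=/opt/mypythonjunk/lib /opt/mypythonjunk/bin/python3.6 -c "import readline, zlib, math; print('modules OK')"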


When you read the above, mentally translate /opt/mypythonjunk to /usr, and imagine the mess I made when I did that. Be smart. Don't do that.

There are instructions on Python's web site. Read those.  But they're not enough for some distros, and OpenSUSE is one of them.

The initial make clean thing is pretty standard advice in case you ran a previous build and your working copy is dirty. It's harmless if your working copy isn't dirty. 

Next, I'm making a single output folder to contain all installed stuff.  I'm configuring with a few flags that I discovered are useful in various cases, and I'm specifying where the installation target folder should be.

The for m... part is something that someone on IRC pointed me at. It's really just a scripted way of editing the file Modules/Setup; the edits uncomment, and thereby enable, some of the optional Python modules you may or may not want.
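
To make that concrete, here is roughly what the loop does to one line of Modules/Setup (the zlib line, quoted from memory; check your own copy for the exact text):

# Before: the module line is commented out, so the module is not built.
#zlib zlibmodule.c -I$(prefix)/include -L$(exec_prefix)/lib -lz
# After the sed pass: the leading '#' is gone, so the module gets built.
zlib zlibmodule.c -I$(prefix)/include -L$(exec_prefix)/lib -lz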

If you run make it will probably work, but if it fails in some bizarre way, try again and capture the full output (for example, make 2>&1 | tee build.log), and save your logs for later analysis. If you want to ask a question on #python on irc, you will want to paste your make output logs into the python channel's own pastebin service.

Something that surprised me during the make install phase was seeing a lot of gcc compilation and tests being run (re-run? recompiled?). I'm told on the IRC #python channel that this is due to something called PGO (profile guided optimization). It should occur if you used --enable-optimizations during the configure phase.

What tripped me up was that the python binary installed and ran, but could not find readline or any of the other python core modules. The ln -s line above is the fix for that, and it took two days of scratching my head to realize I should do this:


/opt/something/bin/python3.6
>>> import sys
>>> sys.path

Hopefully it's obvious that the path above is not the correct path for you; it's a placeholder for the actual fully qualified path of your own installed python binary.

Then look at the output, find which directories it's searching, and go see if those directories exist. If not, the thing to do is either edit sys.path (no thank you!) or hack around it (yes please!) with the symlink command (ln -s).

Another trick that helps get a recalcitrant python to start up: you can force a PYTHONHOME value for a single command:

PYTHONHOME=/opt/something /opt/something/bin/python3.6

And yet another fun trick is to start python up without installing it, like this:

LD_LIBRARY_PATH=/home/wpostma/python/Python-3.6.0rc1 ./python

I hope these tips help. I'm posting this on my blog so that the Mighty Google will index it, and other people who got as frustrated as I did trying to get this to work might pick up some additional tips. Corrections and suggestions are welcome.

Wednesday, November 23, 2016

O'Reilly Book Bundle!

There's a pretty awesome pay-what-you-want sale over at Humble Book Bundle, featuring O'Reilly Unix books. One of my all-time favorites, which I own in the original dead-tree version, Unix Power Tools, is included if you pay $8 or more. If you pay $15 or more, you also get some pretty awesome networking tomes, including O'Reilly's excellent TCP/IP and DNS/BIND books, and a couple more.

I really cannot recommend Unix Power Tools highly enough, so I'll just cover a few reasons why I think everyone should read this book, even though it was last revised in 2002. It remains a fantastic tome, and well worth reading.



Three top reasons to read it:


1.  It has the best-written explanation I have ever read of the zen of Unix-like systems: the composability paradigm, and the sometimes baffling array of choices you have, such as which of the various shells to use.

2. It covers most of what you need to be a competent user, developer, or system administrator on Unix-like systems, Linux or otherwise.

3. It will get you started in a hundred different things, whether it's awk or sed, or vi (lately vim), shell scripting, or the myriad of standard utilities for finding things (learn when to use find, when to use grep, when to use locate/updatedb, and others) and so many more things.

This was the book that taught me enough that I started to actually grok the unix way. Before this book it was a random walk through chaos and entropy. After this book, it was still a random walk through chaos and entropy, but with a wry smile and a sense of understanding.

Even if you think you're a Unix expert, you'll learn lots from this book. Mine has been taken down and thumbed through dozens of times, and I've read it cover to cover twice. Go grab it! And give the fine folks at O'Reilly some love; tell them linux code monkey sent ya.


Update: Maybe you're one of the millions of people who were happily working away in a Windows-only world until someone had to go and bring Linux into things, and now you're expected to SSH into a wild Linux boxen and remotely deal with it, and that scares you. There's a book in the bundle that starts exactly there: installing Putty on Windows and learning what to do when you land in the mysterious world of the bash shell on a remote Linux system that you now have to learn, and may even learn to enjoy. If this sounds like you, check out the Ten Steps To Linux Survival title in the Unix bundle above.

Thursday, November 17, 2016

Upgrading OpenSuSE LEAP 42.1 to 42.2

For some reason I can't find this documented anywhere as a step-by-step set of instructions that applies only to LEAP 42.1 to 42.2. The SuSE docs are good, but they have to cover a bunch of versions and a lot of edge cases. So, since this is the web, I puzzled, googled, puzzled, and then tried stuff. It worked, and it's pretty easy, actually, but it seemed strange to me as I'm more used to Debian, Ubuntu, and Fedora.

It seems odd in 2016 to have to use sed to modify a text file so you can update from one major release of OpenSuSE to another. (You could also edit the zypper repo files by hand, if the following seems too convenient and easy.) Ubuntu and Debian are a bit friendlier when it's upgrade time. I was kind of floored that YaST didn't have a "new OpenSuSE version detected, upgrade?" menu.

Before starting you should check your drive space and clean up any old snapshots using snapper. If your system has btrfs as the root filesystem type, I suggest zapping old snapshots until you have at least 12 gigs of free disk space. You cannot believe the output of df if you want to know your real free space on SuSE Linux systems with btrfs; use this command as root:


btrfs filesystem show

Note that it doesn't show you your actual free space; you have to do a bit of mental math. In the example below there is a little less than 10 gigs free (size minus FS bytes used), probably enough for your upgrade to work. I always like to have 12 to 15 gigs free before I do any system upgrade, so the situation below is borderline to me:

Label: none uuid: b3b42cba-c08e-4401-9382-6db379176a1f
Total devices 1 FS bytes used 90.21GB
devid 1 size 100.00GB used 85.29GB path /dev/sda4


On the same system above, df -h might report the free gigabytes to be much higher than that, and that is why you cannot trust df: it doesn't work right with btrfs. And to make life even weirder, the command btrfs filesystem df /, where df (disk free space) is right in the name, does not actually report free space, only totals and usage. Sometimes I really want to reach through the internet and ask the authors of tools, "what were you thinking?"

There is a way to get actual free space from btrfs, which is this:

btrfs filesystem usage -h /

Besides giving me the unallocated free space in gigabytes, it also gives me a lot of stuff I don't care about.
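
If you only want that one number, filtering the output works. A tiny sketch; the exact field labels can vary between btrfs-progs versions, so eyeball the full report first:

# Pull just the unallocated line out of the verbose usage report.
sudo btrfs filesystem usage -h / | grep -i unallocated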

Also before starting, make sure your system is up to date (zypper up), then list your zypper repos (zypper lr) and disable any that are third party. I disabled google-chrome before upgrading.

I also have a personal habit of archiving my entire pre-update /etc with tar (sudo tar czvf /root/etc-backup.tgz /etc).

As root, the system upgrade commands to go from 42.1 to 42.2 are:


sudo sed -i 's/42\.1/42\.2/g' /etc/zypp/repos.d/*
zypper ref
zypper dup


Further decisions/actions may be required, usually involving selection of packages to be de-installed to avoid a broken system. For example:


Reading installed packages...
Computing distribution upgrade...
2 Problems:
Problem: libkdevplatform8-1.7.1-1.3.x86_64 requires kdevplatform = 1.7.1, but this requirement cannot be provided
Problem: kdevplatform-lang-5.0.1-1.1.noarch requires kdevplatform = 5.0.1, but this requirement cannot be provided

Problem: libkdevplatform8-1.7.1-1.3.x86_64 requires kdevplatform = 1.7.1, but this requirement cannot be provided
  deleted providers: kdevplatform-1.7.1-1.3.x86_64
 Solution 1: Following actions will be done:
  deinstallation of kdevelop4-plugin-cppsupport-4.7.1-1.4.x86_64
  deinstallation of kdevelop4-4.7.1-1.4.x86_64
  deinstallation of kdevelop4-lang-4.7.1-1.4.noarch
 Solution 2: keep obsolete kdevplatform-1.7.1-1.3.x86_64
 Solution 3: break libkdevplatform8-1.7.1-1.3.x86_64 by ignoring some of its dependencies


Choose from above solutions by number or skip, retry or cancel [1/2/3/s/r/c] (c): 


I chose 1 whenever given a choice like the one above, because I am pretty sure I can live without some bit of stuff (kdevelop and its various bits) until I figure out how to get it reinstalled and working.

Next I get to the big download stage.  I have to read and accept a few license agreements (page through or hit q, then type yes).


2307 packages to upgrade, 27 to downgrade, 230 new, 11 to reinstall, 89 to
remove, 4  to change vendor, 1 to change arch.
Overall download size: 1.98 GiB. Already cached: 0 B. After the operation, 517.6
MiB will be freed.
Continue? [y/n/? shows all options] (y): 


After all the EULA dances, it's time to get a cup of tea and wait for about 2 gigs of stuff to download through my rickety Canadian cable internet.

Next post will be on the joys of what worked and didn't work after this finishes.

Post-Script: Everything seems quite stable after the upgrade. The official wiki docs on "System Upgrade" consist of a series of interleaved bits of advice, some of which apply to upgrading from pre-LEAP to LEAP, and some of which still generally apply to 42.1 to 42.2 LEAP updates. On the SUSE irc chat, Peter Linnell has informed me that starting in Tumbleweed there is a new package to make this process easier, called yast2-wagon.

Sunday, November 6, 2016

Gitlab All The Things

Gitlab is awesome and you should be using it for everything.   In this quick blog post I will explain why I think that, and a few quick ways you can try it out for yourself.



1.  Distributed version control is the future. 

I have been for many years a passionate user of Mercurial, and not a passionate fan of Git. I was right about one thing: distributed version control does enable ways of working which, once you adjust to their small differences, are so powerful and useful that you will probably not want to go back. I was wrong about another thing, though: I thought that a personal choice of technology which is not the dominant or popular tool in its category is a matter of no consequence. Wrong. Really wrong.

Subversion is a dead end, as are Perforce, Team Foundation Server, ClearCase, Visual SourceSafe, and every other pre-DVCS tool. Centralized version control systems are an impediment to remote distributed teams. In the future all teams will be at least partly remote and distributed, and in fact, that future is already mostly here.

Mercurial is a cool tool, and it actually has some use cases, and luckily it has git interop capabilities. But I'm done with mercurial.  The future is git.

2. Git is the lingua-franca of software development.

The fact is that git has won, and that Git has a tooling ecosystem and a developer community that are essential. Even if you keep using Mercurial on your own computer, you are still going to have to interact with Git repositories, fork them on various git hosting sites, and learn Git. If you want to know two DVCSs, Mercurial is still a great "second" version control tool; I still use it for a few personal projects that I don't collaborate with others on. But I have switched to Git, overcome my irrational anti-git bias, and, more than that, come to love it.

3.  Github is important but it is not where you should keep your code or your company's code, or your open source project's code,  as Github's current near monopoly on git hosting is harmful.

This is perhaps a controversial statement, and perhaps seems fractious. I'll keep my argument simple, and consistent with what others who believe as I do argue:

Large Enterprises: If you are a commercial company, you should host your own server and take responsibility for the security and durability (disaster recovery) of your most critical asset.   Enterprises should run their own Gitlab. Anything less is irresponsible.

Small Enterprises: Small businesses can easily run their own Gitlab, or can pay someone to host a git server on their behalf, and can find much better deals than Github offers. If ethical arguments do not sway you, how about an appeal to thrift, always a powerful motivator for a small business: you can host your private git repositories on gitlab for free. Why are you paying github?

Open Source Project Leaders and Open Source Advocates: Why are you preaching open source collaboration while hosting your open source project on a site which is powered by a closed source technology, and which holds your whole way of working in one datacenter owned by one company? In what draconian alternative future are we living, where the meaning of Distributed has largely come to mean that all roads lead to one CENTRALIZED github? I am not against companies making money. Gitlab is owned by a company. But that company values transparency, publishes almost all of its main product as free open source software (Gitlab Community Edition), and then has a plan to commercialize an enterprise product. In an ideal world there would be six to twenty large sites hosting all the open source projects, and github would be only ONE of them. I don't want to crush github; I am just unhappy that a giant closed source commercial elephant is hogging an unfair amount of the market. I happen to like Bitbucket very much, and would be much happier if Github, Gitlab, and Bitbucket were approximately equal in size and income. It bothers me that it has become almost encoded into the culture of NodeJS, Go, and Ruby coders that their shared libraries/packages/modules must be hosted on Github. That's bad and should be fixed.

4.  Gitlab is the best git server there is.

I love great tools. I think Gitlab is the best git server there is. It has built-in merge request (pull request) handling, built-in continuous integration, a built-in issue tracker, a built-in kanban-style visual issue board for moving issues through workflow stages, a built-in chat server (a slack alternative, powered by mattermost), and so much more that I could spend four or five blog posts on it. But in spite of the incredible depth of the features in Gitlab, getting started with it is incredibly easy.


5. Gitlab is also the best FREE and OPEN SOURCE git server there is.

I already said this above, but I feel quite passionate about it so I'm going to restate it a different way.  The future of trustworthy software is to hide nothing.  Transparency is a corporate virtue in an information age, and open source software is the right way to create software which can be trusted. Important tools and infrastructure should be built from inspectable parts. I include all tools used to write software in this.  All software development tools you wish to trust utterly should be open source.

That includes github, and github is only going to open its source if we force it to.  This requires you to stop using github. YOU and YOU and YOU.

Your version control software, and the operating system it runs on should be open source.  You should not host your version control server on windows. It should be hosted on Linux.  It should be a self-hosted or cloud hosted Gitlab instance.  Or you can use the public gitlab.com to host your open source project.   Be fully open source. Don't host your open projects on closed proprietary services.

I have nothing specifically against github or bitbucket, other than that I believe they are only appropriate for closed source commercial companies to use to host closed source projects and proprietary technologies. Use proprietary tech to host your proprietary code. Fair and good.

Closed source software used to write software can no longer be trusted. We cannot afford to trust it. Even if we trust the people who create it, we cannot inspect it and find its security weak points; we cannot understand our own ability to protect our customers, our employees, and the general public unless we are able to verify the correctness of any element of the tools we choose to trust. Just as we do not inspect every elevator before we ride in it, I do not believe that open source advocates actually read the entire linux kernel, and every element of a linux system. What we are instead doing is evolving more rapidly into a state which already is, and always will be, more secure than running a closed source commercial version control tool, or operating system.

Do not imagine me to be saying more than I am saying here. Commercial software is fine, up to the point where you have to trust your entire company, or the entire internet to run on it.   I also do not want a surgeon to use a Windows laptop running any Microsoft or non-Microsoft anti-virus software to visualize my internals while he performs surgery on me.

Now I'm actually NOT trashing Microsoft or my employer here, or yours. I'm saying that if Microsoft, or my employer, were to receive, in good faith, a request from their customers to audit their codebases, I would like to believe that they (makers and sellers of closed source software) would grant access to the codebase used for their products to qualified auditors who want to search for security flaws and back doors; I also believe they will soon be legally obliged to do so. But the sad news is that those legal rights, which I believe will materialize, will be sadly inadequate, because looking at Windows source code is not the same as knowing that this is the exact, and only, code that is present on your Windows 10 machine. Microsoft is loading new stuff on there every hour of every day, for all you know, and you can't turn it off or stop it.

OpenSSH vulnerabilities of the last few years notwithstanding, I would trust OpenSSH over any closed source SSH implementation, hands down. No, many eyes do not make ALL bugs shallow. But neither is closed source safe from attack, nor can it be made safe. Networked systems must be made safe, and I believe that only a collaborative open source model will work to make systems actually safe. I do not believe that any single company, even Microsoft, will be able to internally work to secure the internet. Doing that will require a collaborative effort by everybody.

We all matter. Open source matters because people matter. Open source is a fundamental democratic principle of an information age.

The cost to build closed source software is going to increase radically, and the cost of supporting and auditing it for safety is only going to increase too. Because of that, I believe that many software companies will over time evolve toward service-provider (SaaS) models, which allow companies to get paid and to hire great staff to work on their products, but that in the end it will become cheaper to maintain an open source ecosystem than a closed source one. I believe that the public will move away from trusting giant binary blobs, and that our culture will shift to a fundamental distrust of binary blobs.

I believe that companies that are farther ahead on that curve will adapt to the coming future better than those who deny it.  I believe that companies who embrace open source will save their companies.

And that little rant is really a way of saying more about why I think Gitlab is great and Github is a problem. I do not, and cannot, trust Github. I can't trust it not to abuse its position and power. I can't actually trust it to stay online either. And neither should you. And neither should the tools that assume that their libraries or collections of gems should all be hosted on github. Even github shouldn't be happy having a bullseye that large on their backs.

Put the distributed back in distributed version control.  Diversity is your best defence.

Get gitlab today, and use it.  I'm from the future, I'm right.  Please listen.

What are you waiting for, smart person? Go try it now.


1.   Start by creating an account on Gitlab.com so you don't have to install anything to just try it out and get familiar with its features.   If you know github, you already know how to use gitlab.com so this won't take long.

2.   If you have a modicum of Linux knowledge, or can even read and type in a few bash commands, you can spin up a fresh ubuntu 16 LTS VM on your server virtualization hardware, or if you haven't got any, spin one up on azure or amazon or google cloud, and follow the easy instructions to set up your own gitlab instance. I recommend the "omnibus" install; it's awesome. (A sketch of what that looked like for me appears after this list.)

3. Learn a bit from the excellent documentation.   If you have questions you can ask them on the community forum.
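
For the omnibus route mentioned in step 2, this is roughly the Ubuntu sequence from Gitlab's own instructions, reproduced from memory; treat it as a sketch and follow the current official docs, since URLs and steps change:

# Prerequisites, then GitLab CE from the omnibus package repository.
sudo apt-get update
sudo apt-get install -y curl openssh-server ca-certificates
curl -sS https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | sudo bash
sudo apt-get install gitlab-ce
# Set external_url in /etc/gitlab/gitlab.rb, then:
sudo gitlab-ctl reconfigure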


Saturday, October 22, 2016

On making OpenSUSE LEAP and Tumbleweed My Main At Home Working OS

For the last few months I have been using OpenSUSE Tumbleweed as my main home pc operating system. I love setting up my system with a pragmatic combination of command line tools, plus having YAST, the nice Gnome and KDE Plasma desktops, and the QEMU/KVM based virtualization system, inside which I have been running a Windows VM.

Windows and Linux Guy Goes Linux Only (With Dual Boot for the Kids)

Because I work professionally with Windows, and periodically wish to maintain some of my side projects which run on Windows, I have been running VMs with various Windows versions, lately mostly Windows 10. Why? Because it's the most convenient place for me to run the set of modern Windows development tools I want to run, including Visual Studio 2015. What's terrible about Windows 10 is that it's a real bloated pig of a thing, and unlike Windows 7 you can't shut off the bits that want to take over your disk and your CPU at will. It feels entitled to update or scan, or cache giant downloads, whenever it wants. It feels entitled to use all my disk I/O bandwidth to do silent updates whenever it wants. It feels it's okay to run 100 system services and then tell me that even when I'm the local administrator (formerly the equivalent of the Linux root user) I don't really have root level access to my machine. My machine belongs to Microsoft. That is at the heart of why I relegate Windows to a VM. I don't trust it, and I don't trust Microsoft. Prove me wrong, Satya Nadella. Prove me wrong.

Recently I read that Microsoft has opened up centers to allow governments to look at the code of Microsoft Windows to see if Windows is spying on them. My take on that is that it's a show. Even if Microsoft wanted to prove that it's not spying, how could it do so? The very closed source nature of Windows, and its ability to deliver new stuff quickly behind the scenes, means that even if Microsoft is not currently spying on you (and most likely they're actually not), that could change without any notice to you. The KB31359719 that introduces wide area surveillance by the NSA is only a windows update cycle away. In fact, at a company the size of Microsoft, I'd be surprised if there weren't things going on inside Windows that even Raymond Chen and Mark Russinovich don't know about, let alone Satya Nadella.

So yeah, I still have a windows VM. Because my kids like to play windows games, I also have a second hard drive with a native Windows 10 install that has a large library of Windows games (via Steam), which I can boot into by a secret hand-signal. The secret hand signal is pressing F12 to get to my bios boot device selection menu (as opposed to putting this into grub). If you press F12 you can select drive 3, of the three drives I have installed, and boot directly to Windows 10. I feel this is a good penalty box to place Windows in, but I feel nervous about it. In the past Microsoft has decided that Windows should be the only operating system on any box it runs on, and if they decide to roll out a windows update that upgrades (again) the partition schema, they don't seem to feel the need to tell anybody else out there about it, and might "upgrade" even other drives. That's a paranoid fear, I hear you telling me. No, it's not paranoid; it's the well known "unknown unknown" problem. I don't think that Microsoft's giant test laboratories have a machine with multiple operating systems set up just like mine. I just don't think they do.

Three Drives / No Splitting Drives between Linux and Windows

Right now I have three operating systems on three drives, one per drive. I used to put multiple operating systems on partitions on the same drive, but I have come to think that's a bad idea. Drive 1 (/dev/sda) is an SSD with OpenSUSE LEAP. Drive 2 (/dev/sdb) is a regular 2tb 7200 RPM hard disk with Windows 10 on it, and Drive 3 (/dev/sdc) is a regular 4tb 7200 RPM hard disk with OpenSUSE Tumbleweed and a large pile of QEMU/KVM VMs on it. The most recent time I installed OpenSUSE, I actually detached the sata cables of every drive I didn't want the installer to write to. This seems like a prudent step to me. If I install an operating system and it decides to rewrite partition tables, or update a boot block, and moves things around, it's not guaranteed that all operating systems on that disk will agree about those changes. Strange things like partitions disappearing (because there isn't only one agreed upon way of partitioning disks) can happen. Better not to let operating systems try to share drives these days, when you run as many different operating systems as I do, and when it doesn't seem that anybody can really feasibly test all these combinations. Back in 1996, when I would dual boot Red Hat Linux 4.0 and a DOS+Windows95 system, things were a bit more rational; some of the newer disk partitioning formats we have now did not exist. During my first installation of Tumbleweed, I made the mistake of allowing it to write the bootblock on my first disk, which was at the time a Windows 10 boot disk, installed with whatever defaults Windows 10 chooses when it owns the whole of the only disk that PC has. I don't think it uses DOS 6 era partitions anymore, but I'm not sure.

In the old days when I wanted to see a "snapshot" of my storage devices and their mount points I would often use mount, but these days the output of that, if you followed OpenSUSE LEAP's automatic partitioning/mounting suggestions, is quite huge. Instead I use my new favorite ls-family command, lsblk:

> lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 931.5G  0 disk
├─sda1   8:1    0     2G  0 part [SWAP]
├─sda2   8:2    0    40G  0 part /boot/grub2/x86_64-efi
└─sda3   8:3    0 889.5G  0 part /home
sdb      8:16   0   1.8T  0 disk
├─sdb1   8:17   0   3.7M  0 part
├─sdb2   8:18   0 983.8G  0 part
└─sdb3   8:19   0 878.9G  0 part
sdc      8:32   0 931.5G  0 disk
├─sdc1   8:33   0     1M  0 part
├─sdc2   8:34   0     2G  0 part
├─sdc3   8:35   0    40G  0 part
└─sdc4   8:36   0 889.5G  0 part
sr0     11:0    1  1024M  0 rom

And then, to get my free space: df -h

Nice. Much easier to read than the jumble of junk that comes out of the mount command, because on OpenSuSE there is a btrfs root filesystem with a lot of subvolumes. When you type mount, the output feels to me (linux old timer) like a bit much to digest and comprehend. And did you ever notice that the output of mount skips over the root filesystem on some Linux systems? On OpenSUSE it's in there, but buried among all the subvolume mounts. I'll be honest, I didn't need to know about all the btrfs subvolume mounts, and I would like to know where my root filesystem maps. If anybody knows a command to dump all the mountpoints, omitting subvolume mounts, please do let me know. What the above lsblk obscures is that my root filesystem is mounted SOMEWHERE inside sda2; that is to say, sda2 is not a 40gig partition used only for grub boot efi files. Everything except /home is mounted on btrfs subvolumes inside /dev/sda2, and root (/) is also a subvolume under sda2. To find the root dev, the following could be put into a shell script or an alias; I'd call it rdev, because busybox and some linux shell utils packages used to contain just such a command:

df | egrep "/$" | cut -d " " -f1

That's better than what I came up with, which is just to grep one line out of mount:

mount |grep " / "

Thank you to jcsl on the #suse channel on irc.freenode.net for the above df idea.
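
Putting that idea into the rdev alias I imagined is a one-liner for ~/.bashrc:

# rdev: print the block device backing the root filesystem
alias rdev='df | egrep "/$" | cut -d " " -f1'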

In an upcoming post I'll talk more about the snapper tool and the subtleties of the btrfs copy-on-write filesystem. Yes, OpenSUSE uses btrfs and it's awesome. It really is a "better" filesystem, and it's quite mature and stable now; it's been around for eight years at this point.

I'll just point out one strange fact at this point: don't use df to detect free space; it doesn't report the correct values because it doesn't understand how pooling, subvolumes, and metadata work in btrfs. Unfortunately the authors of btrfs have strange ideas about user-interfaces, and they don't tell you "how many gigs are free". You have to subtract A minus B yourself:

> sudo /usr/sbin/btrfs filesystem show /dev/sda2
Label: none  uuid: 53ccd989-ad5e-4ef3-aff7-07523c971a2d
        Total devices 1 FS bytes used 6.53GiB
        devid    1 size 40.00GiB used 7.79GiB path /dev/sda2


The above output shows me I have about 33 gigs free on a 40 gig btrfs filesystem (40.00 minus 6.53, roughly). Maybe I'll write my own version of df and make a shell alias for it. The value above is similar to the output of df -h /, but it can be significantly different. Be aware.

The OpenSuSE Installers Are Fantastic  (But give OpenSuSE its own hard drive)

During one of my Tumbleweed installs, in about July 2016, I let it write to my main Windows 10 system's boot block. It appeared to wipe the partition table information from that boot block that Windows was using. Windows 10 was rendered inoperable, and as far as I know, when OpenSUSE writes the bootblock it doesn't keep a backup anywhere. I do not recommend allowing OpenSuSE to write to any drive containing a Windows 10 boot sector unless you have a full image of that entire physical disk, and also a separate bootblock backup made with dd. The situation is complex, but as I understand it currently, Windows 10 and OpenSuSE do not share a common understanding of what goes where in your boot block. Since one half of that situation (Windows 10) is a closed source operating system built by a company that hasn't fully documented how everything works in Windows 10's partition system, I'd recommend not sharing drives. I also don't recommend resizing partitions to make Linux fit on your existing drive. If you have a desktop, I recommend you get a second drive. If you have a laptop, everyone (including me) recommends you make a full drive image backup before you try to make a dual boot windows/linux system. Resizing partitions and messing around with dual boot is a bit tricky.

Before you allow anything to write to your boot block make sure you back up your bootblock:

   dd if=/dev/sda of=/root/backup_MBR bs=512 count=1
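
Restoring that backup, should the worst happen, is the same dd reversed. A sketch; triple-check the target device before running anything like this:

   dd if=/root/backup_MBR of=/dev/sda bs=512 count=1   # full MBR: boot code plus partition table
   dd if=/root/backup_MBR of=/dev/sda bs=446 count=1   # boot code only, preserving the on-disk partition table
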
If you didn't listen to me (and everybody else) and you seem to have wiped your partition tables and lost everything on your drive (like I did), you might want to find out about the amazing gparted bootable rescue ISO that you can use to find and recover your data, maybe. Chastened thus, by my unbridled enthusiasm and disregard for all sane practices, I believe the whole experience of wiping my drive was a good experience. We need these little setbacks; they can help keep us humble. Thanks to those awesome people who make the gparted live boot rescue environment (you can use it on a DVD or USB stick), I didn't actually lose any of the data that I had put on there since my previous data backups.

Installing and Using OpenSuSE LEAP with some nVidia Cards May Still Require proprietary NV X Drivers

OpenSuSE LEAP is a free/open/libre project that also has some non-free bits available. These non-free bits come in handy for those of us who need to do stuff like use nVidia video cards that the guys at nVidia don't feel like documenting. All praise to the team behind Nouveau for trying to make the lives of ordinary users better. I won't be buying any more nVidia video cards; I really should have bought an AMD/ATI card. As a proud Canadian, I'm happy that AMD has kept a lot of ATI-years talent around up here near where I live; modern AMD/ATI video cards are a fantastic value, and AMD is a good partner to the OSS world. nVidia is simply not. Linus has castigated them on multiple occasions, and while my new self-imposed personal OSS code of conduct prohibits me from expressing how I feel about nVidia's actions in the language I would prefer, I will state instead that I am unhappy, and that I intend to vote with my feet and my dollars. Down with nVidia and down with proprietary binary patent-encumbered blobs. Up with AMD, who somehow lives in the real world and writes both binary patented drivers for their increasingly irrelevant windows desktop share, and who also seems to find a way to document their stuff and collaborate with OSS so that we end users don't have to suffer with broken desktops. People who go on the OpenSuSE forums and complain about their system not working out of the box with nVidia need to realize that the problem lies with nVidia, and not with OpenSuSE or the wonderful folks at Nouveau. For my particular card, the Nouveau driver now works perfectly in Tumbleweed, so my hope is that this driver makes it into the next stable LEAP release.

In LEAP, my video card (nVidia GeForce GTX 750 series) doesn't work well with Nouveau. The system boots up to a graphical boot/login screen (even though I told the installer to log in automatically), but logging in and starting the KDE or Gnome session fails, dumping you back at the session-manager login screen. Switching to the virtual text consoles (Ctrl+Alt+F1) and then back to the X11 desktop (Ctrl+Alt+F7) hangs X11, leaving no moveable mouse pointer. It is possible to get back to a text login (Ctrl+Alt+F1) and kill the display-manager service and start it again (systemctl restart display-manager). The solution is to use YAST or zypper to install the xf86-driver-nv package and uninstall xf86-driver-nouveau, which also requires that you use YAST or some editing skills to enable the community nvidia rpm repository. After doing that, I also need to blacklist the nouveau driver, re-run mkinitrd as root, and reboot; that blocks the nouveau kernel module from loading. On my system it's necessary to do this, as root:

echo "blacklist nouveau" >> /etc/modprobe.d/50-blacklist.conf && mkinitrd && reboot

New to Me, and Maybe new to You? journalctl is your friend!

One area of Linux system evolution I have gotten behind on is understanding systemd and what it does, and what journalctl does.   I have learned a new favorite command to go with dmesg and less /var/log/Xyz, and it's this:

sudo journalctl -b

That shows the output of the systemd journal (log) for the current boot. This is awesome; it's like a super-dmesg command. For example, on my system the kernel is "tainted" by the evil-empire-of-nVidia, and that output has always been visible in dmesg, but it's also in journalctl, which is not limited to the current bootup and contains more than just kernel log output.

A fully featured less-style pager is invoked automatically by the above command, so from an interactive terminal you can type the command, then type > to get to the end of the buffer, then scroll back, and search with / just as you would if you'd done less /var/log/foo. I feel a little foolish that I've been using systemd based linux systems for years and never bothered to read the awesome manual.
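
A few more standard journalctl invocations I've found handy since then (all documented in man journalctl):

sudo journalctl -b -1                  # the journal from the previous boot
sudo journalctl -u sshd                # entries for a single systemd unit
sudo journalctl -f                     # follow new entries, like tail -f
sudo journalctl --since "1 hour ago"   # recent entries only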

This brings me to the point that OpenSuSE's documentation is fabulous.    No insult to any other community is intended by that.  I'm also quite impressed by the work that the Debian community puts into its distro docs, and the Fedora project is also quite impressive.    Online support on IRC is also important to all three of those communities, and I've hung out quite a bit in the SUSE irc channel, and tried to help a bit.

Minor Annoyances With Workarounds Provided Already

For some reason, since SLES11 (from my googling), the default behaviour of man is insane: it displays duplicate pages for almost everything and asks you to pick which one you want. Who wants this behavior? This might be some "enterprise"-y default that end users don't like. The fix is easy: just add the lines below to ~/.bashrc. The workaround is actually reported to you by man itself when it does this annoying thing. I wonder why they left this behavior as the default?

# man brings up annoying duplicates:
MAN_POSIXLY_CORRECT=1;
export MAN_POSIXLY_CORRECT

The second most annoying thing is that single clicking opens folders in the KDE Plasma file-explorer (Dolphin). I find that annoying. The default double-click behavior in windows actually goes back to the 1987 IBM Common User Access documents, which describe the standards used in the MS-DOS Executive (Shell) and the IBM OS/2 Presentation Manager shell, and which were copied carefully (and with deep understanding) by Microsoft and placed into every version of Windows ever made. You'll note that even the contrarians at Apple have largely respected many of the CUA precedents. Changing those things is a disaster for usability. And yet the KDE and Gnome developers (and Ubuntu more than anyone) seem to periodically get tired of incremental improvements and decide to make whole new user interfaces that wreak havoc on user experience, and verge on user-hostile at times. But instead of ranting about it, just turn it off; this is Linux, and almost everything is easy to change. Click the KDE taskbar icon (equivalent to the start icon in windows), type Mouse into the search box, open the Mouse settings window, and change the "Single-click to open" setting to double-click.

"Moving In" to a New Linux Box

While I was in there, I was amused to read that the default ~/.bashrc has some lines in it that tell you how to specify your PalmPilot device's baud rate when you connect it to linux. At one time I was a devoted user of my Palm III and Palm V and Palm Vx devices. I think it's awesome that Linux carries its history forward, but I know that some people periodically whine and wish that someone (not sure who) would go through and delete all our history. I actually wish openSuSE installed fortune by default, a choice made by the FreeBSD team that I highly approve of. I like fortune, and install it on all my linux systems:

> sudo zypper install fortune
root's password:
.... 

> fortune
One of the chief duties of the mathematician in acting as an advisor...
is to discourage... from expecting too much from mathematics.
                -- N. Wiener

Each time I install Linux on a fresh box, I wish I had thought of a clean way to get my preferred environment settings onto it: my ~/.vimrc, my ssh keys, all that. If anybody reading this has preferred techniques, I'd love to hear them. Some people make a "dotfiles" git repo; maybe I should start doing that. Since I keep private secrets in my dotfiles, I'm not crazy about pushing it all into one git repo. I don't even feel comfortable keeping all my secrets in a private bitbucket or gitlab repo. What do you folks do? There has to be something better than my current mess.

Maybe I will go with the dotfile repo, make it only for public files, and keep a script on there called fetch_secrets that will decrypt and restore my secret dotfile bits from a usb key that I carry about with me. Or maybe there's some better solution.

After setting up dotfiles and basics, I like to get all my development toolage, and a variety of virtual machines. I use vim, Netbeans, Eclipse, Webstorm (an IntelliJ IDEA flavor for html5/web), Visual Studio Code, the java jdk and the usual java tooling suspects, dotnet core, mono, go, rvm/ruby, gcc, clang, python3, python2, node+npm, and lots more. I really like Pharo (a modern smalltalk VM) as well, and getting that running on 64 bit Linux is a bit of a pain, as it's a 32 bit only VM right now.

On Using LEAP Daily

I found some problems getting one of the large Ruby/Rails apps I want to work with going on Tumbleweed, so I have moved to Leap 42.1 as my main OS these days. About the only thing I notice is that the "Nouveau" open source cleanroom-reverse-engineered X drivers are less advanced in Leap than in the bleeding-edge rolling-release Tumbleweed system. I have to use the stable non-open-source proprietary NV (nvidia) video drivers for X11, but even they have crazy bugs and incompatibilities with my card, a GeForce GTX 750 (rev a2). Last night my X server just crashed on LEAP. I might be going back to Tumbleweed until I get rid of this heinous heap of video card garbage.

There is something to be said for running a completely open source system, with no magic blobs of patent-encumbered binary sludge on it. Next time I buy a video card it will be an AMD one. I'm done with nVidia. But I'm just getting started with OpenSuSE and loving it. It's clean, fast, efficient, and has several really great desktops (KDE Plasma and Gnome), and it really is up to you which one you prefer. I find I mostly prefer KDE Plasma.

Saturday, October 15, 2016

Sheep and Goats, Bullies and Allies.

This post was inspired by a tweet+link posted by Guido van Rossum, Python BDFL, on twitter.

The link was to a post on "Contempt culture" by Aurynn Shaw. It's worth a read, but if you want a TL;DR, it's this: developers tend to pick sides and heap scorn on the other side, and the other side is people, and being nasty to people is not okay. I agree. That's where the title "Sheep and Goats" comes from; it's a reference to a parable in the Christian religion which suggests that human beings can be sorted into good and bad: sheep (good) and goats (bad), wheat (edible) and tares (inedible).

The psychological need to qualify my choice as better than yours is my subject, not the specific wars; tabs versus spaces, curly braces versus keywords like begin and end,  C versus Pascal, perl versus ruby versus python,  emacs versus vim,  garbage collection versus ref-counting versus RAII, windows versus linux versus bsd versus solaris versus whatever else.

The desire I often feel to heap contempt on PHP, for example, is completely inconsistent with the fact that whenever I have used PHP I have found it to be extremely useful. I do not choose to build entire 500-million-line-of-code product empires in it, but just because I don't choose to do so doesn't mean I should insult PHP and PHP developers. Contempt is not justifiable. If you have technical arguments about whether PHP is a good choice for a particular project, those can be made without insulting anyone or their work. PHP is a work, a technical body of work. Insulting it, all of it, including all the amazing work inside that tooling and library ecosystem plus the language core itself, is insulting to people. Contempt is bullying. We white male alpha geeks who insult things, we are bullies. We need to look in the mirror and wake up.

My concern is also for the way we self-identified alpha-male-white-geeks gate-keep, police, and exclude others. I agree with Aurynn: we do that. I will add that, along with contributing to the exclusion of women and minorities, and in addition to reinforcing white male privilege in tech communities (points brought up by Aurynn Shaw that I agree with completely), when we act badly we damage our ability to do good engineering work, and we place barriers in the way of others' learning, and our own.

Recently I decided I wanted to learn Ruby and Go, because there were awesome things built in Ruby and Go that I want to contribute to. In order to get started, I had to overcome some strong antipathies to Go and Ruby that I had developed, antipathies that were basically 100% free of technical rationale and 100% products of earlier A/B forks in the road that I had taken. Some time in about 2004, I chose Python and chose to be anti-Ruby. Some time after Go's authors decided that generic containers would never be in Go, I decided to be anti-Go.

Recently I watched an old Ruby conference talk where Ruby luminary (now sadly deceased) Jim Weirich talked about his road from Perl to the fork in the road where he tried to like Python, and failed, and then tried Ruby, and it "just fit" his brain. You and I can probably both tell similar stories, stories that differ only on which fork "fit my brain better" and which way you or I went. I would like to think that if I met up with Jim Weirich at some Ruby meetup in the great beyond, we could have a great chat about his talk; he was a Christian, and a true gentleman, from the accounts I have read. I would tell him that I'm ashamed that I ever insulted Ruby, which has such a great community, such a friendly (generally) and welcoming atmosphere. Okay, so the Ruby community has had to face this "Bro-coder culture" and it has had some toxic personalities. And so has Python, and so has every other language community. But if I want to attend a Ruby meetup here in my own city, THERE IS ONE. And there are people there who will HELP ME LEARN RUBY. And I want to attend one and help teach people Ruby. I want to be part of communities like that. I want to be a guy like Jim Weirich. I don't want to be that me from 2006 who contributed heat and no light to endless technical battles about indentation and such. Now, to be fair to my former self, I was provoked. The anti-whitespace-block crap which Pythonistas have had to put up with has made us all testy on more than one occasion. But the lack of imagination in assuming you could never adjust to that reality, and actually come to enjoy it, is no different from my own lack of imagination in assuming I could never adjust to the reality of Go sans generics or similarly useable container patterns.

So I proceeded in 2004 to ignore Ruby, and people proceeded to use Ruby and build things like Gem and Rake and Rails, and people decided to use Go anyway, even though I didn't give it my stamp of approval, and as far as I know not having templated containers and exceptions didn't stop its users from building great things with it. Lately I keep running into projects I want to contribute to and use that are built in Ruby and Go. Right now I'm coding my own projects in rails and Go, because that's what makes sense for the projects I'm interested in.

I am getting the tiniest feel for what it feels like to be an outsider, but with a big difference. The Go community and the Ruby community are two of the friendliest communities I've ever seen, with Python being right up there as well. The spirit of collaboration, and technical meritocracy is strong in these communities.   It's made me wonder what else I've missed out on because of these narrow pseudo-technical filters I've applied to the world of computing.

I want to make a confession: I am a bully. I have participated in online conversations in which my behavior was rude to others. I have been rude to co-workers because our technical decisions, on things which were in the end matters of taste, were different. I have ridiculed co-workers, and even in one case my boss, for not being smart or technical enough to be worth listening to.

I want to tell one particular story. About five years ago I worked in a healthcare company where there was a particularly dysfunctional team. The team leader was actually a great guy, but something was going on that rendered him non-functional. The team had no daily standup, no weekly meeting, no demos, no retrospectives, and was spread across two cities. The CEO was ineffectual and disconnected from the leadership of the company. Something was broken. Almost everything was broken. Into that situation I came. Did I make things better? No. Did I model the values I believed in, or said I believed in? No. Did I try to institute some functional communication practices, whether scrummy or other? No. What did I do? I coded. I kept my head down. When arguments came up on the team, I took sides and we debated stuff. The culture was not healthy and I was not happy either. But I didn't realize it.

Eventually my supervisor was moved out of the "software dev manager" role, but not fired. He remained at his desk, and did slightly less management of the team than before, but not much else changed. Our new software development manager was a nice person, a woman, a woman of color in Tech. Her background was that she understood the customer's needs, and whether the thing we built would be useful in the real world. A good background. My only memory of interaction with her was her coming to ask me to do something. I had previously been asked to do other things by five other people. Instead of politely asking her why my priorities were shifting so frequently, and which thing I should work on, I think I was actually quite rude to her. I remember that she didn't enjoy the encounter. In fact, looking back, I think I was being a jerk.

So there's me, who would like to imagine myself as an ally to #WOCINTECH, but instead, I'm just another arrogant white male jerk, making subtle and not-so-subtle jabs at people like her, and making her life hard. It's hard for me to look back at that and face how ugly it is. How it puts me on the wrong side of history. How I can only be ashamed of being a jerk, and how my behavior makes it hard to run a respectful team. Maybe not everybody enjoys the "culture of robust discussion" that I have often defended. Maybe that's just me being rude, being a bully, pushing my chest out and bellowing louder than others. Is that how we make Tech a friendly welcoming place? Is that the best I can do? No. It's not.

I will not split groups into Sheep and Goats. I will not be free of this problem, because lots of people will keep assuming that the Rubyists are the Sheep and the Pythonistas are the Goats, or vice versa. But I will be part of both communities and will work for understanding and friendly collaboration, and suggest that we not balkanize and hide in our corners, but mix and mingle instead.

At my work, I will work on my tendency to be loud and arrogant. When I act like that, I leave some people little choice but to avoid me, fire me, or, if they are not in a position of power, to quit. Is that the kind of team culture I want to foster? No, it's not. Bullies and power-mongers like showdowns. I value community, collaboration, and consensus. When those things are not technically possible, then let's have clarity, humanity, and compassion when we disagree.

It's time for me (and you, if you agree) to stop dividing people into "good enough" and "not good enough to be treated by me with respect". All people deserve a modicum of discipline, tolerance, and respect, at work, in the wider tech community, and in OSS. We can argue all day about whether being rude and arrogant is harmful, and where the thin red line is, but in the end, the fact is we're getting the tech community we deserve. If we select for arrogance and brashness, we may, in fact we definitely will, be excluding voices, thoughts, and ideas that can help us.

I have four thoughts about what I can do differently:

* I can be an ally instead of a bully. That means listening.

* I can stop pushing my way on other people.  That means compromising more.

* I can stop making A/B decisions, and stop thinking that because I put a few years into learning Python it isn't worth my time learning Ruby.  That means learning more.

* I can stop excluding women, and women of color, and visible minorities, and people of cultures and with personalities that don't mesh with my arrogant privileged techno nerd ways, and I can be a friendly guy. I like to think that most of the time I already am that guy, but I've also learned that believing that you're a "nice guy" is a barrier to actually being one.   That means I need humility.

What can you do to make the world of tech a better, friendlier, more inclusive place?

Tuesday, September 27, 2016

My History as a Linux User, Part 1

This is a bit of a personal history post, which I hope explains where I come from and what I like to do in and around Linux.  I hope that this blog can be a bit educational, and a bit fun.

What better way to start than to tell a bit about my use of Linux over the years, and what I'm interested in?

The Computing World in 1992.

It's a much different technological world in 2016 than it was in the early 1990s. Recall for instance that Microsoft was not the omnipresent juggernaut that it is today, although it was already a significant power in the computer world, thanks to MS-DOS and its early office and developer tool suites. Microsoft had guys like Charles Simonyi, who had helped create the digital office at Xerox PARC and then went on to commercialize that work successfully as MS Word, based heavily on the PARC GUI word processor (Bravo); guys like Dave Cutler, one of the powers behind the NT throne, who got his start at DEC designing important bits of VAX VMS; and guys like Gordon Letwin, the architect of Microsoft's OS/2 work.

Big Software was a thing built by big corporations.  Open source, the loose-knit informal collaboration model, the idea that a major software system could be built that way and surpass these commercial systems, rendering much of their economic value moot: that was heresy.

If you wanted a computer and you had less than ten thousand dollars to spend, your basic choices were:

* The Unix workstation world, with Sun as the acknowledged leader, and other companies like IBM and HP having their own hardware lines and their own commercial Unix variants.  Machines like the NeXT also fit into this category, with its Mach/BSD-derived kernel.

* The 1990s Mac world, which had a stronghold in desktop publishing (thanks to the LaserWriter), had a pretty terrible operating system: without proper inter-process memory protection, the whole thing was prone to crashing. But the user experience was pretty nice for people who liked mice, and its graphical user interface was far nicer than the DOS GUIs of the time.

* The PC world, mostly running DOS and Windows 3.1, mostly on a mix of 8086, 80286, and 80386 CPUs running at speeds between 4.77 and 25 megahertz.  IBM's OS/2 existed but only ever achieved a tiny uptake.  After a rift developed between IBM and Microsoft, they went their separate ways, and the architects Microsoft had working on OS/2 turned to building OS/2 3.0, which was later renamed NT.

* If you wanted to run a unix-like operating system on your PC, your choices included the free and very limited Minix; the early 386 BSD ports, which were very immature, snarled by BSD license squabbling, and under constant legal threat; and a few commercial (but still quite functionally limited) Unix varieties such as SCO Unix and even Microsoft's Xenix.

In that heterogeneous, mostly closed-source computing world, on the 25th of August, 1991, a young Finnish student posts on the Usenet comp.os.minix newsgroup that he intends to build a kernel for the wildly popular Intel 80386-powered PCs of the era. Not something "big and professional like GNU". Just a little hobby.

I think it's safe for me to skip the parts of the rise of Linux that everybody knows. For anyone without much exposure to Linux, it may be surprising to learn that Linux represents the work of tens of thousands of open-source developers, some of whom were paid for their efforts and most of whom probably were not, at least not directly.  Entire desktop computing software ecosystems, including general productivity software, emerged and matured.  And yet Linux never overtook Windows on the desktop. That was always going to be "next year", the year of Linux on the desktop.

My first exposure to Linux was hearing about it on the nascent internet while I was a computer science student in 1991.  As the Linux world started to take off and the first distributions arrived in 1992, I ordered them and had them shipped to me on CDs.  The first Linux distributions I tried were ones I ordered from a mail-order Linux CD company: SLS (on which Slackware was later based) and Yggdrasil Linux, back when the Linux kernel was at about version 0.96.  My previous experience with Windows 3 and with OS/2 did not do much to teach me about Unix systems, but I had used an IBM RS/6000 workstation in university and was somewhat familiar with Unix commands.  I had learned a bit of the vi editor, and how to use Unix mail, Usenet news, and ftp and gopher clients.  The "http" bits of the world wide web were just getting started; in 1991 I had not yet heard of Tim Berners-Lee, nor of http, and the total number of "web sites" on what we would today call "the web" was probably around 20.

Linux at that time ran on exactly one type of hardware: the Intel 80386 and compatibles from companies like AMD.  Since then, Linux has been ported to run on tiny little computers you could fit on the end of your thumb, and onto IBM mainframes that cost over half a million dollars and run on three-phase 600V power.

Linux has, curiously, never taken over the desktops of the world. That year never happened.  OpenOffice got close to breaking Microsoft's Office monopoly, but it turns out that most people in most businesses in North America seem to need or want a bunch of commercial software that doesn't run on Linux.  Steve Ballmer was right in his chant of "Developers, Developers, Developers".  What has saved Microsoft from obsolescence is that the desktop PC world still retains a huge place for Microsoft's technologies, especially in small and medium-sized businesses and in healthcare.

My Linux Use from 1992 to 2016

After initial experiments with Yggdrasil and SLS, and later Slackware, I bounced back and forth among Red Hat, SuSE, Debian, and a few others.  I always had a hard drive in my PC that booted a Linux distribution, with Windows on another drive, and over the years I used one of the many boot managers to dual boot.

As a professional software developer, I got paid all this time to write commercial software on DOS, Windows 3, Windows 95, OS/2, and NT, and then on Windows 2000, XP, and so on.  On weekends and evenings when I was bored, I always ended up playing around and learning more and more on Linux.

I spent some time learning GUI programming with GTK and Qt, but I really hated those toolkits and never found one I truly liked.  I fell in love with Python, have stuck with it since my first time using it, and have used it for a thousand different things, but I never really did a lot of GUI programming with it.  When I needed a small utility on Linux, I would use the tkinter user interface layer, and once or twice I also built a few things using the wxWidgets Python wrappers.  I was generally a fan of the Gnome desktop over KDE, and generally a fan of Vim and GVim over Emacs.  Some time around 2010, Gnome and KDE both simultaneously abandoned usability and usefulness and became incredibly limiting and stupid, and since then I've been a big fan of XFCE.

Besides doing some Python and C hacking on Linux, it's odd that I never got into Pascal on Linux.  My experiences with the leading Pascal on Linux, Free Pascal, were never very good, though these days I have to admit that Lazarus plus FreePascal on Linux is pretty nice.  Maybe it was because it could never really live up to my expectation of being "Delphi on Linux" that I felt happier hacking around in Python or C.  I have never much liked C++ either.  It feels like a nice language is hiding in there, but someone piled a horrible bunch of extra stuff on top.  I've met lots of other people who feel the same, but nobody seems to agree on what the exact nice subset is.  So that's C++.

The distro I've spent the most time on is Debian. For the last ten years, I've always had a Debian image on my main PC and a Debian VM on all my Macs.  Debian is a beautiful distribution made by open source purists. While I'm not one of the "purists" myself, I recognize that their purity has been useful and a good service to the community.


For the last two years, though, I've been running SuSE as well, and I have now switched full-time to openSUSE Tumbleweed on my main home computer.  I use the SuSE Virtual Machine Manager (powered by QEMU/KVM) to run virtual machines.  I absolutely adore SuSE, and I'm going to write a few detailed blog posts later about why I love it and what I think it does right, which is an enormous list of things.  It sounds like a stereotype, but SuSE has the feel of being engineered by a bunch of Germans.  Tumbleweed has the feel of an experimental aircraft that is continuously being patched up. When it works, it's like driving next year's Mercedes-Benz.  When it breaks, it's educational for me in a way that no other Linux distribution has been.  Tumbleweed is a rolling-release distro that has a lot in common with Debian's "sid" (unstable) distribution, which I have also spent a long time with.  I just can't get over my curiosity about the next version of Gnome, the next version of KDE, the next version of X.Org (or XFree86 back in the day, remember that?), and everything else.  Every time I upgrade Tumbleweed, it's either a disaster or it's Christmas.  Mostly it's Christmas. It works better than it has any right to, and breaks far less often than I deserve.

I only recommend it to home users who don't mind if their computer updates itself and then fails to boot, and, what's more, who wouldn't mind too much if it formatted their entire hard drive. It might do that.  A much saner choice for most people is openSUSE Leap.
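For reference, here's roughly what my Tumbleweed upgrade routine looks like.  Because Tumbleweed is a rolling release, the supported way to move it forward is a full distribution upgrade, not a plain package update:

# refresh the repositories first
sudo zypper refresh

# "dup" is short for dist-upgrade; this is the normal way to roll Tumbleweed forward
sudo zypper dup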

Why I Like Linux, and What I Like to Do on Linux.

The design of Unix systems, with Linux being one of the best of these Unix-like systems, is something I think has been done better than most of the alternatives. That doesn't mean I don't get mad when I make a mistake and the power of Linux is instantly deployed to pay me back for it.  I might complain, but I really do appreciate the power and the design of the system.  I sometimes make little forays into the BSD world (FreeBSD, DragonFly BSD), which I might blog a bit about later, but by and large it's Linux that fits my head and my brain better than the others.

I'll give one example that I think is interesting.  Linux systems can typically be updated and maintained online far longer without reboots than Windows systems can.  Why is that, do you think?  I think it has something to do with the way they are designed.  You can start and stop services in Windows, and you can kill the Explorer shell and restart it, but on Linux you can do almost anything without rebooting, short of replacing the kernel.  There is no Windows equivalent of a chroot jail. There are no equivalents of many, many things.
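To make that concrete, here's a little sketch of the kind of thing I mean, assuming a systemd-based distro like openSUSE (sshd here is just an example service, and the library paths ldd prints vary by distro and architecture).  You can restart a system service in place, or build a throwaway chroot jail and run a shell inside it, without ever rebooting:

# restart a running service in place; no reboot needed
sudo systemctl restart sshd

# build a tiny chroot jail containing bash and the libraries it needs
mkdir -p /tmp/jail/bin
cp /bin/bash /tmp/jail/bin/
for lib in $(ldd /bin/bash | grep -o '/[^ ]*'); do
    mkdir -p /tmp/jail$(dirname $lib)
    cp $lib /tmp/jail$lib
done

# drop into a shell that can only see the inside of the jail
sudo chroot /tmp/jail /bin/bash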

Using Python, Go, C++, or Java on Linux is a thing of beauty, compared to using the same languages on Windows.  I have a hard time explaining why that is, but if my readers want to know, I'll go into it.

To me, Linux is the dream platform for the developer. Along with VM software to run Windows in a "penalty box", where it can't see most of my files and can't take over my computer, I can do almost anything I want on my machine.  One of the things I like to do a lot is download and install new software.  To keep my main environment from turning into a mess, I tend to do that inside VMs.  I have about a dozen VMs running different versions of Linux, with different tools inside them.
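If you want to play the same way, here's roughly how I spin up a scratch VM with QEMU/KVM from the command line (the file names and sizes here are just placeholders; the Virtual Machine Manager GUI I mentioned earlier does the same thing with more handholding):

# create a 20 GB copy-on-write disk image for the new VM
mkdir -p ~/vms
qemu-img create -f qcow2 ~/vms/scratch.qcow2 20G

# boot the installer ISO with 2 GB of RAM and KVM acceleration
qemu-system-x86_64 -enable-kvm -m 2048 \
    -hda ~/vms/scratch.qcow2 \
    -cdrom ~/Downloads/openSUSE-Tumbleweed-DVD-x86_64.iso \
    -boot d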

It's a playground.  It's frankly amazing. Every now and then I think of what I could do when I first got my Commodore 64, versus what I can do now, and I'm a bit breathless.

There's also a little hobby of mine of trying obscure programming languages. I might post a bit here about Pharo (Smalltalk) and the various Lisp/Scheme languages I've been messing around with.  If you're reading along and you're rather new to Linux, you're my desired target readership. Tell me what you want to know more about.


How I feel about being a Linux User and a Windows User in 2016

Where it seems that Microsoft is truly no longer at the epicenter is on the web.  In 2016 we're in a world of containers, clusters, big data, web-scale key-value database technologies, load balancing, caching, and CDNs, in which Microsoft technologies are not important. Key internet technology leaders like Google, Facebook, Amazon, and thousands of others run on Linux alone.

As a developer who has always loved and played with Linux, but who has always made his coin working on Windows, it's strange to see Windows relegated to a place of almost zero importance in the most important areas of software progress in 2016.

In 2016,  Microsoft has actually ported .net to run on Linux, and Oracle has made Java seem like a technology controlled by an evil empire that I should not want to use.  Microsoft actually seems like the good guy these days.

Why is that, do you think?  I think I know. Other than for gaming, many people might be perfectly happy to run Windows and then shut it down when it pisses them off or gets in their way.  That's about where I am in my journey with Windows: about ready to kick it to the curb.  I'd actually rather pay $99 for a Windows that got out of my way and didn't make me mad than get it for free and have to deal with condescending, slimy tactics that remove me from the helm of my own computer.  Windows 10's sense of entitlement to my PC reminds me of the auto-pilot from the Pixar film WALL-E.  The fat captain reminds me a lot of me: in many ways a typical Windows user, who has become quite used to being ordered around on his own ship.



Windows 10 tells you that you can't disable Windows Update. You don't actually know what it sends to Microsoft. You don't actually know much about what it does at all. If it leaves you a bit of disk I/O bandwidth and compute time to do the things you wanted to do, that's a bonus.  But you're going to get the extra stuff you didn't want, too.  If you didn't pay for it, and someone gave it to you for free, and it came from a corporation like Microsoft or Facebook, then you're not the customer, you're the product.  I don't much like it. I don't think I'm all that paranoid, or all that much of a privacy activist, but I look at the choices and think: I'd rather my computer were running Linux all the time.  I'll fire up my Windows 10 VM now and then to run some Windows development tools (Delphi), and I'll boot back into native Windows when my teenager wants to play some games on Steam.  But other than that, I'm a Linux guy at home.

How I feel as a Linux User/Hacker in 2016

To me, it's clear that the future of software is open.  I believe in getting paid as a software developer, and I believe that technical support and services (subscription and SaaS models) are the future of getting paid for software.  This future frees the individuals and organizations making technology decisions from vendor lock-in, and will empower users to do whatever they want with their computers.

The desktop world is probably going to stay much as it is, with a little Linux, a lot of Windows, and a fairly large number of developers on Macs.  Companies that can adapt to the new world, like SuSE, GitLab, and Sencha, are going to do fine, I believe.  Microsoft is going to be around for the long run as well; they're pivoting, and will adapt and thrive in the open software future.  They will continue to contribute more core tech, like their already open-sourced C# compilers and .net frameworks, into the open source world, but they will probably never open source Windows or Office. Instead they will focus on turning those into revenue-generating services rather than boxed software products.  Wow, would I like to be wrong there.

To me, the exciting software development is happening in the cloud, on the web, and on Linux.  That's what interests me today. That's what this blog is about.