Friday, April 6, 2018

Audio on Linux is a Disaster, Debian Stretch Edition

I have recently learned more than I wanted to know about how audio works in Debian Stretch, and yet not enough to solve the problems I have.

Generally I love Debian Stretch; the two big pain points are audio and nVidia. And nVidia is not Debian's fault, it's nVidia's fault. I hate you, nVidia. Did you know that I hate you? Anyway, back to ranting about audio.

Here are my problems, stated in as transparent a way as I know how:

1. I need to use Skype for work, plus a few other closed-source conferencing and remote desktop sharing apps, including GoToMeeting.

2. Built-in Linux desktop apps, like sound recorders, all work fine. If they were not working, I could investigate them down to the source code level to see why.

3. But commercial closed-source audio-aware apps do not work in a consistent way. Skype for Linux these days is some heinous Electron app. It probably talks to PulseAudio. So does GoToMeeting.

4. On Skype, people say I sound like a chipmunk. In other apps I sound like a monster or a robot. Many issues like this come down to sample rate configuration options. Finding a set of options that works across all the various apps you use is a nearly impossible problem.
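For what it's worth, the main sample-rate knobs live in PulseAudio's /etc/pulse/daemon.conf. Here is a hedged sketch of pinning the rate, done on a scratch copy so nothing live changes (48000 is an assumption; pick whatever rate your apps seem to expect):

```shell
# Simulate the stock daemon.conf entries (commented out with ';'):
printf '; default-sample-rate = 44100\n; alternate-sample-rate = 48000\n' > /tmp/daemon.conf

# Uncomment and pin the primary rate to 48000:
sed -i 's/^;[[:space:]]*\(default-sample-rate\).*/\1 = 48000/' /tmp/daemon.conf

grep 'sample-rate' /tmp/daemon.conf
```

On a real system you would make the same edit in /etc/pulse/daemon.conf itself and then kill the per-user daemon (pulseaudio -k); it respawns automatically on most desktops.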

If I could make an analogy, it's like going into a dark room where there is an array of 100 or more light switches. One combination of those 100 switches, say 36 of them on and 64 of them off, out of a possible 2^100 combinations, will result in the lights going on. Another combination, also probably found by trial and error, will result in the lights going off. Other combinations will result in your hair catching on fire, your dog being decapitated, and your kid's voice changing randomly from sounding like Obi-Wan Kenobi to R2-D2, and then to Chewbacca.

In between changing combinations of light switches, you google a bunch of stuff, make notes, try things, and hope to find the working combination.

I don't see a solution here, because there's no standard way for Linux apps, whether real native closed-source ones or Electron apps, to work on every Linux distro out there that has PulseAudio as its audio subsystem. Perhaps all these apps are tested, developed, and working on Ubuntu, but not on Debian or other Debian derivatives.

Who really knows?

Wednesday, February 28, 2018

Thinking about Starting a BSD Blog

This blog has been kind of inactive, but in the last year I've been playing around with a lot of things which are Unixen but not Linuxen:

* Illumos, SmartOS and OpenIndiana (stuff based on what was once OpenSolaris, a now dead project)

* The BSD family (FreeBSD, OpenBSD, and others). Most recently I've been messing around with TrueOS which used to be called PC-BSD, and before that I spent some time messing around with Dragonfly BSD.

Whenever the BSD folks talk about Linux licensing and the GPL, I have to admit I see their point, and I also see some of the points that Richard Stallman and the GNU purists make.

Where I think the BSD people are undeniably right is:

1. You can't afford to litigate every GPL violation in the world, and you can't even technically detect them all, so the question is: do you want to spend your time on that?

2. It's fair to ask: why not let people take code closed-source? If they want to fork a project and close the source, why not let them?

3. It's fine to just not care about software freedom, and just want to use your computer.

Where I think the Linux people are right is:

1. Software freedom leads to hardware freedom. I don't think the BSD approach is the right way to handle companies like nVidia that refuse to open their drivers. Having the GPL there, and the problem of tainting your kernel with private non-GPL garbage, is exactly how I think the Linux kernel should handle it. The partially-binary nVidia driver isn't just a stupid thing because there are no stable ABIs in kernel space; it's also a problem because it's a binary blob in my computer, and I don't want any binary blobs.

2.  Principles are worth making a fuss over.
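Incidentally, the kernel taint state mentioned in point 1 is visible at runtime. A quick check (the proc path is standard on Linux):

```shell
# 0 means the kernel is untainted; a nonzero value is a bitmask of taint
# reasons (proprietary module loaded, forced module load, and so on).
cat /proc/sys/kernel/tainted
```

Loading the nVidia binary driver flips the proprietary-module bit, which is exactly the GPL boundary being enforced.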

So that's where I sit.  BSDs are cool. Linux is cool.

Saturday, December 24, 2016

Building Python 3.6 from Source on OpenSUSE like an idiot

Recently I was humbled to find out that I am less good at building things from source than I used to be.  Let me state first that what I am doing below is not something most Python users should ever want to do.  On OpenSUSE and other Linux distributions, we install our software using packages. That's your normal first way of getting python.

Your normal secondary way of getting python, and something a lot of serious python developers do, is use pyenv, which is similar in nature to the rbenv tool ruby developers use.   Both installing system distributions of Python, and installing your own local python with pyenv are generally recommended ways of getting python on Linux and unix systems.

This way I'm showing is generally NOT recommended.

Enormous things like Python are not normally built from source by most users on most common Linux distros. If you use an Arch-style or other source-driven Linux distribution, or a FreeBSD-style ports BSD, you're probably more used to configuring and installing things from source. The instructions below took me a long time to figure out.

First, I have to admit that I did something really stupid: I installed directly into /usr. Never, never do that. That area of the filesystem belongs to openSUSE package management land; it's a "system folder". There is no "make uninstall" in Python, or in most other classic Unix "configure + make + make install" source tarballs.

The sensible thing to do is use a prefix like /opt/mypythonjunk, which I have chosen in the example below. Then you can wipe it later with rm -rf /opt/mypythonjunk.

After the obvious first steps (download the tarball, unpack it, and cd into the directory containing the source code), here are the configuration and build steps I used.

You need some stuff installed on your computer. I can't give a complete list, but it includes all the developer basics (gcc, make, binutils, etc.) plus the "devel" packages that provide the libraries and headers for things. On openSUSE that means zypper install ncurses-devel, readline-devel, zlib-devel, and similar. You must install all of that before you can build from source.

I was working from ~/python/Python-3.6.0  but you can work from anywhere you like.

make clean

mkdir  -p /opt/mypythonjunk

# --enable-optimizations might be useful for some real world uses.
# --with-pydebug might be useful for some development and test uses.
./configure --with-pydebug --enable-shared --prefix=/opt/mypythonjunk

for m in _struct binascii unicodedata _posixsubprocess math pyexpat _md5 _sha1 _sha256 _sha512 select _random _socket zlib fcntl; do
    sed -i "s/^#\(${m}\)/\1/" Modules/Setup
done

make

sudo make install

sudo ln -s /opt/mypythonjunk/lib64/python3.6/lib-dynload/ /opt/mypythonjunk/lib/python3.6/lib-dynload

When you read the above, mentally translate /opt/mypythonjunk to /usr, and imagine the mess I made when I did exactly that. Be smart. Don't do that.

There are instructions on Python's web site. Read those.  But they're not enough for some distros, and OpenSUSE is one of them.

The initial make clean thing is pretty standard advice in case you ran a previous build and your working copy is dirty. It's harmless if your working copy isn't dirty. 

Next, I'm making a single output folder to contain all installed stuff.  I'm configuring with a few flags that I discovered are useful in various cases, and I'm specifying where the installation target folder should be.

The for m... loop is something someone on IRC pointed me at; it's really a nicer way of saying that you need to edit the file Modules/Setup. The edits above uncomment (enable) some of the optional Python modules you might want.
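To see concretely what that loop does, here is a toy run against a scratch file; the zlib line below is a simplified stand-in for the real entry in Modules/Setup:

```shell
# A commented-out optional-module line, as it appears (simplified) in Modules/Setup:
printf '#zlib zlibmodule.c -lz\n' > /tmp/Setup

# The exact substitution the loop runs for m=zlib: strip the leading '#',
# which enables the module in the build:
sed -i "s/^#\(zlib\)/\1/" /tmp/Setup

cat /tmp/Setup
```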

If you run make it will work, but if it fails in some bizarre way, try again while saving the output for later analysis (make 2>&1 | tee make.log). If you want to ask a question on #python on IRC, you will want to paste your make output logs into the Python channel's own pastebin service.

Something that surprised me during the make install phase was seeing a lot of gcc compilation and tests being run (re-run? recompiled?). I'm told on the #python IRC channel that this is due to something called PGO (profile-guided optimization). It should occur if you used --enable-optimizations during the configure phase.

What tripped me up was that the python binary installed and ran, but could not find readline or any other core Python modules. The ln -s line above is the fix for that, and it took two days of head-scratching to realize I should do this:

import sys
print(sys.path)
sys.path.append('/opt/mypythonjunk/lib/python3.6/lib-dynload')

Hopefully it's obvious that the path above is not the correct path for you; that's a hint to put your own actual fully qualified path into the snippet.

Then look at the output, find which directories Python is searching, and go see whether those directories exist. If not, the thing to do is either edit sys.path (no thank you!) or hack it (yes please!) with the symlink command (ln -s).

Another trick I find helps get a recalcitrant python to start up: you can force a PYTHONHOME value for a single command:

PYTHONHOME=/opt/something /opt/something/bin/python3.6

And yet another fun trick is to start python up without installing it, like this:

LD_LIBRARY_PATH=/home/wpostma/python/Python-3.6.0rc1 ./python

I hope these tips help. I'm posting this on my blog so that the Mighty Google will index it, and other people who get as frustrated as I did trying to make this work might find additional tips. Corrections and suggestions are welcome.

Wednesday, November 23, 2016

O'Reilly Book Bundle!

There's a pretty awesome pay-what-you-want sale over at Humble Book Bundle, featuring O'Reilly Unix books. One of my all-time favorites, which I own in the original dead-tree version, Unix Power Tools, is included if you pay $8 or more. If you pay $15 or more, you also get some pretty awesome networking tomes, including O'Reilly's excellent TCP/IP and DNS/BIND books, and a couple more.

I really cannot recommend the Unix Power Tools book enough, so I'll just cover a few reasons why I think everyone should read it, even though it was last revised in 2002. It remains a fantastic tome, well worth reading.

Three top reasons to read it:

1. It has the best written explanation of the zen of Unix-like systems I have ever read: the composability paradigm, and the sometimes baffling array of choices you have, such as between the various shells.

2. It covers most of what you need to be a competent user, developer, or system administrator on Unix-like systems, Linux or otherwise.

3. It will get you started in a hundred different things, whether it's awk or sed, or vi (lately vim), shell scripting, or the myriad standard utilities for finding things (learn when to use find, when to use grep, when to use locate/updatedb, and others), and so much more.

This was the book that taught me enough that I started to actually grok the unix way. Before this book it was a random walk through chaos and entropy. After this book, it was still a random walk through chaos and entropy, but with a wry smile and a sense of understanding.

Even if you think you're a Unix expert, you'll learn lots from this book. My copy has been taken down and thumbed through dozens of times, and I've read it cover to cover twice. Go grab it! And give the fine folks at O'Reilly some love; tell them linux code monkey sent ya.

Update: If you're one of the millions of people who were happily working away in a Windows-only world until someone had to go and bring Linux into things, and you are now expected to SSH into a wild Linux boxen and remotely deal with it, and that scares you, there's a book in the bundle that starts exactly there: installing PuTTY on Windows and learning what to do once you land in the mysterious bash shell of that remote Linux system you now have to learn, and may even come to enjoy. If this sounds like you, check out the Ten Steps To Linux Survival title in the bundle above.

Thursday, November 17, 2016

Upgrading OpenSuSE LEAP 42.1 to 42.2

For some reason I can't find this documented anywhere as a step-by-step set of instructions that applies only to upgrading LEAP 42.1 to 42.2. The SuSE docs are good, but they have to cover a bunch of versions and a lot of edge cases. So, since this is the web, I puzzled, googled, puzzled again, and then tried stuff. It worked, and it's actually pretty easy, but it seemed strange to me as I'm more used to Debian, Ubuntu, and Fedora.

It seems odd in 2016 to have to use sed to modify a text file so you can move from one major release of OpenSuSE to another. You could also edit the zypper repo files by hand, if the following seems too convenient and easy. Ubuntu and Debian are a bit friendlier when it's upgrade time. I was kind of floored that YaST didn't have a "new OpenSuSE version detected, upgrade?" menu.

Before starting you should check your drive space and clean up any old snapshots using snapper. If your system has btrfs as the root filesystem, I suggest zapping old snapshots until you have at least 12 gigs of free disk space. You cannot believe the output of df if you want to know your real free space on SuSE systems with btrfs; use this command as root:

btrfs filesystem show

Note that it doesn't show you your actual free space; you have to do a bit of mental math. In the example below there is a little less than 10 gigs free, probably enough for your upgrade to work. I always like to have 12 to 15 gigs free before I do any system upgrade, so the situation below is borderline to me:

Label: none uuid: b3b42cba-c08e-4401-9382-6db379176a1f
Total devices 1 FS bytes used 90.21GB
devid 1 size 100.00GB used 85.29GB path /dev/sda4

On the same system, df -h might report much more free space than that, which is why you cannot trust df; it doesn't work right with btrfs. To make life even weirder, the command btrfs filesystem df / (with df, "disk free", right in the name) does not actually report free space, only totals and usage. Sometimes I really want to reach through the internet and ask the authors of tools: what were you thinking?
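The mental math from the btrfs filesystem show example above is just size minus FS bytes used; a throwaway sketch:

```shell
# From the output above: devid size 100.00GB, FS bytes used 90.21GB.
# Approximate free space = size - used (ignores metadata/RAID overhead).
awk 'BEGIN { printf "approx free: %.2f GB\n", 100.00 - 90.21 }'
```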

There is a way to get actual free space from btrfs, which is this:

btrfs filesystem usage -h /

Besides giving me the unallocated free space in gigabytes, it also gives me a lot of stuff I don't care about.

Also before starting, make sure your system is up to date (zypper up), then list your zypper repos (zypper lr) and disable any that are third party. I disabled google-chrome before upgrading.

I also have a personal habit of archiving my entire pre-update /etc with tar (sudo tar czvf /root/etc-backup.tgz /etc).

As root, the system upgrade commands to go from 42.1 to 42.2 are:

sudo sed -i 's/42\.1/42\.2/g' /etc/zypp/repos.d/*
zypper ref
zypper dup

Further decisions and actions may be required, usually involving selecting packages to be de-installed to avoid a broken system. For example:

Reading installed packages...
Computing distribution upgrade...
2 Problems:
Problem: libkdevplatform8-1.7.1-1.3.x86_64 requires kdevplatform = 1.7.1, but this requirement cannot be provided
Problem: kdevplatform-lang-5.0.1-1.1.noarch requires kdevplatform = 5.0.1, but this requirement cannot be provided

Problem: libkdevplatform8-1.7.1-1.3.x86_64 requires kdevplatform = 1.7.1, but this requirement cannot be provided
  deleted providers: kdevplatform-1.7.1-1.3.x86_64
 Solution 1: Following actions will be done:
  deinstallation of kdevelop4-plugin-cppsupport-4.7.1-1.4.x86_64
  deinstallation of kdevelop4-4.7.1-1.4.x86_64
  deinstallation of kdevelop4-lang-4.7.1-1.4.noarch
 Solution 2: keep obsolete kdevplatform-1.7.1-1.3.x86_64
 Solution 3: break libkdevplatform8-1.7.1-1.3.x86_64 by ignoring some of its dependencies

Choose from above solutions by number or skip, retry or cancel [1/2/3/s/r/c] (c): 

I chose 1 whenever given a choice like the one above, because I'm pretty sure I can live without some bit of stuff (kdevelop and its various pieces) until I figure out how to get it reinstalled and working.

Next I get to the big download stage.  I have to read and accept a few license agreements (page through or hit q, then type yes).

2307 packages to upgrade, 27 to downgrade, 230 new, 11 to reinstall, 89 to
remove, 4  to change vendor, 1 to change arch.
Overall download size: 1.98 GiB. Already cached: 0 B. After the operation, 517.6
MiB will be freed.
Continue? [y/n/? shows all options] (y): 

After all the EULA dances, it's time to get a cup of tea and wait for about 2 gigs of stuff to download through my rickety Canadian cable internet.

Next post will be on the joys of what worked and didn't work after this finishes.

Post-Script: Everything seems quite stable after the upgrade. The official wiki docs on "System Upgrade" consist of a series of interleaved bits of advice, some of which apply to upgrading from pre-LEAP to LEAP, and some of which still apply to 42.1 to 42.2 LEAP updates. On the SUSE IRC channel, Peter Linnell informed me that starting in Tumbleweed there is a new package to make this process easier, called yast2-wagon.

Sunday, November 6, 2016

Gitlab All The Things

Gitlab is awesome and you should be using it for everything.   In this quick blog post I will explain why I think that, and a few quick ways you can try it out for yourself.

1.  Distributed version control is the future. 

I have for many years been a passionate user of Mercurial and not a passionate fan of Git. I was right about one thing: distributed version control does enable ways of working which, once you adjust to their small differences, are so powerful and useful that you will probably never want to go back. I was wrong about another thing: that choosing a technology which is not the dominant or popular tool in its category is a matter of no consequence. Wrong. Really wrong.

Subversion is a dead end, as are Perforce, Team Foundation Server, Clearcase, Visual SourceSafe, and every other pre-DVCS tool. Centralized version control systems are an impediment to remote distributed teams. In the future all teams will be at least partly remote and distributed; in fact, that future is already mostly here.

Mercurial is a cool tool, it has its use cases, and luckily it has git interop capabilities. But I'm done with Mercurial. The future is git.

2. Git is the lingua-franca of software development.

The fact is that git has won, and that Git has a tooling ecosystem and a developer community that are essential. Even if you keep using Mercurial on your own computer, you are still going to have to interact with Git repositories, fork them on various git hosting sites, and learn Git. So if you want to know two DVCSs, Mercurial is still a great "second" version control tool; I still use it for a few personal projects that I don't collaborate on. But I have switched to Git, overcome my irrational anti-git bias, and, more than that, come to love it.

3. Github is important, but it is not where you should keep your code, your company's code, or your open source project's code, as Github's current near-monopoly on git hosting is harmful.

This is perhaps a controversial statement, and perhaps seems fractious. I'll keep my argument simple, and consistent with what others who believe the way I do have said:

Large Enterprises: If you are a commercial company, you should host your own server and take responsibility for the security and durability (disaster recovery) of your most critical asset.   Enterprises should run their own Gitlab. Anything less is irresponsible.

Small Enterprises: Secondly, small businesses can easily run their own Gitlab, or can pay someone to host a git server on their behalf, at much better rates than Github offers. If ethical arguments do not sway you, how about an appeal to thrift, always a powerful motivator for a small business? You can host your private git repositories on GitLab.com for free. Why are you paying Github?

Open Source Project Leaders and Open Source Advocates: Why are you preaching open source collaboration while hosting your open source project on a site powered by closed source technology, one which ties your whole way of working to one datacenter owned by one company? In what draconian alternative future are we living, where "distributed" has largely come to mean that all roads lead to one CENTRALIZED Github? I am not against companies making money. Gitlab is owned by a company, but that company values transparency, publishes almost all of its main product as free open source software (Gitlab Community Edition), and then has a plan to commercialize an enterprise product.

In an ideal world there would be six to twenty large sites hosting all the open source projects, and Github would be only ONE of them. I don't want to crush Github; I am just unhappy that a giant closed source commercial elephant is hogging an unfair share of the market. I happen to like Bitbucket very much, and would be much happier if Github, Gitlab, and Bitbucket were approximately equal in size and income. It bothers me that it has become almost encoded into the culture of NodeJS, Go, and Ruby coders that their shared libraries/packages/modules must be hosted on Github. That's bad and should be fixed.

4.  Gitlab is the best git server there is.

I love great tools. I think Gitlab is the best git server there is. It has built-in merge request (pull request) handling, built-in continuous integration, a built-in issue tracker, a built-in kanban-style visual issue board for moving issues through workflow stages, a built-in chat server (a Slack alternative, powered by Mattermost), and so much more that I could spend four or five blog posts on it. But in spite of the incredible depth of the features in Gitlab, getting started with it is incredibly easy.

5. Gitlab is also the best FREE and OPEN SOURCE git server there is.

I already said this above, but I feel quite passionate about it so I'm going to restate it a different way.  The future of trustworthy software is to hide nothing.  Transparency is a corporate virtue in an information age, and open source software is the right way to create software which can be trusted. Important tools and infrastructure should be built from inspectable parts. I include all tools used to write software in this.  All software development tools you wish to trust utterly should be open source.

That includes github, and github is only going to open its source if we force it to.  This requires you to stop using github. YOU and YOU and YOU.

Your version control software, and the operating system it runs on, should be open source. You should not host your version control server on Windows; it should be hosted on Linux. It should be a self-hosted or cloud-hosted Gitlab instance. Or you can use the public GitLab.com to host your open source project. Be fully open source. Don't host your open projects on closed proprietary services.

I have nothing specifically against github or bitbucket other than that I only believe they are appropriate for closed source commercial companies to use to host closed source projects and proprietary technologies.  Use proprietary tech to host your proprietary code. Fair and good.

Closed source software used to write software can no longer be trusted. We cannot afford to trust it. Even if we trust the people who create it, we cannot inspect it to find its security weak points, and we cannot understand our ability to protect our customers, our employees, and the general public unless we are able to verify the correctness of any element of the tools we choose to trust. Just as we do not inspect every elevator before riding it, I do not believe that open source advocates actually read the entire Linux kernel and every element of a Linux system. What we are doing instead is evolving into a state which already is, and always will be, more secure than running a closed source commercial version control tool, or operating system.

Do not imagine me to be saying more than I am saying here. Commercial software is fine, up to the point where you have to trust your entire company, or the entire internet to run on it.   I also do not want a surgeon to use a Windows laptop running any Microsoft or non-Microsoft anti-virus software to visualize my internals while he performs surgery on me.

Now I'm actually NOT trashing Microsoft or my employer here, or yours. I'm saying that if Microsoft, or my employer, were to receive a good-faith request from their customers to audit their codebases, I would like to believe that they (makers and sellers of closed source software) could grant qualified auditors access to the code used in their products, to search for security flaws and back doors, and that they will soon be legally obliged to do so. But the sad news is that those legal rights, which I believe will materialize, will be sadly inadequate, because looking at Windows source code is not the same as knowing that this is the exact and only code present on your Windows 10 machine. Microsoft is loading new stuff on there every hour of every day, for all you know, and you can't turn it off or stop it.

OpenSSH vulnerabilities of the last few years notwithstanding, I would trust OpenSSH over any closed source SSH implementation, hands down. No, many eyes do not make ALL bugs shallow. But neither is closed source safe from attack, nor can it be made safe. Networked systems must be made safe, and I believe that only a collaborative open source model will actually make them safe. I do not believe that any single company, even Microsoft, can internally work to secure the internet. Doing that will require a collaborative effort by everybody.

We all matter. Open source matters because people matter. Open source is a fundamental democratic principle of an information age.

The cost to build closed source software is going to increase radically, and the cost of supporting and auditing it for safety is only going to increase. Because of that, I believe that many software companies will over time evolve into service provider (SaaS) models, which still allow companies to get paid and to hire great staff to work on their products, but that in the end it will become cheaper to maintain an open source ecosystem than a closed source one. I believe the public will move away from trusting giant binary blobs, and that our culture will shift to a fundamental distrust of binary blobs.

I believe that companies that are farther ahead on that curve will adapt to the coming future better than those who deny it.  I believe that companies who embrace open source will save their companies.

And that little rant is really a way of saying more about why I think Gitlab is great and Github is a problem. I do not, and cannot, trust Github. I can't trust it not to abuse its position and power. I can't trust it to stay online, either. And neither should you. And neither should the tools that assume their libraries or collections of gems should all be hosted on Github. Even Github shouldn't be happy having a bullseye that large on its back.

Put the distributed back in distributed version control.  Diversity is your best defence.

Get gitlab today, and use it.  I'm from the future, I'm right.  Please listen.

What are you waiting for, smart person? Go try it now.

1. Start by creating an account on GitLab.com, so you don't have to install anything to try it out and get familiar with its features. If you know Github, you already know how to use GitLab.com, so this won't take long.

2. If you have a modicum of Linux knowledge, or can even read and type a few bash commands, you can spin up a fresh Ubuntu 16.04 LTS VM on your server virtualization hardware, or, if you haven't got any, spin one up on Azure, Amazon, or Google Cloud, and follow the easy instructions to set up your own Gitlab instance. I recommend the "omnibus" install. It's awesome.

3. Learn a bit from the excellent documentation.   If you have questions you can ask them on the community forum.


Saturday, October 22, 2016

On making OpenSUSE LEAP and Tumbleweed My Main At Home Working OS

For the last few months I have been using OpenSUSE Tumbleweed as my main home PC operating system. I love setting up my system with a pragmatic combination of command line tools, plus YaST, the nice Gnome and KDE Plasma desktops, and the QEMU/KVM based virtualization system, inside which I have been running a Windows VM.

Windows and Linux Guy Goes Linux Only (With Dual Boot for the Kids)

Because I work professionally with Windows, and periodically want to maintain some of my side projects which run on Windows, I have been running VMs with various Windows versions, lately mostly Windows 10. Why? Because it's the most convenient place for me to run the set of modern Windows development tools I want, including Visual Studio 2015.

What's terrible about Windows 10 is that it's a real bloated pig of a thing, and unlike Windows 7 you can't shut off the bits that want to take over your disk and your CPU at will. It feels entitled to update, scan, or cache giant downloads whenever it wants. It feels entitled to use all my disk I/O bandwidth for silent updates. It feels it's okay to run 100 system services and then tell me that even as the local administrator (formerly the equivalent of the Linux root user) I don't really have root-level access to my machine. My machine belongs to Microsoft. That is at the heart of why I relegate Windows to a VM. I don't trust it, and I don't trust Microsoft. Prove me wrong, Satya Nadella. Prove me wrong.

Recently I read that Microsoft has opened centers where governments can look at the code of Windows to see if it is spying on them. My take on that is that it's a show. Even if Microsoft wanted to prove that it's not spying, how could it? The closed-source nature of Windows, and its ability to deliver new code quickly behind the scenes, means that even if Microsoft is not currently spying on you (and most likely they're not), that could change without any notice to you. The KB31359719 that introduces wide-area surveillance by the NSA is only a Windows update cycle away. In fact, at a company the size of Microsoft I'd be surprised if there weren't things going on inside Windows that even Raymond Chen and Mark Russinovich don't know about, let alone Satya Nadella.

So yeah, I still have a Windows VM. Because my kids like to play Windows games, I also have a second hard drive with a native Windows 10 install and a large library of Windows games (via Steam) that I can boot into by a secret hand signal. The secret hand signal is pressing F12 to get to my BIOS boot device selection menu (as opposed to putting this into grub); from there you can select drive 3 of the three drives I have installed and boot directly into Windows 10.

I feel this is a good penalty box to place Windows in, but I'm nervous about it. In the past, Microsoft has decided that Windows 10 should be the only operating system on any box it runs on, and if they decide to roll out a Windows update that upgrades (again) the partition schema, they don't seem to feel the need to tell anybody, and might "upgrade" even other drives. That's a paranoid fear, I hear you telling me. No, it's not paranoia; it's a well known phenomenon in human psychology called the "unknown unknown". I don't think Microsoft's giant test laboratories include a machine with multiple operating systems set up just like mine. I just don't think they do.

Three Drives / No Splitting Drives between Linux and Windows

Right now I have three operating systems on three drives, one per drive. I used to put multiple operating systems on partitions of the same drive, but I have come to think that's a bad idea. Drive 1 (/dev/sda) is an SSD with OpenSUSE LEAP. Drive 2 (/dev/sdb) is a regular 2 TB 7200 RPM hard disk with Windows 10 on it, and Drive 3 (/dev/sdc) is a regular 4 TB 7200 RPM hard disk with OpenSUSE Tumbleweed and a large pile of QEMU/KVM VMs on it. The most recent time I installed OpenSUSE, I actually detached the SATA cables of every drive I didn't want the installer writing to. This seems like a prudent step to me. If I install an operating system and it decides to rewrite partition tables, or update a boot block and move things around, it's not guaranteed that all operating systems on that disk will agree about those changes. Strange things like partitions disappearing can happen, because there isn't only one agreed-upon way of partitioning disks. Better not to let operating systems try to share drives these days, when you run as many different operating systems as I do, and when it doesn't seem that anybody can feasibly test all these combinations. Back in 1996, when I would dual-boot Red Hat Linux 4.0 and a DOS+Windows 95 system, things were a bit more rational; some of the newer disk-partitioning formats we have now did not exist. On my first installation of Tumbleweed, I made the mistake of allowing it to write the boot block of my first disk, which was at the time a Windows 10 boot disk, installed with whatever defaults Windows 10 chooses when it owns the whole of a PC's only disk. I don't think it uses DOS 6-era partitions anymore, but I'm not sure.

In the old days, when I wanted to see a "snapshot" of my storage devices and their mount points, I would often use mount. These days the output of that command, if you followed OpenSUSE LEAP's automatic partitioning/mounting suggestions, is quite huge. Instead I use my new favorite ls-family command, lsblk:

> lsblk
sda      8:0    0 931.5G  0 disk
├─sda1   8:1    0     2G  0 part [SWAP]
├─sda2   8:2    0    40G  0 part /boot/grub2/x86_64-efi
└─sda3   8:3    0 889.5G  0 part /home
sdb      8:16   0   1.8T  0 disk
├─sdb1   8:17   0   3.7M  0 part
├─sdb2   8:18   0 983.8G  0 part
└─sdb3   8:19   0 878.9G  0 part
sdc      8:32   0 931.5G  0 disk
├─sdc1   8:33   0     1M  0 part
├─sdc2   8:34   0     2G  0 part
├─sdc3   8:35   0    40G  0 part
└─sdc4   8:36   0 889.5G  0 part
sr0     11:0    1  1024M  0 rom 
And then, to get my free space, df -h

Nice. Much easier to read than the jumble of junk that comes out of the mount command, because on OpenSuSE there is a btrfs root filesystem with a lot of subvolumes. When you type mount, the output feels to me (a Linux old-timer) like a bit much to digest and comprehend. And did you ever notice that the output of mount skips over the root filesystem on some Linux systems? On OpenSUSE, though, it's in there. I'll be honest, I didn't need to know about all the btrfs subvolume mounts, and I would like to know where my root filesystem lives. If anybody knows a command to dump all the mountpoints, omitting subvolume mounts, please do let me know. What the above lsblk output obscures is that my root filesystem is mounted SOMEWHERE inside sda2; that is to say, sda2 is not a 40 GB partition used only for grub EFI boot files. Everything except /home is mounted on btrfs subvolumes inside /dev/sda2, and root (/) is also a subvolume under sda2. To find the root device, the following could be put into a shell script or an alias; I'd call it rdev, because busybox and some Linux shell-utility packages used to contain just such a command:

df | egrep "/$" | cut -d " " -f1
That's better than what I came up with, which is just to grep one line out of mount:

mount |grep " / "

Thank you to jcsl on the #suse IRC channel for the above df idea.
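For what it's worth, findmnt (shipped in the same util-linux package as lsblk) seems to answer both of my questions above; I've only tried this on my own boxes, so treat it as a sketch:

```shell
# Which device actually backs the root filesystem? On a btrfs root this
# prints the device plus the subvolume in brackets, e.g. /dev/sda2[/@/...]:
findmnt -n -o SOURCE /

# A df-style summary that lists each filesystem once, without repeating
# every btrfs subvolume mount:
findmnt -D
```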

In an upcoming post I'll talk more about the snapper tool and the subtleties of the btrfs copy-on-write filesystem. Yes, OpenSUSE uses btrfs, and it's awesome. It really is a "better" filesystem, and it's quite mature and stable now; it's been around for eight years at this point.

I'll just point out one strange fact at this point: don't use df to check free space on btrfs. It doesn't report the correct values, because it doesn't understand how pooling, subvolumes, and metadata work in btrfs. Unfortunately the authors of btrfs have strange ideas about user interfaces, and they don't tell you "how many GiB are free"; you have to subtract used from size yourself:

> sudo /usr/sbin/btrfs filesystem show /dev/sda2
Label: none  uuid: 53ccd989-ad5e-4ef3-aff7-07523c971a2d
        Total devices 1 FS bytes used 6.53GiB
        devid    1 size 40.00GiB used 7.79GiB path /dev/sda2

The above output shows me I have roughly 32 GiB free on a 40 GiB btrfs filesystem. Maybe I'll write my own version of df and make a shell alias for it. The value above is similar to the output of df -h / but it can be significantly different. Be aware.
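In the meantime, here's a quick-and-dirty sketch of that subtraction as a shell function; the awk assumes the exact "devid ... size X used Y" line format shown above, which could of course change between btrfs-progs versions:

```shell
# Pipe `btrfs filesystem show` output through this to subtract "used"
# from "size" on the devid line (fields 4 and 6 in the output above):
btrfs_free_from_show() {
  awk '/devid/ {
    gsub(/GiB/, "", $4); gsub(/GiB/, "", $6)
    printf "%.2f GiB free (approx)\n", $4 - $6
  }'
}

# Usage (needs root, just like btrfs itself):
#   sudo /usr/sbin/btrfs filesystem show /dev/sda2 | btrfs_free_from_show
```

Note the "approx": this ignores metadata duplication and pooling details, which is exactly why df gets confused in the first place.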

The OpenSuSE Installers Are Fantastic  (But give OpenSuSE its own hard drive)

During one of my Tumbleweed installs, in about July 2016, I let it write to my main Windows 10 system's boot block. It appeared to wipe the partition-table information from that boot block that Windows was using. Windows 10 was rendered inoperable, and as far as I know, when OpenSUSE writes the boot block it doesn't keep a backup anywhere. I do not recommend allowing OpenSuSE to write to any drive containing a Windows 10 boot sector unless you have a full image of that entire physical disk, and also a separate boot-block backup made with dd. The situation is complex, but as I understand it currently, Windows 10 and OpenSuSE do not share a common understanding of what goes where in your boot block. Since one half of that situation (Windows 10) is a closed-source operating system built by a company that hasn't fully documented how everything in Windows 10's partition system works, I'd recommend not sharing drives. I also don't recommend resizing partitions to make Linux fit on your existing drive. If you have a desktop, I recommend you get a second drive. If you have a laptop, everyone (including me) recommends you make a full drive-image backup before you try to make a dual-boot Windows/Linux system. Resizing partitions and messing around with dual boot is a bit tricky.

Before you allow anything to write to your boot block make sure you back up your bootblock:

   dd if=/dev/sda of=/root/backup_MBR bs=512 count=1
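One caveat about count=1: it captures the classic 512-byte MBR, but on a GPT-partitioned disk the primary partition table spills past the first sector (header plus entries typically occupy sectors 1 through 33). A slightly more cautious sketch, wrapped as a function so the disk and destination are explicit:

```shell
# Copy the protective MBR plus the primary GPT (sectors 0-33, i.e. 34
# sectors of 512 bytes) from a disk or disk image to a backup file:
backup_boot_area() {
  dd if="$1" of="$2" bs=512 count=34 2>/dev/null
}

# Usage: backup_boot_area /dev/sda /root/backup_MBR_GPT
# sgdisk (from the gdisk package) can also save and restore the GPT itself:
#   sgdisk --backup=/root/sda_gpt.bak /dev/sda
#   sgdisk --load-backup=/root/sda_gpt.bak /dev/sda
```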
If you didn't listen to me (and everybody else) and you seem to have wiped your partition tables and lost everything on your drive (like I did), you might want to find out about the amazing GParted live bootable ISO that you can use to find and recover your data, maybe. Chastened thus by my unbridled enthusiasm and disregard for all sane practices, I believe the whole experience of wiping my drive was a good one. We need these little setbacks; they help keep us humble. Thanks to the awesome people who make the GParted live rescue environment (you can use it from a DVD or USB stick), I didn't actually lose any of the data I had put on there since my previous backups.

Installing and Using OpenSuSE LEAP with some nVidia Cards May Still Require proprietary NV X Drivers

OpenSuSE LEAP is a free/open/libre project that also has some non-free bits available. These non-free bits come in handy for those of us who need to do stuff like use nVidia video cards that the guys at nVidia don't feel like documenting. All praise to the team behind Nouveau for trying to make the lives of ordinary users better. I won't be buying any more nVidia video cards; I really should have bought an AMD/ATI card. As a proud Canadian, I'm happy that AMD has kept a lot of ATI-era talent around up here near where I live; modern AMD/ATI video cards are a fantastic value, and AMD is a good partner to the OSS world. nVidia is simply not. Linus has castigated them on multiple occasions, and while my new self-imposed OSS code of personal conduct prohibits me from expressing how I feel about nVidia's actions in the language I would prefer, I will state instead that I am unhappy, and that I intend to vote with my feet and my dollars. Down with nVidia and down with proprietary binary patent-encumbered blobs. Up with AMD, who somehow lives in the real world and writes binary patented drivers for their increasingly irrelevant Windows desktop share, while also finding a way to document their stuff and collaborate with OSS so that we end users don't have to suffer with broken desktops. People who go on the OpenSuSE forums and complain about their system not working out of the box with nVidia need to realize that the problem lies with nVidia, not with OpenSuSE or the wonderful folks at Nouveau. For my particular card, the Nouveau driver now works perfectly in Tumbleweed, so my hope is that this driver makes it into the next stable LEAP release.

In LEAP, my video card (nVidia GeForce GTX 750 series) doesn't work well with Nouveau. The system boots up to a graphical boot/login screen (even though I told the installer to log in automatically), but logging in and starting the KDE or Gnome session fails, dumping you back at the session-manager login screen. Switching to the virtual text consoles (Ctrl+Alt+F1) and then back to the X11 desktop (Ctrl+Alt+F7) hangs X11, leaving no moveable mouse pointer. It is possible to get back to a text login (Ctrl+Alt+F1) and kill the display-manager service and start it again (systemctl restart display-manager). The solution is to use YaST or zypper to install the xf86-driver-nv package and uninstall xf86-driver-nouveau, which also requires that you use YaST (or some editing skills) to enable the community nVidia RPM repository. After doing that, I also need to blacklist the nouveau driver, re-run mkinitrd as root, and reboot; that blocks the nouveau kernel module from loading. On my system, it's necessary to do this, as root:

echo "blacklist nouveau" >> /etc/modprobe.d/50-blacklist.conf && mkinitrd && reboot

New to Me, and Maybe new to You? journalctl is your friend!

One area of Linux system evolution I have gotten behind on is understanding systemd: what it does, and what journalctl does. I have learned a new favorite command to go with dmesg and less /var/log/Xyz, and it's this:

sudo journalctl -b

That shows the output of the systemd journal (log) for the current boot-up. This is awesome; it's like a super-dmesg command. For example, on my system the kernel is "tainted" by the evil empire of nVidia. That output has always been visible in dmesg, but it's also in journalctl, which (without the -b flag) isn't limited to the current boot-up and contains more than just kernel log output.

A fully featured less-style pager is invoked automatically by the above command, so from an interactive terminal you can type the command, then press > to jump to the end of the buffer, scroll back, and search with / just as if you'd run less /var/log/foo. I feel a little foolish that I've been using systemd-based Linux systems for years and never bothered to read the awesome manual.
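A few more journalctl variations I've picked up since; these are all standard flags documented in journalctl(1), though I've only exercised them on my own openSUSE boxes:

```shell
# Which boots does the journal know about?
journalctl --list-boots --no-pager | head -n 5

# Kernel messages only for this boot, dmesg-style:
journalctl -k -b --no-pager | tail -n 5

# Log output for a single systemd unit, e.g. the display manager:
journalctl -u display-manager --no-pager | tail -n 5

# Others worth knowing (commented out, since two of these wait or prompt):
#   journalctl -b -1                         # the previous boot's journal
#   journalctl -f                            # follow live, like tail -f
#   journalctl --since "1 hour ago" -p err   # recent errors only
```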

This brings me to the point that OpenSuSE's documentation is fabulous.    No insult to any other community is intended by that.  I'm also quite impressed by the work that the Debian community puts into its distro docs, and the Fedora project is also quite impressive.    Online support on IRC is also important to all three of those communities, and I've hung out quite a bit in the SUSE irc channel, and tried to help a bit.

Minor Annoyances With Workarounds Provided Already

For some reason, since SLES 11 (according to my googling), the default behaviour of man is insane: it finds duplicate pages for almost everything and asks you to pick which one you want. Who wants this behavior? It might be some "enterprise"-y default that end users don't like. The fix is easy; just add the lines below to ~/.bashrc. The setting is actually reported to you when man does this annoying thing. I wonder why they left this behavior as the default?

# man brings up annoying duplicates:
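From memory (I'm going by the hint man itself prints on openSUSE, so double-check against what yours says), the line it wants is this:

```shell
# Show the first matching man page instead of prompting. The variable
# name here is the one I recall man suggesting; verify it against the
# message your own man prints.
export MAN_POSIXLY_CORRECT=1
```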

The second most annoying thing is that a single click opens folders in the KDE Plasma file explorer (Dolphin). I find that annoying. The double-click behavior in Windows actually goes back to the 1987 IBM Common User Access documents, which describe the standards used in the MS-DOS Executive (shell) and the IBM OS/2 Presentation Manager shell, and which were copied carefully (and with deep understanding) by Microsoft into every version of Windows ever made. You'll note that even the contrarians at Apple have largely respected many of the CUA precedents. Changing those things is a disaster for usability. And yet the KDE and Gnome developers (and Ubuntu more than anyone) seem to periodically get tired of incremental improvements and decide to make whole new user interfaces that wreak havoc on user experience, and verge on user-hostile at times. But instead of ranting about it, just turn it off; this is Linux, where almost everything is easy to change. Click the KDE taskbar icon (equivalent to the Start icon in Windows), type Mouse into the search box, open the mouse settings window, and change the single-click-to-open setting to double click.
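For the record, the same toggle lives in a plain config file, so it can be flipped without the GUI; the file and key name below are from my own Plasma 5 setup, so treat them as an assumption if you're on a different version:

```ini
# ~/.config/kdeglobals -- assumption: Plasma 5 reads the single-click
# setting from this file, under this group and key
[KDE]
SingleClick=false
```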

"Moving In" to a New Linux Box

While I was in there, I was amused to read that the default ~/.bashrc has some lines that tell you how to specify your PalmPilot device's baud rate when you connect it to Linux. At one time I was a devoted user of my Palm III, Palm V, and Palm Vx devices. I think it's awesome that Linux carries its history forward, though I know some people periodically whine and wish that someone (not sure who) would go through and delete all our history. I actually wish openSuSE installed fortune by default, a choice the FreeBSD team made that I highly approve of. I like fortune, and install it on all my Linux systems:

> sudo zypper install fortune
root's password:

> fortune
One of the chief duties of the mathematician in acting as an advisor...
is to discourage... from expecting too much from mathematics.
                -- N. Wiener

Each time I install Linux on a fresh box, I wish I had thought of a clean way to get my preferred environment settings onto it: my ~/.vimrc, my ssh keys, all that. If anybody reading this has a preferred technique, I'd love to hear it. Some people make a "dotfiles" git repo; maybe I should start doing that. But since I keep private secrets in my dotfiles, I'm not crazy about pushing it all into one git repo. I don't even feel comfortable keeping all my secrets in a private Bitbucket or GitLab repo. What do you folks do? There has to be something better than my current mess.
Maybe I'll go with the dotfile repo, keep it for public files only, and have a script on there called fetch_secrets that decrypts and restores my secret dotfile bits from a USB key I carry about with me. Or maybe there's some better solution.
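In case it helps anyone in the same boat, here's roughly the shape I have in mind: a function in the (public) dotfiles repo that symlinks everything into the home directory, with the secret bits deliberately left out. The naming convention and paths here are hypothetical, obviously:

```shell
# link_dotfiles: symlink every "dot.*" file in a dotfiles repo into a home
# directory, backing up any real file already in the way. Secrets are NOT
# in the repo; the (hypothetical) fetch_secrets script would restore those
# separately from an encrypted USB key.
link_dotfiles() {
  repo_dir=$1; home_dir=$2
  for f in "$repo_dir"/dot.*; do
    [ -e "$f" ] || continue
    name=".${f##*/dot.}"            # repo's dot.vimrc becomes ~/.vimrc
    target="$home_dir/$name"
    if [ -e "$target" ] && [ ! -L "$target" ]; then
      mv "$target" "$target.bak"    # don't clobber a real file silently
    fi
    ln -sfn "$f" "$target"
  done
}

# Usage: link_dotfiles ~/dotfiles "$HOME"
```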

After setting up dotfiles and basics, I like to get all my development tooling installed, plus a variety of virtual machines. I use vim, NetBeans, Eclipse, WebStorm (the IntelliJ IDEA flavor for HTML5/web), Visual Studio Code, the Java JDK and the usual Java tooling suspects, .NET Core, Mono, Go, rvm/Ruby, gcc, clang, Python 3, Python 2, Node+npm, and lots more. I really like Pharo (a modern Smalltalk VM) as well; getting that running on 64-bit Linux is a bit of a pain, as it's a 32-bit-only VM right now.

On Using LEAP Daily

I found some problems getting one of the large Ruby/Rails apps I want to work with running on Tumbleweed, so I have moved to LEAP 42.1 as my main OS these days. About the only thing I notice is that the Nouveau open-source, cleanroom-reverse-engineered X drivers are less advanced in LEAP than in the bleeding-edge rolling-release Tumbleweed system. I have to use the stable, non-open-source, proprietary nVidia video drivers for X11, but even they have crazy bugs and incompatibilities with my card, a GeForce GTX 750 (rev a2). Last night my X server just crashed on LEAP. I might go back to Tumbleweed until I get rid of this heinous heap of video-card garbage.

There is something to be said for running a completely open source system, with no magic blobs of patent-encumbered binary sludge on your system. Next time I buy a video card it will be an AMD one. I'm done with nVidia. But I'm just getting started with OpenSuSE and loving it. It's clean, fast, efficient, and has several really great desktops (KDE Plasma and Gnome), and it really is up to you which one you prefer. I find I mostly prefer KDE Plasma.