HDMI Audio Dropouts on AMD 5700XT: Linux Investigation

My new 5700XT card is only supported in newer linux kernels, so I am using 5.4.0-14-generic.

Using this card plugged into my TV has been causing a lot of audio problems.
The HDMI sound cuts out frequently, for a few seconds at a time.
Watching a YouTube video seems OK, but playing games or watching a full-screen video shows the problem.
Originally I thought this was the specific game I was playing (Lego Harry Potter), but investigation showed that it also happened in other games and applications.

I’ve tried various distros with the newer 5.x kernels and found the same problems.

I have found that Pulseaudio is reporting a very low fragment and buffer size for the card:
iain@monolith1:~$ pactl list sinks | grep size
device.buffering.buffer_size = "17664"
device.buffering.fragment_size = "2944"

These numbers seem to be far too low.

Checking ALSA directly gives me:
iain@monolith1:~$ cat /proc/asound/card0/pcm0p/sub0/hw_params
format: S16_LE
subformat: STD
channels: 2
rate: 44100 (44100/1)
period_size: 736
buffer_size: 4416
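Those hw_params numbers are in frames rather than bytes, which makes them easy to misread. A quick sanity check (just arithmetic via awk) shows how they line up with the PulseAudio figures above:

```shell
# period_size and buffer_size from hw_params are in frames
# (1 frame = 2 channels x 16-bit = 4 bytes)
awk 'BEGIN {
  rate = 44100
  printf "period: %.1f ms\n", 736  / rate * 1000
  printf "buffer: %.1f ms\n", 4416 / rate * 1000
  printf "pulse buffer in frames: %d\n", 17664 / 4
}'
```

So the 17664-byte PulseAudio buffer is exactly the 4416-frame ALSA buffer (about 100 ms), and the 2944-byte fragment is the 736-frame period (about 17 ms). The two reports agree with each other.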

To “solve” the problem I added the following to my PulseAudio daemon configuration:
default-fragments = 2
default-fragment-size-msec = 250

These numbers were pulled from a post on the Arch Linux forums.

Performing the calculations myself led to even worse results, so I used higher values suggested in various places.
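For comparison, here’s what a 250 ms fragment actually asks for at the stream parameters above (my own back-of-envelope numbers, not from the forum post):

```shell
# 44.1 kHz, stereo, 16-bit => 176400 bytes per second of audio
awk 'BEGIN {
  bps = 44100 * 2 * 2
  printf "requested fragment: %d bytes (250 ms)\n", bps * 0.25
  printf "reported fragment:  2944 bytes (%.1f ms)\n", 2944 / bps * 1000
}'
```

In other words, the requested fragments are roughly fifteen times the size of what the card was reporting.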

EDIT: This didn’t actually help in Lego Harry Potter. Time to try some more things.

Books I’ve read this year – 2019

  • The Winner Stands Alone
  • The Diamond Age
  • Chronicles of Wasted Time, part 1
  • The Better Angels of Our Nature
  • Zen and the Art of Motorcycle Maintenance
  • Snow Crash
  • Starfish – Peter Watts
  • Maelstrom – Peter Watts
  • Behemoth – Peter Watts
  • The Book of Dust, parts 1 and 2
  • Homo Deus
  • C. J. Sansom
  • Black Fyre
  • Red Mars – Kim Stanley Robinson
  • Stories of Your Life and Others
  • Elantris – Brandon Sanderson
  • Borne – Jeff VanderMeer

Ditching Android: History

My friends and family know I’m on a bit of a crusade. Or maybe they would say I’m over-principled. I’ve felt a creeping sense of dread for a very long time about Facebook and Google, but it started to crystallise when the Snowden leaks happened. A lot of techies already knew what was being done, and a lot of it was out in the open already.

Ditching Android then. It’s a shame; Android has been good to me. My first smartphone was the HTC Desire, which I got in 2010 to celebrate getting my first job. I knew a few people with a smartphone already, but they were nowhere near ubiquitous.

I followed it up with the Nexus 4, which I did 2 screen replacements for.

Then the Oneplus One.

Then, briefly, a second-hand Sony Xperia Z5 Compact.

That’s 4 phones in 9 years, which means I’ve kept them slightly longer than average. At no point have I spent more than £400 on an individual phone.

CyanogenMod Android

I’ve had a go, now and then, at using non-Google Android. My first experiment was with CyanogenMod on the HTC Desire. I had purely practical reasons for this: the proximity sensor had gone bad and always detected an object in proximity. This meant that if I initiated a phone call, the screen would turn off and then just stay off. If I was trying to call a system that required touch-tone typing, I couldn’t use the keypad. I couldn’t hang up, either. If the other party didn’t hang up, or I needed to terminate the call myself, I would pull the battery.

There was one workaround for this, which was to use headphones. I think that disabled the “screen off” function even if the proximity sensor was always reading 1.

I installed CyanogenMod on it, rooted it, then played around with the settings and filesystem to disable the sensor entirely. It worked, and the phone kept me going for another year or so.

For the Nexus 4, I kept using standard Google Android, and used it happily until my second screen replacement broke, which was around the time OnePlus launched the OnePlus One.

The OnePlus One came with CyanogenMod on it, a custom build by OnePlus. Although, knowing what I know now, I wouldn’t be surprised if it was leaking all my data to a Chinese company instead of an American one.

Eventually I got fed up with the defaults on the OnePlus and installed LineageOS on it, which has so far been my favourite Android variant. Because it’s just Android!


Using Android without google services has become a lot harder than it used to be, and I think the main reason is Google Cloud Messaging, or Firebase.

Google Cloud Messaging is an inspired way to work around a problem that Android apps have always had, and when it was first implemented it vastly improved battery life and data consumption.

Trying to use any app that wants to send you notifications without using GCM is basically impossible. A good example is Whatsapp. I have issues with Whatsapp but I’m not Richard Stallman, and I’m not a hermit, so I have to make some concessions. Whatsapp is currently one of those concessions.

But using Whatsapp on an Android phone without GCM or Google services installed means notifications are often delayed, or never show up at all unless I go to the app and manually refresh it. Something I did notice is that switching between wifi networks, or switching from wifi to mobile data, would trigger it to fetch messages, so often I would arrive home from work, my phone would join my wifi network, and I’d suddenly get all the messages I’d missed that day.

The email app I used, K-9 Mail, had the same kind of issue. But it could be configured to explicitly poll for new messages every so often, which works fine… but kills your battery.

I liked LineageOS a lot, but getting it working alongside GCM involved a lot of hassle and tended to break when I updated. microG, which provides Google services for LineageOS, is great but tends to fall over on update.

I’ve had enough of the fiddling. I’m trying to force a version of Android into my use case instead of seeking out an alternative, and what I’m doing will always be hard and fiddly.

Around this time I was also looking to get a new mobile. The OnePlus One has been a good workhorse, but it’s starting to die. Too much of the hardware is failing.

Ditching gmail

I’ve had a gmail account since 2007, when I got an invite from someone I met at a university open day. At the time it was the bee’s knees, the dog’s bollocks and the cat’s… elbows? It was hard to get into, it had way more storage than anything else, and it was fresh, and new, and exciting. There were people using it as a filesystem (before Dropbox, I think).

I’ve long ranted and raved about how I no longer trust Google. I haven’t trusted them for some time, but their services have become so ubiquitous that it’s hard to actually take the plunge.

Anyway, I’ve been gradually updating my contact information and contact records on a variety of services to my new Fastmail address. I have a choice here: I can use my own, self-hosted email address (using this domain), or I can use Fastmail.

In the long run, I want everything to be on the self-hosted address, and then I can make the choice to either continue to run it myself, which is a bit of a ball-ache, or I can direct the MX record to some company like Fastmail and get them to do it for me. This is a paid service, but then it has to be – Google only run gmail for free because it’s so profitable to harvest user data and to basically be the only option for email that most people have even heard of.

While moving services, I noticed that I have a lot of accounts for basically the same thing. Game Store accounts. Every fucking gaming company has one.

List of GAMING services I’ve had to move

  • Steam
  • Origin
  • Mojang
  • Sony PSN
  • Battlenet
  • Ubisoft
  • Uber Entertainment (Planetary Annihilation)
  • WURM (who gives a shit about WURM?) I didn’t even move this one.
  • Bethesda! Why on earth does Bethesda have one? They were on Steam until recently! I don’t even remember signing up!

Now that I’ve done this, I need to decide just how long I should leave my old gmail account open for. My “human” contacts already know to use my new address, which I’ve had on this domain for a few years now. I’ve left an out-of-office auto-responder on for a few weeks now, for contacts only. Just in case anyone forgets.

I suppose I’ll let it run for another month, then kill it?

It’s not strange that I am apprehensive about this. After all, this email address has been the keystone of my digital life for more than 10 years now. Practically every service I have ever used has required an email address as part of sign up, and this has been the only one I’ve used for non-financial stuff. It’s also been the case that so many fluff type things have required an email, and the resulting spam has just been unbearable.

Multi-Seat Gaming PC for LANs: Part 0


A couple of years ago I mucked around with something called PCI-E passthrough.
The idea is to let a Virtual Machine (VM) use a physical GPU.
This lets me, someone who almost exclusively runs Linux, run some Windows-only games in a Windows VM. I think I was first exposed to this idea through a reddit post on r/linux_gaming, possibly this one:

GPU Passthrough – Or How To Play Any Game At Near Native Performance In A Virtual Machine (xpost /r/pcmasterrace) from linux_gaming

Thing is, that guide was really only a jumping-off point into an endless sea of configuration and testing, and eventual success. But it was only ever a toy for me, to see if I could.

The secondary GPU was an AMD HD 4850, which was pretty elderly, and I wasn’t going to be able to use it for everything I wanted. After the initial success, I shelved the project.
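As a reminder for future-me, the first hurdle in any of these passthrough builds is the same: enable the IOMMU and keep the host’s drivers off the guest GPU. A sketch of the relevant config, assuming an Intel CPU, GRUB and the vfio-pci module; the PCI IDs are placeholders for whatever `lspci -nn` reports for your card:

```
# /etc/default/grub – enable the IOMMU (AMD boards use amd_iommu=on):
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on iommu=pt"

# /etc/modprobe.d/vfio.conf – claim the guest GPU and its HDMI audio
# function for vfio-pci at boot (placeholder IDs):
options vfio-pci ids=1002:xxxx,1002:yyyy
```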

And then…

A few months ago, Matt came to me with a suggestion that we attempt to follow this procedure:

We both attend the Skynet Wales LAN, and he thought it’d be cool to be able to do all our gaming from a single rig instead of both of us having to take our entire setups.


The unRAID solution, on paper, looked like the easiest way to do this, but two things made me disagree:
1. I had already set up something similar a while back.
2. Why do it the easy way when you can do it the hard way?

It later turned out I’d made the correct call here, for a couple of reasons I’ll go into later.

The next few posts will go through the Hardware we used, the software / VM setup, the various tweaks we’ve applied, and some benchmarking results. Then finally how it performed in a real world event.

FreeNAS Corral and Docker

This post was written before FreeNAS Corral got memory-holed. I’ve not decided on my course of action yet. I like running my services in Docker far more than I ever liked jails and the Linux VM, so going backwards is not really an option.


I recently upgraded my FreeNAS 9.3 box to FreeNAS 10, FreeNAS Corral. The upgrade was very smooth, although I did panic at one point when it took 500 years to boot back up.

I knew that the old plugin system had been removed, and that Docker support was the new way to get plugins working. I had several plugins I needed:




The Linux VM plugin, or jail, was what let me run an Ubuntu Server to host Squeeze Server. The VirtualBox-based Linux VM is not available any more, so Squeeze Server would have to find a new home. Fortunately, there is a Docker image for Squeeze Server.


Getting an Owncloud container running was pretty trivial. I did the following:

Create a new Dataset, for Owncloud Data to live in

I gave ownership of it to the “www” user and group.

Create the Owncloud Docker Container

I needed to do the following:

  • Give it a name
  • Map the Dataset to the container path
  • Set the Container to Bridged Mode. This is definitely possible with NAT, but I prefer having unique IP addresses for each container.
  • Save the container, and start it up.
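For reference, outside the Corral UI the same container would be launched with something like this – a sketch, where the pool name, the port mapping and the image’s data path are my assumptions rather than anything from the Corral setup:

```shell
# dataset -> container path mapping (placeholder paths, adjust to your pool)
docker run -d --name owncloud \
  -v /mnt/Pool/owncloud:/var/www/html/data \
  -p 8080:80 \
  owncloud
```

The `-v` line is the interesting bit: it’s the same Dataset-to-container-path mapping the UI asks for.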

Once Owncloud had started up, I let it do its first-time setup, letting it host the database itself, etc.

Migrating the Data from the Owncloud Jail

I needed to use the command line for this.

First, stop the container.

Copy the owncloud Jail data to the new Dataset.

My owncloud data was in /mnt/<Pool>/jails/owncloud/media

This location contained the User’s directories and the database, but not the config file.

Copy everything in here to your new Dataset, into the “data” folder:
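In my case that boiled down to something like the following (hedged – “Pool” and the new dataset’s name are placeholders for your own layout):

```shell
# old jail data and the new dataset's data folder (placeholder paths)
OLD=/mnt/Pool/jails/owncloud/media
NEW=/mnt/Pool/owncloud/data

# -a preserves ownership, permissions and timestamps
cp -a "$OLD"/. "$NEW"/

# make sure the "www" user the dataset was given can still read everything
chown -R www:www "$NEW"
```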


That’s the data, but you need to edit the Config file as well. The config file for the new owncloud install is here:


You’ll need to replace this with the older config file from the plugin, which is here on my setup:


You might need to update the trusted domains to include your new Owncloud Docker IP (or the FreeNAS IP and Docker Port, if you’re using NAT).

After you’ve done this, you should be able to start the Owncloud Container and log in using your original credentials.

SSL, Let’s Encrypt, Reverse Proxies

This was hard work. The actual solution is not complicated, but there’s no single guide to tell you how to do it.

Nextcloud recommends that you use a reverse proxy to get SSL working for your Nextcloud container. Owncloud is practically the same thing, so it seems like the thing to do in both cases.

I use Let’s Encrypt for my certs, and previously I ran certbot from inside the Owncloud jail. I initially assumed I would do the same thing here, but luckily for me the FreeNAS Docker collection includes a Let’s Encrypt image that bundles nginx, which would let me set up a reverse proxy as well.

Things to keep in mind before you start:

  • Port forwarding on your router should already be set up for the domain name you want to create the certificate for.
  • You need a writeable area for the certificates. I have a dataset set up with “www” permissions specifically for this.
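For what it’s worth, the heart of the reverse proxy ends up being a server block along these lines – a minimal sketch, where the domain, the container IP and the certificate paths are all placeholders for your own setup:

```nginx
server {
    listen 443 ssl;
    server_name cloud.example.com;            # placeholder domain

    # certificates from the Let's Encrypt container's writeable area
    ssl_certificate     /certs/fullchain.pem;
    ssl_certificate_key /certs/privkey.pem;

    location / {
        # forward to the Owncloud container's bridged IP (placeholder)
        proxy_pass http://192.168.1.50:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```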





What data do I consider important?
I had a backup scare a while ago, thinking I’d lost several years’ worth of photographs. It looks like I did have some level of backup, but not all of them were retrieved.
So I suppose Photographs are one thing I feel I need to backup. What else?


What kind of files am I even thinking of, I wonder? I haven’t tried to write any short stories for years, and spreadsheets I created with Sarah to do things like plan chores don’t really seem backup worthy.
Personal projects, I suppose would count, so that would be the various little programming projects I’ve started and abandoned in the last decade.
My university work, and most importantly my 3rd year project. I worked hard on that, and even if it’s unlikely I’ll be ever doing anything with it again, it would be a shame to lose it.


I’ve got a lot of music files. A lot of these are ripped from CDs, including my parents’ CDs. Some of these are files I recorded from records, some are things I was given by friends, some of it is Sarah’s. Replacing the actual files would be a pain in the arse: re-ripping everything would take several days of solid work, and some of the CDs are in poor enough shape that I’m not sure they’d rip again anyway.


I’ve made a few videos, just short personal things mostly. Most of these are on YouTube anyway, since that was the primary mechanism I was sharing things with friends. But if I should lose my Google account, or Google went under or something (not that it’s likely to happen), they’d be gone too.


A long time ago, a member of family asked me if I still liked Google. This was just after Google had started to really take off with their suite of applications, and Google Mail had started to become such a mainstream player.
At the time, they were poking fun at my anti-Microsoft attitude and my dislike of massive IT companies. They were right to do so; I was probably the kind of person to spell Microsoft with a dollar sign.
But as the years have gone on, it’s difficult to ignore that Google (Alphabet) has become a deeply scary company.
Even if they weren’t, it’s probably pretty foolish to have so much of my identity tied to a single service, and it’s also amazingly foolish to keep any important information in a free online service. I know, since I’ve been that fool before: when Hotmail addresses were migrated to outlook.com addresses, any “inactive” accounts were purged of all old emails if no web-browser login happened within a certain number of days. I’d had that email address 10 years, and suddenly everything on it was gone. It’d be daft to assume that Google never ever EVER has that kind of thing happen.
So, I suppose I’m a tech version of a prepper. There might already be a word for that, not sure. Hopefully I won’t be suspected of being a new Unabomber or anything.
So, my data.

Backup solution.

The saying goes “3 copies, 2 media, 1 off-site”. Currently, as of writing, I have one “master” copy on my NAS, but that’s only got a single hard drive in it. This means it barely qualifies as a NAS at this point. Sure, it’s storage attached to the network, but only a fool would actually host data on it (yeah, I’m a fool).
A 2nd hard drive has been ordered and dispatched from some enormous warehouse, so soon that single hard drive will be mirrored, making me a bit less foolish.
Still, that’s only 1 medium. I’ll need to think a bit about what might constitute a 2nd medium.
The off-site backup is going to be handled in two ways. First of all, I’ve subscribed to CrashPlan. I’ve not actually uploaded any data to it yet, since it’s been a bit tricky to set up on FreeNAS. This will be my cloud backup for now, although I’m open to switching cloud backup providers at any point.
The second off-site backup will be a similar NAS box running at my parents’ house. I’ll dedicate some time to getting that set up when I visit them at Christmas. It can store their data, and I’ll rsync my data to it, and their data to mine.
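The syncing itself should be simple enough – something like this pair of cron jobs, where the hostname and paths are obviously placeholders:

```shell
# push my data to the parents' NAS over SSH (placeholder host and paths)
rsync -az --delete /mnt/Pool/mydata/ backup@parents-nas:/mnt/Pool/iain-backup/

# and pull their data back to mine
rsync -az backup@parents-nas:/mnt/Pool/theirdata/ /mnt/Pool/parents-backup/
```

Worth remembering that --delete makes the far end a mirror rather than an archive, so on its own it’s no protection against me deleting something by accident.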

The Paranoid bit

Well, I don’t think it’s really paranoia. But I’m not going to be entrusting anything more to Google from now on, and I need to apply that retroactively to their services. This means I will need to download from their servers everything I’ve ever uploaded, and also take a copy of every email I’ve ever sent or received through their services. This will all go into my backups as well.
I will also need to remove any DRM from any ebooks or films or music I’ve ever bought from them too. This might be a bit tricky.
Of course, this won’t happen overnight. I’ve already begun migrating my emails away from Google, but I still use them as my primary contact for a lot of online services. Amazon for example, uses my Gmail login. I suppose I want to get things that are actually irreplaceable away from them, so that would mean family and friends can have my non-gmail related account, and all of the faceless companies that want my email address for spurious reasons can have the gmail one. In this fashion, I will fill my gmail inbox with SPAM.

FreeNAS Setup

Installing FreeNAS.

I’d read on various blogs and how-to sites that you should install FreeNAS to a USB stick and boot from that. This leaves the hard drives free to be proper network-attached storage, rather than having to give up a partition on them just for the OS.

What I ended up doing was using VirtualBox with USB pass-through to install a VM onto a USB thumb drive. The USB stick I tried was an old SanDisk Cruzer Slice I’ve had since 2010, which has usually been my OS install disk since I don’t have a DVD-ROM drive any more.

While booting I was having no end of problems. It took a good 10 minutes to go from POST to GRUB, and after GRUB it would drop to the “mountroot” screen, which looked a bit like this:


This led me down a rabbit hole where I started to wonder if maybe there was something wrong with the server I was using.

After several days of head-scratching, I swapped to a different, newer USB stick, which worked fine. I guess I must have worn the SanDisk out over time.

This little cube sits underneath my whisky cupboard, silently plotting to back up files


Share Setup

At the moment, the NAS contains only a single hard drive, which is only 350GB. The plan was to use this to experiment with, and once I’m happy, to replace it with several large capacity drives in some kind of RAID configuration.

Even with this limited capacity, it’s actually still enough to store all of my photos and all of our music, so it’s a good test of the system’s capabilities and a good way to muck about with the various share options.

Music: CIFS read/write, and NFS read.

Films: NFS read/write

Photography: NFS read/write


Music needs to be CIFS read/write because Sarah’s laptop is where all digital music is managed, because of iTunes (we hates it FOREVER).

It needs NFS read so that Kodi, running on the Raspberry Pi, can access it, and NFS is faster for reads over the network than CIFS is.


Nixie things

With my missus’s birthday rapidly approaching, I have decided to manufacture her something neat.

To that end…

“Nixie” tubes are a brand name for a cold cathode glow discharge device, used for a relatively short period from the 1950s to the 1970s (before LED technology reached maturity) to display numbers, letters and symbols. They are only in production by a few die-hard enthusiasts, but ex-Soviet stock is reasonably readily available on the Internet.

They look something like this:


The purpose of this post, however, is as follows.
As a byproduct of the majority of available Nixies being Soviet-made, their datasheets are in Cyrillic.

And I have translated the one for the INS-1 ‘indicator’ point (not a true Nixie lamp, but a start).

So here it is, taken from two documents – one that ships with each batch (firing voltages and current), with the physical properties taken from another:


The gas discharge lamp unit is designed to display information in the form of a point in information displays.
The body is cylindrical glass. Weight: no more than 1.5 g.

Cathode marked with a dot (NOTE – this contradicts most of what I have read on various forums! However, the datasheet clearly has “анод” (anode) and “катод” (cathode) marked, with the dot on the cathode).


Instructions for Use

Firing voltage, V, Min (Max) .  .  .  .  .   .  .  .  .  .  .  .  .  .  65 (90)

Sustaining Voltage, V, Max .  .  .   .  .  .  .  .  .  .  .  .  .  .  .  .  .  55

Current, mA .  .  .  .  .  .  .  .  .  .   .  .  .  .  .  .  .  .  .   .  .  .  .  .  0.5

Vibration loads:

Frequencies, Hz  .  .  .  .  .   .  .  .  .  .  .  .  .  .  .  .  .   .  .  . 1-1000
Acceleration, m/s² (g), no more  .  .  .  .  .  .  .  .  .  .  .  .  98 (10)

Multiple impacts:

Acceleration, m/s² (g), no more  .  .  .  .  . .  .  .  .  .  .  .  147 (15)
Impact duration, ms  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  . 2-15

Single shock:

Acceleration, m/s² (g), no more   .  .  .  .  .  .  .  .  .  . 1472 (150)
Impact duration, ms  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  . 13

Temperature (environment), ° C.  .  .  .  .  .  .  .  .  .  . -60 /+ 85
Relative humidity,%, not more   .  .  .  .  .  .  .  .   .  .  .  .  .  .  98
Increased air pressure, Pa (kg/cm²) .  .  .  .  .  .294 198 (3)
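As a closing aside, those electrical figures are enough to size a ballast resistor. Assuming a 90 V supply (my assumption – anything comfortably above the firing range works) and taking the worst case of the full 55 V maximum sustaining voltage across the tube at the rated 0.5 mA:

```shell
# R = (V_supply - V_sustain) / I; volts over milliamps gives kilohms
awk 'BEGIN { printf "R = %.0f kOhm\n", (90 - 55) / 0.5 }'
```

In other words, something around 70 kΩ in series limits the INS-1 to its rated current from a 90 V rail.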