What data do I consider important?
I had a backup scare a while ago, thinking I’d lost several years’ worth of photographs. It looks like I did have some level of backup, but not all of them were retrieved.
So I suppose photographs are one thing I feel I need to back up. What else?


What kind of files am I even thinking of, I wonder? I haven’t tried to write any short stories for years, and spreadsheets I created with Sarah to do things like plan chores don’t really seem backup worthy.
Personal projects, I suppose would count, so that would be the various little programming projects I’ve started and abandoned in the last decade.
My university work, and most importantly my 3rd year project. I worked hard on that, and even if it’s unlikely I’ll ever be doing anything with it again, it would be a shame to lose it.


I’ve got a lot of music files. A lot of these are ripped from CDs, including my parents’ CDs. Some are files I recorded from records, some are things I was given by friends, and some of it is Sarah’s. Replacing the actual files would be a pain in the arse: re-ripping everything would take several days of solid work, and some of the CDs are in poor enough shape that I’m not sure they’d rip again anyway.


I’ve made a few videos, just short personal things mostly. Most of these are on YouTube, since that was the primary mechanism I was using to share things with friends. But if I should lose my Google account, or Google went under or something (not that it’s likely to happen), they’d be gone.


A long time ago, a member of my family asked me if I still liked Google. This was just after Google had started to really take off with their suite of applications, and Google Mail had become such a mainstream player.
At the time, they were poking fun at my anti-Microsoft attitude and my dislike of massive IT companies. They were right to do so; I was probably the kind of person to spell Microsoft with a dollar sign.
But as the years have gone on, it’s difficult to ignore that Google (Alphabet) has become a deeply scary company.
Even if they weren’t, it’s probably pretty foolish to have so much of my identity tied to a single service, and it’s also amazingly foolish to keep any important information in a free online service. I know, since I’ve been that fool before: when Hotmail addresses were migrated over, any “inactive” accounts were purged of all old emails if no log-in via web browser happened within a number of days. I’d had that email address for 10 years, and suddenly everything on it was gone. It’d be daft to assume that Google never ever EVER has that kind of thing happen.
So, I suppose I’m a tech version of a prepper. There might already be a word for that, not sure. Hopefully I won’t be suspected of being a new Unabomber or anything.
So, my data.

Backup solution.

The saying goes “3 copies, 2 mediums, 1 off-site backup”. As of writing, I have one “master” copy on my NAS, but that’s only got a single hard drive in it. This means it barely qualifies as a NAS at this point. Sure, it’s storage attached to the network, but only a fool would actually host data on it (yeah, I’m a fool).
A 2nd hard drive has been ordered and dispatched from some enormous warehouse, so soon that single hard drive will be mirrored, making me a bit less foolish.
Still, that’s only 1 medium. I’ll need to think a bit about what might constitute a 2nd medium.
The off-site backup is going to be handled in 2 ways. First of all, I’ve subscribed to CrashPlan. I’ve not actually uploaded any data to it yet, since it’s been a bit tricky to set up on FreeNAS. This will be my cloud backup for now, although I’m open to switching cloud backup providers at any point.
The 2nd off-site backup will be a similar NAS box running at my parents’ house. I’ll dedicate some time to getting that set up when I visit them at Christmas. It can store their data, and I’ll rsync my data to it, and their data to mine.

The Paranoid bit

Well, I don’t think it’s really paranoia. But I’m not going to be entrusting anything more to Google from now on, and I need to apply that retroactively to their services. This means I will need to get down from their servers everything I’ve ever uploaded, and also take a copy of every email I’ve ever sent or received through their services. This will all go into my backups as well.
I will also need to remove any DRM from any ebooks or films or music I’ve ever bought from them too. This might be a bit tricky.
Of course, this won’t happen overnight. I’ve already begun migrating my emails away from Google, but I still use them as my primary contact for a lot of online services. Amazon, for example, uses my Gmail login. I suppose I want to get the things that are actually irreplaceable away from them, so family and friends can have my non-Gmail account, and all of the faceless companies that want my email address for spurious reasons can have the Gmail one. In this fashion, I will fill my Gmail inbox with SPAM.

FreeNAS Setup

Installing FreeNAS.

I’d read on various blogs and how-to sites that you should install FreeNAS to a USB stick and boot from that. This leaves the hard drives free to be proper network attached storage, rather than giving up a partition on them just for the OS.

What I ended up doing was using VirtualBox with USB pass-through to install a VM onto a USB thumb drive. The USB stick I tried was an old SanDisk Cruzer Slice I’ve had since 2010, which has usually served as my OS install disk since I don’t have a DVD-ROM drive any more.

While booting I was having no end of problems. It took a good 10 minutes to go from POST to GRUB, and after GRUB it would drop to the FreeBSD “mountroot>” prompt.

This led me down a rabbit hole where I started to wonder if maybe there was something wrong with the server I was using.

After several days head-scratching, I swapped to a different, newer USB stick, which worked fine. I guess I must have ruined the Sandisk over time.

This little cube sits underneath my whisky cupboard, silently plotting to back up files


Share Setup

At the moment, the NAS contains only a single hard drive, which is only 350GB. The plan was to use this to experiment with, and once I’m happy, to replace it with several large capacity drives in some kind of RAID configuration.

Even with this limited capacity, it’s actually still enough to store all of my photos and all of our music, so it’s a good test of the system’s capabilities and a good way to muck about with the various share options.

Music: CIFS read/write, and NFS read.

Films: NFS read/write

Photography: NFS read/write


Music needs to be CIFS read/write because Sarah’s laptop is where all digital music is managed, thanks to iTunes (we hates it FOREVER).

It needs NFS read so that Kodi, running on the Raspberry Pi, can access it, and NFS tends to be faster for reads over the network than CIFS is.
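FreeNAS sets all of this up through its web UI, but for illustration, the equivalent read-only NFS export on a plain FreeBSD box would be a one-line /etc/exports entry (the dataset path and the Pi’s address here are invented):

```
/mnt/tank/music  -ro  192.168.1.20
```

The CIFS read/write side would then be handled separately, by Samba.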


FreeNAS backups and share

I’ve been thinking for a long time about solidifying and formalising my backup procedures. To be honest, my existing system is pretty awful, and it’s caused me problems in the past.

Existing Backup solution

On my desktop, I have a 250GB SSD where my operating system (Xubuntu) lives. I also have a 360GB HDD called “coffee”, which is where I store photographs, music, videos, downloads, etc. MOST of what I download ends up here, but some ends up on the SSD when I forget to move it.

I also have numerous external hard drives, where a collection of media I’ve built up over the last decade lives, in a pretty awful folder structure. Some of this is duplicates, some of it isn’t, some of it is corrupt, some of the hard drives are dead.

I know for a fact that the majority of photographs I took between 2006 and 2010 are gone forever, because the drive I was storing them on died.

Sarah’s files

Sarah’s laptop is slightly better in some ways, and much worse in others. It’s better because File History in Windows 10 is turned on, but worse because:

  • The drive that the file history is on is one of my many external hard drives
  • We never connect the drive
  • Her filing system is non-existent

It’s really bad. Her music is so scattered and so poorly named that it’s going to take me a while to get it sorted out. I’m going to try out a few different tools for automatic tagging and renaming, and see where it gets us.

FreeNAS & HP ProLiant MicroServer

I’ve played with FreeNAS before, and I love how it works and what it can do, but I’ve never had any dedicated hardware to run it on. That’s changed now that I’ve bought one of these:

(Image: the HP ProLiant MicroServer)

I’ve begun setting it up and getting a working FreeNAS install on it, but I’ve got a lot of thinking to do about how it’s going to all work and fit together.

I’ve got a few basic requirements:

  1. Have regular backups from Sarah’s laptop
  2. Have regular backups from my desktop
  3. Have a network share configured on it for all of our pictures
  4. Have a network share for all of our Music
  5. Install the OwnCloud plugin
  6. Our phones must auto-backup images to OwnCloud
  7. OwnCloud must somehow share its storage space, so that images on the network shares are visible from OwnCloud and our phone uploads are visible on the network share

Once this is all done, I want to be able to do the following:

  1. Access the music shares from Kodi, on my Raspberry Pi. The Pi is hooked up to my amplifier, and I want to control what it’s playing from my phone.
  2. Access OwnCloud from the internet. This will be a small project all on its own.

I need to make a few decisions about how the shares will work. If I’m already backing up everything from Sarah’s laptop, do I want to use Windows File History to make the backups, or something else?

If I use File History, then it’s going to dump all versions of everything in a network share. I don’t really want all the old versions to be visible to every other device (and Owncloud) that is using that particular share or dataset.




Somebody sent me to this tool.

If your Steam Profile is public, it will show you how long you’d need to play for to “complete” all games in your steam library.

My own entry is here.

It relies on average game length statistics from , which will obviously be a bit hit and miss with games like Kerbal Space Program. For open ended sandbox games, like KSP, what counts as beating it?

Actually, I’ll go and look it up:




I suppose KSP does have a Career mode these days, and maybe that’s what’s been submitted for the Main Story stat? Even so, I could play this game for literally the rest of my life and I’d still probably find stuff to do.

So SteamLeft won’t be perfect, by a long shot. I’ve got whole sections of my library dedicated to multiplayer games that I’ll never “beat”, and I’ve got a load of duplicate games as well, for when things have a separate beta branch entry in my games list.

However, I wrote myself a little script to scrape my steamleft entry daily, and log the results to a file.

The script is here:

 wget -O - | echo $(date +"%d-%m-%Y") $(xmllint --html --xpath "/html/body/main/div/div/section[1]/div[4]/text()" - 2>/dev/null) >> /home/anorak/steamleft/bob

I’ll break down what it’s doing:

wget -O -

Wget grabs whatever content exists at the URL you give it. In this case, the URL is the steamleft page for my steam account. This information is looked up in realtime when you visit the page (presumably).

The “-O -” argument redirects the downloaded contents to standard output, rather than writing them to disk. This is useful because we don’t need to read the contents back from disk afterwards, and the next part of the command can consume the data directly from the pipe.

| echo $(date +"%d-%m-%Y") $(xmllint --html --xpath "/html/body/main/div/div/section[1]/div[4]/text()" - 2>/dev/null) >> /home/anorak/steamleft/bob

The “|” character is a pipe, and it’s used for directing the output of the command before it to the input of the command after it.

echo $(date +"%d-%m-%Y")

This outputs the date in the format “dd-mm-yyyy”.

xmllint --html --xpath "/html/body/main/div/div/section[1]/div[4]/text()"

xmllint isn’t part of the standard Ubuntu install, so I had to install it myself. It’s a program for parsing XML. The “--html” option allows parsing of HTML (HTML is often not XML compliant).

The “--xpath” option lets me grab the actual element I want from the page. I found the XPath by looking through the SteamLeft page in Google Chrome:


$(xmllint --html --xpath "/html/body/main/div/div/section[1]/div[4]/text()" - 2>/dev/null) >> /home/anorak/steamleft/bob

The “-” reads the input from standard in.

The “2>/dev/null” redirects all errors to /dev/null. And there WILL be errors, because it’s HTML, and XML parsers do not like HTML very much.

>> /home/anorak/steamleft/bob

This adds the result to the end of the output file.
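The distinction between “>” and “>>” is worth calling out: a single “>” would clobber the log on every run, while “>>” keeps the history growing. A quick demonstration:

```shell
# '>' truncates the file before writing; '>>' appends to whatever is there.
cd "$(mktemp -d)"
echo "first run"  > log.txt
echo "second run" >> log.txt
cat log.txt   # both lines survive
```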

The output file looks like this:

30-03-2015 1768 continuous hours
31-03-2015 1768 continuous hours

This script is set to execute once per day. I’ll leave it running for a few months, and see how I’m doing at beating my library. I’ll hook it up to GNUplot at some point too, for shits and giggles.
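I haven’t shown the scheduling part; a crontab entry along these lines (the script path and time of day are made up for illustration) would run it once per day:

```
# m  h  dom mon dow  command
0 9 * * * /home/anorak/steamleft/scrape.sh
```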

I’ve been fairly good at not buying new games, with the intention of beating my back-catalogue. This should give me an indication of how I’m doing, and it gave me an interesting little exercise. In fact, it took me longer to write up how I did it than it took to actually do it.


UAV, reading sensor data Part II

In our previous adventure, Matt and I defeated the dragon and rescued a fake princess who turned into a giant spider. That is to say, we managed to get some meaningful data from both the gyro and the accelerometer. We also made some pretty graphs.

The next step is for us to compare the outputs from the gyro and the accelerometer. This is currently impossible, given that the accelerometer readings are already transformed into meaningful units (degrees), while the gyro just gives a load of “rate of change” readings.

We can get the orientation readings from the Gyro like this:

  //Create variables for outputs
  float xGyroRate, yGyroRate, zGyroRate, xGyroRate2, yGyroRate2, zGyroRate2;
  long Time;
  //Read the x,y and z output rates from the gyroscope & correct for some inaccuracy; convert to seconds
  xGyroRate = (gyroReadX())/57.5;
  yGyroRate = (gyroReadY())/57.5;
  zGyroRate = (gyroReadZ())/57.5;
  //Determine how long it's been moving at this 'rate', in seconds
  Time = (millis()-previousMillis);
  //Multiply rates by duration
  xGyroRate2 = -(xGyroRate/Time)/4;
  yGyroRate2 = -(yGyroRate/Time)/4;
  zGyroRate2 = -(zGyroRate/Time)/4;
  //Add to cumulative figure
  if (xGyroRate2 > gyroLPF || xGyroRate2 < -gyroLPF)
    CumulatGyroX += xGyroRate2;
  if (yGyroRate2 > gyroLPF || yGyroRate2 < -gyroLPF)
    CumulatGyroY += yGyroRate2;
  if (zGyroRate2 > gyroLPF || zGyroRate2 < -gyroLPF)
    CumulatGyroZ += zGyroRate2;

Now that we’ve done this, we can measure an axis from both the gyro and the accelerometer at the same time, and overlay them on top of one another in gnuplot.

Like so:

Yeah, that didn't really work.


Ok, so there was something wrong with that. We tweaked the code some more, and got something a bit better:

No, that's worse.

Much worse.

Finally, we figured out where we went wrong with the code. We don’t have the old versions, so we can’t show you our idiotic mistakes. After fixing it, we get something much closer to what we were expecting:

Roll measurement from Gyro and Accelerometer


We can see from this graph that both sensors are outputting more or less the same thing. However, the gyro measurements are off by a bit at the start, and slowly produce a more noticeably incorrect orientation; if we used just the gyro, we’d end up with the plane downside up. There are also “spikes” in the accelerometer readings, at 700 and 1300 – these are probably noisy readings from the sensor.

To get an idea of how much the gyro readings drift, we turned on all the gyro sensors and then left the board still for a few seconds:

Gyro drift in three axes.


This is obviously not going to work long term – we can’t be certain of either sensor. We need a way to combine the readings from both sensors, or face the consequences.

UAV, reading sensor data

A long time before the session where we controlled some servos, we had a session where we read sensor data.

We’ve tried this a number of times, but this instance was where we actually started to get worthwhile data, and we could interpret it correctly.

We are currently reading two sensors to find orientation data: the 3-axis gyroscope and the 3-axis accelerometer.

I’ll cover the accelerometer first.

Reading Accelerometer Data

We’re using a 3-axis accelerometer which, if read correctly, can tell us the pitch and roll of the aircraft (actually, just the pitch and roll of the sensor, but the sensor will be built into the aircraft).

Each axis in the accelerometer (we’ll call them accX, accY and accZ) tells us how much pull that sensor is experiencing (at rest, this is due to gravity). We only need 1 axis to start with, to measure pitch.

In this crappy diagram, the rectangle represents our accelerometer. It’s measuring apparent gravity. As the accelerometer has its pitch increased (rotated clockwise, in this case), the apparent gravity being measured in that axis will decrease.

Theta represents the angle between the apparent gravity and actual gravity.

The trigonometry to calculate pitch is pretty easy:

apparent gravity = cos (theta) * g


acc = cos (theta) * g

The relationship between theta and pitch is very simple: they are complementary angles, so pitch = 90° − theta.

Rearranging

apparent gravity = cos (theta) * g

gives

theta = acos (acc / g)

And then, since 90° − acos(x) = asin(x), the actual pitch is:

pitch = asin (acc / g)
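As a quick numeric sanity check of that formula (done in the shell with awk, purely for illustration): if the axis reads half of g, the pitch should come out at 30 degrees. awk has no asin(), so this uses the identity asin(x) = atan2(x, sqrt(1 − x²)):

```shell
awk 'BEGIN {
  acc = 0.5                                  # acc / g
  printf "%.1f\n", atan2(acc, sqrt(1 - acc*acc)) * 180 / 3.14159265
}'
# → 30.0
```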

//Accelerometer Read and Output Section
  float xAccRate, yAccRate, zAccRate;
  double pitch, roll;
  xAccRate = (accReadX());
  yAccRate = (accReadY());
  zAccRate = (accReadZ());
  double measured_g = sqrt((xAccRate*xAccRate)+(yAccRate*yAccRate)+(zAccRate*zAccRate));
  roll = (atan2(xAccRate/128,zAccRate/128))*(180/3.141);
  pitch = (atan2(yAccRate/128,zAccRate/128))*(180/3.141);

Reading Gyro Data

Gyro data is actually a lot easier to read, in some ways. All the gyro is doing is measuring the rate of change since the last reading.

Therefore, all we need to do is integrate over a period of time to get the rotation in a certain axis.
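A toy version of that integration, with made-up sample values: five gyro readings of 10 deg/s, taken every 0.01 s, should accumulate to a 0.5 degree rotation (awk here just stands in for the Arduino loop):

```shell
printf '%s\n' 10 10 10 10 10 |
  awk -v dt=0.01 '{ angle += $1 * dt } END { print angle }'
# → 0.5
```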


//Gyro Read & Output Section
  //Create variables for outputs
  float xGyroRate, yGyroRate, zGyroRate, xGyroRate2, yGyroRate2, zGyroRate2;
  long Time;
  //Read the x,y and z output rates from the gyroscope & correct for some inaccuracy; convert to seconds
  xGyroRate = (gyroReadX())/57.5;
  yGyroRate = (gyroReadY())/57.5;
  zGyroRate = (gyroReadZ())/57.5;
  //Determine how long it's been moving at this 'rate', in seconds
  Time = (millis()-previousMillis);
  //Multiply rates by duration
  xGyroRate2 = -(xGyroRate/Time)/4;
  yGyroRate2 = -(yGyroRate/Time)/4;
  zGyroRate2 = -(zGyroRate/Time)/4;
  //Add to cumulative figure
  if (xGyroRate2 > gyroLPF || xGyroRate2 < -gyroLPF)
    CumulatGyroX += xGyroRate2;
  if (yGyroRate2 > gyroLPF || yGyroRate2 < -gyroLPF)
    CumulatGyroY += yGyroRate2;
  if (zGyroRate2 > gyroLPF || zGyroRate2 < -gyroLPF)
    CumulatGyroZ += zGyroRate2;

Pretty simple 🙂

Some pretty graphs

Of course, that code took us a long time to actually write. And we may have borrowed some from elsewhere, I can’t for the life of me remember. Once we’d got it working, we plumbed the serial output into GNUPlot, so that we could get a visual representation of what was going on.

The general idea was to tilt the board by 90 degrees in one axis, then return it to its original position. This would produce a sine wave, e.g.:


After mucking about for a bit with minicom, I used this command to capture the serial output:

sudo minicom --capture mincap

The serial output just looks something like this:

ID: 69
2 0 0
-32 -4 -27
-1 -1 1
0 -1 1
0 0 1
0 -1 0
0 1 0
0 1 0
1 -1 0
1 -1 0
0 -2 0
0 -1 0

Once we’d got that, I spent yet more time mucking around in gnuplot. Actually, I jest. It was pretty easy to plot a graph.

The first graph we got was this:

We'd done something very obvious and simple wrong.


Spot the deliberate mistake.

Upon seeing this, we retired to the pub. Over a pint or three, we realised what it was we’d done wrong.

The Arduino board has a 16-bit integer size, meaning it has a maximum value of 32,767. If you try to do

int i = 32767 + 1;

then i would wrap around to -32,768.
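You can emulate the same 16-bit wraparound with ordinary (wider) shell arithmetic, by masking to 16 bits and then reinterpreting the top bit as a sign bit:

```shell
v=$(( (32767 + 1) & 0xFFFF ))            # keep only the low 16 bits
[ "$v" -gt 32767 ] && v=$(( v - 65536 )) # reinterpret as signed
echo "$v"
# → -32768
```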

And that’s exactly what was happening. We’d used an Integer, because occasionally I’m as thick as a brick sandwich.

We weren’t doing anything with the data yet to convert it to degrees, and we weren’t even sure how big the numbers would be, and we were just adding it all up.

Here’s another graph showing the same thing, but with the dots joined up:


After we started using the (entirely more sensible) long data type instead, we got what we’d originally expected:


So, as predicted, we got ourselves half a sine wave. If we tilt it by 90 degrees in one direction, then back to flat, then 90 degrees in the other direction, we get the full sine wave:

If the graph is a bit lumpy, it's because Matt had the shakes.


In the next post, I’ll talk about working out the actual orientation, and sensor drift.  We’re also going to compare the output from the accelerometer and gyro.

And again with the coding…

Here’s a little Arduino sketch I’ve cooked up to provide a nice crank signal to the FPGA-based ECU, for the purposes of bench testing the output. It’s a simulation of a nicely cleaned-up, digital signal generated by a 36-1 toothed crank wheel on your typical Ford engine.

Essentially, a little throttle position sensor (read: potentiometer) is hooked up to ADC pin 5. This is purely to give me something to twiddle to see if the FPGA is reacting quickly enough to the change in frequency (not that I have any fear of this, really- the damn thing is pretty fast- but alas, I must have something to fiddle with while testing, or it’s no fun!).

The LED flashing part is purely for visual effect; an ‘is it working’ check before I had a chance to hook it up to an oscilloscope for testing.

The code is simple enough, my written code is starting to look more user-friendly, and- importantly- it works. Onward with the FPGA testing, then?

Maybe after my exams; this was put together in my ‘evening off’. (Oh, ninjaedit- it only ‘works’ when debug = 1)

#include <Wire.h>

// Calibration values for TPS
float minimumReading = 48;
float maximumReading = 919;

// Settings for TPS
int analogPin = 5;         // Throttle Position Sensor wiper pin
int throttleRead = 0;      // Variable to store the value read
int throttleCorrected = 0; // Variable to store the corrected value

// LED flashy bit setup
const int ledPin = 13; // The number of the onboard LED pin
int ledState = LOW;    // Used to store the LED state

// Variables for pulse generator
long pulseLength = 0;  // Variable to store the generated pulse length
long gapLength = 0;
int toggle = 1;        // Variable to act as a toggle
long lastMicros = 0;   // Variable to act as timer function
int count = 1;         // Variable to store the current tooth count
float calibrate = 92.59259259259; // Calibration for top speed = appx. 9k rpm (float, so the fraction isn't truncated)
int rpm1 = 1;          // Variable to store the RPM number
int debug = 1;         // Debug setting- changes output from RPM & pulse length to counter & gap indicator

void setup() {
  Serial.begin(9600); // Setup serial
  pinMode(ledPin, OUTPUT);
}

void loop() {
  tpsRead();       // Run TPS Read loop
  pulseGenerate(); // Run the pulse generator
}

void tpsRead() {
  throttleRead = analogRead(analogPin); // Read the input pin
  throttleCorrected = ((throttleRead - minimumReading) / maximumReading) * 100; // Subtract minimum from TPS. Calculate percentage of max
  if (throttleCorrected < 1) throttleCorrected = 1; // Clamp, so the pulse never collapses to zero length
  if (debug == 0) {
    rpm1 = 60000000 / (pulseLength * 36);
    Serial.print("Simulated RPM :");
    Serial.print(rpm1);
    Serial.print("RPM; Pulse Width :");
    Serial.println(pulseLength);
  }
}

void pulseGenerate() {
  pulseLength = (throttleCorrected * calibrate); // Calculate length of 'high' pulse (in microseconds)
  gapLength = pulseLength * 3;
  if ((micros() > (lastMicros + pulseLength)) && (toggle == HIGH) && (count <= 35)) {
    lastMicros = micros();
    toggle = LOW;
  }
  if ((micros() > (lastMicros + pulseLength)) && (toggle == LOW) && (count < 35)) {
    lastMicros = micros();
    toggle = HIGH;
    count += 1;
    if (debug == 1) {
      Serial.print(count);
      Serial.print(",");
    }
  }
  if ((micros() > (lastMicros + gapLength)) && (toggle == LOW) && (count == 35)) {
    lastMicros = micros();
    toggle = HIGH;
    count = 1;
    if (debug == 1) {
      Serial.println("GAP");
    }
  }
}

ECU Project- Entry the First

Worthy of note, is that this is -not- the first effort I’ve made in this field.

In short, the reason behind this project is that I have an MGB GT; after the second B-series engine gave up the ghost (they don’t really seem to like 20,000 motorway miles and 12,000 ‘other’ miles a year), I got fed up- and the car sat languishing in the garage for 6 months.

So, my birthday rolled around- and my mother made me a little purchase…


As it turns out, this gave me some bad ideas. As there are several people in my group of friends and acquaintances- not least of which, my father- who are very much into the Ford Capri, I’d heard of a little engine swap that can be performed relatively simply, into that vehicle- the Cosworth 24v V6.

The block length of which would just about fit. As would the gearbox- assuming I’d swap the auto ’box that comes bolted to the engine for an MT75 5-speed manual. Naturally, I prefer the sound of carburetors, and this is, as standard, a fuel-injected, ECU-controlled engine…

A solution was needed; off-the-shelf solutions, as usual, were too expensive… this is my first step along the route to achieving it.

The purchase of an FPGA (Field Programmable Gate Array) was that first step- inspired by an interview with a gentleman who suggested that experience using them would aid my future employment prospects.

It’s a nice little device, and very clever- anyone familiar with logic circuitry will understand the basic concepts- AND, NOT and OR gates arranged in specific ways to achieve outputs that are desired.

The software we were using (Quartus II) also incorporates megafunctions- effectively, complex logic-based designs in premade ‘blocks’.

Logic Layout

Pictured above is an (admittedly, relatively complex) LED flasher. Connected to the ‘input’ on the left of the image is one of the internal clock pins on the FPGA- more on this later- and the other is connected to an included LED.

Now, to start with, I had no real idea what to expect- and other than making sure that the board was properly functional, did nothing, until I had a little backup; enter Iain, stage right, for “beer and a curry, honest”.

Further, we were not able to confirm the frequency of the clock pin on the FPGA I was using. They appear to be variable, and my board turns out to have a few idiosyncrasies (including relatively poor user manuals). So, having read somewhere that it was either 12, 24, or 48 MHz, for my trial design we set up a little counter-comparator pair- counting clock pulses, at 6,000,000 turn the LED on, at 12,000,000 reset the counter.

pin assignment

Pin association is given in the above image- pin 66 is one of the onboard clock sources, and pin 35 is the pin for the LED output on my board.

And… it worked! The LED flashed at a steady rate of on for half a second, off for half a second, telling me that the frequency of the CLK source that we used was 12 MHz.
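The arithmetic checks out: toggling every 6,000,000 ticks of a 12 MHz clock gives half a second per phase:

```shell
awk 'BEGIN { printf "%.1f\n", 6000000 / 12000000 }'   # ticks / Hz = seconds
# → 0.5
```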

Some issues we ran into- when assigning the constant values, I mistyped on no less than 3 occasions (easy to do, when you’re typing out a 32-bit binary number), omitting a digit, or duplicating another. This took a surprising length of time to rectify (when you’re not sure your design is going to work, it’s easy to overlook something obvious). We also had a little fun with having to re-design the circuit at various points- the LPM_compare function required a 32-bit number to ‘compare’, and up to that point everything had been working on 25 bits to save space… back to the drawing board!

End of Step One.

Grub EFI Dual boot errors

I recently decided to install Xubuntu on my desktop, having gotten fed up with Arch Linux. Arch Linux will make its return, but I was having too much trouble with the AMD legacy drivers. When I’ve upgraded my graphics card (hint – to one that doesn’t have shit Linux support), I’ll move back.

In the meantime, I had some problems with Grub.

To begin with, the install did not recognise my Windows partitions at all, but I could still boot from the EFI menu to Windows. I would rather have grub though, instead of having to hammer away at the F8 key on every cold boot.

According to the Ubuntu page about UEFI, you should use boot-repair.

I used boot repair, and the output from it can be found here.

This updated grub, found the Windows partition, and added Windows as an option (actually 2 options for Windows. Don’t know why, but there are multiple EFI files for Windows on the EFI partition).

I got this error:

error: no such file or device: 16E0-4903
error: no such file or device: 16E0-4903
error: file '/EFI/Microsoft/Boot/bootmgfw.efi' not found
Press any key to continue

The grub entry for Windows looked like this:

menuentry "Windows 7" {
search --fs-uuid --no-floppy --set=root 16E0-4903
chainloader (${root})/EFI/Microsoft/Boot/bkpbootmgfw.efi
}

I checked the EFI partition, and sure enough the file is there. I checked the UUID of sda1 (the EFI partition), and it was correct.

I tried to fix it myself, by using the hard drive name instead of the UUID, but that gave errors about partition type.

Turned out I needed to add this:

menuentry "Windows 7" {
insmod part_gpt
search --fs-uuid --no-floppy --set=root 16E0-4903
chainloader (${root})/EFI/Microsoft/Boot/bkpbootmgfw.efi
}

I found the information here.

I don’t know why that line wasn’t added, but there we are.

Ink Part 2

Canny Edge Detection

The first stage for me is going to be edge detection. There are many edge detectors, but I’m going to use the Canny Edge Detector, because I’m vaguely familiar with it and it’s quite well regarded.

Here’s the image I’m going to use for the initial development:

This handsome face *cough* is about to be melted down and turned into a bunch of squiggly lines. It was taken with my webcam in a partially darkened hotel room. On a Sunday. I also look a little shocked for some reason.

Step 1: Noise Reduction

To start with I’m going to blur my image. This might seem a bit counter-intuitive, but it’s actually very helpful in cutting down the amount of noise in the picture. I mean, look at it. There are randomly coloured pixels all over the place, due to poor lighting and a poor webcam. This kind of thing is going to cause interference in the various algorithms I’ll be using. I’ll show off why in a bit.

I’m going to use a very basic “box blur” kernel, not a fancy Gaussian one. Mainly because it’s easier and I’m lazy.

I’ll be using a 3×3 convolution kernel, like this:

Box Blur Kernel


This gets applied to an image by moving the centre of the kernel along each pixel in the image, multiplying the kernel element-wise with the window of pixels beneath it, and summing the results to produce the new pixel value.

Obviously, you have to deal with edge cases. I’ve taken the easy way out and _not_ dealt with them. I’m only blurring the pixels that are not at the very edge, which leaves a strip around the image that is unprocessed, one pixel thick. Visually, this doesn’t matter- I can leave it there or reduce the image size by 2 pixels in the x and y direction.

Original image has been desaturated and blurred, to try to cut down on the noise.


Step 2: Get Gradient Intensities:

I’m going to use a basic Sobel filter to perform edge detection. Sobel detects rapid intensity changes in a specific direction. In fact, you need a Sobel filter per direction:


This filter is applied in the same way as the box-blur filter described earlier. Here’s the output:

Gradient intensities in the horizontal direction

The test image with intensities in the Y direction calculated

These two result images are actually from before I added the box-blur filter. I’m doing this whole writeup in the wrong order.

The two gradient intensity images are summed together to get the final gradient result, using this very simple formula

|G| = |Gx| + |Gy|

That is a shortcut from doing the full equation of:

G = SquareRoot(Gx² + Gy²)

This equation, applied to every pixel in both images, gives this result:

A basic gradient intensity image. By itself, this doesn’t do much, but I like to get tangible output from these algorithms. It’s a good visual lesson for what I just did.
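To see how much the |Gx| + |Gy| shortcut over-estimates, here are both forms for one example pixel (Gx = 3, Gy = 4, values picked purely for illustration):

```shell
awk 'BEGIN {
  gx = 3; gy = 4
  exact    = sqrt(gx*gx + gy*gy)
  shortcut = (gx < 0 ? -gx : gx) + (gy < 0 ? -gy : gy)
  printf "%g %g\n", exact, shortcut
}'
# → 5 7
```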

That’s it for now. In the next post I do on this, I’ll be doing:

  • Edge direction calculations
  • Thin edge detection
  • Edge following