Ink Part 2

Canny Edge Detection

The first stage for me is going to be edge detection. There are many edge detectors, but I’m going to use the Canny Edge Detector, because I’m vaguely familiar with it and it’s quite well regarded.

Here’s the image I’m going to use for the initial development:

This handsome face *cough* is about to be melted down and turned into a bunch of squiggly lines. It was taken with my webcam in a partially darkened hotel room. On a Sunday. I also look a little shocked for some reason.

Step 1: Noise Reduction

To start with I’m going to blur my image. This might seem a bit counterintuitive, but it’s actually very helpful in cutting down the amount of noise in the picture. I mean, look at it. There are randomly coloured pixels all over the place, due to poor lighting and a poor webcam. This kind of thing is going to cause interference in the various algorithms I’ll be using. I’ll show off why in a bit.

I’m going to use a very basic “box blur” kernel, not a fancy Gaussian one. Mainly because it’s easier and I’m lazy.

I’ll be using a 3×3 convolution kernel, like this:

Box Blur Kernel


This gets applied to an image by moving the centre of the kernel over each pixel in the image, multiplying each kernel element by the image pixel underneath it, and summing the results.

Obviously, you have to deal with edge cases. I’ve taken the easy way out and _not_ dealt with them. I’m only blurring the pixels that are not at the very edge, which leaves an unprocessed strip, one pixel thick, around the image. Visually, this doesn’t matter- I can leave it there or reduce the image size by 2 pixels in the x and y direction.
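Roughly, here’s what that whole step looks like in Python (a sketch, not my actual code; it assumes the image is a greyscale NumPy array):

```python
import numpy as np

def box_blur(image):
    """Apply a 3x3 box blur, leaving the 1-pixel border unprocessed."""
    kernel = np.ones((3, 3)) / 9.0  # every weight is 1/9
    out = image.astype(float).copy()  # border pixels pass through untouched
    h, w = image.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Multiply the kernel with the 3x3 window centred on (y, x),
            # then sum the result.
            window = image[y - 1:y + 2, x - 1:x + 2]
            out[y, x] = np.sum(window * kernel)
    return out
```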

Original image has been desaturated and blurred, to try to cut down on the noise.


Step 2: Get Gradient Intensities

I’m going to use a basic Sobel filter to perform edge detection. A Sobel filter detects rapid intensity changes in a specific direction. In fact, you need one Sobel filter per direction:


This filter is applied in the same way as the box-blur filter described earlier. Here’s the output:

Gradient intensities in the horizontal direction

The test image with intensities in the Y direction calculated

These two results images are actually from before adding the box-blur filter. I’m doing this whole writeup in the wrong order.

The two gradient intensity images are summed together to get the final gradient result, using this very simple formula:

|G| = |Gx| + |Gy|

That is a shortcut for the full equation:

|G| = √(Gx² + Gy²)

This equation, applied to every pixel in both images, gives this result:

A basic gradient intensity image. By itself, this doesn’t do much, but I like to get tangible output from these algorithms. It’s a good visual lesson for what I just did.
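Sketched in Python, this stage looks something like the following (reusing the same border-skipping convolution as the blur; the kernel values are the standard Sobel pair):

```python
import numpy as np

# Standard Sobel kernels, one per direction
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve3x3(image, kernel):
    """Apply a 3x3 kernel, skipping the 1-pixel border as before."""
    out = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.sum(image[y - 1:y + 2, x - 1:x + 2] * kernel)
    return out

def gradient_magnitude(image):
    gx = convolve3x3(image, SOBEL_X)
    gy = convolve3x3(image, SOBEL_Y)
    return np.abs(gx) + np.abs(gy)  # the |G| = |Gx| + |Gy| shortcut
```

(Strictly, sliding a kernel without flipping it first is cross-correlation rather than convolution, but flipping a Sobel kernel only changes its sign, and taking absolute values hides the difference.)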

That’s it for now. In the next post I do on this, I’ll be doing:

  • Edge direction calculations
  • Thin edge detection
  • Edge following


And the winner is…

Despite a 14-day top end for delivery, the Hobbyking parcel arrived this morning. Only been unboxed so far- here are a few pics:

The postcard is for scale. That’s a 12-inch diameter, 3-bladed prop. The prop fitting kit is very nicely machined and the spinner is a nice finishing touch; not bad for an extra $3 on top of the cost of the motor.

Sadly, the batteries don’t have a connector compatible with the charger (despite being the same brand), and I forgot to order the connectors for the speed controller, so anything more than just a little assembly will have to wait.

Signing off.

Overambitious Project: Ink

I’ve got that itch again. The one that sits at the back of my head going “you haven’t done any cool programming in ages, don’t you think you should?”

It’s an urge to go and work on a personal coding project, where I can experiment with stuff I find interesting, and can set my own pace and goals. This has become especially important recently, since I don’t even do any coding for my job anymore. I’m moving over to a managerial/everyman/”guy who knows stuff” role, so it’s not just a creative urge, it’s an urge to keep my “skills” sharp.

*Disclaimer: I’m not an amazing programmer, but I enjoy problem solving and I like to think I’ve built some cool stuff*

This project is going to be partly a rehash of my Computer Vision project at University – at least, it will use some of the same algorithms and technologies, and a large part of it will be about extracting a “useful” feature set from a set of images. What I do with that feature set is going to be very different though, and things I considered to be “useful” in my old project will probably be very different.

Here’s the abstract from my dissertation:

Reconstruction of 3D Models From 2D Images.
This project is about trying to recognise “interesting” features or points in two dimensional images, and then attempting to find corresponding features in a different image of the same scene. The coordinates of these points in different images can be used to generate 3D coordinates using Ullman’s theorem. This paper explores a variety of options for detecting features, and several different methods for matching these features over different images. The variables used to generate and extract features are thoroughly tested, in order to find the best settings for the system. The results are then compared to the initial project specification to see if the system can operate as needed.

Now, this is not what I intend to do here at all. I just posted it anyway.


Ink, Project goal:

Write a program that will take video, and turn it into what looks like a series of hand-drawn ink sketches.

A still might look like this:

An ink drawing I found on the internet somewhere.

The animation should have enough flaws in it to make it seem hand-drawn, and it will probably have a much lower frame rate than the original video.

Part of my inspiration for this is Raymond Briggs’s “The Snowman”.


I love the way the crosshatching works on this. The animation is low frame rate, the background is mostly static, but the movements are complex and the shading is wonderful.


So I want to achieve that programmatically. You heard me. Any artist who reads this will probably tell me I’m removing all the soul from the animation, and they’d probably be right, but this is a project that I want to do.

Another inspiration is “A Scanner Darkly”, a film that I love for many, many reasons, not least because it’s adapted from a Philip K. Dick novel, and all adaptations of his work to film have been fantastic in their own ways. Even Total Recall is brilliant, but not for the same reasons Blade Runner is.

Well, maybe not.

But aside from the brilliant plot (which, to be honest, was mostly about following stoners around. And paranoia. And government surveillance. And pharmacological conspiracies. And insanity. And psychotic breaks), it was produced in a unique way.

It was filmed normally, then every frame was redrawn partly by hand and then animated the rest of the way. Visually, it’s stunning, but the animation is so realistic that it creates a bizarre disconnect in your head while you watch it, and in places you can forget it’s animation at all. It very much suits the subject matter of the film.


And then there’s this clip, which I’m including for no reason other than the fact that it’s funny. Sort of:


Your sins will be read to you ceaselessly, in shifts, throughout eternity. The list will never end. 

So I want to try and build something similar to the system they used here. (They called it “rotoshop”, and they never released the program). It won’t be so fully-featured (damn, there’s an enterprise-y word. And another one! Auugh, what’s happened to my vocabulary?), because I will probably run into problems and get bored or frustrated. Besides, I’m not trying to accomplish the same thing, but I suspect some of the methodology will be similar.

So, here it goes.

… And then 3 come along at once!

Gentlemen, start your engines!

It’s a race between Germany and Hong Kong!
The flight electronics for the Bormatec have now been ordered from HK-based stockists, Hobbyking.

  • 2x 4000 mAh, 14.8 volt, 4S1P (4 series, 1 parallel) Lithium-Polymer cells, for main power
  • An NTM Prop Drive Series 42-38 brushless motor, rated at 750kv*
  • A 70 Amp ESC, along with the servos required to operate the aircraft
  • Oh, and a LiPo charger. ‘Cos I was lacking one.

The combination of these, using one battery at a time (for better at-the-field testing times), should allow flight times of approximately half an hour. For propeller choice, I have gone for a scale-ish 3-bladed unit from Master Airscrew; 12 x 6 inch.

Running cost at this point is now c. £350, including the original Arduino electronics and sensor modules. Significantly less, it’s worth noting, than any comparable equipment you could buy… Let’s say a comfortable factor of 10. At the minimum.

And now the waiting begins…

* KV, in the sense of brushless RC motors, means RPM per volt (so max RPM here is 750 × 14.8 = 11,100).

Introducing… the Bormatec Vamp


After careful consideration, I’ve decided to invest in a Bormatec Vamp as a flying UAV testbed. It’s got a 1.8m wingspan, and an all-up maximum weight of 2.5kg; plenty of room for around a kilogram of payload. After doing a little math, I found it’d cost me about 3x as much as this to purchase the equipment to manufacture an airframe to the standards that I’d like.

At a cost of about £130 delivered to the UK from its German manufacturers, it’ll arrive as a kit of parts; assembly will be documented herein.


More pictures and information as soon as it arrives!

For more information on the Vamp, please refer to its manufacturer’s website, listed below- also the source of the included image.

UAV, Finally

Above are a few pictures of things as they stand at present.

This project, as previously stated, is largely an experiment to improve skills, and produce something tangible. The aim is to produce an autonomous aerial vehicle; capable of flying to a destination co-ordinate, performing a task, and returning to base. (Likely, taking a picture or dropping a tennis ball to wind up Iain’s dogs).

The Arduino control board was knocked together quickly, to be more robust than a proto-board with all the components plugged into it.

In essence, the small red stick visible on the board to the right of the picture is a 9-DOF (Degrees of Freedom) sensor stick, incorporating an accelerometer, a gyro, and a magnetometer. It communicates with the control board (the blue Arduino Pro Mini) via a two-wire protocol known as I2C; these in turn talk to a base computer using two XRF modules.

Left to its own devices, the little ‘Duino will at present talk to the accelerometer and, working with a little clever 3-D math, ascertain the direction of ‘down’. From this, it then outputs calculated ‘pitch’ and ‘roll’ values over a serial data bus, transmitted through the XRF daughterboard, up to a range of 2km, back to the PC via the second XRF, to the left of the picture. Note the little yellow and green LEDs; these flash to show data transmission and reception on each of the boards- though I have forgotten which is which… it’s in the schematic somewhere if it becomes an issue.
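The ‘clever 3-D math’ boils down to trigonometry on the gravity vector. As an illustration (a Python sketch, not the actual ‘Duino code; the axis and sign conventions here are an assumption):

```python
import math

def pitch_roll(ax, ay, az):
    """Estimate pitch and roll (in degrees) from a raw accelerometer
    reading, assuming the board is not accelerating (gravity only)."""
    # Pitch: rotation about the lateral axis, from x vs the y-z plane
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    # Roll: rotation about the longitudinal axis, from y vs z
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll
```

This only holds while gravity is the dominant acceleration, which is exactly why the in-flight G-forces are a worry.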

It’s yet to be seen how well this responds to the G-forces generated during flight- Some thought as to an appropriate solution is required. Ideas, on a postcard…

The magnetometer and gyro are currently unused. The magnetometer will eventually determine the direction of the board relative to magnetic north; ‘yaw’. It will also use the gyro to increase the accuracy of the calculated P, Y, R angles.

I have also been working on a fuselage mould. Having discovered that none of the filler types I have access to will adhere reliably to expanded polystyrene, I have decided to change the construction method for the fuselage of this drone; likely at this point I will make use of vacuum moulding of ABS plastic, or a moulding made of fibreglass. Wings are to be made of extruded polystyrene- think foam fast food containers, but denser. A CNC machine will be made to produce wing sections.


That’s all for now; More as soon as my student loan allows!

(Please ignore the deliberate PCB error of filed-down LEDs; I made the footprint wrong, and was too impatient to wait for more to be cut!)

Storytelling in Spec Ops: The Line

Warning: This post contains spoilers for Spec Ops: The Line

In most games, the designers have done all they can to try to disguise the rails. Rails, in this case, being a metaphor for linear storytelling. Linear storytelling is not inherently bad, but often seems that way when “you”, or more accurately the character you control, are forced into a decision that the player finds idiotic. This breaks immersion.

Good examples of rails can be seen during the Half-Life series, where there are few points where you might feel like you’ve been forced into making a stupid decision. (Well – maybe Gordon didn’t want to jump blindly into a teleporter and go to the hostile alien world of Xen. But he did anyway… because he was told to).
Every step of your journey is utterly predetermined, but often this goes unnoticed or seems like emergent behaviour. This makes it all the more jarring when you are forced to jump into a prisoner transport pod that immobilizes you, and whose direction you can’t control.

Hop in! It’ll take you to a fun place filled with lightning!

A bad example is Mass Effect 2, where you don’t ever get the option to tell Cerberus to go stick their idiocy where it hurts, but instead you bumble along following the orders of a guy who you have every reason to distrust and hate.

You can’t change anything. Whatever you choose will lead to the next fight scene or set piece. Mass Effect has the worst kind of railroading, because it offers you some choices about who lives, or who you shag, but you can’t make any choice about how your character behaves in-story. Image stolen from 3 panel Soul

Spec Ops: The Line works differently. As already mentioned, most games do their best to present you with the illusion of choice; they try to disguise the rails. Spec Ops instead gives you the illusion of having no choice, and disguises your choices. The player thinks they are on a rail, but there are many places where this can be ignored.

The one that stood out for me was the point in the game that Lugo was hanged by angry locals (angry doesn’t really do their state of mind justice – the only remaining drinking water in Dubai has been destroyed, and it is all your fault).

Lugo is down, and Walker does his best to revive him. Useless. He’s dead. Adams is surrounded by the mob, who are shouting threats, throwing rocks, getting closer and closer. Adams wants vengeance. There’s no justice; he wants to open fire and gun down the civilians. He’s begging you to make a decision and I start to worry that he’ll just start shooting if I don’t do something.

At this point I was not thinking in terms of “Shooting civilians might be a fail state”, I wasn’t worrying about the game any more. The only thing going through my head was I WILL NOT DO THIS AGAIN. I fired in the air, hoping to drive them away. It worked, and Adams and Walker could continue.

I didn’t think anything of it until I spoke to a friend who finished the game after me.
He said that he’d had to put the game down at this point, he found it too depressing that the game forced you to gun down yet more civilians.

This works heavily in the game’s favour. By disguising the fact that you ever had a choice at all, you can do what feels natural, without ever having to break immersion.

Another example of this is how you deal with the “test” that Konrad sets up. This one more obviously had a choice involved, but even here you can go off the rails – (Konrad’s rails, anyway. Konrad is the GM at this point, in a game-within-a-game).

Konrad asks you to choose between two prisoners.

The man on the right is a civilian, who stole water. A capital offence, as Konrad remarks. The man on the left is one of Konrad’s own men, who was sent to bring in the civilian for punishment (we all know that soldiers are extremely good at civilian crowd control). During the arrest he killed five more people: the man’s family.

I shot the sheriff soldier (but I did not shoot the deputy)

Later I found out that there were ways around this – you could have attacked the snipers instead, or shot the ropes (triggering an attack by the snipers).

When I first got to this bit, I assumed that it was just the start of a long line of “tests” that Konrad would dream up, to try and persuade you that he was right, and it was the only way to ensure the survival of as many people as possible. I was surprised then to find that this was it, really; Konrad didn’t have any more moralising to do (well, sort of. I’ll get to that in a separate post).

We’re going to build a UAV

My friend and I want to build an unmanned aerial vehicle. Unlike the predator drones, it will not be armed with missiles.

This is an educational project for us; I’ve never done any proper “bare metal” coding before, or even very much hardware-based stuff. Matt, however, is great at the hands-on, practical side, but his coding is lacking.

Between us, we should have enough knowledge to fuck up in new and interesting ways.

I’m going to let Matt explain what kit we’ve got, what each bit of it is for, and how he’s fitting it together. I’m going to cover the coding side, and the maths, and how we’re going to get it to fly itself. Or Matt will probably cover that last part; he is doing an engineering degree. I’ll be taking the theory he gives me, and implementing it :).

To start with, we’ll be doing some “basic” stabilisation stuff. We’ll get the plane to try and stay as level as possible, automatically correcting itself except when receiving input from the remote.

Oh – and we don’t have a plane yet.