A new beginning.

When I tried to start writing posts on Raytracing, the idea was to document my steps. The problem was that, by the time I started, I’d already finished the project.

When I say “finished the project”, what I actually mean is “I got too bored, stuck, or distracted to continue.”

I hadn’t planned the code out very well, and as I added new things I had to break old things, until it turned into a stack of kludges built on kludges. It was kludges all the way down. It may as well have been held together with gaffa tape.

So I’m going to start again, plan it a bit better this time, and switch languages. And I’ll post on my progress, because even though no one is reading this, I can still refer back to it.

To mark the occasion, here are some snapshots of what I achieved. They’re not all very good, and some of them are downright weird.

Early code, with bad lighting and only one type of object. Orthographic camera.

Can’t remember what the hell happened here. Early plane intersection code, I think.

A cock up, but an attractive one, as they go.

Plane intersection code still making things difficult.

Yeah. Not sure about this. Lighting problem? Plane seems to work though.

Plane intersection working, with some randomisation of the “up” vector.

Showing only the Phong exponent of the scene.

Polygon intersection test. Weird bug if two polygons lie on the same plane.

Fixed bug. Can’t actually remember what it was.

With polygon intersection tests working (or so I thought), I turned my attention to loading an OBJ.

After a long struggle with weird bugs, ta-fucking-da. Also, transparency.

Rabbit model without any normals calculated.

Also, persistent shadow bugs. These eventually turned out to be the plane intersection code playing up. Again.

No ordinary rabbit! Normals calculated.

Glass onion.

Wizards!

So there we have it, a long line of bugs, and some pretty pictures.

The next version will have all kinds of cool optimisation! (I hope.)

What I want to achieve:

  • Faster OBJ loading
  • A BVH tree for scene subdivision. Should dramatically speed up rendering.
  • Distributed ray tracing: soft shadows, depth of field, etc.
  • An easier-to-steer camera, and possibly some animation, to produce videos. Not in real time.
  • Photon mapping. This can be bolted on top, I think.

Raytracing Part 1: First Steps

Ray tracing theory is simple, but I’ll explain it anyway. Ray tracing is a generic technique that can be applied to a lot of different things. I’m applying it to one thing: graphics.
(If you want a better overview than this, look elsewhere. Look at Wikipedia. Ask someone.)
Simulating light is far too complex to be done “properly”. Think about the sheer number of photons output by a light source (the sun), and the number of bounces each photon goes through, depositing energy each time it does, before finally arriving at your eye or camera, and you’ll see why it’s not possible. At least, not at the moment. If instead you ignore the light being emitted from the sun and trace the photons backwards, from your eye out into the world, you are faced with a much easier challenge.
**Some notation: I’ll be using the words “eye” and “camera” interchangeably. When I use the word “scene” I’m referring to the environment that the rays interact with.**
If you want to render an image of 800 by 600 pixels, that’s 800 × 600 = 480,000 rays you have to trace. The basic loop (sketched in code after this list) is:
  1. For each pixel, shoot a ray into the scene.
  2. Test the ray for intersections with every object in the scene.
  3. Find the closest intersection.
  4. Find the colour of that pixel.
  5. Rinse and repeat, until each pixel has been done.
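Here’s a minimal sketch of that loop in C++ (my choice for these posts, not necessarily what the original project used). All the names (Vec3, Ray, trace and so on) are placeholders I’ve made up, not from any particular library:

```cpp
#include <vector>

// Placeholder types for this sketch; not from any particular library.
struct Vec3   { double x, y, z; };
struct Ray    { Vec3 origin, direction; };
struct Colour { double r, g, b; };

// Orthographic camera: every ray points straight down -z, and the
// origin is spread across the image plane.
Ray makeCameraRay(int px, int py, int width, int height) {
    Vec3 origin    = { px - width / 2.0, py - height / 2.0, 0.0 };
    Vec3 direction = { 0.0, 0.0, -1.0 };   // already a unit vector
    return { origin, direction };
}

// Stub for steps 2-4: intersect with every object, find the closest
// hit, work out the colour. Returns black for now.
Colour trace(const Ray&) { return { 0.0, 0.0, 0.0 }; }

std::vector<Colour> render(int width, int height) {
    std::vector<Colour> image(width * height);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            Ray ray = makeCameraRay(x, y, width, height);  // step 1
            image[y * width + x] = trace(ray);             // steps 2-4
        }
    return image;
}
```

Everything interesting hides inside trace(), which is what the rest of this post is about.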
Object Intersections.
The simplest kind of object to test a ray against is a sphere. This is nice, because real-time applications running on OpenGL or DirectX fail miserably at perfect spheres: they have to approximate them as flat-sided meshes, with lots and lots of sides.
Ah yes – some maths.
A ray can be defined as two vectors: an origin and a direction vector. The direction vector should be a unit vector.
ray = { origin, direction }
A sphere can be defined as position vector (which describes the pos of the centre) and a radius. That’s all:
sphere = { centre, radius }
A sphere with centre (x_0, y_0, z_0) and radius r is the locus of all points (x, y, z) such that
(x - x_0)^2 + (y - y_0)^2 + (z - z_0)^2 = r^2
Finding the Intersection:
A set of points on a ray can be defined by:
ray(t) = origin + direction * t
Where t is the distance from the ray origin.
The ray equation can be substituted into the sphere equation, and solved to find t.
This takes the form of a quadratic equation. If you don’t remember quadratic equations from whatever secondary school maths lesson covered them, they take this form:
ax^2 + bx + c = 0
So, substituting the ray in, component by component:
X = X(origin) + X(direction) * t
Y = Y(origin) + Y(direction) * t
Z = Z(origin) + Z(direction) * t
Insert X, Y and Z into the sphere equation:
(X - x_0)^2 + (Y - y_0)^2 + (Z - z_0)^2 = r^2
Expand this and collect powers of t, and you get a quadratic in t. Writing o for the ray origin, d for its (unit) direction and C for the sphere centre, the coefficients are:
a = d · d (which is 1 for a unit direction)
b = 2 d · (o - C)
c = (o - C) · (o - C) - r^2
Solve the quadratic to find t: if the discriminant b^2 - 4ac is negative, the ray misses the sphere; otherwise the smallest positive root is the nearest intersection.
 
There. That was easy.
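In code, the whole test looks something like this. Again, this is a sketch in C++ with made-up names, not a lift from the original project:

```cpp
#include <cmath>
#include <optional>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return { x - o.x, y - o.y, z - o.z }; }
    double dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

struct Ray    { Vec3 origin, direction; };  // direction assumed unit length
struct Sphere { Vec3 centre; double radius; };

// Returns the distance t to the nearest intersection in front of the
// ray origin, or nothing if the ray misses the sphere.
std::optional<double> intersect(const Ray& ray, const Sphere& sphere) {
    Vec3 oc = ray.origin - sphere.centre;
    double a = ray.direction.dot(ray.direction);   // 1 for a unit direction
    double b = 2.0 * ray.direction.dot(oc);
    double c = oc.dot(oc) - sphere.radius * sphere.radius;

    double discriminant = b * b - 4.0 * a * c;
    if (discriminant < 0.0) return std::nullopt;   // ray misses entirely

    double sqrtD = std::sqrt(discriminant);
    double t = (-b - sqrtD) / (2.0 * a);           // nearer root first
    if (t < 0.0) t = (-b + sqrtD) / (2.0 * a);     // origin inside the sphere?
    if (t < 0.0) return std::nullopt;              // sphere is behind the ray
    return t;
}
```

Note the two roots: the smaller is where the ray enters the sphere, the larger where it leaves, which starts to matter once the camera can sit inside objects.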
So, the first thing I did was to define a scene composed of a single sphere, no lights, and shoot rays at it through an orthographic camera. All this accomplished was a sense of self-satisfaction, as I watched some text in my console proving that some pixels came back as “hit”, and some came back as “miss”. At this point I had no way to show the results on screen.
I could show you the results of this… but why?
The second thing I did was start hunting around for a way to display my output. Image libraries were out: I thought they might be too complex to learn to use. I simply wanted to define a window of the correct size and call something like “setPixel”. Eventually, after much google-fu, I gave up and used the Windows API instead. This felt like a small betrayal (I’m mostly a Linux user for coding). It did, however, allow me to produce this:
Ah. A black screen.
After some more fiddling, this happened:
Hurrah! I had ray-traced my first scene.
Admittedly, it has no lighting, no shading, no support for more than a single sphere, no support for anything other than spheres… The list of things it doesn’t do goes on.
It DOES get better.
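(For anyone wondering what “used the Windows API” actually meant, it boiled down to something like the sketch below. This is a reconstruction for the post, not my original code, and blitImage/renderPixel are names I’ve just made up.)

```cpp
#include <windows.h>

// Stub: trace one ray for this pixel and return its colour. Black for now.
COLORREF renderPixel(int /*x*/, int /*y*/) { return RGB(0, 0, 0); }

// Paint each traced pixel straight onto a window with GDI's SetPixel.
// Horribly slow, but it is exactly the "setPixel" interface I wanted.
void blitImage(HWND window, int width, int height) {
    HDC dc = GetDC(window);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            SetPixel(dc, x, y, renderPixel(x, y));
    ReleaseDC(window, dc);
}
```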

Ray Tracer Part 0: Introduction

Everyone and their dog, and their dog’s mum, has built a raytracer. This is not surprising: I mean, seriously, it’s not that hard, and it looks REALLY SHINY. So, despite the fact that everyone has already done this, and in many cases done it better, now it’s MY turn.

I started this project in my final year at university, during an Advanced Graphics module. I did this because the module was theory, and theory only. Now, I like building stuff, but not everyone else does, so I was probably alone in wishing for more brain-melting coursework. Also, the theory was hard. Very hard.

So, in order to understand it better, I built it. This may not have been wise, and there were a number of decidedly non-wise decisions in this project (I’ll get to those).

For your enjoyment, and my own, I will be posting here, as regularly as I can manage, until this blog has caught up with reality.

-Anorak