Raytracer update

When I started this blog, it was intended to be a way to write up what I’m doing, and to easily share some of the cool stuff with my friends. I don’t like just sending images and builds over Skype file transfer, and I like having a record. Unfortunately, my laziness gets in the way a lot.

I’ve made massive amounts of progress with the Ray Tracer, but haven’t put up any images here. Here’s one:

Stanford bunny and dragon. Render time: 60 seconds.

This image is the payoff. I’ve implemented reflections, OBJ loading, KD trees, and diffuse lighting. There’s still a lot to do, though.

The KD tree is what makes all the difference. The dragon is a very, very expensive model, containing 871,414 polygons.

Without any kind of acceleration structure, it takes well over 24 hours to render. (I tried, as a test, to see exactly how long, but gave up after a day.)
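For anyone who hasn’t met them: a KD tree recursively splits space with axis-aligned planes, so a ray only tests the triangles in the few leaves it actually passes through, rather than all 871,414 of them. The node itself is tiny. A rough sketch in Java (Triangle here is a hypothetical stand-in for whatever triangle type the renderer uses):

    import java.util.List;

    // Rough sketch of a KD-tree node. Triangle is a hypothetical
    // stand-in for the renderer's own triangle type.
    class KDNode {
        int axis;             // splitting axis: 0 = x, 1 = y, 2 = z
        double split;         // position of the splitting plane on that axis
        KDNode left, right;   // children; both null in a leaf
        List<Triangle> tris;  // triangles, stored only in leaves

        boolean isLeaf() { return left == null && right == null; }
    }

Traversal visits the near child first, and only bothers with the far child if the ray actually crosses the splitting plane, which is where the speedup comes from.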

Unfortunately, I won’t be able to get much more speed out of Java.

I ported the Raytracer to Java in the first place because I needed to do a complete rebuild, and I’m better at rapid prototyping in Java than I am in C++. It would have been a better learning exercise to do it in C++, but I’m frequently impatient, and I wanted to see cool stuff without having to fumble too much with the language.

Now, I’m probably going to port it back at some point, and eventually use OpenCL to get my GPU to do more of the hard work, but in the meantime there are some more things I’d like to try out.

Partial list:

  • Path tracing.
  • Distributed ray tracing.
  • Fresnel equations (yeah, yeah, this should already have been done; there’s a sketch of the usual approximation after this list.)
  • Programmable camera, so I can do flyby videos.
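On the Fresnel point: the usual cheap stand-in for the full equations is Schlick’s approximation. This isn’t code from my tracer (obviously, since I haven’t written it yet), just a minimal sketch in Java of the general shape:

    // Schlick's approximation to the Fresnel reflectance.
    // cosTheta: cosine of the angle between the incoming ray and the surface normal.
    // n1, n2: refractive indices on either side of the surface.
    static double schlick(double cosTheta, double n1, double n2) {
        double r0 = (n1 - n2) / (n1 + n2);
        r0 *= r0;
        return r0 + (1 - r0) * Math.pow(1 - cosTheta, 5);
    }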

I’ve already started work on the Path Tracing, with mixed results.

Perlin Noise

Shamus Young was messing around with Perlin Noise, which turned out to be normal noise. I dug around in an old VM to find this image:

Blue and white marble effect. Sort of.

The massive sphere in the background is procedurally textured with the Perlin noise function described in this paper: http://mrl.nyu.edu/~perlin/paper445.pdf

It’s not the best example I ever produced, but it’s the only build of my old raytracer that I could find. Shame on me for poor version control.

The source code for it, in Java, is on Ken Perlin’s website here: http://mrl.nyu.edu/~perlin/noise/

To be honest, it’s been a very long time since I’ve looked at that stage of my ray tracer. I ditched the procedural generation of textures because it took way too long, but the basic idea is that you call the “noise” function, which takes X, Y, Z coords as arguments and returns the noise value at that point, derived from the initial permutation list. Or something. These days I comment my own code a lot better, precisely because I can’t remember how this stuff works anymore.

When you’ve got the returned noise value, you can do cool shit with it. I found a couple of functions around that could turn it into a marble-style texture, but you can do a lot more with it.
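As a rough illustration, here’s the sort of thing I mean, built on the static ImprovedNoise.noise(x, y, z) method from Ken Perlin’s reference code linked above. The octave count and scale factors are arbitrary knobs, not anything recovered from my original build:

    // Turbulence: sum several octaves of |noise|, each at double the
    // frequency and half the amplitude of the last.
    static double turbulence(double x, double y, double z, int octaves) {
        double sum = 0, freq = 1, amp = 1;
        for (int i = 0; i < octaves; i++) {
            sum += Math.abs(ImprovedNoise.noise(x * freq, y * freq, z * freq)) * amp;
            freq *= 2;
            amp *= 0.5;
        }
        return sum;
    }

    // Classic marble: a sine wave along x, perturbed by turbulence.
    // Returns a value in [0, 1] for blending between two colours.
    static double marble(double x, double y, double z) {
        return 0.5 * (1 + Math.sin(x * 5 + 4 * turbulence(x, y, z, 4)));
    }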

A new beginning.

When I tried to start writing posts on raytracing, the idea was to document my steps. The problem was that, when I started, I’d already finished the project.

When I say finished the project, what I actually mean is “I got too bored, stuck, or distracted to continue.”

I hadn’t planned the code out very well, and as I added new things I had to break old things, and it turned into a stack of kludges built on kludges. It was kludges all the way down. It may as well have been held together with gaffer tape.

So I’m going to start again, plan it a bit better this time, and switch languages. And I’ll post on my progress, because even though no one is reading this, I can still refer back to it.

In memory of this occasion, here are some snapshots of what I achieved. They’re not all very good, and some of them are downright weird.

Early code, with bad lighting and only one type of object. Orthographic camera.

Can’t remember what the hell happened here. Early plane intersection code, I think.

A cock up, but an attractive one, as they go.

Plane intersection code still making things difficult.

Yeah. Not sure about this. Lighting problem? Plane seems to work though.

Plane intersection working, with some randomisation of the “up” vector.

Showing only the Phong exponent of the scene.

Polygon intersection test. Weird bug if two polygons lie on the same plane.

Fixed bug. Can’t actually remember what it was.

With polygon intersection tests working (or so I thought), I turned my attention to loading an OBJ.

After a long struggle with weird bugs, ta-fucking-da. Also, transparency.

Rabbit model without any normals calculated.

Also, persistent shadow bugs. These eventually turned out to be the plane intersection code playing up. Again.

No ordinary rabbit! Normals calculated.

Glass onion.

Wizards!

So there we have it, a long line of bugs, and some pretty pictures.

The next version will have all kinds of cool optimisation! (I hope.)

What I want to achieve:

  • Faster OBJ loading
  • BVH for scene subdivision. Should dramatically speed up rendering.
  • Distributed ray tracing: soft shadows, depth of field, etc.
  • An easier-to-steer camera, and possibly some animation, to produce videos. Not in real time.
  • Photon mapping. This can be bolted on top, I think.

Raytracing Part 1: First Steps

Ray tracing theory is simple, but I’ll explain it anyway. Ray tracing is a generic technique that can be applied to a lot of different things. I’m applying it to one thing: graphics.
(If you want a better overview than this, look elsewhere. Look at Wikipedia. Ask someone.)
Simulating light is far too complex to be done “properly”. If you think about the sheer number of photons output by a light source (the sun), and the number of bounces each photon goes through, depositing energy each time, before finally arriving at your eye or camera, you’ll see why it’s not possible. At least, not at the moment. If, instead, you ignore light being emitted from the sun and trace the photons backwards, from your eye into the world, you are faced with a much easier challenge.
Some notation: I’ll be using the words eye and camera interchangeably. When I use the word “scene”, I’m referring to the environment that the rays interact with.
If you know you wish to render an image of 800 by 600 pixels, that’s 800 × 600 = 480,000 rays you have to trace:
  1. For each pixel, shoot a ray into the scene.
  2. Test the ray for intersections with every object in the scene.
  3. Find the closest intersection.
  4. Find the colour of that pixel.
  5. Rinse and repeat, until each pixel has been done.
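In code, that loop is about as simple as it sounds. A bare-bones sketch in Java, where Scene, Camera, Ray and Hit are hypothetical stand-ins for the real classes and shade() does all the hand-waving:

    import java.awt.image.BufferedImage;

    // Bare-bones version of the five steps above.
    BufferedImage render(Scene scene, Camera camera, int width, int height) {
        BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                Ray ray = camera.rayThroughPixel(x, y);         // step 1
                Hit hit = scene.closestIntersection(ray);       // steps 2 and 3
                int rgb = (hit != null) ? scene.shade(hit) : 0; // step 4: black on a miss
                img.setRGB(x, y, rgb);
            }
        }
        return img;                                             // step 5: every pixel done
    }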
Object Intersections.
The simplest kind of object to test a ray against is a sphere. This is nice, because real-time applications running in OpenGL or DirectX fail miserably at perfect spheres: they have to be rendered as flat-sided objects, with lots and lots of sides.
Ah yes – some maths.
A ray can be defined as two vectors: an origin and a direction vector. The direction vector should be a unit vector.
ray = origin + direction
A sphere can be defined as position vector (which describes the pos of the centre) and a radius. That’s all:
sphere = centre + radius
A sphere with centre (x0, y0, z0) and radius r is the locus of all points (x, y, z) such that
(x - x_0)^2 + (y - y_0)^2 + (z - z_0)^2 = r^2
Finding the Intersection:
A set of points on a ray can be defined by:
Ray(t) = ray(origin) + ray(direction) * t
Where t is the distance from the ray origin.
The ray equation can be substituted into the sphere equation, and solved to find t.
This takes the form of a quadratic equation. If you don’t remember quadratic equations from your secondary school maths lessons, they take this form:
ax^2 + bx + c = 0
So, substitution for ray:
X = X(ray(origin)) + X(ray(direction)) * t
Y = Y(ray(origin)) + Y(ray(direction)) * t
Z = Z(ray(origin)) + Z(ray(direction)) * t
Insert X, Y, Z into sphere equation:
(X - x_0)^2 + (Y - y_0)^2 + (Z - z_0)^2 = r^2
Solve this quadratic to find t.
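Spelling the coefficients out, with origin o, unit direction d and sphere centre C:

a = d · d (which is 1 for a unit direction)
b = 2 d · (o - C)
c = (o - C) · (o - C) - r^2

If the discriminant b^2 - 4ac is negative, the ray misses the sphere entirely. A sketch in Java, where Vec3 is a tiny helper defined just for this example:

    // Vec3 is a minimal helper, defined just for this sketch.
    class Vec3 {
        final double x, y, z;
        Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
        Vec3 sub(Vec3 o) { return new Vec3(x - o.x, y - o.y, z - o.z); }
        double dot(Vec3 o) { return x * o.x + y * o.y + z * o.z; }
    }

    // Returns the nearest positive t along the ray, or -1 for a miss.
    static double intersectSphere(Vec3 origin, Vec3 dir, Vec3 centre, double radius) {
        Vec3 oc = origin.sub(centre);            // o - C
        double a = dir.dot(dir);                 // 1 if dir is a unit vector
        double b = 2 * dir.dot(oc);
        double c = oc.dot(oc) - radius * radius;
        double disc = b * b - 4 * a * c;
        if (disc < 0) return -1;                 // no real roots: the ray misses
        double s = Math.sqrt(disc);
        double t0 = (-b - s) / (2 * a);          // nearer root: the visible surface
        if (t0 > 0) return t0;
        double t1 = (-b + s) / (2 * a);          // farther root: origin is inside the sphere
        return t1 > 0 ? t1 : -1;                 // otherwise the sphere is behind the ray
    }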

There. That was easy.
So, the first thing I did was to define a scene composed of a single sphere, no lights, and shoot rays at it through an orthographic camera. All this accomplished was a sense of self-satisfaction, as I watched some text in my console proving that some pixels came back as “hit” and some came back “miss”. At this point I had no way to show the results on screen.
I could show you the results of this… but why?
The second thing I did was start hunting for a way to display my output. Image libraries were out: I thought they might be too complex to learn to use. I simply wanted to define a window of the correct size and use something like “setPixel”. Eventually, after much google-fu, I gave up and used the Windows API instead. This felt like a small betrayal (I’m mostly a Linux user for coding). It did, however, allow me to produce this:
Ah. A black screen.
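As an aside: in the Java rebuild this whole problem evaporates, because BufferedImage plus ImageIO gives exactly the setPixel-style workflow I was hunting for. A tiny, self-contained example (the size and filename are arbitrary):

    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;

    public class PixelDemo {
        public static void main(String[] args) throws Exception {
            BufferedImage img = new BufferedImage(800, 600, BufferedImage.TYPE_INT_RGB);
            for (int y = 0; y < 600; y++)
                for (int x = 0; x < 800; x++)
                    img.setRGB(x, y, 0xFF0000); // packed 0xRRGGBB: solid red
            ImageIO.write(img, "png", new File("out.png"));
        }
    }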
After some more fiddling, this happened:
Hurrah! I had ray-traced my first scene.
Admittedly, it has no lighting, no shading, no support for more than a single sphere, no support for anything other than spheres… The list of things it doesn’t do goes on.
It DOES get better.

Ray Tracer Part 0: Introduction

Everyone and their dog, and their dog’s mum, has built a raytracer. This is not surprising: I mean, seriously, it’s not that hard, and it looks REALLY SHINY. So, despite the fact that everyone has already done this, and in many cases done it better, now it’s MY turn.

I started this project in my final year at University, during an Advanced Graphics module. I did this because the module was theory, and theory only. Now, I like building stuff, but not everyone else does, so I was probably alone in wishing for more brain-melting coursework. Also, the theory was hard. Very hard.

So, in order to understand it better, I built it. This may not have been wise, and there were a number of decidedly non-wise decisions in this project (I’ll get to those).

For your enjoyment, and my own, I will be posting here, as regularly as I can manage, until this blog has caught up with me in reality.

-Anorak