Author Archives: rusty

About rusty

Inquisitive and paints in code. I love rainy, cold weather and dark nights.

Muse3d – My (not so) small deferred lighting 3d engine

Around about Christmas of 2014, I was feeling a little restless. Working on the book in my spare time had meant that I hadn’t done any real coding, bar physics demos, for about a year. I was feeling a bit burned out writing the book and needed a small distraction, something of a programming challenge to help me relax.

I came up with the idea of writing a quick and dirty 3d engine in straight C that implemented deferred lighting. It would be something that I could use to quickly test out lots of little ideas here and there, maybe even prototype a game idea or two. One of the problems was that, although I have written various bits and bobs to do with rendering in various games, I had never really had a “proper go” at an entire rendering engine, especially one using modern multi-pass rendering techniques.

I chose deferred lighting over deferred shading as I wanted to easily support a varying number of material types, as well as something that would scale down well to hardware where frame buffer/texture bandwidth may be limited (for example, my MacBook Pro with its Intel GPU, or mobile platforms).

The deferred lighting implementation consists of three main passes (sketched in code after the list):-

  1. Pre-light pass – Geometry is rendered, outputting only the depth and surface normal information.
  2. Light pass – The depth and normal information from the pre-light pass is used to calculate lighting information. This pass consists of two separate stages: the ambient stage, which renders a full-screen quad for a global ambient light source, and the actual finite light sources, which are rendered as 3d geometry. The resulting diffuse and specular lighting values are rendered into the light accumulation buffer. Additive alpha blending is used so that pixels affected by more than one light source are lit accordingly.
  3. Material pass – The geometry is rendered again, with the difference that the depth buffer is not written to. The depth test is also different from the pre-light pass: the “depth equals” test is used so that it passes only on pixels that match the values stored in the depth buffer. Each pixel that passes the depth test has its material evaluated to determine the diffuse and specular components, which are combined with the light accumulation buffer to determine how bright the pixel is.
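To make the flow concrete, here’s a minimal sketch of how the three passes might be sequenced in OpenGL. The helper functions and enums (BindFramebuffer, DrawGeometry and so on) are made-up stand-ins for this example, not Muse3d’s actual API; the GL state calls are standard OpenGL.

```cpp
#include <GL/gl.h>

// Illustrative stand-ins -- not the engine's actual API.
void BindFramebuffer(int target);
void DrawGeometry(int pass);
void DrawAmbientQuad();
void DrawLightVolumes();

enum { PASS_PRELIGHT, PASS_MATERIAL };
enum { TARGET_NORMAL_DEPTH, TARGET_LIGHT_ACCUM, TARGET_COLOUR };

void RenderFrame()
{
    // 1. Pre-light pass: write depth and camera-space normals only.
    BindFramebuffer(TARGET_NORMAL_DEPTH);
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);
    glDepthMask(GL_TRUE);
    DrawGeometry(PASS_PRELIGHT);

    // 2. Light pass: accumulate lighting additively so pixels touched
    //    by several lights sum correctly in the accumulation buffer.
    BindFramebuffer(TARGET_LIGHT_ACCUM);
    glDepthMask(GL_FALSE);          // depth is read but never written
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);    // additive blending
    DrawAmbientQuad();              // global ambient as a full-screen quad
    DrawLightVolumes();             // finite lights as 3d geometry

    // 3. Material pass: re-render geometry, shading only the pixels
    //    whose depth matches the pre-light pass.
    BindFramebuffer(TARGET_COLOUR);
    glDisable(GL_BLEND);
    glDepthFunc(GL_EQUAL);          // pass only where depth matches pass 1
    DrawGeometry(PASS_MATERIAL);
}
```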
[Image: muses_targets]

Test render showing the various render targets generated and used during rendering. The targets are: light accumulation buffer (top-left), final colour buffer (top-right), camera-space normal buffer (bottom-left), and depth buffer (bottom-right).

And so started my “Muse3d” side-project. Progress was really quick and a lot of fun, too! I had forgotten how much fun working in good ol’ C was, and how much closer “to the metal” it felt. But as much fun as it was, working in C was also a limiting factor. Writing abstracted interfaces in straight C was becoming a pain, and the lack of templates and inheritance complicated matters somewhat. After a few days I decided to move the whole thing to C++ so I could use language features such as templates, encapsulation and inheritance.

After a week or so, I had something that was rendering models using deferred lighting. It was far from complete, but it had a lot of “under the hood” features, such as:-

  • Custom memory allocation – I wrote a nice little memory manager that pre-allocated chunks of memory from the underlying OS and used TLSF to manage the heap within each chunk. I did this for no real reason other than to help me track memory usage at a later stage.
  • Shader caching – The engine uses one big “uber-shader” for all of the rendering. The shaders are compiled at run-time as needed for each model/material/pass combination. However, these shader variations are only compiled once and cached for re-use.
  • Render batching – To avoid shader and material switches happening too often, draw calls are batched by relevance, in the following order (see the sketch after this list):
    1. Pass
    2. Shader
    3. Material
  • Auto insertion into relevant render passes – Not such a big deal, but how models are rendered by each pass is a completely opaque process. I guess this is a standard thing to do for any modern multi-pass renderer, but it was amazing to see how one call to render a model instance would then feed into all of the passes and batching.
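As a concrete illustration of that batching order, here’s a minimal sketch of the kind of sort key such a scheme might use. The field names and bit widths are assumptions for the example, not Muse3d’s actual layout.

```cpp
#include <cstdint>
#include <algorithm>
#include <vector>

struct DrawCall
{
    uint64_t sortKey;
    // ... vertex buffers, uniforms, etc.
};

// Pack pass, shader and material ids into one key so a single sort
// orders draws by pass first, then shader, then material.
uint64_t MakeSortKey(uint32_t pass, uint32_t shader, uint32_t material)
{
    return (uint64_t(pass)     << 48) |
           (uint64_t(shader)   << 24) |
           (uint64_t(material) <<  0);
}

void SortBatches(std::vector<DrawCall>& calls)
{
    std::sort(calls.begin(), calls.end(),
              [](const DrawCall& a, const DrawCall& b)
              { return a.sortKey < b.sortKey; });
}
```

Putting the pass in the most significant bits means one sort gives pass-major ordering, with shader and material switches minimised within each pass.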
[Image: muses_initial_render]

Full-size colour render target. The final result of the pre-light, light and material passes.

I left the project alone for a while as I concentrated on my book again. I’m still working furiously on the book, trying to get the damned thing finished, but now and again, when I’ve hit a block, working on Muse3d helps relax me a little.

One of the issues with the engine was that, in order to save time, I hadn’t abstracted the graphics code – the bit that actually calls platform-specific functions to tell the GPU to do things. I had just written some basic wrappers for things like buffers and textures, and the rest was raw OpenGL calls. However, this was going to present me with a few issues in the future and I decided to fix it. It took a couple of weeks of working on the odd bit here-and-there to achieve this.

I cursed my lack of foresight during this process. But at the same time, having a working 3d engine (even if it wasn’t feature complete) meant that I could do the work in small steps and verify that I hadn’t screwed anything up as I went along. Now the render code and graphics code are completely separate (a rough sketch of the split is below). It was a horrible task to have to do, but well worth the effort.
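For illustration, here’s the general shape of that split: the render code only ever talks to an abstract device interface, and the raw OpenGL calls live behind it. All names here are hypothetical, not Muse3d’s actual interfaces.

```cpp
#include <cstdint>
#include <cstddef>

struct BufferHandle  { uint32_t id; };
struct TextureHandle { uint32_t id; };

// The renderer sees only this interface; no GL types leak through it.
class GraphicsDevice
{
public:
    virtual ~GraphicsDevice() = default;
    virtual BufferHandle  CreateVertexBuffer(const void* data, std::size_t size) = 0;
    virtual TextureHandle CreateTexture2D(int width, int height, const void* pixels) = 0;
    virtual void Draw(BufferHandle vb, uint32_t firstVertex, uint32_t vertexCount) = 0;
};

// The platform-specific backend lives behind the interface.
class GLDevice : public GraphicsDevice
{
    // ... overrides implemented with glGenBuffers/glBufferData,
    //     glGenTextures/glTexImage2D, glDrawArrays, etc.
};
```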

There’s still a bit of work to be done, though. Although I have a working light pass, it doesn’t deal properly with some things, such as light sources occluded by objects (the occluding objects appear lit when they shouldn’t be). My “to do” list looks a little like this:-

  • Implement stenciling for light sources during the light pass so that only objects within light volumes are lit (currently 60% done)
  • Implement light volumes for:-
    • Directional lights
    • Spot lights
    • Area lights
  • Custom directional & area lights
  • Implement LOD models for spherical lights
  • Calculate positions of pixels in camera-space, rather than using a texture to hold them (see the sketch after this list). This means that the entire render system will only ever use a single colour render target.
  • Implement a shadow pass using stencil shadows
  • Add support for custom post-processing passes
  • Skinned mesh support using CPU and GPU skinning via Transform Feedback/Streamout
  • Implement support for uniform/constant blocks
  • Skeletal animation system
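On the camera-space position item: the usual trick is to reconstruct each pixel’s position from the depth buffer and the projection parameters, rather than storing positions in a texture. A minimal sketch of that math, assuming a standard symmetric OpenGL perspective projection (all names here are illustrative, not the engine’s):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Convert a non-linear depth-buffer value d in [0,1] back to a positive
// camera-space distance, inverting the standard OpenGL depth mapping.
float LinearizeDepth(float d, float nearZ, float farZ)
{
    float ndcZ = d * 2.0f - 1.0f;
    return (2.0f * nearZ * farZ) / (farZ + nearZ - ndcZ * (farZ - nearZ));
}

// (u, v) is the pixel's position in [0,1]^2. The ray through the pixel is
// scaled by the linearized depth to recover the camera-space position.
Vec3 ReconstructViewPos(float u, float v, float depthSample,
                        float nearZ, float farZ,
                        float tanHalfFovY, float aspect)
{
    float viewZ = LinearizeDepth(depthSample, nearZ, farZ);
    float ndcX  = u * 2.0f - 1.0f;
    float ndcY  = v * 2.0f - 1.0f;

    Vec3 p;
    p.x = ndcX * tanHalfFovY * aspect * viewZ;
    p.y = ndcY * tanHalfFovY * viewZ;
    p.z = -viewZ; // camera looks down -Z in OpenGL conventions
    return p;
}
```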

It’s a big list, but nothing that I can’t deal with given a bit of time. Right now my focus is on the book and once that’s done, assuming I’m not working on another book straight away, I can devote some time to the additional tasks. As it is, I’ll fit in work on certain features whenever I need an hour or two of respite from writing pages and pages of text.

How I ended up doing difficult things (and loved it)

A large part of my career has been spent doing things like physics, collision detection, animation and gameplay. One of the reasons I ended up doing these tasks is that, traditionally, these are the parts of a game that most people don’t want to touch. This is particularly true for physics and collision detection.

Graphics Programmers as far as the eye can see….

The real hot potato in games programming is graphics. Every bugger wants to work on this. Because everybody notices pretty graphics. It’s the first thing people see when there are screenshots of a game in a magazine or on a website. It’s what they see in a video. Everybody talks about the graphics. When the graphics are great, people keep on talking about them. They pick screenshots apart to see what sort of tech is being used for the rendering.

People love pretty pictures. Especially when they move.

Because of this, there are a lot of graphics programmers who are quite happy in their jobs. They don’t want to change because they have one of the most prized programming positions you can imagine when it comes to video games. Because these talented, bright, hard-working programmers like their jobs, it makes it very difficult to break into graphics programming when you’re the new guy, unless they retire, leave or are hit by a bus. There is simply no room for you. No vacant positions for you to fill – and when there are, it’s typically a newly hired, experienced graphics guy who slots in.

So as a young, inexperienced programmer, you fill in the cracks. You try to bide your time, waiting for a chance (assuming you want to do graphics) to get that slot. But for me, as much as I liked graphics, there were other things that held my interest. I love movement as much as I love pretty pictures. I love finding out why things move the way they do and having that in video games.

As such, when I was given the chance to work on animation, physics or collision, I jumped at it. There was very little competition for these roles, so breaking into them, even if you had no real experience, was relatively easy.

No sane person wants to work on physics…

So why is that? Why are these such specialised roles? Why are they not as appealing as graphics?

For a variety of reasons, these programming tasks tend to be quite difficult. There’s the mathematics involved (not that tough, if I’m honest) and then there’s the fact that physics, especially, is not really a linear system like graphics or audio, where you give it your inputs and get an output. With physics, especially when creating physics behaviours, you put in lots of inputs which can interact with each other in strange ways, and they then produce a number of outputs.

I feel I should add that the mathematics in graphics these days can be quite complex too. I’m not saying that graphics programmers are stupid or lacking in skill. Quite the opposite is the case. It’s just that things like physics require a different set of skills and a different mindset.

Then there’s the fact that, from my view, no bugger seems to notice when the physics is good or great. When the physics is bad, you’ll hear about it no end. But when it all works as it should, it typically gets a passing comment and then nobody really seems to give a damn. Not because physics and collision detection are unimportant; it’s just one of those things that people EXPECT to work well.

So there tends to be a lack of glamour or recognition when working on physics and collision detection. And I think if there’s one thing that all game developers love above all else, it’s the praise that the public gives their efforts. Why else would your typical game developer put up with the low pay, long hours, bad treatment, terrible food and lack of proper compensation? And why would they put up with all that if the public isn’t going to recognise or praise their efforts on a regular basis?

But that wasn’t so important to me. I just wanted to work on video games. And if I got to merge that desire with stuff I was interested in outside of games? So much the better. When the chance came along to do animation, physics and collision detection, I jumped at it.

But wait a minute…

There’s a double irony in all of this. The first is that I didn’t just flunk mathematics at school; I pretty much flunked school as a whole. I was a terrible student. So the idea of tackling something that relies so heavily on mathematics should have seemed absurd. But you see, I love real life. I love finding out about the world around me. I love finding out what makes machines work, and what makes the Universe tick. To me, rigid-body physics is part of that interest – adding realistic movement to games and giving me a firm base to start looking into other things, such as how cars or aircraft work.

The second is that I really started out doing graphics. Hardware acceleration on PCs was in its infancy before I got into the industry, but 3D games were coming into their own. As such, if you wanted to make a 3D game, you had to know all about rendering triangles at a very low level so you could draw whatever it was you wanted in your game. I spent a lot of time writing a very fast, efficient software rasteriser, which was part of my demo when I went for my first job interview.

I had spent many a long evening and weekend, after my day job working in construction, teaching myself about 3D graphics from as many books on the subject as I could get hold of. The end result was that not only was I really experienced at making a CPU do things fast and well versed in matrix and vector algebra; I also had a fairly good understanding of 3D graphics.

But even though I didn’t get to work on graphics for a long time, all of that experience prepared me for my career. I understood the value of doing difficult tasks and knew not to be afraid of them.

In short, I came to do physics, animation and collision detection because nobody would let me do graphics and I’m quite possibly insane.

The Cancellation of Glover 2

Glover 2 for the N64 was the first game that I ever worked on, after being hired by Interactive Studios Ltd (ISL) for my first job as a programmer. I started in January of 1999 and my main role was working on tools (3DS Max export/import, level editor) and being the whipping boy for the game. By the time October rolled around that year, the game had been canned. Our publisher, Hasbro Interactive, had ditched the project.

Why was this? Well… we have to look back to the first game, which was developed before I joined ISL. The reason for this is that, as far as we were told, Glover 2 had been canned because of Glover 1. Now this seems strange, because the first Glover had sold fairly well for a non-Nintendo N64 title. And it was on the back of those sales that Glover 2 had been given the go-ahead at Hasbro in the first place.

But Hasbro had messed up. They had screwed the pooch big time. You see, when ordering the carts for the first game, the standard production run was something like 150,000 units. And this is what the management at ISL had advised Hasbro to order – because the N64 wasn’t really faring that well compared to the PS1 at the time, and non-Nintendo titles tended to sell poorly. They thought that Glover was a good game in its own right, and that a moderate 3rd-party success would sell around 150,000 units. And that is exactly what happened. Hence the go-ahead for the sequel.

So Glover was a money maker for Hasbro, right? Right? Nuh-uh. As it happened, Nintendo had a special on N64 carts at the time the game was being scheduled for production. Some bright spark at Hasbro thought it would just be absolutely SUPER to order double the normal amount – so they put in an order for 300,000 units at a slightly reduced cost.

The problem was that none of the retailers wanted to take that stock off Hasbro’s hands. The game had been moderately successful, but the demand just wasn’t there. And thus Hasbro was left with 150,000 or so copies of Glover for the N64 that nobody wanted. That’s something like half a million dollars’ worth of stock that they couldn’t shift. And with Hasbro Interactive not being in the best of financial shape, Glover became a dirty word around the company as it became apparent, over the course of Glover 2’s development, that they were stuck with all those carts.

Of course, the blame was put on the game and the brand itself rather than on the idiot who ordered the extra 150,000 carts from Nintendo. And that, ladies and gentlemen, is why Glover 2 was cancelled.

I can still remember the day the cancellation was reported to us. Our external producer from Hasbro actually came to break the news to the Oliver twins and to us in person. Phil Oliver, a man I have undying respect for, gave our team lead a budget and instructed him to take us out for a few drinks. For me the hit wasn’t so bad. I figured this was part of the learning experience, and cancellations happen, after all. So here I was, learning the stark realities of game development. But for most of the team this was an unexpected end to more than two, almost three years of work. Subsequently, they were hit pretty hard by the news, and I remember lots of long faces around the small table we occupied in the pub.

We (and by we, I mean the bosses… but we’re all in this together, right?) entertained the idea of re-skinning the game, but that plan was abandoned pretty quickly. I was still pretty busy, though. I had been working on two other games while working on Glover 2, as they used the level editor that I had inherited from the original author. As such, I was able to swallow down any disappointment that I had and focus on supporting them.

After that was all done, I was moved on to the team who had been picked to handle the development of the game for Chicken Run. And boy, were there a few stories behind that.

My Xcode wishlist

For a long-term Visual Studio user (16 years and counting), the transition to Xcode has been a wee bit of a shock. While it’s great having a development environment provided for free, there are some things that irritate the hell out of me while using Xcode.

Rather than gripe about the problems, I’d rather make a list of the features that I’d like added to Xcode to fix the issues that get in the way of being productive.

  • Adding new files should remember the last folder location you used. Constantly having to navigate to the folder you last used is irritating as hell, not to mention time-consuming.
  • Block-tab/edit – I’m sure there’s a way to do this and it’s probably some sort of twister-like key combination. But if I want to indent/unindent a block of highlighted code, I want to simply press tab and have Xcode, like VS, figure “hey, let’s apply the change to a whole block of code” or “let’s indent this code instead of deleting all the highlighted lines”.
  • Global custom properties across projects in a workspace. Oh dear god, that would be so nice. There’s nothing more error-prone and irritating than having to enter the same custom property for multiple projects within a workspace. I’m not expecting VS style property sheets (as groovy as they are) but just the ability to set shared properties would be very, very awesome.
  • Build dependency lists – I’d like this. I’d like to have a list of projects within the workspace and be able to tag which ones a single project is dependent on without necessarily linking to it. Why? Because often I may be writing a dynamic library that is not hard-linked to an executable or another dynamic library – it may be run-time loaded by another project in the workspace, in which case build-time linking makes no sense (see the sketch after this list).
  • Please add a list of available macros that I can use in the project build settings. Pretty please! I’m sick of having to look this up in the on-line help.
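To illustrate the run-time loading case from the build-dependency item above, here’s a minimal sketch using the standard POSIX dlopen API. The library path and symbol name are made up for the example.

```cpp
#include <dlfcn.h>
#include <cstdio>

int main()
{
    // Load the library at run time; no build-time link step involved.
    void* lib = dlopen("libplugin.dylib", RTLD_NOW);
    if (!lib)
    {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    // Look up a (hypothetical) entry point by name.
    using InitFn = int (*)();
    InitFn init = reinterpret_cast<InitFn>(dlsym(lib, "PluginInit"));
    if (init)
        init();

    dlclose(lib);
    return 0;
}
```

This is exactly the case where a build-dependency tag (build it first, but don’t link it) would be useful, since Xcode otherwise has no reason to rebuild the library before the executable that loads it.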

That’s all I have for now. I’m sure more things will occur to me after writing this, but these are the main niggles that cause me to sigh (minus one luck, according to my Japanese colleagues), groan, moan, curse, squeeze the crap out of my stress ball or wander off to make a cup of tea because “I’m done dealing wit’ dat shit, yo’” (imagine a middle-aged Scottish guy saying that last bit).