Anatomy of a Demo Framework

Demo engine building blocks

It’s time to look inside the main executable file and plan what we want to put there. The architecture of a demo framework is a bit similar to that of a game engine. In fact, some demos are created with tools like Unity 3D.

Using such tools on the demoscene is a controversial and interesting topic. However, we are here to write our own engine, so let’s see what we will need to build one.

  • We want something flexible, yet not too complicated.
  • Over-engineering should be avoided in software targeted at high performance.
  • It should be easily reusable, so next time we can focus on just writing new effects.
  • Only modern OpenGL techniques should be used.
  • As in every coded demoscene release, speed is important, so we should avoid the unnecessary overhead of slow libraries and sophisticated containers.
  • Speaking of libraries, any external library we use should also be small; we don’t want to include 50 MB of DLLs just to display a window or a GUI.
  • The fewer external dependencies, the better.
  • All licenses should allow us to use and redistribute the libraries with our freeware release. BTW, I will talk about licensing in one of the forthcoming articles.

Base objects

Let’s devise a logical structure of the framework and divide it into objects.

The first one is easy, as we need one object to link everything else together, initialize it, provide the main loop and an entry point for the demo. So, let’s call it “Core”.

class Core;
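
A minimal sketch of how Core’s interface could look; the method names and the plain main() entry point are just my working assumptions at this stage:

class Core {
public:
    int init() { /* create the window, initialize managers, load resources */ return 0; }
    void run() { /* main loop: update the timer, pick the current effect, draw a frame */ }
    void shutdown() { /* release resources in reverse order of creation */ }
};

// The entry point just hands control over to Core.
int main() {
    Core core;
    if (core.init() != 0) return 1;
    core.run();
    core.shutdown();
    return 0;
}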

We want to display some nice effects, but that’s hard to do without a screen. Let’s fix this. This object should handle screen initialization (setting the desired resolution and display mode) both on the Windows side and on the OpenGL side.

class Screen;
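
A rough interface sketch; I’m assuming here that we pass the desired resolution and a fullscreen flag, but the details will be settled during implementation:

class Screen {
public:
    // Create a window, switch the display mode and set up an OpenGL context.
    int init(int width, int height, bool fullscreen);
    // Restore the desktop mode, destroy the context and the window.
    void close();
    // Swap the front and back buffers at the end of each frame.
    void swapBuffers();
};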

In the previous article, we talked about organizing demo resources. It would be nice to actually read them from a hard disk (or an SSD) into memory and use them. In its initial form, this object will read files directly from the file system. Later, we can add support for reading from an archive and use some flag to switch the source without the rest of the system even noticing.

class FileManager; // or ResourceManager, "naming things is hard" :)
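
A simple sketch of the first version, which just reads whole files from disk into memory; the archive support can hide behind the same call later:

#include <fstream>
#include <string>
#include <vector>

class FileManager {
public:
    // Read a whole file into a byte buffer; returns an empty vector on failure.
    std::vector<char> load(const std::string& path) {
        std::ifstream file(path, std::ios::binary | std::ios::ate);
        if (!file) {
            return {};
        }
        std::streamsize size = file.tellg();
        file.seekg(0, std::ios::beg);
        std::vector<char> data(static_cast<size_t>(size));
        file.read(data.data(), size);
        return data;
    }
};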

Reading a texture file into memory is only half the job. As OpenGL doesn’t understand formats like PNG or JPG, we need to decompress them first. This object will handle that with a bit of help from external libraries. Later, the raw data has to be transferred to the graphics card’s memory. This resource is limited, so it would be nice to upload only unique textures; this manager should handle that for us. It should return OpenGL identifiers ready to use in shaders.

class TextureManager;
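
The caching part could look roughly like this; the decoding and the actual OpenGL upload are left out for now, as we’ll get to them when implementing the class:

#include <string>
#include <unordered_map>

class TextureManager {
public:
    // Return the OpenGL texture id for a file, uploading it only the first time.
    unsigned int load(const std::string& path) {
        auto it = m_textures.find(path);
        if (it != m_textures.end()) {
            return it->second;           // already on the graphics card
        }
        unsigned int id = decodeAndUpload(path);
        m_textures[path] = id;
        return id;
    }

private:
    // Decode PNG/JPG data and create the OpenGL texture (to be implemented).
    unsigned int decodeAndUpload(const std::string& path);

    std::unordered_map<std::string, unsigned int> m_textures;
};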

Just like textures, we need to load shaders and prepare them for use. Their sources have to be compiled and linked into programs. Again, when the same shader is requested for the second time, its existing identifier should be returned instead of creating a new program. All other operations, like setting uniform variables, should also be handled here.

class ShaderManager;
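
The compile-and-link part uses the standard OpenGL calls; a very condensed sketch (without the error checking the real class should definitely have) could look like this:

// Assumes an OpenGL context and a function loader (e.g. GLAD) are already set up.
GLuint compileProgram(const char* vertexSrc, const char* fragmentSrc) {
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vertexSrc, nullptr);
    glCompileShader(vs);                    // check GL_COMPILE_STATUS in the real class

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fragmentSrc, nullptr);
    glCompileShader(fs);

    GLuint program = glCreateProgram();
    glAttachShader(program, vs);
    glAttachShader(program, fs);
    glLinkProgram(program);                 // check GL_LINK_STATUS too

    glDeleteShader(vs);                     // the program keeps its own copies
    glDeleteShader(fs);
    return program;
}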

Although it is possible to create amazing effects using only shaders, textures and just two triangles, the ability to load 3D objects could also be useful. Some object should be responsible for uploading the data to the graphics card’s memory, preparing OpenGL buffers and so on. So, let’s add a GeometryManager to the project.

class GeometryManager;
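
Uploading a mesh boils down to filling OpenGL buffers; a bare-bones sketch for vertex positions only (again assuming a context and a function loader are in place) might look like this:

// Upload an array of vertex positions (x, y, z per vertex) and return the VAO id.
GLuint uploadPositions(const float* positions, int vertexCount) {
    GLuint vao = 0, vbo = 0;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vertexCount * 3 * sizeof(float),
                 positions, GL_STATIC_DRAW);

    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), nullptr);

    glBindVertexArray(0);
    return vao;
}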

A demo without a soundtrack is a rare thing. We need an object that can play a file in MP3 or OGG format. Once again, it’s a good idea to use an existing library for that. In addition to being able to pause and stop the music, a nice feature to have is access to the current Fast Fourier Transform (FFT) spectrum data. This can be used to sync an effect to the track’s beat, among other things.

class SoundManager;
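
Whatever audio library we end up with, the manager could expose an interface along these lines (the method set here is just a guess for now):

#include <string>

class SoundManager {
public:
    int  load(const std::string& path);   // load an MP3/OGG file
    void play();
    void pause();
    void stop();
    // Current playback position, useful for driving the script.
    float getPositionInSeconds();
    // Fill the buffer with the current FFT spectrum data for beat syncing.
    void getSpectrum(float* buffer, int size);
};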

Watching an effect that displays only its frame number zero can be boring. To animate anything, we need a timer, preferably a high-precision one. This object should provide methods to check how much time has passed since the start of an effect. On a higher level, we want to know when to run the next effect or scene from our script.

class Timer;
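
A minimal sketch based on std::chrono, which gives us good enough precision on modern systems:

#include <chrono>

class Timer {
public:
    // Remember the starting point, e.g. at the start of an effect.
    void start() {
        m_start = std::chrono::high_resolution_clock::now();
    }
    // Seconds elapsed since start(), as a float for easy use in shaders.
    float elapsed() const {
        auto now = std::chrono::high_resolution_clock::now();
        return std::chrono::duration<float>(now - m_start).count();
    }

private:
    std::chrono::high_resolution_clock::time_point m_start;
};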

Nowadays, every picture is Photoshopped, processed and filtered. We want to be able to run some additional passes on the final render of each generated frame. This way we can change the final colour toning, add depth of field or soften the image. As we will probably want to do this for every effect, it makes sense to move it to a separate object and call its methods when needed.

class PostProcessManager;
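
An interface sketch; the idea is to redirect rendering into a texture and then run one or more full-screen passes over it (the names are placeholders):

class PostProcessManager {
public:
    int  init(int width, int height);   // create offscreen framebuffers and a full-screen quad
    void beginFrame();                  // redirect rendering into an offscreen texture
    void endFrame();                    // run the post-processing shaders and draw to the screen
};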

If we want to show more effects than just a 3D scene player, it would be handy to use an abstract object to represent them. A unified interface should allow us to initialize all of them in a loop and later call them to render frames. Something like:

class Effect {
public:
    Effect() {}
    virtual ~Effect() {}
    virtual int init() = 0;
    virtual void drawFrame(float time) = 0;
};
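
With that interface, a concrete effect and its setup in Core could look roughly like this (the Tunnel effect is just a made-up example):

#include <memory>
#include <vector>

class Tunnel : public Effect {
public:
    int init() override { /* load shaders, textures and geometry */ return 0; }
    void drawFrame(float time) override { /* render the tunnel for the given time */ }
};

// Somewhere in Core: initialize all effects once, up front.
void initEffects(std::vector<std::unique_ptr<Effect>>& effects) {
    effects.push_back(std::make_unique<Tunnel>());
    for (auto& effect : effects) {
        effect->init();
    }
}

// Then, in the main loop:
// effects[current]->drawFrame(timer.elapsed());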

Last but not least, a standard object that can help with debugging things and collecting error reports from other users:

class Logger;
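
A basic sketch that just appends lines to a text file; it can grow severity levels and categories later:

#include <fstream>
#include <string>

class Logger {
public:
    explicit Logger(const std::string& path) : m_file(path, std::ios::app) {}

    void log(const std::string& message) {
        m_file << message << '\n';
        m_file.flush();   // make sure the last lines survive a crash
    }

private:
    std::ofstream m_file;
};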

Summary

So, we have a sketch of the demo framework. The list is not closed, as we can always add something while developing. For example, it would be convenient to wrap some OpenGL-specific functionality like framebuffers or renderbuffers in objects too. Also, do we need a separate object for the script, or will an array or a vector with a basic structure for every effect be enough? We can take care of that later. For now, it’s time to implement the first objects.

Written by Mariusz Bartosik

I’m a software engineer interested in 3D graphics programming and the demoscene. I’m also a teacher and a fan of e-learning. I like to read books. In my spare time, I secretly work in my lab on the ultimate recipe for waffles with whipped cream.
