Chapter X: going further

Warning

This chapter has never been proofread

Time to say goodbye! I would not want you to leave empty-handed, so I put together this hodgepodge of topics that you may want to look into and pick from for the next steps of your Vulkan journey. I give a few sentences of motivation for these topics, but I do not describe any of them in detail. Instead, I provide pointers to existing resources that do a nice job at that. Cheers!

A. Classical rendering techniques

In this series, we focused on how to instruct Vulkan to do things, and not on what these things should be. Many techniques exist for generating realistic graphics. How could we get shadows to work? How can we render realistic-looking water? The learnopengl website is an OpenGL tutorial that goes into more detail on such questions. The techniques it discusses naturally carry over to Vulkan.

You may also be interested in learning how actual engines work. Adrian Courrèges hosts a bunch of graphics studies on his blog, and big studios often give talks at specialized conferences:

SIGGRAPH hosts so-called "Advances in Real-Time Rendering in Games" courses every year, many of them about production-grade graphics engines (some of the talks mentioned above come from there); the slides from 2006 onwards are hosted online and can be found at: (20)25, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 09, 08, 07 and 06.

Though a fair bit dated (early 2000s), the GPU Gems books (accessible online for free) constitute a nice collection of rendering techniques.

vkguide.dev hosts a collection of resources; it is the source for some of the references presented above, but it contains some more.

Inigo Quilez' website contains a lot of great resources (including an impressive library of shortish articles on graphics-related techniques). So does Simonschreibt's (I quite like this one).

B. Raytracing

The classical graphics pipeline (and the mesh shading one) gives the same result as drawing a ray from each pixel on the screen, checking which object this ray first meets in the scene, and computing a color based on that object. Actual vision works the other way around: light sources emit photons, which bump into objects that may absorb or reflect them; when the object a photon hits happens to be the camera, the pixel it lands on gets colored by this photon (very handwavily speaking), and many photons land on each pixel.
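To make the first description concrete, here is a minimal sketch of how a raytracer would generate the primary ray through a given pixel; this is not Vulkan code, and all names are made up for illustration (the camera is assumed to sit at the origin, looking down -Z, with a 90-degree vertical field of view):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Direction of the primary ray through pixel (px, py), for a camera at the
// origin looking down -Z with a 90-degree vertical field of view.
Vec3 primaryRayDir(int px, int py, int width, int height) {
    float aspect = float(width) / float(height);
    // Map pixel centers to [-1, 1] normalized device coordinates.
    float ndcX = (2.0f * (px + 0.5f) / width - 1.0f) * aspect;
    float ndcY = 1.0f - 2.0f * (py + 0.5f) / height;
    return normalize({ndcX, ndcY, -1.0f});
}
```

The raytracer would then intersect this ray with the scene and shade the pixel based on what it hits; the "actual vision" description inverts the direction of travel but yields the same image.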

Raytracing is about this second form of rendering. As it is closer to how nature actually works, raytracing makes it easier to obtain realistic results (reflections, ambient occlusion, caustics, etc., do not need to be approximated via rendering tricks). However, it is a costly rendering method that used to be too slow for real-time applications. It is only since 2018 that consumer-grade hardware has been fitted with raytracing accelerators.

In Vulkan, raytracing is handled via a set of extensions released in 2020, which Khronos describes in a blog post. Frozein has a short, high-level overview of the raytracing pipeline in video form, and a more in-depth course on the topic was recorded by Johannes Unterguggenberger. NVIDIA provides a tutorial about raytracing in Vulkan.

C. Tessellation

We already discussed how tessellation could be used to add details to objects depending on their distance from the camera. In Vulkan, tessellation is part of the core standard. It is described in this chapter of the specification. You may also want to look into this tutorial by P.A. Minerva. If the history of tessellation support in hardware is of interest to you, this blog post by RasterGrid has you covered.

D. Geometry shaders

Geometry shaders have bad performance on most desktop platforms. With that being said, who will stop you if you try using them? They are a core part of Vulkan, and the specification has a chapter on the topic. Once again, P.A. Minerva has a nice tutorial about them.

Geometry shaders can output brand new geometry based on their inputs, so they are more powerful than tessellation in that sense, but they also come with limitations of their own (besides the performance cost, they can only output a limited number of new primitives per input primitive).

E. Physics

Physics engines make the world go round, and rigid-body dynamics is their bread and butter. This includes things such as collision detection (see this cool blog post by Lean Rada). Soft-body dynamics can be viewed as an extension of rigid-body dynamics where objects are allowed to deform. Cloth and hair simulation are both typically handled through soft-body techniques. Fluid simulation is its own thing.
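To give a taste of collision detection in its very simplest form, here is a minimal sketch (plain C++, no physics library; the types are made up for illustration) of a sphere-sphere overlap test, the kind of cheap check an engine runs before any more expensive work:

```cpp
#include <cassert>

struct Sphere { float x, y, z, radius; };

// Two spheres overlap when the distance between their centers is smaller
// than the sum of their radii. Comparing squared distances avoids a sqrt.
bool spheresCollide(const Sphere& a, const Sphere& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    float r = a.radius + b.radius;
    return dx * dx + dy * dy + dz * dz < r * r;
}
```

Real engines wrap such tests in broad-phase structures (grids, bounding volume hierarchies) so that only nearby pairs of objects are ever tested against each other.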

Physics is a complicated mess, and writing physics engines with decent performance is best left to qualified grown-ups. You may for example want to use the Jolt library (which supports both rigid and soft-body physics); though if you listened to such embarrassingly reasonable opinions, you would probably be using an existing graphics engine instead of turning to Vulkan.

Everything discussed above was about simulating physics in a way that is conducive to real-time rendering. This comes at the expense of exactness. If you care about exactness more than performance, you would use different tools, such as the finite element method (see this video by The Efficient Engineer) or dignified computational fluid dynamics methods. Just be aware that these methods are hard to reconcile with real-time constraints.

F. AI

The word AI means different things in different contexts. In video game terminology, AI does not (necessarily) refer to machine learning or the like. Some people get very angry about the fact that a pathfinding algorithm can be considered AI, but this is one of those facts of life that you just have to get used to. So, no, this section is not about getting the LLM-of-the-day to use Vulkan in your stead.

Speaking of pathfinding, you may want to take a look at this blog post by Amit Patel as well as at this post from Factorio's blog. AI can also be about strategies for playing games (in the game-theoretic sense of the word); minimax is a prime example of such a strategy (for instance, it was a key component of Deep Blue).
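To show how small the core of minimax is, here is a toy sketch over an explicit game tree (a simplified illustration: real implementations generate moves lazily, cut off the search at some depth, and add alpha-beta pruning):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// A toy game tree: a node is either a leaf carrying a score, or an internal
// node whose children are explored by alternating players.
struct Node {
    int score;                   // only meaningful when children is empty
    std::vector<Node> children;
};

// The maximizing player picks the child with the highest value, the
// minimizing player the one with the lowest; leaves return their score.
int minimax(const Node& node, bool maximizing) {
    if (node.children.empty()) return node.score;
    int best = maximizing ? -1000000 : 1000000;
    for (const Node& child : node.children) {
        int value = minimax(child, !maximizing);
        best = maximizing ? std::max(best, value) : std::min(best, value);
    }
    return best;
}
```

The value returned for the root is the score the current player can guarantee against a perfect opponent, which is exactly what a game-playing AI needs to pick its move.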

G. Terrain generation

Coupled with methods for loading only the parts of the world that are relevant to a player's current position, procedural terrain generation enables having practically infinite worlds.

Many different methods coexist. One of the conceptually simpler ones is the diamond-square algorithm. There are many methods based on noise, as introduced in this blog post by James Wilkins, or in this one by Brian Wiggington. For a consumer-grade world generator using a noise-based algorithm, see this JFOKUS 2022 presentation about terrain generation in Minecraft by Henrik Kniberg. Amit Patel has a slew of articles about Voronoi-grid-based terrain generation (which still uses noise): see this one, this one and these ones. For more realistic results, we may want to simulate the effects of tectonic dynamics and erosion, as discussed in this paper; however, this comes at a cost in performance. Over the years, Amit Patel has also amassed a large collection of additional references on the topic.
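As an illustration of the noise-based family, here is a minimal value-noise sketch (the hash constants are arbitrary ones picked for illustration; production generators typically use Perlin or simplex noise and sum several octaves at different frequencies to get fractal heightmaps):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Deterministic pseudo-random value in [0, 1] for an integer lattice point.
float hashToUnit(int x, int y) {
    uint32_t h = uint32_t(x) * 374761393u + uint32_t(y) * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return float(h & 0xffffffu) / float(0xffffffu);
}

// Value noise: random values at integer lattice points, smoothly
// interpolated in between. Feeding (x, y) world coordinates in and using
// the result as terrain height gives gently rolling hills.
float valueNoise(float x, float y) {
    int x0 = int(std::floor(x)), y0 = int(std::floor(y));
    float tx = x - x0, ty = y - y0;
    // Smoothstep so the gradient is continuous across cell borders.
    tx = tx * tx * (3.0f - 2.0f * tx);
    ty = ty * ty * (3.0f - 2.0f * ty);
    float a = hashToUnit(x0, y0),     b = hashToUnit(x0 + 1, y0);
    float c = hashToUnit(x0, y0 + 1), d = hashToUnit(x0 + 1, y0 + 1);
    float top = a + (b - a) * tx;
    float bottom = c + (d - c) * tx;
    return top + (bottom - top) * ty;
}
```

Because the output depends only on the coordinates (no stored state), any chunk of the world can be regenerated on demand, which is what makes noise so well suited to the practically infinite worlds mentioned above.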

H. Voxel engines

Most graphics engines are based around polygonal faces — after all, the graphics pipeline is very much based on such objects (or lines, points and the like, though it is possible to be smart and force it to render things such as perfect spheres)! With voxel engines, space is represented by a (typically regular) 3D grid, where each cell describes the local geometry (it is either empty or filled by a material). In other words, voxels are about volumetric data. Voxel engines vary a fair bit in the wild: see for instance how Minecraft, Teardown and 7 Days to Die manage their voxels in very different ways, though all of them support efficient live remodelling of the environment, something that is very hard to achieve with the traditional method. Voxel engines are a good application for Vulkan, since very specific, low-level optimizations apply to them.

Voxel engine development seems to be not that rare a pastime, as evidenced by the many people documenting the development of their voxel engines on YouTube: Douglas, Aurailus, Ethan Gore, Philip Mod Dev, John Lin, etc. There is also this nice blog post by 0fps. Voxel.Wiki contains many more resources on the topic.

Making spherical planets in a voxel engine is quite tricky (for reasons related to the impossibility of tiling the surface of a sphere with squares; see this blog post by Red Blob Games). Workarounds have to be used for circling the square. There are several possible solutions, though all have to compromise on something.

Voxels do not need to look blocky: see the marching squares algorithm (to build intuition), and its 3D analog, the marching cubes algorithm.

For an example of a (mostly) voxel-specific optimization, check the vertex pulling page on Voxel.Wiki.

I. Retro stuff

People did real-time graphics before GPUs were a thing, relying on quaint hacks to get things working. See for instance this very simple code for terrain generation, this video about raycasting engines (the technique behind the original Wolfenstein 3D and other early-90s shooters), or this modern Game Boy Advance programming guide.