Chapter X: going further
Time to say goodbye! I would not want you to leave empty-handed, so I put together this hodgepodge of topics that you may want to look into and pick from for the next steps of your Vulkan journey. I give a few sentences of motivation for these topics, but I do not describe any of them in detail. Instead, I provide pointers to existing resources that do a nice job at that. Cheers!
A. Classical rendering techniques
In this series, we focused on how to instruct Vulkan to do things, and not on what these things should be, despite the coolness of computer graphics mostly residing in the latter. How to get shadows to work? How to render realistic-looking water? The learnopengl website is an OpenGL tutorial that goes into more detail on such questions. The techniques it discusses carry over to Vulkan.
You may also be interested in seeing how actual industrial-grade game engines are implemented. Adrian Courrèges hosts a bunch of graphics studies on his blog, and big studios often give talks at specialized conferences:
- Assassin's Creed Unity (slides): SIGGRAPH 2015 talk by Ulrich Haar and Sebastian Aaltonen
- Destiny (video, slides): GDC 2015 talk by Natalya Tatarchuk
- Rainbow Six Siege (video, slides): GDC 2016 talk by Jalal El Mansouri
- Doom Eternal (slides): SIGGRAPH 2020 talk by Jean Geffroy, Axel Gneiting and Yixin Wang
- Nanite (video, slides): SIGGRAPH 2021 talk by Brian Karis, Rune Stubbe and Graham Wihlidal
- Alan Wake 2 (video): Digital Dragons 2024 talk by Erik Jansson
SIGGRAPH hosts so-called "Advances in Real-Time Rendering in Games" courses every year, many of them about production-grade graphics engines (some of the talks outlined above come from there); the slides from 2006 onwards are hosted online and can be found at: (20)25, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 09, 08, 07 and 06.
Though a fair bit dated (early 2000s), the GPU Gems books (accessible online for free) constitute a nice collection of rendering techniques. Inigo Quilez' website contains a lot of great resources (including an impressive library of shortish articles on graphics-related techniques). So do Simonschreibt's website (I quite like this one), the more raytracing-focused graphics codex website, and the more shaders-for-3D-graphics-focused library of short tutorials by David Lettier.
vkguide.dev hosts a great collection of resources; it is the source of some of the references presented above, and it contains many more.
B. Raytracing
Both the classical graphics pipeline and the mesh shading one project the scene's geometry onto the screen, determine which primitives cover which pixels, and compute a color for each covered pixel. This is quite far removed from how vision actually works, and we are effectively forced to turn to dirty hacks (also called neat rendering tricks) to approximate classical effects such as reflections, ambient occlusion or caustics.
In contrast, the raytracing pipeline implements a model which, though still not perfectly scientifically accurate, is close enough to the truth for all of the effects mentioned above to emerge spontaneously. It basically considers that there are light sources in the scene that emit photons, which fly around in straight lines and interact with whatever matter they bump into (absorption, reflection, etc.); when that object happens to be the camera, the pixel the photon lands on gets tinted by it (many photons land on each pixel). Simulating enough photons to get a satisfactory result would be extremely costly, so the raytracing pipeline actually runs backwards: it emits rays from the camera and lets them bounce around until they reach a light source. Consequently, it does not waste time simulating photons that never reach the camera (though it still wastes time simulating paths that never end up reaching a light source).
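The backwards-tracing idea can be sketched in a few lines. The scene, names and numbers below are all made up for illustration: a deliberately tiny 2D world with a diffuse floor and a sky acting as the only light source (bright in a narrow "sun" cone, dim elsewhere).

```cpp
#include <cmath>
#include <random>

const double kPi = 3.14159265358979323846;

// Radiance carried by a ray that escapes upwards: a bright sun cone
// around the vertical, dim sky everywhere else.
double sky_radiance(double angle /* from the floor plane, in (0, pi) */) {
    const double sun_center = kPi / 2.0, sun_half_width = 0.3;
    return std::abs(angle - sun_center) < sun_half_width ? 5.0 : 0.2;
}

// Trace one path backwards: start at the camera, bounce on the diffuse
// floor, and accumulate light once the path reaches a light source (the sky).
double trace_path(std::mt19937& rng) {
    std::uniform_real_distribution<double> hemisphere(0.0, kPi);
    double throughput = 1.0;     // how much light this path can still carry
    double angle = -kPi / 4.0;   // camera ray: pointing down at the floor
    const double floor_albedo = 0.6;
    for (int bounce = 0; bounce < 8; ++bounce) {
        if (angle > 0.0)             // ray goes up: it reaches the sky (a light)
            return throughput * sky_radiance(angle);
        throughput *= floor_albedo;  // ray hits the floor: some light is absorbed...
        angle = hemisphere(rng);     // ...and it bounces in a random direction
    }
    return 0.0;                      // path never reached a light source
}

// Monte Carlo estimate for one pixel: average many paths.
double render_pixel(int num_paths) {
    std::mt19937 rng(42);
    double sum = 0.0;
    for (int i = 0; i < num_paths; ++i) sum += trace_path(rng);
    return sum / num_paths;
}
```

Real path tracers add importance sampling, Russian roulette and much more, but the skeleton (throughput accumulating along bounces until a light is reached) is the same.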
The raytracing pipeline presents a different realism/performance trade-off from the more classical pipelines. It does much better on the realism front, though it also is a fair bit more expensive to run. In fact, it used to be too slow for real-time applications. It is only since 2018 that consumer-grade hardware has been fitted with accelerators that make this method practical in that context (upscaling techniques such as Nvidia's DLSS or GPUOpen's FSR also helped).
In Vulkan, raytracing has been available since version 1.2 via a set of extensions, which Khronos describes in a blog post. Frozein has a short, high-level overview of the raytracing pipeline in video form, and a more in-depth course on the topic was recorded by Johannes Unterguggenberger. NVIDIA provides a tutorial about raytracing in Vulkan.
C. Tessellation
We already discussed how tessellation could be used to add detail to objects based on their distance to the camera. In Vulkan, tessellation is part of the core standard, and it is described in this chapter of the specification. You may also want to look into this tutorial by P.A. Minerva. RasterGrid discusses the history of tessellation support in hardware in a (2010) blog post.
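The distance-based part boils down to a small computation of the kind a tessellation control shader performs: nearby patches get subdivided a lot, distant ones barely at all. The thresholds below are made up for illustration.

```cpp
#include <algorithm>

// Pick a tessellation level for a patch from its distance to the camera.
// Levels are clamped to a range typical of hardware limits (1 to 64).
int tessellation_level(double distance_to_camera) {
    const double reference_distance = 10.0;  // distance at which we want level 8
    double level = 8.0 * reference_distance / std::max(distance_to_camera, 0.001);
    return static_cast<int>(std::clamp(level, 1.0, 64.0));
}
```

A patch at the reference distance gets level 8, one very close to the camera saturates at 64, and far-away patches fall back to level 1 (no subdivision at all).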
D. Geometry shaders
Geometry shaders have bad performance on most desktop platforms. With that being said, who will stop you if you try using them? They are a core part of Vulkan, and the specification has a chapter about them. Once again, P.A. Minerva has a nice tutorial on the topic.
Geometry shaders can output brand new geometry based on their inputs, so they are more powerful than tessellation techniques in that sense. However, they come with their own set of limitations: besides the abysmal performance, they can only output a limited number of new primitives per input primitive.
E. Physics
Physics engines make the world go round, and rigid-body dynamics is their bread and butter. This includes things such as collision detection, as discussed in this cool blog post by Lean Rada. Soft-body dynamics can be viewed as an extension of rigid-body dynamics where objects are allowed to deform. Cloth and hair simulation are both typically handled through soft-body techniques, whereas fluid simulation is its own thing.
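To give a taste of the bread and butter in question, here is a minimal sketch of the broad-phase workhorse of collision detection: the axis-aligned bounding box (AABB) overlap test. Real engines layer much more on top (spatial partitioning, narrow-phase tests against the actual shapes, contact resolution).

```cpp
// An axis-aligned box, described by its minimum and maximum corners.
struct Aabb {
    double min[3];
    double max[3];
};

// Two AABBs overlap iff their extents overlap on all three axes;
// a gap on any single axis is enough to separate them.
bool aabbs_overlap(const Aabb& a, const Aabb& b) {
    for (int axis = 0; axis < 3; ++axis)
        if (a.max[axis] < b.min[axis] || b.max[axis] < a.min[axis])
            return false;
    return true;
}
```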
Physics is a complicated mess, and writing physics engines with decent performance is best left to qualified grown-ups (though if you listened to such embarrassingly reasonable opinions, you would be using an existing graphics engine instead of learning Vulkan). You may for example want to use the Jolt library (which supports both rigid and soft-body physics).
Everything discussed above was about simulating physics in a way that is conducive to real-time rendering. This comes at the expense of exactness. If you care about exactness more than performance, you can turn to different tools such as the finite element method (see this video by The Efficient Engineer) or dignified computational fluid dynamics methods. Just be aware that these methods are hard to reconcile with real-time constraints.
F. AI
The word AI means different things in different contexts. In video game terminology, AI does not (necessarily) refer to machine learning or the like. Some people get very angry about the fact that a pathfinding algorithm can be considered AI, but this is one of those facts of life that you just have to get used to. So, no, this section is not about getting the LLM-of-the-day to use Vulkan in your stead.
Speaking of pathfinding, you may want to take a look at this blog post by Amit Patel, as well as at this post from Factorio's blog. AI can also be about devising strategies to play games (in the game-theoretic sense of the word); minimax is a prime example of such a strategy (for instance, it was a key component of Deep Blue's design).
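Minimax is simple enough to sketch on a toy game. The example below (my choice, not from any of the references above) uses a Nim pile of n stones: each player removes 1 or 2 stones, and whoever takes the last stone wins. It is written in the compact negamax form, where each side maximizes the negation of the opponent's score.

```cpp
#include <algorithm>

// Returns +1 if the player to move wins with perfect play, -1 otherwise.
int minimax(int stones) {
    if (stones == 0) return -1;  // no stones left: the opponent took the last one
    int best = -1;
    for (int take = 1; take <= 2 && take <= stones; ++take)
        // Our score is the negation of the opponent's score in the
        // resulting position; keep the best move.
        best = std::max(best, -minimax(stones - take));
    return best;
}
```

As expected for this game, the player to move loses exactly when the pile size is a multiple of three. Real game AIs add alpha-beta pruning, depth limits and heuristic evaluation functions on top of this skeleton.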
G. Terrain generation
Coupled with methods for keeping only the currently relevant portion of the world in memory, procedural terrain generation enables having practically infinite worlds.
One of the conceptually simpler methods for generating terrain is the diamond-square algorithm. The next step is noise-based terrain generation, as introduced in this blog post by James Wilkins, or in this one by Brian Wiggington. For a production-grade world generator using a noise-based algorithm, see this Jfokus 2022 presentation by Henrik Kniberg about terrain generation in Minecraft. Amit Patel has a slew of articles about Voronoi-grid-based terrain generation (which still uses noise): see this one, this one and these ones (he has amassed a large collection of additional references on the topic over the years). For more realistic results, we may want to simulate the effects of tectonic dynamics and erosion, as discussed in this paper; however, this comes at a cost in performance.
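As an illustration of the conceptually-simpler end of the spectrum, here is a sketch of diamond-square on a (2^k + 1)-sized heightmap. Each "diamond" pass fills the center of every square with the average of its corners plus noise, each "square" pass fills edge midpoints the same way, and the noise amplitude halves at every refinement level. All parameters are illustrative; real uses would also randomize the corner seeds.

```cpp
#include <vector>
#include <random>

std::vector<std::vector<double>> diamond_square(int k, unsigned seed) {
    const int size = (1 << k) + 1;
    // All cells start at 0.0; the four corners keep that value as their seed.
    std::vector<std::vector<double>> h(size, std::vector<double>(size, 0.0));
    std::mt19937 rng(seed);
    double amplitude = 1.0;
    auto noise = [&] {
        return std::uniform_real_distribution<double>(-amplitude, amplitude)(rng);
    };

    for (int step = size - 1; step > 1; step /= 2, amplitude *= 0.5) {
        const int half = step / 2;
        // Diamond step: centers of squares get the average of the 4 corners.
        for (int y = half; y < size; y += step)
            for (int x = half; x < size; x += step)
                h[y][x] = (h[y - half][x - half] + h[y - half][x + half] +
                           h[y + half][x - half] + h[y + half][x + half]) / 4.0 + noise();
        // Square step: edge midpoints get the average of their in-bounds neighbors.
        for (int y = 0; y < size; y += half)
            for (int x = (y / half % 2 == 0) ? half : 0; x < size; x += step) {
                double sum = 0.0;
                int count = 0;
                const int dx[] = {-half, half, 0, 0}, dy[] = {0, 0, -half, half};
                for (int i = 0; i < 4; ++i) {
                    int nx = x + dx[i], ny = y + dy[i];
                    if (nx >= 0 && nx < size && ny >= 0 && ny < size) {
                        sum += h[ny][nx];
                        ++count;
                    }
                }
                h[y][x] = sum / count + noise();
            }
    }
    return h;
}
```

The halving amplitude is what gives the result its fractal, terrain-like character: large features are decided early, fine detail late.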
H. Voxel engines
Most graphics engines are based around polygonal faces — after all, the classical graphics pipelines are built around such objects (or lines, points and the like, though it is possible to be smart and force them to render things such as perfect spheres). With voxel engines, space is represented by a (typically regular) 3D grid whose cells contain geometric data (they are either empty or filled with a material). In other words, voxels are about volumetric data. Voxel engines vary a fair bit in the wild: see for instance how Minecraft, Teardown and 7 Days to Die end up looking very different, though all of them share support for efficient live remodelling of the environment, something that is very hard to achieve using more traditional methods. Voxel engines are a good application for Vulkan, since very specific, low-level optimizations apply to them. For an example of a (mostly) voxel-specific optimization, check the vertex pulling page on Voxel.Wiki.
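The grid-of-cells representation can be sketched with a flat array, together with the classic meshing observation that only faces between a filled voxel and an empty one (or the grid boundary) need geometry at all. This is a toy of my own, not code from any of the engines named above.

```cpp
#include <vector>

struct VoxelGrid {
    int w, h, d;
    std::vector<unsigned char> cells;  // 0 = empty, otherwise a material id

    VoxelGrid(int w_, int h_, int d_) : w(w_), h(h_), d(d_), cells(w_ * h_ * d_, 0) {}

    // 3D coordinates map to one flat index: x + y*w + z*w*h.
    unsigned char& at(int x, int y, int z) { return cells[x + y * w + z * w * h]; }

    // Out-of-bounds cells count as empty, which conveniently exposes
    // the faces on the grid boundary.
    bool filled(int x, int y, int z) const {
        if (x < 0 || x >= w || y < 0 || y >= h || z < 0 || z >= d) return false;
        return cells[x + y * w + z * w * h] != 0;
    }
};

// Count the faces a simple "culled" mesher would emit: for each filled
// voxel, one face per empty neighbor.
int count_visible_faces(const VoxelGrid& g) {
    const int dx[] = {1, -1, 0, 0, 0, 0};
    const int dy[] = {0, 0, 1, -1, 0, 0};
    const int dz[] = {0, 0, 0, 0, 1, -1};
    int faces = 0;
    for (int z = 0; z < g.d; ++z)
        for (int y = 0; y < g.h; ++y)
            for (int x = 0; x < g.w; ++x) {
                if (!g.filled(x, y, z)) continue;
                for (int i = 0; i < 6; ++i)
                    if (!g.filled(x + dx[i], y + dy[i], z + dz[i])) ++faces;
            }
    return faces;
}
```

Note how two adjacent filled voxels produce 10 visible faces rather than 12: the shared face is culled, which is exactly the kind of observation that makes large voxel worlds renderable.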
Voxel engine development appears to be a popular pastime, as evidenced by the many people documenting their development journeys on YouTube: Douglas, Aurailus, Ethan Gore, Philip Mod Dev, John Lin, etc. There is also this nice blog post by 0fps. Voxel.Wiki contains many more resources on the topic.
Making spherical planets in a voxel engine is quite tricky (for reasons related to the impossibility of tiling the surface of a sphere with squares; see this blog post by Red Blob Games). Workarounds have to be devised for circling the square. There are several possible solutions, though all have to compromise on something. See the following resources:
Voxels do not need to look blocky: see the marching squares algorithm (to build intuition), and its 3D analog, the marching cubes algorithm.
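The heart of marching squares fits in a few lines: each grid cell gets a 4-bit case index from which of its corners lie inside the shape, and that index then selects which contour segments to emit (the full 16-entry lookup table is omitted here). This sketch only computes the index.

```cpp
// Corner order: (x,y), (x+1,y), (x+1,y+1), (x,y+1) -> bits 0..3.
// `field` samples the scalar field at integer grid coordinates.
template <typename Field>
int cell_case(Field field, double iso, int x, int y) {
    int index = 0;
    if (field(x,     y    ) > iso) index |= 1;
    if (field(x + 1, y    ) > iso) index |= 2;
    if (field(x + 1, y + 1) > iso) index |= 4;
    if (field(x,     y + 1) > iso) index |= 8;
    return index;  // 0 or 15: the contour misses the cell; anything else crosses it
}
```

Marching cubes works the same way, only with 8 corners, a 256-entry table, and triangles instead of segments.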
I. Retro stuff
People did real-time graphics before GPUs were a thing, relying on quaint hacks to get things working. See for instance this very simple code for terrain generation, this video about raycasting engines (as featured in the original Wolfenstein/Doom games), or this modern Game Boy Advance programming guide.
J. The future
What will the future be made of? Nobody really knows, which is exactly what makes hazarding guesses fun.
Part of Vulkan's initial success was due to it being a (mostly) clean-slate design that could do away with the cruft accumulated by competing APIs, which had to follow decades of graphics hardware evolution. OpenGL was almost twenty-five years old when Vulkan was released! Well, Vulkan is now fresh out of its first decade, and graphics hardware has kept evolving quite dramatically. It seems to be getting closer to generic SIMD devices: all the data is written in addressable GPU memory, and bindless resources reign supreme. Consequently, some abstractions that were enshrined in the API ten years ago are now empty shells: the drivers just act as a translation layer between them and the (usually simpler) modern graphics hardware concepts.
In the modern Vulkan chapter, we saw how the API was extended to accommodate a more, well, modern programming style: bindless resources, buffer device addresses, mesh shaders, etc. These are all new concepts that coexist with the vanilla ones, making an already complex API more complex still. Furthermore, not all devices support these new features, so using them means sacrificing compatibility (or having to program a fallback, meaning about twice as much work). It would be nice if some version of Vulkan where most of the old concepts are deprecated could become widely supported; in the current state of affairs, deprecated concepts must still be used for building truly cross-platform applications.
What would Vulkan look like if it were designed today? More lightweight designs seem possible. This 2025 blog post by Sebastian Aaltonen discusses what a minimal API could look like for modern devices (i.e., those released from about 2020 on; after all, at the time of its own release, Vulkan did not bother with older GPUs either). This post is really long, but it is also really good; I cannot encourage you enough to read it.