A summary of the Rendering Pipeline for Humans

I just started playing Xenoblade Chronicles 2 on the Switch. It is pretty neat. Anyway, in this post I wanted to talk about the rendering pipeline, or rather put it in simple terms that less tech-savvy people can understand.

In a game, rendering can be seen as the process of drawing a scene on a computer screen. It involves a mathematical combo of geometry, textures, surface treatments, the viewer's perspective and lighting. In other words, it is a combo of the geometry that forms the meshes made in Maya, Blender, whatever; the textures that are applied to those meshes; shaders that do some neat stuff to make it look cool; where the camera is looking; and how the scene has been lit. The rendering pipeline then represents the flow of processes that take place to show a virtual environment on your screen. Basically, how your games console draws that sweet Vidja Game on the screen (bit of an oversimplification, but you get the idea).

In simple terms, there are three stages of the pipeline:

  • The application phase runs (generally) on the CPU and deals with all your gamey stuff: moving objects, input, collisions, etc.
  • This then feeds into the geometry phase, which basically determines where stuff is. It involves calculations regarding the position of the camera, the translation, scale and rotation of each object, and all the mesh data we have.
  • The rasterisation phase then actually displays the image on the screen. It goes through some more in-depth processes to get that desired image onto your sweet 43-inch Ultra HD TV.

Now it is a bit of a cop-out to say “more in-depth processes”, so let’s have a look at what those actually are. We are going to look deeper into both the Geometry and Rasterisation parts of the pipeline.

I know what you are thinking.

Those sound like pretty technical things! S’alright, we are gonna look at each one individually. Imagine we are trying to render a model on our screen. Maybe it is a model of a character. Or a car. Or a big massive gothic cathedral. Or an awesome mech. I will probably refer to the model as just a “character.” This is the process that happens to get it on the screen.

  • Geometry – So we kind of looked at this phase earlier on. In deeper terms, a model is made out of polygons and vertices. These are the shapes that make up a 3D model; in other words, they describe the structure of your game character. Not in gameplay terms, but what shape the character should take in the game. The geometry phase processes all of these polygons and vertices so that the data can be used in the other phases.
  • Illumination – This is where the models are coloured and lit. In other words, this is where the textures that represent the details of a character (like Link’s skin and clothing in Zelda) are applied to the model, along with the details of how the model should be lit by light sources in the game, like street lamps, the sun, etc. This is also the stage where fancy programs called shaders (not Destiny 2 shaders, CURSE YOU BUNGIE!) can make models in the game look real nice.
  • View Perspective – The model is processed through a viewing perspective, or rather a camera. The process looks at how the camera is set up, whether it has an Orthographic projection (often used for 2D games) or Perspective projection (often used in 3D games) and how big the field of view of the camera is.
  • Clipping – This process looks at which parts of the model are outside of the camera’s viewing volume, or rather which parts of the character can be seen by the camera. The bits that cannot be seen are clipped away.
  • Screen-Space Projection – This is where we take the 3D object and project it into 2D space so it can be displayed on a screen. Monitors fundamentally show a 2-dimensional image, so we are mapping the 3D model into this space and producing a 2D image to be displayed on the screen (there is a tiny worked example of this just after the list).
  • Rasterisation – This is where fancy post-processes occur. A post-process is an extra visual technique that is applied to the image and includes techniques such as bloom. These techniques are applied to the 2D image created by the Screen-Space Projection phase.
  • Display – This is the final image.
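
If you fancy seeing the View Perspective and Screen-Space Projection steps with actual numbers, here is a tiny, self-contained sketch. It is purely illustrative: the 90-degree field of view and the 1920x1080 screen are made-up example values, and it ignores details like aspect ratio that a real renderer would handle.

using System;

class ProjectionSketch
{
    static void Main()
    {
        // A vertex sitting 2 units in front of the camera, 1 unit up and 1 to the right
        double x = 1.0, y = 1.0, z = 2.0;

        // View Perspective: things further from the camera (bigger z) end up smaller
        double fov = 90.0 * Math.PI / 180.0;
        double scale = 1.0 / Math.Tan(fov / 2.0);
        double ndcX = (x * scale) / z;   // roughly -1..1 across the screen
        double ndcY = (y * scale) / z;

        // Screen-Space Projection: map that -1..1 range onto actual pixels
        int screenW = 1920, screenH = 1080;
        double pixelX = (ndcX + 1.0) * 0.5 * screenW;
        double pixelY = (1.0 - (ndcY + 1.0) * 0.5) * screenH; // screen y runs top to bottom

        Console.WriteLine($"Vertex lands at pixel ({pixelX:F0}, {pixelY:F0})");
    }
}

Run that and the vertex comes out at pixel (1440, 270). That little dance is what the geometry and projection stages are doing for every single vertex in the scene, every frame.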

And yeah, that is that really. I hope this helps you understand how the rendering pipeline works a bit better.


Unity Shader Tutorial: An intro to Compute Shaders

Are you ready to turn this up to 11? We are going to look at some real “Triple A” business now. The world of compute shaders. So what are these mysterious creatures that you probably don’t know exist in Unity?

To be honest, I had completely forgotten they were there. I was looking at a fur tutorial (that doesn’t actually seem to work, by the way, and was also a really dirty way of doing it) and then remembered my mate had said you could probably do grass and fur in one. I think he actually meant geometry shaders, but compute shaders piqued my interest.

However, after digging around the net, it turns out that information about using them in Unity is quite scarce.

Let’s start from the top!

What is a compute shader, and why should I care?

In Microsoft’s fancy terms, “a compute shader is a programmable shader stage that expands Microsoft Direct3D 11 beyond graphics programming” and “a compute shader provides high-speed general purpose computing and takes advantage of the large numbers of parallel processors on the GPU”.

In simple terms, a compute shader is a program that runs on the graphics card that does stuff outside of the normal rendering pipeline.

So you are probably thinking “OK, I kind of get it, you can run some logic and put some work onto the graphics card, but why would I want to do that?” Well, these shaders are really good at maths and parallelisation, i.e. they are really good at performing tasks where you are doing a lot of the same thing. In other words, they are really good at tasks that involve applying the same set of calculations to every element in a given data set.
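
A classic example (just an illustration, nothing Unity-specific yet) is adding two big arrays together. On the CPU you walk through the elements one after another; on a GPU, each iteration of that loop could be its own thread, all running at the same time.

using System;

class ArrayAddExample
{
    static void Main()
    {
        float[] a = new float[1000000];
        float[] b = new float[1000000];
        float[] result = new float[1000000];

        // CPU version: one element after another
        for (int i = 0; i < result.Length; i++)
        {
            // This loop body is exactly the kind of work a compute shader
            // would hand out to a million tiny threads instead
            result[i] = a[i] + b[i];
        }

        Console.WriteLine("Added " + result.Length + " elements");
    }
}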

That is probably still a kind of crappy explanation, so let’s wind the clock back a bit to when I was just gracing the planet with my presence: the 90s. It was a beautiful time, with games like Doom, Final Fantasy 7, The Legend of Zelda: Ocarina of Time, Crash Bandicoot, Tekken 3… do I need to go on? Essentially lots of 3D games, and PCs started shipping with graphics cards. Stuff like this bad boy.

What a rush indeed! Getting that sweet 6MB of power all up in your grill. Anyway, OpenGL and DirectX appeared and the magic of the fixed-function pipeline emerged: developers just sent geometry down to the graphics card and OpenGL/DirectX would figure it out. However, that pipeline was pretty rigid, and to make more interesting effects and push the boundaries it had to become more flexible. This led to shaders, where devs could write their own programs to perform certain parts of the pipeline and make things look like the wizard’s tits.

This then opened up a lot of possibilities: the new system meant the pipeline could deal with a lot of different types of algorithms, and now the GPU can do stuff like crazy multi-threaded physics, etc.

What this means now is we can do crazy stuff like NVIDIA’s HairWorks.

You on board now? If not, just know it is cool and you feel like a Game Development Maverick when you do it.

Basically, you can potentially harness the GPU to do non-graphicsy stuff if you so desire and gain MOAR POWER.

Sod it, let’s jump in!

That’s the attitude I want!

Before you start though, you need a WINDOWS machine. Macs don’t support them. And to be honest they are kinda crappy for big boy game development like this anyway 😛

Create a compute shader in Unity.

The first thing you will notice is that this is not CG. This is a DirectX 11-style HLSL bad boy. Yeah, fasten your seat belts, boys and girls.

// Each #kernel tells which function to compile; you can have many kernels
#pragma kernel CSMain

// Create a RenderTexture with enableRandomWrite flag and set it
// with cs.SetTexture
RWTexture2D<float4> Result;

[numthreads(8,8,1)]
void CSMain (uint3 id : SV_DispatchThreadID)
{
    // TODO: insert actual code here!

    Result[id.xy] = float4(id.x & id.y, (id.x & 15)/15.0, (id.y & 15)/15.0, 0.0);
}

So the above is what you get if you just create one from scratch.

The #pragma kernel CSMain is the first thing we see in our shader. Kind of like in our other shaders, this is us telling the program where our entry point is, in this case CSMain. A compute shader can have many functions, and you can call a specific function from a script. More on that later.
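
As a quick preview of that script side of things, FindKernel is how a script asks for a specific kernel by name and gets back the index it then uses for binding and dispatching. The shader asset and kernel names below are just hypothetical:

using UnityEngine;

public class FindKernelExample : MonoBehaviour
{
    // A hypothetical compute shader with two kernels declared as
    // "#pragma kernel BlurHorizontal" and "#pragma kernel BlurVertical"
    [SerializeField]
    private ComputeShader _shader;

    void Start()
    {
        // Look each kernel up by the name in its #pragma line; the returned
        // index is what you pass to SetBuffer/SetTexture/Dispatch later
        int horizontal = _shader.FindKernel("BlurHorizontal");
        int vertical = _shader.FindKernel("BlurVertical");

        Debug.Log("Kernel indices: " + horizontal + ", " + vertical);
    }
}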

The next bit is the RWTexture2D<float4> Result declaration.

Again, like our other shaders, this is just a variable declaration. However, as we aren’t using mesh data, we have to say what the shader will read from and write to. In this case we have a RWTexture2D, a read/write 2D texture object that the program is gonna use. Take a look at MSDN for reference.


Finally, the last super different thing is the numthreads attribute, which sets the dimensions of the thread groups spawned by our shader. GPUs love the parallel processing business and create threads that run simultaneously. This line specifies how the threads in each group are organised, and with 8 x 8 x 1 we are saying that each group contains 64 threads. Take a look at MSDN for reference.


The size of your thread groups will be determined by a lot of factors, most notably your target hardware. For example, the PS4 may have a different optimum size compared to the Xbox One.
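
To tie those last two bits together, here is a rough sketch of the C# side: creating a RenderTexture with the enableRandomWrite flag (as the comment at the top of the generated shader hints), binding it to Result with SetTexture, and then dispatching enough 8 x 8 x 1 groups to cover every pixel. The 256x256 texture size is just an example value:

using UnityEngine;

public class RunTextureKernel : MonoBehaviour
{
    // Assign the compute shader asset in the Inspector
    [SerializeField]
    private ComputeShader _shader;

    void Start()
    {
        // A texture the compute shader is allowed to write into
        RenderTexture tex = new RenderTexture(256, 256, 0);
        tex.enableRandomWrite = true;
        tex.Create();

        // Bind it to the "Result" variable in kernel 0 (CSMain)
        _shader.SetTexture(0, "Result", tex);

        // [numthreads(8,8,1)] means 64 threads per group, so 32 x 32 x 1 groups
        // gives one thread per pixel of the 256x256 texture
        _shader.Dispatch(0, 256 / 8, 256 / 8, 1);
    }
}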

The rest is kind of bog standard code. The kernel function works out which pixel it should be working on from the uint3 id of the thread running the function, and writes some data into the Result texture.

Cool. We have our first compute shader! But how do we actually run this warlock? It doesn’t run on mesh data so we can’t attach it to a mesh. We need to grab it from a script.

But hold up, before we start let’s change up the compute shader Unity spat out and make it do something different.

// Each #kernel tells which function to compile; you can have many kernels
#pragma kernel CSMain

RWStructuredBuffer<int> buffer1;

// One thread per element of our four-int buffer (group size chosen for this example)
[numthreads(4,1,1)]
void CSMain (uint3 id : SV_DispatchThreadID)
{
    buffer1[id.x] = mul(id.x, 2.0);
}

You can see we switched out the texture for a structured buffer. This is just an array of data consisting of a single data type, in this case an int. In the code you can see we are just taking the id of the thread and multiplying it by 2.

Cool, let’s write a new script.

using UnityEngine;
using System.Collections;

public class RunComputeShader : MonoBehaviour
{
    // Assign the compute shader asset in the Inspector
    [SerializeField]
    private ComputeShader _shader;

    void Start()
    {
        // A buffer holding four ints (count = 4, stride = sizeof(int))
        ComputeBuffer buffer = new ComputeBuffer(4, sizeof(int));

        // Bind the buffer to "buffer1" in kernel 0 of the shader
        _shader.SetBuffer(0, "buffer1", buffer);

        // Run kernel 0 with a single thread group
        _shader.Dispatch(0, 1, 1, 1);

        // Copy the results back from the GPU and print them out
        int[] data = new int[4];
        buffer.GetData(data);

        for (int i = 0; i < 4; i++)
        {
            Debug.Log(data[i]);
        }

        buffer.Release();
    }
}

First we are creating a compute buffer with four elements, each the size of an int. A ComputeBuffer is a chunk of memory that compute shader programs can use to store arbitrary data, and SetBuffer tells the shader which buffer to dump its results into. We then use the Dispatch function to run our shader, and GetData to grab the work the shader has done back off the GPU.

If you set up the above, you should see the console print out some numbers (each thread’s id doubled). Yeah, it did that on the graphics card.

Alright, fine, it wasn’t the craziest thing in the world, but it shows that the GPU can do work other than just rendering pretty images.

Round up

This is a post to show you compute shaders are there. I am not saying go out and use them everywhere. The GPU can be used to do some cool multi-threaded tasks, however, a word to the wise: the tasks the GPU is suited to are going to be limited, and you really have to look at the problem you are trying to solve before you go down this path. If your game is gonna be super pretty, you probably want to be maxing out the GPU on that first before offloading stuff the CPU can do onto it. If your GPU is just idling though… maybe on some lower-poly strategy game, etc., then maybe consider offloading some of the logic to the GPU using a compute shader.

Well until next time!

Unity Shader Tutorial: Surface Shader Custom lighting

Yes, I know I said I wasn’t going to do a tutorial this weekend, but I was learning some shader stuff for my project, and the cel-shading in Tales of Vesperia, Guilty Gear Xrd and other games is a big inspiration. I was following a Gamasutra article on how to do this and saw the author was using a custom lighting function within a surface shader.

When working on some projects, you don’t always want to go for super realistic lighting. You may want to use a custom lighting model rather than just the built-in one. Here is the shader code I am using, a slightly modified version of the one in the Gamasutra article:


Take a look at the first pragma line, where we are defining the surface shader function and the lighting model. Instead of using one of the built-in lighting models, we are using a method we have defined ourselves. Scroll down the shader and you will see the method “LightingCelShadedUsingForwardRendering”. This is the custom lighting calculation we are using: a function, which you can define in various ways, that returns a half4.

The one thing that caught me out when creating this custom method is that you need to prefix its name with “Lighting”, but when you tell the #pragma which lighting method you are using, you write the name minus that prefix. You are now using custom lighting in your shader! Neat, huh?
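
To make that naming convention clearer, here is a stripped-down sketch of the idea. This is not the exact shader from the Gamasutra article, just a minimal two-band cel shade showing how the name in the #pragma line and the “Lighting”-prefixed function line up:

Shader "Custom/CelShadedSketch"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }

        CGPROGRAM
        // "CelShadedUsingForwardRendering" here maps to the function
        // LightingCelShadedUsingForwardRendering below, minus the "Lighting" prefix
        #pragma surface surf CelShadedUsingForwardRendering

        sampler2D _MainTex;

        struct Input
        {
            float2 uv_MainTex;
        };

        // Custom lighting function: the name must start with "Lighting" and it returns a half4
        half4 LightingCelShadedUsingForwardRendering (SurfaceOutput s, half3 lightDir, half atten)
        {
            // A crude two-band cel shade: surfaces facing the light get full colour,
            // everything else drops to a darker flat band
            half band = dot(s.Normal, lightDir) > 0 ? 1.0 : 0.3;
            half4 c;
            c.rgb = s.Albedo * _LightColor0.rgb * band * atten;
            c.a = s.Alpha;
            return c;
        }

        void surf (Input IN, inout SurfaceOutput o)
        {
            o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
            o.Alpha = 1;
        }
        ENDCG
    }
    FallBack "Diffuse"
}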

It is worth taking a further look at the Unity docs to see other examples of how you can define your own lighting models.