Hey look, once again I am talking about graphics. And once again, I will be talking about shaders. Why? Well, it is 2017 and shaders STILL seem to be an area that some game developers are scared of. We seem to be relying more and more on node-based “auto-generation” shader tools such as Shader Forge, which tend to produce crazy complicated shaders that may look pretty but could be so much simpler. And all the books I read seem to be cookbooks that just contain recipes for creating certain effects and never actually explain how the functions work.
Also, I have an objective of actively getting better at visuals in games in general, by approaching them in a way that helps get stuff to look pretty without destroying our performance. I want to be able to use these techniques from the start, rather than taking the “throw the most expensive technique at it, make it look good quickly, then optimize it later” attitude that I have come across a few times. It is not clever; you can quite easily back yourself into a corner. The game can look pretty as hell, but if it runs like shit, all you have is a fancy tech demo that only runs on expensive hardware.
Anyway, as part of getting better at visuals, I am revising all my shader knowledge and am going to share it here. I know the R-word can give us war-time style flashbacks to exams, but actually it is a super important thing to do when you are working on a craft. And you can find time to do it: for example, I have a bit of down-time on a Sunday afternoon, so I am making the best use of it rather than sitting in my pants and playing Horizon: Zero Dawn for hours on end (which admittedly is hard not to do as that game is REALLY good, congrats Guerrilla).
Right, flavor part over, let’s get on with the show. In this article I am going to give a super high level introduction to the graphics pipeline, Vertex and Pixel Shaders and how that ties into Unity.
In the beginning
Once upon a time, many moons ago, back when Spyro was cool and not part of the whole Skylanders thing and Nintendo had this mysterious device called “The GameCube”, game developers used to use something called the Fixed Function Pipeline. As a game dev, you would take care of updating your game logic and sending your textures and triangles to the GPU, which would then do all the work on them using the FFP. This was cool, but those functions were fixed. You could make the GPU do things like “turn lighting off” or “show green fog”, but these were fixed functions that the GPU could perform, meaning they were not very flexible and you couldn’t customize them easily.
Stuff needs to look prettier
The fixed nature of the Fixed Function Pipeline, though, limited the number of techniques available to game developers. Although there was a wide range of cool stuff that could be done with the functions in the pipeline, it still wasn’t enough, especially for those chasing realism. Soon the GPU evolved from a configurable implementation of a complex fixed function pipeline into a highly programmable bit of hardware where developers could implement their own algorithms. Programmable shaders.
The Graphics Pipeline
So you have an idea of what a shader is right? A program that is created by Game Developers to implement their own cool graphics algorithm on the GPU.
Before I jump into them in further detail though, it would be downright irresponsible of me not to talk about the Graphics Pipeline. Below is a diagram of the geometry processing and rasterization stages of a typical GPU.
There are a lot of resources out there explaining every single one of these stages, so I am not going to spend a very long time explaining what they all do (right now). I will stick to the ones we care about for this particular article.
Vertex Shaders
As you may or may not know, when you load a 3D model into a game it is typically described using triangles. These triangles are in turn defined by vertices. Each individual vertex is a data structure that contains things like the position of the point in 3D space, the color of the vertex, the texture coordinates and the normal vector. Vertex shaders manipulate this data. They can change the color, change the position in 3D space or change the texture coordinates to scroll a texture across the object. One basic example of vertex shader usage is to create a flag by displacing the vertices of a flat mesh using a sine wave.
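To make that flag example concrete, here is a rough sketch of what the vertex-displacement part might look like in Unity-style HLSL. The property names `_WaveSpeed` and `_WaveHeight` are made up for illustration; `_Time` and `UnityObjectToClipPos` are Unity built-ins.

```hlsl
struct appdata
{
    float4 vertex : POSITION;   // vertex position in object space
    float2 uv : TEXCOORD0;      // texture coordinates
};

struct v2f
{
    float4 pos : SV_POSITION;   // clip-space position for the rasterizer
    float2 uv : TEXCOORD0;
};

float _WaveSpeed;   // hypothetical material properties
float _WaveHeight;

v2f vert(appdata v)
{
    v2f o;
    // Displace each vertex up and down with a sine wave that travels
    // along the x axis over time (_Time.y is Unity's time since load).
    v.vertex.y += sin(v.vertex.x + _Time.y * _WaveSpeed) * _WaveHeight;
    o.pos = UnityObjectToClipPos(v.vertex);
    o.uv = v.uv;
    return o;
}
```

Because this runs once per vertex, the more vertices the flat mesh has, the smoother the wave will look.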
Pixel Shaders
Also known as a fragment shader, a pixel shader calculates the color of a pixel on the screen based on what the vertex shader passes in, bound textures and user-added data. It works out what color and transparency that pixel should have for the current primitive. The pixel shader can take care of stuff like per-pixel lighting and bump mapping, and can be used to achieve cool special effects like Depth of Field, simulating fire and creating shadows.
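As a taster, a minimal pixel (fragment) shader in Unity-style HLSL might look something like this. It just samples the bound texture at the interpolated texture coordinates and tints it; `_Tint` is a made-up material property here, while `_MainTex` is Unity's conventional name for the main texture.

```hlsl
sampler2D _MainTex;  // the texture bound to the material
fixed4 _Tint;        // hypothetical tint color property

fixed4 frag(v2f i) : SV_Target
{
    // Sample the texture at this pixel's interpolated UV and multiply
    // by the tint; the result is this pixel's color (and transparency,
    // via the alpha channel) for the current primitive.
    return tex2D(_MainTex, i.uv) * _Tint;
}
```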
So you can write these pixel and vertex shaders, but how do you do that? Well, it depends on the target environment, i.e. the graphics API you are using. OpenGL uses a shader language called GLSL, whereas DirectX uses a language known as HLSL. Unity uses a variant of HLSL.
What about Shaders in Unity?
Although it is useful to know about the Graphics Pipeline and Vertex and Pixel Shaders, Unity makes things a tiny bit easier for us. I briefly talked about lighting in the Pixel Shader section. Writing shaders that interact with lighting can get very complex, and Unity has a solution for this: Surface Shaders. They essentially make it easier to write shaders with lighting than if you wrote them as Vertex and Pixel Shaders. The code is still written in HLSL, but in a way that removes a lot of the repeated code required when writing lighting in Vertex and Pixel Shaders.
In other terms, Surface Shaders have a built-in concept of how light should behave, whereas Vertex and Pixel Shaders do not. So, if you are doing stuff with lighting, you probably want to use Surface Shaders.
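To give you a rough idea of what that looks like in practice, here is about the smallest complete Surface Shader sketch I can manage: we only fill in the Albedo, and Unity generates all the lighting code behind the scenes. The shader name "Custom/SimpleSurface" is just for illustration.

```hlsl
Shader "Custom/SimpleSurface"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "RenderType" = "Opaque" }
        CGPROGRAM
        // Ask Unity to generate lighting code using the Standard model.
        #pragma surface surf Standard

        sampler2D _MainTex;

        struct Input
        {
            float2 uv_MainTex; // Unity fills this with the mesh UVs
        };

        void surf(Input IN, inout SurfaceOutputStandard o)
        {
            // We only describe what the surface looks like (its albedo);
            // how light interacts with it is handled by generated code.
            o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
        }
        ENDCG
    }
}
```

Compare that with writing the same thing as Vertex and Pixel Shaders, where you would have to handle every light yourself.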
Next time, we will actually make a Surface Shader in Unity. This article was here to whet your appetite and give you an inkling of what shaders are and where they fit into everything. If you want to go deeper, or get a more technical and more complex explanation of the rendering pipeline and Pixel/Vertex Shaders, then it is worth having a peek at the following:
Key Point Across Tutorials and Articles
Although I mainly focused on shaders in this article, one of the points I am going to keep making when we look at graphics techniques is that there are many ways to get to where you want to go. Unity can be a little bit bad for getting you sucked into using “clever” graphics techniques that are available at a tick box or from the Asset Store. As someone once said, “there is more than one way to skin a cat”. Although a graphics technique may be the next triple-A buzzword, that doesn’t mean it is necessarily correct, and graphics is full of tips and tricks for getting stuff to look pretty in a performance-friendly way. You can even drag out little tips and tricks from the PS2 days if you want. Also, thinking about these tips and tricks from the get-go, rather than just using the methods you know are expensive, will help you not get backed into a corner later down the line.