A brief shader overview in Unity

I have started playing Yakuza 0 again. It is worth picking up!

Anyway, as you may have guessed, I haven’t got a relevant image for this article so just whacked in what was playing.

Shaders are often this scary, black magic game dev word that a lot of people run away from.

Shaders are bits of code that run on the GPU. They can be code that affects visuals of the image but also in some cases, you can offload complex calculations to the GPU (compute shaders). As this is a graphics focused tutorial, we are going to focus on the visual side.

A visual shader is a shader that affects… well… the visuals. In the most basic of terms, a visual shader is a program that runs on the GPU that takes 3D model vertices, textures, and other bits of info and returns pixel colours. They are used to draw the triangles of 3D models on your screen.

Again, that is super oversimplifying it, so I am going to dig into the types of shader Unity has.

Surface Shaders

Let’s look at this from a light simulation perspective. If you look in the picture above (see it is not COMPLETELY irrelevant!) you can see there are a lot of lights and the materials on Kiryu’s clothes, skin etc are reacting to it. However, these materials all interact differently with the light. When light hits a material it can be absorbed, reflected, refracted and scattered.

The behaviour of the light rays hitting the surface creates the look of a specific material. For example, the jacket Kiryu is wearing has high light absorption making it look dark. The water on his face has an element of reflection making it look shiny.

This is where surface shaders come in.

Although you can write your own lighting behaviour in Unity, most of the time you don’t want to. That said, there are times you want to do custom lighting which we will look at in later articles, but again MOST of the time, you would use surface shaders. Why? Well, writing lighting equations in shaders can be “well hard”. There are different light types to factor in, different shadow options and different render paths (which we will cover later).

Surface Shaders kind of take the hard part away. As I said before, writing your own lighting behaviour can take a lot of work. Unity’s surface shaders essentially do that for you. The docs refer to this as an auto-generation approach, but essentially what they do is generate the repetitive lighting code under the hood. We will go into more detail about this later, but for now, all you need to know is that they are a way to deal with shaders that use lighting.
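To give a feel for how little you end up writing, here is a minimal sketch of a surface shader (my own example with made-up names, not from the original post). You fill in a surf function that describes the surface, and Unity generates all the vertex/fragment lighting code around it:

Shader "Examples/MinimalSurface"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "RenderType" = "Opaque" }

        CGPROGRAM
        // Ask Unity to build a surface shader around our "surf" function,
        // using the built-in Lambert (diffuse) lighting model.
        #pragma surface surf Lambert

        sampler2D _MainTex;

        struct Input
        {
            float2 uv_MainTex;
        };

        void surf (Input IN, inout SurfaceOutput o)
        {
            // We only describe the surface; Unity handles the lighting.
            o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
        }
        ENDCG
    }
    FallBack "Diffuse"
}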

Vertex and Fragment Shaders

These are your more “traditional” shaders. The first step is where the shader takes in the vertex data, i.e. the geometry, and can alter the data of each individual vertex in the model if you so choose. After this has been done, the result is passed to the second step, where a function outputs the colour of each pixel (fragment) that makes up the triangles. The first step is known as the vertex function and the second step is known as the fragment function. Hence the name.
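As a rough sketch (an unlit example I have put together, not from the original post), those two steps look something like this:

Shader "Examples/MinimalVertexFragment"
{
    Properties
    {
        _Color ("Tint", Color) = (1, 1, 1, 1)
    }
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            fixed4 _Color;

            struct appdata
            {
                float4 vertex : POSITION;
            };

            struct v2f
            {
                float4 pos : SV_POSITION;
            };

            // Step one: the vertex function. It runs per vertex and here just
            // transforms the position from object space to clip space.
            v2f vert (appdata v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                return o;
            }

            // Step two: the fragment function. It runs per pixel and returns a colour.
            fixed4 frag (v2f i) : SV_Target
            {
                return _Color;
            }
            ENDCG
        }
    }
}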

In Unity, Vertex and Fragment shaders are often used for non-realistic materials, 2D graphics or fancy post-processing effects. Let’s say I didn’t want to use the real-life lighting behaviour or physically based rendering as it is also known. Let’s say I want to render Kiryu in that above picture as a Cartoon Network character. We would use a Vertex and Fragment shader.

Vertex and Fragment shaders are also used for post-processing, which I briefly mentioned earlier. Post-processing is like adding effects to an image in Photoshop. Say I wanted to turn the image above black and white; I would write a vertex and fragment shader for that.
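A minimal black and white effect might look like this (again my own sketch, assuming the built-in render pipeline where you would apply it via Graphics.Blit in OnRenderImage; vert_img and v2f_img are helpers from UnityCG.cginc):

Shader "Examples/Greyscale"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Pass
        {
            ZTest Always Cull Off ZWrite Off

            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;

            fixed4 frag (v2f_img i) : SV_Target
            {
                fixed4 col = tex2D(_MainTex, i.uv);
                // Weighted average that roughly matches perceived brightness.
                fixed grey = dot(col.rgb, fixed3(0.299, 0.587, 0.114));
                return fixed4(grey, grey, grey, col.a);
            }
            ENDCG
        }
    }
}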

Also, say I have a 2D sprite and I want to take the shape of it and add a glow effect around it. I would probably use a vertex and fragment shader for that.

I also mentioned that surface shaders contain boilerplate lighting code. In fact, when you write a Surface Shader, it gets compiled down into a vertex and fragment one. Essentially, if you are thinking of implementing a different lighting model other than a realistic one, you will probably have to write a vertex and fragment shader.

Summary

  • Shaders are programs that are executed on your GPU and are used to draw the triangles of 3D models on your screen.
  • Shaders are often used to interact with light. In Unity, we can use Surface Shaders as a way to write effects that interact with light in a realistic way.
  • We can write Vertex and Fragment shaders to apply our own lighting models, create post-processing effects and affect 2D images.

For more info on Surface and Vertex and Fragment Shaders, below are some links to some old tutorials that are still relevant, albeit probably with some horrible typos. However, stay tuned, as I may do a revision post on both soon!

Unity Shader Tutorial – A look at a basic surface shader

Unity Shader Tutorial – a look at a basic vertex and fragment shader

Creating a Cloak Effect

It has been a while since I have done any type of tutorial or any technical blog post, so I thought I would share the process of how I implemented a “cloak” effect on my character. Essentially, in my game I am adding a cloak skill, and in order to visualise this to the player I want a sort of holographic, semi-transparent cloak effect. Something magicky and sci-fi. So I jumped back into my shader coding.

If you need a quick refresher on the makeup of shaders in Unity then here are a couple of previous posts (with horrendous typos that I should probably sort):

Unity Shader Tutorial – A look at a basic surface shader

Unity Shader Tutorial – a look at a basic vertex and fragment shader

Cool, let’s do it.

First we start by creating a Surface Shader and then stripping it back so it just performs diffuse lighting, which is a good starting point for all shader creation.

[Image: the stripped-back basic surface shader]
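The original post shows that as a screenshot, which hasn't survived here. Reconstructed, the stripped-back starting point looks roughly like this (the shader name and default colour are placeholders of mine):

Shader "Custom/CloakEffect"
{
    Properties
    {
        _Color ("Main Color", Color) = (1, 1, 1, 1)
    }
    SubShader
    {
        Tags { "RenderType" = "Opaque" }
        LOD 200

        CGPROGRAM
        // Plain diffuse (Lambert) lighting, nothing fancy yet.
        #pragma surface surf Lambert

        fixed4 _Color;

        struct Input
        {
            float2 uv_MainTex;
        };

        void surf (Input IN, inout SurfaceOutput o)
        {
            o.Albedo = _Color.rgb;
            o.Alpha = _Color.a;
        }
        ENDCG
    }
    FallBack "Diffuse"
}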

Firstly in our properties section we are going to add… well… a new property

[Image: the Properties block with the new _DotProduct property]

This exposes a “_DotProduct” float value to the editor, which we then declare in our CG program.

[Image: the matching _DotProduct declaration inside the CG program]

We also want to add a _MainTex property value pair into our shader.
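Pieced back together (the display name and range are my guesses), those additions would look something along these lines, with the declarations sitting inside the CGPROGRAM block of the shader above:

Properties
{
    _Color ("Main Color", Color) = (1, 1, 1, 1)
    _MainTex ("Texture", 2D) = "white" {}
    _DotProduct ("Rim Effect", Range(-1, 1)) = 0.25
}

// ...and inside CGPROGRAM:
sampler2D _MainTex;
float _DotProduct;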

We also want to change the tags.

[Image: the updated Tags block]

This will render the shader in the transparent rendering queue and will ignore projectors. If we want, as the player character is becoming invisible, we can also add the “ForceNoShadowCasting” tag, which will mean the object does not cast shadows.
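As a sketch, the tags would end up looking something like this (with the optional shadow tag shown commented out):

Tags
{
    "Queue" = "Transparent"
    "IgnoreProjector" = "True"
    "RenderType" = "Transparent"
    // Optional: stop the cloaked character casting shadows.
    // "ForceNoShadowCasting" = "True"
}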

We also want to disable the expensive PBR in this shader and switch it out for Lambertian reflectance, before telling the CG program this is a transparent shader and then finally disabling the lighting. Wow, that was a long sentence.

[Image: the surface shader pragma line]
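That whole sentence boils down to a single directive. This is my reconstruction rather than the original screenshot, and I am assuming the nolighting option was the one used to kill the lighting:

// Lambert = cheap diffuse instead of the Standard PBR lighting model,
// alpha:fade = treat this as a transparent (faded) surface,
// nolighting = skip the lighting calculations entirely (assumed option).
#pragma surface surf Lambert alpha:fade nolighting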

The last thing we want to do before we actually jump into our surface function is change what data gets fed into it. In this case we want the world normal and view direction, as well as what is already there.

[Image: the updated Input structure]
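Reconstructed, the input structure would be along these lines:

struct Input
{
    float2 uv_MainTex;
    float3 worldNormal;
    float3 viewDir;
};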

Cool, now we whack this code in the surface function.
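The original snippet is missing here, but based on the effect described, a surface function like the following gives the fading-silhouette look (the exact remapping with _DotProduct is my best guess):

void surf (Input IN, inout SurfaceOutput o)
{
    float4 color = tex2D(_MainTex, IN.uv_MainTex);
    o.Albedo = color.rgb;

    // 1 at the silhouette edges (surface side-on to the camera),
    // 0 where the surface faces the camera head-on.
    float border = 1 - abs(dot(IN.viewDir, IN.worldNormal));

    // _DotProduct controls how transparent the camera-facing areas get.
    float alpha = border * (1 - _DotProduct) + _DotProduct;
    o.Alpha = color.a * alpha;
}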

And we get this effect:

[Gif: the effect on a sphere]

On a sphere it looks pretty basic, although it makes a cool atmosphere-style effect, similar to the one I used in my seasonal globe.

What this shader is actually doing is showing the silhouette of the object. If I swap the sphere out for something else and move around it, we can see the outline change.

[Gif: the silhouette changing as the camera moves]

After I tweaked the values a bit and applied it to my asset store model, here is what I got:

[Gif: the cloak effect on my character model]

The gif doesn’t do it as much justice as seeing it in-game, but I actually think this is quite cool, and this shader is cheaper than the previous refraction shader I was using.

I also had a little experiment for fun with my enemy model, adding a wave to all the verts, and made what I am calling the “Demontor” shader:

[Gif: the “Demontor” shader]

Unity Shader Tutorial: An intro to Compute Shaders

Are you ready to turn this up to 11? We are going to look at some real “Triple A” business now: the world of compute shaders. So what are these mysterious creatures that you probably don’t know exist in Unity?

To be honest, I had completely forgotten they were there. I was looking at a fur tutorial (that doesn’t actually seem to work, by the way, and was also a really dirty way of doing it) and then remembered my mate had said you could probably do grass and fur in one. I think he actually meant geometry shaders, but compute shaders piqued my interest.

However, after digging around the net it turns out that info surrounding them when it comes to Unity seems quite scarce.

Let’s start from the top!

What is a compute shader, and why should I care?

In Microsoft’s fancy terms, “a compute shader is a programmable shader stage that expands Microsoft Direct3D 11 beyond graphics programming” and “a compute shader provides high-speed general purpose computing and takes advantage of the large numbers of parallel processors on the GPU”.

In simple terms, a compute shader is a program that runs on the graphics card that does stuff outside of the normal rendering pipeline.

So you are probably thinking “OK, I kind of get it, you can run some logic and put some work onto the graphics card, but why would I want to do that?” Well, these shaders are really good at maths and parallelization, i.e. they are really good at performing tasks where you are doing a lot of the same thing. In other words, they are really good at tasks that involve applying the same set of calculations to every element in a given data set.

This is probably a kind of crappy explanation, so let’s wind the clock back a bit to when I was just gracing the planet with my presence. The 90s. It was a beautiful time with games like Doom, Final Fantasy 7, The Legend of Zelda: Ocarina of Time, Crash Bandicoot, Tekken 3… do I need to go on? Essentially lots of 3D games, and PCs started shipping with graphics cards. Stuff like this bad boy.

What a rush indeed! Getting that sweet 6MB of power all up in your grill. Anyway, OpenGL and DirectX appeared and the magic of the fixed-function pipeline emerged. Developers would just send geometry down to the graphics card and OpenGL/DirectX would figure it out. However, the pipeline was pretty rigid, and to make more interesting effects and push the boundaries it had to become more flexible. This led on to shaders, where devs could write their own programs to perform certain parts of the pipeline and make things look like the wizard’s tits.

This then opened up a lot of possibilities; the new system meant that the pipeline could deal with a lot of different types of algorithms, and now the GPU can do stuff like crazy multi-threaded physics, etc.

What this means now is we can do crazy stuff like NVIDIA’s HairWorks.

You on board now? If not, just know it is cool and you feel like a Game Development Maverick when you do it.

Basically, you can potentially harness the GPU to do non-graphicsy stuff if you so desire and gain MOAR POWER.

Sod it, let’s jump in!

That’s the attitude I want!

Before you start though you need a WINDOWS machine. Macs don’t have it. And to be honest they are kinda crappy for big boy game development like this anyway 😛

Create a compute shader in Unity.

The first thing you will notice is that this is not CG. This is a DirectX 11 style HLSL bad boy. Yeah, fasten your seat belts, boys and girls.

// Each #kernel tells which function to compile; you can have many kernels
#pragma kernel CSMain

// Create a RenderTexture with enableRandomWrite flag and set it
// with cs.SetTexture
RWTexture2D<float4> Result;

[numthreads(8,8,1)]
void CSMain (uint3 id : SV_DispatchThreadID)
{
     // TODO: insert actual code here!

     Result[id.xy] = float4(id.x & id.y, (id.x & 15)/15.0, (id.y & 15)/15.0, 0.0);
}

So the above is what you get if you just create one from scratch.

The #pragma kernel CSMain is the first thing we see in our shader. Kind of like the other shaders, this is us telling the program where our entry point is, in this case CSMain. A compute shader can have many functions (kernels) and you can call a specific one from a script. More on that later.

The next bit is a RWTexture2D<float4> Result

Again, like our other shaders, this is just a variable declaration. However, as we aren’t using mesh data, we have to say what the shader will read from and write to. In this case we have a RWTexture2D, a read/write 2D texture object that the program is gonna use. Take a look at MSDN for reference:

https://msdn.microsoft.com/en-gb/library/windows/desktop/ff471505(v=vs.85).aspx

Finally, the last super different thing is numthreads, which defines the size of the thread groups spawned by our shader. GPUs love the parallel processing business and create threads that run simultaneously. This line specifies the dimensions of a single thread group, i.e. how the threads that get created are organised, and in this case we are saying that each group contains 8 x 8 x 1 = 64 threads. Take a look at MSDN for reference:

https://msdn.microsoft.com/en-us/library/windows/desktop/ff471442(v=vs.85).aspx

The size of your thread groups will be determined by a lot of factors, probably most notably your target hardware. For example, the PS4 may have a different optimum size compared to the Xbox One.

The rest is kind of bog-standard code. The kernel function determines what pixel it should be working on based on the uint3 ID of the thread running the function, and writes some colour data into the Result texture.
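To make the thread maths concrete, here is a little driver script of my own (not from the original post). With numthreads(8,8,1), dispatching 32 x 32 thread groups gives 256 x 256 threads in total, one per pixel of a 256 x 256 texture:

using UnityEngine;

public class RunTextureKernel : MonoBehaviour
{
    [SerializeField]
    private ComputeShader _shader;

    void Start()
    {
        // RWTexture2D needs a render texture created with random write enabled.
        RenderTexture output = new RenderTexture(256, 256, 0);
        output.enableRandomWrite = true;
        output.Create();

        int kernel = _shader.FindKernel("CSMain");
        _shader.SetTexture(kernel, "Result", output);

        // 256 pixels / 8 threads per group = 32 groups in x and y.
        _shader.Dispatch(kernel, 256 / 8, 256 / 8, 1);
    }
}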

Cool. We have our first compute shader! But how do we actually run this warlock? It doesn’t run on mesh data so we can’t attach it to a mesh. We need to grab it from a script.

But hold up, before we do that, let’s change up the shader that Unity spat out and make the compute shader do something different.

// Each #kernel tells which function to compile; you can have many kernels
#pragma kernel CSMain

RWStructuredBuffer<int> buffer1;
 
[numthreads(4,1,1)]
void CSMain (uint3 id : SV_DispatchThreadID)
{
    // Each thread writes double its own index into the buffer.
    buffer1[id.x] = mul(id.x, 2.0);
}

You can see we switched out the texture for a structured buffer. This is just an array of data consisting of a single data type, in this case an int. In the code you can see we are just taking the ID of the thread and multiplying it by 2.

Cool, let’s write a new script.

using UnityEngine;
using System.Collections;

public class RunComputeShader : MonoBehaviour
{
    [SerializeField]
    private ComputeShader _shader;

    void Start()
    {
        // A buffer with 4 elements, each the size of an int.
        ComputeBuffer buffer = new ComputeBuffer(4, sizeof(int));

        // Bind the buffer to "buffer1" in the shader (0 is the index of the CSMain kernel).
        _shader.SetBuffer(0, "buffer1", buffer);

        // Run one thread group; with numthreads(4,1,1) that's 4 threads, one per element.
        _shader.Dispatch(0, 1, 1, 1);

        // Copy the results back from the GPU and print them.
        int[] data = new int[4];
        buffer.GetData(data);

        for (int i = 0; i < 4; i++)
        {
            Debug.Log(data[i]);
        }

        buffer.Release();
    }
}

Firstly, we are creating a compute buffer with four elements, each the size of an int; this is a buffer that ComputeShader programs use to store arbitrary data. We then use SetBuffer to bind it to the shader so the shader can dump data in there. We use the Dispatch function to run our shader and then use GetData to grab the work the shader has done.

If you set up the above, you should see it print out some numbers in the console. Yeah, it did that on the graphics card.

Alright fine, it wasn’t the most crazy thing in the world, but it is just showing you that work other than just rendering pretty images can be done.

Round up

This is a post to show you compute shaders are there. I am not saying go out and use them everywhere. The GPU can be used to do some cool multi-threaded tasks, however, a word to the wise: the tasks the GPU is suited to are going to be limited, and you really have to look at the problem you are trying to solve before you go down this path. If your game is gonna be super pretty, you probably want to be maxing out the GPU on that first before offloading stuff the CPU can do onto it. If your GPU is just idling though… maybe on some lower-poly strategy game, etc., then maybe consider offloading some of the logic to the GPU using a compute shader.

Well until next time!