A brief shader overview in Unity

I have started playing Yakuza 0 again. It is worth picking up!

Anyway, as you may have guessed, I haven’t got a relevant image for this article, so I just whacked in what I was playing.

Shaders are often treated as this scary, black-magic game dev term that a lot of people run away from.

Shaders are bits of code that run on the GPU. Most of the time they affect the visuals of the rendered image, but in some cases you can also use them to offload complex calculations to the GPU (compute shaders). As this is a graphics-focused tutorial, we are going to focus on the visual side.

A visual shader is a shader that affects… well… the visuals. In the most basic of terms, a visual shader is a program that runs on the GPU, takes 3D model vertices, textures, and other bits of info, and returns pixel colours. They are used to draw the triangles of 3D models on your screen.

Again, that is super oversimplified, so let’s dig into the types of shader Unity has.

Surface Shaders

Let’s look at this from a light simulation perspective. If you look at the picture above (see, it is not COMPLETELY irrelevant!) you can see there are a lot of lights, and the materials on Kiryu’s clothes, skin, etc. are reacting to them. However, these materials all interact differently with the light. When light hits a material it can be absorbed, reflected, refracted and scattered.

The behaviour of the light rays hitting the surface creates the look of a specific material. For example, the jacket Kiryu is wearing has high light absorption, making it look dark. The water on his face has an element of reflection, making it look shiny.

This is where surface shaders come in.

Although you can write your own lighting behaviour in Unity, most of the time you don’t want to. That said, there are times you want custom lighting, which we will look at in later articles, but again, MOST of the time you would use surface shaders. Why? Well, writing lighting equations in shaders can be “well hard”. There are different light types to factor in, different shadow options, and different render paths (which we will cover later).

Surface shaders take most of that hard part away. As I said before, writing your own lighting behaviour can take a lot of work, and Unity’s surface shaders essentially do it for you. The docs refer to this as an auto-generation approach: you describe the properties of the surface (albedo, normal, smoothness and so on) and Unity generates the repetitive lighting code under the hood. We will go into more detail about this later, but for now, all you need to know is that they are a way to write shaders that interact with lighting.
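
To give a flavour of what that looks like, here is a minimal sketch of a surface shader using Unity’s built-in Standard lighting model. The shader name, property names and default texture here are just placeholders for illustration, not from any particular project:

Shader "Custom/BasicSurface"
{
    Properties
    {
        _Color ("Colour", Color) = (1,1,1,1)
        _MainTex ("Albedo (RGB)", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }

        CGPROGRAM
        // Ask Unity to generate the lighting code for us,
        // using the physically based Standard lighting model
        #pragma surface surf Standard

        sampler2D _MainTex;
        fixed4 _Color;

        struct Input
        {
            float2 uv_MainTex;
        };

        // All we describe is what the surface looks like;
        // Unity works out how the lights interact with it
        void surf (Input IN, inout SurfaceOutputStandard o)
        {
            fixed4 c = tex2D(_MainTex, IN.uv_MainTex) * _Color;
            o.Albedo = c.rgb;
            o.Alpha = c.a;
        }
        ENDCG
    }
    FallBack "Diffuse"
}

All the actual lighting maths, shadow handling and render path plumbing is what Unity generates for you behind the scenes.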

Vertex and Fragment Shaders

These are your more “traditional” shaders. In the first step, the shader takes in vertex data, i.e. the geometry, and can alter the data of each individual vertex in the model if you so choose. After this has been done, the result is passed to the second step, where a function outputs the colour of each pixel (fragment) covered by the geometry. The first step is known as the vertex function and the second step is known as the fragment function. Hence the name.

In Unity, vertex and fragment shaders are often used for non-realistic materials, 2D graphics or fancy post-processing effects. Let’s say I didn’t want to use real-life lighting behaviour, or physically based rendering as it is also known, and instead wanted to render Kiryu in the picture above as a Cartoon Network character. We would use a vertex and fragment shader for that.

Vertex and fragment shaders are also used for post-processing, which I briefly mentioned earlier. Post-processing is like adding effects to an image in Photoshop. Say I wanted to turn the image above black and white; I would write a vertex and fragment shader for that.
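
As a rough sketch of that idea (the shader name and the exact greyscale weights are just illustrative), a black and white effect could look something like this:

Shader "Custom/Greyscale"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                float4 vertex : SV_POSITION;
            };

            sampler2D _MainTex;

            // Vertex function: move each vertex from object space to clip space
            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = v.uv;
                return o;
            }

            // Fragment function: sample the image and output a grey colour per pixel
            fixed4 frag (v2f i) : SV_Target
            {
                fixed4 col = tex2D(_MainTex, i.uv);
                fixed grey = dot(col.rgb, fixed3(0.299, 0.587, 0.114));
                return fixed4(grey, grey, grey, col.a);
            }
            ENDCG
        }
    }
}

To actually apply it as a post-process in the built-in render pipeline, you would typically put a small script on the camera that calls Graphics.Blit with a material using this shader inside OnRenderImage.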

Also, say I have a 2D sprite and I want to take the shape of it and add a glow effect around it. I would probably use a vertex and fragment shader for that.

I also mentioned that surface shaders generate boilerplate lighting code. In fact, when you write a surface shader, it gets compiled down into a vertex and fragment one. Essentially, if you are thinking of implementing a lighting model other than a realistic one, you will probably have to write a vertex and fragment shader.

Summary

  • Shaders are programs that are executed on your GPU and are used to draw the triangles of 3D models on your screen
  • Shaders are often used to interact with light. In Unity, we can use Surface Shaders as a way to write effects that interact with light in a realistic way.
  • We can write vertex and fragment shaders to apply our own lighting models, create post-processing effects and affect 2D images

For more info on surface shaders and vertex and fragment shaders, below are links to some old tutorials that are still relevant, probably with some horrible typos. However, stay tuned, as I may do a revision post on both soon!

Unity Shader Tutorial – A look at a basic surface shader

Unity Shader Tutorial – a look at a basic vertex and fragment shader


Unreal Engine – Exposing Functions and Variables to the Editor

This is a repost of something I put on Twitter a while ago, when I first started learning Unreal Engine last year: I was struck by how intuitive it is to expose things to the editor. Unity lets you expose variables to the editor by using the SerializeField attribute. Unreal Engine has a similar mechanism in the form of the UPROPERTY macro.

I have shamelessly stolen the example from the Unreal documentation (for context, this is here as part of my notes as well as a reference for anyone else), but if you do the following:

//How long, in seconds, the countdown will run
UPROPERTY(EditAnywhere)
int32 CountdownTime;

You will see the variable appear in the editor:

Also, notice there is a comment above the property in the code; that gets exposed to the editor too.

Neat.

 

What is even cooler is that you can expose your own code to the Blueprint system. I love Blueprints; they are powerful and great for visualising logic.

In one of my older projects, I needed to expose the move direction of the player. By adding the UFUNCTION macro with the BlueprintCallable specifier, the function is exposed to the Blueprint system!
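
Something along these lines would do it; the class, function and member names here are hypothetical, not the actual code from that project:

#include "CoreMinimal.h"
#include "GameFramework/Character.h"
#include "MyCharacter.generated.h"

UCLASS()
class AMyCharacter : public ACharacter
{
    GENERATED_BODY()

public:
    // BlueprintCallable exposes this function to Blueprint graphs
    UFUNCTION(BlueprintCallable, Category = "Movement")
    FVector GetMoveDirection() const { return MoveDirection; }

private:
    FVector MoveDirection;
};

Once that compiles, the function shows up as a callable node in any Blueprint that has access to the character.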

You can find out more about this in the Unreal documentation:

https://docs.unrealengine.com/en-us/Programming/Tutorials/VariablesTimersEvents/2


C++ Struct Memory Layout


I have been playing God of War recently (also known as Dad of Boy) and it is amazing. Out of curiosity, I looked at what it would take to be a gameplay programmer (I am not planning to move to the States, but you know, you never know what the future may hold) and once again it said “expert C++” or something along those lines. Although I live in C# Unity land most days, my C++ is pretty good; however, I would hardly call it expert. I thought I should brush up on some of the slightly more advanced parts that I don’t usually run into in my day-to-day C#/Unity programming.

The first of these tips and tricks is C++ memory layout, specifically how the order of member declarations in a struct affects its size.

Let’s look at an example.

struct S
{
    char Char1;
    int Number1;
    char Char2;
    int Number2;
};

If we calculate the size of each member using sizeof (on a typical platform where an int is 4 bytes):

  • Char1 – 1 byte
  • Number1 – 4 bytes
  • Char2 – 1 byte
  • Number2 – 4 bytes

And if we do a sizeof of the whole struct, we get 16. Hang on, our calculations don’t quite add up here: 1 + 4 + 1 + 4 = 10. The structure is 6 bytes longer than the sum of its members. Huh, why? Maybe changing the order will change things.

struct S
{
    char Char1;
    char Char2;
    int Number1;
    int Number2;
};

Then we do a sizeof again. This time it shows the struct is taking up 12 bytes. Moving the Char2 declaration caused the struct to shrink by 4 bytes.

Why?

Well, the hardware is optimised to read data from memory addresses that are multiples of the data’s size (its alignment). Ints are 4 bytes and are thus read from addresses that are multiples of 4, so ints would be read from addresses such as 0, 4, 8, etc. A char, on the other hand, is 1 byte and can be read from any address: 0, 1, 2, 3, etc.

The thing is, the C++11 standard states that data members with the same access specifier are laid out in memory in the order they are declared. In a struct, all members are public by default, so the data is laid out in exactly the order you wrote it, and the compiler has to insert padding to keep each member properly aligned.

Essentially, what happens is that the data gets padded when it is laid out. Let’s walk through it.

The compiler starts at address 0, allocating the byte for Char1. The following addresses, 1 to 3, are not multiples of 4, so they are not suitable to put an int in and get skipped; Number1 gets allocated addresses 4 to 7. Char2 then takes address 8, addresses 9 to 11 are skipped again, and Number2 ends up at addresses 12 to 15. All that skipping is padding, and it is what makes the struct grow to 16 bytes. Now let’s take a look at the rearranged struct:

The padding is a lot better in this version: Char1 and Char2 sit at addresses 0 and 1, only addresses 2 and 3 are padding, and Number1 and Number2 land at addresses 4 to 7 and 8 to 11. That gives us the 12 bytes we measured.
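
If you want to check this yourself, here is a small sketch that prints the sizes and member offsets. The exact numbers depend on your compiler and platform; I am assuming a typical one where an int is 4 bytes.

#include <cstddef>
#include <cstdio>

// Same structs as above, just renamed so they can live in one file
struct Padded { char Char1; int Number1; char Char2; int Number2; };
struct Packed { char Char1; char Char2; int Number1; int Number2; };

int main()
{
    // Typically 16 and 12 where sizeof(int) == 4
    std::printf("sizeof(Padded) = %zu\n", sizeof(Padded));
    std::printf("sizeof(Packed) = %zu\n", sizeof(Packed));

    // offsetof shows where the padding went: Number1 usually starts at
    // offset 4, leaving 3 bytes of padding after Char1 in the Padded version
    std::printf("Padded::Number1 offset = %zu\n", offsetof(Padded, Number1));
    std::printf("Packed::Number1 offset = %zu\n", offsetof(Packed, Number1));

    return 0;
}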

In conclusion, the layout of C++ objects does matter. If you pay attention to the order of each member, you can make your structs smaller and leave a smaller memory footprint. Neat, huh?
