Unity Shader Tutorial – a look at a basic vertex and fragment shader


I think the pictures I put on these tutorials are going to be whatever game I am playing at the moment. Since we watched the retrospective on Mortal Kombat and used the magic of Amazon Prime Now to get the game within 2 hours of watching it, we have become a bit Kombat Krazy.

Anyway, let's move on to why you are here: the Vertex and Fragment Shaders! Last time we said there were two types of shader and we broke down a Surface Shader.

As I said last time, if you want to write a shader that interacts with lighting, you probably want a surface shader; however, you can do it in a vertex and fragment shader if you really want to. If you want to create cool, advanced effects that are beyond the capabilities of the surface shader, you are going to want to write a vertex and fragment shader.

Once again, create a new shader, but this time select “image effect shader”.

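If you open up the new file, you should see something roughly like the listing below. This is the Unity 5-era template, so depending on your Unity version the generated code may differ slightly (newer versions, for example, call UnityObjectToClipPos(v.vertex) instead of doing the mul explicitly), and the name at the top will match whatever you called the file.

Shader "Hidden/NewImageEffectShader"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        // No culling or depth
        Cull Off ZWrite Off ZTest Always

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                float4 vertex : SV_POSITION;
            };

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = mul(UNITY_MATRIX_MVP, v.vertex);
                o.uv = v.uv;
                return o;
            }

            sampler2D _MainTex;

            fixed4 frag (v2f i) : SV_Target
            {
                fixed4 col = tex2D(_MainTex, i.uv);
                // just invert the colors
                col = 1 - col;
                return col;
            }
            ENDCG
        }
    }
}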

Instead of doing stuff in order like last time, I am going to jump straight into the important stuff. Inside your main block of shader code you will see a vert and a frag function. In a vertex and fragment shader, the computation is done in two steps. The first step runs the geometry through the vert function, which can alter the position and data of each vertex. The second step takes the output of the vert function and runs it through the frag function, which determines the final colour of each pixel.

You can actually see this flow in the code: the appdata struct that contains the position and texture coords of the geometry is passed into the vert function, which in turn returns a v2f struct that is passed into frag. Pretty cool, huh?

So what is actually happening in these functions? Some really simple stuff, actually. If we look at our vert function to start with, we see that we are taking in our vertices and running them through some mul function with some sort of crazy matrix syntax. BUT WHAT ARE YOU DOING, MUL FUNCTION?! Well, what the vert function receives is the position of each vertex in the model's own OBJECT SPACE, and that needs to be converted into CLIP SPACE, which is what the GPU uses to work out where the vertex ends up on the screen. In other words: for a given point on a 3D model, where does it land in 2D pixel coordinates? That line of shader code is doing exactly that; it multiplies our vertices by the Model-View-Projection (MVP) matrix. After that, it just takes a copy of the texture coords so we can use them in our frag function.
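Here is the vert function again on its own, with a few comments of my own spelling out what each line does (in newer Unity versions the generated template writes o.vertex = UnityObjectToClipPos(v.vertex), which performs the same multiplication for you):

v2f vert (appdata v)
{
    v2f o;
    // Multiply the object-space vertex position by the combined
    // Model-View-Projection matrix to get the clip-space position,
    // which the GPU then maps to actual pixel coordinates on screen.
    o.vertex = mul(UNITY_MATRIX_MVP, v.vertex);
    // Take a copy of the texture coordinates so frag can use them later.
    o.uv = v.uv;
    return o;
}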

Looking at the frag function, you may be getting a bit of déjà vu. Yes, once again all that is happening here is that we grab the colour of each pixel from the texture we pass in. Then, to show the shader is actually doing something, we invert the colours.
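If you want to convince yourself that this is where the per-pixel work happens, try swapping the inversion for something else. For example, here is a quick greyscale version of frag (my own variation, not part of the generated template):

fixed4 frag (v2f i) : SV_Target
{
    // Sample the texture at this pixel's interpolated UV coordinates.
    fixed4 col = tex2D(_MainTex, i.uv);
    // Instead of inverting, collapse the colour to grey using the usual luminance weights.
    fixed grey = dot(col.rgb, fixed3(0.299, 0.587, 0.114));
    col.rgb = fixed3(grey, grey, grey);
    return col;
}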

Semantics

OK, so we have looked at the new functions, but we have also noticed stuff in our shader written in capitals and placed after a colon, for example POSITION, SV_POSITION and TEXCOORD0. WHO ARE YOU, CRAZY CAPITALISED SHOUTY THINGS?! These are actually Semantics. I guess you can think of them as tags that tell the GPU what a variable is for and how it should be initialised. For example, the POSITION semantic used on the vertex variable in the appdata struct is telling Unity that we want the vertex variable initialised with the vertex positions. (EDIT) Similarly, there is SV_POSITION, which is specifically used for defining the position in the output struct; this variable holds the screen position of a vertex (you know, the ones we figured out in the vert function). Read this thread for more info on why it is different to POSITION. Unity itself will initialise the appdata input structure, but we need to fill the v2f structure ourselves, which is why we do the mul operation in the vert function. All the fields in the two structs need a semantic, regardless of how they are populated.
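To make that concrete, here are the two structs from the shader again, with a comment (mine) against each semantic describing what it is asking for:

struct appdata
{
    float4 vertex : POSITION;    // Unity fills this with the object-space vertex position of the mesh
    float2 uv : TEXCOORD0;       // Unity fills this with the mesh's first set of texture coordinates
};

struct v2f
{
    float2 uv : TEXCOORD0;       // a general-purpose slot; we copy the UVs into it ourselves in vert
    float4 vertex : SV_POSITION; // the position we calculate in vert, which the rasteriser then uses
};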

In summary, a semantic is a string attached to a shader input or output that conveys information about the intended use of a parameter. You can take a look at the following for more information on which semantics are available and for more detail on each of them:

https://msdn.microsoft.com/en-us/library/windows/desktop/bb509647(v=vs.85).aspx

http://developer.download.nvidia.com/cg/Cg_3.1/Cg-3.1_April2012_ReferenceManual.pdf

Pass

The final thing I want to talk about is that Pass keyword written near the top of our shader. A Pass block causes the geometry to be rendered once, and a single shader can contain multiple passes. Most of the time, vertex and fragment sub-shaders will only have one pass.

What this actually gives you is the ability to put two or more vertex/fragment programs together, with each later pass drawn over the previous one, letting you create some cool effects.
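As a rough sketch of the idea (this is not from the generated template, and the _TintColor property is just something I made up for the example), here is a shader with two passes: the first draws the geometry with its texture, and the second draws the same geometry again, blended on top as a flat see-through tint:

Shader "Custom/TwoPassExample"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
        _TintColor ("Tint", Color) = (1, 0, 0, 0.5)
    }
    SubShader
    {
        // First pass: draw the geometry with its texture as normal.
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f
            {
                float2 uv : TEXCOORD0;
                float4 pos : SV_POSITION;
            };

            sampler2D _MainTex;

            v2f vert (appdata_base v)
            {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                o.uv = v.texcoord.xy;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                return tex2D(_MainTex, i.uv);
            }
            ENDCG
        }

        // Second pass: draw the same geometry again, alpha-blended over the first pass.
        Pass
        {
            Blend SrcAlpha OneMinusSrcAlpha

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f
            {
                float4 pos : SV_POSITION;
            };

            fixed4 _TintColor;

            v2f vert (appdata_base v)
            {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // Just output the flat tint; the Blend line above mixes it with the first pass.
                return _TintColor;
            }
            ENDCG
        }
    }
}

If you put this on a material, you should see the object drawn normally and then washed over with a semi-transparent red, which is the second pass doing its thing.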

Passes can also be used with surface shaders, but you cannot mix the two. What I mean by this is that you cannot have a vertex/fragment pass and then a surface shader, or vice versa; you must stick to one type or the other.

 
