Unity Shader Tutorial: Surface Shader Custom lighting

Yes, I know I said I wasn't going to do a tutorial this weekend, but I was learning some shader stuff for my project, and the cel-shading in Tales of Vesperia, Guilty Gear Xrd and other games is a big inspiration. I was following a Gamasutra article on how to do this and saw the author was using a custom lighting function within a surface shader.

When working on some projects, you don't always want to go for super realistic lighting. You may want to use a custom lighting model rather than just the built-in ones. Here is the shader code I am using, a slightly modified version of the one in the Gamasutra article:

lightingshader
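The full shader is a bit long to pick apart in one go, so here is a stripped-down sketch with the same overall structure (the lighting function body below is just a basic two-band cel shade written for illustration, not the full version from the article):

```shaderlab
Shader "Custom/CelShadedExample"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }

        CGPROGRAM
        // Note: the lighting model name here drops the "Lighting" prefix
        #pragma surface surf CelShadedUsingForwardRendering

        sampler2D _MainTex;

        struct Input
        {
            float2 uv_MainTex;
        };

        // Custom lighting function: the name must start with "Lighting"
        half4 LightingCelShadedUsingForwardRendering (SurfaceOutput s, half3 lightDir, half atten)
        {
            // Very simple two-band cel shading: fully lit or flat shadow
            half NdotL = dot(s.Normal, lightDir);
            half lightBand = NdotL > 0 ? 1 : 0.3;

            half4 c;
            c.rgb = s.Albedo * _LightColor0.rgb * lightBand * atten;
            c.a = s.Alpha;
            return c;
        }

        void surf (Input IN, inout SurfaceOutput o)
        {
            o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
            o.Alpha = 1;
        }
        ENDCG
    }
    FallBack "Diffuse"
}
```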

Take a look at the first #pragma line, where we define the surface shader function and the lighting model. Instead of using one of the built-in lighting models, we are using a method we have defined ourselves. Scroll down the shader and you will see the method "LightingCelShadedUsingForwardRendering". This is the custom lighting calculation we are using. It is a function you can define in various ways, as long as it returns a half4.

The one thing that caught me out when creating this custom method is that you need to prefix its name with "Lighting", but when you tell the #pragma which lighting model to use, you write the name minus that "Lighting" prefix. You are now using custom lighting in your shader! Neat huh?

It is worth taking a further look at the Unity docs to see other examples of how you can define your own lighting models:

http://docs.unity3d.com/Manual/SL-SurfaceShaderLightingExamples.html

Unity Shader Tutorial: Talking to Shaders through C# scripts

Hey look, a picture of Final Fantasy XV. Probably the game I am most excited for because I enjoyed everything about the demo. I might even go back and play that again this evening.

Anyway, we have been talking about how we can write some shaders, but how can we hook them up to C# scripts to create cool image effects? Well, we can have a look at one in the Standard Assets. First, open up your Unity project, then go to Assets->Import Package->Effects. Once you have that imported, open up ScreenOverlay.cs. You can attach this to a camera if you want to see what it does, but let's jump into the important bits.

Firstly, you will see at the top of the class there is a public variable for the Shader and a private one for the Material. If you keep going down the class, you will see a call to CheckShaderAndCreateMaterial inside the CheckResources method. If you are in Visual Studio (and you should be, because MonoDevelop is garbage), right click and go to the definition of that function and you will find yourself in PostEffectsBase, the base class for a fair few of the Post Processing Effects.

This particular function creates the Material. If I haven't said it already by now, every Shader needs a Material, and because the Post Processing Effects work a bit differently to the usual "create a material through the inspector, then select which shader it uses" flow, the Post Processing effect needs to use this function to create its material at runtime. As you can see, if all is well, it creates a new material with the given shader.
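In essence it boils down to something like this (a simplified sketch of the idea, not the actual Standard Assets code, and the class and method names here are made up):

```csharp
using UnityEngine;

// Simplified sketch of the idea behind CheckShaderAndCreateMaterial
// (made-up class and method names, not the real PostEffectsBase code).
public class MaterialCreationSketch : MonoBehaviour
{
    protected Material CreateMaterialFromShader(Shader shader, Material existing)
    {
        if (shader == null || !shader.isSupported)
            return null; // can't run this effect on the current hardware

        if (existing != null && existing.shader == shader)
            return existing; // already have a usable material for this shader

        var material = new Material(shader);
        material.hideFlags = HideFlags.DontSave; // runtime-only material, don't serialise it
        return material;
    }
}
```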

That bit is important, but it is not always needed; it is only needed here because of the way the Post Processing Effects work. If you head back to the ScreenOverlay.cs file and scroll down to the OnRenderImage function, you will see functions being called on the Material, with names like "SetVector", that take a string followed by a value of the corresponding type.

Head back into Unity and search for the BlendModesOverlay shader.

Inside the shader you will see there are variables declared like "half _Intensity". Now, if you head back to ScreenOverlay.cs, you can see the line "SetFloat("_Intensity", intensity)". Essentially, we can set textures, floats, vectors, etc. that we declare in shaders through C# code by using these functions. You can also grab values back out too. For the full reference, head over to the Unity documentation.

http://docs.unity3d.com/ScriptReference/Material.html
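To make the idea concrete, here is a tiny made-up component (not the Standard Assets ScreenOverlay script; the property names are just examples) that pushes values into a material's shader properties every frame:

```csharp
using UnityEngine;

[RequireComponent(typeof(Renderer))]
public class MaterialPropertyExample : MonoBehaviour
{
    public float intensity = 2.0f;
    public Color tint = Color.red;

    private Material material;

    void Start()
    {
        // Grab an instance of the material on this object's renderer
        material = GetComponent<Renderer>().material;
    }

    void Update()
    {
        // These names must match variables declared in the shader,
        // e.g. "half _Intensity;" and "fixed4 _Tint;"
        material.SetFloat("_Intensity", intensity);
        material.SetColor("_Tint", tint);

        // You can read values back out as well
        float current = material.GetFloat("_Intensity");
    }
}
```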

Unity Shader Tutorial – a look at a basic vertex and fragment shader


I think the pictures I put on these tutorials are going to be whatever game I am playing at the moment. Since we watched the retrospective on Mortal Kombat (and used the magic of Amazon Prime Now to get it within 2 hours of watching it), we have become a bit Kombat Krazy.

Anyway, let's move on to why you are here: the Vertex and Fragment Shaders! Last time we said there were two types of shader, and we broke down a Surface Shader.

As I said last time, if you want to write a shader that interacts with lighting, you probably want a surface shader, though you can do it in a vertex and fragment shader if you really want. If you want to create cool, advanced effects that are outside the capabilities of the surface shader, you are going to want to write a vertex and fragment shader.

Once again, create a new shader, but this time select "Image Effect Shader".

basic shader fragment

vert and frag functions
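For reference, the generated code looks roughly like this (a sketch based on the Image Effect Shader template; the exact template varies between Unity versions, so yours may differ slightly):

```shaderlab
Shader "Hidden/BasicImageEffectSketch"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        // No culling or depth, as is usual for image effects
        Cull Off ZWrite Off ZTest Always

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            // Input from the mesh: vertex position and texture coordinates
            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            // Output of vert / input to frag
            struct v2f
            {
                float2 uv : TEXCOORD0;
                float4 vertex : SV_POSITION;
            };

            v2f vert (appdata v)
            {
                v2f o;
                // Transform the vertex by the Model View Projection matrix
                o.vertex = mul(UNITY_MATRIX_MVP, v.vertex);
                o.uv = v.uv;
                return o;
            }

            sampler2D _MainTex;

            fixed4 frag (v2f i) : SV_Target
            {
                fixed4 col = tex2D(_MainTex, i.uv);
                // Just invert the colours to show something is happening
                col = 1 - col;
                return col;
            }
            ENDCG
        }
    }
}
```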

Instead of going through things in order like last time, I am going to jump straight in to the important stuff. Inside your main block of shader code you will see a vert and a frag function. In a vertex and fragment shader, the computation is done in two steps. The first step runs the geometry through the vert function, which can alter the position and data of each vertex. The second step takes the result of the vertex function and runs it through the frag function, which decides the colour of each pixel.

You can actually see this flow in the code. The appdata struct that contains the position and texture coords of the geometry is passed into the vert function, which in turn returns a v2f struct that is passed into frag. Pretty cool, huh?

So what is actually happening in these functions? Some really simple stuff, actually. If we look at our vert function to start with, we see that we are taking our vertices and running them through some mul function with some sort of crazy matrix syntax. BUT WHAT ARE YOU DOING, MUL FUNCTION?! Well, what the vert function receives is the position of each vertex in MODEL COORDINATES, which needs to be converted into SCREEN COORDINATES. In other words, for a given point on a 3D model, where does it end up in 2D on the screen? This line of shader code is doing exactly that: it multiplies our vertices by the Model View Projection matrix, which takes a vertex from its position on the model, through world and camera space, out to its projected position on the screen. After that, it is just taking a copy of the texture coords so we can use them in our frag function.

Looking at the frag function, you may be getting a bit of deja vu. Yes, once again all that is happening here is we are grabbing the colour of the pixel from the texture we pass in. Then, to show the shader is actually doing something, we invert the colours.

Semantics

OK, so we looked at the new functions, but you may also have noticed stuff in our shader in capitals placed after a colon, for example POSITION, SV_POSITION and TEXCOORD0. WHO ARE YOU, CRAZY CAPITALISED SHOUTY THINGS?! These are actually Semantics. I guess you can think of them as tags for special variables that describe how they are initialised. For example, the POSITION semantic used on the vertex variable in the appdata struct is telling Unity that we want the vertex variable initialised with the vertex positions. (EDIT) Similarly, there is SV_POSITION, which is specifically used for defining position in the output struct. This variable will be initialised with the screen position of a vertex (you know, the ones we figured out in the vert function). Read this thread for more info on why it is different to POSITION. Unity itself will initialise the appdata input structure, however we need to fill the v2f structure ourselves, hence why we do the mul operation in the vert function. All the fields in the two structs need a semantic, regardless of how they are populated.
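For example, here are a few common semantics you might see on a vertex input struct (a small illustrative sketch, separate from the shader above):

```hlsl
// Illustrative only: common vertex input semantics in Unity shaders
struct appdata_example
{
    float4 vertex : POSITION;   // initialised with the object-space vertex position
    float3 normal : NORMAL;     // initialised with the vertex normal
    float2 uv     : TEXCOORD0;  // initialised with the first UV channel
    float4 color  : COLOR;      // initialised with the per-vertex colour
};
```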

In summary, a semantic is a string attached to a shader input or output that conveys information about the intended use of that parameter. You can take a look at the following for more information on which semantics are available and for more detailed info on them:

https://msdn.microsoft.com/en-us/library/windows/desktop/bb509647(v=vs.85).aspx

http://developer.download.nvidia.com/cg/Cg_3.1/Cg-3.1_April2012_ReferenceManual.pdf

Pass

The final thing I want to talk about is that Pass keyword written near the top of our shader. A Pass block causes the geometry of an object to be rendered once, and a single shader can contain multiple passes. Most of the time, vertex and fragment sub-shaders will only have one pass.

What multiple passes actually give you is the ability to put two or more vertex/fragment programs together, with the second one drawn over the first, allowing you to create some cool effects.
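As a rough sketch (a toy example of my own, not from any of the shaders above), a two-pass shader might look like this; the second pass simply blends a flat tint over whatever the first pass drew:

```shaderlab
Shader "Custom/TwoPassSketch"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
        _TintColor ("Second Pass Tint", Color) = (1, 0, 0, 0.25)
    }
    SubShader
    {
        // First pass: draw the texture as-is
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;

            struct v2f
            {
                float4 pos : SV_POSITION;
                float2 uv : TEXCOORD0;
            };

            v2f vert (appdata_img v)
            {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                o.uv = v.texcoord;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                return tex2D(_MainTex, i.uv);
            }
            ENDCG
        }

        // Second pass: drawn on top of the first, blending a flat tint over it
        Pass
        {
            Blend SrcAlpha OneMinusSrcAlpha

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            fixed4 _TintColor;

            float4 vert (float4 vertex : POSITION) : SV_POSITION
            {
                return mul(UNITY_MATRIX_MVP, vertex);
            }

            fixed4 frag () : SV_Target
            {
                return _TintColor;
            }
            ENDCG
        }
    }
}
```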

Passes can also be used with surface shaders, but you cannot interchange between the two. What I mean by this is you cannot have a pass of a vertex/fragment shader and then a surface shader, or vice versa; you must stick to one type or the other.