Unity Shader Tutorial – A look at a basic surface shader

As usual I don’t have a decent picture for this post, so here is a picture of Metal Gear Solid V: The Phantom Pain, which I am playing at the moment. Not only is it the most fun I have had with a video game all year, but it also looks very pretty in Konami’s FOX engine. Shame about Kojima though… although I am pretty sure it was extremely expensive to make… anyway, MOVING ON!

So you have come this far; I assume you know what Shaders are. If not, well, go have a read about HLSL, GLSL and Cg. In Unity’s case, in order to render something you need a Material with a shader attached.

Unity supports 3 types of shader, but this is 2015, so we only really want to worry about 2 of them:

  • Surface Shaders
    • Surface Shaders are Unity’s way of making lit shaders easier to write. They generate all the repetitive code that you would otherwise have to write by hand in low-level Fragment and Vertex shaders, and they are mainly used to write shaders that interact with lighting.
  • Fragment and Vertex Shaders
    • You can write lower-level shader programs in Cg if you want to create more complex custom effects.

Let’s jump right in and create a “standard surface shader”.

[Image: basic shader]
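In case the screenshot doesn’t come through, here is roughly what Unity 5 generates when you create a new Standard Surface Shader (the name on the first line will match whatever you called your asset):

    Shader "Custom/NewSurfaceShader" {
        Properties {
            _Color ("Color", Color) = (1,1,1,1)
            _MainTex ("Albedo (RGB)", 2D) = "white" {}
            _Glossiness ("Smoothness", Range(0,1)) = 0.5
            _Metallic ("Metallic", Range(0,1)) = 0.0
        }
        SubShader {
            Tags { "RenderType"="Opaque" }
            LOD 200

            CGPROGRAM
            // Physically based Standard lighting model, and enable shadows on all light types
            #pragma surface surf Standard fullforwardshadows

            // Use shader model 3.0 target, to get nicer looking lighting
            #pragma target 3.0

            sampler2D _MainTex;

            struct Input {
                float2 uv_MainTex;
            };

            half _Glossiness;
            half _Metallic;
            fixed4 _Color;

            void surf (Input IN, inout SurfaceOutputStandard o) {
                // Albedo comes from a texture tinted by color
                fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
                o.Albedo = c.rgb;
                // Metallic and smoothness come from slider variables
                o.Metallic = _Metallic;
                o.Smoothness = _Glossiness;
                o.Alpha = c.a;
            }
            ENDCG
        }
        FallBack "Diffuse"
    }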

Properties

Awesome, right, let’s take this apart. First you will see the shader has a number of properties at the top. You can think of these as public or serialized fields when you are writing C# scripts. When you select the shader to be used on your material, these are the items that you can then fiddle with in the inspector. One GOTCHA you might run into here is that if you change the material’s values in play mode, unless it is an (Instance), the changes will likely remain when you exit play mode. Let’s take these properties apart further.

  • COLOR
    • This one is pretty self explanatory: this is the color that is applied to the object that the material is attached to.
  • MAINTEX
    • The texture parameter, in this case initialised to white. Textures can also be initialised to black or grey, and you can use “bump” to indicate that the texture being used is a normal map (see the sketch after this list).
  • GLOSSINESS
    • This is used in combination with the lighting system to determine how “glossy” an object looks.
  • METALLIC
    • Again, used with the lighting to determine how “metallic” the object looks.
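Just to make the syntax concrete, here is the Properties block from the generated shader again, plus a hypothetical _BumpMap property (not in the generated file) showing that “bump” default:

    Properties {
        _Color ("Color", Color) = (1,1,1,1)
        _MainTex ("Albedo (RGB)", 2D) = "white" {}
        _BumpMap ("Normal Map", 2D) = "bump" {}   // "bump" = default to an empty normal map
        _Glossiness ("Smoothness", Range(0,1)) = 0.5
        _Metallic ("Metallic", Range(0,1)) = 0.0
    }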

Simple enough, eh? However, this properties structure is only used to give the editor access to the variables within the shader. We still need to set them up in the shader itself.

Let’s jump into the SubShader.

Right at the start you will see a structure called Tags. This is basically a dictionary of key-value pairs that are used by our shader. They are used to determine rendering order and other parameters of a subshader. Let’s take a look at some of the tags available to us.

Queue Tag and the Rendering order

The Queue tag can be used to determine the order in which your objects are drawn. It allows the shader to decide which render queue it belongs to, allowing any transparent shaders to be drawn after opaque objects, etc. When the GPU renders the triangles which make up our models, they are sorted based on how far away they are from the camera, and the ones furthest away are drawn first. This is fine when you want to render SOLID SNAKES, but what if you want to render LIQUID SNAKES… I mean Transparent Objects! Haha, can’t get Metal Gear out of my head. Did I tell you how good it was? 😛 In all seriousness, the Queue tag is provided so the user can control the rendering order of each material. There are pre-defined queues available, but, and I take this from the documentation itself, “there can be more queues in between the predefined ones”. Before we even go into that, let’s look at the predefined ones:

  • BACKGROUND
    • Rendered before anything else, used for things like backgrounds and skyboxes
    • Queue index: 1000
  • GEOMETRY
    • The default Queue. Opaque geometry uses this queue.
    • Queue index: 2000
  • TRANSPARENT
    • Used for transparent materials, for example rendering fire particles, water, a crazy magic effect with transparency, or glass for peeking into people’s houses
    • Queue index: 3000
  • OVERLAY
    • Used for overlay effects. Anything rendered last goes here, for example crazy J.J. Abrams lens flare.
    • Queue index: 4000

You might have noticed I put a queue index in each of them. These queue indices are integers: the smaller the number, the sooner it is drawn. Unity also allows you to use values like “Background+5”, which will give a queue index of 1005. This is kind of useful if you want something drawn last in a specific layer.

In the above shader, there is no Queue tag defined, meaning it will use the default, Geometry.
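If you did want to override the queue, the Tags block at the top of the SubShader is the place to do it. A quick sketch (not part of the generated shader):

    // Inside the SubShader block:
    Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }

    // Or, to nudge something later within the opaque queue:
    // Tags { "Queue" = "Geometry+1" }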

RenderType Tag

The RenderType tag defined in our shader categorizes it into a predefined group. This is used by Unity’s Shader Replacement system and in some cases to produce the Camera’s Depth Texture. “Ooo”, you are thinking, “these sound like fancy things. Lindsay, I would love it if you went into a bit more detail”. Of course I can, don’t worry about it.

So, let’s get some context. Some rendering effects require rendering a scene with a different set of shaders. For example, if you want to do effective edge detection, you need a texture with scene normals. To achieve this, it is possible to render the scene with the shaders of all objects replaced. That last sentence may have left you with a big fat inquisitive question mark above your head. It is from the docs, and to start with I didn’t get it either. In normal people terms: you may have discovered a cool effect, and need to use Unity’s fancy shader replacement to achieve it. You can use functions such as Camera.RenderWithShader or Camera.SetReplacementShader, which take in a Shader and a replacement tag.

The camera renders the scene as it normally would, with the objects still using their materials, but the actual shader used on the materials ends up being changed. The replacement tag affects this as well. If the replacement tag is unspecified, then all the objects are rendered with the given replacement shader. If it is specified, each object’s shader is queried for its value of that tag; if the shader doesn’t have the tag, the object is not rendered. Otherwise, the replacement shader is searched for a subshader with a matching value for that tag. If no subshader is found, the object is not rendered; if one is found, it is used to render the object.

Did you follow that? OK, let me put it in some other words as well.

Say you only wanted to render solid objects. I am not sure why, but say you did. You can write a replacement shader that only renders solid objects by giving it a subshader with the RenderType tag set to “solid”. Any object whose shader didn’t contain that “solid” tag value would not be rendered.
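Here is a minimal sketch of what that hypothetical replacement shader could look like (the shader name and the flat red colour are made up for illustration):

    Shader "Hidden/SolidOnlyReplacement" {
        SubShader {
            // Only objects whose own shader has RenderType = "solid" match this subshader
            Tags { "RenderType" = "solid" }
            Pass {
                Color (1, 0, 0, 1)   // draw matching objects in flat red
            }
        }
        // There is no subshader for any other RenderType value, so everything else is skipped
    }

You would then kick it off from a script with something like Camera.SetReplacementShader(replacementShader, "RenderType").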

The RenderType tag can be set to the following:

  • Opaque
  • Transparent
  • TransparentCutout
  • Background
  • Overlay
  • TreeOpaque
  • TreeTransparentCutout
  • TreeBillboard
  • Grass
  • GrassBillboard

At this stage of the game, don’t worry if you are finding this a little bit hard to follow; it took me a few reads of it too. This is a bit more advanced, and I just didn’t want to leave it out. If you need more info, you can find it here:

http://docs.unity3d.com/462/Documentation/Manual/SL-ShaderReplacement.html

Also, in the interest of time I am going to leave this here as “further reading”:

http://docs.unity3d.com/Manual/SL-CameraDepthTexture.html

Cool, that is the RenderType tag done; let’s see what we have next.

ForceNoShadowCasting Tag

This essentially does what it says on the tin: if you include this tag and set it to true, the object will not cast any shadows.

IgnoreProjector

If this tag is set to true, the object will not be affected by Projectors. Projectors are pretty clever: they allow you to project a material onto all objects that intersect their frustum. If you are interested in Projectors, you should check out the prefabs that live in the Standard Assets.
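Both of these are just extra entries in the same Tags block. For example (a sketch, not in the generated shader):

    Tags {
        "RenderType" = "Opaque"
        "ForceNoShadowCasting" = "True"   // this material will never cast shadows
        "IgnoreProjector" = "True"        // Projectors will skip this material
    }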

OK, let’s move on.

LOD

The next part of the subshader is the LOD. This is the shader’s level of detail. You could get this confused with the level-of-detail features used on meshes in open world games, etc., but the LOD value in a shader enables you to enable/disable graphical features: you can essentially disable any subshaders above a particular LOD value. There is still some pretty crappy hardware kicking around, and it is surprising what users try to run the latest games on. Some cheaper graphics cards, for example, may support lots of features but are too slow to use them. You can use LOD to disable features on crummy hardware like that.
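A sketch of how that could look, with two subshaders in one hypothetical shader, fanciest first (the flat colours are stand-ins for real passes):

    Shader "Example/LevelOfDetail" {
        SubShader {
            LOD 400
            Pass { Color (1, 1, 1, 1) }       // stand-in for an expensive, full-featured pass
        }
        SubShader {
            LOD 200
            Pass { Color (0.5, 0.5, 0.5, 1) } // stand-in for a cheaper pass
        }
    }

From a script you can then cap things with Shader.maximumLOD or Shader.globalMaximumLOD; subshaders whose LOD is above the cap are skipped.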

ZTesting and ZWriting

Before we move onto the program itself, we should quickly talk about ZTesting. A transparent object doesn’t necessarily appear on top of an object in the Geometry queue, even though it is drawn later. The GPU does some cool ZTesting which stops hidden pixels being drawn, using the depth buffer. The ZTest culls pixels which are hidden by other objects based on depth (distance from the camera), regardless of the order they were drawn in. You can turn this off in your shader or change how ZTesting is done. Firstly, if you are creating semi-transparent objects, you can disable depth writing by using the syntax “ZWrite Off”. You can also change how you want the ZTest to be done by defining the ZTest syntax in a Pass in your subshader. We will cover this stuff again in later tutorials, but as we were talking rendering queues and whatnot, I thought it was worth mentioning now. If you want to get ahead a bit, head over to:

http://docs.unity3d.com/Manual/SL-Pass.html
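For a taste of the syntax, here is how those states could look inside a Pass (surface shaders generate their passes for you, so this applies when you write passes yourself):

    Pass {
        ZWrite Off     // don't write to the depth buffer: common for semi-transparent objects
        ZTest LEqual   // the default: draw if at least as close as what is already there
        // ZTest Always would draw regardless of depth
    }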

Han Solo

Yay, you made it this far. Awesome. But as Han Solo says: “Don’t get cocky”

Right, now we can start talking about the CGPROGRAM.

CGPROGRAM

So yeah, this is where we get to look at some Cg…

You know how, right at the start, we talked about Surface Shaders? What we have here is one of those. To reiterate a bit: whenever you want a material to use lights in a realistic manner, you probably want to use a surface shader.

 

When we write surface shaders, we define the Albedo (the base color of the surface), normals, how metallic the surface is, etc. in the surf function you can see in our shader. These are then run through a lighting model which outputs the final Red, Green and Blue values for each pixel being rendered using the shader.

So onto the code.

Firstly, the CGPROGRAM line is basically declaring that we are writing Cg in this next block. You can see towards the end it is wrapped up by ENDCG.

The next line with the #pragma on it defines which lighting model to use. This is the compile directive. The first part, #pragma surface, shows that it is a surface shader. “surf” is the function we use for our surface shader and “Standard” is the lighting model we are using. The “fullforwardshadows” is an optional parameter. Let’s take a look at the two required parameters:

  • surfaceFunction
    • Defines which Cg function contains the surface shader code. If we really wanted to, we could rename “surf” to “balls” or “spaghetti”, as long as we also renamed the function itself to match (see the sketch after this list).
  • lightModel
    • The lighting model to use – more info below.
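Here is the directive as generated, plus a (deliberately silly, hypothetical) renamed version:

    // As generated:
    #pragma surface surf Standard fullforwardshadows

    // Renamed; the surf function itself would have to be renamed to spaghetti to match:
    // #pragma surface spaghetti Standard fullforwardshadows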

I briefly mentioned a lighting model earlier on. As stated above, the lightModel parameter defines which lighting model to use. We could use one of Unity’s built-in ones: Standard, StandardSpecular, Lambert (diffuse) or BlinnPhong (specular).

You can also create your own lighting model:

http://docs.unity3d.com/Manual/SL-SurfaceShaderLighting.html
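As a taste, here is a minimal custom Lambert lighting function, adapted from that page. If you declare #pragma surface surf SimpleLambert, Unity looks for a function named LightingSimpleLambert:

    half4 LightingSimpleLambert (SurfaceOutput s, half3 lightDir, half atten) {
        half NdotL = dot (s.Normal, lightDir);   // how directly the light hits the surface
        half4 c;
        c.rgb = s.Albedo * _LightColor0.rgb * (NdotL * atten);
        c.a = s.Alpha;
        return c;
    }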

Finally, there are lots of optional parameters you can use in this directive, many of which are listed here:

http://docs.unity3d.com/Manual/SL-SurfaceShaders.html

The next line determines what shader model we want to use. Different DirectX versions support different shader models. At time of writing, DirectX 9 supports Shader Model 3.0, which is used in games like Bioshock and Bioshock 2 (one of my favourite game series of all time), and DirectX 11 supports Shader Model 5.0.

Here we decide what we want to target. There is a caveat to that though: again at time of writing, the target 4.0 and target 5.0 directives are only available on cards that run DirectX 11 and on Xbox One/PS4, and a lot of the mobile shaders will require SM2.0 because of their hardware (and they say mobile gaming is the future!).
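The relevant line in the generated shader, with a couple of commented alternatives:

    #pragma target 3.0     // Shader Model 3.0, what the generated shader asks for
    // #pragma target 2.0  // a safer bet for old mobile hardware, with fewer features
    // #pragma target 5.0  // DirectX 11 class hardware only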

SubShader block

Awesome. So we haven’t really got into the beef of the Cg code yet, but there is a lot of stuff to choose from just for your lighting model, etc.

We are now declaring the things we defined back up in our properties structure earlier on. As you can see, there is a sampler2D for the texture, halfs for the glossiness and metallic, and a fixed4 for the color. Again, these are all tweakable from the inspector.

But what are these datatypes? Well, I will tell you:

  • float – 32-bit floating point number
  • half – 16-bit floating point number
  • int – a 32-bit integer
  • fixed – a 12-bit fixed point number
  • bool – boolean variable
  • sampler – represents a texture object

So a fixed4, for example, is a structure that contains 4 fixed point values.
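A quick illustrative sketch (the values here are made up) of how these vector types behave in Cg:

    fixed4 c = fixed4(1.0, 0.5, 0.0, 1.0);   // an opaque orange colour
    fixed3 rgb = c.rgb;    // "swizzle" out just the colour channels
    fixed alpha = c.a;     // grab just the alpha channel
    fixed4 bgra = c.bgra;  // reorder the components into a new fixed4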

You probably also noticed the Input struct containing a float2. This contains the texture coordinates. We will come back to that in a moment.

I want to dive into the main surf function. As you can see, it takes in the Input struct defined above and outputs a SurfaceOutputStandard, which contains the albedo values, etc. that we have defined.

Now it is time to talk about that Input structure. As I mentioned, it contains the uv data for the texture. There are many other things you can put in this input structure (scroll to the bottom of the surface shaders page linked above), but for this shader we just need the texture’s uvs.
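For example, here is a hypothetical extended version of the struct showing some of the other members Unity knows how to fill in (our shader only uses the first one):

    struct Input {
        float2 uv_MainTex;   // texture coordinates for _MainTex
        float3 worldPos;     // world space position of the surface point
        float3 viewDir;      // direction from the surface towards the camera
        float4 screenPos;    // screen space position, handy for screen effects
    };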

You can see the comment “Albedo comes from a texture tinted by color”. What is happening here is the tex2D function: given a texture and a UV coordinate, tex2D will return the RGBA color, taking into account the texture’s import settings. Later on, the alpha is grabbed from the fixed4 we created with this tex2D call. We can see the rest of the output structure is being set by the values that are set up in the inspector via the properties structure.

Fallback

The last thing to talk about is the fallback. This basically says: “if none of the subshaders can run on this hardware, try finding one in this other shader”. We can either fall back on a specific shader, or we can set the fallback to Off, explicitly stating there is no fallback even if the shader cannot run on the hardware.
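The two forms look like this (the first is what the generated shader uses):

    FallBack "Diffuse"   // try Unity's built-in Diffuse shader if nothing above runs
    // FallBack Off      // alternatively: explicitly declare there is no fallback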

Next time…

So that whole article was an in-depth-ish look at what you get when you create a Standard Surface Shader and what you can do with it. As I said at the start, there are not just surface shaders, but also Vertex and Fragment shaders. In the next article we will look at those before diving into Unity’s fancy PBR and lighting systems.

As always, if you have any questions, sound off in the comments below or nag me on twitter. I will probably respond more to the latter!

Thanks for reading!
