Researching a replacement for dynamic shadows continued…

I am still playing Nier: Automata so you will get a few of these pics for a bit.

A while back, I posted an article about researching a replacement for dynamic shadows.

On projects I have worked on, we often default to using dynamic shadows to make things look good, but they can kill our performance on mobile. In order to hit the 60 FPS mark (or sometimes even 30), they are one of the first things we turn off. We then replace them with blob shadows, which are OK, but still not as performant as we might like. Recently I found an advanced technique called graphics command buffers, along with a sample that includes decals in deferred shading. Decals are quite a nice way of creating blob shadows.
I have since knocked up a super quick test bed to see the difference between real time shadows, projectors and decal textures.
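To give a flavour of the command buffer approach, here is a minimal sketch of drawing a blob-shadow decal into the deferred pipeline. The component and field names are hypothetical (you would supply your own decal mesh and a deferred decal material), but the `CommandBuffer` calls are the real Unity API used by the deferred decals sample:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Hypothetical sketch: draws a blob-shadow decal via a CommandBuffer,
// roughly in the style of Unity's deferred decals sample.
public class BlobShadowDecal : MonoBehaviour
{
    public Mesh decalMesh;          // e.g. a unit cube projected onto the ground
    public Material decalMaterial;  // a deferred decal material, set up elsewhere

    private CommandBuffer buffer;

    void OnEnable()
    {
        buffer = new CommandBuffer { name = "Blob shadow decals" };
        // Draw the decal after the G-buffer is filled, before lighting runs.
        buffer.DrawMesh(decalMesh, transform.localToWorldMatrix, decalMaterial);
        Camera.main.AddCommandBuffer(CameraEvent.BeforeLighting, buffer);
    }

    void OnDisable()
    {
        if (buffer != null)
            Camera.main.RemoveCommandBuffer(CameraEvent.BeforeLighting, buffer);
    }
}
```

This needs to run inside Unity with a camera in deferred rendering mode, so treat it as a starting point rather than a drop-in component.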
Realtime shadows:
And here are their corresponding basic stats:
Real time shadows:
Projector Blob Shadows:
Decal blob shadows:
Again, this is only a cursory test, but so far the decal blob shadows are edging ahead on the number of tris, batches, and overall framerate. Of course, this needs to be taken with a pinch of salt and profiled in more scenarios, but as a starting point it looks promising.
My next step is to see if I can create real time shadows using render textures and combine them with the decal technique.
There is a warning here though: at the time of writing, this technique only supports deferred rendering, and not all mobile GPUs support that. However, I will press on, get a better overview of what hardware is affected, and come up with an alternative for it.

Unity Shader Tutorial: Talking to Shaders through C# scripts

Hey look, a picture of Final Fantasy XV. Probably the game I am most excited for because I enjoyed everything about the demo. I might even go back and play that again this evening.

Anyway, we have been talking about how we can write some shaders, but how can we hook them up to C# scripts to create cool image effects? Well, we can have a look at one in the Standard Assets. Firstly, open up your Unity project, then go to Assets->Import Package->Effects. Once you have that, open up ScreenOverlay.cs. You can attach this to a camera if you want to see what it does, but we should jump into the important bits.

Firstly, you will see at the top of the class there is a public variable for the Shader and a private one for the Material. If you keep going down the class, you will see there is a CheckShaderAndCreateMaterial call inside the CheckResources method. If you are in Visual Studio (and you should be, because MonoDevelop is garbage), right-click and go to the definition of that function and you will find yourself in PostEffectsBase, the base class for a fair few of the Post Processing Effects.

This particular function creates the Material. If I haven't said it already by now: every Shader needs a Material, and as the Post Processing Effects work a tad differently to the usual "create a material through the inspector, then select which shader to use with it" workflow, each Post Processing effect needs to use this function to create its material. As you can see, if all is well, it creates a new material with the given shader.
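The logic boils down to something like the following. This is a paraphrased sketch of what CheckShaderAndCreateMaterial does, not the exact Standard Assets source:

```csharp
// Simplified sketch of PostEffectsBase.CheckShaderAndCreateMaterial
// (paraphrased from the Standard Assets, not the exact source).
protected Material CheckShaderAndCreateMaterial(Shader shader, Material existing)
{
    if (shader == null || !shader.isSupported)
        return null; // shader missing or unsupported on this hardware

    if (existing != null && existing.shader == shader)
        return existing; // material already created with this shader, reuse it

    // DontSave stops the on-the-fly material leaking into the scene file.
    return new Material(shader) { hideFlags = HideFlags.DontSave };
}
```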

That bit is important, but not always needed; it is only needed here because of the way the Post Processing Effects work. If you head back to the ScreenOverlay.cs file and scroll down to the OnRenderImage function, you will see functions being called on the Material with names like "SetVector", each taking a string and then a value whose type corresponds to the function.

Head back into Unity and search for the BlendModesOverlay shader.

Inside the shader you will see there are variables like "half _Intensity". Now if you head back to ScreenOverlay.cs you can see the line SetFloat("_Intensity", intensity). Essentially, we can set textures, floats, vectors, etc. that we declare in shaders through C# code by using these functions. You can also grab values too. For a full reference, head over to the Unity documentation.
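Putting the pieces together, a minimal component driving a shader property from C# looks something like this. The class and field names here are hypothetical; the `SetFloat`/`GetFloat` calls are the real Unity Material API:

```csharp
using UnityEngine;

// Hypothetical component showing how C# pushes values into shader
// properties declared in the shader (e.g. "half _Intensity").
public class OverlayDriver : MonoBehaviour
{
    public Material overlayMaterial; // a material using the BlendModesOverlay shader
    [Range(0f, 4f)] public float intensity = 1f;

    void Update()
    {
        // Set the shader property by name...
        overlayMaterial.SetFloat("_Intensity", intensity);

        // ...and you can read values back out too.
        float current = overlayMaterial.GetFloat("_Intensity");
    }
}
```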

The Stencil Buffer

Hello all, as well as doing all my tutorials, I am learning (re-learning) a lot about graphics at the moment, and want to touch on some more advanced stuff for creating cool effects that you may not know about or know little about (or at least need a refresher on).

The first is a key part of the graphics rendering pipeline: the Stencil Buffer. In technical terms (paraphrased from Real-Time Rendering), the stencil buffer is an offscreen buffer used to record the locations of rendered primitives, and typically contains eight bits per pixel. Primitives can be rendered into the stencil buffer using various functions, and the buffer's contents can then be used to control the rendering into the colour buffer (pixel buffer) and Z-buffer.

OK, that is a slightly intense explanation. In simple terms, you could think of it like a physical stencil. It is a mask that allows some pixels through and stops others being modified. It can be seen as a kind of "general purpose" buffer that allows you to store an additional 8-bit integer for each pixel drawn on the screen. You know how RGB values determine the colour of the pixels? And how z values contain the depth data used by the depth buffer? Well, a value in the range 0-255 can be written to the stencil buffer. The stencil values can then be queried and compared to determine how pixels are rendered on the screen.

Still a bit confused?

Let’s look at an example.

Let’s say we are working on Need For Speed, one of the ones where you are being chased by police, none of that Pro-Street business (see, my picture is actually semi-relevant this time). You want your car to have a realistic rear-view mirror so the player can see the often unrealistically funded police force chase them in their hyper cars (seriously, name me a police force that drives Pagani Huayras). You’ll want to render a view pointing behind the car (i.e. a camera attached to the back of the car, facing backwards), but you only want that view to render inside the rear-view mirror itself. The standard solution is:

  1. Render the shape of the rear view mirror into the stencil buffer.
  2. Turn on stencilling.
  3. Render the rear view camera onto the regular buffer.

The stencil will then mask it so that the view only draws into the shape of the mirror.
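In Unity's ShaderLab (which we will touch on again below), those steps map onto the Stencil block. The shader names here are made up and the passes are elided, but the Stencil syntax is real ShaderLab; treat it as a sketch of the mirror trick rather than a complete shader:

```shaderlab
// Sketch 1: the mirror shape writes 1 into the stencil buffer.
Shader "Custom/MirrorMask"
{
    SubShader
    {
        Stencil
        {
            Ref 1
            Comp Always   // always pass the test...
            Pass Replace  // ...and write the Ref value (1) into the buffer
        }
        // ... a pass drawing the mirror quad would go here
        // (ColorMask 0 if you only want to touch the stencil buffer)
    }
}

// Sketch 2: the rear-view render only draws where the stencil equals 1.
Shader "Custom/MirrorView"
{
    SubShader
    {
        Stencil
        {
            Ref 1
            Comp Equal  // pixels outside the mirror shape are culled
        }
        // ... a pass sampling the rear-view camera's render texture would go here
    }
}
```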

Is it making a bit more sense?

Here is one final analogy which I found on the OpenTK website which is quite good. And I have added a bit of “cultural” flair.

Say you are doing a Banksy and creating some hot street art somewhere… probably in Brighton. In order to do your street art you have these sick cardboard stencils you made, where you cut holes to create that ludicrous image on that junction box. You grab your can of spray paint, and as you spray, the paint only passes through the holes you cut out and is blocked by the parts you did not. Graphics APIs, e.g. OpenGL or Direct3D, contain a stencil test that acts like this cardboard cutout. The API takes the stencil reference value for the pixel, tests it against the value in the stencil buffer, and if the test fails, the pixel is "culled".

If you don’t know what stencilling is when you are spray painting, here is a youtube video I liberated from the interwebs:

In some more techy terms, this is how Direct3D9 does the stencil test:

  1. Perform a bitwise AND operation of the stencil reference value and the stencil mask.
  2. Perform a bitwise AND operation of the stencil buffer value for the current pixel with the stencil mask.
  3. Compare the results of steps 1 and 2 using the comparison function.
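The three steps above can be sketched in a few lines of Python. This is an illustration of the logic, not D3D9 code; the function and dictionary names are my own:

```python
# Sketch of the Direct3D 9 stencil test described above:
# mask both the reference value and the buffer value, then compare them.

COMPARISONS = {
    "never":        lambda a, b: False,
    "less":         lambda a, b: a < b,
    "equal":        lambda a, b: a == b,
    "lessequal":    lambda a, b: a <= b,
    "greater":      lambda a, b: a > b,
    "notequal":     lambda a, b: a != b,
    "greaterequal": lambda a, b: a >= b,
    "always":       lambda a, b: True,
}

def stencil_test(ref, mask, buffer_value, comparison="equal"):
    """Return True if the pixel passes the stencil test."""
    masked_ref = ref & mask            # step 1
    masked_buf = buffer_value & mask   # step 2
    return COMPARISONS[comparison](masked_ref, masked_buf)  # step 3

# A pixel whose stencil value matches the reference passes...
print(stencil_test(1, 0xFF, 1, "equal"))   # True
# ...and one that doesn't is culled.
print(stencil_test(1, 0xFF, 0, "equal"))   # False
```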

So, what we haven’t really touched on is what those comparison functions actually are.

Well you can take a look at what is available to you in the D3D9 docs:

Or, if you want something a bit "closer to home", you can take a look at what Unity's ShaderLab does.

Here is some further research you can look into: