OpenGL Tutorial #2 – Shader Intro

So far, our program has not actually drawn anything. All we did was set up the basics and fill the colour buffer with blue.

Let’s review that code again.

The function GL.ClearColor. Here is a tip when it comes to figuring out what these kinds of functions actually do: there are a lot of references out there. Khronos (the creators of OpenGL) has plenty, but funnily enough so does Microsoft, you know, the DirectX guys! I actually find Microsoft's docs a little easier to navigate, so here is a link to glClearColor:

https://docs.microsoft.com/en-us/windows/desktop/opengl/glclearcolor

You can now also use that to look up what GL.Clear does.

Can you see the naming convention? OpenTK's C# bindings use GL.Clear, whereas the C++ API uses glClear.
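For reference, the clear code from last time boils down to these two calls (a minimal sketch; the exact values in your project may differ):

```csharp
// Set the colour the buffer will be cleared to (blue), then clear it.
GL.ClearColor(0.0f, 0.0f, 1.0f, 1.0f);
GL.Clear(ClearBufferMask.ColorBufferBit);
```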

Like I said, nothing was actually drawn.

Now, back when I was at uni, we learned how to use the fixed-function pipeline to render things (yeah, in 08-12, a little bit behind, but oh well). However, this is 2018, and shaders are not new. OpenGL itself can draw primitives such as points, lines and triangles. If you know anything about 3D graphics, you know 3D models are made up of lots of triangles, i.e. primitives. Primitives like points, triangles and lines are made up of vertices. In order for anything to be rendered, we need to delve back into the world of vertex and fragment shaders.

You can, in theory, write a shader directly in code in a string array. I am not about that life.

Firstly, we need some boring boilerplate code. I have created two files: FileUtils and ShaderUtils.

First up, FileUtils.

This is a basic static class that loads a text file from a path in the project folder.
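The original class was a screenshot, so here is a minimal sketch of what FileUtils might look like (the method name LoadTextFile is my assumption):

```csharp
using System.IO;

public static class FileUtils
{
    // Reads the whole text file at the given path into a string.
    // LoadTextFile is an assumed name; the original class was a screenshot.
    public static string LoadTextFile(string path)
    {
        return File.ReadAllText(path);
    }
}
```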

Secondly, we have “ShaderUtils”

This is what actually creates our shaders from text input.
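Again, a sketch of a ShaderUtils along the lines described; the exact method names in the original are not visible, so these are assumptions:

```csharp
using OpenTK.Graphics.OpenGL4;

public static class ShaderUtils
{
    // Compiles a single shader stage from source text and reports any errors.
    public static int CompileShader(ShaderType type, string source)
    {
        int shader = GL.CreateShader(type);
        GL.ShaderSource(shader, source);
        GL.CompileShader(shader);

        string log = GL.GetShaderInfoLog(shader);
        if (!string.IsNullOrWhiteSpace(log))
        {
            System.Console.WriteLine($"{type} compile log: {log}");
        }
        return shader;
    }

    // Links a vertex and a fragment shader into a usable program object.
    public static int CreateProgram(int vertexShader, int fragmentShader)
    {
        int program = GL.CreateProgram();
        GL.AttachShader(program, vertexShader);
        GL.AttachShader(program, fragmentShader);
        GL.LinkProgram(program);

        string log = GL.GetProgramInfoLog(program);
        if (!string.IsNullOrWhiteSpace(log))
        {
            System.Console.WriteLine($"Program link log: {log}");
        }
        return program;
    }
}
```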

But hold up, I hear you say. What is a Vertex and Fragment Shader?

A vertex shader takes in the vertex data, i.e. the geometry, and can alter the data of each individual vertex in the model if you so choose. After this has been done, the result is passed to the second step, where a function outputs the colour of each fragment (roughly, each pixel). The first step is known as the vertex shader and the second step is known as the fragment shader.

For more info, go check out my Unity shader overview post:

A brief shader overview in Unity

OK, all of that aside. The above code is loading and parsing the shader data to create something usable.

We are not going to do anything exciting with shaders in this post; we are literally gonna render a red dot.

Start a new class called RedPoint.cs.

This is the class we are gonna use to draw the red dot. We firstly load up the vertex and frag shaders, then create the program (a sketch of the whole class follows this walkthrough).

You will notice an array called _vao, otherwise known as the Vertex Array Object. A VAO is an OpenGL object that stores all of the state needed to supply vertex data.

Although the MS docs are good, this time I found they were lacking. So back to Khronos for more info:

https://www.khronos.org/opengl/wiki/Vertex_Specification#Vertex_Array_Object

We use GL.GenVertexArrays to generate the names of each VAO (in our case there is one).

https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glGenVertexArrays.xhtml

We then bind the object.

https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glBindVertexArray.xhtml

In our draw method, we use the program and call the draw arrays function to draw points (in this case, one).

Basically, we are running this shader program on the point primitives we draw.
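Putting the above together, here is a sketch of RedPoint. The original was a screenshot, so the file paths and member names are my assumptions, it builds on the FileUtils and ShaderUtils sketches from earlier, and the point size call is my addition so the dot is big enough to actually see:

```csharp
using OpenTK.Graphics.OpenGL4;

public class RedPoint
{
    private int _program;
    private int[] _vao = new int[1];

    public void Load()
    {
        // Load the shader source from the text files we will create below.
        string vertSource = FileUtils.LoadTextFile("Shaders/vert.glsl");
        string fragSource = FileUtils.LoadTextFile("Shaders/frag.glsl");

        int vert = ShaderUtils.CompileShader(ShaderType.VertexShader, vertSource);
        int frag = ShaderUtils.CompileShader(ShaderType.FragmentShader, fragSource);
        _program = ShaderUtils.CreateProgram(vert, frag);

        // Generate one VAO name and bind it as the active vertex array object.
        GL.GenVertexArrays(1, _vao);
        GL.BindVertexArray(_vao[0]);
    }

    public void Draw()
    {
        // Use our shader program and draw a single point.
        GL.UseProgram(_program);
        GL.PointSize(30.0f);
        GL.DrawArrays(PrimitiveType.Points, 0, 1);
    }
}
```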

Cool, so we have seen how to bind and use the shaders, but we actually need to write them.

Create some text files in the following folders:

Make sure the files are set to be Copied to the Output Directory.

Let’s write the shader files.

The vertex shader here literally sets the point to the centre of the screen.
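The original file was shown as an image; a minimal GLSL vertex shader along those lines would be (the #version is an assumption based on us requesting OpenGL 4):

```glsl
#version 400

void main()
{
    // Place our single vertex dead centre in clip space.
    gl_Position = vec4(0.0, 0.0, 0.0, 1.0);
}
```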

The out vec4 color part outputs the colour of the pixel. We set this color in the main function.
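Again the original was an image; a matching fragment shader sketch:

```glsl
#version 400

out vec4 color;

void main()
{
    // Output solid red for every fragment of the point.
    color = vec4(1.0, 0.0, 0.0, 1.0);
}
```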

Add the RedPoint code into Main.cs.
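A sketch of the wiring, assuming the LoadContent/Draw wrappers from the first tutorial (member names are my guesses):

```csharp
// Additions to Main.cs; these fill in the LoadContent/Draw
// methods from the first tutorial.
private RedPoint _redPoint;

private void LoadContent()
{
    _redPoint = new RedPoint();
    _redPoint.Load();
}

private void Draw()
{
    // Clear to the background colour, then draw our point.
    // SwapBuffers is handled by the OnRenderFrame wrapper.
    GL.Clear(ClearBufferMask.ColorBufferBit);
    _redPoint.Draw();
}
```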

If you run the code now:

There is a red point.

Boom, we have loaded a super basic shader and rendered it.


Tutorial Introduction – Setting up Open TK


If you have been over to Humble Bundle recently, you will have seen there is a really good bundle with a load of programming books in it.

Included is a book called “Computer Graphics Programming in OpenGL with Java”. As Java and C# are fairly similar, and I like C# much better than Java, I thought I would knock together a few tutorials on using OpenGL and C#. I thought I would also dig out my seasonal globe and try and redo it using C#, OpenGL and OpenTK.

So, first thing is to update Visual Studio, fire it up and create a Windows Forms Project.

Using NuGet, grab OpenTK and add it into your project.

Next up, delete Form1.cs and add a new class called Main.cs, add the following OpenTK usings and make it inherit from GameWindow.
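Something like this (a sketch; the full class is reviewed from GitHub below):

```csharp
using OpenTK;
using OpenTK.Graphics;
using OpenTK.Graphics.OpenGL4;

// GameWindow gives us the window, the GL context and the game loop.
public class Main : GameWindow
{
}
```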


Secondly, let’s create a GameSettings.cs class that will contain our constants:
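Something along these lines; the 60 FPS value is what we use later, while the rest of the values are placeholders (the real ones are in the repo below):

```csharp
// Central place for our window/game constants (values here are examples).
public static class GameSettings
{
    public const int WindowWidth = 800;
    public const int WindowHeight = 600;
    public const string GameName = "OpenTK Tutorial";
    public const double FPS = 60.0;
}
```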

Alright, rather than me screenshotting the whole class here, let’s review it directly from GitHub:

https://github.com/coxlin/JOGLToOpenTK/blob/master/OpenTKProj/OpenTKProj/Main.cs

Firstly, in the constructor, we see we are setting up the standard stuff such as the window size, title and what type of window we use. We are also saying we want to use OpenGL 4, and we have set the ForwardCompatible flag. Essentially, if something is deprecated, the context will not support that functionality, making it “forward compatible” for when the deprecated function is removed.

We then override OnResize. Basically, if a user resizes the window, our viewport will be reset to match.
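In code, that override inside Main.cs is roughly:

```csharp
protected override void OnResize(System.EventArgs e)
{
    base.OnResize(e);
    // Reset the viewport to match the window's new client size.
    GL.Viewport(0, 0, Width, Height);
}
```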

We are overriding the OnLoad method and adding our own LoadContent method. I have done this purely for readability's sake, and also as someone who came from XNA/MonoGame.

I have done the same with the OnUpdateFrame and OnRenderFrame methods by adding Update and Draw functions, in which we will actually implement rendering and game logic.
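As a sketch, those wrappers in Main.cs look something like this:

```csharp
protected override void OnUpdateFrame(FrameEventArgs e)
{
    base.OnUpdateFrame(e);
    Update(e.Time);
}

protected override void OnRenderFrame(FrameEventArgs e)
{
    base.OnRenderFrame(e);
    Draw();
    SwapBuffers();
}

// Game logic and rendering will live in these two methods.
private void Update(double deltaTime) { }
private void Draw() { }
```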

I also wrapped the back-colour call in a SetBackColour method, mainly because it felt neater.

Finally, Dispose is basically there to unload content.

If you have implemented all of that, then you will see that not everything is compiling.

Go into the Program.cs and add the following:
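The screenshot is missing here, but the entry point is along these lines (a sketch built on the classes above):

```csharp
static class Program
{
    [System.STAThread]
    static void Main()
    {
        // Create the window and run the game loop at our target frame rate.
        using (var game = new Main())
        {
            game.Run(GameSettings.FPS);
        }
    }
}
```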

Here we are creating our Main window and then running it at 60 FPS (the frame rate we set in GameSettings).

That is the first part done.

If you are totally new to programming, or a uni student here for a bit of learning, then what you should do next is head over to GitHub or Bitbucket, set up a repo and make sure everything is committed. There is a load of different software for this; I would recommend SourceTree, the GitHub client or TortoiseGit.


A brief shader overview in Unity

I have started playing Yakuza 0 again. It is worth picking up!

Anyway, as you may have guessed, I haven’t got a relevant image for this article so just whacked in what was playing.

Shaders are often this scary, black magic game dev word that a lot of people run away from.

Shaders are bits of code that run on the GPU. They can be code that affects the visuals of an image, but in some cases you can also offload complex calculations to the GPU (compute shaders). As this is a graphics-focused tutorial, we are going to focus on the visual side.

A visual shader is a shader that affects… well… the visuals. In the most basic of terms, a visual shader is a program that runs on the GPU that takes 3D model vertices, textures and other bits of info and returns pixel colours. They are used to draw the triangles of 3D models on your screen.

Again, that is super oversimplifying it, so I am going to dig into the types of shader Unity has.

Surface Shaders

Let’s look at this from a light simulation perspective. If you look in the picture above (see it is not COMPLETELY irrelevant!) you can see there are a lot of lights and the materials on Kiryu’s clothes, skin etc are reacting to it. However, these materials all interact differently with the light. When light hits a material it can be absorbed, reflected, refracted and scattered.

The behaviour of the light rays hitting the surface creates the look of a specific material. For example, the jacket Kiryu is wearing has high light absorption making it look dark. The water on his face has an element of reflection making it look shiny.

This is where surface shaders come in.

Although you can write your own lighting behaviour in Unity, most of the time you don’t want to. That said, there are times you want to do custom lighting which we will look at in later articles, but again MOST of the time, you would use surface shaders. Why? Well, writing lighting equations in shaders can be “well hard”. There are different light types to factor in, different shadow options and different render paths (which we will cover later).

Surface Shaders kind of take the hard part away. As I said before, writing your own lighting behaviour can take a lot of work. Unity’s surface shaders essentially do that for you. The docs refer to this as an auto-generation approach, but essentially what they do is generate the repetitive lighting code under the hood. We will go into more detail about this later, but for now, all you need to know is that they are a way to deal with shaders that use lighting.

Vertex and Fragment Shaders

These are your more “traditional” shaders. The first step is where the shader takes in vertex data, i.e. the geometry, and can alter the data of each individual vertex in the model if you so choose. After this has been done, the result is passed to the second step, where a function outputs the colour of each fragment. The first step is known as the vertex function and the second step is known as the fragment function. Hence the name.

In Unity, Vertex and Fragment shaders are often used for non-realistic materials, 2D graphics or fancy post-processing effects. Let's say I didn't want to use real-life lighting behaviour, or physically based rendering as it is also known. Say I want to render Kiryu in that above picture as a Cartoon Network character; we would use a Vertex and Fragment shader.

Vertex and Fragment shaders are also used for post-processing, which I briefly mentioned above. Post-processing is like adding effects to an image in Photoshop. Say I wanted to turn the image above black and white; I would write a vertex and fragment shader for that.

Also, say I have a 2D sprite and I want to take the shape of it and add a glow effect around it. I would probably use a vertex and fragment shader for that.

I also mentioned that surface shaders contain boilerplate lighting code. In fact, when you write a Surface Shader, it gets compiled down into a vertex and fragment one. Essentially, if you are thinking of implementing a lighting model other than a realistic one, you will probably have to write a vertex and fragment shader.

Summary

  • Shaders are programs that are executed on your GPU and are used to draw the triangles of 3D models on your screen.
  • Shaders are often used to interact with light. In Unity, we can use Surface Shaders as a way to write effects that interact with light in a realistic way.
  • We can write Vertex and Fragment shaders to apply our own lighting models, create post-processing effects and affect 2D images.

For more info on Surface Shaders and Vertex and Fragment shaders, below are some links to some old tutorials that are still relevant, with probably some horrible typos. However, stay tuned, as I may do a revision post on both soon!

Unity Shader Tutorial – A look at a basic surface shader

Unity Shader Tutorial – a look at a basic vertex and fragment shader
