A summary of the Rendering Pipeline for Humans

I just started playing Xenoblade Chronicles 2 on the Switch. It is pretty neat. Anyway, in this post I wanted to talk about the rendering pipeline, or rather put it in simple terms that less tech-savvy people can understand.

In a game, rendering can be seen as the process of drawing a scene on a computer screen. It involves a mathematical combo of geometry, textures, surface treatments, the viewer's perspective and lighting. In other words, it is a combo of the geometry that forms the meshes made in Maya, Blender, whatever; the textures that are applied to those meshes; shaders that do some neat stuff to make it look cool; where the camera is looking; and how the scene has been lit. The rendering pipeline then represents the flow of processes that take place to show a virtual environment on your screen. Basically, how your games console draws that sweet Vidja Game on the screen (bit of an oversimplification, but you get the idea).

In simple terms, there are three stages of the pipeline:

  • Application – Runs on the CPU (generally) and deals with all your gamey stuff: moving objects, input, collisions and so on. This then feeds into the geometry part of the pipeline.
  • Geometry – Basically determines where stuff is. It involves calculations regarding the position of the camera and the translation, scale and rotation of each object, applied to all the mesh data we have.
  • Rasterisation – Actually displays the image on the screen. It goes through some more in-depth processes to get that desired image onto your sweet 43-inch Ultra HD TV.
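To make those three stages a bit more concrete, here is a rough Python sketch of a single frame passing through them. Every name in it (GameObject, update_game_logic and so on) is made up purely for illustration, and a real engine would run the rasterisation stage on the GPU rather than printing vertices; think of it as a toy model of the flow, not an actual renderer.

```python
from dataclasses import dataclass, field

@dataclass
class GameObject:
    position: list   # where the object is in the world: [x, y, z]
    mesh: list       # vertices that make up its shape, relative to position

@dataclass
class Scene:
    objects: list = field(default_factory=list)

def update_game_logic(scene):
    # Application stage (CPU): stand-in for input, movement, collisions, etc.
    for obj in scene.objects:
        obj.position[0] += 0.1  # drift everything slowly to the right

def to_camera_space(vertex, obj, camera_pos):
    # Geometry stage: place the vertex in the world, then express it
    # relative to where the camera is sitting.
    return [vertex[i] + obj.position[i] - camera_pos[i] for i in range(3)]

def render_frame(scene, camera_pos):
    update_game_logic(scene)
    camera_verts = [to_camera_space(v, obj, camera_pos)
                    for obj in scene.objects
                    for v in obj.mesh]
    # The rasterisation stage would now turn these into pixels;
    # printing them is as far as this toy goes.
    for v in camera_verts:
        print(v)

corner = GameObject([0.0, 0.0, 5.0], [[0, 0, 0], [1, 0, 0], [0, 1, 0]])
render_frame(Scene([corner]), camera_pos=[0.0, 0.0, 0.0])
```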

Now it is a bit of a cop-out to say “more in-depth processes”, so let’s have a look at what those actually are. We are going to look deeper into both the Geometry and Rasterisation parts of the pipeline.

I know what you are thinking.

Those sound like pretty technical things! S’alright, we are gonna look at each one individually. Imagine we are trying to render a model on our screen. Maybe it is a model of a character. Or a car. Or a big massive gothic cathedral. Or an awesome mech. I will probably refer to the model as just a “character.” This is the process that happens to get it on the screen.

  • Geometry – We kind of looked at this phase earlier on. In deeper terms, a model is made out of polygons and vertices: the shapes that make up a 3D model. In other words, they describe the structure of your game character; not in gameplay terms, but in terms of what shape the character should have in the game. The geometry phase processes all of these polygons and vertices so that data can be used in the other phases.
  • Illumination – This is where the models are coloured and lit. In other words, textures that represent the details of a character (like Link’s skin and clothing in Zelda) are applied to the model, along with the details of how the model should be lit by light sources in the game, like street lamps, the sun, etc. This is the process where fancy programs called shaders (not Destiny 2 shaders, CURSE YOU BUNGIE!) can make models in the game look real nice. (There is a little lighting sketch just after this list.)
  • View Perspective – The model is processed through a viewing perspective, or rather a camera. The process looks at how the camera is set up: whether it has an orthographic projection (often used for 2D games) or a perspective projection (often used in 3D games), and how big the camera’s field of view is. (The projection sketch after this list shows the perspective case.)
  • Clipping – This process looks to see what parts of the model are outside of the camera’s viewing volume, or rather which parts of the character can actually be seen by the camera. The bits that cannot be seen are clipped away. (See the clipping check after this list.)
  • Screen-Space Projection – This is where we take a projection of the 3D object and put it into 2D space so it can be displayed on a screen. Monitors fundamentally show a 2-dimensional image, so we map the 3D model into this space to produce a 2D image that can be displayed on the screen. (The projection sketch below ends with exactly this mapping.)
  • Rasterisation – This is where the image is actually converted into pixels, and where fancy post-processes occur. A post-process is an extra visual technique applied to the image, and includes effects such as bloom. These techniques are applied to the 2D image created by the Screen-Space Projection phase. (The bloom sketch after this list shows a toy version.)
  • Display – This is the final image.
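The list above is a bit abstract, so here are a few toy Python sketches of the trickier steps. First, illumination. Real shaders run on the GPU in languages like GLSL or HLSL, but the maths behind the simplest kind of diffuse lighting (a surface is brightest when it faces the light head-on) looks roughly like this; the function names are my own inventions.

```python
def normalise(v):
    length = sum(x * x for x in v) ** 0.5
    return [x / length for x in v]

def lambert_diffuse(normal, light_dir, surface_colour, light_colour):
    # Classic diffuse shading: brightness depends on the angle between
    # the surface normal and the direction towards the light.
    n = normalise(normal)
    l = normalise(light_dir)
    intensity = max(sum(a * b for a, b in zip(n, l)), 0.0)  # facing away -> 0
    return [sc * lc * intensity for sc, lc in zip(surface_colour, light_colour)]

# A green-ish surface facing straight up, lit by white light from above:
print(lambert_diffuse([0, 1, 0], [0.3, 1.0, 0.2], [0.2, 0.8, 0.3], [1, 1, 1]))
```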
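Next, view perspective and screen-space projection together: taking a 3D point as the camera sees it and working out which pixel it lands on. This is a deliberately stripped-down sketch; real engines use 4×4 matrices and also account for the screen’s aspect ratio and near/far planes.

```python
import math

def project_to_screen(point, fov_degrees, screen_w, screen_h):
    # 'point' is in camera space: the camera sits at the origin looking along +z.
    x, y, z = point
    if z <= 0:
        return None  # behind the camera, nothing to draw
    # The field of view controls the "zoom": a wider FOV makes objects smaller.
    f = 1.0 / math.tan(math.radians(fov_degrees) / 2.0)
    # The perspective divide: the further away a point is,
    # the closer to the centre of the screen it lands.
    ndc_x = (f * x) / z
    ndc_y = (f * y) / z
    # Map from the -1..1 range to actual pixel coordinates (pixel y grows downward).
    px = (ndc_x + 1.0) * 0.5 * screen_w
    py = (1.0 - ndc_y) * 0.5 * screen_h
    return (px, py)

# A point one unit right and up, five units in front of a 60-degree camera:
print(project_to_screen((1.0, 1.0, 5.0), 60, 1920, 1080))
```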
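Clipping in a real pipeline works per triangle and can even slice polygons in half, but the idea boils down to a test like this one, assuming the point has already been projected into the -1 to 1 range used above.

```python
def is_clipped(ndc_x, ndc_y, z, near=0.1, far=1000.0):
    # Anything outside the -1..1 square, or too close/too far from the camera,
    # is outside the viewing volume and gets clipped away.
    off_screen = abs(ndc_x) > 1.0 or abs(ndc_y) > 1.0
    out_of_range = z < near or z > far
    return off_screen or out_of_range

print(is_clipped(0.35, 0.35, 5.0))  # False: in view, keep it
print(is_clipped(1.8, 0.2, 5.0))    # True: off the side of the screen, clip it
```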
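Finally, a post-process. Actual bloom implementations run on the GPU with proper Gaussian blurs over several passes; this toy version, working on a little grid of brightness values, just shows the core idea of “find the bright bits, smear them out, add the glow back on top”.

```python
def naive_bloom(image, threshold=0.8, strength=0.5):
    # 'image' is a 2D grid of brightness values between 0.0 and 1.0.
    h, w = len(image), len(image[0])
    # 1. Keep only the bright pixels.
    bright = [[p if p > threshold else 0.0 for p in row] for row in image]
    # 2. Smear them out with a crude 3x3 box blur.
    blurred = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neighbours = [bright[j][i]
                          for j in range(max(0, y - 1), min(h, y + 2))
                          for i in range(max(0, x - 1), min(w, x + 2))]
            blurred[y][x] = sum(neighbours) / len(neighbours)
    # 3. Add the glow back on top of the original image.
    return [[min(1.0, image[y][x] + strength * blurred[y][x]) for x in range(w)]
            for y in range(h)]

glare = [[0.1, 0.1, 0.1],
         [0.1, 1.0, 0.1],
         [0.1, 0.1, 0.1]]
for row in naive_bloom(glare):
    print([round(p, 2) for p in row])
```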

And yeah, that is that really. I hope this helps you understand how the rendering pipeline works a bit better.

 
