Rasterizing versus painting
This post is meant as a layman's introduction to rendering with OpenGL. I hope it helps anyone involved in producing interactive graphics on computers.
Rasterizing is much like painting, except that:
- You use points, lines and triangles as primitives. These serve the same purpose as brushes: they leave their shape on the canvas.
- You need to draw a new image roughly every 0.011 seconds, to maintain the illusion that the image responds to what the viewer is doing.
- Instead of drawing the image yourself, you have to prepare the computer to draw it (a minimal per-frame loop is sketched after this list).
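To give a sense of what "preparing the computer" looks like, here is a minimal sketch of such a per-frame loop, assuming a window and GL context already created with GLFW; `draw_scene()` is a hypothetical placeholder for your own drawing code.

```c
#include <GLFW/glfw3.h>

void draw_scene(void);   /* hypothetical: where your drawing routines go */

/* A minimal per-frame loop: clear, draw, present, react. */
void render_loop(GLFWwindow *window)
{
    while (!glfwWindowShouldClose(window)) {
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);   /* start from a blank canvas */
        glClear(GL_COLOR_BUFFER_BIT);

        draw_scene();                            /* issue the actual draw calls */

        glfwSwapBuffers(window);                 /* present the finished image */
        glfwPollEvents();                        /* react to what the viewer is doing */
    }
}
```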
Because GPUs perform well on large workloads and badly on small ones, and you don't have a full second to do it, you often need to draw a whole bunch of things at once without changing your drawing routines or textures.
You want to group things so you can draw them all at once. You end up with something that resembles layers in a photo editor, except that every layer is procedural (a sketch of such layers follows the list):
- One layer only draws spaceships.
- Another layer only draws shrapnel.
- A third only renders planets, stars and asteroids.
- A fourth renders only text...
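As a rough sketch of how such procedural layers might look in code, assume each layer has already uploaded its batched geometry into a vertex array object and compiled its own shader program; the `layer` struct and its field names are made up for illustration, only the GL calls are real API.

```c
#include <glad/glad.h>   /* or any other GL function loader */

/* A hypothetical "layer": one shader program, one texture, one batch of geometry. */
struct layer {
    GLuint  program;      /* shader program used by this layer              */
    GLuint  texture;      /* texture atlas shared by everything in it       */
    GLuint  vao;          /* vertex array object holding the batched geometry */
    GLsizei index_count;  /* how many indices to draw                        */
};

/* Draw every layer with a single draw call each. */
void draw_layers(const struct layer *layers, int count)
{
    for (int i = 0; i < count; i++) {
        glUseProgram(layers[i].program);
        glBindTexture(GL_TEXTURE_2D, layers[i].texture);
        glBindVertexArray(layers[i].vao);
        glDrawElements(GL_TRIANGLES, layers[i].index_count,
                       GL_UNSIGNED_SHORT, (const void *)0);
    }
}
```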
Z-buffer and depth testing
If you have to draw every spaceship at once and then draw the planets, the planets would always occlude the spaceships. Besides, two spaceships could occlude each other. This is also known as the visibility problem.
Modern rasterizers use depth buffers to solve the visibility problem. In practice this means that during rasterizing you hold multiple canvases: one canvas keeps the color, and another captures the depth of what was drawn.
When depth testing is enabled, the rasterizer can check whether something already drawn occludes what is currently being drawn, and draw only at the places where it is not occluded.
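In OpenGL this amounts to a couple of state calls during setup and remembering to clear the depth canvas along with the color canvas every frame; a minimal sketch:

```c
#include <glad/glad.h>   /* or any other GL function loader */

/* Run once during setup. */
void enable_depth_testing(void)
{
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);   /* keep a fragment only if it is closer than what was drawn */
}

/* Run at the start of every frame: clear both the color and the depth canvas. */
void clear_canvases(void)
{
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
}
```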
Shaders
Shaders are programs that control different aspects of the rasterizer (a minimal pair is sketched after this list):
- A vertex shader controls how vertices are positioned on the screen, and what is passed on to the fragment shader.
- A fragment shader controls how every sample point on the screen is filled.
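As an illustration, here is roughly the simplest useful pair, written as GLSL source embedded in C strings the way you would hand it to glShaderSource(); the names a_position, a_color, u_mvp and v_color are my own choices for this sketch, not anything standard.

```c
/* Minimal GLSL shader pair, embedded as C strings for glShaderSource(). */
static const char *vertex_src =
    "#version 330 core\n"
    "layout(location = 0) in vec3 a_position;\n"   /* per-vertex attribute */
    "layout(location = 1) in vec3 a_color;\n"
    "uniform mat4 u_mvp;\n"                        /* transformation matrix */
    "out vec3 v_color;\n"                          /* passed on to the fragment shader */
    "void main() {\n"
    "    v_color = a_color;\n"
    "    gl_Position = u_mvp * vec4(a_position, 1.0);\n"
    "}\n";

static const char *fragment_src =
    "#version 330 core\n"
    "in vec3 v_color;\n"
    "out vec4 frag_color;\n"
    "void main() {\n"
    "    frag_color = vec4(v_color, 1.0);\n"       /* fill the sample point */
    "}\n";
```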
Vertices
The rasterizer needs to know where to draw every triangle, line and point. To do this you pass it vertices to draw from.
Each vertex can hold several attributes that are used to control how the triangle is positioned and filled. You get access to these attributes in the vertex shader.
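A sketch of how those attributes might be described to OpenGL, assuming a vertex that packs a position and a color into one struct; the struct itself is my invention for illustration, only the GL calls are real API.

```c
#include <stddef.h>      /* offsetof */
#include <glad/glad.h>   /* or any other GL function loader */

/* A hypothetical vertex layout: position plus color, both fed to the vertex shader. */
struct vertex {
    float position[3];
    float color[3];
};

/* Tell the rasterizer how to read the attributes out of the currently bound vertex buffer. */
void describe_vertex_attributes(void)
{
    glEnableVertexAttribArray(0);   /* attribute 0: position */
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(struct vertex),
                          (const void *)offsetof(struct vertex, position));

    glEnableVertexAttribArray(1);   /* attribute 1: color */
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(struct vertex),
                          (const void *)offsetof(struct vertex, color));
}
```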
Matrices, Models and affine transformations
If you only had to render something from one direction, you could position the triangles in a modelling program of your choice and run a vertex shader that just copies the positions from the attribute you pass in.
Often you want to draw something from many angles and in different poses, and you want to draw in perspective. So you position the triangles in three dimensions in such a way that they describe the shape you want to represent from every direction. When you do so, you are making a model.
Then, to display the model, you apply a transformation to its vertices. You want to uniformly scale, rotate and translate the model, and taper it for perspective, to get it displayed from different angles. Matrices serve this purpose, and you usually pass them into the shader as uniform values.
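Passing such a matrix into the shader might look like the sketch below, assuming the vertex shader declares `uniform mat4 u_mvp` (as in the earlier shader sketch) and that the matrix has already been computed elsewhere as 16 floats in column-major order, the layout OpenGL expects.

```c
#include <glad/glad.h>   /* or any other GL function loader */

/* Upload a model-view-projection matrix into the shader.
   "u_mvp" is the illustrative uniform name from the earlier shader sketch. */
void upload_mvp(GLuint program, const float mvp[16])
{
    GLint location = glGetUniformLocation(program, "u_mvp");
    glUseProgram(program);
    glUniformMatrix4fv(location, 1, GL_FALSE, mvp);
}
```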
A model itself isn't a strictly defined concept. To draw a model you may have to render it on multiple layers, and it doesn't even need to consist of vertices. Because it is such a complex concept, there is no single, good file format for representing models.
In the extreme, your model may include a whole set of programs describing how to draw it on the screen.
More under the hood
This was just a summary, but if it makes you feel bad, remember that the ability to tell how to draw something from multiple angles requires that you know a lot about what you're drawing.
If you haven't noticed yet, there is a humongous connection between physics and representational art, and it's hard to cheat. Unfortunately, everyone can see whether you can draw well-shaped and well-shaded abs when they can be viewed from every possible angle.