A simple (sort of) description of Pioneer's rendering.

FluffyFreak
Posts: 1341
Joined: Tue Jul 02, 2013 1:49 pm
Location: Beeston, Nottinghamshire, GB

A simple (sort of) description of Pioneer's rendering.

Post by FluffyFreak »

Introduction:
In the interests of getting more people involved in writing, editing and even just tweaking Pioneer's visuals, I'm going to try to write a short explanation of how the OpenGL-based shader/material system works.

Like any complex system there are custom tweaks and alterations in places, but this will give people a general overview.

First comes the really general rendering overview; the Pioneer-specific stuff follows, explaining how we set up the data, the materials and the shaders themselves.

Shaders?
Shaders can be thought of as small `C`-style programs that take some data, perform some operations on it, and pass the result back out to be processed, either by another shader or by the hardware, which outputs its value(s).

We only use two kinds of shader in Pioneer: Vertex and Fragment shaders.

This kind of setup first invokes the Vertex stage to transform the position of each of a model's vertices into the correct "space"... that's just graphics-coder talk for "move a point from where it started off to somewhere that I can see it" ;)

Once all of the vertices have been moved, the GPU hardware will do some clipping, hidden surface removal and other things, then it will start to render triangles to the screen/framebuffer/texture. When it actually renders triangles it will finally invoke the Fragment shader for every pixel.

Fragments are not actually pixels, but this distinction can be ignored for the time being, so just think of them as pixels for now; it's easier.
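If it helps to have a mental model, here's a tiny CPU-side sketch (plain C++, nothing to do with Pioneer's actual code, and rasterisation is skipped entirely) of how those two stages relate: one function runs once per vertex, then another runs once per pixel.

Code: Select all

// Purely conceptual: real shaders run on the GPU, and the hardware decides
// which pixels each triangle covers; none of that is modelled here.
struct Vec3 { float x, y, z; };
struct Colour { float r, g, b, a; };

// "Vertex stage": move a point to where the camera can see it.
static Vec3 VertexStage(const Vec3 &v) { return { v.x, v.y, v.z - 10.0f }; }

// "Fragment stage": decide the colour of one pixel.
static Colour FragmentStage(float u, float v) { return { u, v, 0.0f, 1.0f }; }

int main()
{
	Vec3 verts[3] = { { -1, -1, 0 }, { 1, -1, 0 }, { 0, 1, 0 } };
	for (Vec3 &v : verts)
		v = VertexStage(v); // invoked once for every vertex
	for (int y = 0; y < 4; ++y)
		for (int x = 0; x < 4; ++x)
			FragmentStage(x / 3.0f, y / 3.0f); // invoked once for every pixel
	return 0;
}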

Pioneer:
In Pioneer shaders are owned by Materials and are only used along with a model or other mesh.
NB: The "model" and "other mesh" might seem like an odd distinction but in Pioneer we also use a system of point-sprites.
These take a single vertex for each "point" and the GPU/driver then generates a screen-facing quad (2 triangles) from that single point.
That's just one example of the other things you might stumble upon; there's a rough sketch of that quad expansion just below.
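Just to make that concrete, here's a small sketch (not Pioneer code, the GPU/driver does this expansion for us) of what turning one point into a screen-facing quad amounts to, assuming we already know the camera's right and up directions; vector3f is the engine's own vector type.

Code: Select all

// Conceptual sketch: offset the single input point along the camera's axes
// to get the four corners of a quad (two triangles) that always faces the camera.
static void ExpandPointToQuad(const vector3f &centre, const vector3f &camRight,
	const vector3f &camUp, const float halfSize, vector3f out[4])
{
	out[0] = centre - camRight * halfSize - camUp * halfSize; // bottom-left
	out[1] = centre + camRight * halfSize - camUp * halfSize; // bottom-right
	out[2] = centre - camRight * halfSize + camUp * halfSize; // top-left
	out[3] = centre + camRight * halfSize + camUp * halfSize; // top-right
}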
Let's take one of the simplest examples in the game apart a little by looking at the Planet Rings.
Start by looking in "src/Planet.cpp" for the method Planet::GenerateRings, which I will cut and paste below:

Code: Select all

void Planet::GenerateRings(Graphics::Renderer *renderer)
{
	const SystemBody *sbody = GetSystemBody();

	m_ringVertices.Clear();

	// generate the ring geometry
	const float inner = sbody->GetRings().minRadius.ToFloat();
	const float outer = sbody->GetRings().maxRadius.ToFloat();
	int segments = 200;
	for (int i = 0; i <= segments; ++i) {
		const float a = (2.0f*float(M_PI)) * (float(i) / float(segments));
		const float ca = cosf(a);
		const float sa = sinf(a);
		m_ringVertices.Add(vector3f(inner*sa, 0.0f, inner*ca), vector2f(float(i), 0.0f));
		m_ringVertices.Add(vector3f(outer*sa, 0.0f, outer*ca), vector2f(float(i), 1.0f));
	}

	// generate the ring texture
	// NOTE: texture width must be > 1 to avoid graphical glitches with Intel GMA 900 systems
	//       this is something to do with mipmapping (probably mipmap generation going wrong)
	//       (if the texture is generated without mipmaps then a 1xN texture works)
	const int RING_TEXTURE_WIDTH = 4;
	const int RING_TEXTURE_LENGTH = 256;
	std::unique_ptr<Color, FreeDeleter> buf(
			static_cast<Color*>(malloc(RING_TEXTURE_WIDTH * RING_TEXTURE_LENGTH * 4)));

	const float ringScale = (outer-inner)*sbody->GetRadius() / 1.5e7f;

	Random rng(GetSystemBody()->GetSeed()+4609837);
	Color baseCol = sbody->GetRings().baseColor;
	double noiseOffset = 2048.0 * rng.Double();
	for (int i = 0; i < RING_TEXTURE_LENGTH; ++i) {
		const float alpha = (float(i) / float(RING_TEXTURE_LENGTH)) * ringScale;
		const float n = 0.25 +
			0.60 * noise(vector3d( 5.0 * alpha, noiseOffset, 0.0)) +
			0.15 * noise(vector3d(10.0 * alpha, noiseOffset, 0.0));

		const float LOG_SCALE = 1.0f/sqrtf(sqrtf(log1p(1.0f)));
		const float v = LOG_SCALE*sqrtf(sqrtf(log1p(n)));

		Color color;
		color.r = v*baseCol.r;
		color.g = v*baseCol.g;
		color.b = v*baseCol.b;
		color.a = ((v*0.25f)+0.75f)*baseCol.a;

		Color *row = buf.get() + i * RING_TEXTURE_WIDTH;
		for (int j = 0; j < RING_TEXTURE_WIDTH; ++j) {
			row[j] = color;
		}
	}

	// first and last pixel are forced to zero, to give a slightly smoother ring edge
	{
		Color *row;
		row = buf.get();
		memset(row, 0, RING_TEXTURE_WIDTH * 4);
		row = buf.get() + (RING_TEXTURE_LENGTH - 1) * RING_TEXTURE_WIDTH;
		memset(row, 0, RING_TEXTURE_WIDTH * 4);
	}

	const vector2f texSize(RING_TEXTURE_WIDTH, RING_TEXTURE_LENGTH);
	const Graphics::TextureDescriptor texDesc(
			Graphics::TEXTURE_RGBA_8888, texSize, Graphics::LINEAR_REPEAT, true, true, true, 0, Graphics::TEXTURE_2D);

	m_ringTexture.Reset(renderer->CreateTexture(texDesc));
	m_ringTexture->Update(
			static_cast<void*>(buf.get()), texSize,
			Graphics::TEXTURE_RGBA_8888);

	Graphics::MaterialDescriptor desc;
	desc.effect = Graphics::EFFECT_PLANETRING;
	desc.lighting = true;
	desc.textures = 1;
	m_ringMaterial.reset(renderer->CreateMaterial(desc));
	m_ringMaterial->texture0 = m_ringTexture.Get();

	Graphics::RenderStateDesc rsd;
	rsd.blendMode = Graphics::BLEND_ALPHA_PREMULT;
	rsd.cullMode = Graphics::CULL_NONE;
	m_ringState = renderer->CreateRenderState(rsd);
}
That might all look quite daunting but it's really just a few stages like so:
  • build a mesh to draw
  • generate a texture
  • create the material
  • assign the texture to the material
  • create a render state
You need the mesh so that you have something to draw. The `m_ringVertices` are actually initialised, and told to use the ATTRIB_POSITION & ATTRIB_UV0 attributes, in the constructor as below:

Code: Select all

static const Graphics::AttributeSet RING_VERTEX_ATTRIBS
	= Graphics::ATTRIB_POSITION
	| Graphics::ATTRIB_UV0;

Planet::Planet()
	: TerrainBody()
	, m_ringVertices(RING_VERTEX_ATTRIBS)
	, m_ringState(nullptr)
{
}
Those attributes define the kind of data which will be passed in with each vertex, and thus what data will be available for each vertex within the vertex shader.
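A loose way to picture those two attributes (this struct is purely illustrative, it's not how Pioneer's VertexArray actually stores its data) is that every ring vertex carries one field per enabled attribute, and each field shows up in the vertex shader under its own name:

Code: Select all

// Illustrative only: one position (ATTRIB_POSITION) and one texture
// coordinate (ATTRIB_UV0) per vertex, matching the two Add() arguments above.
struct RingVertex {
	vector3f position; // seen by the vertex shader as a_vertex
	vector2f uv;       // seen by the vertex shader as a_uv0
};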
We need the material because it creates our shaders. Materials also manage the data that we want to pass on to our shaders when we draw the mesh, so the material needs to be told about the texture it will use.
That all happens in this little block:

Code: Select all

	Graphics::MaterialDescriptor desc;
	desc.effect = Graphics::EFFECT_PLANETRING;
	desc.lighting = true;
	desc.textures = 1;
	m_ringMaterial.reset(renderer->CreateMaterial(desc));
	m_ringMaterial->texture0 = m_ringTexture.Get();
The call to "m_ringMaterial.reset(renderer->CreateMaterial(desc));" will create our material and its shaders, loading them from disk, compiling them, and performing error checking and validation, all based on the "desc.effect" chosen.
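If you're wondering what the "compiling" part involves: underneath it all the renderer ends up driving the standard OpenGL calls. This is only a sketch of compiling a single shader with plain OpenGL and checking for errors, not Pioneer's Program class (which also handles the #define injection and the linking of the vertex and fragment shaders into one program):

Code: Select all

#include <cstdio>
#include <GL/glew.h> // assumes some OpenGL function loader is available

// Sketch: compile one GLSL shader stage and print the driver's error log if
// compilation fails. stage is GL_VERTEX_SHADER or GL_FRAGMENT_SHADER.
static GLuint CompileShader(GLenum stage, const char *source)
{
	GLuint shader = glCreateShader(stage);
	glShaderSource(shader, 1, &source, nullptr);
	glCompileShader(shader);

	GLint ok = GL_FALSE;
	glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
	if (ok != GL_TRUE) {
		char log[1024];
		glGetShaderInfoLog(shader, sizeof(log), nullptr, log);
		std::fprintf(stderr, "shader compile failed:\n%s\n", log);
	}
	return shader;
}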

Here it's using "Graphics::EFFECT_PLANETRING", whose implementation is spread across four files: the RingMaterial header and source, and the "planetrings" vertex and fragment shaders, all of which are covered below.

RingMaterial:
Here we'll go through what happens in RingMaterial. It's not too complex, as there is mostly just some value and sanity checking before the call to Program to create the actual shaders.
It's very common to see in our code that we're building up a string of `#define`s; this works very similarly to `C`. Here we only add one, but you can add many more (there's a small sketch of that after the code below). We use these to conditionally compile code paths and values within the Vertex and Fragment shaders.

Code: Select all

Program *RingMaterial::CreateProgram(const MaterialDescriptor &desc)
{
	assert(desc.textures == 1);
	//pick light count and some defines
	unsigned int numLights = Clamp(desc.dirLights, 1u, 4u);
	std::string defines = stringf("#define NUM_LIGHTS %0{u}\n", numLights);
	return new Program("planetrings", defines);
}
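As promised above, adding further defines is nothing more exciting than appending to the same string. Both extra flags below are made up purely for illustration (the real RingMaterial only ever emits NUM_LIGHTS); the shaders would then pick them up with #ifdef.

Code: Select all

	// Hypothetical sketch only: the real RingMaterial stops at NUM_LIGHTS.
	std::string defines = stringf("#define NUM_LIGHTS %0{u}\n", numLights);
	if (desc.textures > 0)
		defines += "#define TEXTURE0 1\n"; // made-up flag, for illustration
	if (desc.lighting)
		defines += "#define LIT 1\n";      // made-up flag, for illustration
	return new Program("planetrings", defines);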
After this method has been called we actually have a valid material. In the GenerateRings code earlier you can see that we also apply a texture to it, and that's it, ready to render...
At the moment this all looks really simple, because we're hiding some of the complexity.

If you return to the file "Planet.cpp" you'll also find the method "Planet::DrawGasGiantRings" which I've edited below:

Code: Select all

void Planet::DrawGasGiantRings(Renderer *renderer, const matrix4x4d &modelView)
{
	renderer->SetTransform(modelView);
	renderer->DrawTriangles(&m_ringVertices, m_ringState, m_ringMaterial.get(), TRIANGLE_STRIP);
}
The rings are drawn here at the very end, and the parameters are the mesh, the render state, our material, and the way that the mesh is joined together into triangles (TRIANGLE_STRIP in this case).

Somewhere within that call to "DrawTriangles" the material will be applied.
Calling Apply actually does quite a lot of work: it tells the GPU driver what data we want made available to our shader programs.

Code: Select all

void RingMaterial::Apply()
{
	OGL::Material::Apply();

	assert(this->texture0);
	m_program->texture0.Set(this->texture0, 0);

	//Light uniform parameters
	for( Uint32 i=0 ; i<m_renderer->GetNumLights() ; i++ ) {
		const Light& Light = m_renderer->GetLight(i);
		m_program->lights[i].diffuse.Set( Light.GetDiffuse() );
		m_program->lights[i].specular.Set( Light.GetSpecular() );
		const vector3f& pos = Light.GetPosition();
		m_program->lights[i].position.Set( pos.x, pos.y, pos.z, (Light.GetType() == Light::LIGHT_DIRECTIONAL ? 0.f : 1.f));
	}
}
"OGL::Material::Apply();" at the top of the method tells the GPU to use this program and sets a commonly used Uniform value
A "Uniform" is a constant value or piece of data that is constant for every call to your shaders, so every Vertex that shader runs on will see exactly the same value.

This is different from having a `const` value in a shader, because a Uniform can be changed each time/instance that you render a mesh, whereas changing a const value in a shader would require you to edit and rebuild the shader.
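To make that concrete, here's a small sketch in plain OpenGL (not Pioneer's Uniform wrapper), assuming a compiled program and some bound vertex data already exist; the uniform name "u_tint" is made up for the example. The same mesh is drawn twice with a different uniform value each time, and nothing gets recompiled.

Code: Select all

	// "u_tint" is a hypothetical uniform; a const in the GLSL source could not
	// be changed like this without editing and rebuilding the shader.
	const GLint tintLoc = glGetUniformLocation(program, "u_tint");

	glUseProgram(program);
	glUniform4f(tintLoc, 1.0f, 0.0f, 0.0f, 1.0f); // draw the mesh tinted red
	glDrawArrays(GL_TRIANGLE_STRIP, 0, vertexCount);

	glUniform4f(tintLoc, 0.0f, 0.0f, 1.0f, 1.0f); // same mesh again, now blue
	glDrawArrays(GL_TRIANGLE_STRIP, 0, vertexCount);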

We use Uniform values a lot and so there are a number of defaults created and initialised within the "Program" class. These are accessed by the inherited "m_program" member but you can add your own Uniform values within shaders.
RingMaterial uses only the base class's Uniform members, so above we see it telling the GPU about the texture and lights that it's going to use.
This data will be unchanging and available across all of our shaders.

At this point the whole process is set up; the DrawTriangles call will execute the draw, invoking the shaders.

RingMaterial - Shaders:
I mentioned the shader files before, but here they are again, "planetrings.vert" and "planetrings.frag", so you can follow along. In the shading pipeline there is a strict order in which the GPU drivers must make it look like things happen: the VERTEX shader must run before the FRAGMENT shader, so that the output of the VERTEX shader is available as input data for the FRAGMENT shader.

This is "planetrings.vert" to which I have added some comments:

Code: Select all

#include "attributes.glsl"	// non-standard include system we wrote for Pioneer
#include "logz.glsl"		// these are our own libraries which are common to most of our shaders
#include "lib.glsl"

out vec2 texCoord0;	// output variables
out vec4 texCoord1;

void main(void)
{
	// logarithmicTransform() uses a_vertex & uViewProjectionMatrix defined in "attributes.glsl"
	gl_Position = logarithmicTransform();	// found in "logz.glsl"

	// here we're passing our vertex attributes straight through so the fragment shader can use them
	texCoord0 = a_uv0.xy; // a_uv0 defined in "attributes.glsl" and activated by the ATTRIB_UV0 flag
	texCoord1 = a_vertex; // a_vertex defined in "attributes.glsl" and activated by the ATTRIB_POSITION flag
}
This simple shader takes the vertex data passed in and transforms it using our library function "logarithmicTransform()"; this puts the vertex in its final (possibly on-screen) position. The result goes into gl_Position, which is one of the built-in variables made available by GLSL.

Next we store some additional data for output to the fragment shader in the form of texCoord0 and texCoord1.
These are differently sized variables, being vec2 and vec4 respectively.
We could perform some clever operations, checks, etc. here, but apparently we just need them unmodified this time.

This is "planetrings.frag" to which I have added some comments:

Code: Select all

#include "attributes.glsl"	// non-standard include system we wrote for Pioneer
#include "logz.glsl"		// these are our own libraries which are common to most of our shaders
#include "lib.glsl"

uniform sampler2D texture0;	// the texture uniform, same name as defined in MaterialGL.cpp
in vec2 texCoord0;	// values coming in from the vertex shader
in vec4 texCoord1;

out vec4 frag_color;	// out from the fragment? Yes! It's our final colour value as seen on screen!

void main(void)
{
	// Calculating some colours for the pixel.
	// Bits of ring in shadow!
	vec4 col = vec4(0.0);	// initialise, if you don't you get junk data
	vec4 texCol = texture(texture0, texCoord0);	// read the colour data from the texture

	// now this is clever: work out whether this bit of the ring is in the planet's shadow!
	for (int i=0; i<NUM_LIGHTS; ++i) {
		float l = findSphereEyeRayEntryDistance(-vec3(texCoord1), vec3(uViewMatrixInverse * uLight[i].position), 1.0);
		if (l <= 0.0) {
			col = col + texCol*uLight[i].diffuse;
		}
	}
	col.a = texCol.a;	// colour was calculated above, but restore the alpha channel from the texture colour
	frag_color = col;	// FINALLY store the colour in our output

	SetFragDepth();		// another of our library functions that stores the logarithmic depth value
}
The first important line is actually "out vec4 frag_color" which, thanks to some code in "Program.cpp", tells the compiler where we'll be outputting the final colour value for the pixel.

The next is "SetFragDepth", one of our helper functions, which writes our logarithmic depth value rather than the usual linear one.

You can try hacking around with the colours above. For example: Replace the clever stuff with something like "col.xyz = vec3(1.0, 0.0, 0.0);" to output a RED gas giant ring. It really is pretty simple to do basic bits like that.

That's the end of the process: getting a coloured pixel of your choice onto the screen.
Hidden from view is the depth buffer, which I've lightly alluded to, but it's handled entirely by that call to "SetFragDepth" so I wouldn't worry about it too much :)

Andy
laarmen
Posts: 34
Joined: Fri Jul 05, 2013 8:49 am

Re: A simple (sort of) description of Pioneer's rendering.

Post by laarmen »

Sir, you shall have your beer, at your earliest convenience.
impaktor
Posts: 991
Joined: Fri Dec 20, 2013 9:54 am
Location: Tellus

Re: A simple (sort of) description of Pioneer's rendering.

Post by impaktor »

Excellent! Not that I intend to hack on this code at the moment, but stuff like this really helps those who do, like the potential future me.
laarmen
Posts: 34
Joined: Fri Jul 05, 2013 8:49 am

Re: A simple (sort of) description of Pioneer's rendering.

Post by laarmen »

Question/clarification: about the const vs uniform distinction, uniform means data that will be the same for all vertices but will not be constant across frames, right?
FluffyFreak
Posts: 1341
Joined: Tue Jul 02, 2013 1:49 pm
Location: Beeston, Nottinghamshire, GB

Re: A simple (sort of) description of Pioneer's rendering.

Post by FluffyFreak »

laarmen, yeah that's right.
Uniforms you can change every time that you render the object using the shader.
A const is just unchanging forever, a value hard coded into the shader itself.
FluffyFreak
Posts: 1341
Joined: Tue Jul 02, 2013 1:49 pm
Location: Beeston, Nottinghamshire, GB

Re: A simple (sort of) description of Pioneer's rendering.

Post by FluffyFreak »

HOW TO START LEARNING GRAPHICS PROGRAMMING?
https://interplayoflight.wordpress.com/ ... ogramming/
impaktor
Posts: 991
Joined: Fri Dec 20, 2013 9:54 am
Location: Tellus

Re: A simple (sort of) description of Pioneer's rendering.

Post by impaktor »

Dropping this shader tutorial here, as it seems interesting:

"A Journey Into Shaders"
https://www.mayerowitz.io/blog/a-journey-into-shaders