
Doom glare implementation

This is an implementation of this article by Simon, using C++ and OpenGL. This article uses some images from http://simonschreibt.de.

I decided to implement this effect for my game Yzer. In this post I will try to show how I managed to recreate this effect using C++. Now first things first: what am I trying to create?

[Image: the glare effect we are trying to recreate]

From Simon's blogpost:

If you don’t have much graphic memory and you don’t want to let artist spend a lot of time in creating this effect by hand, i think this technique is awesome. You only have to create a polygon (quad) and add a predefined material to it. Also you can set the parameters right in the material which gives you a good control.

This effect looked like a normal blurred bloom effect to me, nothing special, I thought. But then I looked at the wireframe:

[Image: wireframe view of the glare mesh]

This isn't a post-process effect at all! It's a mesh whose vertices are updated every frame. Alright, let's analyze it. The middle quad is static and does not animate. Every edge of that quad is extruded to create the fake glow effect. The animated GIF is mesmerizing and distracting to me, so I took one frame and looked for clues.

[Image: a single frame of the effect with the four clues annotated]

  1. This looks like a simple extrude towards the camera's up vector.
  2. Looks the same as (1), but extruded towards the inverse of the camera's up vector (down).
  3. I noticed the angle between these edges is ALWAYS 90 degrees. This must be calculated in screen space.
  4. Angles a and b are always identical, so this point is averaged out between the two edges.

At this point I do not have all the answers, but it is enough to start coding some parts of this effect.

Let us start with the basics: what do we see in the middle of the mesh? I see a quad.

A quad to me is this:

const int edgeCount = 4;
Vector3 quadPoints[edgeCount];
quadPoints[0] = Vector3(50.0f, 65.0f, 100.0f);
quadPoints[1] = Vector3(250.0f, 65.0f, 100.0f);
quadPoints[2] = Vector3(250.0f, 65.0f, 50.0f);
quadPoints[3] = Vector3(50.0f, 65.0f, 50.0f);

These points are defined in world space. Because we need to do some calculations in screen space, we have to transform them first. How do we do this? Using the camera's view and projection matrices. Here is how:

Matrix4 viewProj = m_activeCamera->GetView() * m_activeCamera->GetProjection();
Matrix4 invViewProj = viewProj.inverse();    

Vector4 screenQuadPoints[edgeCount];

// lets project the points to screen space
for (int i = 0; i < edgeCount; ++i) {
	Vector3& v = quadPoints[i];
	screenQuadPoints[i] = GetScreenFromWorld(v, viewProj);
}

We create a viewProj matrix by multiplying the view and projection matrices, and we invert it because we will need the inverse when converting from screen space back to world space. The function GetScreenFromWorld() is a helper that does the coordinate conversion. It looks something like this:

Vector4 GetScreenFromWorld(const Vector3& world, const Matrix4& viewProj) {
	Vector4 pos(world.x, world.y, world.z, 1.0f); // needs to be 1.0f to enable translation
	pos = pos * viewProj; // applies the transform to the vector
	pos /= pos.w; // perspective divide
	return pos;
}

This function gives us a coordinate in screen space. Note that this is not window space: the coordinates are now normalized device coordinates (NDC), ranging from -1.0 to +1.0. Conversion to window space is not needed for this effect. I think an image will help show what we are working with now.

[Image: the quad's points and edges in screen space]

As you can see, Edge 0 goes from screenQuadPoints[0] to screenQuadPoints[1]. We take the direction of this edge and rotate it 90 degrees. The result is the normal of this edge.

Vector4 diff = screenQuadPoints[0] - screenQuadPoints[1];
Vector2 edge(diff.x, diff.y);

// Find the axis perpendicular to the current edge
Vector2 axis = Vector2(-edge.y, edge.x);
axis.normalize();

Now we need to push the edge points using the axis vector. This will push the edge away from the center, and should hopefully start to look like an extrude. Let's calculate these points.

// push the edge away using that axis and store those points
Vector2 p1 = ToVec2(screenQuadPoints[0]) + axis * pushDistance;
Vector2 p2 = ToVec2(screenQuadPoints[1]) + axis * pushDistance;

The line ToVec2(screenQuadPoints[0]) + axis * pushDistance takes a point and adds the axis normal we just calculated. Of course, a normalized vector only has a length of 1.0f; if we want a nice big blur, we need to push it out further using pushDistance.
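ToVec2 isn't shown in the post; I assume it simply drops the z and w components of the projected point, something like this minimal sketch:

Vector2 ToVec2(const Vector4& v) {
	// keep only the screen-space x/y; the push happens in 2D
	// (an assumption, the original helper is not listed in the post)
	return Vector2(v.x, v.y);
}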

[Image: one edge pushed outwards along its normal]

This is the result if we do this for all edges:

[Image: all four edges pushed outwards]

Doing this is kind of problematic: if we extrude the edge in screen space, the glow size will also be fixed in screen space. It would be odd if a glare like this were the same size up close as far away. To fix this, we take the calculated points p1 and p2 and convert them back to world space. Then we find the world space direction originating from our "source" points (quadPoints[0] and quadPoints[1]).

// push the edge away using that axis and store those points
Vector2 p1 = ToVec2(screenQuadPoints[0]) + axis * pushDistance;
Vector2 p2 = ToVec2(screenQuadPoints[1]) + axis * pushDistance;

// convert back to world space
Vector3 pushedPointInWS1 = GetWorldFromScreen(Vector4(p1.x, p1.y, screenQuadPoints[0].z, 1.0f), invViewProj);
Vector3 pushedPointInWS2 = GetWorldFromScreen(Vector4(p2.x, p2.y, screenQuadPoints[1].z, 1.0f), invViewProj);
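GetWorldFromScreen() is the counterpart of GetScreenFromWorld(). The post does not list it, but a minimal sketch, assuming the same row-vector conventions as above, could look like this:

Vector3 GetWorldFromScreen(const Vector4& screen, const Matrix4& invViewProj) {
	Vector4 pos = screen * invViewProj; // transform from NDC back to world space
	pos /= pos.w; // undo the perspective divide
	return Vector3(pos.x, pos.y, pos.z);
}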

You might know that unprojecting a screen space point back to world space has an infinite number of correct answers (a whole ray of points). The trick here is to reuse the z value of the "source" point when converting back to world space. This makes sure that the distance between p1 and the camera is about the same as the distance between screenQuadPoints[0] and the camera, which is important for depth tests in the scene (objects in front of the effect will properly occlude it, as you would expect).
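The next snippet stores its results in pushedEdges, which is never declared in the shown code; a small struct along these lines (my assumption, member names taken from the snippets) would hold the data for each edge:

struct PushedEdge {
	Vector3 w1;          // pushed world space copy of the edge's first point
	Vector3 w2;          // pushed world space copy of the edge's second point
	Vector3 sharedPoint; // corner point shared with the neighbouring edge
};

PushedEdge pushedEdges[edgeCount];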

Vector3 pushedDir1 = pushedPointInWS1 - quadPoints[0];
pushedDir1.normalize(); // find the world space direction towards the pushed point
pushedEdges[0].w1 = quadPoints[0] + pushedDir1 * pushDistance; // now we push in world space!

Vector3 pushedDir2 = pushedPointInWS2 - quadPoints[1];
pushedDir2.normalize(); // find the world space direction towards the pushed point
pushedEdges[0].w2 = quadPoints[1] + pushedDir2 * pushDistance; // now we push in world space!

Alright, pushedEdges[0] now contains the two points we pushed away in world space. We are almost done; we still need to find the points that connect these extruded edges. This is a very similar process. First, the crappy image to show what I mean:

[Image: averaging two pushed points to find the shared corner point]

We average the two neighbouring pushed points to get point A. Then we take the direction from the quad's corner to A, normalize it, and use it to push A out to B.

And the code that actually does it:

// calc averaged point between the 2 edges
Vector3 p = pushedEdges[0].w1 + (pushedEdges[3].w2 - pushedEdges[0].w1) * 0.5f; // this finds the world space point halfway between 2 points
Vector3 dir = p - quadPoints[0]; // get the direction from the point itself
dir.normalize();
pushedEdges[0].sharedPoint = quadPoints[0] + dir * pushDistance; // now we push in world space!

Now all that remains is a basic triangulation of these points, sketched below, and we are done!
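The post leaves the triangulation itself to the reader. Here is a minimal sketch, assuming pushedEdges[i] runs from quadPoints[i] to quadPoints[(i + 1) % 4], that pushedEdges[i].sharedPoint sits at corner i, and a plain triangle list with consistent winding:

#include <vector>

std::vector<Vector3> verts;
auto addTri = [&](const Vector3& a, const Vector3& b, const Vector3& c) {
	verts.push_back(a); verts.push_back(b); verts.push_back(c);
};

// the static center quad
addTri(quadPoints[0], quadPoints[1], quadPoints[2]);
addTri(quadPoints[0], quadPoints[2], quadPoints[3]);

for (int i = 0; i < edgeCount; ++i) {
	int next = (i + 1) % edgeCount;
	// quad between the original edge and its pushed copy
	addTri(quadPoints[i], pushedEdges[i].w1, pushedEdges[i].w2);
	addTri(quadPoints[i], pushedEdges[i].w2, quadPoints[next]);
	// corner fan connecting this edge's end to the next edge's start via the shared point
	addTri(quadPoints[next], pushedEdges[i].w2, pushedEdges[next].sharedPoint);
	addTri(quadPoints[next], pushedEdges[next].sharedPoint, pushedEdges[next].w1);
}

If your backface culling disagrees with the result, flip the winding order.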

[Image: the triangulated glare mesh]

Here are some images of the results:

[Image: the finished effect in Yzer]

To get the fade-out working when we look at the quad from the side, we use the dot product of the quad's normal and the direction from the quad's center to the camera. We plug this into the alpha of the mesh:

// normal is the quad's normal, centerPoint is the center of the quad
float dot = normal.dot((GetCamera().getPosition() - centerPoint).normalize());
float alpha = Math::LinearMap(dot, 0.001f, 0.1f, 0.0f, 1.0f); // remap the dot product to a 0..1 alpha
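Math::LinearMap() is not shown in the post either. I assume it remaps a value from one range to another and clamps the result, roughly like this:

namespace Math {
float LinearMap(float value, float inMin, float inMax, float outMin, float outMax) {
	float t = (value - inMin) / (inMax - inMin); // where value sits in the input range
	if (t < 0.0f) t = 0.0f; // clamp so the result stays inside the output range
	if (t > 1.0f) t = 1.0f;
	return outMin + t * (outMax - outMin);
}
}

With the 0.001f to 0.1f input range, the alpha ramps from 0 to 1 over a very narrow band of angles, so the glare only disappears when the camera is almost in the plane of the quad.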

Anyway, that's it! For comments please go to r/gamedev, where this article has been posted as well.

UPDATE: I have added the full source for a fully working Unity implementation on GitHub.