This page is not a tutorial on how to render a terrain with a heightmap, but rather a summary of the algorithms I improved or created to render the best possible heightmap-based terrain. These features were implemented for the project of the Introduction to Computer Graphics course given at EPFL (CS-341), and the complete source code can be found here. Also note that, as an effort to make the code snippets readable without having to read the source code, the snippets use the positive Y axis as the up vector (instead of Z in my code), and some variable names have been changed. The features I implemented that did not require particular innovation on my part are listed at the end of this page along with the tutorials or papers I used. You can visit my web page at jadkhoury.github.com to take a look at my other projects, and don't hesitate to contact me if you have any questions.
Cheers !
Jad
Instead of loading a heightmap generated somewhere else into the program as a texture, the first step is to render it ourselves on the GPU. To make it procedural, the heightmap generation needs to take as a parameter the position of the point we want to generate, so that when we later move the user's current position, the program regenerates the heightmap taking the new position into account. Heightmaps are actually noise textures manipulated so that the result, even if generated quasi-randomly, visually corresponds to a physical terrain. This is why the heightmap is often referred to as "noise" or "noise texture" in my code.
Since we want to generate a texture, the C++ part of the code looks exactly like a program that renders a quad:
//Code from Noise.h in the source code
const GLfloat vertex_point[] = { /*V1*/ -1.0f, -1.0f, 0.0f,
                                 /*V2*/ +1.0f, -1.0f, 0.0f,
                                 /*V3*/ -1.0f, +1.0f, 0.0f,
                                 /*V4*/ +1.0f, +1.0f, 0.0f};
// buffer
glGenBuffers(1, &vertex_buffer_object_);
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer_object_);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertex_point), vertex_point, GL_STATIC_DRAW);

GLuint vertex_point_id = glGetAttribLocation(program_id_, "vpoint");
glEnableVertexAttribArray(vertex_point_id);
glVertexAttribPointer(vertex_point_id, 3, GL_FLOAT, DONT_NORMALIZE,
                      ZERO_STRIDE, ZERO_BUFFER_OFFSET);

const GLfloat vertex_texture_coordinates[] = { /*V1*/ 0.0f, 0.0f,
                                               /*V2*/ 1.0f, 0.0f,
                                               /*V3*/ 0.0f, 1.0f,
                                               /*V4*/ 1.0f, 1.0f};
glGenBuffers(1, &vertex_buffer_object_);
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer_object_);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertex_texture_coordinates),
             vertex_texture_coordinates, GL_STATIC_DRAW);

GLuint vertex_texture_coord_id = glGetAttribLocation(program_id_, "vtexcoord");
glEnableVertexAttribArray(vertex_texture_coord_id);
glVertexAttribPointer(vertex_texture_coord_id, 2, GL_FLOAT, DONT_NORMALIZE,
                      ZERO_STRIDE, ZERO_BUFFER_OFFSET);
The vertex shader is even simpler:
//Code from noise_vshader.glsl in the source
in vec3 vpoint;
in vec2 vtexcoord;
out vec2 uv;

void main() {
    gl_Position = vec4(vpoint, 1.0);
    uv = vtexcoord;
}
The goal is to find a way to create a heightmap resulting in a realistic mountainous terrain. The first intuition would be to superpose different instantiations of Fractional Brownian Motion, shifted and/or rotated at each iteration. The problem with this approach is that the rendered terrain looks very "roundish", as FBM doesn't generate peaks and trenches.
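As a point of reference for what FBM actually computes, here is a minimal CPU-side sketch. It is not taken from the source: the sin-hash valueNoise below is a purely illustrative stand-in for the smooth noise the shader uses, so only the octave-summing structure matters here.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical stand-in for the shader's 2D noise: deterministic in (x, y),
// returns a pseudo-random value in [-1, 1). NOT smooth; illustration only.
float valueNoise(float x, float y) {
    float s = std::sin(x * 12.9898f + y * 78.233f) * 43758.5453f;
    return 2.0f * (s - std::floor(s)) - 1.0f;
}

// Fractional Brownian Motion: sum octaves of noise, doubling the frequency
// (lacunarity = 2) and halving the amplitude (gain = 0.5) at each octave.
float fbm(float x, float y, int octaves = 5) {
    float sum = 0.0f, amplitude = 0.5f, frequency = 1.0f;
    for (int i = 0; i < octaves; ++i) {
        sum += amplitude * valueNoise(x * frequency, y * frequency);
        frequency *= 2.0f;
        amplitude *= 0.5f;
    }
    return sum; // |sum| < 1 since the amplitudes form a geometric series
}
```

Because each octave only adds ever-smaller wiggles on top of the previous ones, the result has detail at all scales but no sharp ridges, hence the "roundish" look.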
This paper by F. Kenton Musgrave proposes a Hybrid Multifractal algorithm that generates exactly the kind of peaks and trenches we're looking for, but a single iteration of this algorithm only produces lines of mountains separated by very smooth flat areas (cf. the "ridges" in the paper). The best compromise was to combine iterations of both FBM and Hybrid Multifractal to get the look needed (in the fragment shader):
//Code from noise_fshader.glsl in the source
float getTerrainHeight(vec2 pos){
    vec2 p = pos*density;
    float b2 = fbm(p*10.0)*0.2;
    float h1 = hybridMultifractal(p/8.0, H, lacunarity, octaves, offset, gain);
    float h2 = hybridMultifractal(p/3.0, H, lacunarity, octaves, offset, gain/2.0)*2.0;
    float h3 = hybridMultifractal(p*2.0, H, lacunarity, octaves, offset, gain)*0.3;
    return b2+h1+h2+h3-0.8;
}
The density variable is a simple float used to spread or condense the heightmap.
By playing with the position passed to the hybridMultifractal function and the scaling applied to its returned height value, we can superpose layers of terrain to get more or less detail, and more or less ruggedness.
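For readers who don't want to dig through Musgrave's paper, here is a CPU-side sketch of the Hybrid Multifractal idea. It again uses a hypothetical sin-hash noise as a stand-in for the shader's smooth noise, and omits the extra gain parameter the shader version takes; the key point is the running weight, which lets detail accumulate on high areas while keeping low areas smooth.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical noise stand-in in [-1, 1); the real shader uses smooth noise.
float noise2(float x, float y) {
    float s = std::sin(x * 12.9898f + y * 78.233f) * 43758.5453f;
    return 2.0f * (s - std::floor(s)) - 1.0f;
}

// Hybrid multifractal after Musgrave: each octave is weighted by the running
// product of previous signals (clamped to 1), so already-high regions pick up
// peaks and ridges while valleys stay smooth.
float hybridMultifractal(float x, float y, float H, float lacunarity,
                         int octaves, float offset) {
    float exponent = 1.0f; // pow(lacunarity, -0*H) for the first octave
    float result = (noise2(x, y) + offset) * exponent;
    float weight = result;
    for (int i = 1; i < octaves; ++i) {
        x *= lacunarity; y *= lacunarity;
        if (weight > 1.0f) weight = 1.0f;
        exponent *= std::pow(lacunarity, -H);
        float signal = (noise2(x, y) + offset) * exponent;
        result += weight * signal;
        weight *= signal; // heavier weight where the terrain is already high
    }
    return result;
}
```

Typical parameter values in Musgrave's examples are H around 0.25, lacunarity 2, and offset around 0.7.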
Now that we know how to generate a heightmap value for a given UV coordinate, all that remains to do in main() is to write it out as the fragment color:
//Code from noise_fshader.glsl in the source
void main(){
    color = getTerrainHeight(uv);
}
And the heightmap is done! All that remains is to create a 1-channel float framebuffer (using the GL_R32F format), bind it, render the heightmap to texture, and use it like one would use a heightmap loaded from a file.
When generating the heightmap this way, we have little choice but to leave the camera stationary and actually move the position used to compute the heightmap. To do so, we store the accumulated ground displacement in a variable. Assuming the up vector is (0,1,0), each time the C++ program wants to move in a certain direction, we update the heightmap displacement vector as displacement += vec2(move.x, move.z), then pass this vector to the fragment shader of the heightmap generator and use it to compute the new heightmap as getTerrainHeight(uv + displacement).
With this implementation, the heightmap is indeed infinite, but the smallest displacement causes the whole heightmap to be recomputed. Yet, as shown in the diagram, when moving along the displacement vector d, only a small fraction of the heightmap is new (in red). To avoid this waste of time and resources, the solution is to read the previous value of the heightmap where possible, and compute new points only where necessary.
To do so, we need to pass to the fragment shader not only the accumulated displacement vector, but also the vector representing the new displacement for that frame. This vector lets us define which points of the texture we have to compute, and which we can read back from our previous computation.
We now have to be careful with our use of displacement and new_displacement. When computing new points, we use the total displacement as before. But when reading from the previous heightmap, the previously generated point already took into account the displacement accumulated up to the previous frame, so we only have to shift the UV we read from by the new displacement, giving us the following code:
//Code from noise_fshader.glsl in the source
//Compute the new point if necessary, read it if possible
bool xReadZone = (new_displacement.x > 0) ? (uv.x < (1.0 - new_displacement.x))
                                          : (uv.x > -new_displacement.x);
bool yReadZone = (new_displacement.y > 0) ? (uv.y < (1.0 - new_displacement.y))
                                          : (uv.y > -new_displacement.y);
if(xReadZone && yReadZone){
    pos = uv + new_displacement;
    h = readLastBuffer(pos);
} else {
    pos = uv + displacement;
    h = getTerrainHeight(pos);
}
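The read-zone test can be checked in isolation on the CPU. The canReuse helper below is hypothetical (it is not a function from the source), but it mirrors the shader logic exactly: a texel is reusable only if shifting it by the frame's displacement still lands inside the previous texture.

```cpp
#include <cassert>

// Returns true if the texel at (u, v) in [0,1]^2 can be copied from the
// previous heightmap instead of recomputed, given this frame's displacement
// (dx, dy). Mirrors the xReadZone/yReadZone test from the fragment shader.
bool canReuse(float u, float v, float dx, float dy) {
    bool xReadZone = (dx > 0.0f) ? (u < 1.0f - dx) : (u > -dx);
    bool yReadZone = (dy > 0.0f) ? (v < 1.0f - dy) : (v > -dy);
    return xReadZone && yReadZone;
}
```

For example, after moving right by 0.1 UV units, a texel at u = 0.5 is still covered by the old texture, while one at u = 0.95 has to be computed from scratch.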
One last detail to take into account: if we allow interpolation in the float framebuffer used for the heightmap, then each time we move (up to 60 times per second) we smooth the heightmap, turning it into a roundish terrain within a few seconds. The solution is to discretize the accumulated and new displacement vectors according to the dimensions of the framebuffer:
//Code from Noise.h in the source
float roundToPrecision(float precision, float value){
    return precision * round(value/precision);
}

vec2 roundToPrecision(float precision, vec2 value){
    float x = roundToPrecision(precision, value.x);
    float y = roundToPrecision(precision, value.y);
    return vec2(x,y);
}

void move(vec2 delta){
    //We set the displacement to a multiple of the "texel size" such that the
    //interpolation doesn't smooth the terrain
    vec2 d = roundToPrecision(1.0/float(tex_size_), delta);
    new_displacement_ = d;
    displacement_ = displacement_ + d;
    displaced_ = true;
}
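To see the snapping at work, here is the scalar helper in plain C++ (using std::round), assuming for illustration a 1024x1024 heightmap texture: every displacement becomes an exact multiple of one texel, so reads from the previous buffer always land on texel centers and interpolation never blurs anything.

```cpp
#include <cassert>
#include <cmath>

// Same rounding helper as in Noise.h: snap `value` to the nearest multiple
// of `precision` (here, one texel of the heightmap framebuffer).
float roundToPrecision(float precision, float value) {
    return precision * std::round(value / precision);
}
```

For a 1024-texel texture the precision is 1/1024; a requested move of 0.0013 snaps to exactly one texel, while a move of 0.0004 snaps to zero (no redraw needed at all).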
The optimization is almost ready now. The only remaining problem is that we want to read the previous heightmap in order to generate the updated one, and it is not possible to read from a framebuffer while also writing into it. So we have to create a second buffer in exactly the same manner as the first one. In addition to this cloned buffer, we need a variable storing which buffer we last wrote to (e.g. int last_buffer = 1 //or 2), along with a GLuint variable pointing to the current framebuffer texture; where we previously passed the framebuffer texture, we now pass this variable. Moreover, at the first frame, we have to tell the shader code of the heightmap generator to render the whole heightmap (since there is nothing to read yet), and then swap the read and write buffers at each frame.
Given that modified represents whether some parameter of the heightmap changed, and displaced is true if there is a new displacement since the last frame, here is what the code looks like in the fragment shader of the heightmap generator:
//Code from noise_fshader.glsl in the source
float readLastBuffer(vec2 uv){
    return (last_buffer == 1) ? texture(height_map, uv).r
                              : texture(height_map2, uv).r;
}

//Compute the new point if necessary, read it if possible
void main() {
    //First Frame
    if (pass == 1){
        color = getTerrainHeight(uv);
    //Any other frame
    } else {
        vec2 pos;
        float h;
        if(modified > 0){
            pos = uv + displacement;
            h = getTerrainHeight(pos);
        } else {
            if(displaced == 0){
                pos = uv;
                h = readLastBuffer(pos);
            } else {
                bool xReadZone = (new_displacement.x > 0) ? (uv.x < (1.0 - new_displacement.x))
                                                          : (uv.x > -new_displacement.x);
                bool yReadZone = (new_displacement.y > 0) ? (uv.y < (1.0 - new_displacement.y))
                                                          : (uv.y > -new_displacement.y);
                if(xReadZone && yReadZone){
                    pos = uv + new_displacement;
                    h = readLastBuffer(pos);
                } else {
                    pos = uv + displacement;
                    h = getTerrainHeight(pos);
                }
            }
        }
        color = h;
    }
}
//Code from main.cpp in the source
// read buffer 1, write in 2
if(last_buffer == 1){
    height_buffer2_.bind();
    initViewport(tex_size,tex_size);
    noise_.drawHeightMap(2, last_buffer);
    height_buffer2_.unBind();
    last_buffer = 2;
    current_noise_buffer_ = fb_tex_noise_2_;
// read buffer 2, write in 1
} else if(last_buffer == 2) {
    height_buffer_.bind();
    initViewport(tex_size,tex_size);
    noise_.drawHeightMap(2, last_buffer);
    height_buffer_.unBind();
    last_buffer = 1;
    current_noise_buffer_ = fb_tex_noise_;
}
I assume the reader knows the basics of shading in OpenGL (i.e. how to implement Phong shading). Now that we have our beautiful terrain generated from the heightmap, we would like to texture it as a function of the fragment's height: sand at low altitudes, grass at middle altitudes, and snow at high altitudes. One thing to avoid is abrupt changes in texturing, which result in ugly transitions between snow, grass and sand. So in addition to simply testing the height of the fragment against altitude limits, we create a "mixing zone" around each limit where we linearly mix the textures above and below it:
//Code from terrain_fshader.glsl in the source
if (height > snow_height + mix_zone){
    phong_kd = snow;
} else if (height > snow_height - mix_zone) {
    float coef = (height-(snow_height - mix_zone))/(2.0 * mix_zone);
    phong_kd = mix(grass, snow, coef);
} else if (height > grass_height + mix_zone){
    phong_kd = grass;
} else if (height > grass_height - mix_zone){
    float coef = (height-(grass_height - mix_zone))/(2.0 * mix_zone);
    phong_kd = mix(sand, grass, coef);
} else {
    phong_kd = sand;
}
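The blend coefficient used above is just a linear ramp, 0 at the bottom of the mixing zone and 1 at the top. Extracting it into a hypothetical helper (not in the source) makes that easy to verify:

```cpp
#include <cassert>
#include <cmath>

// Blend coefficient from the terrain shader: for a mixing zone centered on
// `limit` with half-width `mixZone`, returns 0 at limit - mixZone,
// 0.5 at limit, and 1 at limit + mixZone.
float mixCoef(float height, float limit, float mixZone) {
    return (height - (limit - mixZone)) / (2.0f * mixZone);
}
```

With e.g. snow_height = 2.0 and mix_zone = 0.25, a fragment at height 1.75 is pure grass, one at 2.25 is pure snow, and one at 2.0 is a 50/50 mix.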
The result is not bad but lacks realism. An easy improvement is to use the steepness of the terrain to define additional texturing zones. In particular, we want patches of snow where the terrain is flat, and patches of rock where the terrain is very steep. To do so, we look at the angle between the normal of the terrain and the vertical direction: angleDiff = abs(dot(currentNormal.xyz, vertical)) (assuming both vectors are normalized). Now that we have this variable representing how "steep" the terrain is, we can use it to create these patches: before assigning the texture to the kd component, we overwrite the textures used for kd with the patch texture where appropriate:
//Code from terrain_fshader.glsl in the source
float angleDiff = abs(dot(currentNormal.xyz, vertical));

float pureRock = 0.6;
float lerpRock = 0.7;
float coef = 1.0 - smoothstep(pureRock, lerpRock, angleDiff);
grass = mix(grass, rock, coef);
snow = mix(snow, rock, coef);

coef = smoothstep(0.90, 0.98, angleDiff);
grass = mix(grass, snow, coef);
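For reference, here is what smoothstep computes, written as plain C++, together with the rock coefficient from the snippet above (the 0.6/0.7 thresholds are the ones used there; the helper itself is hypothetical). Remember that angleDiff is 1 on flat ground and 0 on a vertical cliff.

```cpp
#include <cassert>
#include <cmath>

// Reference implementation of GLSL smoothstep: 0 below edge0, 1 above edge1,
// smooth cubic Hermite ramp in between.
float smoothstep(float edge0, float edge1, float x) {
    float t = (x - edge0) / (edge1 - edge0);
    t = std::fmin(std::fmax(t, 0.0f), 1.0f);
    return t * t * (3.0f - 2.0f * t);
}

// Rock coefficient from the shader: full rock below angleDiff = 0.6 (steep),
// no rock above 0.7 (flat-ish), smooth blend in between.
float rockCoef(float angleDiff) {
    return 1.0f - smoothstep(0.6f, 0.7f, angleDiff);
}
```

So a fragment whose normal makes angleDiff = 0.5 is textured with pure rock, one at 0.9 keeps its grass or snow, and at 0.65 the two are blended half and half.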
There is just one tiny detail left to fix: since the camera is fixed and it is the heightmap that actually moves, if we just use the standard UVs when reading the textures, we see the terrain moving while the textures stay stationary! This is easily solved by passing the displacement vector to the terrain shaders and using vec2 current_uv = UV + displacement when reading the textures.
I wanted my program to be able to simulate a full day cycle. So before anything else, we need a light that rotates around the horizon, which is easily done by assigning a rotation radius and then updating the position as a function of this radius and the time:
//Code from Light.h in the source
void updateLightPos(float time){
    float t = time * speedFactor_;
    float x = circle_radius_ * cos(t);
    float y = circle_radius_ * sin(t);
    light_pos_ = vec3(x, y, 0);
    light_dir_ = -light_pos_;
}
We then use this light position for all of our shading and shadow computations. Regarding the look of the terrain during the night, I wanted something very minimalist and peaceful, and decided to go with simple distance-based colouring. The distance from the camera to a given vertex is found by applying the Model and View matrices (not the Projection!) to the vertex position and looking at the z coordinate of the result. This is done either in the vertex shader or in the geometry shader (if you use tessellation). Once that distance is found, we can define the desired colour and scale it with the distance. A linear scaling does not yield a very satisfying result, so I used a power scaling with exponent 0.6, pretty close to a square root:
//Code from terrain_fshader.glsl in the source
vec3 computeNightColor(){
    vec3 c = vec3(0.0, 0.2, 0.3);
    vec3 nightColor = pow(-gDist/MAX_DISTANCE, 0.6) * c;
    return nightColor;
}
With this scaling, the further away the terrain is, the lighter it looks (cf. the image).
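A quick numeric check of why the 0.6 exponent beats a linear ramp: pow(x, 0.6) lies above x everywhere on (0, 1), so the brightening with distance kicks in early and then saturates towards the far plane (hypothetical helper, with distance normalised to [0, 1]):

```cpp
#include <cassert>
#include <cmath>

// Distance-based night shading factor: x is -gDist/MAX_DISTANCE in [0, 1].
// pow(x, 0.6) rises faster than x near zero and flattens towards 1.
float nightFactor(float x) { return std::pow(x, 0.6f); }
```

At a quarter of the maximum distance, the linear factor would be 0.25, while nightFactor gives roughly 0.43, so mid-range terrain is noticeably lighter than with a linear falloff.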
Now we have to find a way to blend the day colour into the night colour at the right time. This is where the light position comes in: since at night the "sun" is under the horizon, we can use the height of the light as a coefficient to blend the day and night shading. Using the OpenGL smoothstep function, we get:
//Code from terrain_fshader.glsl in the source
void main() {
    float sunHeight = light_pos.y/MAX_HEIGHT;
    vec3 day = computeDayColor();
    vec3 night = computeNightColor();
    float coef = smoothstep(-0.3, 0.0, sunHeight);
    color = mix(night, day, coef);
}
We can do exactly the same with two cubemaps, one for the day and one for the night, and the night mode is done!
Here is a list of the references I used to build this program: