Here's the screenshot and here's the working code. Also, to delete the quadric you create with gluNewQuadric you shouldn't use free, but the function GLU specifically provides for it: gluDeleteQuadric.
We can set the comparison operator (or depth function) by calling glDepthFunc. Let's show the effect that changing the depth function has on the visual output. We'll use a fresh code setup that displays a basic scene with two textured cubes sitting on a textured floor with no lighting. You can find the source code here. If we set the depth function to GL_ALWAYS, we simulate the same behavior we'd get if we didn't enable depth testing.
The depth test then always passes, so the fragments that are drawn last are rendered in front of the fragments that were drawn before, even when those earlier fragments should've been at the front.
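In code, the calls look like this (a minimal sketch of OpenGL state setup; it assumes a valid OpenGL context has already been created):

```c
glEnable(GL_DEPTH_TEST);

/* GL_LESS is the default: a fragment passes when its depth value is
   less than (closer than) the value already in the depth buffer. */
glDepthFunc(GL_LESS);

/* GL_ALWAYS would make every fragment pass, which looks the same as
   having depth testing disabled (though depth values are still written):
   glDepthFunc(GL_ALWAYS); */
```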
Since we've drawn the floor plane last, the plane's fragments overwrite each of the container's previously written fragments. The depth buffer contains depth values between 0.0 and 1.0. These z-values in view space can be any value between the projection frustum's near and far plane. We thus need some way to transform these view-space z-values to the range [0, 1], and one way is to linearly transform them.
The following linear equation transforms the z-value to a depth value between 0.0 and 1.0:

F_depth = (z - near) / (far - near)

Here near and far are the near and far plane distances of the projection frustum. The relation between the z-value and its corresponding depth value is presented in the following graph. In practice however, a linear depth buffer like this is almost never used.
Instead, an equation proportional to 1/z is used. The result is that we get enormous precision when z is small and much less precision when z is far away: z-values close to the near plane receive the bulk of the depth buffer's precision, while distant z-values are squeezed into a tiny remainder. Such an equation, one that also takes the near and far distances into account, is:

F_depth = (1/z - 1/near) / (1/far - 1/near)

Don't worry if you don't know exactly what is going on with this equation.
The important thing to remember is that the values in the depth buffer are not linear in clip-space (they are linear in view-space, before the projection matrix is applied). A value of 0.5 in the depth buffer does not mean the fragment is halfway into the frustum; its z-value is actually quite close to the near plane. You can see the non-linear relation between the z-value and the resulting depth buffer's value in the following graph. As you can see, the depth values are largely determined by the small z-values, giving us large depth precision for the objects close by.
The equation to transform z-values from the viewer's perspective is embedded within the projection matrix, so when we transform vertex coordinates from view to clip-space, and then to screen-space, the non-linear equation is applied. The effect of this non-linear equation quickly becomes apparent when we try to visualize the depth buffer. If we were to output the depth value of each fragment as a color, we could display the depth values of all the fragments in the scene.
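In GLSL this amounts to writing the fragment's built-in depth value out as a grayscale color (a minimal fragment shader sketch; the `FragColor` output name is just a convention):

```glsl
#version 330 core
out vec4 FragColor;

void main()
{
    // gl_FragCoord.z is the fragment's window-space depth in [0, 1]
    FragColor = vec4(vec3(gl_FragCoord.z), 1.0);
}
```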
If you then run the program you'll probably notice that everything is white, making it look like all of our depth values are at the maximum of 1.0. So why aren't any of the depth values closer to 0.0? Recall from the previous section that depth values in screen space are non-linear: they have very high precision for small z-values and low precision for large z-values.
The depth value of a fragment increases rapidly over distance, so almost all the vertices have depth values close to 1.0. If you carefully move really close to an object, you may eventually see the colors getting darker as their z-values become smaller. This clearly shows the non-linearity of the depth value.
Objects close by have a much larger effect on the depth value than objects far away. Only moving a few inches can result in the colors going from dark to completely white. We can, however, transform the non-linear depth values of the fragment back to their linear siblings. With the fragment depth being something that is part of a fragment's output, you might imagine that this is something you have to compute in a fragment shader.
You certainly can, but the fragment's depth is normally just the window-space Z coordinate of the fragment. This is computed automatically when the X and Y are computed. Using the window-space Z value as the fragment's output depth is so common that, if you do not deliberately write a depth value from the fragment shader, this value will be used by default. Speaking of window coordinates, there is one more issue we need to deal with when dealing with depth. The glViewport function defines the transform from normalized device coordinates (the range [-1, 1]) to window coordinates.
The window-space Z coordinate ranges from [0, 1]; the transformation from NDC's [-1, 1] range is defined with the glDepthRange function.
This function takes 2 floating-point parameters: the range zNear and the range zFar. These values are in window space; they define a simple linear mapping from NDC space to window space. So if zNear is 0.0 and zFar is 1.0 (the default), an NDC depth of -1 maps to a window-space depth of 0.0, and +1 maps to 1.0. The range zNear can be greater than the range zFar; if it is, then the window-space values will be reversed, in terms of what constitutes closest or farthest from the viewer. Earlier, it was said that the window-space Z value of 0 is closest and 1 is farthest. However, with a reversed depth range, the depth of 1 would be closest to the viewer and the depth of 0 would be farthest.
So it's really just a convention. In the elder days of graphics cards, calling glClear was a slow operation. And this makes sense; clearing images means having to go through every pixel of image data and writing a value to it. Even with hardware optimized routines, if you can avoid doing it, you save some performance. Therefore, game developers found clever ways to avoid clearing anything. They avoided clearing the image buffer by ensuring that they would draw to every pixel on the screen every frame.
Avoiding clearing the depth buffer was rather more difficult, but the depth range and the depth test gave them a way to do it. The technique is quite simple. They would clear the buffers exactly once, at the beginning of the program. From then on, they would render each frame as normal, except with the depth range set to [0, 0.5]. Since the depth test is GL_LESS, it does not matter what values happened to be left between 0.5 and 1.0 in the depth buffer: every incoming fragment, mapped into [0, 0.5], passes against them. And since every pixel was being rendered to, as above, the depth buffer is guaranteed to be filled with values that are less than 0.5.
Only this time, the depth range would be [1, 0.5] and the depth test GL_GREATER. Because the last frame filled the depth buffer with values less than 0.5, every fragment of the new frame, mapped into [0.5, 1], passes the test against them. This fills the depth buffer with values greater than 0.5. Rinse and repeat. This ultimately sacrifices one bit of depth precision, since each frame only uses half of the depth range. But it results in never needing to clear the depth or color buffers. See, hardware developers got really smart. They realized that a clear did not really have to go to each pixel and write a value to it.
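Sketched as an OpenGL-style frame loop (draw_scene, swap_buffers and even_frame are hypothetical placeholders; the sketch assumes the scene covers every pixel every frame):

```c
/* Clear both buffers exactly once, at startup */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

int even_frame = 1;
for (;;) {
    if (even_frame) {
        glDepthRange(0.0, 0.5);   /* write depths into [0, 0.5] */
        glDepthFunc(GL_LESS);
    } else {
        glDepthRange(1.0, 0.5);   /* reversed: depths land in [0.5, 1] */
        glDepthFunc(GL_GREATER);
    }
    draw_scene();                  /* must touch every pixel */
    swap_buffers();
    even_frame = !even_frame;
}
```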
Instead, they could simply pretend that they had. Because of that, this z-flip technique is useless. But it's rather worse than that; on most hardware made in the last 7 years, it actually slows down rendering. After all, getting a cleared value doesn't require actually reading memory; the very first value you get from the depth buffer is free.
There are other, hardware-specific, optimizations that make z-flip actively damaging to performance. The Depth Buffering project shows off how to turn on and use the depth buffer.