WebGL Basics 4 – Wireframe 3D object

We continue with a real 3D object.

Introduction

In the previous post, we rendered a single triangle. In this one, we add the third dimension and create a more complex object. But because our fragment shader can only draw with a constant color, we won’t render faces, only lines. This "wireframe" object prepares the ground for a future post about the transformation matrix. In brief, we need to:

  • Create a new object composed of a single long broken line
  • Change the shader code (the current one handles only 2D coordinates)
  • Change the rendering JavaScript code

Object definition

The object is given as a sequence of 3D vertices (flat array) in the global variable vertices:

// Vertices of the object
var vertices = new Float32Array([
    /* X Axis */
   -1.0,  0.0,  0.0,
    0.9,  0.0,  0.0,

...

    0.0,  0.0,  0.8,
    0.6,  0.0,  0.8
]);

This kind of definition is quite constraining, because the whole object has to be drawn as a single long broken line. If you look at the vertices in the code, you will also see that the definition is not optimal: some line segments are defined several times. This is acceptable for an example but would not be suitable for a real application. We will see alternatives below.

3D processing in shader

The change is straightforward. The ppos attribute variable is extended from vec2 to vec3, and ppos.z supplies the third component of the final homogeneous coordinate (the fourth component stays at 1.0):

// GLSL ES code to be compiled as vertex shader
vertexShaderCode=
'attribute vec3 ppos;'+
'uniform mat4 mvp;'+
'void main(void) {'+
'  gl_Position = mvp * vec4(ppos.x, ppos.y, ppos.z, 1.0);'+
'}';
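
For completeness, this vertex shader still works together with the constant-color fragment shader of the previous posts, compiled and linked into a program in the usual way. The snippet below is only a minimal sketch of that step; the chosen color and the variable names (fragmentShaderCode, vshader, fshader, program) are assumptions of this example, not code taken from the original source.

// GLSL ES code to be compiled as fragment shader (constant color, as before; the color is arbitrary here)
fragmentShaderCode=
'void main(void) {'+
'  gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);'+
'}';

// Compiles and links both shaders (sketch, error checks omitted)
var vshader = gl.createShader(gl.VERTEX_SHADER);
gl.shaderSource(vshader, vertexShaderCode);
gl.compileShader(vshader);

var fshader = gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(fshader, fragmentShaderCode);
gl.compileShader(fshader);

var program = gl.createProgram();
gl.attachShader(program, vshader);
gl.attachShader(program, fshader);
gl.linkProgram(program);
gl.useProgram(program);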

3D wireframe rendering

Once the shader is modified, the corresponding JavaScript code must be adapted, first in the initialization function start(). The vertices array is passed to bufferData, and the value 3 in vertexAttribPointer (instead of 2) indicates that each vertex now has 3 components:

  // Puts vertices to buffer and links it to attribute variable 'ppos'
  gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);
  gl.vertexAttribPointer(vattrib, 3, gl.FLOAT, false, 0, 0);
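
To put these two lines in context, here is a minimal sketch of the surrounding buffer and attribute setup in start(). The names program and vbuffer are assumptions made for this example; only the attribute name 'ppos' comes from the shader above.

  // Creates a buffer and makes it the current ARRAY_BUFFER (sketch)
  var vbuffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, vbuffer);

  // Puts vertices to buffer and links it to attribute variable 'ppos'
  gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);
  var vattrib = gl.getAttribLocation(program, 'ppos');
  gl.enableVertexAttribArray(vattrib);
  gl.vertexAttribPointer(vattrib, 3, gl.FLOAT, false, 0, 0);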

In draw(), the function called periodically, only the call to drawArrays must be changed, with the LINE_STRIP mode and a vertex count of vertices.length/3:

  // Draws the object
  gl.drawArrays(gl.LINE_STRIP, 0, vertices.length/3);
  gl.flush();
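
The post only shows the drawArrays call, but since the object rotates, the mvp uniform declared in the vertex shader has to be refreshed on each call to draw(), before drawArrays. The sketch below shows one possible way to do it with a single rotation around the Y axis; the angle variable and the hand-written column-major matrix are assumptions of this example, not code from the original source.

  // Builds a rotation matrix around the Y axis (column-major order, sketch)
  var c = Math.cos(angle), s = Math.sin(angle);
  var matrix = new Float32Array([
      c, 0.0,  -s, 0.0,
    0.0, 1.0, 0.0, 0.0,
      s, 0.0,   c, 0.0,
    0.0, 0.0, 0.0, 1.0
  ]);

  // Sends the matrix to the 'mvp' uniform of the vertex shader
  var mvploc = gl.getUniformLocation(program, 'mvp');
  gl.uniformMatrix4fv(mvploc, false, matrix);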

Here we use the gl.LINE_STRIP rendering mode. Several modes are available:

  • LINE_STRIP: the first two vertices delimit the ends of the first segment, and each new vertex defines the end of a new segment starting at the end of the previous one
  • LINE_LOOP: same as LINE_STRIP, with an additional segment between the last and first vertices (closing the contiguous set of segments)
  • LINES: pairs of vertices delimit the ends of each individual segment (giving a possibly non-contiguous set of segments); see the sketch after this list

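As noted in the object definition, the single line strip forces some segments of the house to be defined twice. With gl.LINES each segment is described independently by its own pair of vertices, so nothing has to be duplicated. A minimal sketch (the two-segment axisVertices array is a made-up example, not the object of this post):

  // Two independent segments, each defined by a pair of 3D vertices
  var axisVertices = new Float32Array([
    -1.0,  0.0,  0.0,    0.9,  0.0,  0.0,   // X axis
     0.0, -1.0,  0.0,    0.0,  0.9,  0.0    // Y axis
  ]);

  // With gl.LINES the vertex count must be even (2 vertices per segment)
  gl.bufferData(gl.ARRAY_BUFFER, axisVertices, gl.STATIC_DRAW);
  gl.drawArrays(gl.LINES, 0, axisVertices.length/3);
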
Result

As usual, the result is available on-line: a house, together with the axes, now rotates around the origin.

Please note that:

  • The rotation is only relative to the origin
  • There is an X/Y distortion (the aspect ratio is not corrected)
  • There is no perspective (orthographic projection), so it is impossible to tell which vertices are near and which are far

An interesting effect is also visible if all 3 rotations are activated: one corner of the house is clipped whenever the corresponding vertex gets a Z coordinate outside the interval [-1,+1], the limits of the visible cube.

Summary

Not many points this time:

  • LINE_STRIP is used in drawArrays to render a wireframe object
  • The object is defined as a sequence of 3D vertices in a flat array
  • The shader processes 3D vertices directly (using the ppos.z component)

The next post will deal with the complete transformation matrix.

Comments

  1. #1 by Jason Slemons on September 28, 2011 - 20:10

    This seems really simple now. But I wonder if the formula for gl_Position, like "gl_Position = mvp * vec4(ppos.x, ppos.y, ppos.z, 1.0);", combines a projection onto the X-Y-Z (or X-Y in the previous post) plane that you might want to do later (like after mvp is multiplied)? Not sure…

    • #2 by blogoben on September 29, 2011 - 08:47

      In this post the Z coordinate is simply lost after the rotations because Z-buffering is not activated, but it is still necessary to pass the Z coordinate of the vertices to the shader to create the 3D effect through the rotations.
      The projection onto XY (the same plane as the screen coordinates) is then done implicitly by the 3D driver/hardware once the coordinates are returned by the shader code.
