# Posts Tagged Mathematics

### WebGL Basics 5 – Full transformation matrix

In this post we look at the transformation matrix with all steps.

## Introduction

The vertices of the 3D scene are stored in static arrays, and then in "buffers", by the Javascript code. In order to render the scene as viewed by an observer located at an arbitrary position in the scene, the vertex coordinates must be transformed such that the visible part of the scene is in the viewable cube (or view "frustum"). The necessary transformations are summarized hereafter:

- The 3 translations correspond to the 3 degrees of freedom of the observer in pure translation
- The scaling represents a zoom capability of the observer
- The 3 rotations correspond to the 3 degrees of freedom in rotation
- The projection is necessary because the visualization device we use is a 2D monitor
- The aspect ratio must be corrected due to the default scaling used to fill the WebGL context

The next sections describe these transformations step-by-step starting with a reminder about homogeneous coordinates.

## Homogeneous coordinates

A coordinate transformation in 3D space is a function that takes a 3-component vector as input and returns a transformed 3-component vector. In general, such a function could be anything (square, exponentiation, etc.) and would transform the space in any arbitrary way. If the transforming function is chosen to be a single matrix multiplication, several common transformations can be realized: rotation, scaling and symmetry in 3D space can all be implemented as 3×3 matrix multiplications. Furthermore, because it is a linear operation, several successive transformations can be combined into one matrix multiplication, the necessary matrix being the product of the matrices of the successive operations. Consequently, because only one matrix operation is necessary, the method is well adapted to massive computations in the GPU.

The only problem with 3×3 matrix multiplications is that some operations that are necessary to render a 3D scene, namely translations and projections, cannot be realized.

Homogeneous coordinates allow all transformations to be concentrated into one matrix multiplication by adding an extra dimension (not time!), such that:

- All operations are done on 4-component vectors, using 4×4 matrix multiplications
- The "homogeneous coordinates" of a point are hence the 4 components of a vector (x,y,z,w), where x, y, z are the coordinates of the point in space, and w (from "weight") is an appended scale factor, initially set to 1, which is used only for internal computation
- 3D space coordinates are obtained back by dividing the 3 first components by the 4th one:
`X=x/w , Y=y/w, Z=z/w`
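This last step can be sketched in a few lines of plain Javascript (the helper name is illustrative, not part of the WebGL code presented below):

```javascript
// Converts homogeneous coordinates (x, y, z, w) back to 3D space
// coordinates by dividing the first three components by w.
function homogeneousToCartesian(v) {
  var w = v[3];
  return [v[0] / w, v[1] / w, v[2] / w];
}

// A point with w = 2 maps back to half of its raw components:
console.log(homogeneousToCartesian([2, 4, 6, 2])); // [1, 2, 3]
```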

Several areas can be identified in the 4×4 matrix (in wxMaxima: `mat:matrix( [Mxx,Mxy,Mxz,Cx], [Myx,Myy,Myz,Cy], [Mzx,Mzy,Mzz,Cz], [Lx,Ly,Lz,W]);`):

The coefficients in the matrix can be interpreted in independent groups (provided the other coefficients are set to 0, except the diagonal set to 1):

- `Mij`: the 3×3 matrix used for the common linear transformations
- `Ci`: after multiplication by a point vector `(x,y,z,w)`, the result is `(x + Cx w, y + Cy w, z + Cz w, w)`, hence these coefficients are used to add constant values, i.e. for translations
- `Lj`: after multiplication by the point vector, the scale factor becomes a linear combination of the point coordinates (`Lx x + Ly y + Lz z + W w`), i.e. it allows dividing the coordinates among themselves (when the 3D coordinates are transformed back), which is useful for projection

The next sections illustrate how the space transformations are implemented in 4×4 matrices.

In addition, homogeneous coordinates support operations on points that are infinitely far away. Indeed, if the scale factor is near 0, the last 4D to 3D transformation (division by the scale factor) will result in values nearing infinity. But before this last transformation, all operations are done on finite values without risk of errors.

## Observer position and orientation

We define the observer with the following parameters:

- *(ox,oy,oz)*: the point looked at
- *(rx,ry,rz)*: orientation of the observer around the axes
- *d*: the distance between the observer and the point looked at

## Translation

A translation is necessary to follow the movements of the observer. The general form of a translation matrix with homogeneous coordinates is as follows:

Multiplied by the point coordinates given by the column-vector (x,y,z,w), we obtain:

As usual with homogeneous coordinates, the transformation is completed only after the last operation of dividing the three first components by *w* (which should be equal to 1 at this stage):

Here we define the translation as a change of the origin (0,0,0) of the model. The new origin is noted *(ox,oy,oz)* and corresponds to the point the observer is located at (or, better said, is looking at). The translation matrix is as follows (in wxMaxima: `translation:matrix( [1,0,0,-ox], [0,1,0,-oy], [0,0,1,-oz], [0,0,0,1]);`):

Negative values are due to the definition we use: the model is translated from the *(ox,oy,oz)* point to the (0,0,0) point, i.e. in the direction opposite to the vector *(ox,oy,oz)*.
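As a small illustration in plain Javascript (row-major arrays, as printed by wxMaxima; the helper names are hypothetical), the translation matrix applied to the point looked at brings it to the origin:

```javascript
// Translation matrix moving the origin of the model to (ox, oy, oz),
// in row-major layout as printed by wxMaxima.
function translationMatrix(ox, oy, oz) {
  return [
    [1, 0, 0, -ox],
    [0, 1, 0, -oy],
    [0, 0, 1, -oz],
    [0, 0, 0, 1]
  ];
}

// Multiplies a 4x4 matrix by a homogeneous column-vector.
function applyMatrix(m, p) {
  var r = [0, 0, 0, 0];
  for (var i = 0; i < 4; i++)
    for (var j = 0; j < 4; j++)
      r[i] += m[i][j] * p[j];
  return r;
}

// The point looked at ends up at the origin:
console.log(applyMatrix(translationMatrix(1, 2, 3), [1, 2, 3, 1])); // [0, 0, 0, 1]
```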

## Rotation Basics

Basically, a rotation is a transformation of the plane. In the plane formed by the X and Y axes, the rotation around the Z axis makes a point turn around the origin according to a given angle:

Let’s take the point with polar coordinates (r,α). The Cartesian coordinates are given by:

`x = r * cos(α)`

`y = r * sin(α)`

In polar coordinates, a rotation around the origin is simply done by changing the angle component. Hence in our example the new point (x’,y’) after rotation by the angle θ is given by:

`x´ = r * cos( α + θ )`

`y´ = r * sin( α + θ )`

By applying the angle addition formulas:

`x´ = r * ( cos(α) * cos(θ) - sin(α) * sin(θ) )`

`y´ = r * ( sin(α) * cos(θ) + cos(α) * sin(θ) )`

Now we can re-order the terms to make the Cartesian coordinates of the original point appear:

`x´ = x * cos(θ) - y * sin(θ)`

`y´ = x * sin(θ) + y * cos(θ)`

This is the standard form of a 2D rotation, given hereafter as a matrix multiplication:

There is another way to obtain this formula by using complex numbers. Indeed, a multiplication by a complex number of the form `r·e^(iθ)` results in a rotation of angle θ and a scaling by the factor r. So we rewrite the original point as *x+iy*, and multiply it by the complex number of angle θ and modulus 1 (defined using trigonometric functions) to get the transformed point:

`x' + iy' = ( x + iy ) * ( cos(θ) + i sin(θ) )`

After expansion (remember that i² = -1) and re-ordering:

`x' + iy' = x cos(θ) - y sin(θ) + i ( x sin(θ) + y cos(θ) )`

Identifying the real and imaginary parts with x’ and y’ gives the same formula as with the first method.
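The resulting 2D rotation formula can be sketched directly in Javascript (the function name is illustrative):

```javascript
// Rotates the point (x, y) around the origin by the angle theta,
// using the formula derived above.
function rotate2D(x, y, theta) {
  var c = Math.cos(theta), s = Math.sin(theta);
  return [x * c - y * s, x * s + y * c];
}

// Rotating (1, 0) by 90° gives (0, 1), up to floating-point error:
console.log(rotate2D(1, 0, Math.PI / 2));
```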

## Rotations of the observer

Rotations are needed due to the orientation of the observer. In our definition, the observer rotates around the point looked at. Taken separately, each single rotation can be understood as a rotation around one axis, defined by angles such that positive values represent:

- *rx*: the observer goes down and looks towards the top
- *ry*: the observer goes right and looks towards the left
- *rz*: the observer rolls to the right, staying at the same place

In order to simplify the matrices, we set:

- *sx = sin(rx)*, *cx = cos(rx)*
- *sy = sin(ry)*, *cy = cos(ry)*
- *sz = sin(rz)*, *cz = cos(rz)*

Using the results of the previous paragraphs, the rotations around the axes can be represented in homogeneous coordinates as follows, starting with the rotation around Z (in wxMaxima: `rotz:matrix( [cz,-sz,0,0], [sz,cz,0,0], [0,0,1,0], [0,0,0,1]);`):

The rotations around X and Y are obtained by permutation, i.e. by moving the 2×2 rotation matrix within the 4×4 identity matrix. The rotation matrix around X (in wxMaxima: `rotx:matrix( [1,0,0,0], [0,cx,-sx,0], [0,sx,cx,0], [0,0,0,1]);`):

Around Y (in wxMaxima: `roty:matrix( [cy,0,sy,0], [0,1,0,0], [-sy,0,cy,0], [0,0,0,1]);`):

The different rotations must be combined, which is described in a section below.

## Scaling and aspect ratio

Two scaling operations are needed: one for the zooming factor of the observer, the other to correct the X/Y distortion due to the standard rendering mechanism.

The zoom is an isotropic scaling, i.e. the 3 dimensions are scaled equally. Such a transformation is represented by the following matrix (in wxMaxima: `scale:matrix( [s,0,0,0], [0,s,0,0], [0,0,s,0], [0,0,0,1]);`):

The correction of the aspect ratio is anisotropic: the Z coordinate remains unchanged and either the X or Y coordinate must be scaled such that a square is rendered as a square and not as a rectangle. Here we choose to leave the Y coordinate unchanged and to adapt the X coordinates (horizontal compression). Without correction, the point at (1,1) would be rendered at the top-right corner of the view port (i.e. with x equal to half of the width of the context). Once corrected, the same point must be rendered at a width *x’* equal to half of the height of the context (hence no distortion). Formally:

`x' = context_height/2`

`x = context_width/2`

By dividing the first formula by the second and re-ordering:

`x' = x * context_height / context_width`

We define the aspect ratio as the quotient of the width by the height (remember the common "16/9" screen dimension), i.e. it will be greater than 1 for a standard screen and equal to 1 for a square view port:

`ar = context_width / context_height`

With this definition, the matrix used for the aspect ratio correction is (with wxMaxima: `aspect_ratio:matrix([1,0,0,0],[0,1/ar,0,0],[0,0,1,0],[0,0,0,1]);`):

## Combining rotations, scaling and translations

The transformations seen so far can be combined in a basic transformation matrix, i.e. without the perspective projection part. Because the matrix product is not a commutative operation, the order in which matrices are multiplied is important. Some rules can be deduced from the above sections to determine possible combinations:

- The rotation around the Z axis is better after the rotations around X and Y, such that it is a final rotation of the rendered plane
- Scaling and rotations can be combined in any order
- The translation must be the first operation for it moves the center of all subsequent operations

We choose the following order: first translations, then rotations, then scaling and aspect ratio.

A very important point to note about the way WebGL (as well as OpenGL) uses arrays: matrices are flattened into linear arrays in "column-major order". This means that data is stored one column after the other, and not row by row. Because wxMaxima returns matrices in row-major order, our matrices must first be transposed before being passed to any WebGL function. We obtain the desired matrix in wxMaxima using the previously entered matrices and the final command `transpose(aspect_ratio. scale. rotz. roty. rotx. translation);`:

This basic transformation matrix is implemented in the Javascript code in addition to the full one (see section below).
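The column-major convention can be illustrated with a small Javascript sketch (a hypothetical helper, equivalent to the wxMaxima `transpose` step followed by row-by-row reading):

```javascript
// Flattens a row-major 4x4 matrix into the column-major linear
// array expected by WebGL (same result as transposing first and
// then reading row by row).
function toColumnMajor(m) {
  var out = [];
  for (var col = 0; col < 4; col++)
    for (var row = 0; row < 4; row++)
      out.push(m[row][col]);
  return new Float32Array(out);
}

// The translation components (4th column in row-major form) end up
// in the last four slots of the flat array:
var t = toColumnMajor([
  [1, 0, 0, 7],
  [0, 1, 0, 8],
  [0, 0, 1, 9],
  [0, 0, 0, 1]
]);
console.log(t[12], t[13], t[14]); // 7 8 9
```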

## Perspective projection

The projection is needed to render a 3D scene onto a 2D screen; it is hence a transformation of the space onto a plane. The projection plane is typically chosen perpendicular to the view direction. In our basic transformation matrix, the depth coordinate is simply ignored to render the model (we keep *(x,y)* from *(x,y,z)*). This kind of projection, called "orthographic" (it can be shown that it conserves parallelism and relative angles), cannot realistically render a scene with perspective because the depth information is simply lost.

To produce a perspective effect, i.e. such that far objects are rendered smaller than near ones, the depth information must be combined with the other coordinates in the projection computation. The following picture illustrates what happens on the Y coordinates during the projection of a point to the screen in the general case:

Here we look at the positions of the objects in the ZY plane, i.e. we see the observer and what he observes from the side. The observer is located at *(0,0)* and the point to be rendered (real position) is at *(Z,Y)*. We set the "projection screen" at a distance *d* from the observer. Very naturally, we find that the observer "sees" the point to be rendered through the projection screen at a position *Y’* such that (using the Thales theorem, a.k.a. similar triangles):

`Y' / d = Y / Z`

Or, after re-ordering:

`Y' = d * Y / Z`

Several things can be seen in this formula:

- The main operation is a division by *Z*, which matches the expected behavior (the bigger Z, the smaller the rendered objects)
- The parameter *d* has the effect of a zoom factor, and due to this distance *d* there is always a translation to take into account for a correct positioning of the camera. These unwanted scaling and translation make it difficult to predict the final position of rendered points.
- If the observer is far away from the projection screen, i.e. *d* and *Z* are high hence the ratio *d/Z ≈ 1*, the result of the projection *Y’ ≈ Y* is almost the same as without projection. Consequently, if *d* is high, the perspective effect is minimized (like for an orthographic projection).
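A minimal Javascript sketch of this naive perspective formula (the function name is illustrative):

```javascript
// Projects the Y coordinate of a point at depth Z onto a screen
// located at distance d from the observer: Y' = d * Y / Z.
function projectY(y, z, d) {
  return d * y / z;
}

// The same height is rendered smaller when the point is deeper:
console.log(projectY(2, 4, 2)); // 1
console.log(projectY(2, 8, 2)); // 0.5
```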

Besides, in order to fit in the homogeneous coordinates concept, the projection must be represented as a matrix multiplication. As an illustration, in the simplest case (no translation, no scaling), a projection is a product by a matrix of the following form (in wxMaxima: `pure_projection:matrix( [1,0,0,0], [0,1,0,0], [0,0,1,0], [0,0,1,0]);`):

Multiplied by the column-vector coordinates of a point *(x,y,z,1)* (or in wxMaxima `point:matrix([x],[y],[z],[1]);`), the result is (`pure_projection.point;`):

The division by Z appears only after the homogeneous coordinates are transformed back into 3D space coordinates (division by the 4th coordinate):

The main issue with this simple projection matrix is that the Z coordinate equals 1 at the end, and is thus lost. The Z value is necessary to correctly display overlapping faces according to their depths (the nearest faces must be drawn last).

We will hence use a matrix that allows keeping the Z coordinate, and at the same time make the result more predictable. The goal here is to find a projection matrix such that a given set of points remain invariant during the projection, that the parameter *d* (distance between observer and projection screen) controls only the perspective effect without zoom (for which there is an explicit scaling operation as seen in a section above), and that the Z coordinates of all points in the visible interval are scaled to the standard [-1,1] interval.

The perspective projection is formally defined with the following requirements:

- The projection screen shall be at a distance *d* from the observer.
- Points in the projection screen at Z=0 shall be invariant in the operation (no scaling, etc.)
- Points at Z=n (near plane, i.e. points nearer than this plane are not displayed) shall have a coordinate Z’=-1 after projection
- Points at Z=f (far plane) shall have a coordinate Z’=+1 after projection

Requirement 1 implies a translation on the Z axis. Because the translated value of the *z* coordinate must be used by later operations on all other coordinates, the translation is isolated in a dedicated pre-projection matrix (`pre_projection:matrix( [1,0,0,0], [0,1,0,0], [0,0,1,d], [0,0,0,1]);`):

The multiplication by a point *(x,y,z,1)* gives (`pre_projected_point:pre_projection.point;`):

We now look for a matrix containing unknowns that can answer our needs after the first translation on Z, i.e. a matrix including a scaling on X and Y, an additional scaling factor and a projection factor. We assume that the following matrix is usable for this task (in wxMaxima `conjectured_projection:matrix( [A,0,0,0], [0,A,0,0], [0,0,B,C], [0,0,1,0]);`):

The product of the coordinates of a point *(x,y,z,1)* by the pre-projection matrix and then by our conjectured matrix gives (`conjectured_projected_point: conjectured_projection. pre_projected_point;`):

After the last transformation from homogeneous coordinates to space coordinates (division by the 4th component), we obtain (`conjectured_projected_point_3D: matrix( [conjectured_projected_point[1][1]], [conjectured_projected_point[2][1]], [conjectured_projected_point[3][1]])/ conjectured_projected_point[4][1];`):

We extract the first and third components (`xcoord:conjectured_projected_point_3D[1][1];` and `zcoord:conjectured_projected_point_3D[3][1];`) and use them to create a system of equations by substituting the values given by Requirements 2, 3 and 4 (in wxMaxima `equations: [subst(0,z,xcoord)=x, subst(f,z,zcoord)=1, subst(n,z,zcoord)=-1];`):

This is a common system of linear equations showing corner conditions if we want to keep the sign of all factors:

- *d>0*: if the distance between the eye and the projection screen equals 0, all points are at infinity. If lower than 0, everything is inverted.
- *n+d>0*: if the distance *d* is smaller than the near plane, the coordinates of the points between the observer and the near plane will be inverted.
- *f+d>0*: same as the condition on the near plane.

It can be solved by wxMaxima (`solutions: solve(equations,[A,B,C]);`):

The results are quite complex and show the obvious corner condition *f-n>0*, i.e. the near and far planes are placed in increasing Z coordinates (the Z axis points towards the back of the rendering cube).
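The solved factors can be checked numerically with a small Javascript sketch (hypothetical helpers; the expressions for A, B and C are the ones reused in `getTransformationMatrix` below). After the pre-projection translation by *d*, the homogeneous scale factor is *w' = z + d*, so the projected depth is *Z' = (B(z+d) + C) / (z+d)*:

```javascript
// Projection factors solved from the system of equations.
function projectionFactors(d, n, f) {
  return {
    A: d,
    B: (n + f + 2 * d) / (f - n),
    C: -(d * (2 * n + 2 * f) + 2 * f * n + 2 * d * d) / (f - n)
  };
}

// Projected Z coordinate after pre-projection and homogeneous division.
function projectedZ(z, d, n, f) {
  var k = projectionFactors(d, n, f);
  return (k.B * (z + d) + k.C) / (z + d);
}

// Requirements 3 and 4: the near plane maps to -1 and the far plane
// to +1, up to floating-point error:
var d = 5, n = -1, f = 10;
console.log(projectedZ(n, d, n, f));
console.log(projectedZ(f, d, n, f));
```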

## Full transformation matrix

Now that the last operation is defined, it can be combined with the others as the last step. The product and the final transposition are done in wxMaxima with the code `full_transformation:transpose(conjectured_projection. pre_projection. aspect_ratio. scale. rotz. roty. rotx. translation);`:

## Exchange of X and Z

For a better understanding of the projection operation, we add a final matrix that only exchanges the X and Z coordinates (`exchange_xz:matrix( [0,0,1,0], [0,1,0,0], [1,0,0,0], [0,0,0,1]);`):

This operation must be activated only for didactic purposes. It shows the projection operation from the side. The resulting matrix is given hereafter (`full_transformation_exz: transpose(exchange_xz. conjectured_projection. pre_projection. aspect_ratio. scale. rotz. roty. rotx. translation);`):

Depending on the activation of the last XZ exchange, the desired matrix is selected.

## Javascript code

We use wxMaxima to get the results in a C-like syntax and insert the formulas in `getTransformationMatrix`. As the distance to the camera must be greater than the near plane, we use the transformation matrix without projection for smaller values of *d*, and a third matrix for a projection including the final XZ exchange:

```javascript
// Returns a transformation matrix as a flat array with 16 components, given:
// ox, oy, oz: new origin (translation)
// rx, ry, rz: rotation angles (radians)
// s: scaling factor
// d: distance between camera and origin after translation,
//    if d <= -n skips projection completely
// f: z coordinate of far plane (normally positive)
// n: z coordinate of near plane (normally negative)
// ar: aspect ratio of the viewport (e.g. 16/9)
// exz: if true exchanges X and Z coords after projection
function getTransformationMatrix(ox, oy, oz, rx, ry, rz, s, d, f, n, ar, exz)
{
  // Pre-computes trigonometric values
  var cx = Math.cos(rx), sx = Math.sin(rx);
  var cy = Math.cos(ry), sy = Math.sin(ry);
  var cz = Math.cos(rz), sz = Math.sin(rz);

  // Tests if d is too small, hence making perspective projection not possible
  if (d <= -n)
  {
    // Transformation matrix without projection
    return new Float32Array([
      (cy*cz*s)/ar, cy*s*sz, -s*sy, 0,
      (s*(cz*sx*sy-cx*sz))/ar, s*(sx*sy*sz+cx*cz), cy*s*sx, 0,
      (s*(sx*sz+cx*cz*sy))/ar, s*(cx*sy*sz-cz*sx), cx*cy*s, 0,
      (s*(cz*((-oy*sx-cx*oz)*sy-cy*ox)-(oz*sx-cx*oy)*sz))/ar,
      s*(((-oy*sx-cx*oz)*sy-cy*ox)*sz+cz*(oz*sx-cx*oy)),
      s*(ox*sy+cy*(-oy*sx-cx*oz)), 1
    ]);
  }
  else
  {
    // Pre-computes values determined with wxMaxima
    var A = d;
    var B = (n+f+2*d)/(f-n);
    var C = -(d*(2*n+2*f)+2*f*n+2*d*d)/(f-n);

    // Tests if X and Z must be exchanged
    if (!exz)
    {
      // Full transformation matrix
      return new Float32Array([
        (cy*cz*s*A)/ar, cy*s*sz*A, -s*sy*B, -s*sy,
        (s*(cz*sx*sy-cx*sz)*A)/ar, s*(sx*sy*sz+cx*cz)*A, cy*s*sx*B, cy*s*sx,
        (s*(sx*sz+cx*cz*sy)*A)/ar, s*(cx*sy*sz-cz*sx)*A, cx*cy*s*B, cx*cy*s,
        (s*(cz*((-oy*sx-cx*oz)*sy-cy*ox)-(oz*sx-cx*oy)*sz)*A)/ar,
        s*(((-oy*sx-cx*oz)*sy-cy*ox)*sz+cz*(oz*sx-cx*oy))*A,
        C+(s*(ox*sy+cy*(-oy*sx-cx*oz))+d)*B, s*(ox*sy+cy*(-oy*sx-cx*oz))+d
      ]);
    }
    else
    {
      // Full transformation matrix with XZ exchange
      return new Float32Array([
        -s*sy*B, cy*s*sz*A, (cy*cz*s*A)/ar, -s*sy,
        cy*s*sx*B, s*(sx*sy*sz+cx*cz)*A, (s*(cz*sx*sy-cx*sz)*A)/ar, cy*s*sx,
        cx*cy*s*B, s*(cx*sy*sz-cz*sx)*A, (s*(sx*sz+cx*cz*sy)*A)/ar, cx*cy*s,
        C+(s*(ox*sy+cy*(-oy*sx-cx*oz))+d)*B,
        s*(((-oy*sx-cx*oz)*sy-cy*ox)*sz+cz*(oz*sx-cx*oy))*A,
        (s*(cz*((-oy*sx-cx*oz)*sy-cy*ox)-(oz*sx-cx*oy)*sz)*A)/ar,
        s*(ox*sy+cy*(-oy*sx-cx*oz))+d
      ]);
    }
  }
}
```

## Results

As usual, the result can be seen on-line. Additional HTML/Javascript controls were added to let the user play with the parameters of the transformation. It should look like this:

The controls give the ability to decompose the full transformation step-by-step, i.e. we start with the following view:

After the rotation around the X axis:

After the rotation around the Y axis:

After the rotation around the Z axis:

After scaling:

After perspective projection:

We use the XZ exchange to display what happens on the Z coordinates during the perspective projection, i.e. both images show the same scene after projection (with no other transformation), the first from the front, the second from the side:

## Summary

Nothing really new about WebGL in this post, mainly mathematics:

- Operations with "homogeneous coordinates" consist in successive 4×4 matrix products, where space coordinates are represented by 4-component column-vectors, the fourth component being initialized to 1
- Operations such as translation, rotation, scaling and projection on a plane are combined in one matrix product
- After this product, the 3D coordinates are obtained back by dividing the first 3 components by the fourth one
- Rotation and scaling operations are common three dimensional transformations adapted to homogeneous coordinates by putting the 3×3 transformation matrix into a 4×4 identity matrix
- The translations cannot be represented by a 3×3 matrix (non-linear operations) and need a specific 4×4 matrix with translation vector components on the 4th column
- The perspective projection is as well represented as a 4×4 matrix through which the 4th component of the homogeneous coordinates is made proportional to the Z component, resulting in a division by Z during the conversion from homogeneous to space coordinates (division by the 4th coordinate).
- A quite complex calibration of the factors used in the perspective projection is necessary to keep the Z coordinate and scale it to the interval [-1,+1]
- The parameters chosen for the transformation are: *(ox,oy,oz)* point looked at, *(rx,ry,rz)* rotation angles, *ar* aspect ratio, *s* scaling, *d* distance between observer and projection screen, *f* far plane Z coordinate, *n* near plane Z coordinate

### LSB-based integer factorization

Standard factorization methods use whole numbers. There is always a certain number of steps including modulo operations, but the numbers are considered from beginning to end, i.e. from the Most Significant Bits down to the Least Significant Bits (LSBs).

Another approach could be derived from the following simple number puzzle (not sure a solution exists for this one):

```
  A B C
*     C
-------
1 2 3 5
```

It is only necessary to look at the end of the numbers, i.e. the C, to see that there is only one solution C=5. This is the raw idea: try to deduce the LSBs first, without processing the whole numbers. A recursive search would systematically look at all pairs of LSB strings:

```
  p = X X 0 0 1 0
* q = X X 0 1 0 0
-----------------
  n = X X 1 0 0 0
```
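The consistency test at the heart of such a recursive search can be sketched as follows (a hypothetical helper in Javascript): candidate low bits of `p` and `q` are kept only if their product reproduces the low bits of `n`.

```javascript
// Tests whether candidate low bits pLow and qLow of the factors are
// consistent with n modulo 2^k, i.e. whether the k last bits match.
function lsbConsistent(pLow, qLow, n, k) {
  var mask = (1 << k) - 1;
  return ((pLow * qLow) & mask) === (n & mask);
}

// 3 * 5 = 15 = 0b1111, so the pair (3, 5) is consistent with n = 15
// on the 4 last bits, while (3, 3) is not:
console.log(lsbConsistent(3, 5, 15, 4)); // true
console.log(lsbConsistent(3, 3, 15, 4)); // false
```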

A common transformation used by other methods is done as following: if `n` is the product, `p` and `q` its factors, it is always possible to set:

`n = pq = (a+b)(a-b) = a²-b²`, or

`n + b² = a²`

This operation changes the problem into finding a square number `b²` such that, added to `n`, the sum is also a square (`a²`). The LSBs of squares are something that can be tested as well.
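For reference, this transformation is the basis of Fermat's factorization method; a naive sketch in Javascript (only suitable for small odd composite `n`, and limited by floating-point precision):

```javascript
// Naive Fermat factorization: searches for b such that n + b*b is a
// perfect square a*a, then returns p = a + b and q = a - b.
// Only reliable for small odd composite n.
function fermatFactor(n) {
  for (var b = 0; ; b++) {
    var a = Math.round(Math.sqrt(n + b * b));
    if (a * a === n + b * b) return [a + b, a - b];
  }
}

console.log(fermatFactor(8051)); // [97, 83]
```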

In this form, the method would surely not perform very well compared to others. The idea, if it even makes any sense, must be developed and optimized, e.g.:

- The assessment of LSBs could be integrated as a speed-up in another factorization method
- A heuristic criterion could reduce the solution space of the recursive search
- Using another base for the calculations (i.e. starting by converting `n` into this base) may simplify the assessments on the LSBs
- It may be possible to replace the recursive search by the resolution of a system of equations

### Wave Propagation part 3

Last time we came to a formula that can be used to compute the simulation of a one-dimensional masses-springs system, and saw that it could be interpreted as an approximation of the wave equation. This introduction was intended to show that a simple model, used primarily for a numeric simulation, leads to the same properties as a continuous system. We will use the wave equation as the basis in the next posts, and no longer the discrete system, because it yields just the same results.

The general form of the wave equation in 3 dimensions is:

*∂²U/∂t² = c² ∆U = c² ( ∂²U/∂x² + ∂²U/∂y² + ∂²U/∂z² )*

Where *c* is the speed of the wave (i.e. "celerity"), *∆* the Laplacian operator and *U* a scalar function of *x*, *y*, *z* and *t*. In order to make the simulation more interesting while keeping it computable, let’s look at approximated formulas in two dimensions. We are looking for the computation of *U(t,x,y)*, and assume that we can approximate the second-order derivatives by discretization using *∆x = ∆y = ∆t = 1*:

*∂²U/∂t² = c² ( ∂²U/∂x² + ∂²U/∂y² )*

*U(t+1,x,y) – 2 U(t,x,y) + U(t-1,x,y) = c² ( U(t,x+1,y) – 2 U(t,x,y) + U(t,x-1,y) + U(t,x,y+1) – 2 U(t,x,y) + U(t,x,y-1) )*

*U(t+1,x,y) = c² ( U(t,x+1,y) + U(t,x-1,y) + U(t,x,y+1) + U(t,x,y-1) ) + (2-4c²) U(t,x,y) – U(t-1,x,y)*

An obvious simplification of the formula can be reached by assuming *c = 1*, but as a consequence the propagation speed is fixed, hence a phenomenon involving an interface between different media (e.g. refraction) cannot be simulated. Furthermore, the propagation speed would be exactly the same as the discretization intervals (e.g. *∆t*). The result is that the waves move exactly one table cell per step, which may lead to unwanted effects. With this assumption, the formula becomes:

*U(t+1,x,y) = U(t,x+1,y) + U(t,x-1,y) + U(t,x,y+1) + U(t,x,y-1) – 2 U(t,x,y) – U(t-1,x,y)*

Graphically, the computation of this formula can be represented as follows:

Now, to define the computation completely, it is necessary to clarify the behavior at the borders and the initial conditions. For the borders, the simplest approach is to keep a border of constant cells (not processed in the computation) set to 0. The resulting behavior at the borders is a reflection of all incoming waves. Because this behavior may not be wished for a particular simulation activity, algorithms have been developed to "absorb" the incoming waves at the borders, but I won’t address them in this post. Another method, simpler but increasing the computation time, is just to extend the simulated area to such a size that reflected waves won’t reach the interesting area before the end of the simulated time.

The initial condition is another interesting topic because 2 steps are necessary. Basically, the 2 initial steps can be set to 0 for a stable start (nothing moves unless a perturbation is introduced). Any perturbation can be inserted in the computation in either one or two consecutive steps. If only one step is changed, the perturbation has no speed and will propagate equally in all directions. If two computation steps are modified, it is possible to give a direction to the introduced perturbation, under the condition that the celerity of the medium is respected.
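One computation step with *c = 1* and a constant zero border can be sketched in Javascript (the function name and array layout are illustrative):

```javascript
// Computes the next state of the 2D wave simulation from the two
// previous states prev and curr (2D arrays of the same size), using
// U(t+1) = sum of the 4 neighbors - 2 U(t) - U(t-1), with a border
// of constant cells kept at 0.
function waveStep(prev, curr) {
  var h = curr.length, w = curr[0].length;
  var next = [];
  for (var y = 0; y < h; y++) next.push(new Array(w).fill(0));
  for (var y = 1; y < h - 1; y++)
    for (var x = 1; x < w - 1; x++)
      next[y][x] = curr[y][x + 1] + curr[y][x - 1]
                 + curr[y + 1][x] + curr[y - 1][x]
                 - 2 * curr[y][x] - prev[y][x];
  return next;
}
```

Only the two last states are kept; a perturbation is introduced simply by modifying cells of `curr` (and possibly `prev`, to give it a direction) before stepping.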

That’s all for now regarding wave propagation. There is not a lot of information in these 3 posts, but enough to give the primary ideas and to start developing a simple simulation. Many other methods exist for calculations based on finite differences. The goal here was just to look at one that can be understood with basic mathematics. I will come back to this topic sometime in the (maybe far) future.

### Wave Propagation part 2

In the first part we considered a system composed of a chain of connected springs and masses, and came to the conclusion that the vertical position of the masses is given by the solution of a system of interdependent equations.

This system of equations is of the following form, where *aᵢ* is the acceleration and *Uᵢ* the position of the mass at position *i*:

*m aᵢ = k ( Uᵢ₋₁ – 2 Uᵢ + Uᵢ₊₁ )*

For each step of the computation, the new state of the system must be computed using its previous states. This is a basic mechanism for any kind of numeric simulation. The remaining difficulty is the correct representation of the acceleration term, which can be transformed using a first-order approximation, e.g. the derivative of *U* with respect to *t* can be given by:

*∂U(t)/∂t ≈ ( U(t+Δt) – U(t) ) / Δt*

We apply this formula two times to get the second order derivative (acceleration):

*∂²U(t)/∂t² ≈ ( U(t+2Δt) – U(t+Δt) – ( U(t+Δt) – U(t) ) ) / Δt / Δt*

*∂²U(t)/∂t² ≈ ( U(t+2Δt) – 2 U(t+Δt) + U(t) ) / Δt²*

Another way to write this formula:

*∂²U(t)/∂t² ≈ ( U(t+Δt) – 2 U(t) + U(t-Δt) ) / Δt²*

At this point, it is important to understand how to achieve the practical computation. The expected result is a set of values (the *Uᵢ*) evolving over time. The time between two consecutive steps will be exactly *Δt*. But the approximation of the acceleration shows that the computation of a new state of the *Uᵢ* depends not only on the previous state, but also on the state before the previous state (i.e. 2 states in the past, as shown by the *2Δt* span in the approximation). Practically, this means that a total of 3 states of the *Uᵢ* must be kept in memory: the current state being computed plus the two steps in the past.

The usage of different steps of the computation shows that we now need to clearly identify the computed values using their temporal and spatial coordinates. An obvious simplification is to set:

*Δt = 1*

After that, let’s use the notation *Uᵢ(t)* to identify a computed value at position *i* and step *t*. We put the approximation in place of the acceleration. The resulting formula to compute the *Uᵢ(t+1)* is then:

*m ( Uᵢ(t+1) – 2 Uᵢ(t) + Uᵢ(t-1) ) = k ( Uᵢ₋₁(t) – 2 Uᵢ(t) + Uᵢ₊₁(t) )*

With this formula, the computation is straightforward because there is only one term at *t+1* and all other terms are given by the previous steps. It is astonishing to see the relative simplicity of the formula (a linear combination of the previous values, no special case to handle), keeping in mind that it mimics the behavior of a physical system.
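With *m = k = 1* and fixed ends, this computation step can be sketched in a few lines of Javascript (illustrative names):

```javascript
// One step of the masses-springs chain simulation with m = k = 1:
// U_i(t+1) = U_{i-1}(t) + U_{i+1}(t) - U_i(t-1), ends kept fixed.
function chainStep(prev, curr) {
  var next = curr.slice();
  for (var i = 1; i < curr.length - 1; i++)
    next[i] = curr[i - 1] + curr[i + 1] - prev[i];
  return next;
}

// A single perturbation splits and propagates in both directions:
console.log(chainStep([0, 0, 0, 0, 0], [0, 0, 1, 0, 0])); // [0, 1, 0, 1, 0]
```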

This result shows a kind of symmetry between the parts on left and right: the same pattern *U(n+1)-2U(n)+U(n-1)* is present, with varying time index on the left hand side and varying spatial index on the right hand side. The part on the left is the approximation of a second order derivative with respect to *t*. So it appears natural to try to interpret the part on the right similarly.

This can be done and results in a well-known equation that can model any linear propagation phenomenon, i.e. where each propagated wave is independent of the others. This is the Wave Equation, given here in 1 dimension:

*∂²U/∂t² = c² ∂²U/∂x²*

To be continued…

### Wave Propagation part 1

A while ago I created a Java applet/application to play with the propagation of waves in an ideal medium. The theoretical background necessary to compute this kind of simulation is documented in this series of posts.

The first step is to find an algorithm that reproduces the behavior of propagation. Propagation is a very common phenomenon that can take many forms: water waves, sound, electromagnetism, vibrating strings, waves on jelly, etc. Whatever the form, propagation is always based on the local change of a property of the medium (e.g. water or air) over which the propagation occurs. Even if there is an apparent movement, matter is not displaced, only the local perturbation moves.

In the case of water waves, the modified property is the local level of the water surface. For sound waves, it is air pressure. For electromagnetism it is not a scalar value but a vector. In this case, each component of the vector is assumed independent of the others and behaves like a scalar property showing propagation.

A propagation is only possible if the medium is such that the local property is in a state of stable equilibrium, i.e. the induced deformation will tend to be corrected by the surrounding environment, like a spring comes back to its initial position.

A network of springs and masses is a simple, non-continuous propagation medium that can be modeled easily.

For example, in one dimension, let’s imagine a long chain of alternating masses and springs, attached to each other. Let’s assume that each mass can move only up and down, and that gravity plays no role. In this system, if a mass is moved from its initial position, the surrounding springs will tend to pull it back. But at the same time, because both ends of the springs can move, the other masses will be pulled as well, and so on.

Now let’s consider one of the masses (at position 2 on the picture); all masses have a mass of *m* and the springs a stiffness of *k*:

Because the masses move only up and down, only the vertical components of the forces applied by the springs need to be considered. Let’s write the sum of the forces applied to the mass at position 2, where *a₂* is the acceleration of the mass, and *Uᵢ* the vertical position of the masses:

*m a₂ = ∑ forces applied to mass at 2*

*m a₂ = F₁₂ + F₃₂ = k ( U₁ – U₂ ) + k ( U₃ – U₂ )*

*m a₂ = k ( U₁ – 2 U₂ + U₃ )*

Now, different things can be noted about this relation. To begin with, no real surprise, it is a differential equation (the acceleration *a₂* is the second-order derivative of the position *U₂*). If the masses at positions 1 and 3 did not move (e.g. *U₁ = U₃ = 0*), we would obtain a perfect oscillator (the mass at 2 would go up and down following a sinusoidal movement), typically represented by an equation of the form *y″ + K y = 0*.

Back to the considered system, where the masses at 1 and 3 move and all masses of the chain are interconnected via springs. Obviously the relation shows that the vertical position *U₂* cannot be computed without knowing *U₁* and *U₃*, which in turn depend on the positions of the next masses in the chain, and so on. It turns out that any mass in the chain can be influenced by the movement of any other mass. As a result, for each step of the computation, it is not possible to solve only one local equation for one mass: all positions must be computed.

Stay tuned for the next part.