# Posts Tagged tutorial

In the following I will present a method for deforming three dimensional geometry using a technique relying on radial basis functions (RBFs). These are mathematical functions that take a real number as input argument and return a real number. RBFs can be used for creating a smooth interpolation between values known only at a discrete set of positions. The term radial is used because the input argument given is typically computed as the distance between a fixed position in 3D space and another position at which we would like to evaluate a certain quantity.

The tutorial will give a short introduction to the linear algebra involved. However, the source code contains a working implementation of the technique, which may be used as a black box. First, an example of what we will achieve in the end:

Source code with a Visual Studio 2005 solution can be found here. The code should also compile on other platforms.

### Interpolation by radial basis functions

Assume that the values $F_i$ of a scalar-valued function are known at $N$ distinct points $\mathbf{x}_i = (x_i, y_i, z_i)$ in three-dimensional space. Then RBFs provide a means for creating a smooth function $f$ that interpolates these values in the whole domain. This function is written as a sum of evaluations of a radial basis function $\phi(r)$, where $r$ is the distance between the point $\mathbf{x}$ to be evaluated and $\mathbf{x}_i$:

$$f(\mathbf{x}) = \sum_{i=1}^{N} \lambda_i\, \phi(\|\mathbf{x} - \mathbf{x}_i\|) + c_0 + c_1 x + c_2 y + c_3 z \qquad (1)$$

Here the $\lambda_i$ are scalar coefficients, and the last four terms constitute a first-degree polynomial with coefficients $c_0$ to $c_3$. These terms describe an affine transformation which cannot be realised by the radial basis functions alone. From the known function values $F_i = f(\mathbf{x}_i)$ we can assemble a system of linear equations:

$$B \begin{pmatrix} \boldsymbol{\lambda} \\ \mathbf{c} \end{pmatrix} = \begin{pmatrix} \mathbf{F} \\ \mathbf{0} \end{pmatrix}, \qquad B = \begin{pmatrix} A & P \\ P^T & 0 \end{pmatrix}$$

where $\boldsymbol{\lambda} = (\lambda_1, \dots, \lambda_N)^T$, $\mathbf{c} = (c_0, c_1, c_2, c_3)^T$, $\mathbf{F} = (F_1, \dots, F_N)^T$, $A$ is an $N \times N$ matrix with entries $A_{ij} = \phi(\|\mathbf{x}_i - \mathbf{x}_j\|)$, and $P$ is the $N \times 4$ matrix whose $i$-th row is $(1, x_i, y_i, z_i)$.

A number of choices for $\phi$ will result in a unique solution of the system. In this tutorial we use the shifted log function

$$\phi(r) = \sqrt{\log_{10}(r^2 + k^2)}$$

with $k^2 \geq 1$. Solving the equation system for $\boldsymbol{\lambda}$ and $\mathbf{c}$ gives us the coefficients to use in equation (1) when interpolating between the known values.

### Interpolating displacements

How can RBFs be used for deforming geometry? Well, assume that the deformation is known for $N$ 3D positions $\mathbf{x}_i$, and that this information is represented by a vector $\mathbf{d}_i$ describing the 3D displacement of the geometry that was positioned at $\mathbf{x}_i$ in the original, undeformed state. You can think of the positions $\mathbf{x}_i$ as control points that have been moved to new positions $\mathbf{x}_i + \mathbf{d}_i$. The RBF interpolation method can now be used for interpolating these displacements to other positions.

Using the notation $\mathbf{d}_i = (d^x_i, d^y_i, d^z_i)$ and $\mathbf{d}^x = (d^x_1, \dots, d^x_N)^T$ (and similarly for $y$ and $z$), three linear systems are set up as above, letting the displacement components take the place of the function values $\mathbf{F}$ from the previous section:

$$B\begin{pmatrix}\boldsymbol{\lambda}^x \\ \mathbf{c}^x\end{pmatrix} = \begin{pmatrix}\mathbf{d}^x \\ \mathbf{0}\end{pmatrix}, \qquad B\begin{pmatrix}\boldsymbol{\lambda}^y \\ \mathbf{c}^y\end{pmatrix} = \begin{pmatrix}\mathbf{d}^y \\ \mathbf{0}\end{pmatrix}, \qquad B\begin{pmatrix}\boldsymbol{\lambda}^z \\ \mathbf{c}^z\end{pmatrix} = \begin{pmatrix}\mathbf{d}^z \\ \mathbf{0}\end{pmatrix}$$

where $B$ is the block matrix assembled as described above. Solving for $(\boldsymbol{\lambda}^x, \mathbf{c}^x)$, $(\boldsymbol{\lambda}^y, \mathbf{c}^y)$, and $(\boldsymbol{\lambda}^z, \mathbf{c}^z)$ involves a single matrix inversion and three matrix-vector multiplications, and gives us the coefficients for interpolating displacements in all three directions by the expression

$$\mathbf{d}(\mathbf{x}) = \big(f_x(\mathbf{x}),\, f_y(\mathbf{x}),\, f_z(\mathbf{x})\big)$$

where each component function has the form of equation (1) with its own set of coefficients.

### The source code

In the source code accompanying this tutorial you will find the class RBFInterpolator which has an interface like this:

```cpp
class RBFInterpolator
{
public:
    RBFInterpolator();
    ~RBFInterpolator();

    // create an interpolation function f that obeys F_i = f(x_i, y_i, z_i)
    RBFInterpolator(vector<float> x, vector<float> y, vector<float> z, vector<float> F);

    // specify new function values F_i while keeping the same positions
    void UpdateFunctionValues(vector<float> F);

    // evaluate the interpolation function f at the 3D position (x,y,z)
    real interpolate(float x, float y, float z);

private:
    ...
};
```

This class implements the interpolation method described above using the newmat matrix library. It is quite easy to use: just fill std::vectors with the x, y, and z components of the positions where the function value is known, and another std::vector with the corresponding values F_i. Then pass these vectors to the RBFInterpolator constructor, and it will be ready to interpolate. The value at any position is then evaluated by calling the 'interpolate' function. If some of the values change at any time, the interpolator can be quickly updated using the 'UpdateFunctionValues' method.

We want to deform a triangle surface mesh. The meshes are stored in a class TriangleMesh and loaded from OBJ files.
In the source code, the allocation of the std::vectors describing the control points and the initialisation of the RBFInterpolators looks like this:

```cpp
void loadMeshAndSetupControlPoints()
{
    // open an OBJ file to deform
    string sourceOBJ = "test_dragon.obj";
    undeformedMesh = new TriangleMesh(sourceOBJ);
    deformedMesh   = new TriangleMesh(sourceOBJ);

    // we want 11 control points which we place at different vertex positions
    const int numControlPoints = 11;
    const int verticesPerControlPoint =
        ((int)undeformedMesh->getParticles().size()) / numControlPoints;

    for (int i = 0; i < numControlPoints; i++)
    {
        Vector3 pos = undeformedMesh->getParticles()[i * verticesPerControlPoint].getPos();
        controlPointPosX.push_back(pos[0]);
        controlPointPosY.push_back(pos[1]);
        controlPointPosZ.push_back(pos[2]);
    }

    // allocate vectors for storing displacements
    for (unsigned int i = 0; i < controlPointPosX.size(); i++)
    {
        controlPointDisplacementX.push_back(0.0f);
        controlPointDisplacementY.push_back(0.0f);
        controlPointDisplacementZ.push_back(0.0f);
    }

    // initialize interpolation functions
    rbfX = RBFInterpolator(controlPointPosX, controlPointPosY, controlPointPosZ, controlPointDisplacementX);
    rbfY = RBFInterpolator(controlPointPosX, controlPointPosY, controlPointPosZ, controlPointDisplacementY);
    rbfZ = RBFInterpolator(controlPointPosX, controlPointPosY, controlPointPosZ, controlPointDisplacementZ);
}
```

Now all displacements are set to zero vectors, which is not terribly exciting! To make it a bit more fun we can vary the displacements with time:

```cpp
// move control points
for (unsigned int i = 0; i < controlPointPosX.size(); i++)
{
    controlPointDisplacementX[i] = displacementMagnitude * cosf(time + i * timeOffset);
    controlPointDisplacementY[i] = displacementMagnitude * sinf(2.0f * (time + i * timeOffset));
    controlPointDisplacementZ[i] = displacementMagnitude * sinf(4.0f * (time + i * timeOffset));
}

// update the interpolation functions with the new displacements
rbfX.UpdateFunctionValues(controlPointDisplacementX);
rbfY.UpdateFunctionValues(controlPointDisplacementY);
rbfZ.UpdateFunctionValues(controlPointDisplacementZ);

// deform the object to render
deformObject(deformedMesh, undeformedMesh);
```

Here the function 'deformObject' looks like this:

```cpp
// Deforms the mesh 'initialObject' based on the current interpolation functions
// (global variables). The deformed vertex positions are stored in the mesh 'res'.
// The triangle connectivity is assumed to be already correct in 'res'.
void deformObject(TriangleMesh* res, TriangleMesh* initialObject)
{
    for (unsigned int i = 0; i < res->getParticles().size(); i++)
    {
        Vector3 oldpos = initialObject->getParticles()[i].getPos();

        Vector3 newpos;
        newpos[0] = oldpos[0] + rbfX.interpolate(oldpos[0], oldpos[1], oldpos[2]);
        newpos[1] = oldpos[1] + rbfY.interpolate(oldpos[0], oldpos[1], oldpos[2]);
        newpos[2] = oldpos[2] + rbfZ.interpolate(oldpos[0], oldpos[1], oldpos[2]);

        res->getParticles()[i].setPos(newpos);
    }
}
```

That's it! Now I encourage you to download the source code and play with it. Perhaps you can experiment with other radial basis functions? Or make the dragon crawl like a caterpillar? If you code something interesting based on this tutorial, send a link to me and we will link to it from this page.

Karsten Noe

I got a mail from Woo Won Kim from Yonsei University in South Korea, who has made a head modeling program that can generate 3D human heads from two pictures of a person using code from this RBF tutorial. Check out a video of this here.

The famous Stanford dragon rendered using GPU raycasting.

This post will try to explain how to implement a GPU-based raycasting renderer using OpenGL and Nvidia's Cg. The tutorial assumes some experience with OpenGL and vertex/fragment shaders.

First of all, why do we need this algorithm? Because it is a smart way to achieve high-quality volume rendering, and the raycasting algorithm is well suited for modern GPUs, especially the new 8800 series because of its unified shading architecture.

The purpose of this tutorial is to help people get started with GPU raycasting, because there are some technical difficulties that have to be addressed in order to render volumetric data like in the picture above.

The core of the algorithm is to send one ray per screen pixel and trace this ray through the volume. This can be implemented in a fragment program, and the rendering can be done in real time. The technique is pretty flexible; for instance, effects like shadows can be implemented with a few lines of code.

Here is a conceptual image of the raycasting algorithm, where one ray per pixel is spawned and traced through the volume.

In order to generate the necessary rays we use a clever trick, exploiting OpenGL's ability to render geometry. How can this help us, you might say? Well, listen up, my young apprentice. First we define a ray:

• A ray is just an origin point o and a direction vector dir.
• A ray describes a line in 3D space by the formula P(t) = o + dir * t.
• So to generate a ray we need to find the origin point and the direction vector.

This can be done by rendering a cube whose colors represent coordinates, letting OpenGL's interpolation take care of the rest. The way to do this is to render the front and the back faces of a unit cube, as illustrated just below.

If we subtract the backface values (on the right) from the frontface values, we get a direction vector for each pixel. This is the direction of our ray. The origin is just the frontface value of the cube. So we have to do two render passes: one for the front and one for the back. To render the back side we enable frontface culling. In my implementation I use an OpenGL framebuffer object (FBO) to store the result of the backside rendering, and use the frontface rendering to generate the fragments that start the raycasting process. If you are unfamiliar with framebuffer objects, check out this link.

So to perform the raycasting we need to create the ray and then step through the volume texture. This is all done in a single fragment program executed on the GPU. The fragment program is fairly simple; the only real issue is to calculate the texture coordinates used to index the backface buffer in order to get the ray exit point. These texture coordinates are referred to as normalized device coordinates, and in the implementation we find the corresponding pixel in the backface buffer by this calculation:

```
float2 texc = ((IN.Pos.xy / IN.Pos.w) + 1) / 2;
```

Here IN.Pos is the modelview-projected position. This calculation gives us the fragment's screen position in the interval [0,1]. Then the ray exit position is found by using texc to index into the backface buffer like this:

```
float4 exit_position = tex2D(backface_buffer, texc);
```

Now we create the ray and use the Shader Model 3.0 looping capabilities to create a for loop. This loop steps through the volume with a certain step size delta, and we can accumulate opacity and color values according to the nature of our volume data set. In the demo implementation the ray terminates when it leaves the volume or when the accumulated opacity reaches a high enough value. But there are many possibilities; for instance, if we terminate the ray when a certain opacity threshold is first encountered, the result will be a kind of isosurface rendering.

The raycasting technique can be used for many types of rendering problems where polygon-based rendering has a hard time. For instance, effects like smoke and glass can be rendered in real time. Maybe I will do a tutorial on these subjects in the near future.

To get you started with this cool rendering algorithm, I have made a simple GPU raycasting implementation that hopefully will clear up the details. That is just the kind of guy I am.

The demo contains a Windows executable and source code that implements the GPU raycasting algorithm. To compile it you need Nvidia's Cg and GLEW.

The demo just shows a volume of colors from which some spheres have been subtracted. The step size of the ray can be adjusted by pressing the "w" and "e" keys. Note that this demo might be hard on your graphics card; pressing "w" a lot results in a big step size, and the raycasting will update more rapidly.

As a last comment: this is just a tutorial to get you started with GPU raycasting. The technique was introduced back in 2003 by these guys:

J. Krüger and R. Westermann. Acceleration techniques for GPU-based volume rendering. In Proceedings of IEEE Visualization 2003, pages 287–292, 2003.