Hardware Accelerated Volume Rendering Using PC Hardware

CSE 564 - Introduction to Volume Rendering
Final Project
by Evan Closson
05/20/2003

Files

Introduction

Visualizing data at interactive rates is important for the exploration of volumetric data sets. Specialized systems exist that can accomplish this, but they are not readily available. Common PC hardware, on the other hand, is easy to obtain and can be used to explore volumetric data quickly.

Goals

Techniques

Several techniques can make use of graphics hardware to accelerate volume rendering. One method uses 2D texture hardware by slicing the volume along each of its major axes. This method is fast and has decent quality, but it is very memory intensive. A more memory-efficient method uses 3D texture hardware; it is also very fast and does not require as much memory. Unfortunately, 3D texture hardware is not as readily available as 2D texture hardware, although it is becoming much more common in graphics cards as time goes on.
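For illustration, here is a minimal sketch of the 3D texture path; it assumes an OpenGL 1.2-class context where glTexImage3D is available, and the function and parameter names are placeholders rather than code from this project.

    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>
    #include <GL/glext.h>

    /* Sketch: upload an 8-bit intensity volume as a single 3D texture.
       Assumes an OpenGL 1.2 context is already current (on Windows the
       glTexImage3D entry point would be fetched via wglGetProcAddress).
       The caller owns the voxel data. */
    GLuint upload_volume_3d(const GLubyte *voxels, int w, int h, int d)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_3D, tex);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        /* With GL_LINEAR on a 3D texture, trilinear filtering across
           slices comes directly from the hardware. */
        glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE8, w, h, d, 0,
                     GL_LUMINANCE, GL_UNSIGNED_BYTE, voxels);
        return tex;
    }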

What I did

Unfortunately, I did not have access to any of the more powerful graphics hardware, so I set out to accelerate volume rendering on my GeForce4 440 Go. This hardware does not support 3D textures, so I used the 2D texture slice approach.

The basic steps of this approach are to slice the volume along each of the major axes. This gives three data sets, or slice stacks, that must be stored in memory -- very limiting for large data sets. Three stacks are needed because the slices are textured onto 2D polygons for rendering; if a single stack were viewed from the side, nothing would be seen. To fix this, each of the three major axes has a stack associated with it. The stack for the axis closest to the camera location is rendered back to front with blending enabled, as the sketch below illustrates.
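A minimal sketch of drawing one such stack, assuming the slice textures already exist and the quads span a unit cube; the names here are placeholders, not the project's actual code.

    #include <GL/gl.h>

    /* Sketch: draw one axis-aligned slice stack back to front with
       alpha blending. slice_tex holds one 2D texture per slice; the
       slices are assumed to lie along z from -0.5 to 0.5, with the
       camera on the +z side (otherwise iterate in reverse). */
    void draw_stack_z(const GLuint *slice_tex, int num_slices)
    {
        glEnable(GL_TEXTURE_2D);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); /* over operator */

        for (int i = 0; i < num_slices; ++i) {             /* back to front */
            GLfloat z = -0.5f + (GLfloat)i / (num_slices - 1);
            glBindTexture(GL_TEXTURE_2D, slice_tex[i]);
            glBegin(GL_QUADS);
            glTexCoord2f(0, 0); glVertex3f(-0.5f, -0.5f, z);
            glTexCoord2f(1, 0); glVertex3f( 0.5f, -0.5f, z);
            glTexCoord2f(1, 1); glVertex3f( 0.5f,  0.5f, z);
            glTexCoord2f(0, 1); glVertex3f(-0.5f,  0.5f, z);
            glEnd();
        }
        glDisable(GL_BLEND);
    }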

To make better use of memory and allow for interactive transfer function updates, I used texture palettes. The intensity value of each voxel serves as a lookup into the texture palette, and the transfer functions edit the palette directly. I made use of the GL_EXT_paletted_texture and GL_EXT_shared_texture_palette extensions to accomplish this, as sketched below.
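A sketch of pushing a transfer function update into the shared palette, assuming the slice textures were created with the GL_COLOR_INDEX8_EXT internal format and that the glColorTableEXT entry point has been fetched through the platform's extension mechanism.

    #include <GL/gl.h>
    #include <GL/glext.h>

    /* Fetched at startup, e.g. via wglGetProcAddress on Windows. */
    extern PFNGLCOLORTABLEEXTPROC glColorTableEXT;

    /* Sketch: the transfer function writes a 256-entry RGBA palette,
       and this single call updates every paletted slice texture at
       once, so the volume data itself never has to be re-uploaded. */
    void update_transfer_function(const GLubyte rgba[256][4])
    {
        glEnable(GL_SHARED_TEXTURE_PALETTE_EXT);
        glColorTableEXT(GL_SHARED_TEXTURE_PALETTE_EXT, GL_RGBA8, 256,
                        GL_RGBA, GL_UNSIGNED_BYTE, rgba);
    }

Because the palette is shared, editing the transfer function costs one small upload per update instead of re-uploading every slice.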

To find out which stack needs to be rendered, the axis closest to the camera must be determined. This can easily be done using the inverse model-view transformation: transforming the eye point by it gives the camera location in the volume's coordinate space, and the largest of the x, y, or z components indicates the closest axis.
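In outline, the check can look like this; invert4x4 stands in for any 4x4 matrix inverse routine and is not part of OpenGL.

    #include <math.h>
    #include <GL/gl.h>

    /* Placeholder for the application's own 4x4 matrix inverse. */
    extern int invert4x4(const GLfloat m[16], GLfloat out[16]);

    /* Sketch: pick the slice stack to draw. The camera position in the
       volume's coordinate space is the translation column of the
       inverse model-view matrix; the dominant component names the
       nearest axis. Returns 0 = x stack, 1 = y stack, 2 = z stack. */
    int pick_stack(void)
    {
        GLfloat mv[16], inv[16];
        glGetFloatv(GL_MODELVIEW_MATRIX, mv);
        invert4x4(mv, inv);

        /* Column-major layout: the eye point maps to elements 12..14. */
        GLfloat x = fabsf(inv[12]), y = fabsf(inv[13]), z = fabsf(inv[14]);
        if (x >= y && x >= z) return 0;
        if (y >= z)           return 1;
        return 2;
    }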

I also implemented trilinear filtering, which can be accomplished with the 2D slices by making use of register combiners. The GL_NV_register_combiners and GL_ARB_multitexture extensions were used. The basic idea is that when rendering the stack, instead of rendering each slice directly, a slice is generated in between slice i and slice i + 1. The hardware performs its normal bilinear filtering within slices i and i + 1, and the register combiners then interpolate between the two slices.
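A sketch of combiner state that performs this interpolation, assuming slice i is bound to texture unit 0 and slice i + 1 to unit 1 via GL_ARB_multitexture, and that the NV_register_combiners entry points have been loaded; frac is the weight given to slice i.

    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>
    #include <GL/glext.h>

    /* Sketch: blend slice i (unit 0) and slice i+1 (unit 1) with one
       general combiner: out = f*tex0 + (1-f)*tex1 for both RGB and A. */
    void set_slice_interpolation(GLfloat frac)
    {
        GLfloat f[4] = { frac, frac, frac, frac };
        glEnable(GL_REGISTER_COMBINERS_NV);
        glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 1);
        glCombinerParameterfvNV(GL_CONSTANT_COLOR0_NV, f);

        /* RGB portion: A*B + C*D = f*tex0 + (1-f)*tex1 -> spare0. */
        glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV,
                          GL_CONSTANT_COLOR0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
        glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV,
                          GL_TEXTURE0_ARB, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
        glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_C_NV,
                          GL_CONSTANT_COLOR0_NV, GL_UNSIGNED_INVERT_NV, GL_RGB);
        glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_D_NV,
                          GL_TEXTURE1_ARB, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
        glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB,
                           GL_DISCARD_NV, GL_DISCARD_NV, GL_SPARE0_NV,
                           GL_NONE, GL_NONE, GL_FALSE, GL_FALSE, GL_FALSE);

        /* Alpha portion: the same interpolation on the alpha channel. */
        glCombinerInputNV(GL_COMBINER0_NV, GL_ALPHA, GL_VARIABLE_A_NV,
                          GL_CONSTANT_COLOR0_NV, GL_UNSIGNED_IDENTITY_NV, GL_ALPHA);
        glCombinerInputNV(GL_COMBINER0_NV, GL_ALPHA, GL_VARIABLE_B_NV,
                          GL_TEXTURE0_ARB, GL_UNSIGNED_IDENTITY_NV, GL_ALPHA);
        glCombinerInputNV(GL_COMBINER0_NV, GL_ALPHA, GL_VARIABLE_C_NV,
                          GL_CONSTANT_COLOR0_NV, GL_UNSIGNED_INVERT_NV, GL_ALPHA);
        glCombinerInputNV(GL_COMBINER0_NV, GL_ALPHA, GL_VARIABLE_D_NV,
                          GL_TEXTURE1_ARB, GL_UNSIGNED_IDENTITY_NV, GL_ALPHA);
        glCombinerOutputNV(GL_COMBINER0_NV, GL_ALPHA,
                           GL_DISCARD_NV, GL_DISCARD_NV, GL_SPARE0_NV,
                           GL_NONE, GL_NONE, GL_FALSE, GL_FALSE, GL_FALSE);

        /* Final combiner computes A*B + (1-A)*C + D; with A = 1 and
           B = spare0 it passes the interpolated result through. */
        glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_ZERO,
                               GL_UNSIGNED_INVERT_NV, GL_RGB);
        glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_SPARE0_NV,
                               GL_UNSIGNED_IDENTITY_NV, GL_RGB);
        glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_ZERO,
                               GL_UNSIGNED_IDENTITY_NV, GL_RGB);
        glFinalCombinerInputNV(GL_VARIABLE_D_NV, GL_ZERO,
                               GL_UNSIGNED_IDENTITY_NV, GL_RGB);
        glFinalCombinerInputNV(GL_VARIABLE_G_NV, GL_SPARE0_NV,
                               GL_UNSIGNED_IDENTITY_NV, GL_ALPHA);
    }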

Results

This is an image of the program in action:

[Image: the hardware volume rendering program in action]

Here are some images that were generated from the hardware renderer:

[Images: hardware-rendered head, foot, engine, foot, lobster, and engine data sets]

Schedule

Date      Objective
April 7   Web site set up.
April 14  Learn about the capabilities of the graphics hardware I possess and will have access to; narrow down and decide on the exact types of hardware accelerated renderers to implement.
April 21  Framework and demo code design completed.
April 28  Renderer code design completed; majority of research done.
May 5     Implementation of renderer completed.
May 12    Renderer integrated into Engine Room.
May 19    Project due. Presentations start.

Progress

References and Links