Point-based Volume Visualization

Current visualization methods for arbitrary-mesh, volumetric data sets do not allow interactive rendering, or even low-quality previewing, of large-scale data sets containing several million grid points. In most cases, a scientist creates or measures such a data set without a priori knowledge of where to find the features she is looking for; sometimes even without knowing what those features are. Volume visualization has proven to be a very helpful tool in these situations. But without interactive navigation and exploration tools, finding features in a very large data set and highlighting them using customized transfer functions is difficult and time-consuming.

If images of a data set could somehow be rendered at interactive rates, even at relatively poor quality, the navigation process could be sped up considerably. Most scientific data sets can be reduced to a "lowest common denominator" by ignoring the shape and connectivity of the grid cells they are defined on and treating them as clouds of valued points without connectivity. Given a rendering algorithm that can interpolate data values between those points in real time, the simplicity of point clouds can be exploited to easily create multi-resolution data sets by assigning subsets of points to a hierarchy of increasingly coarser representations. A previewing renderer can then deliver interactive frame rates for arbitrarily large data sets by selecting an appropriately coarse representation for visualization; a sketch of this level selection follows Figure 1.
Figure 1: Point-based preview rendering of a medium-sized unstructured data set consisting of 103,064 valued points and originally connected by 567,862 tetrahedral cells. The wing geometry is implicitly represented by 27,044 boundary triangles. (Data set provided by Paresh Parikh, ViGYAN, Inc.).
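
As a rough illustration of how such a previewing renderer might select a hierarchy level, here is a minimal C++ sketch that picks the finest level fitting a per-frame point budget. The Point and PointHierarchy types, the selectLevel function, and the pointBudget parameter are hypothetical names invented for this sketch, not taken from the actual implementation.

    #include <cstddef>
    #include <vector>

    // Hypothetical multi-resolution point cloud: level 0 holds all
    // points, each further level a coarser subset of its predecessor.
    struct Point {
        float x, y, z;   // grid point position
        float value;     // scalar data value at this point
    };

    struct PointHierarchy {
        std::vector<std::vector<Point>> levels;   // finest level first

        // Return the finest level whose point count fits the number of
        // points the renderer can draw per frame at interactive rates.
        const std::vector<Point>& selectLevel(std::size_t pointBudget) const {
            for (const std::vector<Point>& level : levels)
                if (level.size() <= pointBudget)
                    return level;
            return levels.back();   // even the coarsest level is too big
        }
    };

A previewing renderer could derive the budget from measured frame times, switching back to finer levels once the user stops navigating.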

Project Goals

The main project goal was to design and implement data structures and algorithms to interactively preview large scientific data sets defined on arbitrary grids, at multiple levels of resolution.

Project Status

In March 2000, the project reached the level of functionality described above. The hierarchy generation program used Delaunay tetrahedralizations and maximum independent sets to remove neighboring points from each hierarchy level to generate the next coarser one, leading to a point reduction factor of about 1:8 between hierarchy levels. Hierarchy generation for a medium-sized data set (about 100,000 points) takes about a minute. The implemented preview renderer can deliver interactive rendering performance on desktop and medium-range graphics workstations; see the screen shot pages for details. We submitted a paper describing the rendering algorithm and the pre-processing hierarchy generation to the IEEE Volume Visualization 2000 conference - the rejected VolVis submission is available for download (PDF format, 4,794K) - and later presented it at the Workshop on Computer Graphics and Virtual Reality in 2000.
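
To make the coarsening step concrete, here is a minimal C++ sketch of a greedy maximal independent set computation over a point neighbor graph, such as the graph of Delaunay edges. The adjacency representation, the visitation order, and the name greedyIndependentSet are illustrative assumptions; how the actual program maps the independent set to the points of the next coarser level may differ.

    #include <cstddef>
    #include <vector>

    // Greedy maximal independent set over a point neighbor graph.
    // adjacency[i] lists the neighbors of point i, e.g. the points
    // sharing a Delaunay edge with it.
    std::vector<std::size_t> greedyIndependentSet(
        const std::vector<std::vector<std::size_t>>& adjacency)
    {
        std::vector<bool> blocked(adjacency.size(), false);
        std::vector<std::size_t> selected;
        for (std::size_t i = 0; i < adjacency.size(); ++i) {
            if (blocked[i])
                continue;                  // a neighbor was already taken
            selected.push_back(i);         // take this point...
            for (std::size_t n : adjacency[i])
                blocked[n] = true;         // ...and block all its neighbors
        }
        return selected;
    }

Because no two selected points are adjacent, the selection is spread evenly over the point cloud, which makes it a reasonable basis for a well-distributed coarser representation.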

Pages In This Section

The "Mavriplis" Data Set
Screen shots of previews of a small unstructured data set. This data set, resulting from an airflow simulation, was provided by Dimitri Mavriplis at ICASE. It is defined on a tetrahedral grid consisting of 2,800 vertices and 13,576 tetrahedra.
The "Blunt Fin" Data Set
Screen shots of previews of a medium-sized curvilinear data set, the ubiquitous "Blunt Fin" (the "Utah Teapot" of CFD visualization). This data set was provided by C.M. Hung and P.G. Buning. It is defined on a curvilinear hexahedral grid consisting of 39 x 31 x 31 cells with vertex-centered data (resulting in 40,960 vertices).
The "Parikh" Data Set
Screen shots of previews of a large unstructured data set. This data set, resulting from an airflow simulation around a delta wing with "payload," was provided by Paresh Parikh at ViGYAN, Inc. It is defined on a tetrahedral grid consisting of 103,064 vertices and 567,862 tetrahedra.
The "Shalf" Data Set
Screen shots of previews of a large AMR data set. This data set, resulting from a cosmological simulation of an X-ray cluster, was provided by Greg Bryan, Mike Norman and John Shalf at the Laboratory for Computational Astrophysics at NCSA and Lawrence Berkeley National Laboratory. It is defined on a hierarchy of rectilinear grids generated by an adaptive mesh refinement (AMR) algorithm. It consists of 2,531,452 vertices altogether.