Geometry and Graphics

Oct 31, 2015

Virtual Reality for Smart Cities

What if we could integrate the work we’ve done on traffic and buildings into one comprehensive digital twin? And how could such a digital twin be made more compelling to the general public? This project arose from a thought experiment and built on a project two of us did for HackSimBuild 2022.

My Contributions
  • Wrote large parts of the book chapter discussing potential applications.
  • Supported the prototype implementation through feedback and testing (credit for the heavy lifting of building it goes 100% to my colleague, Haowen!)

Three different rendered views. The top scene is a daytime view of two people conversing next to a roundabout, with buildings in the background and a koala on the sidewalk. The bottom left shows a person running towards a volleyball field in a park at sunset, with one of the campus buildings in the background. The bottom right shows a person standing next to a large ornate water fountain in the same place, also during sunset.
Three 3D renderings of scenes at ORNL campus. The top shows a view of an existing roundabout and existing buildings. The bottom images are both fictitious options for the use of a green space next to one of the campus buildings.

Other Successes

Ph.D. Thesis: Discrete Geometric Methods for Surface Deformation and Visualization

A lot of research I did as a Ph.D. student contributed to my thesis, but I had a few side quests as well. My main quest, however, was the modification of geometric surfaces under different constraints. (This gets math-y, you’ve been warned.)

Deformation that Preserves Total Gauß Curvature

In industrial surface generation, saving even a small amount of material in an individual piece can accumulate into substantial cost savings. Bending energy is one of the properties that are important in this context, and it can be measured using Gauß curvature. The goal of the deformation is to minimize (changes in) total Gauß curvature, which in turn minimizes material cost and waste.
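As a rough illustration of how Gauß curvature shows up on a triangulated surface (this is the standard angle-defect discretization, not necessarily the exact formulation from the thesis), the curvature concentrated at a mesh vertex can be computed from the angles of its incident triangles:

```python
import numpy as np

def angle_defect(vertex, neighbors):
    """Discrete Gauss curvature at a mesh vertex as the angle defect:
    2*pi minus the sum of the triangle angles at the vertex.
    `neighbors` are the adjacent vertices in cyclic order around it."""
    v = np.asarray(vertex, dtype=float)
    nb = [np.asarray(p, dtype=float) for p in neighbors]
    total = 0.0
    for i in range(len(nb)):
        e1 = nb[i] - v
        e2 = nb[(i + 1) % len(nb)] - v
        cos_a = np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2))
        total += np.arccos(np.clip(cos_a, -1.0, 1.0))
    return 2 * np.pi - total
```

A flat vertex star has defect zero; a cone tip (like a pyramid apex) has a positive defect, i.e. positive Gauß curvature concentrated there.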

3D rendered gray mechanical component (left) and a recreation using surface patches
The fandisk model by Hoppe and a portion of it recreated from Bézier surface patches. The Bézier patches are colored by curvature.

My Contributions
  • Developed a mathematical proof to determine the limits of the deformations.
  • Performed case studies for a simple geometric form (helicoid) and a more complex machined part (fandisk), applying valid deformations and rendering the results (Matlab).

Efficient Deformation of Bézier Curves and Surfaces

Bézier curves are smooth parametric curves that can be defined by a small number of control points. Bézier surfaces are generated from meshes of control points using a similar method to create a smooth surface.
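As a minimal sketch of how such a curve is generated from its control points (using the classic de Casteljau algorithm; the function name is my own):

```python
import numpy as np

def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t (0 <= t <= 1) by
    repeatedly interpolating linearly between the control points
    until only a single point remains."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# A cubic Bezier curve in 2D, defined by just four control points.
ctrl = [(0, 0), (1, 2), (3, 2), (4, 0)]
midpoint = de_casteljau(ctrl, 0.5)  # point on the curve at t = 0.5
```

Sampling `t` over [0, 1] yields the whole smooth curve; Bézier surfaces apply the same scheme in two parameter directions over a grid of control points.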

Animation of rounded surface with a grid rendered on top of it. The surface balloons out and back in, and it’s colored dark red towards the edges, fading into white at the top.
Bézier surface undergoing transformation using the local influence.

To deform the curve, you could sample many points on it and move them individually, but the resulting curve will no longer be smooth, and making it look smooth again requires moving a lot of points consistently. Instead, you can move the handful of control points and re-generate the curve in much less time. The question is: where do you move the control points to achieve the desired outcome?

A black control polygon around a S-shaped Bézier curve. Different parts of the curve are highlighted in different colors corresponding to the control point labels. Next to this, there are overlapping intervals for each point in the same colors.
Illustration of local influence for a Bézier curve. Each point only considers a section of the curve to determine its normal. This section is defined as the span between the points for the previous and next neighbors (or just one neighbor for the end points).

My Contributions
  • Developed and compared four different approaches to deforming the control polygon, each based on a different choice of normal: the control polygon’s (discrete) normal; the normal at the curve point whose arc length matches that of the control point along the polygon; the vector pointing from the curve to the control point; and the normal at the curve point closest to the control point.
  • Developed a localized version that only allows nearby parts of the curve to affect the choice of normal.
  • Evaluated feature preservation and complexity of all methods.
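A minimal sketch of the first, polygon-based variant for a 2D curve (the function names `polygon_normals` and `inflate` are illustrative, not the thesis code):

```python
import numpy as np

def polygon_normals(ctrl):
    """Discrete normals of a 2D control polygon: for each interior
    control point, the normal is perpendicular to the chord between
    its two neighbors; end points use their single adjacent edge."""
    ctrl = np.asarray(ctrl, dtype=float)
    n = len(ctrl)
    normals = np.zeros_like(ctrl)
    for i in range(n):
        prev_pt = ctrl[max(i - 1, 0)]
        next_pt = ctrl[min(i + 1, n - 1)]
        tx, ty = next_pt - prev_pt       # chord (tangent) direction
        normal = np.array([-ty, tx])     # rotate 90 degrees
        normals[i] = normal / np.linalg.norm(normal)
    return normals

def inflate(ctrl, amount):
    """Deform the curve by offsetting every control point along its
    polygon normal; the Bezier curve is then simply re-generated."""
    return np.asarray(ctrl, dtype=float) + amount * polygon_normals(ctrl)
```

The localized version restricts which neighbors contribute to each normal, so that moving one control point only affects the nearby section of the curve.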

12 graphs in a 4x3 grid, with one row per polygon shape, and one column per type of normal.
Comparison of the different approaches for three different polygons (linear, self-intersecting, and nested). Overall, the new (dotted line) polygons using direction-based and distance-based normals resemble the original (solid line) polygons more closely than the ones using polygon-based and parameter-based normals.

Optimizing Triangulations of Meshes after Deformation

In fluid simulations, the flow can be visualized by inserting particles and following them. One way to do this is to insert a flat surface with a fixed triangulation and study how this surface deforms over time. These surfaces are called time surfaces. Depending on the type of flow, these surfaces distort strongly, which results in a lot of very long and thin triangles. These are not desirable because their normal vectors are not well-defined (a line has no normal vector, and the closer a triangle resembles a line, the worse its normal vector gets). Without well-defined normal vectors, the surfaces can’t be rendered nicely. To remedy this problem, I allowed the particles to move a little bit after each simulation time step to keep a more consistent sampling of the flow space. This is called particle relaxation.
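One simple way to quantify how “line-like” a triangle has become (an illustrative measure, not necessarily the exact criterion from the thesis) is to compare its area to that of an equilateral triangle with the same longest edge:

```python
import numpy as np

def triangle_quality(a, b, c):
    """Quality in [0, 1]: ratio of the triangle's area to the area of
    an equilateral triangle over its longest edge. Equilateral
    triangles score 1; degenerate, line-like slivers approach 0."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    longest = max(np.linalg.norm(b - a),
                  np.linalg.norm(c - b),
                  np.linalg.norm(a - c))
    area = 0.5 * np.linalg.norm(np.cross(b - a, c - a))
    equilateral_area = (np.sqrt(3) / 4) * longest ** 2
    return area / equilateral_area if equilateral_area > 0 else 0.0
```

Triangles whose quality drops below some threshold are exactly the ones whose normals (computed from the same cross product) become numerically unreliable.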

My Contributions
  • Developed a uniform particle relaxation which moves particles equally.
  • Implemented a method that moves a particle by fitting a 3-dimensional surface patch (Coons patch) to its nearby points and moving the particle along this surface, which ensures that the surface does not shrink in this step.
  • Developed a new relaxation criterion that considers the amount of distortion in parameter space and counteracts it (using the metric tensor).
  • Developed an alternative relaxation criterion which considers the local Gauß curvature.
  • Optimized the mesh after each step to preserve good triangle quality (i.e. good normals).
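The Coons patch mentioned above can be sketched in its standard bilinearly blended form, which interpolates four boundary curves (the function name is hypothetical; boundaries are passed as callables that must agree at the corners):

```python
import numpy as np

def coons_patch(c0, c1, d0, d1, u, v):
    """Bilinearly blended Coons patch at parameters (u, v) in [0, 1]^2.
    c0(u)/c1(u) are the bottom/top boundary curves, d0(v)/d1(v) the
    left/right ones; each is a callable returning a 3D point."""
    lu = (1 - v) * c0(u) + v * c1(u)          # blend between bottom and top
    lv = (1 - u) * d0(v) + u * d1(v)          # blend between left and right
    # bilinear patch through the four corners (counted twice above)
    b = ((1 - u) * (1 - v) * c0(0) + u * (1 - v) * c0(1)
         + (1 - u) * v * c1(0) + u * v * c1(1))
    return lu + lv - b
```

Fitting such a patch through a particle’s neighborhood gives a smooth local surface on which the particle can slide without leaving (and thereby shrinking) the time surface.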

Side Quest: Geometric Reconstruction of Tokamak Data

For this internship project, I was tasked with visualizing Tokamak data from ELMFIRE fusion reactor simulation outputs, provided by Aalto University, Finland.

The ELMFIRE simulation produced four slices through the Tokamak’s donut shape per time step, stored as rectangular grids in polar coordinates.

List of points (screenshot of text file) to connectivity (labeled triangulation of rectangular mesh) to 2D slices (hollow disk with rays pointing out and a spiral from outer to inner edge) to 3D volume (sketch of a hollow torus, labeled with angles and radii).
Workflow from simulation outputs to the 3D representation of Tokamak data.
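The core coordinate transforms in this workflow, mapping the rectangular polar grid onto an annulus and positioning each annulus slice on the torus, can be sketched as follows (function names are illustrative; the actual implementation was built with VTK):

```python
import numpy as np

def polar_grid_to_annulus(radii, angles):
    """Map a rectangular (r, theta) simulation grid to 2D points on
    an annulus (a disc with a hole). Returns an array of shape
    (len(radii), len(angles), 2)."""
    r, theta = np.meshgrid(radii, angles, indexing="ij")
    return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=-1)

def annulus_to_torus(points_2d, slice_angle, major_radius):
    """Place one annulus slice in 3D at the given toroidal angle:
    the slice's local x becomes a radial offset from the torus
    center line, its local y becomes height along the torus axis."""
    x2, y2 = points_2d[..., 0], points_2d[..., 1]
    rho = major_radius + x2            # distance from the torus axis
    return np.stack([rho * np.cos(slice_angle),
                     rho * np.sin(slice_angle),
                     y2], axis=-1)
```

Laying the slices out flat or on a cylinder instead only changes this second mapping; the triangulation of the grid stays the same throughout.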

My Contributions
  • Converted Tokamak simulation outputs from rectangles in polar coordinates (the simulation output file format) to discs with holes (annuli) with appropriate triangulations (VTK).
  • Transformed the discs to different layouts (flat, cylinder, torus).
  • Interpolated intermediate slices.
  • Constructed 3D volumetric data from discs and applied transformations to account for quasiballooning.
  • Visualized 2D and 3D Tokamak data in ParaView in several different arrangements.
  • Ported visualization to a VR Powerwall.
  • Presented the results to a panel of German Aerospace and Aalto University staff.

Four different views of the same dataset. 8 hollow slices arranged in a torus, on a grid, and in a cylindrical arrangement (all using a blue and yellow colormap) and a volumetric rendering of the torus with a diagonal cut through it (rendered in a red white blue colormap).
Different views of reconstructed and interpolated Tokamak data.

Translucent volume-rendered data along with solid slices, in rainbow colors.
Reconstructed Tokamak data in a rendering engine for a VR Powerwall.

Side Quest: Dynamic Scheduling for Distributed Computations

This side quest was mostly my B.Sc. student’s main quest (thesis work), which I co-supervised with Prof. Hans Hagen (University of Kaiserslautern) and Markus Flatken (German Aerospace).

My Contributions
  • Mentored my student in understanding the distributed data streaming and rendering pipeline.
  • Discussed advantages and disadvantages of potential underlying data structures with my student.
  • Guided my student in the development of a view-dependent dynamic scheduling system.
  • Supervised the writing of the actual thesis, and contributed to the writing of a follow-up research paper.

A workflow chart showing 5 steps: raw data (segmented slices of cylinders in many colors), user query (R-Tree), selection (a worklist), scheduler (another R-Tree), and distribution to multiple workers.
An initial set of R-Trees is generated once for the raw data. These R-Trees are used to select relevant data for a user query. From this selection, a new set of trees is generated and used by the scheduler to define a processing order.

Selected Publications

B.Sc. thesis: Merging Triangulated Meshes

When designing a car, it’s a priority for the designers that the car doesn’t just look sleek, but that it is safe and comfortable for the people using it. To improve occupant safety, the German automotive industry has funded the development of RAMSIS, a system that uses 3D CAD manikins to simulate vehicle occupants and analyze the ergonomics and safety of vehicle interiors.

Several 3D figures in different poses that overlap (left) and one large merged mass of these figures (right).
Groups of individual manikins (left) should be merged into one large model (right).

My Contributions
  • Developed an algorithm to merge the triangulated 3D meshes of RAMSIS manikins.
  • Tested the algorithm on meshes of manikins which consisted of 2,700 triangles in 52 groups.
  • Performed troubleshooting to determine which geometric properties the original meshes lacked and proposed a pre-processing step to fix the meshes and enable proper merging.
  • Merged the meshes of manikins in multiple different poses to determine required space for safe and comfortable vehicle operation.

Two renderings of a pair of overlapping cubes. The version on the left shows shaded solids. The version on the right shows their wireframes which reveals the correct triangulation after merging.
A merged object created from two intersecting cubes. In the wireframe version, the new triangulation can be seen.