Inside the Process: Building 3D Environment Visualization for Applied Intuition’s Core Simulator

March 29, 2021

In autonomous vehicle and robotics software, visualization is an essential tool for autonomy engineers to understand what their vehicle sees and how their stack reasons about complex environments. Our browser-based product suite requires cutting-edge technologies to visualize simulations in a performant, feature-rich, and accurate manner. With the continuous addition of new features to improve the robustness of our simulation tools, visualization constantly evolves to support more diverse and complex customer use cases. In this blog, we share with you our journey of building out a mission-critical component of our software through two projects. These projects renewed our commitment to visualization as a first-class capability within our tools.

Root-Causing Complex Vehicle Dynamics Problems Through In-Product Visualization

Early last year, we noticed a few instances of odd vehicle dynamics behavior. When a customer’s ego vehicle (i.e., the autonomous vehicle system) turned left on a specific bend of the road, bizarre transformations occurred: the bounding volume and yaw rate would distort to an absurd degree. This caused the simulation to fail, as the ego vehicle reported impossible dynamics conditions it could not handle. Our visualization and plots were the first clue that something was amiss. Acting swiftly, a task force of our engineers set out to root out the problem, methodically vetting the implementation from every angle until they hit upon a surprising conclusion -- the jarring vehicle behavior was working exactly as intended!

The issue actually existed within the map data, which contained a set of erroneous points that spiked multiple feet above the road surface (Figure 1). When we reported our findings back to our customer, they were skeptical: the map data was vendor-vetted and had never failed before. Besides, why would road-indication polygons in the map affect their vehicle dynamics behavior?

Figure 1: A set of erroneous points spiking from the road surface (blue points; image c) that was not visible in the 2D map views (images a and b).

The devil was in the details. Within our core simulator, we supported the option to automatically generate terrain from provided map data by computing a Delaunay triangulation. In this case, the generated terrain along the bend was grossly malformed and spiked through the expected road surface -- we simply could not see it in our 2D view. To surface this class of problems in the future, we would need to visualize both terrain and 3D map data. We had a choice in front of us: either (i) write an internal script that would leverage third-party tools (e.g., MeshLab) to triage future events, or (ii) commit more resources to building out in-product visualization using WebGL. We chose the latter option because we concluded that visualization-based debugging would be useful for any customer debugging terrain down the line. This investigation marked our renewed commitment towards first-class in-product visualization to empower our customers.
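To make the terrain-generation step concrete, here is a minimal sketch of computing a Delaunay triangulation over 3D map points, using the open-source delaunator library (the library choice, the MapPoint shape, and the function itself are illustrative, not our production pipeline). The triangulation operates in 2D over (x, y); each vertex’s z value is carried through unchanged, which is exactly why a handful of mis-elevated map points can spike the generated terrain through the road surface.

```typescript
import Delaunator from 'delaunator';

// Hypothetical shape of a sampled 3D map point; a real map schema is richer.
interface MapPoint {
  x: number;
  y: number;
  z: number;
}

// Triangulate terrain from scattered map points. The triangulation itself is
// 2D over (x, y); each vertex's z is carried through unchanged, so a single
// mis-elevated point pulls every triangle that references it off the road.
function triangulateTerrain(points: MapPoint[]): {
  positions: Float32Array; // flattened xyz triples, one per vertex
  indices: Uint32Array; // vertex indices, three per triangle
} {
  const delaunay = Delaunator.from(
    points,
    (p) => p.x,
    (p) => p.y,
  );

  const positions = new Float32Array(points.length * 3);
  points.forEach((p, i) => {
    positions[i * 3 + 0] = p.x;
    positions[i * 3 + 1] = p.y;
    positions[i * 3 + 2] = p.z;
  });

  return { positions, indices: delaunay.triangles };
}
```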

But how would we implement terrain visualization? At first blush, the problem seemed simple enough. Leveraging three.js, our WebGL library of choice, we could use the built-in suite of Loaders to parse and display our mesh. As a proof of concept, we wrote our Delaunay-triangulated mesh to a well-known format, sent it to our web-based client through our file management microservice, fed the mesh into the relevant loader, and applied the necessary transformations and projections. For trivial meshes (e.g., low-polygon planes), our initial solution was sufficient. For complex meshes with more than tens of thousands of vertices, however, we began to see serious problems: the client would sometimes freeze visibly for as much as 30 seconds. Because Applied Intuition’s tools are built in the browser and JavaScript is single-threaded, all user input was blocked for the entire time it took to load and parse the mesh. If we shipped terrain visualization without performance optimizations, other customers would immediately run into the same problem.
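For a sense of what that proof of concept looked like, here is a hedged sketch of naive, main-thread loading with three.js’s built-in PLYLoader (the file format, URL, and material are stand-ins; our client actually fetches meshes through the file management microservice):

```typescript
import * as THREE from 'three';
import { PLYLoader } from 'three/examples/jsm/loaders/PLYLoader.js';

declare const scene: THREE.Scene; // owned elsewhere by the client

// Naive main-thread loading: fetch, parse, and upload in one go.
const loader = new PLYLoader();
loader.load('/files/terrain_mesh.ply', (geometry: THREE.BufferGeometry) => {
  geometry.computeVertexNormals();

  // Map-frame-to-scene transformations would be applied here.
  const mesh = new THREE.Mesh(
    geometry,
    new THREE.MeshStandardMaterial({ color: 0x808080 }),
  );
  scene.add(mesh);
});
// For large meshes, the synchronous parse inside the loader blocks the main
// thread, freezing all user input until it completes.
```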

To address loading performance, we turned to Web Workers and Transferable objects to load and parse the mesh in a background thread. Web Workers allowed us to distribute work across multiple CPU cores instead of computing serially on one (Figure 2). Transferables were essential to avoid copying the BufferGeometry data between the two contexts. This Web Worker-based architecture let us deliver seamless results to our customers at a much larger scale: compared to the tens of thousands of triangles the initial implementation could handle, we were now comfortably accepting multiple millions. Both user-specified terrain and our own in-house world generation produce terrain with tens of millions of triangles to render (Figure 3). To this day, our customers use in-product terrain visualization to quickly triage data-related issues, saving countless engineering hours.
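The sketch below shows the general shape of this architecture rather than our exact implementation: a worker parses raw bytes into typed arrays, and the transfer list hands buffer ownership across threads without copying (parseMeshPositions and the surrounding scene wiring are assumed placeholders):

```typescript
import * as THREE from 'three';

// --- mesh.worker.ts: runs in a background thread -------------------------
// Stand-in for real format-specific decoding of the mesh bytes.
declare function parseMeshPositions(data: ArrayBuffer): Float32Array;

self.onmessage = (event: MessageEvent<ArrayBuffer>) => {
  const positions = parseMeshPositions(event.data);
  // Listing positions.buffer in the transfer list moves ownership to the
  // main thread instead of structured-cloning megabytes of vertex data.
  (self as unknown as Worker).postMessage(positions, [positions.buffer]);
};

// --- main thread ----------------------------------------------------------
declare const scene: THREE.Scene; // owned elsewhere by the client
declare const rawMeshBytes: ArrayBuffer; // fetched from the file service

const worker = new Worker(new URL('./mesh.worker.ts', import.meta.url));

worker.onmessage = (event: MessageEvent<Float32Array>) => {
  // The typed array arrived via transfer, so no copy was made on receipt.
  const geometry = new THREE.BufferGeometry();
  geometry.setAttribute('position', new THREE.BufferAttribute(event.data, 3));
  geometry.computeVertexNormals();
  scene.add(new THREE.Mesh(geometry, new THREE.MeshStandardMaterial()));
};

// Transfer the raw file bytes into the worker the same way.
worker.postMessage(rawMeshBytes, [rawMeshBytes]);
```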

Figure 2: How Web Workers offload computation from the main thread to keep our tools responsive to user interactions at all times (Credit: LaunchSchool).
Figure 3: Terrain visualization of a procedurally generated overpass section (~2 million vertices).

The World Is Not Flat: Multi-Level Scenarios and Environments

With terrain visualization out in the wild by mid-April, our customers became increasingly interested in visualizing the world around their ego vehicle. Terrain is an inherently 3D concept, yet we typically projected all other entities down onto a single plane. The visualized world was flat -- a holdover from an earlier era, when autonomous vehicle programs’ operational design domains (ODDs) were similarly limited. As our customers’ autonomy stacks matured, multi-level environments such as complicated highway systems in Japan (Figure 4) necessitated full 3D visualization, along with point-of-interest validation against satellite imagery.

Figure 4: An example of multi-level road interactions in Japan.

Visualizing the world in 3D would require an overhaul of our entire coordinate system before we could add objects into our Scene. But why was the implementation complicated? Didn’t we just need to remove one projection operation?

In order to understand the coordinate complexity, we need to examine how our simulation data is expressed. Autonomous vehicle data typically operates in three primary frames of reference:

  • Vehicle: all poses are relative to the rear axle of a given vehicle (typically an ego)
  • Sensor: all poses are relative to the origin of a sensor (e.g. camera, lidar)
  • Map: all poses are relative to the origin of a map

It is the job of the visualizer data pipeline to convert all data out of these three frames into a single local frame that is streamed directly to the client.
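As a rough illustration (the types and names here are hypothetical, not our actual pipeline), converting a pose from one of these frames into the client’s local frame amounts to composing it with a frame-to-local transform:

```typescript
import * as THREE from 'three';

type Frame = 'vehicle' | 'sensor' | 'map';

interface Pose {
  frame: Frame;
  position: THREE.Vector3;
  orientation: THREE.Quaternion;
}

// Convert a pose into the client's local frame by composing it with the
// transform that maps its source frame into the local frame.
function toLocalFrame(
  pose: Pose,
  frameToLocal: ReadonlyMap<Frame, THREE.Matrix4>,
): THREE.Matrix4 {
  const poseInFrame = new THREE.Matrix4().compose(
    pose.position,
    pose.orientation,
    new THREE.Vector3(1, 1, 1), // unit scale
  );
  // localFromFrame * poseInFrame = poseInLocal
  return frameToLocal.get(pose.frame)!.clone().multiply(poseInFrame);
}
```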

Yet as we fleshed out designs for 3D visualization, it became apparent that the existing systems were inconsistent in their implementation of these coordinate transformations. Instead of passing true 3D data to the coordinate manipulation system to preprocess, different systems would truncate or otherwise mutate the z-axis in an ad hoc manner prior to converting it to the client’s local frame. We needed to refactor numerous one-off implementations across the different systems and unify our approach.

We created a singleton called “Coordinates,” responsible for selecting the appropriate arguments based on current state (user viewing options, map, etc.) and making the corresponding calls to pure functions. This singleton was created on world initialization and acted as the single gate through which all transformations had to pass. It applied operations through Array.map in a stateless, parametric way, and the newly shared code path could be easily tested in one place via both unit and integration tests. By September, multiple new work streams were possible: multi-level scenario editing, off-road routing, vehicle dynamics on uneven surfaces, and occluded perception modeling, just to name a few (Figure 5).
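A minimal sketch of this pattern, with hypothetical names, might look like the following; the key property is that the singleton owns no math of its own, it only gathers state and delegates to pure, independently testable functions:

```typescript
import * as THREE from 'three';

// Pure, stateless transform: trivially unit-testable in isolation.
function applyTransform(
  points: readonly THREE.Vector3[],
  transform: THREE.Matrix4,
): THREE.Vector3[] {
  return points.map((p) => p.clone().applyMatrix4(transform));
}

// The single gate through which all coordinate transformations pass.
class Coordinates {
  private static instance: Coordinates;

  private constructor(private readonly mapToLocal: THREE.Matrix4) {}

  // Created once on world initialization.
  static init(mapToLocal: THREE.Matrix4): void {
    Coordinates.instance = new Coordinates(mapToLocal);
  }

  static get(): Coordinates {
    return Coordinates.instance;
  }

  // The singleton only gathers arguments from current state (viewing
  // options, active map, ...) and delegates the math to pure functions.
  mapToLocalFrame(points: readonly THREE.Vector3[]): THREE.Vector3[] {
    return applyTransform(points, this.mapToLocal);
  }
}
```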

3D visualization made both in-house and external engineers aware of their respective planar assumptions. The constant reminder of a third dimension pushed our systems to a new level and, more importantly, kept 3D modes of operation top of mind -- not some edge case to revisit “another day.”

Figure 5: Simulating an autonomous vehicle’s performance on 3D highway interactions.

What’s Next for Frontend Engineering at Applied Intuition?

Renewing Applied Intuition’s commitment to first-class visualization is an ongoing and rejuvenating process, and we hope you enjoyed our deep dive into a sliver of the work that goes on behind the scenes. When we look back at our work over the last year, it’s incredible to see the differences between versions. In addition to the terrain and 3D work, we implemented lidar and radar visualization, supported satellite imagery overlay, and migrated fully to TypeScript, all while making significant performance gains across the entire frontend (Figure 6a-d). Each individual team member drives high impact, and the multiplier effect of working with such a talented, high-output team is empowering.

Figure 6a: Extent of visualization in Applied Intuition’s core simulator circa Jan 2020.
Figure 6b: Lidar visualization in Applied Intuition’s core simulator (March 2021).
Figure 6c: Satellite imagery in Applied Intuition’s core simulator (March 2021).
Figure 6d: A visual mode in Applied Intuition’s core simulator (July 2021).

There is still a lot of work left to expose complex features of our simulation tools in an intuitive, user-friendly manner. Some areas to improve include application performance, full layout customization, and collaborative editing. If you care about solving these complex frontend challenges, we want to talk to you! The best way to reach us is by applying for a full-stack or frontend engineering role :) We are actively hiring candidates at all levels of work experience!