“We use the most powerful and flexible simulation platforms in the self-driving industry.”
Mike Carter
Founding Engineer
Customer challenge
Testing ADAS and AD perception and localization systems in the real world is time-consuming, and those tests are difficult to execute and repeat at scale.
Real-world operations don’t capture all situations
A single situation can branch into many edge cases and variations
Certain events are dangerous to test in the real world
Collecting and labeling real datasets is costly
Applied Intuition’s solution
Applied Intuition’s solution allows teams to test ADAS and AD perception and localization systems at scale, identify edge cases, and ensure high-quality code.
Re-simulate failures found in the field
Generate synthetic datasets to train perception models using both classical and generative sensor simulation
Utilize sensor simulation to iterate rapidly
Develop high-performing perception and localization systems
01
Identify issues found in field testing
Search real-world logs using both structured queries and natural language to identify segments that challenged your ADAS or AD stack, including issues like driver interventions.
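As an illustration of the kind of structured log query described above, the sketch below filters tagged drive-log segments for challenging events. All names here (`LogSegment`, the tag strings) are hypothetical placeholders, not Applied Intuition's actual query interface.

```python
from dataclasses import dataclass

@dataclass
class LogSegment:
    """A tagged slice of a real-world drive log (hypothetical schema)."""
    drive_id: str
    start_s: float
    end_s: float
    tags: frozenset  # e.g. {"driver_intervention", "rain"}

def find_challenging_segments(segments, required_tags):
    """Return segments carrying every tag in required_tags."""
    required = frozenset(required_tags)
    return [s for s in segments if required <= s.tags]

logs = [
    LogSegment("drive_001", 12.0, 18.5, frozenset({"driver_intervention", "rain"})),
    LogSegment("drive_001", 40.0, 44.0, frozenset({"hard_brake"})),
    LogSegment("drive_002", 7.5, 11.0, frozenset({"driver_intervention"})),
]

hits = find_challenging_segments(logs, ["driver_intervention"])
print([s.drive_id for s in hits])  # ['drive_001', 'drive_002']
```

A natural-language search would resolve a phrase like "times the driver took over in the rain" into an equivalent tag filter before running the same kind of query.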
02
Address issues with synthetic data and sensor simulation
Modify perception modules or re-train machine learning (ML) models with synthetic data generated via classical and generative sensor simulation. Run sensor simulations iteratively to monitor progress across purpose-built test cases, and introduce variations in behaviors, weather, and lighting to ensure robustness.
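The "introduce variations" step can be pictured as a parameter sweep over a base scenario. The axes below (weather, lighting, lead-vehicle speed) are illustrative placeholders, not the product's actual parameter names.

```python
import itertools

# Illustrative variation axes; real scenario parameters would come from the tool.
weather = ["clear", "rain", "fog"]
lighting = ["day", "dusk", "night"]
lead_speed_mps = [5.0, 10.0, 15.0]

# Cross-product of the axes yields one test-case variant per combination.
variants = [
    {"weather": w, "lighting": l, "lead_speed_mps": v}
    for w, l, v in itertools.product(weather, lighting, lead_speed_mps)
]

print(len(variants))  # 27 variants from one base scenario
```

Each variant would then be run through sensor simulation against the updated stack to check that the fix holds across conditions.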
03
Prevent regressions before deployment
Execute the full test suite before merging changes or deploying a new stack version to the field. Generate new test cases with natural language and parameter-based approaches to scale coverage across your operational design domain (ODD).
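A pre-merge regression gate of the kind described can be sketched as follows. `run_test` is a stand-in for whatever executes one simulated test case against a candidate stack; the version string and case names are invented for illustration.

```python
def regression_gate(stack_version, test_cases, run_test):
    """Run every test case against stack_version; return the failing cases.

    run_test(stack_version, case) -> bool is a hypothetical callable that
    executes one simulated test case and reports pass/fail.
    """
    return [case for case in test_cases if not run_test(stack_version, case)]

# Toy stand-in: the stack "passes" a case unless it is a known gap.
known_gaps = {"fog_cut_in"}
run_test = lambda version, case: case not in known_gaps

cases = ["clear_cut_in", "fog_cut_in", "night_pedestrian"]
print(regression_gate("v2.4.0-rc1", cases, run_test))  # ['fog_cut_in']
```

In practice the gate would block the merge or deployment whenever the returned failure list is non-empty.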