How Applied Intuition’s SDS Uses ADP to Supercharge Active Safety Development

March 5, 2026

Applied Intuition’s active safety and SDS teams face the same pressures as our customers: growing system complexity, long‑tail edge cases, and tougher regulations under compressed timelines. To keep up, we rely on our own tooling.

This blog post explains why traditional active safety development is slow and fragmented, how Applied Development Platform (ADP) changes that, and how our Self-Driving System (SDS) team uses the platform—so OEMs and Tier 1s can do the same.

Why Is Active Safety Development So Slow and Fragmented?

Euro NCAP 2026 expands crash‑avoidance testing with higher speeds, night‑time conditions, and more complex junction and vulnerable road user scenarios. Physical proving grounds remain critical, but covering all of this with only on‑road testing is impractical, which makes a unified simulation and data platform essential.

Fragmented workflows slow validation

Many Autonomous Emergency Braking (AEB) and Lane Keeping Assist (LKA) programs rely on a patchwork of tools and hand‑offs.

  • Data arrives from tracks, proving grounds, and fleets with inconsistent ingestion paths and pipelines
  • KPI computation, analysis, and triage are spread across multiple tools owned by different teams
  • Feedback loops from testing back to function owners can take weeks

The impact is measurable.

In one urban people‑mover program, a physical test campaign showed low‑speed AEB reliably avoiding pedestrians. However, a follow‑up simulation with a very slow pedestrian (around 0.5 m/s, like a person with a walker) revealed the shuttle didn’t brake at all. Root‑cause analysis—spread across teams and tools and run on a single engineer’s workstation—took weeks and ultimately traced to perception classifying very slow pedestrians as static obstacles. These multi‑week validation loops are common when programs depend on fragmented, locally run tooling.

KPI gaps hide real performance

Most teams want KPIs to guide development, but up‑to‑date metrics over large datasets are rare.

  • Different groups compute metrics using inconsistent tools and methodologies
  • Results arrive as static reports after major milestones
  • Analysis often starts only after costly test campaigns end

The consequence is delayed insight and inefficient data collection.

For example, winter tests in Sweden or hot‑weather campaigns in Australia are planned months ahead and run for weeks, but analysis may not begin until the campaign is over. Critical findings that could have focused data collection arrive too late, forcing follow‑up trips and delaying programs by months.

For AEB, the rate of false‑positive activations, measured over thousands of driving hours, is a key KPI. As targets approach zero, every event must be triaged, and each new stack release must be validated against a representative Operational Design Domain (ODD) dataset. Doing that across scattered tools quickly becomes a major operational burden.
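To make the KPI concrete: the rate itself is a simple normalization, even though producing its inputs at fleet scale is the hard part. Here is a minimal sketch with illustrative event structures (not ADP's API):

```python
from dataclasses import dataclass

@dataclass
class Activation:
    """One AEB activation surfaced during re-simulation or on-road."""
    log_id: str
    confirmed_false_positive: bool  # set during triage

def false_positive_rate(activations, total_drive_hours):
    """False-positive AEB activations per 1,000 driving hours."""
    fp = sum(1 for a in activations if a.confirmed_false_positive)
    return fp / total_drive_hours * 1_000.0

# Example: 3 confirmed false positives over 12,500 fleet hours
rate = false_positive_rate(
    [Activation("log-001", True), Activation("log-002", False),
     Activation("log-003", True), Activation("log-004", True)],
    total_drive_hours=12_500.0,
)
print(f"{rate:.2f} false positives per 1,000 h")  # -> 0.24
```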

Unstructured triage wastes time

Once an issue is detected, teams often triage like this:

  • Jump between log viewers, custom scripts, internal tools, and spreadsheets
  • Share screenshots and log snippets via email or chat
  • Re‑do analysis when ownership changes or context is lost

The result is slow, error‑prone issue resolution and poor knowledge continuity.

AEB false‑positive monitoring via open‑loop re‑simulation is a good example: Teams must stitch together drive collection, ingestion, large‑scale re‑simulation, KPI computation, triage, and regression validation, often across proprietary, legacy systems. Instead of focusing on core stack development and approvals, engineers end up operating complex infrastructure.

Road testing misses the long tail

Road and track testing struggle with rare but safety‑critical scenarios:

  • Actors that only appear in specific geographies or seasons
  • Adverse weather or low‑visibility conditions
  • Complex intersections or unusual infrastructure

Validating detection of large animals such as moose is difficult purely on‑road: encounters are rare, behavior varies, and conditions change. Simulation can address these gaps by systematically varying animal behavior, weather, and traffic—especially useful if real‑world events and near‑misses can be turned into reusable, parameterized scenarios.
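To sketch what "parameterized" means here, one real‑world encounter can fan out into dozens of variations. The scenario fields and value ranges below are illustrative, not ADP's scenario format:

```python
import itertools
from dataclasses import dataclass

@dataclass(frozen=True)
class MooseCrossingScenario:
    """One concrete variation of a moose-crossing encounter."""
    ego_speed_kph: float
    animal_speed_mps: float
    time_of_day: str      # drives lighting
    road_condition: str   # drives available friction

# Sweep the space instead of waiting for rare real encounters.
variations = [
    MooseCrossingScenario(v, a, t, r)
    for v, a, t, r in itertools.product(
        [60.0, 80.0, 100.0],        # ego speed, kph
        [0.5, 1.5, 3.0],            # animal speed, m/s
        ["day", "dusk", "night"],
        ["dry", "wet", "snow"],
    )
]
print(f"{len(variations)} variations from one template")  # -> 81
```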

How Does ADP Change the Active Safety Workflow?

ADP connects data, KPIs, triage, and simulation in a single platform, so active safety teams can focus on improving the stack instead of plumbing tools together.

One workflow from KPIs to signals

ADP links high‑level performance metrics and low‑level sensor data in one environment:

  1. Start from KPIs across track, road, and simulation—for example, AEB false‑positive rates, LKA lane‑keeping quality, or NCAP 2026 effectiveness scores
  2. Filter by road type, environment, vehicle, or software version to find where performance degrades
  3. Drill down from KPI outliers to the exact sequences behind them
  4. Inspect sensor signals, object lists, planner outputs, and actuation commands all within the same web UI

Internally, we use ADP to monitor NCAP 2026 AEB effectiveness across scenarios and stack versions, and to track AEB false‑positive rates via open‑loop re‑simulation on large real‑world datasets. Running these workflows on one platform keeps KPIs and underlying data views aligned and comparable over time.
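To illustrate steps 2 and 3, slicing per‑sequence KPI results by those dimensions is conceptually a small aggregation. The data layout below is hypothetical, not an ADP export:

```python
import pandas as pd

# Hypothetical per-sequence KPI export: one row per sequence, with the
# dimensions used for filtering (step 2 of the workflow above).
results = pd.DataFrame([
    {"seq": "seq-0012", "road": "urban",   "sw": "v2.3", "fp_events": 0, "hours": 4.1},
    {"seq": "seq-0043", "road": "highway", "sw": "v2.3", "fp_events": 2, "hours": 3.7},
    {"seq": "seq-0051", "road": "highway", "sw": "v2.4", "fp_events": 0, "hours": 5.0},
    {"seq": "seq-0077", "road": "urban",   "sw": "v2.4", "fp_events": 1, "hours": 2.2},
])

# Step 2: slice the KPI by road type and software version ...
agg = results.groupby(["road", "sw"])[["fp_events", "hours"]].sum()
agg["fp_per_1000h"] = agg["fp_events"] / agg["hours"] * 1_000

# ... then step 3: pull the exact sequences behind the worst slice.
worst_slice = agg["fp_per_1000h"].idxmax()          # ('highway', 'v2.3')
mask = (results["road"] == worst_slice[0]) & (results["sw"] == worst_slice[1])
print(results.loc[mask, "seq"].tolist())            # -> ['seq-0043']
```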

Built‑in structure for triage

ADP encodes a structured triage flow instead of ad‑hoc investigations:

  • Detect a KPI anomaly or test failure
  • Identify and group the relevant datasets and sequences
  • Inspect and annotate individual sequences and frames
  • Define issues with clear context (data, conditions, stack version)
  • Track ownership and status through resolution

This reduces duplicate work, makes triage more predictable, and builds a library of relevant scenes. For AEB false‑positive monitoring, ADP handles large collections of recorded traffic logs, re‑simulates the AEB stack in log‑replay mode, and automatically surfaces each activation with full context (scenario metadata, sensor data, and KPI contribution) for streamlined triage.
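The kind of issue record this flow implies can be sketched as follows (the fields are illustrative, not ADP's actual schema):

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    OPEN = "open"
    IN_ANALYSIS = "in_analysis"
    RESOLVED = "resolved"

@dataclass
class TriageIssue:
    """An issue carrying the context needed to reproduce and resolve it."""
    title: str
    sequence_ids: list[str]     # the grouped sequences behind the anomaly
    stack_version: str
    conditions: dict            # e.g. {"weather": "rain", "road": "urban"}
    owner: str
    status: Status = Status.OPEN
    notes: list[str] = field(default_factory=list)

issue = TriageIssue(
    title="AEB false positive near overhead gantry",
    sequence_ids=["seq-0043", "seq-0112"],
    stack_version="v2.3",
    conditions={"road": "highway", "suspected_cause": "radar ghost"},
    owner="perception-team",
)
issue.notes.append("Reproduces in open-loop re-simulation; radar-only trigger.")
```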

Simulation tightly linked to real data

ADP ties simulation directly to real‑world findings:

  1. Detect an interesting event or near‑miss in fleet or track data
  2. Extract the scenario, including actors, trajectories, and environment
  3. Turn it into a parameterized scenario or family of scenarios
  4. Run variations in simulation to probe system behavior and test improvements
  5. Validate promising fixes in simulation, then confirm on‑road where appropriate

Because this loop runs on the same platform as KPI monitoring and triage, simulation isn’t a separate, one‑off exercise. A moose detection near‑miss in winter conditions, for example, can become a scenario template replayed at different speeds, lighting, and weather before the next physical test campaign.
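As a toy version of step 4, a sweep over ego speed and road condition can surface the variations that need attention. The stopping‑distance model below is a deliberately crude stand‑in for a real re‑simulation, with made‑up numbers:

```python
import itertools

# Crude stand-in for re-simulating one variation: can a simple braking
# model stop within the detection range? Friction values are illustrative.
FRICTION = {"dry": 0.8, "wet": 0.5, "snow": 0.25}

def stops_in_time(ego_speed_kph, road, detection_range_m=70.0):
    v = ego_speed_kph / 3.6                  # m/s
    decel = FRICTION[road] * 9.81            # max braking, m/s^2
    braking = v * v / (2.0 * decel)          # braking distance, m
    reaction = v * 0.5                       # 0.5 s system latency, m
    return reaction + braking < detection_range_m

variations = list(itertools.product([60.0, 80.0, 100.0], FRICTION))
failures = [(v, r) for v, r in variations if not stops_in_time(v, r)]
print(f"{len(failures)} of {len(variations)} variations fail")  # -> 3 of 9
```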

Validating AEB braking behavior through closed‑loop re‑simulation

How Does Applied Intuition’s SDS Team Use ADP?

Applied Intuition’s SDS team uses ADP daily to develop its own active safety features, acting as an internal customer of the platform.

Building SDS on production tooling

We run SDS on the same production ADP platform we ship to customers—no separate internal prototype stack. In practice we use two main setups:

  • A dedicated ADP server to manage large data collections and run large‑scale simulations over real and synthetic scenarios
  • Local ADP instances on developers’ machines for fast, on‑table testing of new features and fixes

Because both share the same UI and workflows, engineers move seamlessly from quick local experiments to large‑scale evaluations, which supports high‑frequency development cycles. The same capabilities our engineers depend on—data ingest, KPI computation, triage, and simulation—are the ones OEMs and Tier 1s receive, which is unusual in a space where many tools are built primarily for demos or one‑off projects.

AEB as an end‑to‑end example

Our AEB development workflow shows how this works:

  • Local iteration: Engineers design and test new AEB features locally using a subset of Euro NCAP and FMVSS scenarios to catch regressions early
  • Large‑scale simulation: Once stable, changes are evaluated on the ADP server over large synthetic and real‑world datasets to estimate AEB KPIs
  • False‑positive rate estimation: ADP re‑simulates the AEB stack on large sets of recorded traffic logs and aggregates false‑positive activations into a realistic false‑positive rate
  • NCAP and regulatory scores: AEB is re‑run on Euro NCAP and FMVSS test suites in ADP, and scoreboards show NCAP points, star ratings, and regulatory pass/fail outcomes for the current system
  • Structured triage: When false positives occur or specific scenarios underperform, developers inspect individual simulation logs in ADP—internal signals, object tracks, planner decisions, and actuator commands—to drive the next iteration

Because this loop—from local test to large‑scale re‑simulation, KPI review, and triage—runs primarily through ADP, SDS engineers get fast, consistent feedback on every meaningful change. It also means every improvement driven by SDS work (for example, better collaboration or more efficient NCAP workflows) is immediately available to customer teams using the same platform.
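As a sketch of the local‑iteration gate in that loop (placeholder scenario names and a stubbed replay hook, not ADP's interface):

```python
# Placeholder scenario names and replay hook -- not ADP's interface or
# official Euro NCAP / FMVSS test identifiers.
SMOKE_SUITE = [
    "aeb_car_to_car_rear_stationary_50kph",
    "aeb_pedestrian_crossing_night",
    "aeb_lead_vehicle_decelerating",
]

def run_scenario(name: str) -> bool:
    """Stub: replay one scenario against the local build, return pass/fail."""
    return True

def local_gate(suite):
    failures = [s for s in suite if not run_scenario(s)]
    assert not failures, f"Fix regressions before large-scale eval: {failures}"

local_gate(SMOKE_SUITE)  # fail fast locally, then promote to the ADP server
```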

Work With Us on Active Safety

If you’re building active safety or ADAS systems and recognize these challenges—slow validation loops, KPI gaps, triage bottlenecks, and long‑tail coverage—we’d be happy to show how ADP can help. You can see how our SDS team uses the platform and explore how similar workflows could fit into your development and validation process.

And if you’re an engineer excited about tooling‑first workflows and large‑scale active safety development, we’re always interested in talking to people who want to push the state of the art together.