Virtual testing: The invisible accelerator

January 17, 2020

By the Aurora Team

On-road testing gets the spotlight in the industry, but there’s a lot of meaningful work happening behind the scenes at Aurora to deliver self-driving technology safely, quickly, and broadly. One major accelerator is our Virtual Testing Suite, which allows us to conduct millions of valuable off-road tests per day.

Since Aurora’s inception, we’ve spoken at length about the many ways we’re fueling the rockets for success. This has included our design for accurate, scalable HD maps, our approach to creating robust hardware from first principles, and our acquisition of Blackmore FMCW lidar. Over the past year, we’ve made tremendous progress in every facet of the Aurora Driver, allowing it to master increasingly complex maneuvers in equally complex environments.

The Aurora Driver can now navigate busy intersections seamlessly. Here, the Driver inches forward and waits until it can safely nudge around an SUV that isn’t able to clear the intersection by the time the light turns green. Soon after, the Driver smoothly handles a surprise jaywalker.

Our diverse Virtual Testing Suite is a major accelerator that enables us to make rapid development progress.

Virtual testing refers to any time our software runs offline in response to synthetic or real historical data, as opposed to operating in real-time on the road. As we’ve said since our early days, a strong virtual testing program is a marker of a mature self-driving effort, and we take a multifaceted approach to thoroughly assess the performance of each element of our system before it reaches our vehicles. This, combined with thoughtful on-road testing, will allow us to deliver the Aurora Driver safely and quickly at scale.

Why we test off-road

Decades of collective experience writing production-ready software have taught us that mature organizations test early and often. Virtual testing is the pillar of our development process because it allows us to:

  • Test more thoroughly. It’s practically impossible to experience every possible scenario in real life. Virtual tests allow us to test different permutations of the same scene in a controlled way. It’s also not feasible to run thousands of tests on the road to evaluate a single change to the codebase, but we can do that off-road.

  • Find and address issues early. On-road tests examine how all parts of our system work together, but virtual tests allow us to determine how well tiny parts of the system are working individually. This makes it much easier to identify the root cause(s) of problems early in development.

  • Fine-tune new capabilities quickly. Fast, reliable feedback from virtual tests enables rapid iteration and improvement.

  • Develop objective measures of progress. We run the same tests with the same parameters each time we make a change to the codebase. When a new version passes more tests than its predecessors, we can say it’s performing better.

  • Make on-road testing safer and more efficient. When our software passes thousands of off-road tests, we can be confident that our Driver will perform well on-road. Virtual testing also allows us to streamline our real-world tests to focus on the areas where we need additional feedback.

Put simply, virtual testing has been and will continue to be critical to successfully developing and deploying the Aurora Driver.

Aurora’s Virtual Testing Suite

We’ve invested—and continue to invest—in the infrastructure that enables our multifaceted Virtual Testing Suite to run millions of off-road tests per day.

A robust virtual testing program requires a complementary suite of tests, beyond just simulation, that assess how software works at every level. Thus, our Virtual Testing Suite includes a mix of codebase tests, perception tests, manual driving evaluations, and simulations.

Codebase unit and integration tests

We don’t leave testing to the end of the development process. As code is written, we write both unit tests (e.g., seeing if a method to calculate velocity gives the right answer) and integration tests (e.g., seeing whether that same method works well with other parts of the system). New work must pass all relevant tests before we can merge it with the larger codebase, allowing us to identify and fix issues early.

Engineers at Aurora must write tests for each new piece of code.
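
To make that concrete, here’s a minimal sketch of the kind of unit test described above, written in Python. The compute_velocity helper and its signature are hypothetical stand-ins for illustration, not Aurora’s actual code.

```python
# A minimal sketch of a unit test for a velocity calculation.
# compute_velocity() is a hypothetical helper, not Aurora's actual API.
import unittest


def compute_velocity(distance_m: float, elapsed_s: float) -> float:
    """Return average velocity in meters per second."""
    if elapsed_s <= 0:
        raise ValueError("elapsed time must be positive")
    return distance_m / elapsed_s


class ComputeVelocityTest(unittest.TestCase):
    def test_returns_expected_value(self):
        # 30 meters covered in 3 seconds should be 10 m/s.
        self.assertAlmostEqual(compute_velocity(30.0, 3.0), 10.0)

    def test_rejects_non_positive_elapsed_time(self):
        # Guard against division by zero and negative durations.
        with self.assertRaises(ValueError):
            compute_velocity(30.0, 0.0)


if __name__ == "__main__":
    unittest.main()
```

An integration test would then exercise the same calculation alongside the components that consume it, rather than in isolation.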

Perception tests

We use log data to create a series of specialized perception tests. For example, say a bicyclist passes very close to one of our vehicles. Specialists would review that footage and then label things like object category (bicyclist), velocity (3 mph), etc. We would then use that ground truth to test how well new versions of our perception software can determine what really happened on the road.

Some tests assess how well the system performs across the board. For example, “How many pedestrians does it correctly identify?” The rest evaluate specific capabilities like tracking close objects. For example, “Does the perception system see this bicyclist right next to our vehicle?” Perception tests are currently made from real-world data, but we’re developing highly realistic sensor simulations so that we can also generate tests for uncommon and high-risk scenarios.

This video shows one of our specialists reviewing log data and labeling a parked vehicle.

In this complex scene, each actor (vehicles, pedestrians, bicyclists) has been carefully annotated by human experts to provide extensive ground truth for our perception system.
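
As an illustration of how a labeled scene can drive an automated check, here’s a simplified sketch in Python. The data structures, matching rule, and tolerance are assumptions made for this example, not Aurora’s internal formats or thresholds.

```python
# A simplified sketch of scoring perception output against human labels.
from dataclasses import dataclass


@dataclass
class TrackedObject:
    category: str   # e.g., "bicyclist", as labeled by a specialist
    x_m: float      # position relative to our vehicle, in meters
    y_m: float


def matches(label: TrackedObject, detection: TrackedObject,
            tolerance_m: float = 1.0) -> bool:
    """A detection counts if its category agrees and it is nearby."""
    return (label.category == detection.category
            and abs(label.x_m - detection.x_m) <= tolerance_m
            and abs(label.y_m - detection.y_m) <= tolerance_m)


def recall(labels: list[TrackedObject],
           detections: list[TrackedObject]) -> float:
    """Fraction of ground-truth objects the perception system found."""
    if not labels:
        return 1.0
    found = sum(any(matches(lab, det) for det in detections)
                for lab in labels)
    return found / len(labels)
```

A broad test might assert that recall over thousands of labeled pedestrians stays above a threshold, while a targeted test might assert that one specific close-range bicyclist is matched.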

Manual driving evaluations

We want the Aurora Driver’s movements to be natural and predictable for other drivers and pedestrians in the world. That’s why we test whether our motion planning software can accurately forecast what a trained, expert driver would do in complex situations. For example, would the Aurora Driver start making a right turn sooner than our expert vehicle operators actually do? To do this, we collect data while our vehicle operators drive and then assess how the Aurora Driver’s plan varies from the vehicle operator’s actual trajectory. The ability to test future versions of our software against these expert demonstrations means manual miles are sometimes even more valuable than the miles we drive autonomously.

Here we compare a vehicle operator’s trajectory (blue) to the intended trajectory of the Aurora Driver (green) during a right turn.
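
One simple way to quantify that comparison is to measure how far the planned trajectory strays from the operator’s. Here’s a sketch assuming both trajectories are sampled at the same timestamps; the metric (maximum pointwise distance) is just one reasonable choice, not necessarily the one we use.

```python
# A sketch of comparing a planned trajectory against the path an
# expert operator actually drove. Both are (x, y) points in meters,
# assumed to be sampled at the same timestamps.
import math


def max_deviation_m(planned: list[tuple[float, float]],
                    driven: list[tuple[float, float]]) -> float:
    """Largest pointwise gap, in meters, between the two paths."""
    assert planned and len(planned) == len(driven), \
        "trajectories must be non-empty and aligned in time"
    return max(math.dist(p, d) for p, d in zip(planned, driven))


# Example: flag a planner change for review if it strays more than
# half a meter from the expert's right turn (threshold is illustrative).
# assert max_deviation_m(planned_turn, driven_turn) <= 0.5
```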

Simulations

Simulations are virtual models of the world where we can change the parameters to test how the Aurora Driver would react in many permutations of the same situation. For example, we might vary the number of pedestrians at a virtual crosswalk. We routinely flag interesting events we see on the road and use them as inspiration for new simulations. We’ve even built custom tools to automatically extract information from log data (e.g., how fast a pedestrian is walking) and plug it into our simulation models.

We used log data from real-world events like this unprotected left turn (top) to inspire new simulations (example on bottom).
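
To show what varying the parameters of one situation might look like, here’s a sketch that enumerates permutations of a crosswalk scene. The scenario fields, parameter grid, and run_simulation entry point are invented for illustration; a real simulator would consume each generated scenario.

```python
# A sketch of sweeping scenario parameters to generate permutations
# of one simulated situation. All fields and values are illustrative.
import itertools
from dataclasses import dataclass


@dataclass(frozen=True)
class CrosswalkScenario:
    num_pedestrians: int
    pedestrian_speed_mps: float   # could be extracted from log data
    vehicle_speed_mps: float


def generate_permutations():
    """Yield every combination of the parameter grid below."""
    pedestrian_counts = [1, 2, 5, 10]
    pedestrian_speeds_mps = [0.8, 1.4, 2.0]   # slow walk to jog
    vehicle_speeds_mps = [5.0, 10.0, 15.0]
    for count, walk, drive in itertools.product(
            pedestrian_counts, pedestrian_speeds_mps, vehicle_speeds_mps):
        yield CrosswalkScenario(count, walk, drive)


# Each scenario would then be handed to the simulator, e.g.:
# for scenario in generate_permutations():
#     result = run_simulation(scenario)   # hypothetical entry point
```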

Rapid progress safely at scale

With decades of experience developing self-driving technology, our team has a deep understanding of the requirements to develop, test, and ultimately deploy a robust Aurora Driver at scale. That’s why we continue to prioritize foundational work, like growing our Virtual Testing Suite, that we know will pay massive dividends down the road.

Our growing Virtual Testing Suite allows us to make progress safely at a rate that is not possible with real-world testing alone, and it accelerates our ability to deliver the Aurora Driver safely, quickly, and broadly. At the same time, we get the most out of real-world miles when we can use them to train and vet future versions of the Aurora Driver. Over the past few years, we’ve built tools and pipelines that help us identify, review, categorize, and convert the majority of disengagement events and interesting on-road scenarios into virtual tests within a matter of days. We are committed to spending our on-road miles at the boundary of our capabilities, so we analyze all interesting on-road events and novel disengagements and rapidly incorporate those findings into our models. We look forward to sharing even more soon.

We’re looking for passionate individuals from all disciplines to help us keep fueling the rockets. Visit our Careers page to view openings and learn more about what it’s like to work at Aurora.
