Aurora’s Verifiable AI Approach to Self-Driving
Engineering | From our leaders | June 13, 2024 | 4 min. read
By Chris Urmson
Artificial Intelligence (AI) is essential to the success of any modern complex system, including self-driving. We’ve been saying this for seven years. There is simply no other way to achieve human-like driving proficiency or handle the diversity of conditions encountered on our roads today.
Over the decades, we’ve seen self-driving technology follow the frontier of artificial intelligence. The fundamental challenge has been how to drive in a humanistic way while also guaranteeing the high level of safety demanded by society. The earliest vehicles in the ’50s and ’60s used then-cutting-edge technologies that we would now call “control theory”. The ’90s saw the earliest use of neural networks for driving on freeways. The 2000s ushered in probabilistic reasoning. And now, in the 2020s, we finally have the tools to make this long-standing vision a reality.
AI succeeds where logic-based approaches fail
A common misconception about self-driving cars is that good driving can be explained by a straightforward set of logical statements. Unfortunately, a purely logic-based approach is bound to fail, given the complex and dynamic environments we encounter on our roadways. Consider a simple driving task: changing lanes on the freeway.
It seems simple: find a big enough gap and maneuver under control into the next lane. And if the adjacent lane is empty, it’s easy. Similarly, if there is a vehicle next to you, it's obvious that you should not change lanes. But when the opening in the adjacent lane is small enough that you would encroach into another driver's headway, should you take it? Does another driver slightly slowing down indicate that they are anticipating your lane change and making room for you? Do you need to maneuver safely but more aggressively than normal because your exit is approaching? There is no simple set of requirements that captures the “right answer” here; the problem is too complex to reduce to a set of logic statements.
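To make that concrete, here is a deliberately naive sketch of what a rules-only lane-change check might look like. The gap representation, thresholds, and exit-distance logic are hypothetical illustrations, not anything we run on the road; the point is how quickly special cases and tuning knobs accumulate.

```python
# A deliberately naive, hypothetical rule-based lane-change check.
# Gap, the thresholds, and exit_distance_m are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Gap:
    length_m: float          # open space in the adjacent lane
    rear_car_closing: bool   # is the trailing car closing on the gap?
    rear_car_slowing: bool   # is the trailing car easing off to let us in?

def should_change_lanes(gap: Gap, exit_distance_m: float) -> bool:
    # Rule 1: the easy cases.
    if gap.length_m > 60.0:
        return True
    if gap.length_m < 10.0:
        return False

    # Rule 2: a mid-sized gap is acceptable only if the trailing car
    # isn't closing on it...
    if gap.rear_car_closing and not gap.rear_car_slowing:
        return False

    # Rule 3: ...unless the exit is coming up, in which case we tolerate
    # a tighter gap.
    if exit_distance_m < 400.0:
        return gap.length_m > 20.0

    # Rule 4: is a slightly slowing driver yielding, or just coasting?
    # Every answer spawns another threshold and another special case.
    return gap.length_m > 35.0 and gap.rear_car_slowing

print(should_change_lanes(Gap(30.0, rear_car_closing=False, rear_car_slowing=True), exit_distance_m=800.0))
```

Each new scenario adds another branch, and the branches interact in ways no one intended.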
This is where artificial intelligence can solve problems that a rules-based approach to engineering simply can't. But how do we make sure that an AI will be well-behaved and not do something alarming or dangerous? We've spent the last seven years focused on precisely that—implementing AI in a dependable, verifiable way.
The current revolution in AI
We have all been amazed, if not inspired, by the recent advances we’ve seen in modern Large Language Models. It’s been truly astounding to see how human-like these models can feel and how versatile they are at answering questions from the basic to the sublime.
How can we incorporate these approaches into self-driving vehicles? At Aurora, we’ve been using transformer-style models (the magic at the heart of large language models) on the road since 2021. To do so, we’ve had to overcome the fundamental challenge of creating a safe and verifiable system, or, in AI speak, ensuring that the self-driving AI is aligned.
The challenge of alignment
It turns out that making modern AI systems always do what you want them to is hard. In fact, there’s a term for it: “alignment.” If you’ve heard alignment discussed in the news, it’s generally about how to stop some hypothetical artificial intelligence from taking over the world. In practice, most work on alignment is much more mundane. It’s about ensuring that an AI is doing something useful (e.g. providing truthful, correct answers to questions instead of making things up) while also representing the developer’s values (e.g. not providing bigoted answers).
In practice, alignment has proven very difficult for large language model developers to achieve. Just consider any of the recent news cycles about various AI faux pas. It's one thing to tell people to add glue to their pizza, where hopefully the person reading that recommendation can intervene; it’s another to have a self-driving car veer off the freeway for no reason.
Getting alignment correct for self-driving vehicles is critically important.
Those naively attempting to solve self-driving using a pure end-to-end system will find themselves bogged down in a game of whack-a-mole, much like the folks delivering large language models have. For example, it’s likely that many chatbots now have an explicit set of sources that they exclude (e.g. various trolling channels on Reddit). In the self-driving world, the equivalent is patching ad-hoc bits of code onto the model’s output (e.g. to enforce stopping at stop signs rather than mimicking the common human behavior of rolling through them). Without some systematic, proactive framework, this will descend into an unmaintainable quagmire of code. For this reason, we expect that any self-driving system claiming to be “end-to-end” isn’t, or won’t be, in practice. Instead, those teams will need to find a solution that leverages all of the advantages of modern AI while ensuring alignment.
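Here is a minimal sketch of that anti-pattern, with hypothetical names throughout (the planner stub, scene fields, and patch functions are illustrations, not any real system’s interfaces): each incident spawns another one-off patch bolted onto the learned output, and none of the patches know about each other.

```python
# A hypothetical sketch of ad-hoc output patching; plan(), the scene fields,
# and the patch functions are illustrative assumptions, not a real API.

def plan(scene: dict) -> dict:
    # Stand-in for an end-to-end learned planner.
    return {"speed_mps": scene.get("desired_speed_mps", 25.0), "stopping": False}

def patch_rolling_stops(trajectory: dict, scene: dict) -> dict:
    # Added after the model was seen mimicking human rolling stops.
    if scene.get("stop_sign_ahead"):
        trajectory = {**trajectory, "speed_mps": 0.0, "stopping": True}
    return trajectory

def patch_school_zones(trajectory: dict, scene: dict) -> dict:
    # Added after a separate issue; it knows nothing about the other patches.
    if scene.get("school_zone"):
        trajectory = {**trajectory, "speed_mps": min(trajectory["speed_mps"], 11.0)}
    return trajectory

OUTPUT_PATCHES = [patch_rolling_stops, patch_school_zones]  # ...and growing

def drive(scene: dict) -> dict:
    trajectory = plan(scene)
    for patch in OUTPUT_PATCHES:   # one-off fixes applied in whatever order they were added
        trajectory = patch(trajectory, scene)
    return trajectory

print(drive({"stop_sign_ahead": True, "school_zone": True}))
```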
A safe and viable approach
At Aurora, we’ve developed an approach we call “Verifiable AI” that solves the alignment problem for self-driving vehicles. We lean into the fact that we’re not trying to build a general-purpose AI, but one that operates on roads.
On the road, we have an advantage that those working with language models do not: there is a clear set of rules of the road (for example, the Texas traffic code). While the traffic code can’t (and doesn’t try to) answer tricky questions like how quickly a driver should slow down when they’re cut off, it does have some very clear rules, or invariants, that every driver must comply with.
Instead of hoping that the Aurora Driver will accurately learn the Texas traffic code, we are able to encode the hard rules of the road (e.g. that the Aurora Driver must stop at red lights) as invariants, allowing us to avoid much of the whack-a-mole work. Outside of these invariants, our system learns how to drive well by observing how expert drivers behave. By combining the best of modern AI approaches with invariants, we are able to build a driver that is both human-like in its behavior and trained to follow the rules of the road. This is what we call “Verifiable AI”. And, because the rules of the road are largely common across the country, the Aurora Driver, much like a human driver, will readily scale to operate throughout the country and much of the world.
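As a rough illustration of the idea (the names, fields, and checks below are assumptions for the sketch, not our actual architecture or interfaces), a learned model proposes human-like candidate plans while explicitly encoded invariants gate which of those plans is eligible to be executed:

```python
# A minimal sketch, under assumed names, of a learned proposer gated by
# explicit invariants. Trajectory, the invariant functions, and
# propose_trajectories() are illustrative, not Aurora's actual interfaces.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Trajectory:
    max_speed_mps: float
    stops_at_red_light: bool
    stays_in_drivable_area: bool

# Hard rules of the road encoded explicitly, not learned from data.
def obeys_red_lights(t: Trajectory) -> bool:
    return t.stops_at_red_light

def respects_speed_limit(t: Trajectory, limit_mps: float = 29.0) -> bool:
    return t.max_speed_mps <= limit_mps

def stays_on_road(t: Trajectory) -> bool:
    return t.stays_in_drivable_area

INVARIANTS: List[Callable[[Trajectory], bool]] = [
    obeys_red_lights,
    respects_speed_limit,
    stays_on_road,
]

def propose_trajectories() -> List[Trajectory]:
    # Stand-in for a learned model trained on expert driving; in reality it
    # would generate and rank many candidate plans for a given scene.
    return [
        Trajectory(max_speed_mps=31.0, stops_at_red_light=True, stays_in_drivable_area=True),
        Trajectory(max_speed_mps=27.0, stops_at_red_light=True, stays_in_drivable_area=True),
    ]

def select_plan() -> Trajectory:
    # Only candidates satisfying every invariant are eligible; within that
    # set, the learned ranking decides which human-like plan to execute.
    candidates = [t for t in propose_trajectories()
                  if all(check(t) for check in INVARIANTS)]
    return candidates[0]  # highest-ranked compliant plan

print(select_plan())
```

The important design choice is the separation: the invariants are written down and checkable, so compliance with the hard rules does not depend on what the model happened to learn from data.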
We believe this is the only safe and viable approach for self-driving technology. It allows the development team to leverage the power of the most modern AI techniques and tools. And just as importantly, it makes it possible to verify and explain to regulators and other stakeholders that the system is trustworthy. It helps ensure the Aurora Driver behaves in ways that keep the motoring public safe.
A verifiable path forward
Over the last several decades, those in our field have learned the bitter lesson: data-centric AI systems will outperform hand-engineered systems as computation becomes more available. At the same time, the need to verify and constrain the behavior of AI systems has never been greater, particularly as we look to improve the safety and productivity of America’s roads. Without overcoming the alignment challenge, the public and regulators simply will not accept the technology.
That’s why we’ve built a system intended to maximize the benefits of all the groundbreaking work happening in AI while also delivering a practical, verifiable, and commercially scalable solution to market.
-Chris
P.S. Look for a follow-up post from our co-founder and Chief Scientist, Drew Bagnell, for an under-the-hood look at how we implement our Verifiable AI approach and the benefits that modularity gives us.