February 15, 2023

Can We Trust Software Stack 2.0?

by Dr. Andy Zelenak, Controls Engineer

New Artificial Intelligence (AI) tools are powerful and extremely useful for some tasks, such as writing blog posts, documenting code, and generating initial content for brainstorming. This post looks specifically at autonomous driving, a cutting-edge application of AI and one for which we already have several years of data. A close look at the use of AI for autonomous driving reveals that even the most optimistic AI proponents still depend heavily on human developers. It will be some time before we transition to a “Software Stack 2.0” world where AI is trusted more than human coders.

Introduction

In November 2022, the OpenAI research laboratory made a huge splash by opening the ChatGPT bot for public use. The bot provides detailed, articulate responses to a wide variety of prompts, including coding questions.

The engineers at PickNik were delighted, excited, and a bit nervous when we first tried writing code with ChatGPT. It did a decent job! Obviously this has major ramifications for software developers around the world. We now use the chatbot in a limited capacity (always human-reviewed!) to write comments and basic unit tests for some non-critical code. Occasionally we prompt it for its opinion on how to write a function; the output is often close to correct, and it can be useful for brainstorming. So far, however, we have found that code written by ChatGPT is usually not reliable.

For example, after some effort prompting ChatGPT to write an angle-wrapping function, we received this:

#include <cmath>
// Unwrap a radian angle to a value between -PI and PI
double unwrapRadianAngle(double angle) {
  // Calculate the modulus of the angle and 2*PI
  double mod = fmod(angle, 2 * M_PI);

  // If the modulus is greater than PI, subtract 2*PI
  // to get the equivalent angle between -PI and PI
  if (mod > M_PI) {
    mod -= 2 * M_PI;
  }

  return mod;
}

It missed this:

if (mod < -M_PI) {
  mod += 2 * M_PI;
}

The bot’s output was helpful but not fully correct, and it still required an expert to closely review it and understand the math.
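For completeness, here is the function with the missing branch folded in; only the conditional changes from the bot’s version:

#include <cmath>

// Normalize a radian angle to a value between -PI and PI
double unwrapRadianAngle(double angle) {
  // fmod preserves the sign of the input, so the result lies in (-2*PI, 2*PI)
  double mod = fmod(angle, 2 * M_PI);

  // Shift any result outside [-PI, PI] back into range
  if (mod > M_PI) {
    mod -= 2 * M_PI;
  } else if (mod < -M_PI) {
    mod += 2 * M_PI;
  }

  return mod;
}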

What does a leader in the field predict for AI in the near future?

Given how quickly publicly available AI tools for software development have grown in the past few months, this post tries to summarize the status quo. We’ll look at an example of a world-class company and how it uses AI in its day-to-day operations. Is it time to trust the great neural network in the cloud, or is it mostly hot air?

Andrej Karpathy, former Director of AI at Tesla, gave a presentation in 2018 called “Building the Software 2.0 Stack.” In a nutshell, he described a workflow where the main task of humans is to curate datasets and provide them to neural networks for training. His target application, of course, was autonomous driving. The neural nets would do the difficult decision-making, down to the level of determining how to nudge forward into traffic to improve visibility. It’s safe to say that Tesla is one of the most optimistic organizations in the world when it comes to trusting and investing in artificial intelligence.

Has Tesla achieved the Software 2.0 vision yet? My answer is: partially, but human-tuned heuristics are still prevalent. Scrutinizing the release notes of several software updates shows that human intelligence is still very involved in Tesla’s Autopilot software. For example, comments such as these hint at human decision-makers:

  • “Reduced sensitivity for speed-based lane changes in CHILL mode.”
  • “Added highway behavior to offset away from blocked lanes and generic obstacles like road debris while also adding a smooth hand-off between in-lane offsetting and lane changing.”

It is difficult to tell whether humans are coding these changes themselves or tuning the cost/reward functions of deep reinforcement learning algorithms.
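To illustrate the difference, the hand-coded heuristic behind a note like “reduced sensitivity for speed-based lane changes” could be as simple as an engineer raising a threshold. The sketch below is purely hypothetical; the struct and parameter names are invented, and this is certainly not Tesla’s code:

// Purely hypothetical sketch of a human-tuned heuristic: propose a lane
// change only when the expected speed gain clears a threshold chosen
// (and re-tuned) by an engineer
struct LaneChangeHeuristic {
  double min_speed_gain_mps = 2.0;  // raising this value "reduces sensitivity"

  bool shouldProposeLaneChange(double current_lane_speed_mps,
                               double adjacent_lane_speed_mps) const {
    return (adjacent_lane_speed_mps - current_lane_speed_mps) > min_speed_gain_mps;
  }
};

In a learned system, the same behavior change might instead come from re-weighting a reward term and retraining; the release notes alone don’t tell us which happened.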

Perhaps we can judge the progress of Software Stack 2.0 by how many statistics we see in these release notes. When the release notes are filled with boring statistics rather than semantics, we’ll know we’re getting closer to Software Stack 2.0. It’s not exciting to read “Brake engagements reduced from 1.7654 to 1.7492 per mile,” but that is what progress will look like.

AI Performance Metrics

So humans are still very involved in tuning the heuristics for Tesla’s self-driving vehicles. But does the software perform well? That depends on which metric we consider. Per a crowd-sourced dataset measuring user interventions per mile, Tesla is far behind other autonomous vehicles: the mean distance between Tesla user interventions is approximately 5 miles, while AutoX, Waymo, and Cruise are all far more reliable, at more than 29,000 miles between interventions. Of course, this is not an apples-to-apples comparison, because these companies use different sensor suites. Tesla’s camera-only sensor suite is less expensive than its competitors’, and Tesla operates in a much less restricted fashion in many ways.

On the optimistic side, the Tesla Vehicle Safety Report of Q4 2021 said:

“In the 4th quarter, we recorded one crash for every 4.31 million miles driven in which drivers were using Autopilot technology (Autosteer and active safety features). For drivers who were not using Autopilot technology (no Autosteer and active safety features), we recorded one crash for every 1.59 million miles driven. By comparison, NHTSA’s most recent data shows that in the United States there is an automobile crash every 484,000 miles.”

While this statistic sounds promising, it is likely misleading: it doesn’t account for confounding variables such as the fraction of highway miles or the fraction of miles driven at night.
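A toy calculation shows how the mileage mix alone can produce such a gap. All numbers below are invented purely for illustration; the real highway/city split and per-road-type crash rates are not public:

#include <cstdio>

int main() {
  // Invented baseline rates: highway driving is inherently safer per mile
  const double highway_crash_rate = 1.0 / 3.0e6;  // crashes per mile
  const double city_crash_rate = 1.0 / 0.5e6;

  // Suppose Autopilot miles are 90% highway, but manual miles only 30%
  const double autopilot_rate = 0.9 * highway_crash_rate + 0.1 * city_crash_rate;
  const double manual_rate = 0.3 * highway_crash_rate + 0.7 * city_crash_rate;

  std::printf("Autopilot: one crash per %.2f million miles\n",
              1.0 / autopilot_rate / 1.0e6);  // prints ~2.00
  std::printf("Manual:    one crash per %.2f million miles\n",
              1.0 / manual_rate / 1.0e6);     // prints ~0.67
  return 0;
}

Even though the technology in this toy model is identically safe on each road type, the mileage mix alone makes Autopilot appear roughly three times safer.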

Autonomous driving performance will ultimately be judged not by performance under “typical” driving circumstances (where it is easy to gather data), but by performance in the rare circumstances that require a deeper understanding of what is safe and permissible. When there is unexpected road construction, confusing signage, or some other surprise, it is hard to guarantee safety. Over time, more driving data (and accident data!) will help fill this gap, but traditional software engineering can also help harden Software 2.0-based systems. At a high level, run-time monitors can ensure that an autonomous car complies with certain safety specifications; if a violation occurs, a contingency maneuver can bring the vehicle to a safe state.
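As a concrete illustration, a run-time monitor can be as simple as a check executed every control cycle. The sketch below assumes hypothetical interfaces; the state struct, the specification, and the commented-out policy calls are all invented:

// Hypothetical vehicle state, invented for illustration
struct VehicleState {
  double speed_mps;                   // current speed
  double distance_to_lead_vehicle_m;  // gap to the vehicle ahead
};

// Example safety specification: always maintain at least 2 seconds of headway
bool satisfiesHeadwaySpec(const VehicleState& state) {
  const double min_headway_s = 2.0;
  return state.distance_to_lead_vehicle_m >= min_headway_s * state.speed_mps;
}

// Runs every control cycle: the learned policy drives only while the
// specification holds; otherwise a contingency maneuver takes over
void controlStep(const VehicleState& state) {
  if (satisfiesHeadwaySpec(state)) {
    // runLearnedPolicy(state);  // the "Software 2.0" component
  } else {
    // executeContingencyManeuver(state);  // e.g., brake to a safe stop
  }
}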

While it may be practical to collect massive amounts of driving data to train neural networks, the same is not true in every domain, and it often takes a very large amount of data to achieve generalizable behavior. This was one of the reasons OpenAI abandoned its efforts in using AI for robotics. Physics simulators can be part of the solution: they enable the generation of massive amounts of training data. However, one must be mindful of the sim-to-real gap: a simulated system may not behave exactly like the real system (or may not capture the variability encountered in the real world). This gap is shrinking, but the performance of an AI-driven system in situations that fall into it is ill-defined and can be catastrophic.
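One common way to narrow the gap, used in OpenAI’s own robotics work among many others, is domain randomization: perturbing the simulator’s physics parameters every training episode so the policy cannot overfit to any single simulated world. A minimal sketch, with invented parameter names and ranges:

#include <random>

// Invented simulator parameters, for illustration only
struct SimParams {
  double friction_coefficient;
  double payload_mass_kg;
  double sensor_noise_stddev;
};

// Domain randomization: sample fresh physics parameters for each training
// episode so the learned policy must generalize across the whole range,
// and hopefully to the real system that lies somewhere inside it
SimParams randomizeSimParams(std::mt19937& rng) {
  std::uniform_real_distribution<double> friction(0.4, 1.0);
  std::uniform_real_distribution<double> mass(0.5, 2.0);
  std::uniform_real_distribution<double> noise(0.0, 0.05);
  return SimParams{friction(rng), mass(rng), noise(rng)};
}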

Summary

This blog discussed the use of Artificial Intelligence in software development, with a specific focus on autonomous driving. While AI tools are powerful and can be extremely useful for certain tasks such as initial concept generation and code documentation, the state of autonomous driving shows that humans still need to keep a close grip on the reins. I shared my experience working with OpenAI’s ChatGPT: it did a decent job, but the code it writes is often not reliable. I also looked at Tesla, one of the most optimistic organizations in the world when it comes to trusting and investing in artificial intelligence, and its pursuit of a fully autonomous AI-driven world; even there, humans are still very much involved in the process. I suggested that progress toward this vision can be measured by the rise of statistics and the decline of semantics in software release notes. Overall, while AI has the potential to greatly improve software development, it is important to keep in mind that it is not yet ready to replace human intelligence entirely.