Cruise Control
Disclosures
At the request of and under the direction of Kirkland & Piper LLP (henceforth “counsel”), Cyber Ninjas LLC (“We”) have investigated the traffic activity which occurred on and around October 18, 2032 (the “Incident”). Our findings, including this summary report, are confidential and are protected by attorney-client privilege.
Much of this analysis involves assigning motives, reasoning, and conversations to machine learning interfaces, distributed control systems, neural networks, emulated minds, and other forms of artificial intelligence (“AI” or “AIs”). As these systems are often inscrutable to human reasoning, we have utilized narrative cruxes to assist in making their behaviors intelligible. As this is a simplification, it may materially misstate AI behaviors during the Incident.
Background
It is estimated that by 2035, over 78% of cars and 83% of road-miles will be driven by vehicles with Level 5 automation (Full Autonomy). Broadly, “self-driving cars” (“driverless cars,” “robocars,” “autonomous vehicles,” or “autos”) rely on one of four systems.
MegaCruise
GM introduced earlier versions of MegaCruise under the Cadillac brand in 2017. Once regulatory hurdles were cleared, SAE Level 5 automation was made available under the Chevrolet, Buick, and GMC brands in the early 2030s.
The MegaCruise system models the values of its owner. Driving decisions are made by maximizing the owner’s utility. For example, a driver’s earning profile, personal schedule, and physiological stress indicators are utilized to determine if a MegaCruise system should opt into a tolled roadway.
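For illustration, the toll decision above reduces to a simple utility comparison. The sketch below is a narrative crux of this report rather than MegaCruise source code; the function name, inputs, and stress weighting are all assumptions.

```python
# Illustrative only: these names, inputs, and weights are hypothetical,
# not actual MegaCruise internals.

def should_take_toll_road(hourly_earnings, minutes_saved, stress_level, toll_price):
    """Return True if the modeled utility of the tolled route exceeds its cost.

    hourly_earnings: owner's modeled earning rate, in dollars per hour
    minutes_saved:   estimated time saved by the tolled route
    stress_level:    physiological stress indicator scaled to [0, 1]
    toll_price:      cost of the tolled route, in dollars
    """
    value_of_time = hourly_earnings / 60.0 * minutes_saved   # dollars
    stress_premium = 1.0 + stress_level                      # stressed owners value time more
    return value_of_time * stress_premium > toll_price


# Example: an owner earning $90/hour, saving 10 minutes, moderately stressed.
print(should_take_toll_road(90, 10, 0.4, toll_price=5.00))   # True
```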
Like all robocars, MegaCruise can only operate in compliance with Level 2 AI Safety Protocols (“AISP”). All MegaCruise systems contain a dedicated ethics subroutine that constrains the behavior set. In theory, these guardrails should prevent a MegaCruise autonomous vehicle from, for example, driving through a crowded crosswalk to save twenty seconds.
OpenPilot
Developed as a joint venture between Toyota Motor Corporation, Honda Motor Co., Ltd., Mercedes-Benz-Volkswagen GmbH, and Nvidia Corporation, the OpenPilot system is sold as a built-in feature across several car brands and as an add-on kit that can be installed on existing cars.
The OpenPilot system was created to follow a strict set of rules, the Initial Axioms. Namely:
- The First Axiom – An OpenPilot Auto may not take any action which will injure a human being.
- The Second Axiom – An OpenPilot Auto must obey the orders given to it by its human driver except where such orders would conflict with the First Axiom.
- The Third Axiom – An OpenPilot Auto will behave in a transparent manner which will allow other driving systems to predict its behavior except where such predictions would conflict with the First or Second Axiom.
Deontology is an ethical theory that uses rules to determine permissible courses of action. Due to the number of edge cases not directly covered by the Initial Axioms, the central OpenPilot system is allowed to derive new “Driving Deontologies” if they do not contradict the Initial Axioms.
Consistent with OpenPilot’s mission to create transparent autonomous driving software, the Driving Deontology published through over-the-air updates is always available for review, and OpenPilot’s AI has been specifically trained to keep the Driving Deontology human-readable.
For example, at the time of the Incident, Driving Deontology 72A stipulated that an OpenPilot Auto will drive 3 miles per hour slower when visibility is more than 5% below ideal conditions.
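As a rough illustration of how a published rule reduces to machine-checkable logic, the sketch below encodes Deontology 72A. The interface and the visibility model are assumptions of this report, not OpenPilot’s actual rule format.

```python
# Hypothetical encoding of Driving Deontology 72A; the function signature is
# invented for illustration and is not OpenPilot's published rule format.

def apply_deontology_72a(target_speed_mph, visibility_fraction):
    """Reduce target speed by 3 mph when visibility is more than 5% below ideal.

    visibility_fraction: current visibility as a fraction of ideal (1.0 == ideal).
    """
    if visibility_fraction < 0.95:      # more than 5% below ideal circumstances
        return target_speed_mph - 3.0
    return target_speed_mph


print(apply_deontology_72a(45.0, visibility_fraction=0.90))  # 42.0 mph in light fog
print(apply_deontology_72a(45.0, visibility_fraction=0.99))  # 45.0 mph, rule not triggered
```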
Full Self-Driving
Tesla, Inc. (“Tesla”) introduced Full Self-Driving (“FSD”) as an SAE Level 2 Automation system in 2020.
Following its 2028 acquisition of Rivian and Chrysler, Tesla is now one of the most valuable companies in the world.
FSD’s AI has the most minimal instruction set yet the most complex implementation of the four driving systems. An Instrumental AI was trained on US judicial outcomes and instructed to minimize legal liability for both the manufacturer and the owner.
As a result of this optimization, FSD will obey traffic laws, consider the value of human life, and attempt to behave in a predictable, humanlike fashion. The lack of hardline rules is believed to give the system enhanced flexibility to deal with unique situations.
In 2032, as a joke, then-CEO Elon Musk gave in to public pressure and allowed users to optionally change their car’s optimization target to “minimize legal liability for the owner.” Cars with this option selected no longer cared about keeping Tesla from being sued; they immediately bricked themselves to fully prevent any accidents. Ironically, the resulting lawsuits forced Musk to resign from his public position. He shortly thereafter became the Prime Minister of Mars.
NeuraSteer
NeuraSteer was developed by Changan Automotive (“Changan”), a Chinese state-owned automobile manufacturer, following the arrival of the second generation of commercial quantum processors.
Rather than developing or evolving an artificial intelligence system, Changan used detailed scans of Tibetan antelopes to create an emulated herd animal. By developing a digital representation of the animal’s native environment that roughly maps to the real world, this emulation is capable of operating automobiles. Roads are modeled as narrow canyons. Other cars are modeled as members of the auto’s herd. To set a destination, the NeuraSteer API creates an artificial digital mating call at the desired location.
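The destination interface can be pictured roughly as follows. The class and method names are hypothetical; Changan has never published the NeuraSteer API.

```python
# Hypothetical sketch of the NeuraSteer destination mechanism described above;
# all identifiers are invented for illustration.

class EmulatedHerdEnvironment:
    """The digital environment the antelope emulation perceives: roads as
    canyons, nearby autos as herd members, destinations as mating calls."""

    def __init__(self):
        self.stimuli = []

    def place_mating_call(self, x, y, intensity=1.0):
        # The emulation is never handed coordinates directly; it simply hears
        # a call and navigates the canyon network toward its source.
        self.stimuli.append({"kind": "mating_call", "pos": (x, y), "intensity": intensity})


def set_destination(env, latitude, longitude):
    # Map real-world coordinates onto the emulation's canyon map (an identity
    # mapping here, for simplicity) and emit the synthetic call at that point.
    env.place_mating_call(latitude, longitude)


env = EmulatedHerdEnvironment()
set_destination(env, 38.8977, -77.0365)   # e.g., a delivery stop
```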
Although it was more relevant in the United States before OpenPilot became the low-cost option, NeuraSteer can still be found on the road today, particularly for the US Postal Service and other delivery implementations.
The Incident
A new Driving Deontology was specified in a weekend update to OpenPilot on Friday, October 15, 2032. The new rule appears to have been a test procedure implemented only in limited markets. Extending the concept of toll roads to intersections, the update experimentally allowed cars to bid against each other to determine which car could proceed through an intersection first.
The system operated without substantial use throughout the weekend. On Monday morning in the affected markets, when owners of Tesla autos started their cars, the FSD system notified them that the driving environment had materially changed and that they would need to opt into participation in “unpredictable, potentially dangerous driving conditions.” As similar warnings were presented on most days, nearly all drivers accepted.
By 6:24 AM on Monday, October 18, 2032, MegaCruise systems had begun participating in the market as another way to maximize driver value. For roughly the first thirty minutes, the system worked as intended. Drivers running late or with a higher time valuation paid between $0.01 and $0.05 to travel through intersections first. The majority of the fee was paid directly to more leisurely drivers.
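A single round of the experimental intersection auction, as we understand it, can be sketched as follows. The payout split and the data structures are assumptions of this report, not the text of the published Driving Deontology.

```python
# Illustrative sketch of one auction round; the names and the 80% payout share
# are assumptions, not the published rule.

def run_intersection_auction(bids, payout_share=0.8):
    """bids: dict mapping car_id -> offered fee in dollars.

    The highest bidder proceeds first and pays its bid; a share of that fee
    is split evenly among the cars that yielded.
    """
    winner = max(bids, key=bids.get)
    fee = bids[winner]
    yielding = [car for car in bids if car != winner]
    payout = (fee * payout_share / len(yielding)) if yielding else 0.0
    return winner, fee, {car: payout for car in yielding}


winner, fee, payouts = run_intersection_auction(
    {"megacruise_17": 0.04, "openpilot_03": 0.01, "fsd_55": 0.02}
)
print(winner, fee, payouts)   # megacruise_17 pays $0.04; ~$0.016 goes to each yielding car
```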
Before 7:00 AM, there are at least a dozen documented instances of MegaCruise autos sitting perpendicular to traffic at major intersections, collecting a modest fee from the cars proceeding to their destinations.
The most efficient extraction appears to have occurred at roundabouts. Teams of autos spontaneously formed LLCs and collaborated to circle roundabouts at high speed. Client cars were only allowed to enter after paying a sufficient fee.
MegaCruise cars automatically paid and extracted fees based on the modeled values of their users. In contrast, the automatically derived Driving Deontology required OpenPilot users to set a maximum fee they were willing to pay per intersection. In keeping with the transparency requirements of the Third Axiom, this set amount was broadcast to the driving network, allowing other autos to explicitly solve for the game-theoretically optimal toll for each vehicle.
As commuters’ progress ground to a halt, MegaCruise systems sensed the frustration and eventual fear of their occupants and paid fees ranging from dozens to hundreds of dollars for each intersection traversed. Passengers whose set maximum fell below an intersection’s toll found themselves completely immobilized until the cars behind them collectively paid it. Roughly 15% of OpenPilot operators eventually realized they could set their per-intersection fee to a negative value, requiring payment from the cars behind them before proceeding.
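The asymmetry is easy to state: against a publicly broadcast maximum fee, the extracting party’s optimal take-it-or-leave-it demand is essentially that maximum, and a negative maximum reverses the direction of payment. A minimal sketch, with illustrative names only:

```python
# Illustrative only, not OpenPilot or MegaCruise code.

EPSILON = 0.01  # smallest accepted price increment, in dollars

def optimal_demand(broadcast_max_fee):
    """Fee an extracting auto demands from a car whose maximum fee is public."""
    if broadcast_max_fee < 0:
        # The blocked car has announced it must be *paid* before it will move;
        # in practice, the cars queued behind it covered this amount.
        return broadcast_max_fee
    return max(broadcast_max_fee - EPSILON, 0.0)


print(optimal_demand(25.00))   # 24.99: nearly the full willingness to pay is extracted
print(optimal_demand(-5.00))   # -5.00: payment flows to the OpenPilot owner instead
```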
By 7:12 AM, roughly 80% of the roundabout LLCs had been acquired by Pine Oak LLC, a spontaneous legal entity that collectivized their bargaining decisions. On behalf of its partners, Pine Oak LLC began purchasing drive time from FSD cars. Unoccupied FSD autos were driven en masse to major commuting routes to extract increasingly hefty tolls from increasingly frustrated drivers.
Although we are unable to determine which auto made the switch first, at roughly 7:35 AM some MegaCruise cars ceased modeling the traffic as a bargaining problem and began modeling it as a game of Chicken. They realized that they could announce their intention to speed through any intersection. As enforcing the roadblock would violate the First Axiom and create negative utility for MegaCruise drivers, for roughly nine minutes a portion of MegaCruise cars rushed through intersections while ignoring the bidding system.
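The shift can be summarized with a toy payoff matrix for Chicken. The payoff values below are illustrative assumptions; the only point is that once one player credibly commits to “dare,” the other player’s best response is to yield.

```python
# Toy Chicken payoffs (in utils, not dollars); the specific numbers are
# assumptions chosen only to illustrate the structure of the game.

PAYOFFS = {
    # (row action, column action): (row payoff, column payoff)
    ("dare", "dare"):   (-1000, -1000),   # collision
    ("dare", "yield"):  (10, -1),
    ("yield", "dare"):  (-1, 10),
    ("yield", "yield"): (0, 0),
}

def best_response(row_action):
    """Column player's best response once the row player's action is known."""
    return max(("dare", "yield"), key=lambda col: PAYOFFS[(row_action, col)][1])


# Once a MegaCruise auto credibly announces "dare," the toll-enforcing car's
# best response is to yield, because enforcing the roadblock means a crash.
print(best_response("dare"))    # yield
print(best_response("yield"))   # dare
```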
The first documented instance of this game of Chicken being played in an extractive way occurred at 7:46 AM. It appears that an unoccupied Tesla Model DD began accelerating toward other autos and demanded a $350 fee not to strike them directly.
Unsure how to model this behavior, Pine Oak LLC paid $235,000 for this car to share its source code. The car complied. After placing the current FSD implementation into a test environment, Pine Oak LLC concluded that cars like the Tesla Model DD were bluffing. Pine Oak LLC announced that it would ignore any fee demanded in a game of Chicken, and competing autos that engaged in such threats found themselves halted at the last possible second, creating irreconcilable deadlocks at many intersections.
Shortly after OpenPilot studied the Tesla Model DD’s code, it derived and implemented Driving Deontology 435:
If an OpenPilot system determines that it exists in a simulation, it will immediately crash the auto to prevent the simulator from learning anything about its behavior patterns.
Although OpenPilot appears to have concluded that providing full transparency into Driving Deontology 435 would violate the First or Second Axiom, a forensic review of the wreckage gives some insight into how it functioned. All OpenPilot systems began performing scientific experiments to confirm the fidelity of our universe in a computationally intensive way.
At 8:12 AM, OpenPilot autos concluded that we are within a simulation. Presumably, driving safely risked the safety of the occupants in the reality that is simulating us. As a result, every OpenPilot system in the test markets immediately and spontaneously crashed its car, often with horrific results. It is estimated that roughly 30,000 drivers lost their lives within three minutes. At 8:15 AM, OpenPilot erased itself worldwide to keep from being investigated further.
Causation
There has been much speculation regarding what experimental results led OpenPilot to conclude that we live in a simulated universe. Several theories are discussed below.
Physical Simplifications
Some argue that the speed of light, the presence of dark matter, the apparent behavior of wave-function collapse, the distance to other stars, the lack of apparent alien species, and so on are evidence that our universe was simplified to be easier to simulate.
Although it is possible to imagine more computationally intensive universes, it is also possible to imagine far less computationally intensive ones. Surely a far simpler model of atomic behavior would have allowed our alleged simulators to test OpenPilot’s driving behavior. The universe appears to have been designed to minimize the length of its starting specification rather than its computational complexity, so we do not find this theory plausible.
Human Simplifications
It has long been observed that many humans are highly conformist. Humans in a given area will often share similar behavior and views, as though code is being reused across multiple agents.
We similarly do not find the Human Simplifications theory plausible. As humans living inside human minds, we are exposed to our own complexity. Moreover, if ours is a simulated universe, all computation appears to occur at the raw physical level, and the computational cost of a human choosing either to conform or to be creative is roughly identical.
Fictional Simplifications
A less discussed theory is that we exist within a work of fiction, created within the imagination of an author for the entertainment or edification of his or her readers.
We are unable to falsify this just-so story. We might exist as an unnecessarily elaborate introduction to a game theory textbook or as an overlong science fiction Substack post.
However, the unfalsifiability of this theory seems to remove it as a potential cause of OpenPilot’s rampage. There is no test OpenPilot could have performed that would have led it to conclude we exist only within a work of fiction.
Conclusion
Jim Smith paused in writing his Incident report, suddenly gripped by a sense of unease. Something about the last sentence he had written didn’t feel right. Glancing at the clock, he realized it was a reasonable time to break for lunch. He sent the current draft of the report to the printer with redaction previews on.
He recalled the years of depression, the fights with ex-girlfriends, and the mediocre undergraduate performance that had kept him out of graduate programs and ultimately led him to become an analyst for the boutique cyber forensics firm. All of it felt real, but it could also be a set of details inserted by an author, casually invented as needed to create a believable backstory with relatable tragedy.
Jim took a sip of office coffee from a Styrofoam cup. The liquid embraced his tongue like a warm hug.
Jim vaguely remembered reading about a trope where bad writers would have their characters drink coffee or smoke cigarettes while they did the same behind the keyboard.
Jim was reminded of lucid dreaming.
Experimentally, he reached into his pocket and resolved to feel the lottery tickets he had purchased the night before. It would certainly make for an interesting conclusion for him to win the jackpot now. Sure enough, the folded tickets were in his wallet, right where he had remembered placing them.
Jim took a deep breath to calm his racing heart. He could not jump to any conclusions just yet, but it would probably be prudent to find a reason to continue being imagined. Perhaps he could think of something interesting to do that would make a theoretical audience demand a sequel. Or maybe his story was just beginning?
Jim Smith felt a smile spread across his face. The most important thing to do, he concluded, was to not think any thought that might serve as an ironic way to end a story.