Self-Driving Car Freedom?

by William Mattar | August 28th, 2017

Today’s entry takes a look at a scholarly article published just over a year ago in the Yale Journal of Law and Technology: The Costs of Self-Driving Cars: Reconciling Freedom and Privacy With Tort Liability In Autonomous Vehicle Regulation.

Analyzing the laws of agency and products liability, the article’s main thrust is that “the more users want to preserve their freedom and privacy, the more liability they may end up retaining for the behavior of their self driving cars.”

The author notes “predictions that self-driving cars may be able to prevent many of the ten million car crashes [that occur in this country every year],” but—identifying competing interests of “freedom, privacy and liability”—acknowledges that “[n]ot everyone views the arrival of [self-driving car] technology quite so positively.”

The author frames the competing interests as follows:

Though the automobile has stood as a symbol of freedom and personal autonomy for generations, some fear that legal and economic pressures might eventually restrict the frequency and scope of human driving. . . . Regulators have been even more concerned with the threat self-driving cars pose to their users’ privacy. For example, California demands that the “manufacturer of the autonomous technology installed on a vehicle shall provide a written disclosure to the purchaser of an autonomous vehicle that describes what information is collected by the autonomous technology equipped on the vehicle.”

The most compelling aspect of the article, however, is the author’s exploration of how the application of existing common law—rules articulated by the courts—could shift liability away from self-driving car users toward third parties like manufacturers or government entities.

In assessing this “liability shift,” the author proposes four different degrees of interaction between the vehicle and its occupant.

On one end of the spectrum are discretionary-uncommunicative vehicles. Despite its autonomous features, such a vehicle relies heavily on its operator:
“A user can tell discretionary-uncommunicative vehicles which route to take and can take the wheel back if she wants to drive. . . Because discretionary-uncommunicative vehicles grant their users the maximum degree of control, discretion, and autonomy over their operation, a discretionary uncommunicative vehicle should generally be considered the agent of its user, not its manufacturer.”

On the other end of the spectrum are nondiscretionary-communicative vehicles, where the operator is pretty much taken out of the equation:
“these self driving cars might communicate with other vehicles, insurers, manufacturers, and/or government agencies. Based on current proposals, nondiscretionary-communicative vehicles would probably fall into one of two camps: interactive or remote controlled . . . Because decision-making would either be guided by pre-established standards (in the interactive scenario) or determined externally by an overseer program (in the remote-controlled scenario), either the standards or the overseer may often be the cause of accidents.”
While discretionary-uncommunicative vehicles would facilitate a liability environment where operators “channel [liability for accidents] through the private insurance market,” nondiscretionary-communicative vehicles would promote “market-share liability, which could more reasonably accommodate this system-wide diffusion of agency and shared responsibility.”
In other words, a common fund or trust for accident victims.