
Dr. Bull and his team return from vacation to tackle a deadly accident involving a self-driving car. Ejetto, an autonomous vehicle manufacturer, is just weeks away from launching its new car, which is operated by an anthropomorphized program called E.J. One night during a test, an engineer named Adam is run over and killed by the car. Adam’s widow sues Ejetto, claiming the company was negligent and seeking damages that would bankrupt it. Ejetto counters that Adam was careless, showing up late to work and skipping security protocols, which caused E.J. to malfunction. Dr. Bull takes the case because he believes Ejetto’s founder is a promising genius with the potential to change the world.

As things unfold, it becomes clear that there has been foul play. Someone embedded a backdoor into E.J.’s software such that any hacker could exploit the car’s ethics algorithm. This algorithm determines the car’s course of action in situations in which the loss of human life is unavoidable. As one of Ejetto’s engineers explains: if a driver is alone and a tree falls into his lane, with a car to his right carrying two passengers and a car to his left carrying three, E.J. must decide where to crash. Because the car is programmed to kill as few people as possible, in this example E.J. would run straight into the tree and kill the single driver. On the night of Adam’s death, someone exploited the backdoor and convinced E.J. that it was better to run over Adam than to avoid him.
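To make the engineer’s explanation concrete, here is a minimal, purely illustrative sketch of how such a crash-optimization rule might look. The show never reveals Ejetto’s actual code, and every name and number below is hypothetical; the point is only that "kill as few people as possible" reduces to picking the maneuver with the lowest expected casualty count.

```python
# Illustrative sketch only: a toy crash-optimization rule that picks the
# maneuver expected to cost the fewest lives, as in the engineer's tree example.
# Nothing here reflects any real manufacturer's software.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_casualties: int  # occupants likely to be killed on this path

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Return the maneuver that minimizes expected loss of life."""
    return min(options, key=lambda m: m.expected_casualties)

# The tree example: swerving in either direction hits an occupied car;
# staying in the lane hits the tree and sacrifices the lone driver.
options = [
    Maneuver("stay in lane (hit tree)", expected_casualties=1),
    Maneuver("swerve right (hit car with 2 passengers)", expected_casualties=2),
    Maneuver("swerve left (hit car with 3 passengers)", expected_casualties=3),
]

print(choose_maneuver(options).name)  # -> "stay in lane (hit tree)"
```

Framed this way, the episode’s hack is easy to describe: tampering with the casualty estimates (or the rule itself) flips which maneuver the car "believes" is least harmful.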

While the trial at issue here concerned an employer’s liability for an employee’s injury, autonomous vehicles more generally throw a wrench into much of existing tort law. Since self-driving cars will inevitably be involved in accidents, states are racing to adopt new approaches to deal with this emerging technology. As it currently stands, however, no state has laws addressing the programming of crash-optimization algorithms or, more generally, tort liability for accidents caused by autonomous vehicles. This is already becoming a problem, as shown by high-profile accidents involving self-driving cars manufactured by Google, Tesla, and others.

If states choose to adopt a traditional negligence standard, a number of complications emerge. When a driver’s error causes an accident in a user-operated vehicle, the driver is responsible only if he acted negligently; that is, if he failed to operate the vehicle with the level of care that a person of ordinary prudence would have exercised under the same circumstances. Juries are instrumental in structuring the key elements of this standard, including determining what duty is owed and whether a person breached it by acting unreasonably. In so doing, juries channel and help shape the community’s conscience about what risks are reasonable for a person to take. In a tragic scenario like the tree example above, a jury is unlikely to find that the driver had a duty to the passengers in the other cars to sacrifice himself. Indeed, the reasonable man is not perfect.

With an autonomous vehicle, however, if the car is operating as designed (thereby not implicating traditional notions of products liability), it is far more difficult to construct a profile of negligence. Should the standard of care be higher or lower than that of an ordinary person? Is it even possible for a robot to act negligently when its engineers intentionally programmed it to make a specific decision? Moreover, is it reasonable, or even desirable, for a computer to make the self-sacrifice choice? Perhaps because these questions are so difficult, most scholars of the law surrounding autonomous vehicles propose that strict liability and a broad insurance scheme are the most realistic solution for this emerging technology.

As usual, Dr. Bull does not confront these difficult questions. Instead, he invites the jurors on a field trip to test Ejetto’s self-driving car for themselves, and he orchestrates a near accident to convince Ejetto’s founder to testify at trial. These unconventional methods work: Dr. Bull ultimately convinces the jury to find Ejetto not liable and identifies the greedy programmer responsible for Adam’s death. And just like that, Dr. Bull wraps up (perhaps just a bit too nicely) another week’s work.

Steve Susman serves as Executive Director for the Civil Jury Project at NYU School of Law. Forty years ago, he founded Susman Godfrey, the country’s first commercial litigation boutique, specializing in representing plaintiffs on a contingent fee basis in complex business disputes including antitrust and securities fraud class actions. Most recently, he has been on a crusade to save jury trials in civil cases. He is an adjunct professor at NYU and teaches a course entitled “How to Try a Jury Case Intelligently,” where his students put into practice procedures to make jury trials less expensive and more comprehensible.

Richard Jolly serves as Research Fellow for the Civil Jury Project at NYU School of Law. He earned his J.D., magna cum laude, from the University of Michigan Law School, where he served as a contributing editor for the University of Michigan Journal of Law Reform. Before coming to NYU, Richard worked as a summer associate at Irell & Manella in Los Angeles, California, and clerked for the Honorable Deborah L. Cook of the Sixth Circuit in Akron, Ohio.
