What's Next: Why Lawyers Need to Care About Deepfakes + AI and Zoning Law (It's Going to Be a Thing)
Coming soon to a courtroom near you … deepfake videos. Plus, we break down what the rise of autonomous vehicles means for zoning law.
March 13, 2019 at 10:00 AM
15 minute read
Welcome back to What's Next, where we report on the intersection of law and technology. Today, we talk with Stanford's Riana Pfefferkorn about deepfakes and why lawyers need to care about this alarming issue. Also, autonomous vehicles could affect our zoning laws (think fewer parking garages). More on that unexpected legal angle, plus other news, below.
If you follow technology, it's likely you're in a panic over deepfakes—altered videos that employ artificial intelligence and are nearly impossible to detect. Or else you're over it already. For lawyers, a better course may lie somewhere in between. We asked Riana Pfefferkorn, associate director of surveillance and cybersecurity at Stanford Law School's Center for Internet and Society, to explain (sans the alarmist rhetoric) why deepfakes should probably be on your radar.
➤ How long have you been focused on the phenomenon of deepfake videos? What is it about deepfakes that most interests you?
I first got interested in deepfakes in the spring of 2018, when I was co-teaching a course on cybersecurity law and policy at Stanford. The other two instructors were a professor of computer science named Dan Boneh and a fellow at the Hoover Institution, Andrew Grotto, who used to be a top cybersecurity policy official at the White House. The two of them had been working on a paper together about deepfakes, and they talked about that during our final session of class.
That piqued my interest, because deepfakes get at a concept that is very important in both encryption and cybersecurity more generally: authentication. How can I trust that the person I think I'm chatting with over a messaging app is in fact that person and not an interloper? How can I trust that a piece of information I'm retrieving from a database is accurate and correct, and wasn't tampered with at some point before I pulled it? And what kind of proof will satisfy me, depending on the context? That will be different if I'm acting as a fact-finder in a criminal case in court, versus if I'm just casually texting with a friend.
➤ In an upcoming issue of NWLawyer, you write about the evidentiary issues that will emerge if deepfake videos make their way into court cases. What led you to consider those implications?
At the same time as I was co-teaching this class last spring, there were two new additions to Federal Rule of Evidence 902: FRE 902(13) and (14), which had gone into effect at the end of 2017. They're both about that same concept, authentication—specifically, of electronically stored information (ESI). And by and large I think those amendments are going to do a lot to streamline the admission of electronic evidence. But Professor Boneh taught the class that methods of digitally authenticating videos, such as watermarking or cryptographic signatures, are still susceptible to manipulation. So, when I got up to talk about Rules 902(13) and (14), I cautioned the students that these new amendments will not be a magic bullet to keep deepfakes from creeping into evidence.
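To make the authentication point concrete: Rule 902(14) contemplates certifying copied ESI through "digital identification," most commonly a hash-value comparison. Below is a minimal Python sketch of that check; the file name and certified digest are hypothetical. And note the limitation Pfefferkorn describes: a matching hash shows only that the file is unchanged since it was hashed, not that the footage it contains is genuine.

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute a file's SHA-256 digest, reading in chunks so large videos fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical digest certified by the technician who collected the file.
CERTIFIED_DIGEST = "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae"

if sha256_of_file("exhibit_a_video.mp4") == CERTIFIED_DIGEST:
    print("Match: file is bit-for-bit identical to the certified copy.")
else:
    print("Mismatch: file has been altered since certification.")

# Caveat: a match proves integrity since hashing, not authenticity of the
# depicted events; a deepfake hashed at collection will still match later.
```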
Before coming to Stanford, I spent several years as a litigation associate at a large law firm, and before that I clerked for a magistrate judge. So even though I'm now in this more academic role, I'm still interested in the nuts and bolts of pre-trial and trial practice. I find procedural issues fascinating. And that's what really grabbed me about deepfakes: How are we going to deal with them when parties start trying to introduce them into evidence in court? Are the rules courts have developed over the years to guard against forged or tampered evidence, from handwritten documents to digital photographs, going to hold up? Or are we going to need new rules?
➤ You point to a number of challenges for courts “in the not-too distant future” if deepfake videos make their way into litigation. How fast is that future approaching, and do you think litigators and judges are sufficiently alert to these issues?
The verisimilitude of AI-generated photo and video seems to be growing by leaps and bounds on a near-daily basis. AI is now capable of generating fake human faces in which I, for one, cannot detect any tell-tale signs that they aren't real photos. So I think that future is probably coming, maybe not this year, but definitely in the next couple of years. The tools for making deepfakes are becoming ever more sophisticated. Right now the most impressive AI-generated images and video are coming from academic teams and teams at big companies — places with deep resources to devote to AI. But that progress will also trickle down rapidly to the deepfake tools that are available to anyone on the Internet to use for free. So the abusive applications of deepfakes, which are already happening, are only going to ramp up.
First we will probably see litigation about deepfakes, where the plaintiff is someone who's been victimized by a video purporting to depict her saying or doing something she didn't do, and she's trying to recover under some tort theory. But with the advances in free, readily available tools, I think in the next couple of years we'll also see deepfakes creeping into run-of-the-mill cases. In those cases, the deepfake won't be the basis of the cause of action; it'll be just another piece of evidence in the case. That's been true of social media: evidence from social media now plays a part in a wide array of cases, not just cases that are about social media platforms (e.g., cyberbullying). I think it will be true of deepfakes too.
I definitely don't think litigators and judges are thinking about these issues sufficiently yet. But we should be getting ready, while we still have a little lead time before deepfakes start cropping up everywhere. That's where I'm planning to go next in my work on deepfakes: developing practical guidelines and suggestions for how courts should go about the task of rooting out deepfake evidence, what the dos and don'ts are for litigators as they're collecting evidence for their case, and maybe also the role of expert witnesses. Experts are yet another part of the picture of deepfakes in the courtroom. This is such a cutting-edge issue that there are only a few people who right now are qualified enough to give expert opinions as to whether or not something is a deepfake. If deepfakes come up in enough cases, that handful of individuals are going to be in very high demand. So in addition to the need for lawyers and judges to prepare, I also foresee an issue with the expert pipeline.
Deepfakes in the courtroom is such a big topic. There's a lot of work to do.
➤ With the rise of deepfakes, do you see a risk that jurors may discount authentic video and audio evidence?
Yes. Once a video has been authenticated and the court has admitted it into evidence, it's for the jury to decide how much weight to give it, and the opposing party may make arguments to try to minimize its weight. I think juries may be more easily persuaded by such arguments now, in the age of “fake news,” than they might have been in the past, thanks to public awareness of the deepfakes phenomenon.
We might even see a kind of “reverse CSI effect,” where juries may expect the proponent of a piece of video or audio evidence to employ a lot of high-tech bells and whistles to persuade them that real evidence is not fake, even after it's been admitted. But that's expensive and time-consuming, and that shouldn't be what it takes to get juries to keep believing what's real is real. Right now in my research on this topic, I'm thinking through other options, such as whether the proponent could ask for a jury instruction (and expect it to be heeded).
With that said, the public has also been aware of other kinds of fakery, such as forged signatures and Photoshopped images, and those didn't drive juries to total nihilism about whether it's possible to know what's real. So my hope is that both judges and juries will take deepfakes in stride. Time will tell.
➤ Aside from the litigation context, where do you anticipate that practicing lawyers may encounter deepfakes?
Wherever videos come into play, that's a chance for deepfakes to become an issue. And that means a range of practice areas. For example, say you are an M&A lawyer doing due diligence on a possible deal between your client and another company. If a fake video surfaces that seems to show the company's CEO making racist or sexist remarks, or stating that the company's marquee product does not work as well as advertised, that could influence your client's decision about whether to go forward with the deal. For corporations, the well-timed release of a deepfake video could mess with a lot of business dealings, attract regulatory attention, hinder investment and recruiting, and anger shareholders.
The problems aren't limited to the corporate context. In employment matters, a deepfake video might lead to an employee's termination, or cause a job candidate not to be hired. Think, too, of matters of death and incapacity. In a will contest, for example, a deepfake video might be used to persuade the probate court that the decedent was, or was not, of sound mind at the time of the will signing. Or, someone might fall into a persistent vegetative state without having an advance health care directive in place. If there is a dispute among her loved ones about what her wishes would have been, a fake video might affect the dispute by supposedly depicting her talking about what she wanted.
We can foresee a range of legal settings in which deepfakes might come up, and they won't be limited to the litigation context. That means attorneys of all stripes need to be thinking about the role video recordings play in their practice and how deepfakes might affect that.
—Vanessa Blum
|
Zoning Law and Artificial Intelligence: It's Going to Be a Thing. Really
Autonomous vehicles are poised to transform life, especially in major cities. But most of the focus has been on issues surrounding transportation flow, the environment and insurance. When they arrive en masse, though, self-driving cars will also transform public policy in a way that many lay people—or even attorneys—might not expect.
Eric Tanenblatt, the global chair of public policy and regulation at Dentons, recently sat down for an interview with Legaltech News, detailing the multitude of ways autonomous vehicles will be transformative. One way surprised us, though: zoning laws. Don't see the connection? Well, consider that autonomous vehicle fleets, like Ubers and Lyfts right now, will be expected to be constantly moving, Tanenblatt said. Then add in that many people will no longer own their own cars, relying instead on these constantly moving fleets. “What that means is that there won't be the need for parking decks and parking garages, parking lots to the extent we have them now. That frees up the space for more economic development or green space, that's going to require local governments to change some of their zoning laws,” he explained. He later added, “There may be new requirements where you need to add drop off and pick up access, because there will be so many vehicles driving around.”
It's a strange thing to think about—parking lots are a staple of American life at this point. Singing, “they paved paradise and put up a Waymo drop off point” doesn't quite have the same ring to it. And yet, these are the potential realities of the new AV age.
Some may say that day is far in the future, given the way many regulatory bodies—local and national alike—are slow to react. But, Tanenblatt explained, that's no reason to ignore the possibility for now. “It's not slowing private industry down, and I don't foresee it slowing down in the near term,” he said. “What's going to happen is the government is going to need to catch up. And until the federal government does, it's in every company's best interest to understand what the rules are in each local jurisdiction.”
—Zach Warren
|
3 Things Labor and Employment Lawyers Should Know About Using AI in Hiring
Kelly Trindel is head of industrial organizational science and diversity analytics at pymetrics, Inc., where she helps the company proactively test for hidden racial/ethnic and gender biases in assessment tools. The New York-based startup uses games based on cognitive neuroscience and artificial intelligence to help employers find the candidates who best fit their needs, while reducing gender and racial biases.
Before joining the company in 2018, Trindel, who has a Ph.D. in experimental psychology, was chief analyst and research director at the U.S. Equal Employment Opportunity Commission in Washington, D.C. While there, she provided statistical and analytical support to the commission's discrimination investigations and case development for nearly eight years during the Obama administration.
Pymetrics's cloud-based assessment tools are being used by Tesla, LinkedIn and Unilever, among others. The company raised $40 million in venture capital funding last fall.
Trindel spoke with us recently about what labor and employment lawyers need to know about the use of artificial intelligence and machine-learning in recruitment and hiring. Her remarks are edited for brevity and clarity.
➤ Assessment and hiring tools must comply with the federal Uniform Guidelines on Employee Selection Procedures, under Title VII of the Civil Rights Act, which have been around in some form since 1978. My message to labor lawyers is that the guidelines are still relevant. Old regulations still matter, and labor lawyers should be asking vendors of tools like pymetrics whether, and how, they comply with the guidelines. If I were still at the EEOC and investigating the use of a tool from pymetrics or another company, I would be looking at the uniform guidelines.
➤ Artificial intelligence offers opportunities to improve the fairness and validity of assessment tools. AI tools give us new ways of de-biasing. Before we go live with a model at pymetrics, we test it with a group of people that we call a bias set: people who have played our games and voluntarily given us their race, ethnicity and gender. In this way, we can see before going live whether there is a significant difference in performance by demographic group, such as men outperforming women or whites outperforming Hispanics. If there is, we can identify the predictors that cause the differences and remove them from the model.
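To illustrate the kind of pre-launch group-difference check Trindel describes, here is a minimal Python sketch of the four-fifths (80 percent) rule from the Uniform Guidelines: each group's selection rate is compared against the highest group's rate, and a ratio below 0.8 is generally treated as evidence of adverse impact. The sample numbers are invented, and this is a simplified illustration, not pymetrics's actual methodology.

```python
# Simplified four-fifths (80%) rule check from the Uniform Guidelines on
# Employee Selection Procedures. Illustrative only; sample data are made up.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group to its selection rate (selected / assessed)."""
    return {group: selected / assessed
            for group, (selected, assessed) in outcomes.items()}

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each group's selection rate as a fraction of the highest group's rate.

    Under the four-fifths rule, a ratio below 0.8 for any group is
    generally regarded as evidence of adverse impact.
    """
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Hypothetical bias-set results: (candidates passing, candidates assessed).
results = {"men": (48, 100), "women": (30, 100)}

for group, ratio in impact_ratios(results).items():
    flag = "ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

In the workflow Trindel describes, a flag like the one above would prompt a look at which predictors drive the gap so they can be removed before the model goes live.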
➤ Be aware that there seems to be special scrutiny at the EEOC of facial recognition technology, and not just in employment. That scrutiny stems in part from MIT research finding that facial recognition has had trouble accurately detecting the facial expressions of minorities, especially women of color. In September 2018, a group of U.S. senators, including Sen. Kamala Harris, sent a letter to the acting chair of the EEOC asking about the commission's perspective on the use of facial recognition technology and AI in employment selection, and it is useful for labor lawyers to know this is a focus. To my knowledge, the commission has not issued an official response, which may be because the EEOC currently lacks a quorum. But it is something for labor lawyers to be aware of.
—MP McQueen
|
On the Radar:
Opt-in or Opt-out: That question was at the center of a data privacy hearing before the Senate Judiciary Committee in Washington, D.C., this week. Google, Intel and other company representatives, along with privacy advocates, served as panelists for the hearing, which focused on the recent data protection laws in California and the European Union. Read more from Caroline Spiezio here.
Staffing Up: Akin Gump Strauss Hauer & Feld has picked up U.S. Federal Trade Commission official Haidee Schwartz, who was most recently the acting deputy director of the Bureau of Competition. She saw the agency square off with major companies over a string of proposed combinations, such as the merger of prosthetics makers Otto Bock HealthCare and FIH Group Holdings; the attempted merger of fantasy sports platforms DraftKings and FanDuel; and deals involving Staples and others. Read more from Ryan Lovelace here.
GCs on Legal Tech: Ian McDougall, the general counsel of LexisNexis, offered his predictions on legal tech adoption, increasing legal department sizes and changing inside-outside counsel relationships while speaking at a Stanford University event this past week. An increased focus on efficiency and value has also led in-house counsel to tap into advancing legal tech, he said, noting general counsel don't want to pay outside counsel high rates for a job that could be done by a computer. Read more from Caroline Spiezio here.