The Liabilities of Artificial Intelligence Are Increasing
The longer regulators wait, the more widely used algorithmic decision-making systems become. In the process, the concrete harms these technologies can cause are becoming clear. So what can companies do?
June 15, 2020 at 07:00 AM
The original version of this story was published on Legal Tech News
"With the proliferation of machine learning and predictive analytics, the FTC should make use of its unfairness authority to tackle discriminatory algorithms and practices in the economy."
This statement came from FTC Commissioner Rohit Chopra at the end of May. The fact that these words followed a more formal blog post from the regulator focused on artificial intelligence—in the midst of a global pandemic, no less—highlights what is becoming the new normal: liability arising from the use of algorithmic decision-making is increasing. This holds true with or without new federal regulations on AI.
For those paying attention to the rapid adoption of AI, this trend might come as no surprise—especially given that regulators have been discussing new regulations on AI for years (as I've written about here before). But the increasing liability of algorithmic decision-making systems, which often incorporate artificial intelligence and machine learning, also stems from a newer development: the longer regulators wait, the more widely used AI becomes. In the process, the concrete harms these technologies can cause are becoming clear.
Take, for example, automated screening systems for tenants, which the publication The Markup recently revealed have been plagued by inaccuracies that have generated millions of dollars in lawsuits and fines. "With about half of the nation's 43 million rentals turning over every year," according to The Markup, "even an error rate of 1 percent could upend the lives of hundreds of thousands of people." Among those people is, for example, Hector Hernandez-Garcia, who, along with his wife and newborn son, became temporarily homeless after being incorrectly profiled by one such algorithm. (Hernandez-Garcia sued; the company settled.)
Or take the Michigan Integrated Data Automated System, used by the state to monitor filings for unemployment benefits, which was also recently alleged to have falsely accused thousands of citizens of fraud. Class action lawsuits have been filed against the state, alleging a host of problems with the system and demonstrating that automated systems create harms that are as hard to detect as they are injurious.
Then there's the recent lawsuit against Clearview AI, filed in Illinois at the end of May by the ACLU and a leading privacy class action law firm, alleging that the company's algorithms violated the state's Biometric Information Privacy Act. That act, which other states have sought to imitate in recent years, limits the ways that data like fingerprints or facial images can be used and carries fines of up to $5,000 per violation.
In other words, the list of lawsuits, fines and other liabilities created by AI is long and getting longer. The non-profit Partnership on AI even recently released an AI incident database to track how models can be misused or go awry.
All of which means that organizations adopting AI are creating concrete liabilities in the process. Indeed, these harms are becoming more apparent to regulators and consumers alike every day. As the fallout from the pandemic creates new pressure for organizations to embrace automation, the adoption of AI is likely to accelerate even more.
So what can companies do?
The first answer is to have plans in place for when AI causes harm. There is a burgeoning field of AI incident response—similar to traditional cybersecurity incident response—focused on crafting clear plans for how to react when algorithms misbehave. This type of algorithmic misbehavior might have internal causes, like when the data the AI was trained on differs too widely from data in the real world. Or it can have external causes, like an attacker attempting to manipulate the algorithm.
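To make that internal cause concrete, the short sketch below shows one way a team might watch for training-versus-production data drift. It is illustrative only: it uses a standard two-sample Kolmogorov-Smirnov test from SciPy, and the income figures and alert threshold are made-up assumptions, not values from any real system.

    import numpy as np
    from scipy.stats import ks_2samp

    def drift_alert(training_values, production_values, threshold=0.1):
        # Two-sample KS statistic: larger values mean the live data has moved
        # further away from the data the model was trained on.
        statistic, _p_value = ks_2samp(training_values, production_values)
        return statistic > threshold  # the 0.1 cutoff is an illustrative choice

    # Hypothetical numbers: applicant incomes seen at training time vs. this month.
    train_income = np.random.normal(50_000, 10_000, size=5_000)
    live_income = np.random.normal(62_000, 12_000, size=5_000)

    if drift_alert(train_income, live_income):
        print("Significant drift detected: trigger the AI incident response plan.")

A check like this does not prevent an incident by itself, but it gives an incident response plan something concrete to trigger on.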
Whatever the cause, there's a range of materials that lawyers can use to help their organizations prepare, like this series of articles focused on legal planning for the adoption of AI. (I've contributed directly to that literature as well.)
Second is asking the right questions to mitigate major risks before they emerge. To help lawyers in this role, my boutique law firm, bnh.ai, teamed up with the non-profit Future of Privacy Forum earlier this month to release "10 Questions on AI Risk." These basic questions can help guide lawyers as they seek to understand key areas of liability created by AI.
Last, and perhaps most important, is not waiting until an incident occurs to address AI risks. When an incident does occur, it's not simply the incident itself that regulators or plaintiffs scrutinize; it's the entire system in which the incident took place. That means that reasonable practices for security, privacy, auditing, documentation, testing and more all have key roles to play in mitigating the dangers of AI. Once an incident occurs, it's frequently too late to avoid the most serious harms.
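As one illustration of what pre-deployment testing can look like, the hypothetical check below applies the four-fifths rule of thumb that regulators have long used when screening for disparate impact. The approval rates, group labels and 0.8 benchmark are all assumptions made for the example, not figures from any actual system.

    def adverse_impact_ratio(protected_rate, reference_rate):
        # Ratio of the protected group's approval rate to the reference group's.
        return protected_rate / reference_rate

    def test_screening_model_for_adverse_impact():
        # Hypothetical approval rates produced by an automated screening model.
        reference_group_rate = 0.62
        protected_group_rate = 0.45
        ratio = adverse_impact_ratio(protected_group_rate, reference_group_rate)
        # The four-fifths rule treats ratios below 0.8 as a red flag for disparate impact.
        assert ratio >= 0.8, f"Adverse impact ratio {ratio:.2f} is below the 0.8 benchmark"

Running a test like this as part of routine model review creates exactly the kind of documentation and auditing trail that regulators and plaintiffs will look for after an incident.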
An ounce of prevention, to quote the old proverb, is worth a pound of cure. And that's true now more than ever for organizations adopting AI.
Andrew Burt is managing partner at bnh.ai, a boutique law firm focused on AI and analytics, and chief legal officer at Immuta.