Encouraging Transparency and Accountability, Advocates Push Governments to Confront Machine Bias
Those looking to combat machine bias in government services are pushing for greater transparency into how technologies like AI work. Will tech companies be willing to cooperate?
August 29, 2018 at 10:00 AM
7 minute read
In a bid to combat technology biases, two dozen legal and tech organizations, including The Legal Aid Society, The Brennan Center for Justice at NYU School of Law, and the AI Now Institute, have sent a letter of recommendations to New York City's Automated Decision Making Systems task force.
The task force, launched by the New York City Council in late 2017 and made up of a number of academics and civil rights advocates, is responsible for looking at ways to regulate the use of automated decision making technology by the New York City government. It is due to release its final report on the subject by December 2019.
While the recommendations advocated by the two dozen organizations were written with New York City government agencies in mind, they are intended to apply broadly to the use of automated decision making systems in any local and state government agency in the U.S. In fact, some recommendations are already being implemented in other jurisdictions.
To be sure, turning such recommendations into reality across the country, let alone in New York City, can be difficult. Pushing technology companies to become more transparent and accountable in how they develop their products, after all, will likely be met with some resistance.
When used to manage or streamline government services, automated decision making systems, which use algorithms to make determinations and include most artificial intelligence-powered platforms, can negatively affect certain groups if they contain inherent biases.
Just look at the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) analytics tool, which has been used in some U.S. state courts to predict inmate recidivism. A 2016 investigation by ProPublica found that COMPAS was biased, disproportionately flagging African-American defendants as likely to reoffend.
Rashida Richardson, one of the signers of the letter and director of Policy Research at The AI Now Institute, an interdisciplinary research center at NYU looking at the social implications of AI, noted that the letter's recommendations can help combat such biases, and can be applied well beyond New York.
“We do encourage advocates and government officials in other jurisdictions to consider the recommendations as well,” she said.
Still, such recommendations may not receive the warmest welcome from some in the tech industry. One recommendation, for instance, calls on developers that sell products to government agencies to release certain information, such as the product's source code, a description of the algorithms used, and the training data set, to “allow the public to meaningfully assess how such system functions.”
Richardson said that some in the tech space “may be hesitant” to agree to the recommendation, “evident by the fact that a lot of vendors have tried to use trade secret claims to block access to [even the] less technical information about the systems they provide in government.”
But she added that it's not a sure bet that such trade secret claims will hold up in court. “The source code in itself will not necessarily reveal everything about a system, you would actually need other [information] to raise concerns over trade secrets,” she said.
What's more, she noted that there is also a “movement by some vendors to make source code or other technical details about a system open source, so what exactly one specific vendor will do is not” indicative of an industry wide resistance.
Rachel Levinson-Waldman, another letter signee and senior counsel at the Brennan Center for Justice, added that the push for vendors to release source code is vital because it's tied to transparency and accountability.
“It has to do with the ability to audit and say here's what's going into this system, what's going out of this system and to what extent either parts of it [are] affected by bias,” she said.
To bring more transparency to these systems, the letter also recommends that each developer explain, in plain language, how their automated decision making system comes to its determinations.
But clear explanations of how every system works may not always be possible. At last week's ILTA conference, for example, Daniel Katz, an associate professor of law at Chicago-Kent College of Law, explained that some AI methods “are much more of a 'black box', like deep [neural] learning, while other methods are much more explainable.”
Richardson, however, noted that “the systems that are currently used by government agencies are not as advanced, not the sort of neural learning systems.” Still, she said, such neural learning “is the way the tech is going and that can and will be a problem.”
Marc Canellas, another letter signee who is a voting member of the IEEE-USA Artificial Intelligence and Autonomous Systems Policy Committee, and a former researcher at the Cognitive Engineering Center at the Georgia Institute of Technology, said that there will always likely be some limits to what developers can explain.
“I don't think we're looking for complete descriptions of what these algorithms are doing,” he said.
But he argued that such explanations should be comprehensive enough to make the developers and users of the technology feel they have responsibility for its decisions: “The government's use of automated decision systems can increase the distance between the person making the decision and the person being affected and that can really affect accountability and responsibility.”
To further instill accountability, the letter also recommends that any system found to discriminate against a particular community or group of people be redesigned with input from those negatively affected.
While there may be resistance in the tech world to inclusive, collaborative product redesigns, Richardson argued such ideas are far from impossible. Indeed, such revamps are already happening in some states.
In 2012, the United States District Court for the District of Idaho ruled that a contractor with Idaho's Medicaid program had to reveal the algorithm used to determine annual Medicaid funding levels. The American Civil Liberties Union had raised a red flag after it was disclosed that thousands of people in the state were losing their Medicaid coverage.
Upon examining the algorithms, the ACLU found the formulas to be biased and based on incorrect data sets. The organization filed a class action lawsuit in 2015, K.W. v. Armstrong, arguing the algorithms should not be used. The district court ultimately agreed, and the funding that had been cut from the state's Medicaid program was restored.
In October 2016, the ACLU also gained court approval for a settlement with the Idaho Health and Welfare Department, committing it to multi-year overhauls of the algorithms it uses to determine eligibility and funding for certain programs with the help of an outside expert. The department also agreed to have its algorithms regularly verified for accuracy.
Idaho isn't the only state to have problems with automated decision making systems. In March 2018, the Pulaski County Circuit Court in Arkansas ruled that the Arkansas Department of Human Services had to stop using an algorithm to determine how many hours of home care to allocate to disabled Medicaid recipients in the state. The ruling, which was in response to a lawsuit by Legal Aid Arkansas, required the state to first submit the algorithms for public comment and review by the state's legislature before they could go into effect.
How governments in Idaho and Arkansas and throughout the U.S. deal with automated decision systems can differ widely, in part because there is no standard way to approach such technology from a legal and social perspective. But such discrepancies are a big part of the reason 24 organizations are focused on directing the New York City task force's work.
“When you have one of the largest types of municipal agencies making those requests” from automated decision systems developers, “it is important, because it may encourage smaller governments to follow suit,” Richardson said.