As the debate continues over the bias of algorithms and artificial intelligence tools used in important matters ranging from loan approvals to prison sentences, the San Francisco District Attorney’s Office is taking a different approach. Last week, the office announced it is turning to an artificial intelligence-powered bias mitigation tool to redact any race-specific language before a police officer’s incident report hits a prosecutor’s desk.
The software is an effort to remove implicit bias from prosecutors' charging decisions. But observers say the program will need ongoing monitoring, since algorithms are only as objective as the data and rules they are built on.