Last week, the U.S. and 35 other member countries of the Organisation for Economic Co-operation and Development (OECD) agreed to a set of intergovernmental standards pertaining to the use of AI.
Before anyone gets too excited, standards are just that—not rules, per se. But they do lay out expectations that AI will be used in ways that protect human rights and prioritize safety and security, and they place some measure of accountability at the feet of the people deploying the technology. That might be enough to seriously impact the future of AI.