After years of hype and product marketing campaigns, defining a concept like artificial intelligence without going too broad or too narrow remains tricky business for regulators and consumers alike. But regulators looking to mitigate the risks AI can pose in areas like education, law enforcement or even employment decisions, all without stymying technological innovation, may find themselves consistently erring on the side of "broad."

This may already be the case with the initial proposal for the Artificial Intelligence Act (AIA) unveiled by the European Commission last April. The proposed legislation attempts to thread the needle between broad and narrow approaches by classifying AI systems according to four risk categories: unacceptable risk, high risk, limited risk and minimal risk.
