The European Commission on Wednesday unveiled new regulations governing the use of artificial intelligence, the first such attempt by a government regulator to oversee a technology that has produced significant scientific breakthroughs but remains at times controversial.

The draft rules, which the EU said were written to create transparency around the use of AI and to ban “systems considered a clear threat to the safety, livelihoods and rights of people,” would set limits on the use of AI across a wide range of activities, including loan applications, self-driving cars, hiring decisions and infrastructure. The Commission is also proposing stricter limits on the use of biometrics, such as facial recognition by law enforcement.

“On artificial intelligence, trust is a must, not a nice-to-have,” Margrethe Vestager, the European Commission executive vice president who oversees digital policy for the 27-nation bloc, said in a statement. “With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted.”

The framework foresees a hefty enforcement regime for noncompliant companies, with penalties of up to 6% of a company’s global annual turnover.

“By providing a strict regulatory framework early in this rapidly evolving field, the EU seems to aim at intercepting the work under preparation by existing Big Tech working groups and set a new worldwide standard for human-friendly AI ahead of the other regulators,” Kristof De Vulder, managing partner for DLA Piper in Belgium, told Law.com International.