The European Union is at the vanguard of artificial intelligence (AI) regulation after its lawmakers overwhelmingly approved the EU AI Act June 14. Intended to foster AI development while minimizing its threats, the regulation would categorize AI systems into four risk tiers, from minimal to unacceptable. Most applications that use AI today, such as ChatGPT, fall into the low- or no-risk tier.
The proposed law, which isn’t expected to go into effect until 2025, would target “social scoring systems” that make judgments based on a person’s behavior or appearance, applications that subliminally manipulate children and other vulnerable groups, and predictive policing tools.
Violations could bring substantial fines, up to €30 million or six percent of a company’s annual global revenue, whichever is higher. For large technology companies like Google or Microsoft, fines could amount to billions.
The law draws comparisons to the EU’s other regulations, such as the General Data Protection Regulation, which can levy fines of up to €10 million or two percent of a firm’s annual worldwide revenue from the preceding financial year.
Each of the EU’s 27 member states will enforce the rules in its jurisdiction.