‘Setting the standard’: EU unveils plan to rein in risky AI uses
The proposal would ban AI in ‘social scoring’ systems that judge people based on behaviour and physical traits.
European Union officials unveiled proposals Wednesday for reining in high-risk uses of artificial intelligence (AI) such as live facial scanning that could threaten people’s safety or rights.
The draft regulations from the EU’s executive commission include rules on the use of the rapidly expanding technology in systems that filter out school, job or loan applicants. They also would ban artificial intelligence outright in a few cases considered too risky, such as “social scoring” systems that judge people based on their behaviour and physical traits.
The ambitious proposals are the 27-nation bloc’s latest move to maintain its role as the world’s standard-bearer for technology regulation, putting it ahead of the world’s two big tech superpowers, the United States and China. EU officials say they are taking a “risk-based approach” as they try to balance the need to protect rights such as data privacy against the need to encourage innovation.
“With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted,” Margrethe Vestager, the European Commission’s executive vice president for the digital age, said in a statement. “By setting the standards, we can pave the way for ethical technology worldwide and ensure that the EU remains competitive along the way.”
The proposals also include a prohibition in principle on controversial “remote biometric identification”, such as the use of live facial recognition to pick people out of crowds in real time, because “there is no room for mass surveillance in our society”, Vestager said in a media briefing.
There would, however, be an exception for narrowly defined law enforcement purposes, such as searching for a missing child or a wanted person, or preventing a terrorist attack or threat. But some EU lawmakers and digital rights groups called for the carve-out to be removed, fearing it could be used to justify widespread future use of the intrusive technology.
The draft regulations also cover AI applications that pose “limited risk”, such as chatbots, which would have to be labelled so that people know they are interacting with a machine. Most AI applications would be unaffected or covered by existing consumer protection rules.
Violations could result in fines of up to 30 million euros (more than $36m) or, for companies, up to six percent of their global annual revenue, whichever is higher, although Vestager said authorities would first ask providers to fix their AI products or remove them from the market.
The proposals still have to be debated by EU lawmakers and could be amended, a process that may take several years. They would apply to anyone who provides an artificial intelligence system in the EU or uses one that affects people in the bloc.
EU officials, trying to catch up with the US and Chinese tech industries, said the rules would encourage the industry’s growth by raising trust in artificial intelligence systems and by introducing legal clarity for companies that use AI.