European Union officials want to sharply curtail the use of artificial intelligence (AI) in certain settings, including law enforcement’s use of facial recognition technology, and outright ban it in other cases, as detailed in a newly introduced bill.
The move comes amid the growing use of AI in cybersecurity products and mounting regulatory scrutiny of AI worldwide.
The measure proposed by the European Commission, the EU’s executive branch responsible for proposing legislation, would regulate how AI is developed and deployed, particularly in circumstances in which people’s safety or fundamental rights are imperiled. Restricted “high risk” uses would include critical infrastructure and crime profiling, as well as personal matters ranging from recruitment to credit rating and immigration. Violators would be subject to fines of up to six percent of worldwide annual revenue.
On a wide range of issues, most recently data privacy, the EU has positioned itself ahead of other countries in chipping away at potential overreach by big technology companies, a posture that has directly affected Amazon, Facebook, Google, Microsoft and others, all of which have made AI development a priority.
“Our regulation addresses the human and societal risks associated with specific uses of AI,” said Margrethe Vestager, executive vice president of the European Commission. “We think that this is urgent. We are the first on this planet to suggest this legal framework,” she said. “On artificial intelligence, trust is a must, not a nice to have. With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted.”
Moving the bill from proposal to law will not be easy, in either substance or timing. The proposal must be approved by the European Council, which represents the 27 EU member states, and by the European Parliament, whose elected members will surely face heavy lobbying.
Many uses of AI, including gaming and junk mail filtering, would not be affected by the regulations. But circumstances in which a person interacts with a machine, such as a chatbot, and video deepfakes would be regulated, Vestager said.
AI and machine learning are on steep growth curves for many cybersecurity providers, positioned as the centerpiece of product development, business strategy and value proposition. A recent study of more than 200 organizations and technology partners found that half of the worldwide cybersecurity-related patents filed in the last four years focus on applications of artificial intelligence and machine learning. Nearly half the organizations in the study are expanding cognitive detection capabilities in their security operations centers to tackle unknown attacks. Channel partners and cybersecurity providers will need to carefully balance AI development against data privacy and other security-related concerns.
AI and Mass Surveillance Concerns?
Critics said the EU measure falls short of outlawing some of the most egregious uses of AI, such as mass surveillance by governments, profiling and technology to predict people’s behavior in public spaces. “Biometric and mass surveillance, profiling and behavioural prediction technology in our public spaces undermines our freedoms and threatens our open societies,” Patrick Breyer, a European Parliament Greens party lawmaker, told Reuters, calling the proposed requirements a “mere smokescreen.”
ChannelE2E, MSSP Alert’s sister site, features a lengthy, comprehensive accounting of AI regulations, policies and privacy issues in a regularly updated blog.