President Biden has signed a far-reaching Executive Order on artificial intelligence (AI) that asserts federal powers of oversight over companies building out AI technologies where a “serious risk to national security, national economic security or national public health and safety” may come into play.
According to a White House fact sheet, the EO “establishes new standards for AI safety and cybersecurity, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.”
The EO is intended to establish and cement American leadership in AI, spanning:
- Managing both the promise and the risks of AI
- Establishing new standards for AI safety and security
- Protecting Americans’ privacy
- Advancing equity and civil rights
- Standing up for consumers and workers
- Promoting innovation and competition
- Advancing American leadership around the world
Protecting the Public Against Deep Fakes
The EO also aims to curb the dangers of so-called deep fakes that can alter public opinion to sway elections or influence the marketplace, the fact sheet said.
“Deep fakes use AI-generated audio and video to smear reputations, spread fake news and commit fraud,” Biden said at the signing event.
According to news reports, Biden described how hackers can siphon a few seconds of a person’s voice and manipulate it to say something other than what was originally intended. In the hands of cyberattackers, such capabilities can readily be used for political gain.
Federal Agencies to Set, Enforce AI Standards
The EO results from the work of a number of federal entities and constitutes what amounts to a first draft of AI policy. Biden’s ability to reach into private industry is limited, although the President’s powers do extend to regulating how the government uses AI.
The policies and processes primarily involve three federal entities:
- The Department of Commerce’s National Institute of Standards and Technology (NIST) will set standards for extensive red-team testing to ensure safety before public release.
- The Department of Homeland Security (DHS) will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board.
- The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks.
“Together, these are the most significant actions ever taken by any government to advance the field of AI safety,” the government said.
Cyber Experts Weigh In
A number of cybersecurity community leaders weighed in on the importance of the Executive Order.
"A responsible data-centric approach should underpin every AI implementation, and the standards being set for government agencies should also be upheld across private and public organizations as well,” said Rehan Jalil, Securiti president and chief executive.
Jalil added, “It is crucial to acknowledge that once AI systems gain access to data, that information becomes an integral part of the system permanently; we cannot afford to trust AI technology without the appropriate controls and set responsibilities."
Andrew Costis, AttackIQ chapter lead of the adversary research team, said that understanding the attack surface of AI technology is important, especially with respect to known adversary behaviors.
“AI technologies will still be susceptible to certain attacks such as data exfiltration, supply chain attacks, and data manipulation," he said. "A number of ways already exist today to safely emulate these known adversary behaviors in order to better plan and prepare for possible AI attacks in the future."
Because AI is a “dual-use technology,” it has the potential to “usher humanity forward” or “regress our advancements,” said Shreyans Mehta, Cequence Security founder and chief executive.
“We are in the early phases of AI innovation and it is important to bring in government regulations for the responsible development of this technology,” Mehta said. "Unregulated development can lead to a significant impact on the security and privacy of the public, sometimes even without their knowledge. Also, AI hallucinations have the potential to spread mass disinformation. Government involvement will put guardrails around accountability and transparency of the generated content."