
NSA Report: Deepfakes Threaten National Security


The National Security Agency (NSA) and other federal agencies have issued a new report on deepfake media that aims to help organizations identify, defend against and respond to altered and bogus multimedia.

What are Deepfakes?

Deepfake refers to media that has been synthetically created or manipulated using some form of machine learning or artificial intelligence (AI) technology, the agencies said. What used to be described as computer-generated imagery (CGI), now omnipresent in movies, has evolved to encompass synthetically generated and/or manipulated media, also termed shallow/cheap fakes and generative AI.

Synthetic media can threaten a company’s brand and reputation, impersonate its management and executive team, and interfere with its communications and the protection of sensitive information. It can also fuel disinformation campaigns that foment social unrest over political and other societal issues, the agencies said.

The jointly produced Cybersecurity Information Sheet (CSI), entitled “Contextualizing Deepfake Threats to Organizations,” is authored by the NSA, the Federal Bureau of Investigation (FBI) and the Cybersecurity and Infrastructure Security Agency (CISA). Collectively, the agencies said that deepfakes pose significant threats to National Security Systems (NSS), the Department of Defense (DoD) and Defense Industrial Base (DIB) organizations.

“The tools and techniques for manipulating authentic multimedia are not new, but the ease and scale with which cyber actors are using these techniques are,” said Candice Rockell Gerstner, NSA Applied Research Mathematician who specializes in Multimedia Forensics. “This creates a new set of challenges to national security. Organizations and their employees need to learn to recognize deepfake tradecraft and techniques and have a plan in place to respond and minimize impact if they come under attack.”

Minimizing the Impact of Deepfakes

The CSI advises organizations to consider implementing a number of technologies to detect deepfakes and determine the provenance of multimedia, including:

  • Real-time verification capabilities
  • Passive detection techniques
  • Protection of high-priority officers and their communications
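As a concrete illustration of the provenance idea behind the first bullet (this sketch is not from the CSI itself), an organization can publish cryptographic digests of its authentic media so recipients can verify a file before trusting it. The manifest name `ceo_statement.mp4` and the helper functions below are hypothetical, and real provenance schemes (such as C2PA content credentials) embed signed metadata rather than a bare hash table:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of raw media bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_provenance(media: bytes, manifest: dict, name: str) -> bool:
    """Check a media file's digest against a trusted manifest entry.

    Returns False if the name is unknown or the digest does not match,
    i.e., the file was altered after the manifest was published.
    """
    expected = manifest.get(name)
    return expected is not None and expected == sha256_of(media)

# Hypothetical manifest published over a trusted channel by the comms team.
original = b"...official video bytes..."
manifest = {"ceo_statement.mp4": sha256_of(original)}

tampered = original + b"\x00"  # any alteration changes the digest

print(verify_provenance(original, manifest, "ceo_statement.mp4"))  # True
print(verify_provenance(tampered, manifest, "ceo_statement.mp4"))  # False
```

A hash match only proves the file is the one the organization published; detecting a deepfake that was never in the manifest still requires the passive detection techniques the CSI mentions.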

Recommendations for minimizing the impact of deepfakes include:

  • Information sharing
  • Planning and rehearsing responses to exploitation attempts
  • Personnel training

D. Howard Kass

D. Howard Kass is a contributing editor to MSSP Alert. He brings a career in journalism and market research to the role. He has served as CRN News Editor, Dataquest Channel Analyst, and West Coast Senior Contributing Editor at Channelnomics. As the CEO of The Viewpoint Group, he led groundbreaking market research.