
Deepfake AI Defense: Former White House CIO Warns on Misinformation

Theresa Payton delivers the keynote at IT Nation Secure on June 4.

IT Nation Secure, a ConnectWise event, got Tuesday morning’s programming going with a real-world wake-up call from keynote speaker Theresa Payton over the growing threat of deepfake videos and voice cloning — threats she believes could potentially wreak havoc on voter confidence in the upcoming U.S. Presidential election.

Back when she was serving the administration of President George W. Bush overseeing IT operations as White House Chief Information Officer, Payton could not have imagined a world in which the lines between real and fake would be essentially indistinguishable to the average person. Of course, that’s where the experts come in — the MSPs, MSSPs and cybersecurity vendors in attendance whose job it is to separate fact from fiction and root out misinformation — keeping a keen eye on AI governance.

With the ability to access potent AI algorithms and the widespread availability of ChatGPT, cybercriminals now possess formidable tools at their disposal. These AI advancements have fundamentally reshaped the landscape of criminal operations, offering novel challenges and prospects for the cybersecurity realm. But Payton shows that it’s possible to turn the tables on cybercriminals, as was the aim of her keynote: “Cybercrime is Accelerating: AI, Deepfakes, Voice Cloning — Future-Proof Yourself Now!”

She herself demonstrated how easy it is to create deepfake video and audio using free tools and low-end computers.

Payton recommended a free tool to scan and detect deepfake videos: scanner.deepware.ai. Its use is as simple as pasting a link to a video or uploading one to the site. She also urged fact-checking of audio and video content to combat misuse of the technology.

“Train staff and clients on tools to detect deepfakes and fact check information,” Payton advised. “Consider implementing internal passphrase policies for additional authentication beyond passwords.”
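Payton’s suggestion of internal passphrase policies — a shared secret verified out of band, beyond a password — could be sketched as follows. This is a minimal illustrative example, not any vendor’s implementation; the function names, salt handling and iteration count are assumptions, using only the Python standard library.

```python
import hashlib
import hmac

def hash_passphrase(passphrase: str, salt: bytes) -> bytes:
    """Derive a hash from a normalized passphrase with PBKDF2 (stdlib).

    Normalizing case and whitespace tolerates how a phrase is spoken
    or typed; only the derived hash is stored, never the phrase itself.
    """
    normalized = " ".join(passphrase.lower().split())
    return hashlib.pbkdf2_hmac("sha256", normalized.encode(), salt, 100_000)

def verify_passphrase(spoken: str, stored_hash: bytes, salt: bytes) -> bool:
    """Compare in constant time so timing differences leak nothing."""
    return hmac.compare_digest(hash_passphrase(spoken, salt), stored_hash)

# Illustrative usage — in practice the salt comes from os.urandom(16)
# and is stored per user alongside the hash.
salt = b"per-user-random-salt"
stored = hash_passphrase("Blue Heron Runs At Dawn", salt)
print(verify_passphrase("blue heron runs at dawn", stored, salt))  # True
print(verify_passphrase("wrong phrase", stored, salt))             # False
```

The point of the pattern is that a deepfaked voice on a phone call cannot answer a challenge it has never heard, regardless of how convincing the audio is.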

Chatbots aren’t safe from exploitation either, nor are biometrics and supply chains, Payton said, emphasizing the importance of addressing the ethical and security implications of AI across industries. Moreover, AI defense applications should “design for the human,” she reasoned, because there is no substitute for a team that can instinctively spot fake from real.

Every Minute of the Day…

As for a “did you know” moment — meaning there’s a lot of digital traffic to keep track of and secure — Payton noted that every minute there are:

  • 6,944 chats
  • 6.3 million Google searches
  • 360,000 X posts
  • 24,000 Spotify streams
  • 694,000 Instagram shares
  • 40 years of streaming content
  • $1 million spent online
  • 30 DDoS attacks

“Now when I see that kind of traffic, that amount of transactions, I think to myself, first of all, good golly, that has a lot of data being created,” said Payton, noting that the top two cybercriminal attack vectors are password reuse and social engineering.

Payton noted that attackers for whom “English is not even a third language” can now produce excellent translations for social engineering attacks.

“They can say, ‘Hey, I’m going to target people in Texas,’ so they’ll use the same tone and energy of (actor) Matthew McConaughey, and now they have something that is familiar, and it’s becoming much more convincing. They’re using both predictive AI and generative AI to help them sort out, ‘What am I going to do? What does the coding look like? How do I actually do a credential stuffing attack or password spraying attack?’ Those elements have been popular for years, but we’re really seeing the cybercriminals getting much better at it, and it’s because they’re using AI.”

Theresa Payton’s Governance AI 5-Step Framework

1. The Human User Story. Document customer-centric and employee-centric stories.

2. Establish a Safe AI Team. Leverage an existing council or set up a new one comprising line-of-business executives, technology, risk, legal, customer service, marketing and security. Add other roles as needed.

3. Create Pilot Test-Learn. Ensure all AI implementations go through a pilot phase that tests resiliency, reliability, privacy, security and efficiency.

4. Trust but Verify.

5. Deployment.

Jim Masters

Jim Masters is Managing Editor of MSSP Alert, and holds a B.A. degree in Journalism from Northern Illinois University. His career has spanned governmental and investigative reporting for daily newspapers in the Northwest Indiana Region and 16 years in a global internal communications role for a Fortune 500 professional services company. Additionally, he is co-owner of the Lake County Corn Dogs minor league baseball franchise, located in Crown Point, Indiana. In his spare time, he enjoys writing and recording his own music, oil painting, biking, volleyball, golf and cheering on the Corn Dogs.