Here’s a cybersecurity pro’s night terror come alive: Cyber crooks can combine artificial intelligence (AI) with malware already in the wild to create a stealthy variant that evades the best defenses and does not execute until it sees its target victim. Viruses and worms? They’re child’s play compared to this kind of bug.
How does it know its prey? Using facial recognition, geolocation and voice recognition technologies. Like a sniper, it’s after one specific target, who unknowingly triggers the malicious code hidden in a seemingly safe carrier app. Keep in mind that a number of prominent security providers are banking their technology’s advantage on AI and machine learning. Are hackers turning the tables on them?
How Hackers Might Use Artificial Intelligence
Maybe so. IBM researchers have developed as a proof-of-concept a “new breed” of AI-driven cyber weapon. The vendor demonstrated DeepLocker in a session on Wednesday at the Black Hat USA 2018 conference in Las Vegas. Essentially, the researchers wanted to understand how AI could be used by bad actors to up the cyber attack ante.
Here's what we know (via an IBM blog post):
- DeepLocker is purpose-designed to be stealthy.
- It flies under the radar, avoiding detection until the precise moment it recognizes a specific target. Its evasion techniques are different from anything previously seen in the wild.
- Like nation-state malware, it could infect millions of systems without being detected. But, unlike nation-state malware, it can be aimed at consumers and businesses.
- It uses a Deep Neural Network (DNN) AI model to hide its attack payload in benign carrier applications, such as video conferencing apps. The payload will be unlocked if, and only if, the intended target is reached.
- It’s extremely challenging to reverse engineer the benign carrier software and recover the mission-critical secrets, including the attack payload and the specifics of the target.
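That last point is the crux of the design, and a small sketch makes it concrete. The idea (as IBM describes it) is that the decryption key for the payload is never stored in the carrier app; it can only be reproduced by observing the target. The Python below is a minimal illustration of that concept, not IBM's implementation: the "attribute" is a placeholder string standing in for something like a quantized face-recognition embedding, and the toy XOR cipher stands in for real cryptography.

```python
import hashlib

def derive_key(attribute: bytes) -> bytes:
    """Derive a symmetric key from an observed target attribute."""
    return hashlib.sha256(attribute).digest()

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher, for illustration only."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Attacker side: lock the payload under a key derived from the target's
# identifying attribute (hypothetical placeholder value).
target_attribute = b"target-face-embedding"
payload = b"<placeholder payload>"
locked_payload = xor_crypt(payload, derive_key(target_attribute))

# Analyst side: the carrier app contains only locked_payload. Without
# observing the actual target, no key -- and hence no payload -- can be
# recovered, which is why reverse engineering the carrier is so hard.
assert xor_crypt(locked_payload, derive_key(b"wrong-person")) != payload
assert xor_crypt(locked_payload, derive_key(target_attribute)) == payload
```

Because the key is a function of the target itself, a defender who lifts the binary off a non-target machine holds only ciphertext.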
“We are on the cusp of a new era: the artificial intelligence era. The shift to machine learning and AI is the next major progression in IT. However, cybercriminals are also studying AI to use it to their advantage — and weaponize it,” wrote Marc Ph. Stoecklin, a principal research scientist and manager of the Cognitive Cybersecurity Intelligence (CCSI) group at IBM.
Potential DeepLocker Scenarios
Here’s how DeepLocker could work, based on an IBM test model:
- The WannaCry ransomware was hidden in a video conferencing application so that it wouldn’t be discovered by anti-virus engines and other tools.
- The AI model was trained to recognize the face of a specific person to unlock the ransomware and execute on the system.
- When the video conferencing app was launched, it fed camera snapshots into the embedded AI model.
- As the victim sat in front of the computer and used the application, the camera would feed their face to the app, and the malicious payload would be secretly executed.
Kind of makes one shudder -- you could be a mule without ever knowing. IBM’s point is both educational and a warning. “As cybercriminals increasingly weaponize AI, cyber defenders must understand the mechanisms and implications of the malicious use of AI in order to stay ahead of these threats and deploy appropriate defenses,” the researchers said.
So far, DeepLocker-like malware has not been observed in the wild. But the tools to construct it are readily available, along with the malware techniques, so it may be only a matter of time. “In fact,” said IBM, “we would not be surprised if this type of attack were already being deployed.”