The impact of artificial intelligence on cybersecurity, now and in the years to come, could leave many observers believing that spotting, isolating and resolving security issues, while maintaining a robust defense, will be left entirely to automation.
Will AI so outpace a security professional’s ability to effectively prepare for and react to security events that the human factor will universally disappear? Not so fast, says Tony Pietrocola, president of Cleveland, Ohio-based cybersecurity company AgileBlue. He believes people should always be an important component of a comprehensive security operation, the power of AI notwithstanding.
As AgileBlue’s co-founder, Pietrocola leads a business focused on detecting cyber threats faster and more accurately across digital infrastructure and the cloud. Foremost among its tools is an AI-powered security operations center (SOC) and security orchestration, automation and response (SOAR) platform.
"It Can't Be All AI and Technology"
AI is a tool in the company’s delivery of 24/7 monitoring, detection and response to proactively identify cyber threats. However, Pietrocola believes people should remain part of the security equation.
Regarding his perspective on the risks and potential of AI across cybersecurity management, he believes that AI is about making business and security decisions with a high degree of confidence. It’s about knowing when to shut down a device, disable an account, block an IP address, and more.
“I think that AI is fundamentally going to change cybersecurity because you're going to need it to defend against the bad guys who are using it," Pietrocola told MSSP Alert. “But because of the shortage of cyber talent, AI is going to rise from level one to level two support for cybersecurity.”
So, what are the most important aspects of AI that organizations should keep in mind as they deliver robust cybersecurity management? As Pietrocola explained:
“At the end of the day, the first thing to remember is it can't all be AI and technology. Cybersecurity is a good mix of great tech with really good people. I think you need to have both, and if you don't, I think you will have some problems.”
Offering an AgileBlue use case, Pietrocola noted a client that leaned heavily on automation without forsaking the human element:
“They wanted a human being, meaning one of our AgileBlues or whoever they're working with to actually hit the button to disable something or block something. We need to find that happy medium there where humans are just as important as the AI in that relationship.”
But as the required level of support rises, Pietrocola believes that AI should run the show.
“AI can show proof of its work, such as a confidence score as to why something was escalated, that this was a real alert, not a false alert. AI has the ability to learn from these experiences.”
He notes that timing and speed are critical factors in cybersecurity defense and remediation. But not every attack can be stopped, and as we've seen, the ability to mitigate risk is huge. Explaining the imperative, he said:
“Mitigating risk happens the faster you move, and AI moves so fast that detecting and responding quicker and taking decisive escalation will be the difference. The really solid engineers will then be brought in, and I think it's gonna level the playing field in giving us a better ability to play defense because the bad guys don't seem to be running at a negative deficit.”
To that end, who has a greater mastery of AI today, the good guys or bad guys? Well, it’s a continuous game of cat and mouse between cybercriminals and their targets.
“We need cyber solutions that adapt to a cyber event on the fly, not just something that's you know stationary so to speak,” Pietrocola said. “It can't just be like, well if it didn't do this then I can't do that.”
Defense as Offense
So, by playing strong defense, AI can put the good guys on the offensive. As Pietrocola explained:
“When you know something is not right, when it's anomalous behavior and it's detected, be it an IP address or a link or somebody's about to click, if we can isolate that thing immediately — meaning that the AI knows we got a problem and it isolates that device — theoretically, even though that looks like defense, that's offense. We can shut something down before someone else has the ability to execute something. AI can put us on the offensive.”
But how far should security professionals go in terms of cyber defense? Should you attack the attackers? Pietrocola won’t go that far.
“Like I mentioned, to become offensive by shutting down a device or a firewall or blocking something, that's one thing,” he said. “But if we try to become offensive and do something that shuts their machines down, blows their stuff up, play ransomware with them, that's possible, but I don't think we want to do that.”
For one, cybercriminals can actually sue their attacker, and there is evidence of such cases, according to Pietrocola. Besides, why encourage more attacks and start a war?
“If we play hardball with those guys, what’s to stop them from coming back and maybe creating something that's more catastrophic?” he said.
To illustrate his version of defense, Pietrocola used a football analogy:
“I played college football and I was a defender, so I like playing defense. Sometimes, if you hit a receiver hard enough, you're actually playing offense, and they don't want to come across the middle anymore.”
Emphasizing the idea of humans complementing AI, working in partnership with it, Pietrocola said:
“AI is not the answer to everything. It won't take our jobs because we still need that human touch. It's going to complement the jobs we can't fill or don't have the education or expertise for yet. AI is not just going to be set in motion and stop every cyberattack. It's going to take us humans to properly set it up, properly program things, and get it in the right position for it to work."