
Machine Learning and Artificial Intelligence (AI): Managed Security Buzzwords?


The cybersecurity field faces a major skills shortage. There simply aren’t enough experts to fill every job opening. This leads to a number of problems. For starters, teams often get stretched ultra-thin, leading to serious burnout. Additionally, overstretched teams can easily make mistakes, particularly when they have a flood of logs to comb through or a large number of security events to investigate.

Author: SolarWinds Director of Business Development Marco Muto

To bridge this gap, many security vendors offer artificial intelligence (AI) and machine learning capabilities in their products. The promise of this technology cannot be overstated: it has the potential to let teams accomplish significantly more without burning out in the process.

But are AI and machine learning just buzzwords?

What’s the difference anyway?

Before we get into that, we should explain just what we mean by AI and machine learning.

You’re probably familiar with the term AI, which refers to the idea of machines performing thinking functions similar to those of a human. An AI system can respond to a stimulus or an environment without human intervention and without following a preset pattern. (Automation, by contrast, typically involves preset patterns. You can automate routine tasks, but it’s harder to automate something that requires decision-making.)

AI gets smarter over time. It does this via machine learning, which usually involves a system using inputs and feedback. For a grossly oversimplified example, if an AI system for a hedge fund decides to make a stock purchase based on a key indicator (input), and the stock price quickly plummets (bad outcome), the system should learn to avoid buying based on that indicator when presented with similar circumstances.
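To make that feedback loop a little more concrete, here’s a deliberately simplified Python sketch. The indicator name, starting weight, and learning rate are all invented for illustration; no real trading or security system works this simply.

```python
# Minimal sketch of the feedback loop described above: the system weights an
# indicator, acts on it, and downgrades that weight after a bad outcome.
# All names and numbers here are illustrative assumptions.

learning_rate = 0.1
weights = {"momentum_spike": 0.5}  # initial trust in the indicator


def decide(indicator: str) -> bool:
    """Act (e.g., buy) only if the learned weight still favors the indicator."""
    return weights[indicator] > 0.3


def learn(indicator: str, outcome: float) -> None:
    """Nudge the weight toward good outcomes (+1.0) and away from bad ones (-1.0)."""
    weights[indicator] += learning_rate * outcome


if decide("momentum_spike"):       # input: the indicator fires, the system acts
    learn("momentum_spike", -1.0)  # feedback: the stock price plummeted

print(weights)  # trust in the indicator has dropped for next time
```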

Applied to cybersecurity tools, AI and machine learning help teams get a much deeper, richer picture of individual security incidents or their entire threat ecosystem.

For example, many security information and event management (SIEM) solutions use some form of AI to provide the basic value of the tool, although humans still need to configure the tool, train the system, and tweak rules as needed. The sheer volume of logs created by an infrastructure would be impossible for any human to monitor without some assistance, yet these logs contain the information needed to detect and respond to most attacks. A SIEM tool using AI could help correlate different log elements, perhaps against other information like threat intelligence feeds, to help produce a more complete picture of an event. This would allow the system to decide whether to send an alarm or notification to the security team for further investigation.
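As a rough illustration of that correlation step, the following Python sketch scores log entries against a threat intelligence feed and only raises an alarm when enough signals line up. The field names, scoring scheme, and threshold are assumptions for this example, not how any particular SIEM product works internally.

```python
# Hypothetical sketch of log correlation: match log entries against a threat
# intelligence feed and alert only when multiple corroborating signals appear.
# Field names, scores, and the threshold are illustrative assumptions.

threat_intel_feed = {"203.0.113.50", "198.51.100.7"}  # known-bad IPs (example data)

logs = [
    {"src_ip": "203.0.113.50", "event": "failed_login", "count": 40},
    {"src_ip": "10.0.0.12", "event": "failed_login", "count": 2},
]


def score_event(entry: dict) -> int:
    """Combine independent signals into a rough severity score."""
    score = 0
    if entry["src_ip"] in threat_intel_feed:
        score += 2  # corroborated by external threat intelligence
    if entry["event"] == "failed_login" and entry["count"] > 10:
        score += 1  # brute-force pattern visible in the logs themselves
    return score


for entry in logs:
    if score_event(entry) >= 2:  # decision point: notify the security team
        print(f"ALERT: investigate activity from {entry['src_ip']}")
```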

Over time, the system would theoretically learn how to better reduce false positives and false negatives based on feedback, either from users or from the environment itself.

Are we there yet?

Here’s the thing—AI and machine learning have a lot of promise. They’re not just shiny buzzwords without substance. However, don’t expect a miracle cure either.

For starters, SIEM tools still—and will likely always—require human input. Someone will have to write, maintain, and modify correlation rules. If the system is smart enough to be trainable, humans must still provide input for the system to learn (similar to email users marking individual messages as “spam”).
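To picture what that “mark as spam” style of training might look like, here’s a hedged Python sketch in which analyst verdicts on past alerts nudge the alerting threshold upward so the same noisy events stop paging the team. The scoring scale and the simple averaging rule are purely illustrative assumptions.

```python
# Sketch of human feedback reducing false positives: analysts label past
# alerts, and the system raises the score needed to trigger future alarms.
# The anomaly scores and update rule are illustrative assumptions only.

alert_threshold = 3.0  # minimum anomaly score that currently triggers an alarm

labeled_alerts = [  # (anomaly_score, analyst_verdict)
    (3.2, "false_positive"),
    (4.8, "true_positive"),
    (3.5, "false_positive"),
]

# Nudge the threshold toward the scores analysts keep dismissing.
false_positive_scores = [
    score for score, verdict in labeled_alerts if verdict == "false_positive"
]
if false_positive_scores:
    average_fp = sum(false_positive_scores) / len(false_positive_scores)
    alert_threshold = max(alert_threshold, average_fp)

print(f"new alert threshold: {alert_threshold:.1f}")
```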

Beyond that, AI-driven security tools typically still need users to analyze the outputs of the system. If the AI in a SIEM tool raises an alert based on what it deems a possible critical threat, security analysts must still investigate the alarm and learn more about the incident before escalating it. Incident response teams will need to gather more information before deciding how best to respond. Forensic analysts will still have to use their own judgment and procedures to decide how best to preserve evidence (or whether it has to be reported to authorities).

There’s also one other problem with AI—cybercriminals. The bad guys will try to game any system, and many are clever enough to do it. If they can successfully hack an algorithm behind a security tool, those criminals could have the run of the business until there’s a fix (or a replacement). Every system can be broken—even ones that adapt.

The role of AI

AI and machine learning, without a doubt, have a place in cybersecurity. They’re not just buzzwords. Despite the downsides, artificial intelligence plays a crucial role in assisting teams. SIEM tools in particular use AI to make log correlation and intrusion detection easier (and, frankly, possible at all). However, it’s important to realize AI is only one tool among many.

When it comes to SIEM tools, SolarWinds® Threat Monitor helps teams simplify the threat detection and response process. It also allows users to train the system to continuously improve detection and response over time, and it includes multiple other tools designed to help improve your security, such as threat intelligence feeds, robust reporting, extensive log searching, and much more. Try Threat Monitor free for 14 days to learn more.


Author Marco Muto is director of business development at SolarWinds, parent of SolarWinds MSP. Read more SolarWinds MSP blogs here.