AI’s Efficacy is Limitless in Cybercrime

Bringing artificial intelligence into the cybersecurity field has created a vicious cycle. Cyber professionals now employ AI to enhance their tools and boost their detection and protection capabilities, but cybercriminals are also harnessing AI for their attacks. Security teams then use more AI in response to the AI-driven threats, and threat actors augment their AI to keep up, and the cycle continues.

AI Security Solutions

Despite its great potential, AI is significantly limited when employed in cybersecurity. There are trust issues with AI security solutions, the data used to train AI-powered security products is perennially at risk, and at implementation, AI often clashes with human intelligence.

AI’s double-edged nature makes it a complex tool to handle, something organizations need to understand more deeply and make use of more carefully. In contrast, threat actors are taking advantage of AI with almost zero limitations.

The Lack of Trust

One of the biggest issues in adopting AI-driven solutions in cybersecurity is trust-building. Many organizations are skeptical about security firms’ AI-powered products, and understandably so: many products promoted as AI-enhanced are overhyped and fail to deliver.

One of the most advertised benefits of these products is that they simplify security tasks so significantly that even non-security personnel can complete them. This claim is often a letdown, especially for organizations struggling with a scarcity of cybersecurity talent. AI is supposed to be one of the solutions to the cybersecurity talent shortage, but companies that overpromise and underdeliver are not helping to resolve the problem – in fact, they’re undermining the credibility of AI-related claims.

Making tools and systems more user-friendly, even for non-savvy users, is one of cybersecurity’s main aspirations. Unfortunately, this is difficult to achieve given the evolving nature of threats, as well as various factors (like insider attacks) that weaken a security posture. Almost all AI systems still require human direction, and AI is not capable of overruling human decisions. For example, an AI-aided SIEM may accurately flag anomalies for security personnel to evaluate; however, an inside threat actor can prevent the proper handling of the issues the system spots, rendering the use of AI in this case practically futile.
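To make that human-in-the-loop limitation concrete, here is a minimal, hypothetical sketch of how an AI-aided SIEM might surface anomalies without acting on them itself. It uses scikit-learn’s IsolationForest on made-up login-event features; the feature set, contamination rate, and review queue are illustrative assumptions, not any vendor’s implementation.

```python
# Hypothetical sketch: the model flags unusual events, but a human analyst
# still has to triage every finding before anything is done about it.
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy features per login event: [hour_of_day, megabytes_transferred, failed_logins]
events = np.array([
    [9, 12, 0], [10, 8, 0], [11, 15, 1], [14, 9, 0],
    [3, 900, 7],   # unusual: 3 a.m., large transfer, repeated failures
])

model = IsolationForest(contamination=0.2, random_state=0).fit(events)
flags = model.predict(events)  # -1 = anomaly, 1 = normal

for event, flag in zip(events, flags):
    if flag == -1:
        # The system only queues the finding; a person decides what happens next.
        print(f"Queued for analyst review: {event.tolist()}")
```

If the analyst, or an insider, ignores or suppresses that review queue, the detection itself accomplishes nothing, which is exactly the limitation described above.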

Nevertheless, some cybersecurity software vendors do offer tools that make the most of AI’s benefits. Extended Detection and Response (XDR) systems that integrate AI, for example, have a good track record for detecting and responding to complex attack sequences. By leveraging machine learning to scale up security operations and ensure more efficient detection and response processes over time, XDR provides substantial benefits that can help ease the skepticism over AI security products.
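Much of the value XDR adds comes from correlating alerts across telemetry sources. The sketch below shows the general idea in deliberately simplified form, grouping endpoint, network, and identity alerts into a single incident when they involve the same host within a short time window; the field names, window size, and data are assumptions for illustration, not any product’s schema.

```python
# Simplified illustration of cross-telemetry correlation: alerts about the same
# host that arrive close together in time are grouped into one incident.
from collections import defaultdict

alerts = [
    {"source": "endpoint", "host": "ws-042", "time": 100, "signal": "suspicious child process"},
    {"source": "network",  "host": "ws-042", "time": 130, "signal": "beaconing to rare domain"},
    {"source": "identity", "host": "ws-042", "time": 160, "signal": "privilege escalation"},
    {"source": "endpoint", "host": "srv-07", "time": 500, "signal": "new scheduled task"},
]

WINDOW = 300  # seconds; alerts on the same host inside one window form an incident
incidents = defaultdict(list)
for alert in alerts:
    incidents[(alert["host"], alert["time"] // WINDOW)].append(alert)

for (host, _), related in incidents.items():
    if len(related) > 1:
        print(f"Correlated incident on {host}: {[a['signal'] for a in related]}")
```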

Limitations of Data Models and Security

Another concern that compromises the effectiveness of using AI to fight AI-aided threats is the tendency of some organizations to train on limited or non-representative data. Ideally, AI systems should be fed real-world data that reflects what is happening on the ground and the specific situations an organization encounters. However, this is a gargantuan endeavor: collecting data from around the world to represent all possible threats and attack scenarios is very costly, an expense that even the biggest companies try to avoid as much as possible.

Security solution vendors competing in a crowded market also try to get their products out as soon as possible, with all the bells and whistles they can offer, but often with little regard for the security of their training data. This exposes that data to manipulation or corruption.

The good news is that there are many cost-efficient and free resources available to address these concerns. Organizations can turn to free threat intelligence sources and reputable cybersecurity frameworks like MITRE ATT&CK. In addition, to reflect behavior and activities specific to a particular organization, AI can be trained on user or entity behavior. This allows the system to go beyond general threat intelligence data – such as indicators of compromise and good and bad file characteristics – and look into details that are specific to an organization.
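As a rough illustration of that per-organization approach, the sketch below baselines each user’s own history and flags values that deviate sharply from it, rather than matching a global indicator list. The metric (daily download volume), the three-standard-deviation threshold, and the data are all assumptions made for the example.

```python
# Hedged sketch of per-entity baselining: "normal" is defined per user, not by
# a global list of indicators of compromise.
from statistics import mean, stdev

daily_download_mb = {
    "alice": [40, 55, 38, 60, 47],
    "bob":   [500, 620, 480, 550, 5000],  # last value is far outside Bob's own baseline
}

for user, history in daily_download_mb.items():
    baseline, spread = mean(history[:-1]), stdev(history[:-1])
    latest = history[-1]
    if spread and abs(latest - baseline) > 3 * spread:
        print(f"{user}: {latest} MB deviates from personal baseline "
              f"({baseline:.0f} MB ± {spread:.0f} MB)")
```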

On the security front, there are many solutions that can keep data breach attempts at bay, but these tools alone are not enough. It is also important to have suitable regulations, standards, and internal policies in place to counter attacks on training data that are designed to keep AI from properly identifying and blocking threats. Ongoing government-initiated talks on AI regulation and MITRE’s proposed AI security regulatory framework are steps in the right direction.

The Supremacy of Human Intelligence

The age when AI can override human decisions is still decades, or maybe even centuries, away. This is generally a positive thing, but it has a dark side. It’s good that humans can dismiss an AI’s judgment or decisions, but it also means that human-targeted threats, like social engineering attacks, remain potent. For example, an AI security system may automatically redact links in an email or web page after detecting risks, but human users can ignore or disable this mechanism.
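The sketch below shows why that override matters: a simple automated control neutralizes risky links, yet a single human-controlled flag switches it off entirely. The pattern list, function name, and flag are illustrative assumptions, not a description of any real email security product.

```python
# Illustrative sketch: an automated control rewrites suspicious links, but one
# human decision (disabling the flag) lets the threat through untouched.
import re

SUSPICIOUS_LINK = re.compile(r"https?://\S+\.(?:zip|xyz|top)\S*", re.IGNORECASE)

def redact_links(message: str, protection_enabled: bool = True) -> str:
    """Replace suspicious links unless the user has turned the control off."""
    if not protection_enabled:
        return message  # the human override wins
    return SUSPICIOUS_LINK.sub("[link removed by security policy]", message)

email = "Your invoice is ready: http://billing-update.zip/pay"
print(redact_links(email))                            # link is redacted
print(redact_links(email, protection_enabled=False))  # threat passes through
```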

In short, our ultimate reliance on human intelligence is hindering AI technology’s ability to counter AI-assisted cyber-attacks. While threat actors indiscriminately automate the generation of new malware and the propagation of attacks, existing AI security solutions are designed to yield to human decisions and prevent fully automated actions, especially in light of the “black box problem” of AI.

For now, the goal is not to achieve an AI cybersecurity system that can work entirely on its own. The vulnerabilities created by letting human intelligence prevail can instead be addressed through cybersecurity education. Organizations can hold regular cybersecurity training to ensure that employees follow security best practices and to help them become more adept at detecting threats and evaluating incidents.

It’s correct – and necessary – to defer to human intelligence, at least for now. Nonetheless, it’s important to make sure that this doesn’t become a vulnerability that cybercriminals can exploit.

Takeaways

It is more difficult to build and protect things than to destroy them. Using AI to fight cyber threats will always be challenging due to various factors, including the need to establish trust, the caution needed when using data for machine learning training, and the importance of human decision-making. Cybercriminals can easily disregard all these considerations, so it sometimes seems like they have the upper hand.

Still, this problem is not without solutions. Trust can be built with the help of standards and regulations, as well as the earnest efforts of security providers to show a track record of delivering on their claims. Data models can be secured with sophisticated data security solutions. Meanwhile, the risks of our ongoing reliance on human decision-making can be reduced with ample cybersecurity education and training.

The vicious cycle remains in motion, but we can take hope in the fact that it also runs in reverse: as AI threats continue to evolve, AI cyber defense will evolve as well.

Guest blog courtesy of Stellar Cyber. Read more Stellar Cyber guest blogs and news here. Regularly contributed guest blogs are part of MSSP Alert’s sponsorship program.