Artificial intelligence (AI) is a tool MSSPs, MSPs and also sorts of cybersecurity enterprises can leverage to help defend their customers. Cyber adversaries are also leveraging AI to benefit their own financial and disruptive activities. We know that there will be victories and defeats on both sides.
Experts from a variety of cybersecurity businesses contacted MSSP Alert to offer their 2024 AI trends and predictions. Continue reading to learn what they’re telling us about AI in the new year, and years ahead — and how it all factors into a maintaining a cyber-safe world.
A Rise in Disinformation / Election Year Vulnerabilities
“We expect to see a greater number of AI-operated attacks using fake news to disseminate disinformation during a (US) presidential election year. Media businesses — print, film, streaming, etc. — are all highly regulated, but the internet itself isn’t. This allows bad actors to take advantage of the influence of celebrities and world leaders to create AI-generated versions of these public figures to disseminate fake news and give it a sense of legitimacy without ever having to be fact-checked. Without any governmental legislation to clamp down on these tactics, bad actors will be able to socially engineer behavioral changes in society.”
- Sam Curry, Vice President and CISO, Zscaler
Generative AI Creates Security Opportunities
“Generative AI and machine learning (ML) are increasing the frequency and complexity of cyberattacks, creating new pressures on companies. This technology can allow cybercriminals to launch sophisticated and stealthy attacks like deepfakes or self-evolving malware, compromising systems on a large scale. To counter these advanced threats and fight fire with fire, enterprises must use AI-driven cybersecurity. This technology has the potential to transform the industry by improving enterprise posture through automated hardening of configurations and compliance, overcoming micro-segmentation challenges, fine-tuning least privilege access, enhancing reporting and more.”
- Margareta Petrovic, Global Managing Partner, and Dr. KPS Sandhu,
Head of Global Strategic Initiatives, Cybersecurity, Tata Consultancy Services (TCS)
Advanced AI to Unleash Social Engineering Attacks
“Commercially available and open-source AI capabilities, including Large Language Models (LLMs) like ChatGPT and LLaMA, and countless variants, will help attackers in designing thought out and effective social engineering campaigns. With AI systems increasingly integrating with troves of personal information through social media sites from LinkedIn to Reddit, we’ll see the ability for even low-level attackers to create targeted and convincing social-engineering based campaigns.”
- Kevin O’Connor, Director of Threat Research, Adlumin
The Rise of “Shadow AI”
“In 2024, generative AI's widespread workplace use will bring new cybersecurity challenges, notably ‘Shadow AI.’ Employees integrating AI tools into workflows without leadership knowledge create cybersecurity and data privacy risks. Without governance, organizations can't see what tools employees use and how much sensitive information is at risk. Companies will start embracing a Managed AI policy that can reduce Shadow AI risks. Educating teams on safe AI practices, setting clear usage policies, as well as implementing monitoring for AI tool usage, and updating security protocols as AI technology evolves, will be vital for harnessing AI’s benefits while minimizing data security risks.
- Michael Crandell, CEO, Bitwarden
Evolving AI Security Posture Testing and Deterrence
“AI cybersecurity will advance in the next year with a stronger focus on AI red teaming and bug bounties. Following industry leaders like Google, who now include generative AI threats in their bug bounty programs, the practice will expand to identify and address unique AI vulnerabilities, such as model manipulation or prompt injection attacks. AI red teaming (offensive security testing) will continue to employ diverse teams for comprehensive AI system assessments, focusing on empathy and detailed testing scenarios. The blend of AI red teaming and incentivized bug bounties will be crucial in securing AI systems against sophisticated cybersecurity threats, reflecting a proactive, industry-wide approach to AI security.”
- Josh Aaron, CEO, Aiden
AI Versus AI
“Attackers are increasingly using AI and ML to develop more sophisticated attacks, but AI can also be used to counter these attacks. This arms race between AI-driven defense and AI-assisted offense will drive innovation in the cybersecurity industry, resulting in ever more advanced security solutions. AI-powered security solutions are already being used to identify and prioritize threats, automate incident response, and personalize security controls. In the future, these solutions will become even more sophisticated, learning from experience, and adapting to new threats in real-time. This will enable AI-driven cyber defense systems to proactively identify and neutralize automated attacks fueled by AI before they cause damage. In this evolving cybersecurity landscape, organizations need to embrace AI and ML to stay ahead of the curve.”
- Brian Roche, Chief Product Officer, Veracode
Emergence of “Poly-Crisis” From AI-based Cyberattacks
“Come 2024, perpetrators will find it easier to use AI to attack not only traditional IT but also cloud containers and, increasingly, ICS and OT environments, leading to the emergence of a ‘poly-crisis.’ Such a scenario threatens not only financial impact but also impacts human life simultaneously at the same time in cascading effects. Critical computing infrastructure will be under increased threat due to increasing geo-political threat. Cyber defense will be automated, leveraging AI to adapt to newer attack models.”
- Agnidipta Sarkar, Vice President of CISO Advisory, ColorTokens