Artificial intelligence-powered tools such as ChatGPT and Google’s Bard, and to a degree Microsoft’s Security Copilot, have enabled a new level of credential theft and phishing for sensitive information by hackers, according to Password Manager.
In a survey of 1,000 cybersecurity professionals, Password Manager sought to learn how much of a threat AI-run tools present to the “average American.”
AI Raises Hacking Concerns
Key findings from the report include:
- 56% say they are “somewhat” or “very” concerned about hackers using AI-powered tools to steal passwords.
- 52% say AI has made it “somewhat” or “much” easier for scammers to steal sensitive information.
- 18% say AI phishing scams pose a “high-level” threat to both the average American individual user and company.
- 58% say they are “somewhat” or “very” concerned about people using AI-powered tools to create phishing attacks.
Commenting on the findings, Marcin Gwizdala, chief technology officer at Tidio (via Password Manager), said:
“One of the threats that appeared by using AI, in general, is phishing scams. ChatGPT can be easily mistaken for an actual human being because it can converse seamlessly with users without spelling, grammatical, and verb tense mistakes. That’s precisely what makes it an excellent tool for phishing scams.”
“The threat of AI as a tool for cybercriminals is dire,” Steven J.J. Weisman, a leading authority on scams, identity theft, and cybersecurity, told Password Manager.
Weisman explained in the report that AI makes phishing scams far more convincing:
“In particular, many scams originate in foreign countries where English is not the primary language, and this is often reflected in the poor grammar and spelling found in many phishing and spear phishing emails and text messages coming from those countries. Now, however, through the use of AI, those phishing and spear phishing emails and text messages will appear more legitimate.”
Five Recommendations to Guard Against AI Tricks
Daniel Farber Huang, Password Manager’s subject matter expert, offered five recommendations in the blog to help individuals and businesses avoid being victimized by AI-powered ruses:
- Assume any unsolicited communication – email, text, DM or other – is a potential scam and exercise basic precautions when reviewing messages.
- If there is a compelling reason to respond to an incoming communication, it is safest to contact the sender or organization directly rather than hitting “reply.” Find the official phone number or email from the company website and contact them directly to ensure you are communicating with the authorized representative.
- Understand that basic bots are used for all types of solicitation and are trained to appear human and personable, including on sites like LinkedIn.
- If and where possible, consider adding an icon or emoji to your listed name on social media sites. LinkedIn, for example, allows you to add emojis in your profile name. Real human beings will not manually insert a graphic into their individual message to you, but a bot will automatically do so, which can serve as a red flag that you are being mass solicited.
- Recognize that voicemail messages, text exchanges, and even chat room conversations can be AI generated to fool you into thinking you are communicating with a real person, with the goal of trying to manipulate you into revealing personal information or sensitive data.
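The emoji red-flag tip above amounts to a simple string check: a mass-mailing bot that mail-merges your profile name into a greeting will reproduce the emoji verbatim, while a human typing a reply normally will not. The sketch below illustrates the idea only; the display name, messages, and function name are hypothetical examples, not part of any platform’s API.

```python
# Illustrative sketch of the emoji red-flag heuristic (hypothetical names).
# A templated bot copies your full display name, emoji included; a human
# greeting usually drops the emoji.

DISPLAY_NAME = "Jane Doe \U0001F98A"  # example profile name with an embedded emoji

def looks_mass_generated(message: str, display_name: str = DISPLAY_NAME) -> bool:
    """Flag messages that greet you with your exact display name, emoji and all."""
    return display_name in message

bot_msg = "Hi Jane Doe \U0001F98A, I came across your profile and have an opportunity..."
human_msg = "Hi Jane, great meeting you at the conference."

print(looks_mass_generated(bot_msg))    # flagged as likely mass solicitation
print(looks_mass_generated(human_msg))  # not flagged
```

This is only a heuristic, not a reliable bot detector; it simply automates the red flag the recommendation describes.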