Until now, one of the easiest ways to spot a phishing email has been spelling or grammatical errors — but ChatGPT has changed all that. With the popular AI-driven chatbot spinning out prose faster than you can say “write my term paper,” everyone now needs to be suspicious of even the most well-written emails.
ChatGPT’s ability to generate natural language is exciting but also gives cybersecurity professionals like Leapfrog reasons to be concerned. The first ChatGPT-fueled cyberattacks — phishing emails — are already in play, and companies must be on alert.
Better emails, more clicks
In "Cybercrime Meets ChatGPT: Look Out, World," IEEE Spectrum writes about researchers who found that ChatGPT and other chatbots are currently generating phishing emails that are "much better and more convincing" than typical phishing emails.
Another study concurs, as CSO Online reports in "Study shows attackers can use ChatGPT to significantly enhance phishing and BEC scams."
Security researchers demonstrated that not only does ChatGPT's natural language make phishing harder to detect, it can also compromise business email communications and potentially shift attacker tactics. The researchers share an example of how cybercriminals can use ChatGPT to generate entire email chains between multiple people to add credibility to a scam.
What's more, chatbots can be used to generate malware code: researchers were able to build a full malware infection chain using ChatGPT. There's no doubt that threat actors will soon do the same and more.
So ChatGPT can write the malware and the emails that deliver it.
RELATED: More mature IT operations are better able to spot and stop cybercrime.
Current and future threats
A cyber threat analysis report from Insikt Group called "I, Chatbot" sums up the current ChatGPT threat in its key findings, which include:
- Threat actors are already sharing how to use ChatGPT for malware development, social engineering, disinformation, phishing, malvertising, and money-making schemes.
- Researchers found it doesn’t take much time or experience to use ChatGPT to replicate malicious code that’s identified on the dark web.
- ChatGPT lowers the barrier to entry for threat actors.
- The most immediate threat from ChatGPT-aided cybercrime is from non-state threat actors.
At this time, researchers aren’t expressing concern about related geopolitical threats.
However, the model powering ChatGPT at its formal launch in November 2022, GPT-3.5, is a huge improvement over previous versions such as GPT-2 and GPT-3. GPT-4 is currently rolling out and will likely be better still.
ChatGPT weighs in
What does ChatGPT itself have to say about its role in phishing? Leapfrog posed this simple query — ChatGPT and phishing — and got this response:
“As an AI language model, I am a tool that can be used for various purposes, including those that are not ethical or legal. Unfortunately, there have been instances where malicious actors have used language models like me to aid in phishing attacks.
Phishing is a type of social engineering attack that involves tricking individuals into providing sensitive information or performing actions that can compromise their security, such as clicking on a link that downloads malware or entering login credentials on a fake website. In some cases, attackers use language models like me to generate convincing messages or emails that appear legitimate to the victim.
However, it’s important to note that language models like me do not have malicious intent on our own. We are simply tools that can be used for a wide range of purposes, including those that are ethical and legal. The responsibility for using language models ethically and responsibly ultimately lies with the users who interact with them.
To avoid falling victim to phishing attacks, it’s important to be cautious when receiving unsolicited emails or messages and to verify the authenticity of any requests for sensitive information before providing it. Some ways to do this include checking the sender’s email address or website URL, looking for spelling or grammatical errors in the message, and contacting the organization directly to confirm the request.”
There's not a single spelling or grammatical error in all 266 words of the ChatGPT-generated text, which undercuts one piece of its own phishing-safety advice: looking for spelling mistakes no longer works when the attacker uses AI.
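One other piece of that advice, checking whether a link actually belongs to the claimed sender, can be partly automated. As an illustration only, here is a minimal Python sketch of that check; the function names are ours, and the heuristic is deliberately naive (it ignores lookalike domains, redirects, and public-suffix rules that real email-security tools account for).

```python
from urllib.parse import urlparse


def sender_domain(address: str) -> str:
    """Return the domain part of an email address, lowercased."""
    return address.rsplit("@", 1)[-1].lower()


def link_matches_sender(url: str, sender: str) -> bool:
    """Naive heuristic: does the link's hostname belong to the
    sender's domain? A mismatch is a classic phishing red flag.
    (Illustrative only; not a substitute for real email security.)
    """
    host = (urlparse(url).hostname or "").lower()
    domain = sender_domain(sender)
    return host == domain or host.endswith("." + domain)


# A link whose hostname merely *contains* the brand name fails the check:
print(link_matches_sender("https://paypal.example-login.com/verify",
                          "service@paypal.com"))  # False -> suspicious
print(link_matches_sender("https://www.paypal.com/activity",
                          "service@paypal.com"))  # True
```

Note the design choice: matching only on the hostname suffix means a URL like `https://paypal.example-login.com` is flagged even though it displays the brand name, which is exactly the trick well-written AI-generated phishing relies on.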
Responding to phishing threats — old and new
At Leapfrog, part of our responsibility is to stay on top of cybersecurity for our clients. Phishing is a very real, very common attack vector that impacts every business that’s online — it’s the leading vector responsible for 41% of all attacks. That’s why advanced email protection has been part of our standard managed security package for years — it uses the power of AI in a positive way to combat malicious email activity. Currently, the algorithm is being updated to look for AI-associated content.
Each time the threat landscape changes, however, as it does with ChatGPT’s ability to improve phishing and spear-phishing effectiveness, we look at what we can do to help reduce risk to our clients right now.
In this case, we are re-emphasizing that every employee at every company must do their part. We advise our clients to double down on best practices:
- Train employees on new threats as soon as possible, including fun, in-real-life sessions that help make security habits relatable
- Fortify your culture of security by talking frequently about cybercrime and its potential impact on your business and its ability to operate
- Consider adding security services to contain threats and monitor for potential problems that make it past defend-and-protect solutions, including stopping infiltrations that can originate from attacks like phishing
Most often, a company's weakest security links are its internal processes and not its technology: locking up your valuables doesn't do you much good if you inadvertently give away the keys.

If you're concerned about the dramatic increases in cybercrime and the ever-evolving threat landscape, our CyberRisk Program takes a holistic approach to cyber risk management. It includes a combination of technical solutions (like a permissions structure that makes the most sense for your company) and governance strategies that help ensure your people, processes, and facilities all support a secure environment. Please get in touch with us for more information.