Will hackers use AI chatbots to craft better phishing emails?

James Dyer | 10th Feb 2023

Cybercriminals can use AI tools like ChatGPT, as well as older ones like Quillbot AI, to reword emails so that grammar and spelling slip-ups don’t cost them potential victims, and to cut the time it takes to create these attacks. The focus of this blog is specifically on ChatGPT, which has reportedly become the fastest-growing consumer application in history, reaching 100 million users by the end of January 2023.

ChatGPT from OpenAI hit the news in a big way in late 2022, and for good reason. Backed by hundreds of millions in investment and trained on 570GB of text from websites, books, and articles (according to the chatbot itself), ChatGPT can create human-seeming responses to natural language queries typed into a text box.

Increasing fears and concerns

ChatGPT has raised fears of sophisticated, AI-powered chatbots doing everything from helping students cheat on essays to overwhelming public forums on government proposals. Another concern that is increasing in prominence: helping cybercriminals create better-than-human phishing emails. Better because:

  • They’re fluent in the target’s language
  • They could be pumped out by the thousands
  • They could exponentially amplify the efforts of individual cybercriminals

Phishing emails, frequently disguised as coming from a trusted source such as a colleague or legitimate company, attempt to trick people into clicking on a malicious link, downloading attachments containing malware, or divulging sensitive information. So, could a chatbot craft more believable phishing emails that trick more people?

Well, yes and no.

Chatbot capabilities

First, let’s clarify what AI-powered chatbots can and cannot do. Trained on large datasets of natural language, chatbots like ChatGPT are refined through reinforcement learning from human feedback, which nudges their responses ever closer to lifelike text.

But they can’t think for themselves or even understand the meaning of what they write. In short, they are tools that must be guided by actual humans, albeit potentially performing some routine tasks much faster.

So, while they can produce typo-free and grammatically correct text, enhancing the appearance of legitimacy, there’s a lot they cannot do to mask an email’s nefarious nature. That’s because phishing emails necessarily come with certain ‘tells’.

Repeated elements of phishing emails and campaigns

Attackers have become much more sophisticated since the days of the notorious ‘419’ scams and their badly written emails. That includes cleaned-up grammar and spelling, the kind of mistakes that have long been correctable with dedicated spelling and grammar checkers.

Fortunately, there are many other ways to spot phishing emails besides poor writing, meaning a well-crafted email doesn’t guarantee a successful attack. Other tells include the following (a simple automated check for several of them is sketched after the list):

  • Unusual email addresses
  • Offers or promises that are too good to be true
  • Seeming to come from trusted contacts, but making unusual requests (via compromised email accounts)
  • Requests to supply sensitive information
  • Links to suspicious websites, for example, sites with unusual domain names
  • Communicating a sense of urgency
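
Several of these tells lend themselves to simple automated checks. As an illustration only, here is a minimal Python sketch that scores a single email against a few of them; the keyword lists, the trusted-domain list, and the example sender and URLs are assumptions invented for this post, not rules from any real email security product.

    # Minimal sketch of a rule-based 'tell' checker. The keyword lists, trusted
    # domains, and example data are illustrative assumptions only.
    import re

    URGENCY_PHRASES = ("act now", "urgent", "immediately", "within 24 hours", "final notice")
    SENSITIVE_REQUESTS = ("password", "bank details", "payment details", "verify your account")
    TRUSTED_DOMAINS = ("example-corp.com",)  # domains the organization actually uses (assumption)

    def phishing_tells(sender: str, subject: str, body: str) -> list[str]:
        """Return human-readable 'tells' found in a single email."""
        tells = []
        text = (subject + " " + body).lower()

        # Tell 1: unusual sender address (domain not on the trusted list).
        match = re.search(r"@([\w.-]+)", sender)
        domain = match.group(1).lower() if match else ""
        if domain and not domain.endswith(TRUSTED_DOMAINS):
            tells.append(f"Sender domain '{domain}' is not a recognized trusted domain")

        # Tell 2: language that tries to create urgency.
        if any(p in text for p in URGENCY_PHRASES):
            tells.append("Urgent or time-pressured language")

        # Tell 3: requests to supply sensitive information.
        if any(p in text for p in SENSITIVE_REQUESTS):
            tells.append("Request for sensitive information")

        # Tell 4: links pointing at domains outside the trusted list.
        for url in re.findall(r"https?://[\w.-]+", body):
            link_domain = url.split("://", 1)[1].lower()
            if not link_domain.endswith(TRUSTED_DOMAINS):
                tells.append(f"Link to unrecognized domain: {url}")

        return tells

    # Example: a well-written 'giveaway' email still trips several tells.
    print(phishing_tells(
        sender="Amazon Giveaways <promo@amaz0n-prizes.example>",
        subject="Your chance to win a PlayStation - act now!",
        body="Click https://amaz0n-prizes.example/claim and verify your account within 24 hours.",
    ))

Run on the made-up example at the bottom, the sketch flags the unknown sender domain, the urgent language, the request to verify account details, and the unrecognized link, none of which depend on spelling or grammar mistakes.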

Still, some experimenters have shown surprising results in getting ChatGPT to write convincing phishing emails and even create code for malware, including ransomware.

Just before Christmas, I decided to put ChatGPT to the test. I started with a simple prompt, asking ChatGPT to write an email requesting someone’s contact details:

Email produced by ChatGPT following the simple prompt I’d given

As you can see, the email is well-structured, including both a salutation and a sign-off, with good grammar and no typos. What’s more interesting, though, is the way ChatGPT has interpreted the prompt and humanized the text. The prompt refers to a long-lost university friend, and ChatGPT has responded by using the following emotive phrases: ‘It’s been so long since we last saw each other... I was thinking about the good old days at [University Name]… I’m looking forward to seeing you again and catching up on old times.’

ChatGPT has taken a simple instruction and, in this instance, written quite a genuine outreach email to obtain contact information.

How chatbots could help attackers

In theory, sophisticated, AI-powered chatbots like ChatGPT that are created by ethical developers shouldn’t be able to assist with anything illegal. But there are plenty of examples of people finding loopholes around their restrictions.

Having seen the initial email that ChatGPT built (which could be used by someone trying to illegitimately obtain another person’s personal data), I then pushed the prompts further to demonstrate how cybercriminals could use ChatGPT.

In my next test, I asked the AI to create a Christmas-themed email that more closely follows the phishing emails we see in the wild by offering two sought-after prizes. Specifically, the email would purport to be from Amazon and inform the recipient of their chance to win a PlayStation and an Oculus Quest 2. Here are the results of that prompt:

Email generated by ChatGPT offering a Christmas giveaway

As phishing links are a common form of payload, I then asked the AI to insert a link. Here are the results:


Email generated by ChatGPT inserting an HTML link to the giveaway
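
From the defender’s side, the links embedded in an HTML email are easy to pull out and inspect before anyone clicks them. The short sketch below uses only Python’s standard library to extract each hyperlink and flag cases where the visible link text names a different domain than the real destination, a classic phishing trick; the example HTML and domains are made up for illustration.

    # Minimal sketch: extract hyperlinks from an HTML email body and flag
    # mismatches between the visible text and the real destination.
    # The example HTML and domains are invented for illustration only.
    from html.parser import HTMLParser
    from urllib.parse import urlparse

    class LinkExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []      # (href, visible text) pairs
            self._href = None
            self._text = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self._href = dict(attrs).get("href")
                self._text = []

        def handle_data(self, data):
            if self._href is not None:
                self._text.append(data)

        def handle_endtag(self, tag):
            if tag == "a" and self._href is not None:
                self.links.append((self._href, "".join(self._text).strip()))
                self._href = None

    html_body = '<p>Click <a href="https://prize-claims.example/win">amazon.com/giveaway</a> to claim!</p>'
    parser = LinkExtractor()
    parser.feed(html_body)

    for href, text in parser.links:
        destination = urlparse(href).netloc
        # Flag links whose visible text names a different domain than the destination.
        if text and destination and destination not in text:
            print(f"Suspicious link: text '{text}' but destination '{destination}'")

Real email security tools go much further, checking destinations against reputation data, but even this level of inspection catches a friendly-looking anchor pointing somewhere unexpected.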

But anyone can write an email, right? Not everyone can code. To see a fuller extent of what ChatGPT could do, I then got the bot to create rudimentary ransomware code. It isn’t 100% working code, but the skeleton is there. Here are the results from that prompt:

Requesting a Python script for ransomware

Python code produced by ChatGPT to encrypt and exfiltrate data

While ChatGPT included a security warning before the code, it still produced it. I will never use this code for malicious purposes, of course, but I’d put money on that warning not being enough to deter a cybercriminal.

The overall verdict is that cybercriminals would have to refine both the emails and the code. Without AI, creating malware is a specialist task, and it’s likely specialized cybercriminals will remain involved to refine the code – but their jobs have just become a lot easier. We can also only anticipate that AI-generated output will improve over time, potentially making it easier for non-specialists to create malware.

The key thing to remember, though, is that the finished emails would still carry all the other earmarks of phishing attempts outlined earlier in this article.

No silver bullet

Phishing attacks continue to become more advanced and automated. That’s a given, no matter what technology bad actors deploy to launch them, and it’s as true for chatbots as for any other tool. That’s why organizations need integrated cloud email security (ICES) solutions to protect against the most sophisticated attacks: being able to detect phishing emails – whether written by a person or by AI – is what keeps organizations protected.

Such solutions deploy their own AI and machine learning models to detect text-based attacks (not just malicious links and attachments), suspicious formatting and requests, language that tries to create urgency, attention-getting subject lines, and more.
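
As a toy illustration of the text-analysis part of that approach (and only that part), a few lines of scikit-learn can train a classifier to separate phishing-flavored wording from ordinary business email. The four training messages and labels below are invented purely for demonstration; production models are trained on large labeled corpora and combine many non-text signals such as sender behavior and relationship history.

    # Toy sketch of text-based phishing classification with scikit-learn.
    # The training emails and labels are invented for demonstration only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    emails = [
        "Urgent: verify your account within 24 hours or it will be suspended",
        "Congratulations! Click here to claim your free PlayStation 5",
        "Attached are the minutes from Tuesday's project meeting",
        "Lunch on Thursday? The usual place at noon works for me",
    ]
    labels = [1, 1, 0, 0]  # 1 = phishing-style, 0 = legitimate

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(emails, labels)

    test = "Final notice: confirm your bank details now to avoid account closure"
    print(model.predict([test]), model.predict_proba([test]))

The point is the principle rather than the toy model: urgency, giveaway language, and requests for credentials are learnable signals regardless of whether a human or a chatbot wrote the email.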

So, yes, chatbots do have the potential to help attackers craft more believable phishing emails. But those emails can still be detected and stopped from doing harm with the help of effective countermeasures.