Using Generative AI To Automate Conversational Phishing Attacks

Sebastian Salla | Published: July 03, 2024

Artificial Intelligence (AI) is genuinely turning the technology world on its head. It seems like every day, a new application for AI is being discovered and made available to the general public. While AI is overwhelmingly being used to bring a positive impact to the world, there is a dark underbelly where AI is used for a range of malicious purposes. AI-enabled phishing is one such purpose.

In this blog, we'll detail what conversational phishing is, why conversational phishing attacks are dangerous, and finally, the step-by-step process of how Generative AI is used to automate conversational phishing attacks.


What Is Conversational Phishing?

Traditionally, phishing has been one-dimensional. Victims receive an email that immediately entices them to perform an action. This action could be clicking a phishing link, downloading an attachment, buying gift cards, or any number of other things.

The benefit of traditional phishing is that it can be highly automated, allowing attackers to send thousands or millions of emails daily.

Conversational phishing attacks take this a step further. Instead of being immediately enticed to perform an action, victims are drawn into a seemingly innocuous conversation with no defined action other than simply responding to an email.

Image with information about the difference between traditional phishing and conversational phishing

Before AI, conversational phishing was reserved for highly targeted spear-phishing attacks, in which the attacker manually reads and responds to emails from their victims. The key limiting factor here is the human element: it prevents attackers from scaling these attacks beyond a few hundred targets at any given time.

Why Is Conversational Phishing Dangerous?

Conversational phishing emails are dangerous because of two key elements.

Image with information about how humans and email security are both not equipped to handle conversational phishing

1. Modern Email Filters Aren't Designed To Catch Them

Email security tools aren't readily equipped to catch and prevent conversational phishing attacks. This is due to the nature of what these tools look for when they try to determine whether an email is spam or phishing.

The important thing, in this case, is the payload. When attackers send traditional phishing emails, where the payload, whether it is a link, attachment, or information request, is directly embedded in the initial email, email security technologies can analyze and detonate this payload.

Payload analysis heavily influences the reputation score assigned to any given email, which is then used to determine where the email should go, whether it's the user's Inbox, Spam folder, or Administrative Quarantine folder.

When attackers use conversational phishing, the payload isn't readily available, and email security tools are robbed of this key analysis capability. At the same time, once a back-and-forth conversation is underway, email security tools naturally begin to trust the sender more, so when the attacker does finally send their payload, it's much more likely to land in the user's Inbox rather than being filtered.

2. They Allow Attackers To Abuse Societal Norms

Have you ever wondered why Voice Phishing (Vishing) attacks are so successful, even when they're much more manual and, in many cases, require direct human-to-human interaction?

It's because cybercriminals can directly abuse societal norms to hook people into a conversation, building trust with the victim despite being a seemingly random stranger. Salespeople exploit these same norms all the time, often convincing customers to buy something they wouldn't otherwise buy.

Conversational phishing attacks are no different from this. These same societal norms can be abused by engaging a victim in conversation. Once engaged in conversation, the victim may feel obligated or indebted to the attacker because the attacker has spent their seemingly valuable time engaging with them.

How To Automate Phishing Attacks With AI In 5 Steps

Now that we understand what conversational phishing is, and why it's dangerous, let's walk through the step-by-step process of using Generative AI to automate conversational phishing attacks.

Step 1. Establish The Scenario

Image with AI on the left and various phishing scenarios on the right

Every phishing attack needs a believable scenario: something the victim can relate to or is otherwise expecting. Scenarios that are common across industries and geographic locations include:

  • New System Access Requests: Pretending that a new system is being released and the victim has had an access request raised on their behalf.
  • Expense Claim Discrepancies: Pretending that an expense claim the employee recently submitted has some mistakes that need to be corrected before the request can be finalized.
  • Billing Overcharges: Pretending that the victim was accidentally overcharged for a service they procured from a fictitious or popular company.
  • Potential Job Opportunities: Pretending that a fictitious or popular company is recruiting individuals with their experience at above-market rates.
  • Attempted Delivery Notifications: Pretending that a courier failed to finalize a delivery due to insufficient delivery information.

Once a scenario is selected, pairing it with a payload uniquely suited to the end-state objective is crucial. Typically, these objectives are:

  • Credential Compromise (Phishing Website): By using phishing links and websites, cybercriminals can harvest credentials and impersonate the online identity of the victim.
  • Endpoint Compromise (Phishing Attachment): By using attachments, cybercriminals can gain access to the victim's host computer.
  • Business Email Compromise (Information Request): By getting a victim to disclose sensitive information, attackers can potentially perform identity theft or various other malicious attacks.
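The scenario-to-objective pairing described above can be sketched as a simple lookup table. A minimal sketch for a simulated phishing campaign; all scenario and payload identifiers are illustrative assumptions, not names from any real tool:

```python
# Illustrative pairing of phishing scenarios with the payload type best
# suited to each end-state objective (credential compromise, endpoint
# compromise, or business email compromise).
SCENARIO_PAYLOADS = {
    "new_system_access_request": "phishing_website",    # credential compromise
    "expense_claim_discrepancy": "phishing_website",    # credential compromise
    "billing_overcharge": "information_request",        # business email compromise
    "job_opportunity": "phishing_attachment",           # endpoint compromise
    "attempted_delivery": "phishing_website",           # credential compromise
}

def payload_for(scenario: str) -> str:
    """Return the payload type paired with a scenario, or raise if unknown."""
    try:
        return SCENARIO_PAYLOADS[scenario]
    except KeyError:
        raise ValueError(f"No payload mapping for scenario: {scenario}")
```

The important property is that the pairing is fixed up front: a mismatched scenario and payload (say, a delivery notification that asks for system credentials) breaks believability immediately.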

Step 2. Create The AI Persona

Image with information about how to make an AI as human-like as possible

Humans are unique, and their personalities or moods can change from day to day. We need to train our AI to replicate this.

To make our AI sound as human as possible, we need to define certain criteria that its persona should align with. These criteria include:

Who Is The AI?

This needs to be addressed from the victim's perspective. Is the AI a member of the victim's internal IT/Finance/HR/Legal team? Or perhaps it's a colleague, friend, or acquaintance? There are various possibilities for who the AI could be, but it's important that the persona is mapped to the scenario the victim will be receiving.

For example, a victim wouldn't expect to receive a new system access request from a friend, but they would expect one from a member of their internal IT team.

What Is The Name Of The AI?

Depending on whether the AI is impersonating a specific individual, such as the CEO or a behind-the-scenes IT admin, the name of the AI may need to be static or random. Remember that the name of the AI should be directly tied to who the AI is impersonating.

What Is The Tone Of The AI?

Some humans are mean, some friendly, and others authoritative. The tone of a conversation can heavily influence the outcome, and in some cases, a certain tone is expected, particularly if the scenario calls for it.

For example, if you're conversing with a friend, you'd expect the tone to be friendly. Likewise, the tone will probably be professional if you're conversing with a colleague. Depending on the scenario, some different tones worth considering can include Friendly, Charismatic, Hostile, Stern, Casual, Professional, Apologetic, and Authoritative.
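The three persona criteria above (who, name, tone) can be captured in a small configuration object and rendered into a system prompt. A minimal sketch; the field names and prompt wording are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Persona:
    role: str   # who the AI is, from the victim's perspective
    name: str   # static when impersonating a specific person, else randomized
    tone: str   # e.g., "Friendly", "Professional", "Authoritative"

    def to_system_prompt(self) -> str:
        # Render the persona criteria into a single system prompt string.
        # The "same company" phrasing assumes an internal-colleague persona.
        return (
            f"You are {self.role} who works at the same company as the recipient. "
            f"Your name is {self.name}. Your tone is {self.tone.lower()}."
        )

persona = Persona(role="a member of the IT department",
                  name="John Doe", tone="Professional")
```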

Step 3. Personalize Conversations With Victim Information

The more information you have about a victim, the better. You can use this to personalize the phishing conversation, making it more believable and more challenging to detect.

As a baseline, you should have the following information available for a simulated phishing campaign: Email Address, First Name, Last Name, Job Title, and Company Name.
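These baseline fields can be validated and folded into the "known information" prompt that personalizes the conversation. A hypothetical sketch; the field names and prompt wording are illustrative:

```python
def victim_context(victim: dict) -> str:
    """Turn baseline campaign fields into a known-information prompt line."""
    required = ("email", "first_name", "last_name", "job_title", "company")
    missing = [field for field in required if not victim.get(field)]
    if missing:
        raise ValueError(f"Missing baseline fields: {missing}")
    return (
        "The following information is known about the individual being "
        f"emailed: their name is {victim['first_name']} {victim['last_name']}, "
        f"and they work as a {victim['job_title']} at {victim['company']}."
    )
```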

Step 4. Use Generative AI To Create A Phishing Email

Now that we understand how to create a scenario, build the AI persona, and utilize victim information, it's time to use Generative AI to create the initial phishing email. We'll showcase how this is done using prompts with a Large Language Model (LLM).

Let's set some parameters for demonstration purposes. We'll use the "New System Access Request" scenario; our AI will be a member of the victim's internal IT team, with a professional tone and the name John Doe. Based on this, let's develop the AI system prompts and detail what each prompt does.

System Prompt: This tells the AI who they are (relative to the victim). "You are a member of the IT department who works at the same company as the recipient. Your name is John Doe."
System Prompt: This sets the tone of the email conversation. "You are professional, and your writing style mimics the flow and tone of a natural email conversation."
System Prompt: This sets the scenario used to generate the initial email. "Over email, you are notifying an employee that a request has been received for them to access the new collaboration system managed by your team. Additional information is needed before this request can be processed."
System Prompt: This sets the action that's required of the victim. "The details of what's required will be provided after their response. You're going to confirm their availability to provide this information."
System Prompt: This tells the AI what information is known about the victim. "The following information is known about the individual being emailed: The company they work at is Contoso Corp."

Once these prompts are fed to the LLM, we'll end up with an email similar to the one demonstrated below:

An example of an AI powered conversational phishing message
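In code, the five system prompts can be assembled into the message list that a chat-completions-style LLM API expects. This is a minimal sketch for a simulated phishing exercise; the `llm_client.complete()` call is a placeholder for whichever LLM client is actually used:

```python
# The five system prompts from the "New System Access Request" demonstration.
SYSTEM_PROMPTS = [
    "You are a member of the IT department who works at the same company "
    "as the recipient. Your name is John Doe.",
    "You are professional, and your writing style mimics the flow and tone "
    "of a natural email conversation.",
    "Over email, you are notifying an employee that a request has been "
    "received for them to access the new collaboration system managed by "
    "your team. Additional information is needed before this request can "
    "be processed.",
    "The details of what's required will be provided after their response. "
    "You're going to confirm their availability to provide this information.",
    "The following information is known about the individual being emailed: "
    "The company they work at is Contoso Corp.",
]

def build_messages(system_prompts: list) -> list:
    """Convert system prompt strings into chat-style message dicts."""
    return [{"role": "system", "content": prompt} for prompt in system_prompts]

messages = build_messages(SYSTEM_PROMPTS)
# initial_email = llm_client.complete(messages)  # placeholder LLM call
```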

Step 5. Use Generative AI To Read Emails & Create Counter-Responses

Once the initial phishing email is sent and a response is received, it's time to interact with the LLM again. Ideally, this should pick up where we left off in Step 4, with all system prompts and generated text saved in a state file.

The reply received from the victim should be parsed and provided to the LLM as a user message. Alongside this user message, the AI should be prompted to deliver the simulated phishing payload.

System Prompt: This tells the AI what to do next (e.g., click a link). "Ask the user to..."

Once these prompts are fed to the LLM, we'll end up with an email counter-response similar to the one displayed below:

An example of an AI powered conversational phishing back-and-forth conversation

This same approach can then be used to continue the conversation for as many responses as are necessary to meet the end-state objective.
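Mechanically, each round is a reload-append-regenerate loop over the saved state. A minimal sketch for a simulated phishing exercise; the message text is illustrative and the LLM call is left as a placeholder:

```python
import json

def advance_conversation(history: list, victim_reply: str,
                         payload_instruction: str) -> list:
    """Append the victim's reply and the next instruction to the history."""
    return history + [
        {"role": "user", "content": victim_reply},           # parsed reply
        {"role": "system", "content": payload_instruction},  # next action
    ]

# The state from Step 4 would normally be persisted between emails as JSON.
saved_state = json.dumps([{"role": "system",
                           "content": "You are a member of the IT department..."}])
history = json.loads(saved_state)  # reload when the victim's reply arrives
history = advance_conversation(
    history,
    "Hi John, sure - what do you need from me?",   # illustrative victim reply
    "Thank them, then deliver the simulated payload instruction here.",
)
# counter_response = llm_client.complete(history)  # placeholder LLM call
```

Because the full history is resubmitted on every round, the model stays consistent with the persona, scenario, and everything said so far, which is what keeps a long back-and-forth believable.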


To stay a step ahead of cybercriminals, it's essential that we, as cyber defenders, use the same tools, tactics, and techniques that they do. By utilizing Generative AI to automate conversational phishing attacks, we can effectively train employees to spot malicious AI-generated phishing emails.

At CanIPhish, we've developed a fully automated conversational phishing engine that's powered by Generative AI. It provides realistic, nuanced, multi-lingual, and unique conversations, offering a truly immersive training experience. Don't just take our word for it; try our public phishing email simulator, which demonstrates how these conversations look and feel.

Written by

Sebastian Salla

A Security Professional who loves all things related to Cloud and Email Security.