Digital Fraud, Threat Intelligence

How to prevent phishing scams that use generative AI from hurting your business

By Content Team

Phishing is often among the top cyber threats for any business. If you own an online platform, criminals can use phishing to steal your users' credentials, leading to data breaches and costs related to resetting passwords and communicating with the affected users.

Even businesses that don't operate a platform must be careful since they certainly use one. In fact, all businesses probably use multiple technology platforms for all kinds of tasks, from email to storage to social media management. Any employee or vendor can fall victim to phishing attacks that target these platforms' credentials and, worse, the corporate network itself.

This landscape is constantly shifting as hackers try out new approaches. That's why phishing messages have spread to social media websites, appearing in comments, advertisements, and private messages. Now, yet another significant change is looming on the horizon: generative artificial intelligence.

Criminals can use AI to help them write phishing messages, create unique images for more engaging content, and analyze any data they can find about a victim to weave a more convincing or relevant narrative for a particular user. This also works for adapting a message to different social media platforms.

This shift is already underway – one report from SlashNext found a 1,265% increase in phishing emails since the introduction of ChatGPT.

Although it's hard to be sure whether a specific message was written by a generative AI tool, that certainty isn't necessary to fight the phishing threat. What matters is understanding that this technology can help create a wider variety of phishing messages—even when the process isn't fully automated.

How do we keep up to protect our businesses and our clients from phishing?


Using AI to detect and respond to phishing

Criminals aren't the only ones to have AI at their disposal. Just like generative AI can speed up or even automate the process of creating a phishing message, machine learning models can be trained to recognize scam websites that attempt to steal credentials or other types of information.

Axur's Platform is focused on helping businesses remove phishing attacks that target their brand. We use AI to filter suspicious messages and content across many environments on the web, analyzing text for terms or references to the brands monitored by our solution. Audio content undergoes the same filtering after being transcribed by AI.
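The brand-term filtering step can be pictured with a minimal sketch. This is not Axur's implementation — the brand list, the function name, and the normalization are all hypothetical — but it illustrates the idea of flagging text that references a monitored brand, including common look-alike spellings:

```python
import re

# Hypothetical list of monitored brand terms; a real system would load
# these per customer and include many misspellings and homoglyph variants.
MONITORED_BRANDS = ["examplebank", "example bank", "examp1ebank"]

def mentions_monitored_brand(text: str) -> bool:
    """Return True if the text references any monitored brand term."""
    normalized = text.lower()
    return any(re.search(re.escape(term), normalized) for term in MONITORED_BRANDS)

# A lure using a digit-for-letter substitution still matches:
print(mentions_monitored_brand("Urgent: verify your Examp1eBank account now"))
```

A production filter would go far beyond substring matching (fuzzy matching, transcribed audio, multiple languages), but the output is the same: a stream of suspicious messages narrowed down to those relevant to the monitored brands.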

The links found inside suspicious messages are processed using a combination of AI and traditional algorithms that scan the page for the telltale signs of a phishing campaign, including:

  • An AI-powered visual inspection that identifies similar-looking elements, such as brand logos, photos of executives, or other objects.
  • Evasion techniques that attempt to prevent monitoring systems from accessing the website.
  • The language used and brand disambiguation to reduce false positives, especially for brands based on common words (Amazon, Apple, and Office are examples).
  • The presence of a password field, which suggests an attempt to steal credentials.
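The last signal — a password field on a suspicious page — is simple enough to sketch with Python's standard-library HTML parser. This is an illustrative toy, not the platform's scanner, and the class and function names are made up for the example:

```python
from html.parser import HTMLParser

class PasswordFieldFinder(HTMLParser):
    """Flags <input type="password"> elements, one of the telltale
    signs of a credential-harvesting page."""
    def __init__(self):
        super().__init__()
        self.found = False

    def handle_starttag(self, tag, attrs):
        # HTMLParser lowercases tag and attribute names for us.
        if tag == "input" and dict(attrs).get("type") == "password":
            self.found = True

def has_password_field(html: str) -> bool:
    finder = PasswordFieldFinder()
    finder.feed(html)
    return finder.found
```

On its own this signal produces false positives (legitimate login pages also have password fields), which is why it is combined with the visual, evasion, and language signals above before a verdict is reached.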

The visual processing enabled by AI is a real game-changer here. One significant challenge in phishing detection is that pages can use completely different code yet still look exactly the same. When we analyze phishing visually, we look at the page just like a human would. We have visibility into what the criminals want the victim to see, and we can do this at scale, 24/7.

Thanks to this, many phishing campaigns that used to look "different" are now correctly determined to be related. Regardless of how many AI-generated phishing scams an attacker creates, the scanner can still detect an attempt by identifying the essential elements that define one.
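To see why visual comparison groups pages that share no code, consider a toy average-hash sketch. Real visual models are far richer than this; the pixel grids, function names, and threshold here are all assumptions for illustration. Two pages rendered identically produce identical hashes, whatever their underlying HTML:

```python
def average_hash(pixels):
    """Hash a grayscale pixel grid: each bit records whether a pixel
    is brighter than the image's average brightness."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return [1 if p > avg else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count differing bits; small distances mean similar-looking pages."""
    return sum(a != b for a, b in zip(h1, h2))

# Two "screenshots" that render the same despite different page code:
original_look = [[12, 200], [34, 180]]
clone_look = [[12, 200], [34, 180]]
print(hamming_distance(average_hash(original_look), average_hash(clone_look)))
```

Because the hash is computed from what is displayed rather than from the source code, rewriting the page's markup (a common evasion tactic) does not change the result.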

Everything found on a malicious web page can be used to establish a risk level for that campaign. This is where the response to the phishing attack begins.


Preventing fraud with takedowns

No matter how many lures a phishing campaign uses, criminals still rely on malicious websites available at specific addresses. Even if generative artificial intelligence can automate the creation of the text and images used in phishing campaigns, the infrastructure needed to carry out the attack can't be automated in the same way.

In Axur's Platform, the response to phishing attacks can be configured based on risk level, so that campaigns considered "high risk" are immediately queued for automatic takedown.
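A risk-based response rule can be sketched as a simple weighted score over the signals described earlier. The weights, threshold, and names below are hypothetical — a real platform tunes these continuously — but they show how detection signals can roll up into an automatic-takedown decision:

```python
# Hypothetical weights for the detection signals discussed above.
SIGNAL_WEIGHTS = {
    "brand_logo_detected": 40,
    "password_field": 30,
    "evasion_techniques": 20,
    "brand_terms_in_text": 10,
}

HIGH_RISK_THRESHOLD = 70  # assumed cutoff for automatic takedown

def risk_score(signals):
    """Sum the weights of the signals observed on a page."""
    return sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)

def should_auto_takedown(signals):
    """Trigger an automatic takedown only for high-risk campaigns."""
    return risk_score(signals) >= HIGH_RISK_THRESHOLD

# A cloned logo plus a password field crosses the threshold:
print(should_auto_takedown({"brand_logo_detected", "password_field"}))
```

Keeping the threshold configurable is what lets lower-risk detections go to an analyst queue instead of triggering takedowns automatically.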

When the web page is down, users can't fall victim to that campaign even if they click the link in an email message or social media post that is trying to lure them in.

In our experience, certain criminals even give up on targeting brands that fight phishing with takedowns. The reason is that setting up the infrastructure takes quite a bit of work – if the malicious web page does not stay online long enough for the fraud to be successful, the threat actor may realize they're just wasting their time.

In conclusion, using AI for visual inspection alongside other customizable rules enables the detection of many phishing campaigns that would otherwise pose a challenge. As potential attacks are found, filtering campaigns by risk level and triggering takedown requests when warranted lets businesses maintain an effective anti-phishing strategy at scale, without getting bogged down by excessive alerts or time-consuming manual procedures.

GUEST EXPERT

Eduardo Schultze, CSIRT Coordinator at Axur, holds a degree in Information Security from UNISINOS – Universidade do Vale do Rio dos Sinos. He has worked since 2010 with fraud targeting the Brazilian market, primarily phishing and malware.

AUTHOR

Content Team

Experts in creating relevant external cybersecurity content to make the internet a safer place.