There is growing evidence that cybercriminals are leveraging AI chatbots for nefarious purposes such as phishing. Chatbots such as ChatGPT can generate content that is grammatically correct, free of spelling mistakes, and convincing enough to use in social engineering and phishing campaigns. AI-generated phishing content can be very difficult to identify as malicious, since the emails lack many of the tell-tale signs of a phishing attempt. While AI chatbots certainly have the potential to change the phishing landscape, generating lure content is not the only way cybercriminals are exploiting them.
Chatbots such as ChatGPT have proven incredibly popular, and many companies have rushed to release their own AI chatbots. With multiple chatbots available and high demand for these tools, phishers have taken advantage by creating websites offering fake AI chatbots. These websites claim their AI chatbot is even more advanced than ChatGPT and can be used by anyone to get rich quick, or by businesses to handle customer service inquiries and eliminate the need for expensive human labor.
Links to these websites are sent out in phishing emails that promote these new tools. If the link is clicked, the user is directed to a website where they are asked to register and disclose sensitive information or to download a chatbot app. The downloaded app is malware: a Trojan that provides the attacker with access to the victim's device, spyware or a keylogger that steals personal information and credentials, or another malware variant.
AI chatbots are incredibly expensive to develop and train, with analysts estimating that the cost of training these AI tools is at least $4 million, and the running costs of ChatGPT have been estimated at around $700,000 per day. AI chatbots also attract a lot of media attention, so the release of a new chatbot, especially one claimed to be better than ChatGPT, is unlikely to fly under the radar. If you receive an email offering a new AI chatbot, it is likely a scam.
You could check the website to see when its domain was registered, look for contact information on the site, or do a quick Google search to see if there has been any news coverage. The best thing to do, however, is to simply delete the email or report it to your security team. If you want to use an AI chatbot, use one of the reputable chatbots such as ChatGPT, Microsoft's Bing, or Google's Bard.
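One of the checks above, looking at how recently a domain was registered, can be scripted. The sketch below parses the "Creation Date" field out of a WHOIS response and flags very young domains, a common trait of scam sites. The WHOIS text here is a hypothetical, hard-coded example (in practice you would fetch it with a `whois` lookup), and field names vary between registrars, so treat this as an illustration rather than a robust checker.

```python
import re
from datetime import datetime, timezone
from typing import Optional

def domain_age_days(whois_text: str, now: datetime) -> Optional[int]:
    """Parse the 'Creation Date' field from a WHOIS response and
    return the domain's age in days, or None if no date is found."""
    match = re.search(r"Creation Date:\s*(\d{4}-\d{2}-\d{2})", whois_text)
    if not match:
        return None
    created = datetime.strptime(match.group(1), "%Y-%m-%d")
    created = created.replace(tzinfo=timezone.utc)
    return (now - created).days

# Hypothetical WHOIS output for a newly registered scam domain.
sample = """Domain Name: EXAMPLE-AI-CHATBOT.COM
Creation Date: 2023-04-01T09:30:00Z
Registrar: Example Registrar, Inc."""

now = datetime(2023, 5, 1, tzinfo=timezone.utc)
age = domain_age_days(sample, now)
if age is not None and age < 90:
    print(f"Warning: domain is only {age} days old")
```

A domain registered only weeks before the email arrived is a strong red flag; established services like ChatGPT run on domains that are years old.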
Cybercriminals can use other methods to drive traffic to their malicious websites, including malicious Google Ads. There has been an increase in 'malvertising' for malware delivery and phishing in recent months, where malicious ads are used to drive traffic to attacker-controlled websites. While these adverts are often rapidly identified and taken down by Google, they do not have to be active for long to drive huge amounts of traffic to malicious websites. Businesses can protect against these attacks by using a web filter such as WebTitan. For consumers, the same advice applies as for phishing: be very cautious, and if an offer seems too good to be true, it most likely is a scam.
Due to the popularity of AI chatbots, businesses should consider adding chatbot-related lures to their phishing simulations to see how many employees click these links. This is easy to do with the SafeTitan security awareness training and phishing simulation platform. Any employee who clicks the link in the email will automatically be provided with training content relevant to that threat. By providing this intervention training, employees will be more likely to recognize and avoid a similar scam email the next time one arrives. For more information on SafeTitan, give the TitanHQ team a call.