Cybercriminals pose as employees of well-known companies to deceive their victims. (Photo: EFE/Hannibal Hanschke)
Artificial intelligence (AI) is being used by cybercriminals to trick hundreds of users into revealing confidential information in order to access their email or, in the worst case, steal money from their bank accounts.
One of the latest recorded tactics affecting Gmail accounts involves the use of AI to simulate urgent interactions with Google customer service, interactions that turn out to be entirely fake.
This happened to Sam Mitrovic, a solutions consultant at Microsoft, who recently came close to falling victim to this scam and shared key details about how this modus operandi works.
The attack starts from the email platform. (Photo: Unsplash)
Mitrovic shared his story on his personal blog, detailing how the scam began with a seemingly legitimate notification in his Gmail account. As he explains, he received an alert requesting approval for an attempt to recover his account, a process he had not initiated.
Although he rejected the request, this was only the first step in a much more elaborate campaign. Shortly after this incident, Mitrovic received a phone call that, at first glance, appeared to come from Google support in Australia.
Although the call seemed genuine, he chose not to answer at first. A week later, however, the scenario repeated itself. Moved by curiosity, he finally decided to respond, and what followed revealed the cunning and danger of these new forms of scam.
Answering calls from unknown numbers is a common source of threats. (Photo: Freepik)
On the other end of the line, an alleged Google operator informed him that suspicious activity had been detected on his account. According to the caller, someone had been accessing his personal data for a week. The conversation seemed real, delivered in a professional and convincing tone.
But what Mitrovic didn’t know at the time was that the voice he heard did not come from a human operator, but from an artificial intelligence system imitating the behavior of a support employee.
Likewise, this type of scam is not an isolated case. According to recent reports, hundreds of Gmail users have been scammed using similar tactics, in which AI tools are used to replicate interactions with customer service representatives from large technology companies.
These systems are capable of analyzing and responding in real time, adapting to the user’s responses to make the interaction even more credible.
Scammers rely on a sense of urgency to pressure users. (Photo: Shutterstock)
The sophistication of these techniques lies in AI’s ability to simulate not only an operator’s voice, but also the typical behavior and responses one would expect from a legitimate customer service team.
In Mitrovic’s case, the scam was so convincing that, had he not been alert, he might have ended up handing over access to critical information in his email account.
Furthermore, these types of tactics show a change in the scenario of online scams. Cybercriminals are no longer limited to sending poorly worded phishing emails or attempting to deceive with fraudulent text messages.
Now, AI is used to create interactions that feel authentic, increasing the likelihood of success. Moreover, these scams are not limited to email accounts; they also target victims’ social networks, bank accounts, and other online services.
AI is capable of imitating typical human behaviors. (Illustrative image: Infobae)
Mitrovic’s experience is a reminder of the importance of always being alert to unexpected notifications or calls. Although technology advances, so do threats, and the responsibility for protecting personal accounts falls largely on users.
The general recommendation to avoid falling into this type of trap is simple but effective: always verify the authenticity of any communication that requests personal information or account access. When in doubt, contact the company through official channels and never provide sensitive information to strangers.
For their part, users should be aware of some common warning signs. Unexpected calls, emails requesting personal information, or messages that convey urgency are frequent tactics in these attacks.
Scammers seek to create a sense of urgency so that the victim acts quickly without thinking. For scams involving artificial intelligence, the level of sophistication can make detection difficult, but remaining vigilant and wary of unsolicited communications remains one of the best defenses.