A South Korean mother received a chilling phone call in April, believing her daughter was in danger. The voice on the line, claiming to be her child, pleaded for help, stating, “Mom, I’ve been kidnapped. Save me…” However, the desperate plea was fabricated using artificial intelligence (AI) technology, part of a growing wave of sophisticated voice-based scams targeting parents, according to reports from the Korea Herald and Asia Economy.
Financial authorities in South Korea have issued a consumer alert following a surge in these AI-powered “voice phishing” schemes. Scammers are leveraging AI to clone the voices of children, often adding fabricated sounds of distress, such as crying or a struggle, to pressure parents into sending money quickly. The Asia Economy reported that the Financial Supervisory Service (FSS) issued a “caution” level consumer alert on February 1, 2026, warning of the growing prevalence of these scams.
The scams frequently begin with calls to parents in areas with a high concentration of schools and academies. The callers often demonstrate specific knowledge, such as the child’s name, school, and even the academy they attend, lending an air of authenticity to their claims. Once contact is established, they play the AI-generated audio of a child in distress, then invent a scenario, ranging from accusations of misbehavior to fabricated injuries, to justify an urgent request for funds. The MK Business News reported that scammers now often request relatively small amounts, around 500,000 won (about $375), aiming for quick payouts before victims can fully investigate.
The FSS has advised parents to hang up immediately on such calls and contact their children directly to verify their safety. The speed and emotional manipulation inherent in these scams are designed to bypass rational thought, making it crucial for parents to confirm their child’s well-being through independent means. The KyungHyang Shinmun reported that scammers are using AI to create convincingly realistic yet subtly indistinct cries, making it difficult for parents to recognize the voice as artificial in the moment.
Authorities are also warning of a related tactic where scammers claim a child has damaged a mobile phone screen and request money for repairs. This mirrors a pattern of increasingly plausible, everyday scenarios used to pressure victims. The FSS is collaborating with telecommunication companies to implement AI-based detection services aimed at identifying and blocking these fraudulent calls, but the evolving nature of the technology presents a continuous challenge.
As of February 1, 2026, the FSS has not announced any arrests related to these specific AI-powered scams, and the investigation remains ongoing. No further updates have been released regarding the development of preventative measures beyond the collaboration with telecommunication companies.