A San Diego grandmother narrowly avoided losing thousands of dollars to a scam that used AI-generated voice cloning to imitate her beloved grandson. The incident began when Maureen, a North County grandma, received a call from an anonymous number, which she initially assumed was her sister phoning from a blocked line. The voice on the other end sounded eerily like her distressed grandson.
Convincing AI deception
The caller claimed to be her grandson and told Maureen that he had been in a car accident, was wearing a neck brace, and was on his way to the police station. He urgently requested $8,200 for bail. The AI-generated voice was so convincing that Maureen did not hesitate to believe it was her grandson. A supposed lawyer then joined the call, lending credibility to the scam by claiming that her grandson had struck a diplomat in the accident and insisting the matter be kept secret and settled within 72 hours.
Fearing for her grandson’s safety, Maureen initially fell for the scam. She hurriedly gathered the supposed bail money and rushed to the bank to withdraw more. Before handing over her hard-earned cash, however, she wisely contacted her daughter to check on her grandson. To her relief, she learned that her real grandson was safe and attending a golf tournament. The revelation enraged the scammer, who vented his anger in a follow-up call with Maureen’s daughter.
AI-powered scams on the rise
Impersonation scams, such as the ‘grandma scam,’ involve fraudsters posing as trusted individuals and inventing emergencies to pressure victims into sending money. These scams disproportionately target the elderly, and they are on the rise and a growing concern for law enforcement. Artificial intelligence exacerbates the problem by making voice imitation cheaper and more accessible.
AI tools can now clone voices (e.g., ElevenLabs) and synthesize realistic imagery (e.g., Stable Diffusion), and related deepfake techniques can even manipulate mouth movements in video, making it increasingly challenging for people to tell authentic audio and video recordings from fakes. According to the Federal Trade Commission, impostor scams were the second most prevalent scam in the U.S. in 2022, with more than 36,000 reported cases and over $11 million in losses attributed to phone-based incidents.
Protecting against AI scams
In response to the rising threat of AI-driven scams, Maureen’s family devised a protective strategy: a “safe word.” This word is known only to family members and serves as a verification step during suspicious calls. The key is to never share the word via text or email, but to communicate it only in person or directly over the phone.
Maureen emphasized the emotional distress caused by the scam and expressed her desire to spare others from similar experiences. Using safe words and direct phone communication is a proactive approach to counter the growing challenge of AI-fueled deception.
The story of Grandma Maureen’s close call with an AI voice-cloning scam highlights the growing sophistication of fraudsters. As AI technology evolves, adopting simple verification measures like a family safe word becomes essential for avoiding the emotional and financial harm these deceptive scams can cause.