How to Protect Yourself (and Your Loved Ones) From AI Scam Calls

AI tools are getting better at cloning people’s voices, and scammers are using these new capabilities to commit fraud. Avoid getting swindled by following these expert tips.

excerpt:

You answer a random call from a family member, and they breathlessly explain how there’s been a horrible car accident. They need you to send money right now, or they’ll go to jail. You can hear the desperation in their voice as they plead for an immediate cash transfer. While it sure sounds like them, and the call came from their number, you feel like something’s off. So, you decide to hang up and call them right back. When your family member picks up your call, they say there hasn’t been a car crash, and that they have no idea what you’re talking about.

Congratulations, you just successfully avoided an artificial intelligence scam call.

As generative AI tools get more capable, it is becoming easier and cheaper for scammers to create fake—but convincing—audio of people’s voices. These AI voice clones are trained on existing audio clips of human speech, and can be adjusted to imitate almost anyone. The latest models can even speak in numerous languages. OpenAI, the maker of ChatGPT, recently announced a new text-to-speech model that could further improve voice cloning and make it more widely accessible.

Of course, bad actors are using these AI cloning tools to trick victims into thinking they are speaking to a loved one over the phone, even though they’re talking to a computer. While the threat of AI-powered scams can be frightening, you can stay safe by keeping these expert tips in mind the next time you receive an urgent, unexpected call.

Remember That AI Audio Is Hard to Detect

It’s not just OpenAI; many tech startups are working on replicating near-perfect-sounding human speech, and the recent progress is rapid. “If it were a few months ago, we would have given you tips on what to look for, like pregnant pauses or showing some kind of latency,” says Ben Colman, cofounder and CEO of Reality Defender. Like many aspects of generative AI over the past year, AI audio is now a more convincing imitation of the real thing. Any safety strategy that relies on you audibly detecting weird quirks over the phone is outdated.

full article: https://www.wired.com/story/how-to-protect-yourself-ai-scam-calls-detect/