As we continue to embrace the wonders of artificial intelligence (AI), scammers have found a new way to exploit this technology. Fraudulent calls with AI-generated voices are on the rise, and thousands of people have already fallen prey to this type of scam.
According to Android Authority, scammers are using AI to mimic the voices of a victim’s friends or relatives in order to trick them into handing over sensitive information or money. This has become the second most common type of fraud in the United States, with more than 36,000 cases reported in 2022 alone. More than 5,000 of those victims lost money over the phone, amounting to roughly $11 million in losses.
Elderly People Are the Most Vulnerable
Elderly people are especially vulnerable to these scams, since they are less likely to suspect that a familiar-sounding voice on the phone could be fake. In one particularly shocking case, an elderly couple sent $15,000 through a bitcoin terminal to a scammer after believing they were speaking to their son. The AI-generated voice convinced them that their son was in legal trouble after killing an American diplomat in a car accident.
Responsibility for Damages Is Still Unclear
Sadly, US courts have not yet decided whether the companies behind these AI voice tools can be held liable for the damages caused by such scams. This is a troubling gap, as scammers continue to target vulnerable people with AI as their latest weapon.
As AI technology continues to develop at a rapid pace, it is important to stay vigilant and aware of potential scams. Always verify the identity of the person on the other end of the line, and never share sensitive information or transfer money until you have confirmed that the request is legitimate.