Artificial intelligence (AI) has made its way out of sci-fi films and into real life, and it’s gradually taking on more sophisticated roles in our society. The bad news is that as AI becomes smarter, cheaper, and more widely available, it’ll only get easier for criminals to incorporate this technology into their scams. The worse news is that it’s already started.
With spam and scam calls already rampant, AI is the next wrinkle consumers must prepare for. Whereas most spam calls are annoying but harmless (if you know how to handle them), AI scam calls can be dangerous and traumatic. These scams may use cloned voices of your loved ones, which can take an emotional toll. And, like any other scam, they can compromise your privacy and drain your finances.
Scammers and spammers stay at the cutting edge of technological advancements, so when a new type of tech is integrated into our culture, scammers integrate it into theirs. AI is the next frontier for fraudsters, and it’s up to us to understand how to protect ourselves and our families. Read on for a comprehensive look at AI scams, voice cloning, and how to avoid them both.
Whereas spam calls, scam texts, and phishing emails generally rely on the same tactics, AI scams add completely new elements to the fraudster’s repertoire. They may incorporate more traditional scamming methods in order to initially pull in their targets, but the use of artificial intelligence allows them to elevate their fraud.
AI scams take different forms, often using voice cloning or fake chatbots, but the goal is the same as always: to steal information, assets, or identities. Tactics like voice cloning can be emotionally triggering to their targets. By inventing a highly emotional situation, scammers trick their victims into cooperating with their demands before they realize they’re being deceived.
Like smishing, robocalling, and caller ID spoofing before it, voice cloning is the latest in a long list of technology-based scam tactics. As scammers build on traditional methods with newer AI capabilities, their schemes only stand to become more dangerous.
Scammers are finding various ways to use AI technology to their advantage. AI scams can be especially intense, which is why it’s crucial to understand how to recognize them. Here’s a brief overview of some of the new types of AI scams that have been hitting the phone lines.
One of the more popular uses of AI in phone scams is voice cloning. This particularly conniving tactic allows scammers to replicate the voices of their target’s loved ones. Once the voices have been copied, criminals can use these clones to activate voice-controlled devices, ask loved ones for personal information (voice phishing), or even commit virtual kidnapping.
Hands-free interaction with our devices is convenient, but voice cloning exposes it as a weak point in our security. Any scammer with access to your voice can talk to voice assistants like Siri and Alexa, which means they may also have access to your credit cards, bank accounts, and personal identification.
Between caller ID spoofing and voice cloning, a crafty scammer may have you thinking you’re on the phone with your best friend — the call will come up as their number, and you’ll hear what sounds like their voice. However, if they start asking questions that your friend wouldn’t normally ask, hang up the phone and call back. AI voice cloning allows scammers to casually ask for sensitive information you’d only share with friends and family.
Jennifer DeStefano and her family lived through the harrowing experience of an AI kidnapping scam and have shared their story with the world. While bringing her younger daughter, Aubrey, to the dance studio, Jennifer answered the phone to hear the crying voice of her 15-year-old daughter, Brianna, along with that of an unknown and threatening male.
As the caller made his demands — including $1 million in ransom — Jennifer had other moms at the studio call 911 and attempt to reach Brianna. Fortunately, the dispatcher recognized the situation as a scam, and Jennifer’s son was able to reach the allegedly kidnapped daughter and confirm that she was fine. Although everyone involved escaped without physical harm or financial loss, the emotional impact of this type of scam may last a lifetime.
A deepfake is when someone’s likeness or voice is copied in a realistic, believable way. You may have seen AI deepfakes of different actors’ faces over characters in your favorite movie, or you might have heard a soundboard that sounds just like a famous celebrity, saying things they definitely didn’t say. Deepfakes use AI to fabricate a close copy of the target’s persona, creating a situation in which it seems like the person is really involved.
Scammers use social engineering to manipulate their targets’ emotions, and deepfakes can be a direct shortcut. When convincing a target to turn over their personal information, the voice of a loved one in distress provides instant leverage. They think their family member is in trouble, so they readily cooperate with the scammer to de-escalate the situation. Some scammers may impersonate a government official or other influential person rather than a target’s friends or family.
People can easily use deepfake technology to create fraudulent content and associations. When you can paste someone’s face and voice over someone else’s, you can make that person appear to do or say just about anything. Since it allows you to create false evidence, deepfaking is a Swiss army knife for framing and extorting people.
Modern AI can be used to create everything from fake voices and speech patterns to fully fabricated images and videos, so generating text is no problem. Unfortunately, it may turn out to be a huge problem for us.
AI can generate text in the style of any person or entity, including authoritative news sources. This can be harmless and entertaining when used in a vacuum, but it can be dangerous when applied to the real world. AI-generated text can be used to leave fake reviews to inflate the value of a bad product or send phony emails and texts on behalf of a political candidate.
If you know a celebrity, athlete, or other public figure well enough, you might do a pretty good impression of them even without AI. With AI, however, it can be easy to impersonate a social media influencer or well-known personality — especially via text. AI can easily emulate the writing style of famous or powerful figures to scam people out of information, money, and other assets. Think twice before answering a direct message from a model or a text from your boss’s number that asks for unusual information.
The point of phishing is to make the target think a phony message is real, and AI has made it easier than ever for scammers to do just that. By using AI to copy the structure, style, and tone of actual marketing emails and social media posts, they can generate phishing and smishing scams that seem legitimate and professional.
In addition to caller ID, scammers spoof websites, down to the chatbots and AI assistants that live within them. Criminals can create near-identical copies of familiar websites, making you think the forms you’re filling out and chatbots you’re interacting with are trustworthy. This process usually starts with phishing links in an email or text, so it’s worth repeating: Never click links from unknown senders.
Spoofed websites can look just like the real thing, so it can be difficult to tell if you’re on one. If a chatbot or AI assistant asks for personal information that you wouldn’t usually share with a bot, like usernames, passwords, and financial information, then you might be conversing with malicious AI. These bots aren’t associated with a brand you know and trust — they’re out to steal your sensitive information so they can sell it to other scammers or use it to hack your accounts.
Chatbots and AI assistants are supposed to assist consumers by answering their questions, helping them navigate through the website, and setting up their appointments when the humans are off the clock. Scammers use these robotic helpers for evil, however, and may impersonate companies you’ve shopped with before.
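One simple defense is to check where a link actually points before you tap it. The Python sketch below illustrates the idea; the trusted-domain list and the lookalike check are assumptions chosen for this example, not an official allowlist:

```python
from urllib.parse import urlparse

# Example allowlist; in practice, verify against a brand's official domain.
TRUSTED_DOMAINS = {"amazon.com", "paypal.com", "chase.com"}

def check_link(url: str) -> str:
    """Flag links whose host doesn't match a trusted domain."""
    host = (urlparse(url).hostname or "").lower()
    # Exact match or a legitimate subdomain (e.g., www.paypal.com).
    for domain in TRUSTED_DOMAINS:
        if host == domain or host.endswith("." + domain):
            return f"OK: {host} matches {domain}"
    # Lookalike check: a brand name buried in an unrelated host
    # (e.g., paypal.com.secure-login.net) is a classic spoofing pattern.
    for domain in TRUSTED_DOMAINS:
        if domain.split(".")[0] in host:
            return f"SUSPICIOUS: {host} imitates {domain}"
    return f"UNKNOWN: {host} is not on the trusted list"

print(check_link("https://www.paypal.com/signin"))          # OK
print(check_link("https://paypal.com.secure-login.net/x"))  # SUSPICIOUS
```

The same logic applies by eye: read the last two labels of the hostname before the first slash, and be suspicious if a brand name appears anywhere else in the address.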
With just a few seconds to a minute of data, scammers can use AI software to make a convincing copy of your voice and make it say anything they want. Criminals can create an infinite soundboard of your AI-generated voice just by running the samples through software. The fake voice matches your gender, tone, inflection, and emotion, creating a clone that even close friends and family would think is your real voice. Voice-cloning technology can be found for low prices or even for free, and it’s widely available online.
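To see why a few seconds of audio is enough, it helps to know that cloning systems first boil a clip down to a compact voiceprint that a synthesis model can then condition on. The Python sketch below is a simplified stand-in for that first step, using classic MFCC features from the librosa library; real cloning tools use neural speaker encoders, and "sample.wav" is a placeholder file name:

```python
import numpy as np
import librosa  # pip install librosa

def voice_signature(path: str) -> np.ndarray:
    """Distill an audio clip into a compact voiceprint vector.

    MFCC statistics are a classic, simplified stand-in for the neural
    speaker encoders real cloning systems use; the point is the same:
    a few seconds of speech collapse into a small, reusable vector.
    """
    audio, sr = librosa.load(path, sr=16000)  # mono, resampled to 16 kHz
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    # Mean and spread of each coefficient across time capture speaker
    # traits like pitch range and timbre, independent of what was said.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Even a 3-second clip yields the full 40-dimensional signature.
print(voice_signature("sample.wav").shape)  # (40,)
```

The takeaway: the signature doesn’t depend on what you said or for how long, which is why a short voicemail greeting or social media clip is all a scammer needs.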
Spam calls and texts are annoying and potentially dangerous, but AI scams can be downright scary. That’s why it’s critical to understand how to detect them as they happen and how to protect yourself if you find yourself wrapped up in one.
AI scams are tricky by nature, which is one reason they tend to be effective. When you know what to look for, however, you empower yourself to shut them down before any damage can be done.
Keep yourself and your family secure by recognizing these red flags in AI scams:

- A sense of urgency: the caller or message pressures you to act immediately, before you have time to think or verify.
- Requests for sensitive information, wire transfers, or gift cards.
- A familiar voice asking questions that person wouldn’t normally ask.
- Calls that appear to come from a loved one’s number but feel off in tone or context.
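As a rough illustration of how red flags like these can be screened automatically, here’s a small Python sketch; the phrase lists are example assumptions for demonstration, not a production filter:

```python
import re

# Example phrase lists; a real filter would be far more extensive.
URGENCY = ["right now", "immediately", "act fast", "before it's too late"]
SENSITIVE = ["social security", "password", "wire transfer", "gift card",
             "bank account", "verification code"]

def red_flags(message: str) -> list:
    """Return the scam warning signs found in a message."""
    text = message.lower()
    flags = []
    if any(phrase in text for phrase in URGENCY):
        flags.append("creates a sense of urgency")
    if any(phrase in text for phrase in SENSITIVE):
        flags.append("asks for sensitive information or payment")
    if re.search(r"\$[\d,]+", text):
        flags.append("names a specific dollar amount")
    return flags

print(red_flags("Act fast! Wire transfer $1,000 right now."))
# ['creates a sense of urgency', 'asks for sensitive information or payment',
#  'names a specific dollar amount']
```

No keyword list catches everything, which is why the strongest check remains behavioral: hang up and call the person back on a number you already have.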
There are steps you can take to make yourself less visible to bad actors and avoid being targeted by AI scams in the first place. If you do find yourself in the midst of one, it’s also important to know how to act.
Here’s how you can minimize your risk for AI scams and handle them safely if you’re targeted:

- If a loved one calls with an urgent request, hang up and call them back on the number you have saved for them.
- Never click links or download attachments from unknown senders.
- Don’t share usernames, passwords, or financial information with chatbots or unverified callers.
- Limit how much of your voice is publicly available online, since voice samples are the raw material for cloning.
- Use a spam-blocking app to stop scam calls and texts before they reach you, and report imposter scams to the FTC.
AI scams add a new dimension to the familiar credit card and car insurance scams we’ve all dealt with before, and they have the potential to do massive damage to businesses and individuals alike. The more of your voice that’s out there, the easier it is for criminals to clone it. This can be problematic for public figures, podcasters, singers, and anyone else whose voice is widely available online. Furthermore, businesses with remote employees may be more vulnerable to these types of scams because of their decentralized communications.
According to the FTC, consumers reported losing more than $2.6 billion to imposter scams in 2022, out of $8.8 billion in total reported fraud losses. Our research estimates that Americans actually lost over $85 billion to phone scams last year. Since those losses have grown year over year, we can expect things to get worse before they get better, even before factoring in AI.
AI scams are becoming more popular, but they’re not a brand-new concept. People have been using artificial intelligence and machine learning to pull scams for years — now, it’s just easier for the average scammer to get their hands on more capable technology. With this high-tech form of scam infiltrating our airwaves, we’ll need new legislation to combat, punish, and ultimately eliminate AI scams.
While we already have certain laws and regulations governing AI, privacy, and cybercrime, they weren’t designed to fight the types of AI scams we’ve seen so far. Let’s take a look at where they fall short.
The United States has many laws related to privacy and data protection, which is actually part of the problem. Instead of having overarching, centralized laws that apply to data protection across the board, we have smaller clusters of laws spread between federal and state governments. They may be divided by location, demographic, and type of data being collected, making it difficult to protect “data” as a whole. The Health Insurance Portability and Accountability Act of 1996 (HIPAA) and the Electronic Communications Privacy Act of 1986 (ECPA), for example, are two data protection laws that each focus on a different kind of data.
Much like privacy laws, some cybercrime legislation is already on the books. Again, however, the existing laws were not meant to handle the AI-infused phone scams that have recently been targeting American consumers. We will need to pass new legislation that focuses on these particular phone scams, and voice providers must continue to cooperate with agencies like the Federal Trade Commission (FTC) and Federal Communications Commission (FCC) to protect Americans against this new type of threat.
Although we have some legislation in place to help prevent cybercrime and protect consumer privacy and data, we’ll need to take additional measures to squash AI scams. There are a few specific areas in which we can focus our efforts to fight back against this particular type of fraud.
As of now, it’s relatively cheap and easy to create convincing deepfakes with AI. This is fine when you’re just pranking your friends or editing movie trailers for fun, but it has dangerous implications when scammers use this technology to defraud people of their money, assets, and identities. Future laws may seek to regulate the use of AI and deepfake technology to reduce the prevalence and impact of AI scams.
As the spam epidemic has shown us, the threat of law enforcement doesn’t deter foreign scammers from sneaking into our inboxes, voicemails, and bank accounts. It will take better international cooperation to eliminate spam and scams of all kinds from our airwaves — especially those that use AI.
Scammers stay up-to-date with the latest technology so they can incorporate it into their ploys. AI scams use tactics like voice cloning to defraud people and amplify the spam and scam calls that have already plagued our phones for years. That’s why it’s important to protect yourself with a third-party app like Robokiller.
Robokiller is a spam-blocking app that keeps scams, telemarketers, and robocalls from ringing your phone. It’s 99% effective at eliminating scam calls and texts thanks to audio fingerprinting, machine learning, and predictive algorithms. We’ve built our AI to serve as counterintelligence against the malicious AI that scammers use.
Unlike the AI technology harnessed by fraudsters, ours is deliberately designed to safeguard your peace of mind. By outsmarting the deceitful algorithms of the scamming underworld, Robokiller stops scam calls and texts before they reach you. It’s a classic case of good AI triumphing over bad.
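For a general sense of how audio fingerprinting works, the sketch below shows the textbook spectral-peak approach (not Robokiller’s proprietary pipeline): a recording is reduced to a set of hashes that survive noise and re-encoding, so repeated robocall recordings match even when the caller ID changes:

```python
import numpy as np
from scipy.signal import spectrogram

def fingerprint(audio, sr=8000):
    """Reduce audio to a set of hashes built from spectrogram peaks.

    Textbook spectral-peak fingerprinting, not any vendor's proprietary
    pipeline: two copies of the same robocall recording share many
    hashes even after noise, compression, or re-encoding.
    """
    freqs, times, power = spectrogram(audio, fs=sr, nperseg=512)
    peaks = power.argmax(axis=0)  # strongest frequency bin in each frame
    # Hash consecutive peak pairs so matches are robust to small shifts.
    return {(int(peaks[i]), int(peaks[i + 1])) for i in range(len(peaks) - 1)}

def similarity(a, b):
    """Jaccard overlap between two fingerprints (1.0 = identical)."""
    return len(a & b) / max(len(a | b), 1)

# Toy check: the same synthetic 440 Hz tone fingerprints identically.
tone = np.sin(2 * np.pi * 440 * np.arange(8000) / 8000)
print(similarity(fingerprint(tone), fingerprint(tone)))  # 1.0
```

Matching fingerprints across millions of calls is what lets a blocker flag a scam campaign the moment a known recording reappears, regardless of the number it comes from.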
Newly available AI technology has made it easy and affordable for scammers to clone people’s voices. With just a few seconds of audio, scammers can hijack your voice and make it sound like you’re saying whatever they want.
Scammers incorporate your voice into their scams using a process called AI voice cloning, in which they feed a sample of your voice through a program that recreates your tone, inflection, and timbre. With a small amount of data, they can create and manipulate a realistic copy of your voice — which they then use to dig for personal information, request a wire transfer, or ask your grandparents for gift cards.
In a word: yes. Scammers might need as little as three seconds of your voice to create a realistic and believable clone of it. The longer the sample they have, the more accurately they can clone your voice.
AI scams are often built into the usual spam framework, so they come with many of the same signs as spam calls, texts, and emails. Beware of communications that create a sense of urgency, ask for personal information, or use voice cloning to impersonate friends, family members, or other recognizable figures.