While technology has empowered humans to do some incredible things, it’s also contributed to a rise in scams. According to the Federal Trade Commission (FTC), 2.4 million consumers filed fraud complaints in 2022, with over $8.8 billion lost to scams (an increase of more than 30% compared with 2021).

Though there are many types of fraud, imposter scams were the most frequently reported, and artificial intelligence (AI) is making it easier than ever to pretend to be someone you’re not. A global study conducted by computer security company McAfee revealed that one in four people worldwide had either experienced a voice cloning scam or knew someone who had. With AI scams growing more prevalent every day, it’s essential to stay informed and take precautionary steps to protect yourself and those around you.

How scammers are using AI

Though fraud has been around for centuries, today’s scammers are increasingly using artificial intelligence to carry out even more sophisticated scams. Common examples of AI scams include:

Voice cloning

Voice cloning uses AI to replicate someone else’s voice. Scammers may pose as relatives or friends over the phone, fabricating a story about an emergency and asking their victim to send money immediately. Scammers can also use AI voice cloning to try to gain access to victims’ financial accounts at institutions that use voice recognition prompts for security.

CEO scam

In a CEO scam, a scammer impersonates a business’s CEO and asks an employee to make a payment or share confidential information. Because these emails or texts appear to come from the CEO, many employees won’t think to question a request from the head of the company.

Phishing

Phishing is a common impersonation scam in which the attacker aims to steal sensitive information by pretending to be a trustworthy source. Scammers may use AI to add personalized greetings and mimic a company’s tone and wording, which can make phishing messages seem even more legitimate and convincing.

Malicious code to crack passwords

Hackers may use malicious code (also known as malware) to steal passwords and gain access to bank accounts and other confidential information. Scammers disguise these programs as legitimate software and trick computer users into downloading them. Once installed, the malware runs on the victim’s computer and can use a variety of methods to capture passwords.

Ways to protect yourself from AI scams

The best way to protect yourself against AI scams is to understand how they work and stay on the lookout for them. If you suspect an AI scam, use these cybersecurity tips to help keep your information safe.

Create a “safeword” to share with family and friends to help authenticate phone calls.

Scammers can use AI to replicate the sound of someone’s voice, which can be very convincing. A safeword is a quick and easy way to verify a person’s identity.

Strengthen account security by using two-factor authentication.

Some companies require two-factor authentication before granting you access to your email or financial accounts. A two-factor authentication code can help stop scammers in their tracks, keeping you safe even if your password is compromised.
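
For the technically curious, the rotating codes that authenticator apps generate typically follow the time-based one-time password (TOTP) standard, RFC 6238. The Python sketch below is a simplified illustration of that idea, assuming a standard TOTP setup; it uses only the standard library, and the secret value is a made-up example (real services generate and store the shared secret for you when you enroll a device).

  import base64
  import hashlib
  import hmac
  import struct
  import time

  def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
      """Derive a time-based one-time password (RFC 6238) from a shared secret."""
      key = base64.b32decode(secret_b32, casefold=True)
      counter = int(time.time()) // interval            # current 30-second time window
      msg = struct.pack(">Q", counter)                  # counter as an 8-byte big-endian value
      digest = hmac.new(key, msg, hashlib.sha1).digest()
      offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
      code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
      return str(code % (10 ** digits)).zfill(digits)

  # Hypothetical shared secret; in practice it is set up when you enable 2FA.
  print("Current one-time code:", totp("JBSWY3DPEHPK3PXP"))

Because the code is derived from a secret only you and the provider share and changes every 30 seconds, a scammer who steals your password alone still can’t complete the login.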

Let unknown calls go to voicemail, then call back to verify identity.

Scammers need only a few seconds of recorded speech to clone your voice. If you receive a call from an unknown number, let it ring through to voicemail. Then, if the message seems legitimate, you can call back to verify the identity of the person who called.

Know that most banks will never initiate a call to ask for your personal information.

Never give your personal information over the phone to someone claiming to be your bank; most legitimate banks won’t call you out of the blue and ask for it. If you’re ever in doubt, hang up and call your local bank branch to verify any requests.

Educate yourself.

Taking time to educate yourself about AI and how criminals use it to scam people is one of the most important things you can do to keep your money and confidential information safe. By understanding how scam artists can trick people, you’ll be in a better position to recognize AI scams and avoid falling victim to these deceptive techniques.

Terms to know.

  • Deepfakes: AI-generated videos or photos that realistically depict someone doing or saying something they never did.
  • AI Chatbots: Software programs designed to communicate with people and have full, human-like conversations.
  • Natural Language Processing (NLP): A branch of AI that enables computers to interpret and process human language much as people do.
  • Machine Learning: A subset of artificial intelligence in which computers learn from data and improve at a task without being explicitly programmed for it.
  • Voice Cloning: Also known as an audio deepfake, voice cloning artificially replicates someone else’s voice to make it sound nearly identical to the real person’s tone, pronunciation and intonation.
  • Two-Factor Authentication (2FA): A form of multi-factor authentication (MFA), two-factor authentication is a security method that requires two forms of identification (such as a password plus a fingerprint or an email or text authentication code) to access information.
  • ChatGPT: An AI chatbot that uses natural language processing to respond intelligently to user prompts and questions. It was released in November 2022 by OpenAI, a California-based AI research organization.

Conclusion

The rise of publicly available AI tools like ChatGPT gives scammers a way to create even more sophisticated scams. While these scams can be convincing, you can protect yourself by recognizing the signs of an AI scam and following best practices for safeguarding your information.