Artificial intelligence accelerates our access to information, but it also makes it easier for cybercriminals to access our personal data.
If you have been the target of cybercrime, scams, or fraud attempts in the past year, AI was likely involved.
According to a Deloitte digital fraud study, AI-generated content caused over $12 billion in fraud losses in 2023. Within two years, this figure could soar to more than $40 billion in the US.
With some of the world's most sophisticated AI technologies now accessible to cybercriminals, we're entering a new era of digital fraud—a reality we must be better equipped to handle.
AI helps hackers
Cybercriminals have limited time and resources, and AI helps them overcome these challenges.
With just a few lines of code, they can launch a global phishing campaign, translated into multiple languages and stripped of many of the telltale signs that a message is fraudulent. AI can correct poor grammar, fix spelling errors, and rephrase awkward greetings so that phishing messages appear authentic.
It also enables cybercriminals to more effectively target phishing attacks on specific industries, companies, or events, such as conferences, trade shows, or national holidays.
Researchers at the University of Illinois Urbana-Champaign recently demonstrated that voice-enabled AI bots could execute some of the most common scams reported to the federal government (the researchers returned any money taken during the demonstrations). In some cases, these bots achieved a success rate of over 60% and completed the scams in mere seconds.
AI helps fraudsters
AI enables criminals to rapidly analyze massive collections of personal records, whether obtained through data breaches or purchased on the dark web, a task that was previously far more difficult.
Scammers can now use AI to identify patterns and extract valuable insights from large datasets. They can then exploit these insights for their schemes and coordinate cyberattacks.
Additionally, AI is enhancing other types of fraudulent activities:
Synthetic identity fraud
Synthetic identity theft involves taking a Social Security number—often from a child, an older adult, or someone homeless—and merging it with other stolen or fake information like names and birthdates to fabricate a new, false identity.
Hackers then use this fake identity to apply for credit, leaving the original owner of the SSN with the debt.
AI aids in this widespread fraud by simplifying the creation of persuasive forged identity documents and synthetic images that resemble real faces. These documents can bypass biometric verification systems like those on iPhones.
Deepfake scams
According to an estimate from the security firm Entrust, an AI-driven deepfake scam occurred every five minutes in 2024.
Numerous accounts exist of fraudsters using AI to deceive businesses and individuals out of millions. These malicious actors create compelling yet entirely fake videos and voices of people familiar to the victims, capable of deceiving even the most vigilant among us.
Less than a year ago, an employee at Arup, a British design and engineering firm, was duped into transferring $25 million to scammers by a deepfake video call impersonating the firm's CFO.
A recent Stanford University and Google DeepMind study revealed that AI can replicate voices and faces and mimic human personalities.
The study found that with minimal information about their targets, these artificial personalities could imitate political beliefs, personality traits, and likely responses to questions to deceive victims.
These findings, combined with the progress in deepfake video and voice cloning already exploited by cybercriminals, could make it even harder to discern whether the person you're communicating with online or over the phone is genuine or an AI replica.
Document copying
Even as technology becomes more integral to our lives, physical documents remain the primary means of identity verification.
AI has advanced in crafting convincing replicas of passports, driver's licenses, birth certificates, and other documents, prompting businesses and governments to urgently seek improved methods for verifying identities and protecting identity security.
How to protect yourself
Stay vigilant, secure your bank accounts with multiple layers of protection, use multifactor authentication, freeze and monitor your credit report, and sign up for identity theft protection.
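One of the layers mentioned above, multifactor authentication, often relies on time-based one-time passwords (TOTP), the rotating six-digit codes shown by authenticator apps. As a rough sketch of how those codes are derived (following RFC 4226/6238, using only the Python standard library; the secret shown is an illustrative test value, not a real credential):

```python
import base64
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """Counter-based one-time password (RFC 4226)."""
    # HMAC-SHA1 over the counter encoded as a big-endian 8-byte integer
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation: the low 4 bits of the last byte pick an offset
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a time counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    return hotp(key, int(time.time()) // period, digits)

# RFC 4226 test secret, base32-encoded; a real app stores this per account
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"))
```

Because your authenticator app and the service both derive each code from a shared secret and the current time, an attacker who steals only your password still cannot log in, which is why enabling this second factor matters.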
As AI grows more powerful and prevalent, it's increasingly crucial to scrutinize what we observe:
- Verify any account-related correspondence by calling the sender on a number you already have on file, not one provided in the message.
- Verify the authenticity of online content before unintentionally sharing false information.
- To further protect yourself from phishing attacks, consider using a hardware security key, such as a YubiKey or Google Titan key, which can be purchased for as little as $30.
- If you haven't started using a password manager, now might be the right time. Popular options like 1Password, Dashlane, and LastPass can help you generate a unique password for each of your online accounts.
- Be alert for signs of deepfakes. Artificial voices may appear "flatter" or monotone, missing the emotional nuances of everyday human speech. In deepfake videos, look for irregular eye, mouth, or lip movements, facial distortions, and pixelation.
As technology progresses, AI-driven scams will become increasingly convincing. Your strongest defenses against these threats are being informed about typical scam strategies and applying common sense and vigilance.
Ready to protect your organization against AI-assisted cybercrime?
For more cybersecurity insights, follow Cyderes on LinkedIn and X.