Artificial intelligence accelerates our access to information, but it also makes it easier for cybercriminals to access our personal data.
If you've been a victim of cybercrime, scams, or fraud attempts in the past year, it's likely that AI was involved.
According to a Deloitte digital fraud study, AI-generated content was responsible for over $12 billion in fraud losses in 2023. This figure could soar to more than $40 billion in the US within two years.
With some of the world's most sophisticated AI technologies now accessible to cybercriminals, we're entering a new era of digital fraud—a reality we must be better equipped to handle.
Cybercriminals have limited time and resources, and AI helps them overcome these challenges.
With just a few lines of code, they can launch a global phishing campaign, translating messages into multiple languages and stripping out many of the obvious signs that a message is fraudulent. AI can correct poor grammar, fix spelling errors, and rephrase awkward greetings to make phishing messages appear authentic.
It also enables cybercriminals to tailor phishing attacks more effectively to specific industries, companies, or events, such as conferences, trade shows, or national holidays.
Researchers at the University of Illinois Urbana-Champaign recently demonstrated how voice-enabled AI bots could execute some of the most common scams reported to the federal government before safely returning the money to the victims. In some cases, these bots achieved a success rate of over 60% and completed the scams in mere seconds.
AI enables criminals to rapidly analyze vast amounts of data, a task that was previously far more difficult when dealing with massive collections of personal records obtained through data breaches or purchased on the dark web.
Scammers can now use AI to identify patterns and extract valuable insights from these datasets, exploit them in their schemes, and coordinate cyberattacks.
Additionally, AI is enhancing other types of fraudulent activities:
Synthetic identity fraud
Synthetic identity fraud involves taking a Social Security number, often from a child, an elderly person, or someone who is homeless, and merging it with other stolen or fabricated details such as names and birthdates to create a new, false identity.
Hackers then use this fake identity to apply for credit, leaving the original owner of the SSN with the debt.
AI aids this widespread form of fraud by simplifying the creation of highly convincing forged identity documents and synthetic images that resemble real faces, which can bypass biometric verification systems like those on an iPhone.
Deepfake scams
According to an estimate from the security firm Entrust, an AI-driven deepfake scam took place every five minutes in 2024.
Numerous accounts exist of fraudsters using AI to deceive businesses and individuals out of millions. These malicious actors create highly convincing yet entirely fake videos and voices of people familiar to the victims, capable of deceiving even the most vigilant among us.
Less than a year ago, an employee at Arup, a British design and engineering firm, was duped into transferring $25 million to scammers by a deepfake video that impersonated the company's CFO.
AI is not only replicating voices and faces; it can also mimic human personalities, as a recent study from Stanford University and Google DeepMind revealed.
The study found that, with minimal information about their targets, these artificial personalities could imitate political beliefs, personality traits, and likely responses to questions in order to deceive victims.
These findings, combined with the progress in deepfake video and voice cloning already exploited by cybercriminals, could make it even harder to discern whether the person you're communicating with online or over the phone is genuine or an AI replica.
Document forgery
Even as technology becomes more integral to our lives, physical documents remain the primary means of identity verification.
AI has become adept at crafting convincing replicas of passports, driver's licenses, birth certificates, and other documents, prompting businesses and governments to urgently seek better ways to verify identities and protect identity data.
Stay vigilant and secure your accounts with multiple layers of protection: use multifactor authentication, freeze and monitor your credit reports, and sign up for identity theft protection.
As AI grows more powerful and prevalent, it's increasingly crucial to scrutinize what we see and hear.
As technology progresses, AI-driven scams will become increasingly convincing. Being informed about typical scam strategies and applying your common sense and vigilance are your strongest defenses against these threats.
For more cybersecurity insights, follow Cyderes on LinkedIn and X.