By Jasmina Tal, a 20+ year veteran in talent acquisition and human resources leadership
Over the last two decades, I’ve conducted thousands of interviews — in person, over the phone, and now more commonly, over video. I’ve developed a trained instinct for reading between the lines. But today, even that instinct is being tested.
A serious crisis is unfolding in talent acquisition: candidates using AI and deepfake technologies to cheat the recruitment system. What started as exaggerating on a CV has escalated into full-blown identity theft and deception during interviews. This new wave of fraud is not just unethical — it is illegal, and it puts business continuity at risk.
Here is what we’re seeing.
The COVID-19 pandemic accelerated the normalization of remote interviews. While this allowed companies to tap into global talent pools, it also opened the floodgates for bad actors to exploit technology in ways we hadn’t imagined.
Here are two real and increasingly common scenarios:
I’m interviewing a candidate for a senior software development role. On paper, their profile is great! Spot-on education, a GitHub full of interesting projects, and relevant LinkedIn recommendations. During the interview, though, something feels… off.
The responses are slightly delayed, and I attribute it to a slow internet connection. In reality, the candidate is using an AI tool that listens to my questions in real time, generates accurate, context-aware answers, and feeds them back as a teleprompter overlaid on the candidate’s screen. The delay isn’t a connection problem; it’s the AI’s processing time.
Some tools are now so sophisticated they even mimic the candidate’s voice and filter it for clarity and confidence. Essentially, you're not evaluating the candidate — you're evaluating ChatGPT or a similar LLM in disguise.
Here’s another common scheme I’ve seen in action: a fraudster copies an entire LinkedIn profile — work history, certifications, skills endorsements, even the profile wording — and simply swaps out the photo for their own. Classic identity theft, again relying on real-time AI assistance to pass the interview.
But here is one that takes it to the next level: stealing even the face. Using video filter software (often deepfake-based), the fraudster projects a polished, “clean-cut” appearance that aligns with the credibility of the stolen profile.
In the interview, they pretend to be the real person whose identity they stole. At first, things might even seem fine, but I get a feeling that something is off. I spot the overly artificial movement of the facial features, or the lack of it: eyes that don’t blink and a gaze that seems glassy. Somewhere around the fifth minute, I no longer have any doubt that the person in front of me is not the one they claim to be. It always ends the moment I ask for proof of ID.
These individuals have little to no intention of actually doing the job — they’re often fronting for someone else, hoping to secure the position and then “outsource” the work behind the scenes at lower rates.
This is not just impersonation. It’s digital identity theft with serious legal consequences.
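For readers who work alongside a security or engineering team, the “eyes that don’t blink” tell I described above can even be screened for automatically. Below is a minimal sketch of a blink-rate check over a recorded interview clip, assuming Python with the open-source MediaPipe and OpenCV libraries; the landmark indices and threshold are illustrative starting points, not a production deepfake detector.

```python
# Blink-rate screen for a recorded interview clip (illustrative sketch).
# Assumes: pip install mediapipe opencv-python
import cv2
import mediapipe as mp

# MediaPipe Face Mesh landmark indices for one eye (upper/lower lid, corners).
UPPER_LID, LOWER_LID = 159, 145
INNER_CORNER, OUTER_CORNER = 133, 33

def eye_openness(landmarks) -> float:
    """Vertical eyelid gap relative to eye width; drops sharply during a blink."""
    vertical = abs(landmarks[UPPER_LID].y - landmarks[LOWER_LID].y)
    horizontal = abs(landmarks[INNER_CORNER].x - landmarks[OUTER_CORNER].x)
    return vertical / horizontal if horizontal else 0.0

def count_blinks(video_path: str, closed_threshold: float = 0.15) -> int:
    """Count open-to-closed eyelid transitions across the clip."""
    face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)
    cap = cv2.VideoCapture(video_path)
    blinks, eye_closed = 0, False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not result.multi_face_landmarks:
            continue  # no face detected in this frame
        ratio = eye_openness(result.multi_face_landmarks[0].landmark)
        if ratio < closed_threshold and not eye_closed:
            blinks += 1
            eye_closed = True
        elif ratio >= closed_threshold:
            eye_closed = False
    cap.release()
    return blinks

if __name__ == "__main__":
    # "interview_clip.mp4" is a placeholder file name.
    print(f"Blinks detected: {count_blinks('interview_clip.mp4')}")
```

A relaxed adult blinks roughly 15 to 20 times a minute, so a several-minute clip with almost no blinks is exactly the glassy, unblinking pattern I described — and the response should be the same: ask for proof of ID before going any further.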
Fraudulent candidates are growing more creative. Here are additional tactics we've uncovered:
Thankfully, trained HR professionals still have tools and instincts to detect these red flags:
Falsifying identity during a job application or interview is a criminal offense in many jurisdictions. Depending on the country, it can be considered:
As much as technology is helping streamline the recruitment process, it is also arming bad actors with tools to manipulate it. As HR professionals and business leaders, we must remain vigilant, update our evaluation methods, and work closely with legal and cybersecurity teams to ensure we maintain the integrity of our hiring process.
Recruiting is fundamentally about trust. Once that trust is broken — whether by a candidate or the systems we rely on — the costs to our organizations are more than just financial.
Let’s raise the red flags before someone dangerous gets through the door.
Stay informed. Stay skeptical. Stay human.
If you’ve encountered similar experiences or want to learn how to fortify your recruitment process against AI fraud, feel free to reach out or share your story.