More fake applicants are trying to trick HR, thanks to the rise of deepfakes

In my decades of working in cybersecurity, I have never seen a threat quite like the one we face today. Anyone’s image, likeness, and voice can be replicated photorealistically, cheaply and quickly. Malicious actors are using this novel technology to weaponize our personhood in attacks against our own organizations, livelihoods, and loved ones. As generative AI technology advances and the line between real and synthetic content blurs even further, the risk to companies, governments, and everyday people grows with it.
Businesses are especially vulnerable to the rise of applicant fraud—interviewing or hiring a phony candidate with the intent of breaching an organization for financial gain or even nation-state espionage. Gartner predicts that by 2028, 25% of job candidates globally will be fake, driven largely by AI-generated profiles. Recruiters are already encountering this mounting threat, noticing unnatural movements when speaking with candidates via videoconferencing.
For many companies, the proverbial front door is wide open to these attacks without adequate protection from deepfake candidates or “look-alike” candidate swaps in the HR interview process. It’s no longer enough to just protect against the vulnerabilities in our tech stacks and internal infrastructures. We must take security a step further to address today’s uncharted AI-driven threat landscape, protecting our people and organizations from fraud and extortion before trust erodes and can no longer be restored.
Fraud isn’t new, but it is taking a new form
Here’s the thing: Synthetic identity fraud happens in the real world every day, and has for years. Think of the financial industry, where stolen Social Security numbers and other government identifiers allow fraudsters to open and close accounts in other people’s names and ransack savings and retirement funds.
The difference now is that hackers no longer have to lurk in the shadows. Instead, a synthetically generated person shows up to a videoconferencing meeting and speaks to you live, and 80% of the time, listeners mistake the AI-generated voice for the real person’s. How do you protect against that?
Interview impersonations are not new within HR. There have been cases where an employee’s family member interviews with a company, and a different person shows up on the first day of work. But as it becomes ever easier to create deepfakes (taking only about 10 minutes and a web browser), it becomes ever harder to differentiate between what’s real and what’s fake across applicants’ LinkedIn profiles, résumés, and the candidates themselves.
Preparing our HR departments for a new attack landscape
Unfortunately, HR teams—often understaffed and using outdated tech—are frequently perceived as the weakest part of the organization by hackers and fraudsters given their lack of security focus (other than perhaps background checks). That makes the HR department the ideal entry point for an adversary.
Coming through the front door via the hiring process is often far easier and more fruitful for malicious actors than the back door (i.e., taking advantage of infrastructure vulnerabilities). Further, adversaries could even capture recordings of executives during the interview process for future impersonation attacks or gain access to product road maps or other strategic information that could compromise the company down the road.
HR leaders must be aware that fraud at the hiring level can take many different forms, but they can’t be the only ones. The C-suite must also recognize these dangers to better equip HR teams to combat deepfake and impersonation fraud on the front lines. For example, real-time deepfake video technology can be used to impersonate someone during virtual interviews, matching facial expressions and lip movements as the fraudster speaks.
Fraudsters will also use sophisticated voice cloning to simulate accents, intonations, or entire voices. Tools that most people use every day, like ChatGPT and Claude, are being used to fabricate résumés and cover letters, and even code samples or portfolio materials tailored to specific job postings.
Information gleaned at any part of the interview process can be weaponized, including an organization’s competitive strengths and weaknesses. Individuals who commit applicant fraud can repurpose what they learn to solicit personal or confidential company information for later, more severe extortion. We have already seen nation-states like North Korea leverage these techniques to infiltrate enterprises through their human resources departments.
It’s time we reassess security at every level and within every process to protect against these threats, which show no signs of slowing down. Proper policies and procedures must be in place to navigate and respond to these attacks in real time. From an HR perspective, this involves awareness training on deepfakes, policy development, and deploying protective tools throughout the hiring process to prevent an attack.
Sophisticated tools, such as audio and video content authentication and verification platforms that raise an alert when a suspected deepfake is detected, can also help us detect and mitigate deepfakes by showing our teams exactly which aspects of a file are synthetic or manipulated.
It’s no longer enough to authenticate who is accessing a system from the outside. As we increasingly rely on images, audio, and video for critical decision-making, we now have a vested interest in verifying that every piece of digital content we consume is trustworthy and accurate. If we don’t, we’re putting everyone—colleagues, executives, and ourselves—at risk.