When startup founder Cristina Flaschen first saw the TikToks of people using artificial intelligence to bluff their way through job interviews, she thought it was just a social media gimmick — until she started screening applicants for a junior developer role at Pandium, the software integration platform she launched in 2018. Several candidates appeared to be reading from scripts, which Flaschen says she initially chalked up to nerves. But one interviewee struggled to elaborate on anything from his own resume, including the company he supposedly co-founded.
That’s when Flaschen noticed the AI-generated text reflecting off of his glasses.
“Having someone straight-up lie on a resume and then reading something scrolling on the screen that I can see in the reflection of their glasses was new to me,” she told Built In. “They could at least wear contacts or something.”
How Do Candidates Use AI to Cheat in Job Interviews?
In virtual interviews, candidates may use AI-powered interview assistant apps that transcribe an interviewer’s spoken questions into text and generate answers in real time. The candidate then reads the AI-generated response aloud, sometimes from a phone, second monitor or translucent overlay on the screen. In more nefarious cases, AI allows candidates to fake their identity through the use of deepfake video and audio.
It may seem brazen, but this tactic has become fairly common with the ongoing rise of generative AI. In 2025, 20 percent of U.S. workers reported secretly using AI during job interviews, according to a study by anonymous professional networking app Blind. More than half of respondents said AI assistance during interviews has become the norm.
This is just one example of how the hiring process is eroding in the age of AI. Job seekers are using this technology to optimize their resumes and apply for more jobs, while recruiters are using it to filter through the hundreds of AI-generated applications they receive. In some cases, candidates have even been asked to participate in AI-led screening calls, adding yet another layer of automation to a process increasingly mediated by machines on both sides.
Now, with applicants turning to AI to get a leg up in interviews, employers are finding new ways to shut it down. Some are returning to in-person assessments, or changing the types of questions and assessments they use. Others are using software designed to flag AI-generated answers. It’s a nonstop cat-and-mouse game, with detection tools evolving almost as quickly as the technology they’re designed to catch.
How Applicants Use AI to Cheat in Interviews
In practice, candidates will keep an AI chatbot open in a separate window throughout the virtual interview process, and use a speech-to-text feature to transcribe the interviewers’ questions automatically in real time. Within seconds, the AI platform generates an articulate, polished response that the candidate can then read aloud. Some of these tools allow candidates to maintain their focus on the screen, as they can run in a translucent window overlaying the virtual meeting platform.
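To give a sense of how little engineering this actually requires, here is a minimal sketch of that pipeline. It assumes the open source speech_recognition package and OpenAI’s chat completions API; the model name and prompt are placeholders, and commercial interview assistants layer overlay rendering and screen-share evasion on top of a basic loop like this.

```python
# Illustrative sketch of the transcribe-and-respond loop described above.
# Assumes the third-party speech_recognition package and the OpenAI Python SDK
# (with an OPENAI_API_KEY in the environment); the model and prompt are
# placeholders, not any specific app's implementation.
import speech_recognition as sr
from openai import OpenAI

client = OpenAI()
recognizer = sr.Recognizer()

with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    while True:
        audio = recognizer.listen(source)  # capture the interviewer's question
        try:
            question = recognizer.recognize_google(audio)  # speech-to-text
        except sr.UnknownValueError:
            continue  # speech wasn't intelligible; keep listening
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "Answer interview questions concisely."},
                {"role": "user", "content": question},
            ],
        )
        print(response.choices[0].message.content)  # read aloud by the candidate
```

Notably, nothing in this loop touches the meeting software itself; the audio capture and text generation happen entirely outside the call, which is part of why these tools are so hard to detect.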
The scheme gained mainstream attention in 2025 with the launch of Cluely, an app that encourages users to “cheat on everything” from job interviews to sales calls. The startup was founded by Columbia University dropouts Neel Shanmugam and Chungin “Roy” Lee; Lee faced disciplinary action from the university after posting a YouTube video showing how he used the app to land an Amazon internship offer, which was later revoked.
The co-founders also developed Interview Coder, which was built specifically to help developers cheat on the types of technical interview questions found on LeetCode, a test preparation website that has become an industry standard. Shanmugam and Lee have said LeetCode-style tests are “useless” and do not reflect the actual work of software developers.
Since the launch of Cluely and Interview Coder, countless companies have flooded the internet with videos promoting their interview assistant apps. In staged demos, candidates typically read AI-generated answers from their phone, a second monitor or a translucent overlay on top of the video call. In some videos, the interviewer demands the applicant share their screen, only to find nothing suspicious. Cluely, Interview Coder and other apps claim they cannot be detected by other meeting participants, even when screen-sharing, which has led recruiters to develop some creative fraud prevention techniques. One reportedly asked an interviewee to close their eyes to prevent them from cheating with AI.
Should Applicants Be Allowed to Use AI in Interviews?
Recruiters generally agree that it’s inappropriate — even deceptive — for candidates to use AI when answering questions about their skills and experiences in a job interview.
While job candidates often feed their resumes into AI tools to generate answers aligned with their own work history, those answers are still fabricated by AI. If an interviewer can’t rely on a candidate to think for themselves, speak from experience and tell the truth, they probably can’t trust them to be honest and use good judgment on the job.
After all, the point of an interview is not to see whether a candidate can memorize facts; it’s to understand how they apply their knowledge in real-world situations. Recruiters and hiring managers want to see how a candidate solves problems and makes decisions on the fly, without using AI as a crutch. They also want to gauge soft skills that only come through when an interviewee is being honest and authentic.
“In a job interview, recruiters and hiring managers are assessing more than just a candidate’s skills and knowledge,” Matt Erhard, managing partner at Summit Search Group, told Built In. “We’re also looking for insights into their personality and communication style, things that we can’t assess if we’re not hearing the candidate’s answers in their own words.”
In extreme cases, this deception can go beyond exaggerating one’s skills or credentials and extend into outright identity fraud, with some candidates using AI filters and deepfake technology to disguise their real appearance or impersonate someone entirely. This has prompted companies like Cisco and hiring platform Greenhouse to partner with identity verification company Clear to help ensure applicants are who they say they are.
AI Use in Coding Tests Draws Mixed Opinions
While recruiters and hiring managers across all fields have shared horror stories about candidates using artificial intelligence in interviews, this trend has become especially common in technical interviews. Henry Kirk, a former co-founder of engineering agency Studio.init, told CNBC last year that more than half of the 700 job seekers who applied for an engineering role at his company used AI tools to complete a virtual coding test — despite explicit instructions not to.
But unlike regular job interviews, there’s some debate in the tech community as to whether using AI in this case is a bad thing. After all, many companies encourage their engineers and developers to use AI tools like GitHub Copilot to improve productivity. If someone is going to use artificial intelligence as a tool on the job, shouldn’t coding tests evaluate how effectively they can use it?
On the other hand, some companies opt not to allow AI tools at all, worrying that unqualified candidates will be able to fake their way through the hiring process. Some engineers also lament that younger generations of so-called “vibe coders” will lose the ability to troubleshoot AI’s errors.
Silicon Valley’s biggest tech companies have taken different tacks in addressing this issue. At Amazon and Google, for example, applicants are told not to use AI and are disqualified if they’re caught doing so, according to Business Insider. Somewhat ironically, Cursor, which makes a popular AI coding assistant, doesn’t allow AI in its interviews, either. Meta, meanwhile, is experimenting with a new type of coding interview that allows applicants to use an AI assistant, reflecting the company’s goal of generating more code through AI.
Flaschen, the tech founder who spotted candidates reading AI-generated answers, allows interviewees to use AI during her company’s two-hour coding test, which is conducted in person with team input. Nearly every applicant who has taken the test over the past couple of years has used AI, she said, but they must still be able to talk with the engineering team, walk through their thought process and debug the code.
“If someone was prompting their way through the questions we were live-asking them on the screen, that would almost certainly be a pass because we’re not going to be able to communicate with that person,” Flaschen said.
How Employers Are Cracking Down on AI Cheating
The trend of AI cheating in interviews is relatively new, so most recruiting departments do not have best practices for dealing with it. However, more companies are starting to clarify early on that AI use is prohibited.
Some companies have gone even further to safeguard the integrity of their interview process. Large employers, like Google, Cisco and McKinsey, are increasingly inviting candidates into the office for in-person interviews, according to The Wall Street Journal.
Flying in candidates isn’t always practical, though. Luckily, there are several other measures employers can take to detect AI cheating — many of which should be self-evident. If a candidate’s eyes are shifting back and forth as if they were reading, they probably are. Interviewers should also be on the lookout for candidates who pause or use filler words like “Hmm,” as this could be a sign that they are waiting for AI-generated text to load.
There are other, less obvious signs, too, like candidates who speak in a stilted way or use overly formal language. Neither is conclusive on its own: a candidate might simply be nervous, and there’s nothing wrong with formal language as long as it’s consistent with their previous communications. But if a candidate stumbles over words as they speak, that’s a strong sign they’re reading from an AI-generated script.
There are several AI detection platforms on the market as well, like Sherlock AI and Interview Guard, which claim to be able to spot when candidates open new tabs or apps. Many of these platforms also use eye-tracking technology and can tell if a candidate is receiving audio cues. Some can also flag signs of more serious fraud, such as deepfake video and audio, inconsistencies in the applicant’s location or the use of a VPN.
Coding assessment platforms have also added features to crack down on cheating in technical interviews. Tools like HackerRank, CodeSignal, Codility and CoderPad can all detect when code looks too similar to the solutions in their databases. Some platforms can even tell when candidates copy and paste code from another window.
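As a rough illustration of how the simplest of these checks might work, the sketch below scores a submission’s textual similarity against a bank of known solutions using Python’s standard library difflib. The solution bank and 0.9 threshold are invented for the example; production platforms compare against much larger databases and richer signals, like paste events and editing history.

```python
# Toy similarity check in the spirit of the plagiarism detection described
# above. KNOWN_SOLUTIONS and the threshold are invented for illustration.
from difflib import SequenceMatcher

KNOWN_SOLUTIONS = [
    # A canonical "two sum" answer, standing in for a solutions database.
    "def two_sum(nums, target):\n"
    "    seen = {}\n"
    "    for i, n in enumerate(nums):\n"
    "        if target - n in seen:\n"
    "            return [seen[target - n], i]\n"
    "        seen[n] = i\n",
]

def looks_copied(submission: str, threshold: float = 0.9) -> bool:
    """Flag a submission whose text closely matches a known solution."""
    return any(
        SequenceMatcher(None, submission, known).ratio() >= threshold
        for known in KNOWN_SOLUTIONS
    )
```

A verbatim paste of a database solution scores 1.0 and is flagged immediately, while an independently written answer to the same problem will usually differ enough in naming and structure to score lower.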
Instead of trying to detect AI, Dominick Prevete, founder of Superhumancy Talent Partners, told Built In that recruiters and hiring managers might be better off probing candidates with follow-ups and questions that AI can’t convincingly answer. For example, he likes to ask machine learning engineers to walk him through a model they personally deployed to production, starting with an explanation of the business problem and ending with how they handled the first production incident. The answer can be revealing: a candidate using AI may make it sound like the model was deployed perfectly, Prevete said, while a real engineer will probably share “war stories” about “things not working at 2 a.m.”
Recruiters and hiring managers can also dig into a candidate’s qualifications with behavioral interview questions, like “Tell me about a time you failed,” or situational interview questions that gauge how a candidate would act in hypothetical scenarios. When Marshall Scabet, founder and CEO of Precision Sales Recruiting, suspected a candidate’s answers were a little too polished, he tested her sales skills through a role-play scenario on objection handling.
“She did not do well,” Scabet told Built In. “It was apparent she did not have the experience she said she did, and that she was using AI to answer questions.”
Frequently Asked Questions
How common is it for job applicants to use AI in job interviews?
According to a study by anonymous professional networking app Blind, 20 percent of U.S. professionals said they secretly used AI during job interviews, and more than half said AI assistance during interviews has become the norm.
Which tools are candidates using to cheat in interviews?
Some of the most popular apps used to cheat in interviews include Cluely, Final Round AI, AIApply and LockedIn AI. Platforms like Interview Coder and Leetcode Wizard are built specifically to help software developers and engineers cheat in technical interviews.
How can interviewers tell if a candidate is using AI?
Common warning signs include candidates whose eyes move back and forth as if reading, unnatural pauses while waiting for responses to load, overly formal or stilted language and difficulty pronouncing words that appear unfamiliar.
