[1]
Why recruiters are making interviews 'AI-free zones'
As soon as Michael Kienle becomes accustomed to one use of AI, jobseekers think up improbable new ways to sneak it into the application process. "We know they're using it to write their CVs, their application letters," says Kienle, global vice-president for talent acquisition at L'Oréal.

But recently candidates have become more brazen. One of his recruiters told him a jobseeker had used AI in a video interview, simply repeating answers the bot would provide. The deception was spotted because "the answers didn't come naturally", explains Kienle.

Candidates reading AI-generated answers in interviews is just one of the unforeseen consequences employers are reporting as the technology rips through the jobs market. As vacancies shrink, the ease of making applications has left employers overwhelmed with candidates, but often deprived of meaningful information to distinguish promising hires from AI slop submissions.

The din and distrust are prompting some companies to reassess their approach to hiring and training: introducing hidden difficulties in application forms, adding practical tests, banning online exams to prevent cheating, and increasing their own use of AI screening tools.

"I think AI has actually pushed the interview process back to being more human focused," says David Brown, chief executive of recruiter Hays Americas. "Employers are struggling with deciphering between resumes, because now it's a lot easier to submit en masse and they all look like great fits. From a candidate perspective, it's how do I stand out?"

In response, L'Oréal decided to "sanctuarise the interview" as a first principle, says Kienle. Beyond a basic transcription tool -- which candidates can opt out of -- he says AI is not used in interviews. "It will be in person, person to person . . . 45 minutes or one hour . . . that is an AI-free zone." The second principle is to ensure all candidates have at least one face-to-face interview before they start at the company.

Kienle says he meets every member of his 200-person team of recruiters and puts them all through two years of training focused on L'Oréal's particular recruitment needs. "If you have good recruiters, experienced recruiters, they do not ask the typical questions," he says. "It's all about authenticity -- if you just repeat things that AI told you it's not authentic, it's not you. [The recruiters] have to know the personality, who they are talking to."

While accounting firm EY encourages applicants to use AI to help them prepare, "when you're in an interview and assessment we want to hear the real you and [it] is really not permitted", says global head of talent acquisition Irmgard Naudin ten Cate.

EY is prioritising in-person interaction and has trained more than 20,000 interviewers to "stress-test candidates' thinking", spotting answers that may have been prepared by AI without independent thought or real knowledge. Recruiters can identify AI answers if they "probe correctly", says Naudin ten Cate. "We don't want to know what you've done, we want to know how you think, how you make decisions, how you handle conflicts . . . It's that type of question . . . [where we can] detect rehearsed answers."

February data from Deel, an HR platform, showed more than 40 per cent of employers had extended probation periods as they were finding it harder to assess people's true skills during the application process.
Around three-quarters of surveyed senior HR leaders had noticed a steep rise in AI-generated applications, and a similar proportion deemed CVs and cover letters less reliable than two years ago. "AI has widened the gap between how candidates present themselves and how they perform," says Matt Monette, UK&I lead at Deel. "Employers are telling us they can only understand real capability once someone starts the job."

The issue also extends beyond hiring. The Association of Chartered Certified Accountants, the world's largest accounting body, said last year that it would require candidates to sit assessments in person, ending the online exams that have been running since the pandemic, as AI has made it harder to combat cheating.

Rigorous testing does not mean excluding AI, however. Some employers want would-be hires to prove they can use it. Consultancy McKinsey is running a pilot that asks candidates to use its AI tool Lilli to analyse a case study as one of the tasks testing aptitude, curiosity and judgment during the application process. Mayank Gupta, chief executive of CaseBasix, a company that prepares McKinsey candidates, said in January that other firms such as Boston Consulting Group and Bain were likely to incorporate AI into the training process.

Many candidates say they have turned to AI partly in response to the use of it by recruiters. A rise in automated screening means many jobseekers find applications are quickly rejected with minimal human interaction, prompting them to send even more, in a vicious circle of AI applying and screening.

Employers admit AI will play a role in filtering applications. But some are holding back on rolling out tools more widely, partly over concerns about whether the tools can adequately assess candidates or comply with legislation that demands protections against bias and against the sharing or unnecessary processing of personal information.

Recruiters are also starting to make candidates jump through more hoops in the early stages of recruitment in an attempt to weed out the least serious. Advertising agency VML, for example, was typically receiving about 400 applications for an entry-level role, but finding only 10 per cent fitted the brief. Last year, after it asked candidates to create a 60-second video saying why they were a good fit, 40 per cent dropped out, according to Graham Powell-Symon, talent acquisition director.

"Trying to decipher whether they're actually viable candidates for the role is very time consuming. You have to be more rigorous," he says. "Anybody who's got the commitment to make a video likely has some degree of commitment to us as a business."

VML and other employers, including Virgin Group, are also asking candidates to use new tools developed by start-ups such as Vizzy to create profiles that showcase the professional achievements that would typically be on CVs, alongside richer details such as psychometric test scores and portfolios.

Sarah Lock, recruitment lead at Virgin Group, says the tool helps inform interviews, which now "place more emphasis on real-world thinking and values" and take a more conversational approach. "We're seeing more applicants than ever before, but less real differentiation," she says. "It helps us cut through the sameness [so] by the time candidates reach interview . . . we already have a much more rounded view of who they are. That allows interviews to go deeper."

Some employers are looking at other ways to engage candidates.
For example, L'Oréal's Brandstorm, an international competition where participants compete on real-life business scenarios to win a work placement, has been running for decades, but in the past three years registrations have tripled to 4,000 in the UK. "It's a very important source of recruitment," says Kienle.

EY has created virtual job simulations, where would-be applicants can practise online role plays setting out what consultants do and what it might be like to work at EY, and which it says about 50,000 students have completed. Naudin ten Cate notes that 88 per cent of applicants who participate in the simulation say they are confident during their interview as a result.

For EY, it has the benefit of engaging would-be hires early on, so anxious young jobseekers are more likely to make targeted rather than scattergun applications. "We engage with the person . . . guiding someone through it versus having this whole big funnel at the top," she says. "There's less noise in the system, and for the [applicant], it's a better experience."
[2]
Gen Z is using AI in job interviews as graduate unemployment climbs
The class of 2025 graduated into the worst entry-level job market in five years. Now a growing number of them are using AI tools during live job interviews, and a cottage industry of startups is rushing to sell them the means to do it. Whether that constitutes cheating or common sense depends on which side of the hiring table you sit on, but the numbers behind the trend are not in dispute.

Unemployment among recent college graduates aged 22 to 27 climbed to 5.7 per cent by the end of 2025, according to the Federal Reserve Bank of New York, well above the 4.2 per cent national rate. Underemployment, which measures graduates working in jobs that do not require a degree, hit 42.5 per cent, its highest level since 2020. The tech sector, once the default destination for ambitious graduates, shed roughly 245,000 jobs in 2025, according to tracking data from Layoffs.fyi and TrueUp. Another 59,000 have gone in the first three months of 2026.

The graduates who entered this market did so having watched an older cohort get hired, promoted, and then laid off at companies like Meta, Amazon, and Google in the space of 18 months. The lesson they drew was not subtle: competence and loyalty are insufficient protection. And so they arrived armed with a technology that their universities had spent four years telling them to learn.

The phenomenon surfaced this week in a press release from LockedIn AI, a startup that sells a product called DUO: a service that combines real-time AI transcription of interview questions with a live human coach who can see the candidate's screen and provide strategic guidance during the conversation. The press release, distributed via GlobeNewswire, was framed as a trend piece about generational resilience. It was, more precisely, a product advertisement.

LockedIn AI is not alone. Its founder, Kagehiro Mitsuyami, also co-founded Final Round AI, a similar product. Both companies have faced questions about the authenticity of their marketing: reviews on Trustpilot appear to be AI-generated, and independent reviewers have noted that the software can be visible to interviewers when candidates switch between windows.

A Gartner survey of 3,000 job seekers found that six per cent admitted to interview fraud, including having someone else impersonate them. Fifty-nine per cent of hiring managers suspect candidates of using AI to misrepresent themselves.

The market for these tools is growing precisely because the conditions that created them are getting worse, not better. The National Association of Colleges and Employers found that 45 per cent of employers characterised the job market for the class of 2026 as "fair," down from "good" the previous year. Hiring projections for new graduates are essentially flat, at 1.6 per cent growth. For candidates submitting dozens of applications and receiving interview invitations at rates below two per cent, the temptation to use every available advantage is considerable.

The most effective argument in favour of AI-assisted interviewing is not about fairness in the abstract. It is about a specific inconsistency in how technology companies treat AI. Google's chief executive, Sundar Pichai, disclosed during an April 2025 earnings call that more than 30 per cent of the company's new code is now generated with AI assistance, up from 25 per cent six months earlier. Amazon, Microsoft, and Meta all encourage their engineers to use AI coding tools daily. Applicant tracking systems powered by AI screen and reject resumes before a human ever reads them.
The hiring pipeline is automated from end to end, except on the candidate's side. For graduates who spent their university years being told that AI fluency would define their careers, being asked to pretend the technology does not exist during a 45-minute interview feels less like a test of competence and more like a test of compliance. The companies asking them to do so are, in many cases, the same ones that will expect them to use AI tools from their first day on the job.

This argument has real force, but it also has limits. There is a difference between using AI to write code more efficiently and using AI to answer questions about your own experience, judgment, and problem-solving ability. An interview is, at least in theory, a conversation designed to evaluate what a candidate knows and how they think. Outsourcing those answers to a language model, or to a human coach whispering through an earpiece, undermines the purpose of the exercise regardless of how unfair the exercise may be.

Companies are already adapting. In-person interview rounds rose from 24 per cent in 2022 to 38 per cent in 2025, according to hiring industry data. Seventy-two per cent of recruiting leaders now conduct at least one in-person stage specifically to combat AI-assisted fraud. Some firms have moved to whiteboard exercises, pair programming sessions, and unstructured conversations that are harder to augment with real-time tools.

The deeper question is whether the interview itself is the right mechanism for evaluating candidates in an AI-saturated labour market. If the goal is to assess what a candidate can produce with the tools they will actually use on the job, then banning those tools during the evaluation makes little sense. If the goal is to assess raw cognitive ability and domain knowledge, then AI assistance defeats the purpose entirely. Most interviews attempt to do both, which is why the current system satisfies no one.

What is clear is that the class of 2025 did not create this problem. They inherited a job market reshaped by pandemic-era overhiring, aggressive cost-cutting, and an AI revolution that is simultaneously creating and destroying opportunity at a pace that neither employers nor candidates have fully absorbed. Their decision to use AI in interviews is not rebellion. It is the predictable behaviour of rational actors in a system that has told them, repeatedly and in every other context, that AI is not optional. The fact that the system now objects to them taking that message seriously is, at minimum, worth examining.
Major employers are banning AI from interviews after discovering candidates using chatbots to answer questions in real time. Companies like L'Oréal and EY are prioritizing in-person interactions and training thousands of interviewers to detect rehearsed responses. Meanwhile, Gen Z graduates facing 5.7% unemployment are turning to AI-assisted interview tools, creating an arms race between automated screening and authentic assessment.
The hiring process is experiencing a fundamental shift as recruiters discover job seekers using AI in increasingly brazen ways. Michael Kienle, global vice-president for talent acquisition at L'Oréal, recently learned that a candidate had used AI during a video interview, simply repeating answers generated by a bot. The deception became apparent because "the answers didn't come naturally," exposing a growing challenge for recruiters [1].
This incident represents just one example of how AI tools in job interviews are reshaping talent acquisition. While employers have long suspected candidates of using AI to write CVs and cover letters, the technology has now infiltrated live conversations. The ease of submitting AI-generated applications has left recruiters overwhelmed with candidates but often deprived of meaningful information to distinguish genuine talent from what industry insiders call "AI slop submissions" [1].
In response to these challenges, major employers are creating AI-free zones during job interviews. L'Oréal has adopted a policy to "sanctuarise the interview" as a first principle, ensuring that beyond basic transcription tools, which candidates can opt out of, AI is not used in interviews. "It will be in person, person to person . . . 45 minutes or one hour . . . that is an AI-free zone," Kienle explains. The company's second principle mandates that all candidates have at least one face-to-face interview before starting [1].
EY has implemented similar measures for assessing authentic candidates. While the accounting firm encourages applicants to use AI for preparation, global head of talent acquisition Irmgard Naudin ten Cate states that "when you're in an interview and assessment we want to hear the real you and [it] is really not permitted". EY has trained more than 20,000 interviewers to "stress-test candidates' thinking" and spot answers prepared by AI without independent thought [1].
David Brown, chief executive of recruiter Hays Americas, observes that "AI has actually pushed the interview process back to being more human focused". This shift toward human-centric recruitment involves training recruiters to ask non-typical questions focused on authenticity. Kienle meets every member of his 200-person team of recruiters and puts them through two years of training. "If you have good recruiters, experienced recruiters, they do not ask the typical questions," he says. "It's all about authenticity -- if you just repeat things that AI told you it's not authentic, it's not you" [1].
Recruiting leaders are prioritizing in-person interviews to combat AI-assisted fraud. In-person interview rounds rose from 24% in 2022 to 38% in 2025, with 72% of recruiting leaders now conducting at least one in-person stage specifically to address AI-assisted cheating [2]. Some firms have moved to whiteboard exercises, pair programming sessions, and unstructured conversations that are harder to augment with real-time tools [2].
The class of 2025 graduated into the worst entry-level job market in five years, driving Gen Z's use of AI in the job hunt to unprecedented levels. Unemployment among recent college graduates aged 22 to 27 climbed to 5.7% by the end of 2025, according to the Federal Reserve Bank of New York, well above the 4.2% national rate. Underemployment hit 42.5%, its highest level since 2020 [2].
A cottage industry of startups has emerged to serve job seekers, including LockedIn AI, which sells a product called DUO that combines real-time AI transcription with a live human coach who can see the candidate's screen and provide strategic guidance during conversations. A Gartner survey of 3,000 job seekers found that 6% admitted to interview fraud, including having someone else impersonate them, while 59% of hiring managers suspect candidates of using AI to misrepresent themselves [2].
An ethical debate has emerged around the use of AI by both employers and candidates. Google's CEO Sundar Pichai disclosed that more than 30% of the company's new code is now generated with AI assistance, up from 25% six months earlier. Amazon, Microsoft, and Meta all encourage their engineers to use AI coding tools daily. Applicant tracking systems powered by AI screen and reject resumes before a human ever reads them, creating what some see as a double standard [2].
February data from Deel, an HR platform, showed more than 40% of employers had extended probation periods because they were finding it harder to assess candidates' true skills during the application process. Around three-quarters of surveyed senior HR leaders had noticed a steep rise in AI-generated applications, and a similar proportion deemed CVs less reliable than two years ago. "AI has widened the gap between how candidates present themselves and how they perform," says Matt Monette, UK&I lead at Deel [1].
The Association of Chartered Certified Accountants announced it would require candidates to sit assessments in person, ending online exams that had been running since the pandemic, as AI has made it harder to prevent cheating. Meanwhile, consultancy McKinsey is running a pilot that asks candidates to use its AI tool Lilli to analyse a case study, proving they can use AI effectively as part of the assessment process [1].
Summarized by Navi