Despite being just a toddler, AI has learned to walk, talk, and even ride a bike when it comes to matching candidates with jobs. Both sides of the recruitment process are using it, but we’re a long way from reaching consensus on its fair and ethical use.
Article summary:
AI is permeating the recruitment process, with both sides using it to their advantage, but it could be forcing us into a zero-sum game.
The yin and yang:
- Recruiters can handle higher volumes in less time when using AI to write job descriptions, recommend interview questions, and screen applicants.
- But time savings get eroded when candidates apply en masse with AI-generated résumés and it’s unclear which candidates gamed the system.
Where we’re at now:
- TA leaders need to get clear on what AI is actually helping with, says CareerXroads’ Chris Hoyt: they have to wrap their arms around the difference between automation and AI-controlled decision-making in recruiting.
- Internal champions / centers of excellence can ensure “less tech buying for the sake of buying tech” and guide thoughtful decision-making around the smart and safe use of AI tools, Hoyt says.
- Candidates using AI to ‘cheat’ the system is a natural consequence of the industry’s history of ghosting and failure to manage candidate expectations. We’ve nudged them into the mindset, “How can I apply to a thousand companies just to get one of them to pay attention to me?” says CareerXroads’ Gerry Crispin.
Where we could go:
- Behavioral interviewing is an immediate win for getting a clearer picture of the candidate behind the résumé.
- Embedding AI into recruiter workflows should ideally be done in a way that improves the candidate experience and ensures candidates’ time is neither wasted nor devalued.
- More transparency on both sides — including what outcomes each stakeholder is aiming for — could result in a win-win scenario where everyone feels like they’re getting a fair deal from AI.
Is it right that AI should be making decisions that could change a candidate’s future?
As with all the big questions in life, the answer is muddled and nuanced. With ChatGPT and DeepSeek now being used for literally everything, from homework assistance to guided meditations, it was obvious that talent acquisition (TA) would soon come calling. On the one hand, recruiters can handle higher volumes when using AI to write job descriptions, recommend interview questions, and screen applicants. On the other hand, candidates are flooding recruiters with AI-generated résumés, which now all look alike, packed with keywords and phrases that match the job requirements. There’s yin and yang in this debate.
The reason, according to Gerry Crispin, co-founder of CareerXroads, is that employers “are getting a taste of their own medicine.”
He explains: “Anyone who has looked for a job, I can assure you that 9 out of 10 companies they dealt with did not respond to them. Period. Ignored them as if they did not exist.” He says we shouldn’t be surprised that candidates are now willing to use any tool at their disposal to game the system. We’ve nudged them towards the mindset, “How can I apply to a thousand companies just to get one of them to pay attention to me?”
“There’s a whole host of discomfort going on right now, but I do believe it will work itself out,” Crispin says. “I am very optimistic that we just need to consider how we embrace and manage the risk around AI.”
- Get the full perspective: Watch the roundtable featuring Gerry Crispin and Chris Hoyt of CareerXroads, ‘Everything But AI: What’s In Store for Enterprise Hiring in 2025.’
What is AI and who’s deciding how to use it?
Much of the discomfort around AI is that change is happening faster than employers can adjust to it — and, for once, employers are on the back foot. One reason is that AI’s role in recruiting has yet to be clearly defined. Is it an automation tool, best used for streamlining repetitive tasks like scheduling and résumé screening? Or is it a decision-making tool, capable of analyzing data to make unbiased hiring decisions?
Getting the distinction right is critical. When used to automate the ‘grunt work’, AI has huge potential for time saving. Some roles can receive thousands of applications — using AI to sift résumés and find qualified candidates can save days of work for recruiters.
That’s a very different use case from using AI to conduct candidate video interviews, for example. Then you’re trusting the tech to make decisions: if AI is going to interpret a candidate’s responses and facial expressions, you must be confident that it’s doing so fairly and accurately. Caution is needed with this use case, or your legal and compliance teams will be kept busy.
Chris Hoyt, president and co-founder of CareerXroads, says he’s “optimistic about TA leaders wrapping their arms around the differentiation between automation and AI in our space.”
He advocates for “less tech buying for the sake of buying tech” and encourages employers to identify “internal champions” or establish “centers of excellence” to guide thoughtful decision-making around: “What tools should we be using? How should we be using them? How should we be doing AI smart, and doing it safely?” Focus first on defining the process, then select the tools to support it.
Training recruiters is going to be a big part of that, Hoyt adds. He expects AI training to be a significant trend in enterprise hiring strategies in 2025.
Is AI helping candidates game the system?
Recruiting is a two-way street, and AI purchasing decisions will, to some extent, be guided by how candidates react to it. Everyone wants a fair shot at landing their dream job. Already we’re seeing an ‘arms race’ of sorts, with candidates using AI tools to rewrite their résumés so they’re precisely tailored to (AI-powered) ATS screening, or to get ‘live’ answers to interview questions from an (AI-powered) chatbot.
Some are even using tools like LazyApply to automate the entire job search and apply for thousands of jobs in a single click. One wonders whether it’s just bots talking to bots at this point, and where the human element fits into all of this.
But can AI really help candidates ‘cheat’ or ‘game’ the system? It’s a tricky question, as it depends on how you define those terms. Some experts, including Hoyt and Crispin, argue that using AI to get the best possible chance of being noticed by a recruiter is a natural response to the ghosting and automated rejection emails that candidates have been experiencing for years: “Why is the candidate’s time worth less than the recruiter’s time? Why shouldn’t they be given the same ability to apply to more jobs with less of their own time? 5,000 people applied to a job that had 20 roles to fill, and we left 4,000 plus of them without even a response from us. Well, boohoo recruiter…This has been a long time coming and I think there are just some adjustments that need to be made on both sides,” Hoyt says.
“Most companies didn’t set expectations about when the candidate might hear, whether the job was filled, or whether they would go forward with them — any number of things. We are far from telling candidates what ‘good’ looks like in the journey,” Crispin adds. “We have to figure out how to use technology to change the way we treat that top of the funnel, and the perception they have of us.”
Otherwise, he says, candidates’ willingness to outsource their job search to the bots will only increase.
Where do we go from here?
The good news is AI has the power to deliver significantly better candidate experiences. It can check statuses, tell candidates exactly where they are in the process, and let them know what their next steps are. It’s simply a question of both sides being transparent about what AI is helping with.
Crispin would like to see a future where everyone comes to the table with clean hands. “As employers, from both a policy and practice point of view, we need to be clear with candidates what we’re using AI for and why. So, first and foremost: ‘here’s how we’re using AI in a positive way to give you more opportunity to get up to bat and to compete for this job.’ And then: ‘Here’s what we expect of you in terms of your use of AI in a positive way that does not create a problem for us in seeing it as cheating.’”
Until we reach that utopia, it’s about finding sensible workarounds, like behavioral interviewing to get a clearer picture of the candidate behind the bot-polished résumé.
Hoyt shares the story of an organization that flagged 80% of their candidate cohort as having used assistive technology to game their way through the system. It was a real problem until they took a candid approach: “They started every interview by saying, we use AI, and we expect you’re going to use it, too. So we’re going to ask questions a little differently. We’re not going to ask if you can code, we’re going to ask you why you selected that type of code, why this style over that style, and really dig into the humanistic element.”
For Hoyt, this is a great example of AI forcing employers to rethink the weaker aspects of their interviewing and other recruiting processes. “As recruiters, we often take the path of least resistance. When tools come along to help us, that’s great, and it’s also an opportunity to take a step back and make sure we’re using tools in a fair way, as those tools become more readily accessible and just easier to use.”
If there’s a word that brings this all together, it’s “transparency.” The audience at the roundtable agreed: we need to keep talking about AI, keep discussing its potential for good or ill — and make sure everyone knows what outcomes they’re aiming for.
“We want to shoot for outcomes that add value to each of the stakeholders,” Crispin says. “If being transparent causes candidates to believe they’re getting a fair deal, that they’re getting up to bat more often, that they’re getting hired or getting feedback, then you’re going to see a change in the way candidates perceive the recruiting journey. The employer brand is going to rise as a result of that…And once that happens, I think we’re in pretty good shape.”