1st Executive Blog

AI in Recruitment and Search - Tarred by One Brush

Written by Andrew Thoseby | Jan 22, 2026 9:15:03 PM

I just read an article in HRD Weekly.

It seems to me that AI is occasionally being positioned as an inherently unfair approach to talent selection. The article notes that “The University of Melbourne, for instance, has warned that AI could worsen discrimination in hiring,” and that “the very features that make AI powerful can also deepen inequities if left unchecked.”

There is no doubt that any technology platform left unchecked can create problems for the audience it serves. That is not unique to AI. What is often missing from this debate, however, is a practical understanding of how the technology actually works and how it is applied. A lack of understanding is far more likely to create poor outcomes than the technology itself. If bias is built into an algorithm by a human, it can indeed be accentuated through repeated use, but that is a design and governance issue, not an inevitability. Bias of that kind serves nobody.

Much of the current commentary on AI in recruitment assumes that it is inclined to be unfair, that it accentuates bias, and that it creates inequities. My experience suggests the opposite: when used correctly, thoughtfully, and ethically, AI is more likely to improve recruitment outcomes for job seekers.

One of the persistent challenges in recruitment is volume. Recruiters and search consultants manage large numbers of applicants, while candidates quite reasonably expect to be treated personally. When that does not happen, the industry is criticised for poor “service levels.”

In an ideal world, all candidates would be treated equally and fairly. In reality, the recruiter’s and the hiring organisation’s primary obligation is to the paying client or in-house hiring manager. Before large language models emerged, advertised roles, particularly those on job boards such as SEEK, were often flooded with applications generated from loosely matched candidate profiles. In many cases, candidates were not consciously applying for a specific role at all.

Commercial reality meant that experienced recruiters would assess a CV quickly against the requirements of a role and, if no immediate alignment was evident, move on. This effectively created a form of Russian roulette for candidates, who could only hope their CV aligned closely enough with the stated requirements. Compounding this is the fact that advertisements are designed to attract applicants and rarely tell the full story of the role.

Even so, candidates who do progress to interview—particularly in white-collar roles—are generally treated objectively and professionally by recruiters, search consultants, and hiring managers. That said, human bias, often subconscious, inevitably plays some role. Many recruiters will recognise the frustration of interviewing a strong candidate, identifying genuine value, only to see them rejected by a hiring manager purely on the basis of the CV.

Whether this is fair to candidates or to recruiters and search consultants is debatable. Candidates want roles aligned to their career ambitions; recruiters and search consultants want candidates who genuinely meet the requirements. They also need to do this consistently enough to sustain their own roles and businesses.

Now let’s consider a best-practice AI-enabled approach. At its core, AI screening is a highly effective data-matching tool. In practice, this looks like the following:

  • A job description is created within a recruitment system
  • That job description contains far more information than appears in the advertisement, including skills, experience, soft competencies, and other specific criteria
  • On receipt of the job description, the recruitment system uses AI to search its own database to assess how closely existing candidates match the role. AI is matching data points; its “bias” is purely logical: it will not rank a Human Resources CV for a Sales role because the data points do not align (a simple sketch of this kind of matching appears after this list)
  • These results are presented in the recruiter’s or search consultant’s workspace. A human then decides which candidates may be suitable. Bias can occur here—but it is human bias. Search consultants, in particular, are often dealing with passive candidates and will typically seek an expression of interest before progressing
  • New applicants then enter the system
  • The recruiter instructs an AI assistant to generate a small number of text and voice-based screening questions based on the role requirements
  • At this point, AI has simply analysed the role data and recommended what should be explored in a pre-interview screening conversation. These questions may cover skills, experience, and soft competencies
  • The recruiter reviews and refines these questions
  • An AI agent conducts the screening call, matches responses to data points, and provides structured feedback to the recruiter
  • At this stage, AI has removed the “quick CV scan” bias and created an additional opportunity for candidates whose CVs may not align perfectly but who demonstrate relevant capability
  • The recruiter or search consultant then conducts a human-to-human interview, presents suitable candidates to the hiring manager, and the hiring manager repeats this human process before making a decision
  • At no point does AI replace the value-adding work of the recruiter or search consultant. Instead, it identifies capable talent at precisely the stage where humans, under time pressure, might otherwise miss them. It is simply better at early-stage, high-volume data analysis than entire recruitment teams. Candidates benefit directly
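
To make the data-matching step above concrete, here is a minimal sketch of how that kind of scoring can work, assuming nothing more than a simple overlap of stated skills and experience. It is not any vendor’s actual algorithm; the field names, weights, and example data are illustrative assumptions only.

    # Minimal sketch of role-to-candidate data-point matching (Python).
    # Field names, weights, and data are illustrative, not a real vendor schema.
    from dataclasses import dataclass, field

    @dataclass
    class Role:
        required_skills: set[str]
        min_years_experience: int

    @dataclass
    class Candidate:
        name: str
        skills: set[str] = field(default_factory=set)
        years_experience: int = 0

    def match_score(role: Role, candidate: Candidate) -> float:
        # A plain 0-1 overlap of stated data points; nothing else is inferred.
        if not role.required_skills:
            return 0.0
        skill_overlap = len(role.required_skills & candidate.skills) / len(role.required_skills)
        if role.min_years_experience > 0:
            experience_fit = min(candidate.years_experience / role.min_years_experience, 1.0)
        else:
            experience_fit = 1.0
        return 0.5 * skill_overlap + 0.5 * experience_fit

    sales_role = Role(required_skills={"territory planning", "crm", "negotiation"},
                      min_years_experience=5)
    candidates = [
        Candidate("Sales profile", {"crm", "negotiation", "forecasting"}, 7),
        Candidate("HR profile", {"payroll", "industrial relations"}, 10),
    ]
    for c in sorted(candidates, key=lambda c: match_score(sales_role, c), reverse=True):
        print(f"{c.name}: {match_score(sales_role, c):.2f}")

Run against a Sales role, a Human Resources profile simply scores low because its data points do not overlap with the stated requirements; there is no judgement in the ranking beyond that.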

Our view is that professional recruiters, whether in-house or external search consultants, clearly understand and actively protect the value they bring to the hiring process. That value lies in deeper, more meaningful human-to-human interaction and exploration. They also recognise that candidates will increasingly use AI to strengthen their CVs against role requirements. Most recruiters have no issue with this; objecting would be hypocritical.

Experienced recruiters and search consultants are highly capable of identifying when someone is not what they claim on a CV. Just as importantly, they are skilled at uncovering the hidden qualities between the lines—the attributes that rarely appear on a written CV but often determine whether a career move is truly successful.


What “AI in recruitment” usually means (so we stop talking past each other)

One reason this debate goes in circles is that “AI in hiring” can mean wildly different things:

  • Search + matching (ranking CVs/profiles against role criteria)
  • Parsing (turning a CV into structured data)
  • Screening assistants (chat/voice prompts that ask consistent questions)
  • Assessment tooling (tests, work samples, or automated scoring)
  • Video / audio analysis (higher risk, especially for accessibility and disability impacts)

When people say “AI is unfair,” they often mean the last category — but most of the day-to-day value (and frankly, most of the realistic upside for candidates) comes from the first three.


Where bias actually shows up (and why it’s not “AI’s personality”)

If you’re trying to be practical about fairness, the useful question isn’t “does AI discriminate?” It’s “where can discrimination enter the system?”

Here are the common fault lines:

  1. Historical data baked into models
    If past hiring reflected bias, a model trained on that history can reproduce it at scale.
  2. Proxy variables
    Even if you remove protected attributes, other signals can act as stand-ins (school, postcode, gaps, naming patterns, etc.).
  3. Different error rates for different groups
    This matters a lot in speech/video tooling (accent, disability-related speech patterns, assistive tech). The EEOC explicitly calls out examples where automated tools can disadvantage people with disabilities.
  4. Over-filtering for “perfection”
    Systems can default to narrow patterns unless you deliberately design for range and potential, which is exactly why the earlier point about giving candidates an additional opportunity is so important.

So yes: bias is real. But it’s not mystical. It’s usually measurable, testable, and governable.
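
Since “measurable” carries real weight in that sentence, the sketch below shows one such measurement: comparing a screening tool’s miss rate (qualified candidates it screened out) across two groups. The groups, labels, and outcomes are invented purely for illustration.

    # Compare a screening tool's miss rate (false negatives) across two groups.
    # "qualified" reflects a human ground-truth review; every value here is invented.
    results = [
        # (group, qualified_per_human_review, passed_ai_screen)
        ("group_a", True, True), ("group_a", True, True), ("group_a", True, False),
        ("group_a", False, False),
        ("group_b", True, False), ("group_b", True, False), ("group_b", True, True),
        ("group_b", False, False),
    ]

    def miss_rate(group: str) -> float:
        qualified = [r for r in results if r[0] == group and r[1]]
        missed = [r for r in qualified if not r[2]]
        return len(missed) / len(qualified) if qualified else 0.0

    for group in ("group_a", "group_b"):
        print(f"{group}: miss rate {miss_rate(group):.2f}")

A materially higher miss rate for one group is a prompt to investigate the tool, not proof of discrimination on its own, but without measuring you would never see it.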


A practical governance checklist

If I were advising a hiring team (or a vendor), this is what I’d want to see in place — not as theatre, but as real process:

1) Transparency (tell people what’s happening)

  • Candidate notice that AI/automation is used at screening stages (at least at a high level).
  • A simple “how decisions are made” explanation in plain language.

2) Job-relatedness (prove the tool is assessing relevant criteria)

  • Clear mapping: role requirements → screening questions → scoring rubric (a simple sketch follows this list).
  • Avoid “because the model said so” logic.
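
As an illustration only, here is what an explicit requirement-to-question-to-rubric mapping can look like in data form. The requirements, questions, and score anchors are hypothetical, not a standard; the point is simply that every score traces back to a stated requirement.

    # Illustrative requirement -> screening question -> scoring rubric mapping.
    # All criteria, questions, and anchors are hypothetical examples.
    rubric = [
        {
            "requirement": "Stakeholder management",
            "question": "Describe a time you managed conflicting priorities between two senior stakeholders.",
            "anchors": {
                0: "No relevant example offered",
                1: "Example given, but outcome unclear",
                2: "Clear example with a measurable outcome",
            },
        },
        {
            "requirement": "CRM reporting",
            "question": "Which CRM platforms have you used, and what reporting did you own?",
            "anchors": {
                0: "No hands-on CRM experience",
                1: "Used a CRM but did not own reporting",
                2: "Owned regular reporting or dashboards",
            },
        },
    ]

    # Every score can be traced back to a stated requirement,
    # which is the opposite of "because the model said so".
    for item in rubric:
        print(f"{item['requirement']}: {item['question']}")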

3) Bias testing (don’t guess — measure)

  • Run adverse impact checks and keep evidence. The EEOC makes the point plainly: even “neutral” tools can be illegal if they cause unjustified disparate impact.
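
For teams that have never run one, a basic adverse impact check is not complicated. The sketch below applies the familiar four-fifths rule of thumb to selection rates by group; the group labels and counts are invented, and a real programme would add statistical and legal review on top of this.

    # Basic adverse impact check using the four-fifths rule of thumb.
    # Group labels and counts are invented; this is illustration, not legal advice.
    screening_outcomes = {
        # group: (candidates screened, candidates progressed)
        "group_a": (200, 80),
        "group_b": (150, 45),
    }

    selection_rates = {g: passed / screened
                       for g, (screened, passed) in screening_outcomes.items()}
    highest_rate = max(selection_rates.values())

    for group, rate in selection_rates.items():
        impact_ratio = rate / highest_rate
        flag = "review" if impact_ratio < 0.8 else "ok"  # four-fifths threshold
        print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({flag})")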

4) Human accountability (humans stay responsible)

  • Human review points that are real, not rubber stamps.
  • A documented override process (and logging of overrides).
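
One way to make human accountability auditable rather than anecdotal is to log every departure from the AI recommendation with a named reviewer and a reason. The sketch below is a minimal version of that idea; the field names and file format are assumptions, not a prescribed schema.

    # Minimal override log: every human decision that departs from the AI
    # recommendation is recorded with who made it and why. Fields are illustrative.
    import json
    from datetime import datetime, timezone

    def log_override(candidate_id: str, ai_recommendation: str, human_decision: str,
                     reviewer: str, reason: str, path: str = "override_log.jsonl") -> None:
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "candidate_id": candidate_id,
            "ai_recommendation": ai_recommendation,
            "human_decision": human_decision,
            "reviewer": reviewer,
            "reason": reason,
        }
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    log_override("cand-0042", "progress", "reject", "j.smith",
                 "Role requires current accreditation the candidate does not hold")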

5) Accessibility + accommodations (often overlooked)

  • If screening uses voice/video or timed tools, ensure candidates can request alternatives.
  • Test for disability impacts explicitly (not as an afterthought).

6) Vendor discipline (procurement is governance)

  • Ask vendors what data they trained on, what they measure, how they test for bias, and what audit artefacts you can access.
  • Don’t accept “trust us” assurances (that’s where legal risk and reputational risk live).

7) Use an actual risk framework

Two credible ways teams are doing this: