I just read the following in HRD Weekly.
It seems to me that AI is sometimes positioned as an inherently unfair approach to talent selection. Two claims stand out: that “The University of Melbourne, for instance, has warned that AI could worsen discrimination in hiring,” and that “the very features that make AI powerful can also deepen inequities if left unchecked.”
There is no doubt that any technology platform left unchecked can create problems for the audience it serves. That is not unique to AI. What is often missing from this debate, however, is a practical understanding of how the technology actually works and how it is applied. A lack of understanding is far more likely to create poor outcomes than the technology itself. If bias is built into an algorithm by a human, it can indeed be accentuated through repeated use; that serves nobody, but it is a design and governance failure, not an inevitability.
Much of the current commentary on AI in recruitment assumes that it is inclined to be unfair, that it accentuates bias, and that it creates inequities. My experience suggests the opposite: when used correctly, thoughtfully, and ethically, AI is more likely to improve recruitment outcomes for job seekers.
One of the persistent challenges in recruitment is volume. Recruiters and search consultants manage large numbers of applicants, while candidates quite reasonably expect to be treated personally. When that does not happen, the industry is criticised for poor “service levels.”
In an ideal world, all candidates would be treated equally and fairly. In reality, the recruiter’s and the hiring organisation’s primary obligation is to the paying client or the in-house hiring manager. Before large language models emerged, advertised roles—particularly via job boards such as SEEK—were often flooded with applications generated from loosely matched candidate profiles. In many cases, candidates were not consciously applying for a specific role at all.
Commercial reality meant that experienced recruiters would assess a CV quickly against the requirements of a role and, if no immediate alignment was evident, move on. For candidates, this effectively created a form of Russian roulette: they could only hope their CV aligned closely enough with the stated requirements. Compounding this is the fact that advertisements are designed to attract applicants and rarely tell the full story of the role.
Even so, candidates who do progress to interview—particularly in white-collar roles—are generally treated objectively and professionally by recruiters, search consultants, and hiring managers. That said, human bias, often subconscious, inevitably plays some role. Many recruiters will recognise the frustration of interviewing a strong candidate, identifying genuine value, only to see them rejected by a hiring manager purely on the basis of the CV.
Whether this is fair to candidates or to recruiters and search consultants is debatable. Candidates want roles aligned to their career ambitions; recruiters and search consultants want candidates who genuinely meet the requirements. They also need to do this consistently enough to sustain their own roles and businesses.
Now let’s consider a best-practice AI-enabled approach. At its core, AI screening is a highly effective data-matching tool: it assesses every applicant against the requirements of the role, at a volume no human reviewer can sustain.
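To make “data matching” concrete, the sketch below shows the kind of comparison a screening step performs. The role requirements, candidates, and scoring rule are hypothetical and deliberately simple; real platforms use far richer matching (semantic similarity, structured parsing of work history), but the principle is the same: every applicant is measured against the same criteria, in the same way.

```python
# Illustrative only: AI-assisted screening as data matching.
# Role, candidates, and scoring are hypothetical; real platforms use
# far richer matching than simple keyword coverage.

from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    cv_text: str


ROLE_REQUIREMENTS = {"payroll", "xero", "reconciliation", "month-end reporting"}


def coverage_score(candidate: Candidate, requirements: set) -> float:
    """Return the fraction of stated requirements found in the CV text."""
    text = candidate.cv_text.lower()
    matched = [req for req in requirements if req in text]
    return len(matched) / len(requirements)


candidates = [
    Candidate("A. Jones", "Payroll officer; Xero and MYOB; bank reconciliation."),
    Candidate("B. Smith", "Retail manager; rostering, stock control, POS systems."),
]

# Every applicant is scored against the same criteria, in the same way.
for c in sorted(candidates, key=lambda c: coverage_score(c, ROLE_REQUIREMENTS), reverse=True):
    print(f"{c.name}: {coverage_score(c, ROLE_REQUIREMENTS):.0%} requirement coverage")
```

The recruiter’s judgement then applies to a pool that has been assessed the same way from the first application to the last, rather than to whichever CVs happened to be opened first.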
Our view is that professional recruiters—whether in-house or external search consultants—clearly understand and actively protect the value they bring to the hiring process. That value lies in deeper, more meaningful human-to-human interaction and exploration. They also recognise that candidates will increasingly use AI to strengthen their CVs against role requirements. Most recruiters have no issue with this; to object would be hypocritical.
Experienced recruiters and search consultants are highly capable of identifying when someone is not what they claim on a CV. Just as importantly, they are skilled at uncovering the hidden qualities between the lines—the attributes that rarely appear on a written CV but often determine whether a career move is truly successful.
One reason this debate goes in circles is that “AI in hiring” can mean wildly different things:
When people say “AI is unfair,” they often mean the last category — but most of the day-to-day value (and frankly, most of the realistic upside for candidates) comes from the first three.
If you’re trying to be practical about fairness, the useful question isn’t “does AI discriminate?” It’s “where can discrimination enter the system?”
Here are the common fault lines:
So yes: bias is real. But it’s not mystical. It’s usually measurable, testable, and governable.
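“Measurable” is worth making concrete. One common screening check, sketched below with invented numbers, is to compare selection rates across demographic groups and flag any group whose rate falls below roughly 80% of the best-performing group’s rate (the so-called four-fifths rule). It is a heuristic, not a legal or statistical verdict, but it turns “is this fair?” into something you can actually monitor.

```python
# Illustrative adverse-impact check with invented numbers.
# The four-fifths ratio is a common screening heuristic, not a
# definitive legal or statistical test.

outcomes = {
    # group: (applicants, progressed_to_interview)
    "group_a": (400, 120),
    "group_b": (250, 45),
}

selection_rates = {g: selected / total for g, (total, selected) in outcomes.items()}
best_rate = max(selection_rates.values())

for group, rate in selection_rates.items():
    impact_ratio = rate / best_rate
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f} [{flag}]")
```

The same check works whether the shortlist was produced by an algorithm or by a human skimming CVs, which is rather the point: measured this way, bias becomes something you govern rather than something you argue about.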
If I were advising a hiring team (or a vendor), this is what I’d want to see in place — not as theatre, but as real process:
1) Transparency (tell people what’s happening)
2) Job-relatedness (prove the tool is assessing relevant criteria)
3) Bias testing (don’t guess — measure)
4) Human accountability (humans stay responsible)
5) Accessibility + accommodations (often overlooked)
6) Vendor discipline (procurement is governance)
7) Use an actual risk framework
Two credible ways teams are doing this: