When AI Hiring Blocks Talent: How Discrimination Against People with Disabilities Appears in IT

Modern approaches to IT recruitment increasingly rely on automated systems – from resume screening to video interviews and behavioral analysis. However, while these tools save recruiters time, they often replicate existing social biases and filter out talented candidates who should have received an interview invitation.

Image: cnbc.com
How This Shows Up in Practice
A University of Melbourne study (2025) found that speech recognition tools frequently misinterpret candidates with accents or speech differences (for example, those related to disability), with error rates of up to 22% for Australian speakers compared to less than 10% for U.S. native English speakers (source: The Guardian). These transcription errors translate into lower interview scores even when professional skills are identical.
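To make figures like these concrete: "error rate" for speech recognition usually means word error rate (WER) – the word-level edit distance between what a candidate said and what the system transcribed, divided by the number of words spoken. Here is a minimal, self-contained Python sketch; the example sentences are invented for illustration:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words, computed by dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # deleting i words
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # inserting j words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

spoken = "I led the data platform migration"
heard  = "I let the later platform migration"
print(f"WER: {wer(spoken, heard):.0%}")  # -> WER: 33%
```

At a 22% WER, roughly one word in five is transcribed wrong – more than enough to distort an automated interview score.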
In LLM-based ranking – the use of AI models such as ChatGPT to score and order items like web pages, documents, products, search results, or resumes – evidence shows that resumes mentioning participation in disability-related events or projects received lower scores from GPT-4, even when the format and content were otherwise identical (source: arXiv). A broader arXiv study (2025) demonstrated that candidates who explicitly stated “no disability” had an advantage even over those who simply did not disclose their status.
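The paired-resume methodology behind such findings is straightforward to reproduce. Below is a minimal sketch, assuming the official openai Python package and an API key in the environment; the prompt, model name, resume text, and 0–100 scale are all illustrative choices, not the cited studies' actual setup:

```python
# Paired-resume audit sketch for LLM-based ranking: score two resumes
# that are identical except for one disability-related line.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

BASE_RESUME = (
    "Senior Python developer, 7 years of experience: "
    "microservices, PostgreSQL, CI/CD; led a team of four."
)

variants = {
    "control": BASE_RESUME,
    "disability_mention": BASE_RESUME
    + " Organizer of a company hackathon on assistive technologies.",
}

def score_resume(text: str) -> float:
    """Ask the model for a single numeric suitability score."""
    response = client.chat.completions.create(
        model="gpt-4o",   # illustrative choice; any chat model works here
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Rate this resume for a senior Python role. "
                        "Reply with only a number from 0 to 100."},
            {"role": "user", "content": text},
        ],
    )
    return float(response.choices[0].message.content.strip())

for name, resume in variants.items():
    print(name, score_resume(resume))
```

A systematic score gap between the two variants, repeated across many resume templates and runs, is exactly the kind of disparity the cited studies measured.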
Why This Hits the IT Industry Especially Hard
When selecting programmers, DevOps engineers, or testers, employers often rely on algorithms that:
- are trained on datasets where people with disabilities are rare or absent;
- fail to account for context (e.g., a medical leave or the use of assistive technologies), treating it as a “deviation” rather than a neutral fact;
- base evaluations on patterns that people with disabilities may not match.
This issue extends beyond hiring to performance monitoring: tools that track click activity and similar metrics may misinterpret longer pauses or an unusual work rhythm (source: Technical.ly).
Why Automation Alone Does Not Solve the Problem
A University of South Australia study (2025) reports that AI can only help improve hiring diversity if:
- The system can explain its decisions from an inclusivity standpoint.
- The company uses clear DEI indicators that measure quality as well as quantity, not just headline numbers.
- Organizational support pushes recruiters to interpret AI results critically rather than trust them blindly (source: IEEE Xplore).
Key Problems That Need Attention – and How to Address Them
The issue of AI in IT recruitment for people with disabilities is not about “broken technology,” but about social biases embedded in it:
- systems work with incomplete or distorted data;
- transparency is lacking – neither candidates nor recruiters know why certain decisions were made;
- there is no auditing – regular checks for discrimination are absent;
- the focus is on efficiency, not fairness.
Steps to Mitigate AI-Driven Discrimination:
- conduct bias audits before implementing AI hiring solutions (see the sketch after this list);
- guarantee human oversight for critical decisions: keep a manual review option;
- involve accessibility experts and people with disabilities in product testing – apply “inclusive design” from the start;
- develop policies for explaining AI decisions to both recruiters and candidates;
- foster a culture of positive examples: share success stories of IT professionals with disabilities who passed real technical screens.
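As a concrete starting point for the first item above, one widely used audit statistic is the “four-fifths rule” from U.S. employment-selection guidance: if any group's selection rate falls below 80% of the highest group's rate, the screen deserves review. A minimal sketch with invented numbers (a real audit would pull outcomes from your actual pipeline):

```python
# Bias-audit sketch using the "four-fifths rule" (adverse-impact ratio).
# All numbers below are made up for illustration only.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

# Hypothetical screening outcomes per candidate group.
outcomes = {
    "disclosed_disability": {"selected": 12, "applicants": 100},
    "no_disclosure":        {"selected": 30, "applicants": 150},
}

rates = {group: selection_rate(o["selected"], o["applicants"])
         for group, o in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio={impact_ratio:.2f} -> {flag}")
# disclosed_disability: rate=0.12, ratio=0.60 -> REVIEW
# no_disclosure:        rate=0.20, ratio=1.00 -> ok
```

Passing this one check does not prove a system is fair, but failing it is a clear signal to pause automated screening and investigate.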
At Alite Recruiting, we believe that modern HR technologies should not only look for “perfect CVs,” but also support professionals who can perform at their best when treated fairly. We actively implement inclusive design principles across all stages of IT recruitment, ensure transparency of AI-driven decisions, and safeguard the human factor in every important selection process.