Britain’s data protection regulator said it will examine whether the use of artificial intelligence (AI) systems in recruitment results in the wrongful denial of opportunities based on race.
The Information Commissioner’s Office said it is considering the impact of the use of AI in recruitment on neurodiverse people who weren’t part of the testing for this software.
The investigation is part of ICO25, a three-year plan setting out the watchdog’s regulatory approach and priorities.
The regulator’s decision comes amid concerns that the use of algorithms to sift through job applications affects employment opportunities for people from ethnic minorities, the Guardian reported.
The jobs website ZipRecruiter revealed to the newspaper that at least three-quarters of all CVs submitted for jobs in the US are read by algorithms.
“We will be investigating concerns over the use of algorithms to sift recruitment applications, which could be negatively impacting employment opportunities of those from diverse backgrounds,” the ICO said.
John Edwards, who took over as the UK’s information commissioner earlier this year, said the regulator would be “looking at the impact AI use could be having on groups of people who aren’t part of the testing for this software, such as neurodiverse people or people from ethnic minorities”.
David Leslie of The Alan Turing Institute, which is focused on data science and artificial intelligence, said: “The use of data-driven AI models in recruitment processes raises a host of thorny ethical issues, which demand forethought and diligent assessment on the part of both system designers and procurers.
“Most basically, predictive models that could be used to filter job applications through techniques of supervised machine learning run the risk of replicating, or even augmenting, patterns of discrimination and structural inequities that could be baked into the datasets used to train them,” Leslie told the Guardian.

FBU chief raises concern over rise in racist online posts by union members
THE FIRE Brigades Union (FBU) and other trade unions are increasingly concerned about a rise in racist and bigoted online comments by their own members and officials, according to Steve Wright, the FBU’s new general secretary, speaking to the Guardian.
Wright said internal inquiries have revealed dozens of cases involving members using racist slurs or stereotypes, often aimed at asylum seekers.
He said similar issues were reported in other unions, prompting a joint campaign to counter false narratives around immigration and race promoted by far-right groups online.
“People with far-right views are becoming more brazen in what they do on social media, and I’ve witnessed it with my own union around disciplinary cases and the rhetoric of some of our own members,” Wright said to the newspaper.
He added, “Some of our members and sometimes our reps have openly made comments which are racist and bigoted. In my time in the fire service, that has gone up.”
The FBU is planning to introduce new internal policies and wants the TUC to take action as well. A formal statement addressing far-right narratives will be launched at the union’s annual conference in Blackpool next month.
Wright cited the influence of social media and figures like Donald Trump and Nigel Farage as factors contributing to these incidents. “It feels like an itch that we’ve got to scratch,” he said.
The FBU barred a former official last year for allegedly endorsing racist content on X, including posts from Britain First and Tommy Robinson.
Wright also warned that the union could strike if the government moves to cut frontline fire services.