Why AI Recruitment Systems Can Appear to Be Racist or Sexist

Artificial Intelligence (AI) is still in its infancy, but it has already become mainstream in many parts of business. One area in which it’s become a standard tool is recruitment, where it’s commonly used to find candidates, screen their applications for fit against a job spec and keep them apprised of progress.

This is despite periodic news headlines about AI discriminating against candidates, typically on grounds of race or gender. These headlines raise concerns that are further fuelled by reports of similar discrimination in other contexts, such as Google Photos labelling images of black people as gorillas, or Microsoft’s Tay chatbot being ‘taught’ by members of the public to make pro-Nazi statements.

The reality is that, like other computer software, AI-powered recruitment systems do what their creators build them to do. If they demonstrate inappropriate bias, it’s because of flaws introduced by their human designers. Such behaviour isn’t, of course, deliberately built in; it’s invariably the result of human error, carelessness or inexperience. Most often the flaw lies in the training data: a system that learns from historical hiring decisions will faithfully reproduce any bias those decisions contained.
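To see how this can happen in practice, here is a minimal sketch, written for this discussion rather than taken from any real screening product. The dataset, feature names and numbers are all invented for illustration; the point is only that a classifier trained on biased historical hiring decisions will reproduce that bias, even though nobody programmed it to discriminate.

```python
# A minimal, hypothetical sketch of bias creeping into a screening model.
# We generate synthetic "historical" hiring decisions that were themselves
# biased, train a classifier on them, and show the model inherits the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Candidate features: a genuine skill score, and a gender flag (1 = female).
skill = rng.normal(size=n)
is_female = rng.integers(0, 2, size=n)

# Biased historical decisions: past recruiters hired on skill but also
# penalised female candidates. That bias now lives in the training labels.
hired = (skill - 0.8 * is_female + rng.normal(scale=0.5, size=n)) > 0

# A careless designer trains on these labels with gender left in the features.
X = np.column_stack([skill, is_female])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill, differing only in gender.
same_skill = np.array([[1.0, 0], [1.0, 1]])
probs = model.predict_proba(same_skill)[:, 1]
print(f"P(hire | male)   = {probs[0]:.2f}")
print(f"P(hire | female) = {probs[1]:.2f}")  # noticeably lower
```

Note that simply dropping the gender column from the features wouldn’t fully fix this, since other details on a CV (hobbies, word choices, school names) can act as proxies for it; that is one reason carelessness or inexperience, rather than malice, is usually the culprit.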

To understand how designers end up making these mistakes, we need to understand a little about how AI works, and how it’s applied to recruitment in particular.

You can read this article in full in MI Business Magazine.

