The dangers of using AI in recruitment
03/05/2021 by MRL
Artificial and augmented intelligence, or AI, is a term that has been around since 1956 and refers to machines trained to understand human language and 'think' for themselves. Automation is being used now more than ever to speed up processes, and some would say these machines are now more capable than humans at certain tasks; they are increasingly being harnessed to save recruiters valuable time.
Recruitment automation refers to technology that can carry out tasks and workflows without human monitoring, freeing up recruiters to work on relationship building. Recruitment automation has been shown to increase productivity, improve the talent profiles within an organisation and accelerate time-to-fill while reducing cost-per-hire.
When thinking about all that is involved in a hiring process, hearing that recruitment AI can take some of that burden away from teams may feel like music to your ears. However, as one big brand in particular found, not everything is always as it seems.
There are two main tasks where artificial and augmented intelligence is being used within recruitment:
Writing job adverts and descriptions
Screening potential candidates
A former Microsoft employee designed augmented writing software called Textio, which creates enticing job ads by selecting terminology based on the geographic area being targeted. This will undoubtedly save recruiters valuable time, as it can suggest synonyms on the spot and use data to ascertain which words and phrases will increase the number of applications submitted.
We all know that writing job adverts and descriptions is time-consuming, and AI tools such as Textio may feel like the solution. However, a problem occurs when the data pool being used becomes stale.
There was a time in recent history where job advertisements would be seeking a 'ninja' or 'guru'; this practice was particularly present in the technology sector but quickly became outdated.
If the data pool used by an AI is not regularly updated with the swiftly changing language preferred by candidates, the machine will create sub-standard adverts and descriptions.
Therefore, while the actual writing process can be handed over to a machine, a human must still do continual research to ensure that the advertisements and descriptions remain on-trend. This should also include spot-checking the copy to see whether outdated terminology is seeping in.
Another issue with the data used by these AIs is the unintended use of gender-coded words. While heralded as producing inclusive language, these machines base their decisions purely on the data. An AI may steer clear of using 'he' or 'she', but does it understand the impact of specific words, such as 'perfectionist', on the applicant?
Augmented intelligence may produce exceptional copy that gets results; however, this will always be based on data rather than actual knowledge or experience. The machine will not know or understand the impact its words have on potential candidates, so recruiters will have to continually analyse whether unintentional bias is taking place. For example, if you begin to see a higher proportion of men being identified for a position, this signals unintentional bias on the part of the machine. Even in a male-dominated industry, steps should be continually taken to close the gender gap.
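The spot-checking described above can be partly automated. As a minimal sketch, the short Python snippet below scans an advert against lists of gender-coded terms; the wordlists here are small hypothetical samples for illustration, not a vetted lexicon, so a real check would use a properly researched list.

```python
# Illustrative sketch: spot-check a job advert for gender-coded terms.
# The wordlists below are small hypothetical samples, not a vetted lexicon.
import re

MASCULINE_CODED = {"ninja", "guru", "rockstar", "competitive", "dominant"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing", "interpersonal"}

def coded_terms(advert: str) -> dict:
    """Return any gender-coded words found in the advert text."""
    words = set(re.findall(r"[a-z]+", advert.lower()))
    return {
        "masculine": sorted(words & MASCULINE_CODED),
        "feminine": sorted(words & FEMININE_CODED),
    }

ad = "We need a competitive coding ninja to join our collaborative team."
print(coded_terms(ad))
# {'masculine': ['competitive', 'ninja'], 'feminine': ['collaborative']}
```

A human still has to decide what counts as outdated or off-putting wording; the script only flags candidates for review.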
Big brands receive hundreds, if not thousands, of CVs for their job vacancies, so investing in artificial intelligence to take over the screening process may sound like the perfect solution. Two AI recruitment software providers use two different approaches to screening potential candidates:
Pymetrics asks candidates to play a variety of games that discreetly measure personality traits such as risk aversion. The company states that the process is used:
"to fairly and accurately measure cognitive and emotional attributes in only 25 minutes".
HireVue, by contrast, asks candidates to upload videos of themselves answering interview questions so that its AI can analyse both verbal and body language to screen candidates.
These machines are supposed to take a more impartial approach than a human interviewer.
However, in 2018 it emerged that Amazon had stopped using its system because the AI had begun to discriminate against women. The AI analysed CVs and created a list of top contenders, but it had been trained on data from a ten-year period in which most CVs came from men, as men dominated the tech industry.
While Amazon did edit the program, there was no way of knowing whether the machine was learning other biases, and the project was eventually shut down.
This incident highlights a critical issue with using AI in recruitment: the data. These machines are supposed to be impartial but are using data based on thousands or even millions of decisions made by humans.
That doesn’t mean that AI for recruitment is impossible; however, responsibility should never solely be with the machine. Continual tests should be carried out to ascertain whether accidental bias is creeping in.
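One widely used test of the kind suggested above is the "four-fifths rule" from US hiring guidance: if one group's selection rate falls below 80% of the highest group's rate, the screening process warrants investigation. The sketch below applies that check to made-up example counts; the numbers and group labels are purely illustrative.

```python
# Illustrative sketch of a simple bias audit using the "four-fifths rule":
# a selection-rate ratio below 0.8 relative to the best-performing group
# is a conventional trigger for review. The counts are made-up examples.

def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (number selected, number of applicants)."""
    return {g: selected / total for g, (selected, total) in outcomes.items()}

def four_fifths_check(outcomes: dict) -> dict:
    """Ratio of each group's selection rate to the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: round(rate / best, 2) for g, rate in rates.items()}

screen = {"men": (30, 100), "women": (18, 100)}
print(four_fifths_check(screen))
# {'men': 1.0, 'women': 0.6}  -> 0.6 is below 0.8, so review the screen
```

Run periodically over a screening tool's outputs, a check like this is one concrete way to catch accidental bias before it compounds.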
Indeed, a 'middle ground' whereby AI and humans work together is needed; these machines shouldn't be wholly relied upon and trusted. Incidents like the one Amazon came across prove that AI in recruitment is not 100% there... yet.