Artificial intelligence is being trained to be sexist and racist

In the 21st century, robots are among our closest and most trusted allies. Over the past few years alone, they have been driving our vehicles and carrying out critical, complicated medical operations, and, according to US Army General Robert Cone, they could soon dominate the battlefield, replacing a quarter of all US combat soldiers by 2030. Much more importantly, though, robot bartenders have been introduced, meaning that machines could soon be responsible for pouring our post-work bevvies - now there's a big responsibility in itself.

Yet, as our reliance on machines grows, the moment has come to consider the possibility that they may not deserve our unqualified trust. Why, you ask? Mostly because a lot of them are racist. Oh, and sexist. Yep, the next time you put your complete and undivided faith in an automated machine, remember that it could be dragging you down a dark path.

[Image: racism protest. Credit: Getty]

So, how exactly do we know this for sure? The idea originated in a study published in 2017 in the journal Science, which revealed that as machines get closer to acquiring human-like language abilities, they also absorb the biases deeply ingrained in our language. Of course, it isn't the robots making our society racist; it's the human beings who make and use them.

The research focused on a machine learning tool known as "word embedding", something already used in web search and machine translation. The technique builds up a mathematical representation of language, in which the meaning of a word is distilled into a series of numbers based on which other words most frequently appear alongside it. To make this concrete: a word like "flower" keeps company with pleasant words and so ends up associated with pleasantness, while a word like "spider" ends up associated with unpleasantness.
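To see roughly how that works, here is a minimal sketch in Python. The four-dimensional vectors below are invented purely for illustration (real systems such as word2vec or GloVe learn vectors with hundreds of dimensions from billions of words of text); the point is that "meaning" becomes geometry, and similarity becomes a number.

```python
import numpy as np

# Toy "embeddings" invented for illustration -- real models learn these
# vectors automatically from which words appear near which others.
embeddings = {
    "flower":     np.array([0.9, 0.8, 0.1, 0.0]),
    "spider":     np.array([0.1, 0.0, 0.9, 0.8]),
    "pleasant":   np.array([1.0, 0.9, 0.0, 0.1]),
    "unpleasant": np.array([0.0, 0.1, 1.0, 0.9]),
}

def cosine_similarity(a, b):
    """How closely two vectors point the same way: ~1 = similar, ~0 = unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["flower"], embeddings["pleasant"]))    # high
print(cosine_similarity(embeddings["spider"], embeddings["unpleasant"]))  # high
print(cosine_similarity(embeddings["flower"], embeddings["unpleasant"]))  # low
```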

On this basis, the research paper that appeared in Science pinpointed some deeply concerning associations that the algorithms had acquired. These included the fact that the artificial intelligence system was more likely to associate European American names with pleasant words such as “gift” or “happy”, while African American names were more commonly associated with unpleasant words. In addition, the words “female” and “woman” were more closely associated with arts and humanities occupations and with the home, while “male” and “man” were closer to maths and engineering professions.
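The measurement behind those findings is, at heart, simple arithmetic on the vectors. The study formalised it as the Word Embedding Association Test (WEAT); the sketch below is a simplified version of its per-word score, with made-up random vectors and a made-up vocabulary standing in for the embeddings the researchers actually probed, so the printed numbers are meaningless - only the mechanics are real.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, pleasant, unpleasant, vec):
    """Simplified per-word measure in the spirit of WEAT: mean similarity
    to the 'pleasant' attribute words minus mean similarity to the
    'unpleasant' ones. A positive score means the word leans pleasant."""
    return (np.mean([cosine(vec[word], vec[p]) for p in pleasant])
            - np.mean([cosine(vec[word], vec[u]) for u in unpleasant]))

# Random toy vectors stand in for trained embeddings (illustration only).
rng = np.random.default_rng(0)
vocab = ["gift", "happy", "abuse", "filth", "Emily", "Ebony"]
vec = {w: rng.normal(size=50) for w in vocab}

for name in ["Emily", "Ebony"]:
    score = association(name, ["gift", "happy"], ["abuse", "filth"], vec)
    print(f"{name}: pleasantness association {score:+.3f}")
```

Run on real embeddings trained on web text, scores like these are what separated European American names from African American names, and "woman" from "man".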


Joanna Bryson, a computer scientist at the University of Bath and a co-author of the study, was keen to make it clear that these disturbing trends were the fault of humans, not of AI. She said: “A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.”

Her point is reinforced by the fact that these are exactly the same biases that show up in implicit association tests, in which people in the US and the UK asked to match words to faces consistently paired pleasant words with white faces. In addition, one previous study found that an identical CV is 50 per cent more likely to result in an interview invitation if the candidate’s name is European American rather than African American.

In fact, multiple studies have shown that the technology we create can discriminate against minorities. Perhaps the most worrying example is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), a risk-assessment tool built by the company Northpointe that plays a role in determining when criminal defendants should be released across the US.

[Image: women's rights protest. Credit: StockSnap]

When the risk scores given to some 10,000 Florida defendants were analysed in 2016, black defendants were found to be almost twice as likely as white defendants to be rated “higher risk” of committing another crime. The tool flagged black defendants as 77 per cent more likely to commit a future violent crime and 45 per cent more likely to commit a future crime of any kind. Reality told a very different story: only 20 per cent of the people it predicted would commit violent crimes actually went on to do so.
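It is worth spelling out why those numbers can coexist: the 77 per cent figure is about how often the tool raised a flag, while the 20 per cent figure is about how often its flags came true. A toy calculation (hypothetical counts, not the actual Florida data) shows the difference.

```python
# Hypothetical counts, purely to illustrate the distinction --
# not Northpointe's or the analysts' actual figures.
flagged_as_violent = 500     # defendants predicted to commit a violent crime
went_on_to_reoffend = 100    # of those, how many actually did

precision = went_on_to_reoffend / flagged_as_violent
print(f"share of 'violent' predictions that came true: {precision:.0%}")  # 20%
```

A tool can flag one group far more often than another and still be wrong about most of the people it flags.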

However, if we're the ones making these machines, surely we can be the ones to fix them, right? You would think so, but experts have cast doubt on whether bias can be completely eliminated from algorithms without also stripping away their powers of interpretation. In the past, experts have called for an AI watchdog to be set up to ensure that people are not discriminated against by the secretive algorithms that technology companies often use, maintaining that there must always be an explicit part of an AI driven by moral ideas rather than ingrained prejudice.

However, this appears to be easier said than done. In pondering how to protect individuals from racist and sexist AI, one always seems to go around in a circle - because unless humankind changes, AI will stay largely the same. Until we as a society root out our racial and gender prejudices, the robots we create will most likely keep discriminating unjustly against certain people in one way or another.