Uncategorised · 4 min read
Published 14:12 28 Jan 2018 GMT
[Hero image: robot walking (http://cdn.junglecreations.com/wp/junglecms/2018/01/robot-walking-compressor.jpg)]
Joanna Bryson, a computer scientist at the University of Bath and a co-author of the study, was keen to make clear that these disturbing trends were the fault of humans, not of AI. She said: “A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.”

Her view is reinforced by the fact that these are exactly the same biases found in implicit association tests, in which people in the US and the UK were asked to match words to faces and tended to pair pleasant words with white faces. One earlier study also showed that an identical CV is 50 per cent more likely to result in an interview invitation if the candidate’s name is European American rather than African American.

Multiple studies have now found that the technology we create can discriminate against minorities. Perhaps the most worrying example is the Correctional Offender Management Profiling for Alternative Sanctions tool, which plays a role in determining when criminal defendants should be released across the US. When Northpointe, the company that created it, ran a risk assessment comparing 10,000 Floridian inmates in 2016, black people were almost twice as likely as white people to be scored as “higher risk” of committing another crime: they were rated 77 per cent more likely to commit a violent crime in the future and 45 per cent more likely to commit any kind of crime. In reality, only 20 per cent of the people predicted to commit violent crimes actually went on to do so.

However, if we’re the ones making these machines, surely we can be the ones to fix them, right?
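To see how an AI can “learn” an implicit association, consider a minimal sketch of the kind of measurement such studies use: comparing how close word vectors sit to “pleasant” versus “unpleasant” terms. The vectors below are hand-picked toy values purely for illustration; a real study derives them from embeddings trained on billions of words of text, and the group labels here are hypothetical placeholders.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 3-d "embeddings" (hypothetical values, for illustration only).
vectors = {
    "pleasant":   np.array([0.9, 0.1, 0.0]),
    "unpleasant": np.array([0.1, 0.9, 0.0]),
    "name_a":     np.array([0.8, 0.2, 0.1]),  # placeholder for one name group
    "name_b":     np.array([0.2, 0.8, 0.1]),  # placeholder for another name group
}

def association(word):
    """Positive: leans toward 'pleasant'; negative: toward 'unpleasant'."""
    return (cosine(vectors[word], vectors["pleasant"])
            - cosine(vectors[word], vectors["unpleasant"]))

print(association("name_a"))  # positive score
print(association("name_b"))  # negative score
```

The point of the sketch is that nothing in the arithmetic is prejudiced: the skew comes entirely from where the training data placed the vectors, which is exactly Bryson’s argument.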
You would think so. However, experts have cast doubt on whether bias can be completely eliminated from algorithms without stripping away their powers of interpretation too. In the past, experts have called for an AI watchdog to ensure that people are not discriminated against by the secretive algorithms that technology companies often use, arguing that there must always be an explicit part of an AI driven by moral principles rather than ingrained prejudice. This, however, appears to be easier said than done. In pondering how to protect individuals from racist and sexist AI, one always ends up going in a circle: unless humankind changes, AI will stay largely the same. Until we as a society root out our racial and gender prejudices, the machines we create will most likely always discriminate unjustly against certain people in one way or another.