Artificial intelligence may be set to transform workplaces, but in a new report the Confederation of British Industry (CBI) is asking employers to consider ethics when implementing AI, and to put monitoring checks and balances in place to ensure that bias, sexism and racism do not creep in.
Early experiments with AI showed that self-learning algorithms need to be watched closely. Microsoft’s attempt at a self-learning chatbot in 2016 failed spectacularly when its bot ‘Tay’ turned sexist and racist within 24 hours of interacting with other users on social media, tweeting less than a day after launch: ‘bush did 9/11 and Hitler would have done a better job than the monkey we have got now. donald trump is the only hope we’ve got.’ Tay was taken offline and reprogrammed, but this time the corruption took only 20 minutes; swiftly shut down again, the bot has not been heard from since.
With AI set to enter the workplace, albeit a few years down the line, it will be vital for employers to monitor AI decisions and tackle ethical issues by updating governance processes, challenging unfair bias and ensuring customers understand when decisions are taken by AI and how their data is being used.
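The report does not prescribe what that monitoring should look like in practice. One plausible sketch, shown below, is an append-only audit log that records each automated decision alongside the model version, the inputs it saw and whether a human reviewed it, so that governance teams can later review, challenge or explain those decisions. The class and field names here are illustrative assumptions, not taken from the CBI report.

```python
# A minimal sketch of "monitoring AI decisions": every automated decision
# is recorded with enough context to be audited or explained later.
# The field names and the AuditLog class are illustrative assumptions.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Any

@dataclass
class DecisionRecord:
    model_version: str          # which model/version produced the decision
    inputs: dict[str, Any]      # the data the decision was based on
    outcome: str                # what the system decided
    automated: bool             # fully automated, or reviewed by a human?
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class AuditLog:
    """Append-only log that governance or compliance teams can review."""

    def __init__(self) -> None:
        self._records: list[DecisionRecord] = []

    def record(self, decision: DecisionRecord) -> None:
        self._records.append(decision)

    def automated_decisions(self) -> list[dict]:
        """Everything decided without a human in the loop."""
        return [asdict(r) for r in self._records if r.automated]

log = AuditLog()
log.record(DecisionRecord("credit-model-v3", {"income": 28000}, "declined", automated=True))
print(log.automated_decisions())
```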
The “AI: Ethics Into Practice” report suggests that involving diverse teams to work on AI offers the best route to prevent potential issues surrounding the way that it’s used.
Diverse teams “are more likely to spot problems in data and challenge assumptions that could lead to unfair bias being programmed into AI,” the CBI wrote.
It also called on companies to check the data being fed into AI systems for “historic prejudice against particular groups”, so that the AI does not perpetuate those biases.
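As a rough illustration of the kind of data check the CBI describes, the sketch below compares historical outcome rates across groups and flags any group whose rate falls below the common “four-fifths” rule of thumb. The hiring records, group labels and 80% threshold are assumptions made for illustration, not drawn from the report.

```python
# Illustrative audit of historical outcomes for unequal treatment of
# particular groups before the data is used to train an AI system.
# The records and the 80% "four-fifths" threshold are assumptions.

from collections import defaultdict

def selection_rates(records, group_key="gender", outcome_key="hired"):
    """Return the proportion of positive outcomes per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for record in records:
        group = record[group_key]
        totals[group] += 1
        positives[group] += int(record[outcome_key])
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_check(records, threshold=0.8, **keys):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate (the common four-fifths rule of thumb)."""
    rates = selection_rates(records, **keys)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items() if rate / best < threshold}

if __name__ == "__main__":
    historical_hiring = [
        {"gender": "female", "hired": 1}, {"gender": "female", "hired": 0},
        {"gender": "female", "hired": 0}, {"gender": "female", "hired": 0},
        {"gender": "male", "hired": 1}, {"gender": "male", "hired": 1},
        {"gender": "male", "hired": 0}, {"gender": "male", "hired": 1},
    ]
    print(selection_rates(historical_hiring))        # {'female': 0.25, 'male': 0.75}
    print(disparate_impact_check(historical_hiring)) # flags 'female', well below 0.8
```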
“At a time of slowing global growth, AI could add billions of pounds to the UK economy and transform our stuttering productivity performance,” said Felicity Burch, CBI director of digital and innovation.
“The government has set the right tone by establishing the Centre for Data Ethics & Innovation, but it’s up to business to put ethics into practice.”
Agata Nowakowska, AVP at eLearning specialist Skillsoft, commented on the news.
“Organisations need to make sure they are using AI technology responsibly, and this means recognising and preventing the potential for bias,” she stated. “While it’s alarming that AI can be trained to become racist or sexist, it’s not surprising. The workers creating these algorithms are predominantly white males, who are likely to programme their own subconscious bias about gender and race into the algorithms. It’s a sad fact, but it’s true, and it could become dangerous for society as a whole if it goes unchecked.”
Business leaders need to take action now. By encouraging a more diverse group of people into STEM fields, organisations may be able to redress the balance before it is too late. The latest figures show that women, for example, make up just 14.4% of the STEM workforce, nowhere near parity, which suggests, unfortunately, that things are likely to get worse before they get better.
When it comes to AI, businesses that prioritise fairness and inclusion are more likely to create algorithms that make better decisions. That isn’t just good for ethical AI; it’s better for business.