Kerrie Heath, European Sales Director, AI, OpenText, discusses the need to balance harnessing the power of AI with the need for maintaining fairness and avoiding bias.
As we enter 2020, there is no doubt that artificial intelligence (AI) has great potential to help humans make fairer decisions. But its increasing use in sensitive areas across industries means the debate around bias continues to grow.
In all sectors, there has been a data explosion. Digital citizen services, Internet of Things (IoT) devices and enterprise applications collect huge amounts of data every day, at a far faster rate than ever before. Previously, organisations relied on a human workforce to collect, manage and process this information, but the sheer quantity of data involved means that unassisted human analysis is no longer viable. Instead, companies are turning to technology – such as AI – to plug the gaps.
This increased use of AI could bring many benefits to decision making, especially in reducing human bias. However, certain ethical questions must be considered as we begin to rely on machine- and AI-enabled decision making. This is particularly true for government departments and public bodies looking to automate functions, given their impact on both citizens and the wider economy.
So as we look at the year ahead, what practical steps can be taken to drive ethical, unbiased AI use?
Organisations need to build in processes and policies to prevent and address bias
In both the private and public sectors, organisations are recognising the growing need to develop strategies to mitigate bias in AI. With issues such as amplified prejudices in predictive crime mapping, organisations must build in checks for both AI technology itself and their people processes.
One of the most effective ways to do this is to ensure data samples are robust enough to minimise subjectivity and yield trustworthy insights. Data collection cannot be too selective and should be reflective of reality, not historical biases.
AI systems are built on data, meaning they will only be as objective and unbiased as the data put into them. If human bias is introduced into datasets, it will be reproduced in the outcomes of any application built on them.
The best way to prevent bias in AI systems is to embed ethical principles at the data collection phase. This starts with a sample of data large enough to yield trustworthy insights and reduce subjectivity. A robust system, able to collect and process rich, complex sets of both structured and unstructured data, is therefore required to produce accurate insights. Data collection principles should also be examined by teams comprising a variety of backgrounds, views and characteristics.
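One simple, concrete check at the collection stage is to compare the make-up of the collected sample against a known reference population. The sketch below is illustrative only, assuming hypothetical records with a `group` attribute and a hypothetical reference distribution:

```python
from collections import Counter

def representation_gap(records, group_key, reference):
    """Compare each group's share of a collected sample against a
    reference population distribution.

    Returns a dict mapping group -> (sample share - reference share);
    a large negative value flags an under-represented group."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {g: counts.get(g, 0) / total - ref for g, ref in reference.items()}

# Hypothetical sample skewed towards group "A"
sample = [{"group": "A"}] * 80 + [{"group": "B"}] * 20
gaps = representation_gap(sample, "group", {"A": 0.5, "B": 0.5})
# Group "B" is under-represented by 30 percentage points
```

A review team could run a check like this routinely and reject or augment any sample whose gaps exceed an agreed tolerance.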
Yet even this careful, preventive approach cannot fully protect against bias at all times. Results must therefore be monitored for signs of prejudice, and any notable correlation between outcomes and attributes such as race, sexuality, gender, religion or age should be investigated. If bias is detected, organisations can implement mitigation strategies, such as adjusting sample distributions.
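Outcome monitoring of this kind can be sketched as a simple fairness audit. The example below (all names, data and thresholds are hypothetical) computes the rate of positive decisions per group and the largest gap between groups, a quantity often called the demographic parity difference in fairness auditing:

```python
from collections import defaultdict

def positive_rate_by_group(decisions):
    """decisions: iterable of (group, outcome) pairs, outcome in {0, 1}.
    Returns the share of positive outcomes per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in positive-outcome rate between any two groups."""
    rates = positive_rate_by_group(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit: group "X" approved 75% of the time, "Y" only 25%
audit = [("X", 1)] * 3 + [("X", 0)] + [("Y", 1)] + [("Y", 0)] * 3
gap = parity_gap(audit)  # 0.75 - 0.25 = 0.5
```

If the gap exceeds an agreed tolerance, the finding would trigger investigation and, if warranted, mitigation such as rebalancing the training sample.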
Harness diverse teams for ethical AI
The teams responsible for identifying business cases and creating and deploying machine learning models should represent a rich blend of backgrounds, views, and characteristics.
Recently, the UK government became the first to pilot diversity regulations for staff working on AI in order to reduce the risk of sexist and racist computer programs. Drawn up by the World Economic Forum, these guidelines mean that teams commissioning the technology from private companies should “include people from different genders, ethnicities, socioeconomic backgrounds, disabilities and sexualities”.
This can only be described as a welcome step in the right direction towards ensuring the ethical implementation and use of AI technologies. Organisations should also test machines for biases, train AI models to identify bias, and consider appointing an HR or ethics specialist to collaborate with data scientists, thereby ensuring cultural values are being reflected in AI projects.
As technology continues to evolve, the next few years will see AI transform industry as more menial tasks are digitalised through AI and process automation. But this need not be a cause for fear: these changes will allow workforces to be more productive and efficient, as technology takes some of the day-to-day strain off employees.
The growing concerns around AI bias should, and must, be addressed head on. Ultimately, AI systems are only as good as the data put into them, so ethical principles must be in place from the very start. If organisations prioritise a clear goal that aligns with their values, and routinely monitor the outcomes of their AI systems, industries affected by these changes will be able to reap the rewards of AI and automation without putting citizens at risk of bias.