The Ethics of AI: dealing with bias in algorithms

Written by Wayne Butterfield, Director, ISG

As more organisations explore the benefits of using artificial intelligence, we’ll inevitably see more examples of AI bias in action. Of course, this bias doesn’t really come from the AI itself; it comes from the people who program it.

We all have unconscious bias, formed by the environment we were raised in and the brain’s need to identify threats and opportunities quickly. But these biases also shape the opportunities other people have: some benefit from them, while others are put at a distinct disadvantage.

It’s not just the creation of AI algorithms we need to worry about. Executives should be aware that the data their organisations feed into AI or machine learning software will contain whatever bias is present in their business.

AI bias in action

Twitter recently started paying people to find flaws in its algorithms. In August 2021, it awarded a graduate student $3,500 for proving what Twitter users had been saying for a while – that its image-cropping algorithm was consistently cropping darker faces out of photos and highlighting lighter ones in the feed.

When users brought the issue to Twitter’s attention back in 2020, it responded that the team “did test for bias before shipping the model and did not find evidence of racial or gender bias in our testing. But it’s clear from these examples that we’ve got more analysis to do.”

More recently, a former Uber driver is taking the firm to court after its face-scanning system failed to recognise him – despite his having driven for Uber for five years – and blocked him from the app. Both he and the IWGB union are calling on Uber to get rid of its “racist algorithm”.

The truth is, the AI models your business uses reflect the success of its diversity, inclusion and equity efforts. They also reveal whether apparently successful diversity efforts are really just surface deep.

Humans are hard-wired to be biased

Our propensity for bias has helped us survive. We needed quick ways to tell the difference between safe and dangerous – knowing the shape and colour of poisonous mushrooms, for example, or recognising when someone was a stranger or part of a rival group, and therefore an unknown quantity.

Our brains still do this. As we struggle to deal with all the world is throwing at us, our brains continue to categorise and distinguish between threats and safety, generating responses like flight, fight, freeze, flop (disengaging) and friend (attaching) to keep us physically and mentally safe.

But when we’re the ones feeding data into an AI, the risk is that our biases – absorbed from everything from the way we were raised to what we see in the media – teach that intelligence to see the world the same way we do, sometimes in ways we’re not even aware of.

How do we remedy this?

The solution lies with us, rather than with the AI: we’re the ones who need to change. These ways of thinking developed to aid our individual survival, but they now harm both us and people in wider society.

We need to focus on four areas to tackle bias in AI.

1. Be aware of the problem

It’s easy for some of us to feel defensive when we’re accused of being biased for or against something or someone. “I’m not like that!” is an understandable response, because much of this bias is unconscious. We don’t mean to harm others, but we have to acknowledge that harm is often the result of our thinking.

Once we’re aware of it, the challenge is not to shut that line of thinking down but to act on it – to identify and eliminate bias from our work.

We can remove unconscious bias from the AI development process, and when we do, it can produce powerful results. One example is the medical AI developed to identify and score knee pain, which accurately identifies pain experienced by Black patients (who traditionally are more likely to have their pain under-estimated by healthcare workers) and by people who speak English as a second language. In this case, the AI became an advocate for patients because it was trained to learn from them and their experiences, not from a doctor’s bias.
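To make that design choice concrete, here is a minimal sketch in Python, on purely synthetic data, of how the label a model is trained on determines whose judgement it inherits. Everything in it – the feature layout, the group flag, the size of the bias – is an invented assumption for illustration, not the actual medical system or its data.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 4000
    group = rng.integers(0, 2, size=n)  # synthetic demographic flag

    # Stand-ins for image-derived features. Feature 0 represents a genuine
    # source of pain that is more common in group 1 but missing from the
    # clinician's grading rubric.
    X = rng.normal(size=(n, 8))
    X[:, 0] += group
    true_pain = 2.0 * X[:, 0] + X[:, 1:] @ rng.normal(size=7)

    labels = {
        "clinician grade": true_pain - 2.0 * X[:, 0],  # rubric misses feature 0
        "patient-reported pain": true_pain,            # patients' own reports
    }

    train, test = slice(0, 3000), slice(3000, None)
    g_test = group[test]
    for name, y in labels.items():
        model = LinearRegression().fit(X[train], y[train])
        gap = true_pain[test] - model.predict(X[test])  # positive = under-predicted
        print(f"trained on {name}: "
              f"group 0 gap = {gap[g_test == 0].mean():+.2f}, "
              f"group 1 gap = {gap[g_test == 1].mean():+.2f}")

Trained on the clinician-style label, the model systematically under-predicts group 1’s pain; trained on patients’ own reports, the gap disappears for both groups.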

2. Build diverse teams

To get an AI that’s as free from bias as it can be, you need a diverse team programming it.

For example, while a recruitment team may make decisions based on its members’ biases, a recruitment tool that collects and scans CVs can make thousands of additional recruitment decisions. If this tool has been programmed by (for example) an all-white, middle-aged group of men, chances are it will be biased in favour of CVs that fit that pattern. Other CVs then get rejected before a human has a chance to review them and imagine what a different perspective could bring to the role.
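A minimal synthetic sketch in Python shows how this failure mode works – and why simply deleting the protected attribute from the data doesn’t fix it. The feature names, group flag and bias strengths are all invented for illustration; this is not any real recruitment product.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 10_000
    group = rng.integers(0, 2, size=n)                # protected attribute
    skill = rng.normal(size=n)                        # what we want to hire on
    proxy = skill + 1.5 * group + rng.normal(size=n)  # CV trait correlated with group

    # Historical decisions favoured group 1 regardless of skill.
    past_hired = (skill + 1.0 * group + rng.normal(scale=0.5, size=n)) > 1.0

    # The protected attribute is "removed" from the inputs, but the proxy stays.
    X = np.column_stack([skill, proxy])
    model = LogisticRegression().fit(X, past_hired)

    selected = model.predict(X)
    for g in (0, 1):
        print(f"group {g}: selection rate = {selected[group == g].mean():.1%}")

Run it and one group’s selection rate comes out far below the other’s, even though the protected attribute was never a model input: the model has learned the historical bias through the proxy.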

Until we acknowledge, and start working to counter, our unconscious biases, we’ll continue to create tools, programs and algorithms that make it harder to build diverse workforces.

3. Stay informed

Open-source lists like the “Awful AI” database, curated by AI researcher David Dao, keep track of issues with AI and help to raise awareness of systemic biases. Lists like this help hold businesses to account and encourage developers to work on ways to eliminate bias in artificial intelligence.

Keeping an eye on lists like these will help to keep bias in AI at the forefront of your mind.

4. Don’t forget to work on AI bias testing

AI bias testing is crucial, but for it to be effective, you also need to focus on your process and data governance.

Again, it’s about acknowledging that unconscious bias is a problem and asking what changes you can make to improve the situation – such as making changes to the sort of data you collect and feed into the AI.
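As a starting point, here is a minimal sketch in Python of one common check: comparing selection rates between groups and applying the “four-fifths” rule of thumb drawn from US employment guidance. The example data is invented, and a real bias-testing programme would track many more metrics – error rates, calibration and so on – across many more groups.

    def disparate_impact_ratio(selected, group):
        """Ratio of the lowest group's selection rate to the highest's."""
        rates = {}
        for g in set(group):
            outcomes = [s for s, gg in zip(selected, group) if gg == g]
            rates[g] = sum(outcomes) / len(outcomes)
        return min(rates.values()) / max(rates.values()), rates

    # Hypothetical outcomes of a screening model on two groups.
    selected = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1]
    group = ["a"] * 6 + ["b"] * 6
    ratio, rates = disparate_impact_ratio(selected, group)
    print(rates)  # per-group selection rates
    flag = "  (below 0.8: investigate)" if ratio < 0.8 else ""
    print(f"impact ratio = {ratio:.2f}{flag}")

A ratio below 0.8 doesn’t prove discrimination, but it is a widely used signal that a system’s outcomes deserve a closer look.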

Eliminating bias from AI can be a challenging process, but if your business truly values diversity and inclusion and wants to make a positive difference in the world, it’s a journey worth making.