As we say goodbye to a decade of major technological innovation, it seems only right to reflect on AI’s latest rise to prominence. In the last year alone, AI adoption has tripled. We’ve seen the technology integrated into everything from risk assessment to customer service and even border control – but not all of it has been a success. In fact, public perception of AI remains one of suspicion: a recent Ipsos study found that 53% of people are not comfortable with AI making decisions that affect them.
A key barrier to mass acceptance of AI in 2019 has been the proliferation of biased or ‘unethical’ automated decision-making. Reports of ‘biased AI’ at Apple and even in the Home Office have created the perception that the technology cannot be trusted. Despite this, there are a number of AI-for-good initiatives already in motion.
In 2020, AI adoption by organisations across most sectors is likely to become even more commonplace. But to move past the biased or unethical AI that is creating so much public unease, a number of fundamental changes need to take place:
1. Humans need to be put back at the centre of decision-making
In 2019 we saw a number of cases in which machine learning systems developed irrational prejudices. Rather than relying solely on data-derived algorithms to make the right decisions, organisations need to put humans back into the loop. Domain expertise is key, and human-centric, rules-based AI is proving able not only to make complex decisions, but also to provide an audit trail for every decision made. This is possible because such systems can explain their decisions in human terms. Explainable AI leaves an audit trail that humans can easily interpret – a particularly important capability given that accountability will be under the spotlight from consumers and regulators alike in 2020.
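To make the idea of an auditable, rules-based decision more concrete, here is a minimal, hypothetical sketch. The rule names, thresholds, and applicant fields are illustrative only and not drawn from any real system: each rule is a named, human-readable check encoded by a domain expert, and every evaluation is logged so the final decision can be traced back to the rules that produced it.

```python
# Minimal, hypothetical sketch of a rules-based decision with an audit trail.
# Rule names, thresholds, and applicant fields are illustrative only.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rule:
    name: str                      # human-readable description of the check
    check: Callable[[dict], bool]  # the check itself, written by a domain expert

@dataclass
class Decision:
    approved: bool
    audit_trail: list = field(default_factory=list)  # one entry per rule evaluated

def decide(applicant: dict, rules: list[Rule]) -> Decision:
    trail = []
    approved = True
    for rule in rules:
        passed = rule.check(applicant)
        trail.append({"rule": rule.name, "passed": passed})
        approved = approved and passed
    return Decision(approved=approved, audit_trail=trail)

# Example rules an expert might encode explicitly, rather than learn from data.
rules = [
    Rule("Applicant is 18 or older", lambda a: a["age"] >= 18),
    Rule("Debt-to-income ratio below 40%", lambda a: a["debt"] / a["income"] < 0.40),
]

decision = decide({"age": 34, "income": 48_000, "debt": 12_000}, rules)
print(decision.approved)     # True
for entry in decision.audit_trail:
    print(entry)             # each rule and its outcome, readable by a human
```

Because the audit trail is expressed in the same terms the expert used to write the rules, a consumer or regulator can ask "why was this decision made?" and receive an answer a person can check.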
2. Data privacy and transparency will become a focal point for consumers, so organisations need to be prepared for scrutiny
Transparency engenders trust. Organisations need to provide a clear, human-centric rationale to the consumer if their automated decisions are to be trusted. This will be especially true if consumers are expected to give permission for their data to be used to drive such outcomes. More people are starting to understand how their data is being used, which in 2020 will mean a greater backlash against organisations or governments that misuse consumer data. Organisations that intend to use data in this way need to be more transparent and engage in open debate about individuals’ roles in data privacy.
While this is likely to mean the public shares less data to begin with, that isn’t to say that big data dreams will be lost. Rather, organisations will need to better understand data sharing and engagement habits amongst different groups of people, so that they can develop strategies to improve the way data sharing and privacy are communicated to these different demographics.
3. Legislation must catch up to tackle the issues we’ve seen in 2019
Governments are starting to understand that we need stronger and better regulation around AI. It is vital that they start taking action to develop new frameworks so organisations can innovate in a way that is impactful and progressive but also responsible. Stronger regulation allows organisations and individuals alike to understand the way data should be used and shared responsibly.
At a government level, it will be interesting to see the discussions coming out of groups such as the Centre for Data Ethics and Innovation and what will be done to tackle the challenges we’ve all seen play out over the last year.
4. Organisations should start realising the benefits of AI in back-end processes
We’ve seen many AI projects focus on making customer interaction smoother; a large proportion of AI adoption has so far been front-end focused. However, organisations sit on vast amounts of data that, when combined with human-centric machine intelligence, can benefit ‘behind the scenes’ decision-making. New benefits can be derived from automating high-value, predominantly transactional decisions, such as risk analysis or governance checks, by scaling a company’s domain expertise. More companies need to realise that the real value of AI lies in these middle- or back-end processes.
To improve the front-end, in 2020 more emphasis needs to be put on using data in the middle- or back-office to proactively solve problems before they ever reach the consumer. For example, there is currently a push in banks to streamline the process of logging and addressing fraud claims using AI. However, if a credit card provider can use its expertise to spot and handle fraudulent transactions before they affect the customer experience, friction will be reduced many times over.
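As an illustration of how back-office domain expertise might be encoded, here is a minimal, hypothetical sketch; the field names and thresholds are illustrative assumptions, not drawn from any real provider. A few expert-written checks score each transaction as it arrives, so suspicious ones can be held for review before the customer ever sees a problem.

```python
# Hypothetical sketch of encoding fraud-team expertise as back-office checks.
# Field names and thresholds are illustrative, not from any real provider.

from datetime import datetime, timedelta

def is_suspicious(txn: dict, recent: list[dict]) -> tuple[bool, list[str]]:
    """Return a flag plus human-readable reasons, so the decision is auditable."""
    reasons = []

    # Expert rule: unusually large amount compared with the card's recent history.
    if recent and txn["amount"] > 5 * max(t["amount"] for t in recent):
        reasons.append("amount far above recent spending")

    # Expert rule: a burst of transactions in a short window suggests a compromised card.
    window_start = txn["time"] - timedelta(minutes=10)
    burst = [t for t in recent if t["time"] >= window_start]
    if len(burst) >= 5:
        reasons.append("burst of transactions within ten minutes")

    return (len(reasons) > 0, reasons)

# Example: hold the transaction for review before the customer feels any friction.
history = [{"amount": 20.0, "time": datetime(2020, 1, 6, 9, 0)},
           {"amount": 35.0, "time": datetime(2020, 1, 6, 9, 30)}]
new_txn = {"amount": 900.0, "time": datetime(2020, 1, 6, 9, 45)}

flagged, why = is_suspicious(new_txn, history)
if flagged:
    print("Hold for review:", "; ".join(why))
```

The point is not the specific rules but the placement: the decision happens in the back office, where the company’s domain expertise and data already live, so the customer-facing experience stays smooth by default.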
Success in 2020
AI will continue to boost innovation across most sectors in 2020, but to be successful, organisations need to think carefully about the data they have and what they can (and should) do with it. While it’s easy to jump into adopting data-driven solutions that seem to solve an immediate problem, more care needs to be taken to ensure platforms are transparent and can truly benefit both the company and the customer. We’ll start to see a shift towards more conscious uses of AI, but there is still a battle to be had to find the right balance between AI decision-making and human intervention.
Key to this will be companies embracing transparency, both in the solutions they adopt and in their own use of data. Only when automated decision-making can be understood by all can we really move forward and start to fulfil the true potential of the technology.