Written by Jen Rodvold, Head of Digital Ethics & Tech for Good, Sopra Steria
The concept of digital ethics is no longer new. Leading academic institutions and think tanks have been raising ethical concerns around technology for decades, and the last several years have seen a plethora of guidelines for creating, testing and managing technology so it’s responsible and trustworthy.
But the last year has seen digital ethics hit the mainstream, with organisations in the private and public sectors focusing their attention, and increasingly their resources, on these matters. Four forces are currently transforming awareness of digital ethics: accelerated adoption, growing awareness, faltering trust and the prospect of regulation.
- Accelerated adoption
The seemingly overnight transformation of organisations during the pandemic is now well documented, with many shifting to enable remote working and offer digital products and services in order to survive.
Underneath this transformation lies an acceleration in the adoption of more advanced technologies such as automation and artificial intelligence (AI). Although data had already been proclaimed the ‘new oil’ in terms of its perceived value to business and our economy, organisations are now taking a more serious look at what that means for them, and are developing data strategies that could shift entire business models.
Without integrating digital ethics into this acceleration, ethical risks proliferate. The more data we use, and the more technology we incorporate without understanding the potential consequences, without sufficient testing, or without the right guardrails, the greater the potential for harm – to individuals, to society and to the reputations of the organisations themselves.
- Growing awareness
The benefits of using technology to improve our everyday personal and professional lives are clear. In fact, technology plays, and will continue to play, a prominent role in addressing some of the world’s biggest challenges, from climate change to providing better, more equitable healthcare. However, the public is now more aware of technology’s potential for unintended consequences, such as the amplification of misinformation, or bias and discrimination in digital services built on poor-quality data or faulty algorithms.
At the same time, more people are aware of how some digital business models work, making use of personal data in ways that consumers increasingly find worrying. A 2020 report from Doteveryone showed a five-percentage-point drop (from 25% to 20%) in the proportion of British people who felt they didn’t need to read the terms and conditions of digital products. Additionally, the proliferation of media stories reporting ethical failings caused by technology or data use has put these issues in the public consciousness, and the popularity of films such as The Social Dilemma and Coded Bias shows widespread interest in these topics.
- Faltering levels of trust
The annual Edelman Trust Barometer has highlighted a number of emerging concerns over the past few years, including a widening trust gap between the informed public and the general population, and a decline in public trust in the technology sector.
The aforementioned Doteveryone report also showed that half of respondents believed being cheated or harmed was simply part of being online, and that they did not believe technology providers created their products with people’s best interests in mind.
While levels of trust are clearly falling, the understanding that trust has real value to businesses and public sector organisations – and is a critical success factor in achieving their ambitions – is growing. One example can be found in a recent study published by the Open Data Institute and Frontier Economics, which shows a link between trust and people’s willingness to share their data – something many organisations would like their customers or service users to do in order to provide more or better services.
With public trust in a precarious position, organisations are starting to recognise the need to address digital ethics concerns.
- Prospects of regulation
While governments are still not keeping pace with the rate of change in tech and the accelerating adoption of increasingly advanced technologies such as AI, there are now signs these issues are on the political and legislative radar. For example, the EU has proposed regulation on AI, and the US is increasing its scrutiny of tech giants including Apple, Google, Facebook and Amazon. It is now likely that many parts of the western world will see regulation introduced in the next couple of years.
The fact that regulation is only on the horizon, but not imminent, is no excuse for organisations not to act. In light of all the factors described above – accelerated adoption leading to increased ethical risk, growing public awareness, and shaky public trust – there are plenty of reasons to integrate digital ethics into organisational strategies and governance immediately.
Moreover, working now to understand data and technology risks will prepare organisations for what’s ahead by equipping them with greater knowledge. This can then be applied to requirements we might see in the EU legislation (and any UK version of it), such as AI risk assessments and product labelling.
Signs of progress
These four forces are the drivers behind the signs of progress on digital ethics we’re now seeing. Organisations understand they need to build and maintain trust amongst employees, customers and other stakeholders, and that trust cannot be sustained if something goes wrong with the data or technology they are using. They are also starting to see ethical risks as business risks – risks that multiply as digital strategies advance.
While this is progress, most organisations are struggling to find ways to take practical action, and many don’t know where to start.
Where to go from here
Now that it’s clear digital ethics is a strategic concern and not just something abstract and theoretical, it’s time to take action. We recommend starting by identifying digital ethics risks and opportunities within your current digital programme, as well as in your future roadmap, by asking three critical questions:
- Have I evaluated the extent to which my digital programme supports or undermines my organisation’s other strategic aims? For example, can my employees with disabilities use our organisation’s technology in a way that aligns with our diversity and inclusion policy? Do my customers understand how we use their data, and have we verified that understanding so we can build trust and loyalty? Digital programmes often reflect only part of an organisation’s strategy, sometimes falling out of alignment with strategic goals such as employee engagement and brand image. Checking for misalignment will reveal digital ethics risks.
- Do I understand how the technology we use really works? Most business leaders don’t, or at least have gaps in their knowledge of the digital tools and products they use and create – and the more gaps, the greater the risk.
- Am I making the most of data and technology to deliver benefits to society and improve digital ethics? Digital ethics is not just about risk mitigation; it’s also about understanding how an ethical approach to technology can open opportunities to create a positive impact. For example, while organisations must avoid collecting unnecessary data, the data they do hold can help them provide better, fairer and more accessible services by revealing user demographics and needs.
By assessing your organisation against these three questions, you can get a high-level understanding of where challenges – and opportunities – may lie. It’s then possible to prioritise action against the areas of biggest concern, as well as build in the mechanisms needed to create better alignment between organisational and digital strategy.
The costs of inaction are simply too high, and the rewards for making progress are great, from preparing for regulation to improving user engagement and building stakeholder trust.