Artificial intelligence (AI) is still a divisive topic. It offers organisations, and the world, huge opportunities to drive real change. But while the full capabilities of AI haven’t been realised yet, we should proceed with caution.
Businesses are now starting to consider different ways to use this intelligent technology to solve challenges, simplify everyday life and create efficiencies in ways previously unimagined. Its potential seems limitless, which is why we must also consider how it affects society, and the questions of morals and risk it raises.
So, how can organisations give customers confidence in trusting brands with AI implementations?
Transparency is key
Businesses can’t just deploy AI technologies and leave them. They must be responsible at the time of deployment and accountable throughout the entire lifecycle.
If a business doesn’t understand how its algorithms make decisions, it allows them to dictate reality for its customers. While it can be exceedingly challenging to determine how a mathematical model reaches a certain conclusion, particularly with deep learning, businesses have to find a way to explain the “why” behind the decisions reached with AI and make sure the datasets they use are free of bias.
Organisations need strong guardrails in place to keep up with, and stay in control of, models that continuously self-learn and evolve at scale. It’s important to take ownership of the outcomes and make sure the AI produces the intended results. As a business, you don’t want to be exposed when the outcome of an interaction or decision is unethical or unfair; blaming the algorithm or the data is an indefensible position. Not only do businesses need to be able to explain the algorithm, they also need a way to understand how a model evolves over time, rather than treating it as a black box that runs without human intervention.
Transparency is also the way to mitigate bias, especially for models that are difficult to explain. The key is to give customers a view into the data that led to the decision in the first place: supply them with basic insight into the factors that drive algorithmic decisions, and how those inputs are analysed.
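As a concrete illustration of surfacing those factors, here is a minimal sketch assuming a simple linear scoring model. The feature names and weights are hypothetical, not a real production system; more complex models would need dedicated explainability tooling, but the customer-facing idea is the same: show which inputs pushed the decision, and by how much.

```python
# A minimal sketch of explaining a scored decision by showing per-feature
# contributions. Feature names and weights below are hypothetical.

def explain_decision(features, weights):
    """Return the final score and per-feature contributions, largest first."""
    contributions = {name: features[name] * weights[name] for name in weights}
    score = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

# Hypothetical loan-style applicant and learned weights.
applicant = {"income": 0.8, "tenure_years": 0.4, "missed_payments": 0.2}
weights = {"income": 1.5, "tenure_years": 0.9, "missed_payments": -2.0}

score, factors = explain_decision(applicant, weights)
print(f"score = {score:.2f}")
for name, contribution in factors:
    print(f"  {name}: {contribution:+.2f}")  # which inputs drove the outcome
```

Even this basic breakdown gives a customer the “why” in plain terms: income raised the score, missed payments lowered it.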
Don’t ignore the data
As more artificial intelligence use cases are developed globally, the need for data will only increase. However, the risk in assembling this data lies in the potential for disruption and discrimination. Algorithms base their decisions on patterns in past preferences and behaviour unless you’ve done some work to prevent it. If you don’t use the right data, or all of the data, you may introduce a bias that could lead to discrimination.
Even seemingly innocuous sources of bias should be suspect and subject to scrutiny. The most well-intentioned companies should exercise due diligence when basing actions on the output of machine learning algorithms.
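One form that due diligence can take is a simple outcome audit. The sketch below, using hypothetical records, checks a single fairness signal: whether approval rates differ sharply across a sensitive attribute (sometimes called demographic parity). A real audit would test several metrics and involve domain experts, but even a basic check like this can flag a model for review.

```python
# A minimal sketch of one bias check: compare approval rates across groups.
# The records and the "group" attribute are hypothetical examples.

def approval_rates(records, group_key="group", outcome_key="approved"):
    """Approval rate per group, plus the largest gap between any two groups."""
    totals, approved = {}, {}
    for record in records:
        group = record[group_key]
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(record[outcome_key])
    rates = {g: approved[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

rates, gap = approval_rates(decisions)
print(rates, f"gap = {gap:.2f}")  # a large gap is a signal to investigate
```

A wide gap doesn’t prove discrimination on its own, but it is exactly the kind of signal that should trigger scrutiny before the model’s output drives action.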
Ultimately, customers have the right to know how much of their data is being stored and how it is used to make decisions and recommendations. Businesses therefore can’t just deploy AI and walk away; they must remain accountable throughout its lifecycle. Organisations need to stay tuned to what is happening, why it’s happening and the impact the algorithm’s outcomes have on people, and to make sure they aren’t inadvertently introducing new biases. The message is clear: don’t ignore the data, as doing so could lead to serious repercussions.
Reassurance with GDPR
GDPR appears to create a large hurdle for AI implementation, but it’s also an opportunity to ensure that AI maximises the privacy of the individuals involved. More data means not just collecting it in more places but holding it for longer periods of time. However, GDPR stipulates that data must not be held longer than needed for its identified purpose.
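In practice, honouring that storage-limitation principle means regularly checking held data against the retention period for its stated purpose. Here is a minimal sketch of such a check; the purposes and retention periods are hypothetical illustrations, not legal guidance, and real systems would tie them to a documented data-processing register.

```python
# A minimal sketch of a data-retention check: flag records held longer
# than the retention period for their stated purpose. Purposes and
# periods below are hypothetical, not legal guidance.
from datetime import date, timedelta

# Hypothetical retention periods per processing purpose.
RETENTION = {
    "support_ticket": timedelta(days=365),
    "marketing": timedelta(days=90),
}

def overdue_for_deletion(records, today):
    """Return ids of records kept beyond the retention period for their purpose."""
    return [r["id"] for r in records
            if today - r["collected"] > RETENTION[r["purpose"]]]

records = [
    {"id": 1, "purpose": "marketing", "collected": date(2024, 1, 1)},
    {"id": 2, "purpose": "support_ticket", "collected": date(2024, 1, 1)},
]

overdue = overdue_for_deletion(records, today=date(2024, 6, 1))
print(overdue)  # → [1]: the marketing record has exceeded its 90-day period
```

Running a check like this on a schedule turns a compliance obligation into a routine, auditable process rather than a one-off cleanup.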
GDPR will eventually build public trust in AI if it’s embraced by the companies that process data. Transparent use of AI, with a focus on privacy and security, will make it much more palatable to a general public that has grown cautious of data breaches and AI. Data privacy regulations continue to evolve, but GDPR is here to stay and will be key to regulating AI for a long time to come.
The advent and maturing of AI is a great opportunity for legal teams to consider it from both a legal and an ethical standpoint. The goal of organisations should be to define best practices and create industry-leading standards. Once those standards are in place, it’s important that businesses adhere to them.
Government regulation will also go further in enforcing explainable and transparent AI. GDPR already mandates a “right to explanation” for high-stakes automated decisions, one of the first legal steps toward more ethical AI. However, organisations can’t rest on their laurels. They should take the lead and implement what they consider to be industry best practice, rather than just waiting for legislation to drive action. To paraphrase Voltaire, perfect compliance can be the enemy of the good.
With AI potential comes responsibility
This is only the beginning of what AI can do. However, with this potential comes immense responsibility for businesses to use AI in the most ethical and unbiased ways possible. Fortunately, our economies are progressively adapting to AI through partnerships and regulations that ensure the sound and ethical deployment and use of AI-powered applications. It’s essential, however, that brands using AI engage with the broader debate and help to frame how these topics should be tackled across regulation and rules, to build customer trust in the technology.
Only by taking every stakeholder, internal and external, along for the journey can we build a trustworthy experience with AI.
Please visit: https://www.genesys.com/en-gb