GDPR didn’t mean the end of business software, and neither will the EU’s proposed ‘GDPR for AI’ act

Written by Mark Bakker, Regional Lead Benelux, H2O.ai

The European Union (EU) is proposing the first ever legal framework on Artificial Intelligence (AI): a regulatory framework that would lay down harmonised rules which, it believes, address the technology's inherent risks.

It is very early days, but the positioning is strong: Brussels argues that the EU must "act as one" to harness the "many opportunities and address challenges of AI in a future-proof manner", and that the framework should promote the development of AI while managing the "potential high risks" it poses to safety and fundamental rights.

As a supporter of AI who works with European public and private sector organisations to maximise its value, I have strong views on this development. Overall, I see a lot of merit in the idea, but I do understand that some might see it as an inhibitor to progress. After all, let's be honest: I have yet to see a regulation in which companies did not find some inconvenience.

Firstly, I think the best comparison for an EU-led set of AI regulations is GDPR, the General Data Protection Regulation. Just as GDPR quickly established itself as not only the European but in many ways the global benchmark framework for data, this act could do the same for AI.

The growing importance of fully ‘Responsible AI’

Investing in what we call 'Responsible AI' right now is critical. It boils down to a matter of trust. As it stands, there isn't a lot of trust in AI in the social mainstream, because people don't really understand what it is doing beyond being a black box that seems to do something interesting. That is a little scary for people, and I see it in my job every day. There are documented cases of AI being discriminatory, leading to real-life disadvantages in recruitment, for example.

There is also a problem from a sales perspective: the technology industry needs to explain much more clearly the steps it takes, in order to improve its chances of acceptance and adoption.

On the business side, organisations absolutely see the potential of AI: they can develop a better understanding of their customers, cut costs, and deliver new and innovative ways of working. But they don't always trust what their data scientists tell them to do, because there is no obligation to explain what the AI is doing.

This is the problem the EU regulations could potentially address, and interestingly, data is again at the heart of it, as it is for GDPR. You probably don't care much what use is made of your data in conventional business software applications: for example, a telco's marketing database set up to stop you switching (churn), or a bank offering you a better product (upsell). But you do care how your personal data is used in a much more sensitive, AI-powered application, such as one deciding whether you are eligible for certain kinds of medical care, insurance cover, or credit.

That's the challenge we have here in Europe: if it's your data in question, where is it going to be stored, what are the AI or cloud vendors going to do with it in the future, and how do you get them to remove it? The issue is ultimately about privacy, and this act could be a really useful extension of GDPR, under which we are already trying to secure the privacy of our own data. What we should thank the EU Commission for is this: if we can agree on the proper use and control of personal data in future AI applications, we will head off the biggest potential AI killer down the road, namely suspicion about the misuse of personal data.

Addressing the AI privacy issue in a responsible way could actually expand the market for AI and Machine Learning, and the AI community would absolutely welcome that. We would breathe a sigh of relief that guardrails are being put in place to help ensure AI is being used to make the world better, rather than potentially perpetuating its worst aspects. But it will probably take some cooperative community effort, including between businesses and suppliers, to identify which models are less trustworthy and which data sets should be flagged for discrimination, as sketched below. Flagging those, and ensuring they aren't used, would dramatically improve trust in the AI industry and encourage responsible AI adoption, which would help everyone.
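To make "flagging a data set for discrimination" a little more concrete, here is a minimal sketch of one widely used heuristic, the four-fifths (80%) disparate impact rule, which compares positive-outcome rates across groups. The function name, threshold, and data are my own illustrative choices; a real compliance check would involve many more metrics and legal context.

```python
from collections import defaultdict

def disparate_impact_check(groups, outcomes, threshold=0.8):
    """Flag groups whose positive-outcome rate falls below
    `threshold` times the highest group's rate (the 80% rule)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for g, y in zip(groups, outcomes):
        totals[g] += 1
        positives[g] += int(y)

    rates = {g: positives[g] / totals[g] for g in totals}
    best = max(rates.values())
    # A group is flagged if its rate is under 80% of the best group's rate.
    return {g: (rate, rate < threshold * best) for g, rate in rates.items()}

# Hypothetical recruitment data: group label and whether the
# candidate was shortlisted (1) or not (0).
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
outcomes = [ 1,   1,   1,   0,   1,   0,   0,   0 ]
print(disparate_impact_check(groups, outcomes))
# {'A': (0.75, False), 'B': (0.25, True)} -> group B would be flagged
```

Even a simple check like this, run routinely, gives businesses and suppliers a shared, auditable basis for the kind of cooperative flagging described above.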

A price worth paying to increase trust

But could a GDPR for AI inhibit innovation? That is an important question. Will it make it harder to develop an AI system because your models and your data will be regulated? After all, there will be compliance obligations to address: is a system low impact or high impact, and what are the requirements for carrying out impact assessments? But if that is the price of enabling organisations to deploy highly useful smart systems more easily and more safely, it will only increase trust, and AI will soon be more broadly accepted as a hugely valuable addition to both commerce and society.

Speaking for my company, we believe proper governance is really important. We already document everything we do very carefully, and we try to interpret and explain the governance of our data models. We hope that businesses invest in Explainable AI (AI that is transparent and uses customer data safely and in an easily accountable fashion) because it is the right thing to do for their customers, for society, and for themselves.
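As one small illustration of what "explainable" can mean in practice, here is a minimal sketch, assuming scikit-learn is available, of permutation importance: shuffling each input feature in turn and measuring how much the model's accuracy drops. It is only one technique among many (H2O.ai and others offer far richer explainability tooling), and the data here is synthetic.

```python
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a customer data set (5 anonymous features).
X, y = make_classification(n_samples=1000, n_features=5,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {imp:.3f}")
```

A report like this does not make a model fully transparent, but it is the kind of artefact that lets a business explain, and a regulator verify, what a model is actually relying on.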

So, overall, we welcome the EU's idea, and we think the AI industry should too; we need to make sure our expert voice is added to the conversation now, not later. I believe the bridge from today's relative confusion and lack of trust to true Explainable AI is now visible. And that's pretty cool.