Rapid advances in AI require compliance with high ethical standards, for both legal and moral reasons.
During a session at this year’s AI & Big Data Expo Europe, a panel of experts weighed in on what companies need to consider before deploying artificial intelligence.
The panel participants were:
- Moderator: Frans van Bruggen, Policy Officer for AI and FinTech at De Nederlandsche Bank (DNB)
- Aoibhinn Reddington, Artificial Intelligence consultant at Deloitte
- Sabiha Majumder, Model Validator – Innovation & Projects at ABN AMRO Bank NV
- Laura De Boel, Partner at Wilson Sonsini Goodrich & Rosati
The first question prompted reflection on current and upcoming regulations affecting AI deployments. As the lawyer on the panel, De Boel offered her opinion first.
De Boel highlights the EU’s forthcoming AI law, which builds on the foundations laid by legislation such as the GDPR but extends them to artificial intelligence.
“I think it makes sense that the EU wants to regulate AI, and I think it makes sense that it focuses on the highest-risk AI systems,” says De Boel. “I just have a couple of concerns.”
De Boel’s first concern is how complex it will be for lawyers like her.
“The AI law creates many different responsibilities for different actors. There are providers of AI systems, users of AI systems, importers of AI systems into the EU – they all have responsibilities and lawyers have to clarify that,” explains De Boel.
The second concern is how costly this will all be for businesses.
“One concern I have is that all of these responsibilities will be onerous for businesses and involve a lot of bureaucracy. That will be costly – costly for SMEs and costly for startups.”
Similar concerns have been raised in relation to the GDPR. Critics argue that over-regulation is driving innovation, investment, and jobs out of Europe, leaving countries like the US and China to take the lead.
Peter Wright, Solicitor and MD of Digital Law UK, once told AI News about the GDPR: “You have your Silicon Valley startup that has access to large amounts of money from investors, has access to expertise in this area, and is not going to fight with one arm tied behind its back like a competitor in Europe.”
De Boel’s concerns align with Wright’s, and it’s true that such rules will have a bigger impact on startups and smaller companies that are already struggling against established industry titans.
De Boel’s final concern on the matter is enforcement and how the AI law goes beyond the already severe GDPR penalties for non-compliance.
“The AI law essentially copies GDPR enforcement but imposes even larger fines of up to €30 million, or six percent of annual revenue. So these are really big fines,” comments De Boel.
“And we have seen with the GDPR that when you give regulators those kinds of powers, they are used.”
Other laws apply outside of Europe. In the United States, rules on matters such as biometric recognition can vary widely from state to state. China, meanwhile, recently introduced a law requiring companies to give consumers the ability to opt out of things like personalized advertising.
Keeping up with all of the ever-changing laws around the world that can impact your AI deployments will be a difficult task, but failure to do so can result in severe penalties.
The financial sector is already highly regulated and has used statistical models for things like lending for decades. Industry is now increasingly using AI for decision-making, which brings both great benefits and significant risks.
“The EU requires auditing of all high-risk AI systems in all sectors, but the problem with external auditing is that there might be internal data, decisions or confidential information that cannot be shared with an external party,” explains Majumder.
Majumder goes on to explain that it is therefore important to have a second line of defense within the organization: one that looks at models from an independent, risk management perspective.
“So there are three lines of defense: first, in the development of the model; second, independent evaluation through risk management; and third, the auditors and regulators,” concludes Majumder.
Of course, if AI always made the right decisions, everything would be great. When it doesn’t, the result can be serious damage.
The EU is keen to ban AI used for “unacceptable” risk purposes that can endanger people’s safety, livelihoods, and rights. Three other categories (high risk, limited risk, and minimal/no risk) are permitted, with the number of legal obligations decreasing as you move down the scale.
“We all agree that transparency is really important, right? Because let me ask you a question: if you apply for some type of service and get rejected, what do you want to know? Why am I being denied service?” says Reddington.
“If you’re denied service by an algorithm that can’t give you a reason, how do you react?”
There is a growing consensus that XAI (Explainable AI) should be used in decision-making so that the reasons behind an outcome are always understandable. But van Bruggen cautions that transparency may not always be a good thing: you don’t want to tell a terrorist or someone accused of a financial crime why they were denied a loan, for example.
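To make the idea of explainability concrete, here is a minimal, hypothetical sketch in Python of how an explainable decision might be surfaced: a simple linear scoring model reports per-feature contributions alongside its verdict. The features, weights, and threshold are invented for illustration and are not drawn from the panel discussion or any panelist’s organization.

```python
# Hypothetical sketch: explaining a single automated decision.
# The model, feature names, weights, and threshold are invented for illustration only.

FEATURE_WEIGHTS = {
    "income_to_debt_ratio": 2.5,
    "months_since_last_default": 0.05,
    "utility_bills_paid_on_time": 1.2,
}
APPROVAL_THRESHOLD = 4.0  # assumed cut-off score


def explain_decision(applicant: dict) -> None:
    # Per-feature contribution = weight * applicant value (a simple linear model)
    contributions = {
        name: FEATURE_WEIGHTS[name] * applicant[name] for name in FEATURE_WEIGHTS
    }
    score = sum(contributions.values())
    verdict = "approved" if score >= APPROVAL_THRESHOLD else "denied"

    print(f"Score: {score:.2f} -> {verdict}")
    # List contributions from weakest to strongest, so a denied applicant
    # can see which factors hurt their score the most.
    for name, value in sorted(contributions.items(), key=lambda kv: kv[1]):
        print(f"  {name}: contributed {value:+.2f}")


explain_decision({
    "income_to_debt_ratio": 0.8,
    "months_since_last_default": 6,
    "utility_bills_paid_on_time": 0.9,
})
```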
Reddington believes such limits on transparency are no reason to take humans out of the loop. The industry is a long way from that level of AI anyway, but even if it were available, there are ethical reasons why we shouldn’t eliminate human input and oversight entirely.
But AI can also increase fairness.
Majumder cites the example of her own field of finance, where historical data is often used to make decisions such as lending. People’s situations change over time, but they might struggle to get loans if judged on historical data alone.
“Rather than using historical credit ratings as input, we can use new types of data like mobile data, utility bills, or education, and AI has enabled us to do that,” explains Majumder.
Of course, using such relatively small datasets poses its own problems.
The panel offered some fascinating insights into the ethics of AI and the current and future regulatory environment. Like the AI industry in general, it’s fast-moving and difficult to keep up with, but keeping up is critical.
Find out more about upcoming events in the global AI & Big Data Expo series here.

Want to learn more about AI and Big Data from industry leaders? Check out the AI & Big Data Expo taking place in Amsterdam, California and London.
Find out about other upcoming enterprise technology events and webinars hosted by TechForge here.