Who should define the ethics of artificial intelligence?

Recently, AI has been an unavoidable topic in cultural, political, and online conversations. There is no escaping it, whether the subject is new tools, Elon Musk digging into the technology and proposing a temporary halt to its development, or Italy banning ChatGPT across the country. The technology is new, and we think we understand it, but few actually do. And those who do understand it often disagree.

Indeed, with novel and rapidly evolving issues like this, regulation is usually interdisciplinary: it touches many areas, none of which lends itself to an immediate, straightforward, one-size-fits-all solution. In the case of AI, there are many variables to consider: decision-making, bias, privacy and, of course, governance. Before we come up with practical solutions and ask what regulatory policy should look like, we should ask who should write it.

What does that mean?

The question of who should be appointed to write it is quite contentious. As AI becomes an increasingly present element of our daily lives, there are many ways regulatory guidelines could be drawn up, and the discussion could go on for days. For this article, let’s consider two: a fully democratic process and careful curation by a task force of experts.

In the first case, people would have a direct say in how we limit the reach, development, and application of AI. On the one hand, democratization seems the most direct and fruitful approach: many pressing issues could be addressed through a public vote. The risk is that non-expert opinion steers outcomes that are suboptimal or overly politicized. On the other hand, having experts draft a code of ethics strikes some as the most reliable approach. The risk here is devolving power over such a fundamental tool to a small number of people, thereby forging a kind of “intellectual oligarchy”.

Education, governance and regulation

To try to understand the possible outcomes, Innovation Origins spoke to three experts who work closely with AI. The first is Dr. Sara Mancini, a senior manager at an Italian consultancy, who made the 2023 list of 100 Brilliant Women in AI Ethics. In her opinion, before we even talk about governance, there is an earlier step to consider when it comes to AI: education.

“The ethical question that concerns me the most is the interaction between humans and AI,” Mancini says. “We are heading towards a scenario where the skills required are not only technological but also critical thinking, even for those not involved in this field. Therefore, people need to get proper training in AI from a young age so that everyone can access the technology in a mindful way.”

Dr. Mancini believes this perspective could ultimately lead to productive use of artificial intelligence, leveling the playing field across society and avoiding a likely scenario in which only those with access to AI tools can understand and benefit from them. The approach would also foster a relationship with AI based on “augmentation” rather than “automation”: people would learn to use AI consciously, for support and facilitation in various areas, rather than letting it take over tasks entirely and becoming dependent on it.

In the short term, however, she believes that experts should draft regulatory guidelines. She explains that consultation methods are already being used in Europe and the United States, with companies and universities discussing regulation to address the urgency of the problem.

The business side of things

Since it is primarily companies that develop, control, and sell AI today, the discussion about regulation naturally has a strong impact on the business world. Sue Turner, OBE, Chief Executive of AI Governance Limited, knows more than a thing or two about running a successful business. She is also on the 100 Brilliant Women in AI Ethics list mentioned above. On the regulatory front, she believes in the need for a dual approach, both top-down and bottom-up. With that in mind, she says, we should consider the framework politicians are currently setting and how it fits with social norms that shift as people’s views of AI evolve. However, she says: “Right now the space is moving so fast that we are not being guided by the top-down approach nor seeing the impact of the bottom-up approach.”

Like Dr. Mancini, Turner sees education as a powerful way to bring people closer to understanding AI and therefore its ethics. In her view, it is also necessary for adults, especially those working at companies where AI is a core component of the business model. “[Many business leaders] don’t even know how to identify the kind of questions they should be asking. That’s the biggest challenge: How do you convey the knowledge to these executives about what they should consider?”

Transparency and decentralization

Some people, however, already have this knowledge and are thinking critically about how to use AI ethically. Buster Franken is CEO of FruitPunchAI, a challenge-based learning startup working in the field of competency accreditation. He has strong, informed opinions on governance and believes that decentralization is a viable answer to the problem.

“Moral and legal codes should be decided in a democratic manner, as is the case now. This is not AI-specific. In order to steer AI, the following things also need to happen: open-sourcing all models, and preferably the data, plus financial incentives for finding ‘flawed’ (biased) models. This will automatically spawn companies that specialize in testing models for bias. What certain laws mean in relation to AI can be determined by experts. It’s important,” he continues, “that they don’t have the leadership; that would become a technocracy.” In his opinion, then, companies would adhere to a national code of ethics while acting as mutual watchdogs, keeping one another in check, with everyone having an incentive – even a financial one – to advocate for ethical principles that ultimately benefit the public.
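To give a sense of what such third-party bias testing might look like in practice, here is a minimal sketch in Python. It computes a demographic parity gap, one of the simplest fairness metrics, on a model’s decisions; all the names, numbers, and the threshold are hypothetical illustrations, not part of any actual audit scheme Franken describes.

```python
# Minimal sketch of an external bias audit on an open-sourced model's outputs.
# The toy data and the flagging threshold are hypothetical illustrations.

def demographic_parity_gap(predictions, groups):
    """Largest difference in favourable-outcome rates between groups."""
    rates = {}
    for g in set(groups):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

# Toy decisions: 1 = favourable outcome (e.g., loan approved), 0 = not.
predictions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_gap(predictions, groups)
print(f"Demographic parity gap: {gap:.2f}")  # group a: 0.75, group b: 0.25

# A watchdog company might flag the model, and claim a bounty, if the gap
# exceeds an agreed threshold.
if gap > 0.2:
    print("Model flagged as potentially biased.")
```

A real audit would use richer metrics and real data, but the point stands: once models are open, checks like this can be run by anyone with an incentive to look.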

What is undeniable across these perspectives is that the field is changing, and changing rapidly. For regulation to keep up, we must first define who the regulators are, but we cannot afford to waste too much time on definitions.