by Juan F. Samaniego, Universitat Oberta de Catalunya
A robot must not injure a human or, through inaction, allow a human to be harmed. A robot must obey orders given to it by humans, unless such orders would conflict with the First Law. A robot must protect its own existence as long as that protection does not conflict with the First or Second Law.
The three laws of robotics were laid out by Isaac Asimov eighty years ago, long before artificial intelligence became a reality. But they perfectly illustrate how people have dealt with the ethical challenges of technology: by protecting users.
However, the ethical challenges facing humanity, whether related to technology or not, are not really a technological problem but a social one. Therefore, technology in general, and artificial intelligence in particular, could be used to empower users and help us move toward a more ethically desirable world. In other words, we can rethink the way we design technology and artificial intelligence and draw on them to build a more ethical society.
This is the approach advocated by Joan Casas-Roma, a researcher in the SmartLearn group of the Faculty of Computer Science, Multimedia and Telecommunications at the Universitat Oberta de Catalunya (UOC), in his open-access book Ethical Idealism, Technology and Practice: a Manifesto, published in Philosophy & Technology. To understand how this paradigm shift can be implemented, we need to go back in time a bit.
Artificial intelligence is objective, right?
When Asimov first laid out his laws of robotics, the world was a very low-technology place compared to today. It was 1942, and Alan Turing had only recently formalized the algorithmic concepts that, decades later, would be key to the development of modern computers. There were no computers and no internet, let alone artificial intelligence or autonomous robots. But Asimov already anticipated the fear that humans might succeed in making machines so intelligent that they would end up rebelling against their creators.
But later, in the early days of computing and data technologies in the 1960s, these issues were not among the main concerns of science. “There was a belief that because the data was objective and scientific, the resulting information would be true and of high quality. It was derived from an algorithm, just as a result is derived from a mathematical calculation. Artificial intelligence was objective and therefore helped us to eliminate human prejudices,” explained Joan Casas-Roma.
But this was not the case. We realized that the data and the algorithms replicated the model or worldview of the person who used the data or designed the system. In other words, technology itself did not eliminate human prejudice, but transferred it to a new medium. “Over time we have learned that artificial intelligence is not necessarily objective and as such its decisions can be very biased. The decisions have perpetuated inequalities rather than correcting them,” he said.
So we ended up at the same point predicted by the laws of robotics. Questions about ethics and artificial intelligence were brought to the table from a reactive and protective perspective. Realizing that artificial intelligence is neither fair nor objective, we decided to take action to limit its harmful effects. “The ethical question of artificial intelligence arose from the need to build a protective shield so that the technology’s unwanted effects on users would not be perpetuated any further. That was necessary,” said Casas-Roma.
As he explains in the manifesto, the fact that we had to react in this way means that, in recent decades, we have not examined another fundamental question in the relationship between technology and ethics: what ethically desirable outcomes could artificial intelligence, and access to an unprecedented amount of data, help us reach? In other words, how can technology help us build an ethically desirable future?
On an idealistic relationship between ethics and technology
One of the main medium-term goals of the European Union is the transition to a more inclusive, integrated and collaborative society, where citizens better understand global challenges. To achieve this, technology and artificial intelligence could be a major obstacle, but also a great ally. “Depending on how people interact with artificial intelligence, a more cooperative society could be promoted,” said Casas-Roma.
In recent years, online education has experienced an undeniable boom. Digital learning tools have many benefits, but they can also contribute to a feeling of isolation. “Technology could encourage greater togetherness and create a greater sense of community. For example, instead of having a system that only automatically corrects assignments, the system could also send a message to another classmate who solved the problem, to make it easier for students to help each other. It’s just an idea to understand how technology can be designed in a way that helps us interact in a way that encourages community and collaboration,” he said.
According to Casas-Roma, an ethically idealistic perspective can rethink how technology, and the way users engage with it, could create new opportunities to generate ethical benefits for users themselves and for society as a whole. This idealistic approach to the ethics of technology should have the following characteristics:
- Expansive. Technology and its use should be designed to enable its users to flourish and grow stronger.
- Idealistic. The end goal that should always be kept in mind is how technology could make things better.
- Enabling. The opportunities created by technology must be carefully understood and designed to ensure they encourage and support the ethical growth of users and societies.
- Mutable. The status quo should not be taken for granted. The current social, political and economic landscape, as well as technology and the way it is used, could be reshaped to allow progress towards another, ideal state.
- Principle-based. The way technology is used should be seen as an opportunity to enable and encourage behaviors, interactions and practices that are consistent with certain desired ethical principles.
“It’s not so much about data or algorithms. It’s about rethinking how we interact, and how we want to interact, through a technology that is increasingly present as a medium,” concluded Joan Casas-Roma.
“This idea is not so much a suggestion about the power of technology as it is about the mindset of whoever develops the technology. It is a call for a paradigm shift, a rethink. The ethical implications of technology are not a technological issue, but rather a societal issue. They represent the problem of how we interact with each other and our environment through technology.”
Joan Casas-Roma, Ethical idealism, technology and practice: a manifesto, Philosophy & Technology (2022). DOI: 10.1007/s13347-022-00575-7
Provided by the Universitat Oberta de Catalunya
Citation: Could artificial intelligence help us build a technological world that is more ethical? (2022 December 13) Retrieved December 13, 2022 from https://techxplore.com/news/2022-12-artificial-intelligence-technological-world-ethical.html
This document is protected by copyright. Except for fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is for informational purposes only.