Who is liable if AI makes mistakes? – Monash Lens

Artificial intelligence (AI), and machine learning in particular, is an approach to programming computers that has recently become a subject of widespread public discussion.

This is mainly due to the rapid technological development of recent years, which has made AI seem closer than ever.

AI will become even more important in the future as companies look to automate more tasks. One industry forecast, for example, predicted that by 2023 AI would manage 30% of all customer service interactions.

With the increasing spread of AI in our society, the important question arises: Can AI make mistakes?

Yes, it certainly can. Indeed, AI can be more error-prone than humans, because it relies on data that is often incomplete or inaccurate.

At the same time, an AI system is constantly learning and evolving as it interacts with more data. The more data it has to process, the more accurate its predictions and recommendations become.

Because of this, companies are always looking for ways to collect more data. One way to do this is to use AI-powered chatbots to interact with customers. Chatbots can collect data about customer preferences and behavior. This data can improve the customer experience by making better recommendations or providing a more personalized service.

More accurate data, combined with better algorithms, can (at least to some extent) reduce the AI's mistakes and inaccuracies.

Who is liable if things go wrong?

The question is: who is liable if an AI system makes a mistake? The user, the programmer, the owner, or the AI itself?

Sometimes the AI system could be solely responsible. In other cases, the people who created or use the AI system may be partially or fully responsible.

Determining who is responsible for an AI error can be difficult and may require legal experts to determine liability on a case-by-case basis.

Arguably, it can be difficult to hold individuals accountable without a direct link between the AI's failure and those individuals. In such cases, it may be reasonable and fair to hold the AI liable instead of individuals.

Read more: ChatGPT: old AI problems in a new guise, new problems in disguise

How can we hold AI liable? Can we file lawsuits against AI? We can, but only if AI is indisputably a legal entity.

The law allows legal action to be brought only against legal or natural persons. Is AI such a legal person or entity?

It is also a grey area whether AI is a legal entity like a company, or whether it merely acts as an agent. Proponents of AI personhood argue that legal personality is a legal concept that gives entities, such as corporations or individuals, specific rights and obligations.

But under current law, AI systems are considered property, and do not have the same legal rights and obligations as humans or legal entities.

Opponents believe the AI should not be held responsible for its mistakes, as it is not a conscious being, and therefore cannot be held responsible for its actions in the same way as a human.

Is AI punishable?

On the other side of the argument, some believe that AI should be held accountable for its actions like any other entity: if AI is able to make decisions, then it should also be responsible for the consequences of those decisions.

But can AI make a decision without the help of a person behind it? If not, why should the AI bear responsibility for the mistake?

Instead, we often see that principals or employers (with some exceptions) are responsible for the actions of their agents or employees. This theory of vicarious liability was developed in the UK in 1842 in R v Birmingham & Gloucester Railway Co, the first case in which a company was held liable for the actions of its employees.

Can we think of AI as a business or a company?

Read more: ChatGPT: We Need More Debate About Generative AI’s Impact on Healthcare

We all know that AI rests on machine learning, for which a scientist or programmer has written the code. AI will never work without the systematic coding set up by programmers.

For example, ChatGPT is now widely known. It is a kind of AI system created by a natural person or group of people.

Imagine ChatGPT's developer offers a hospital its service for an annual fee of $1000. If the AI's algorithms misdiagnose diseases, leading to patient deaths, isn't it reasonable to file a lawsuit against OpenAI, the company behind ChatGPT?

One might ask: how many people know ChatGPT's founders, compared with how many know of OpenAI and its product, ChatGPT?

So if users suffer harm from ChatGPT, isn't it reasonable to sue OpenAI? After all, a company is a separate legal entity from its members, a principle established in the well-known case of Salomon v A Salomon & Co Ltd [1897] UKHL 1, [1897] AC 22.

Consistent with this view, some jurisdictions are beginning to explore the concept of granting legal personality to AI systems in certain circumstances.

In addition to the liability of the legal entity, liability may be extended to individuals where the error or failure is attributable to their express consent, toleration or omission.

Whether the liability lies with AI or with individuals, AI is a powerful tool that can help us in a number of ways.

However, there are also some risks associated with its use. We must be aware of these challenges and take action to mitigate them.