Dependence on artificial intelligence will create chaos and distrust without regulation

In the euphoria over ChatGPT’s rapid progress, we shouldn’t shut our eyes to the risks posed by the opaque logic driving AI decision-making. The lack of governance standards to combat anti-competitive and malicious activity could be disastrous.

Advances in natural language processing that enable conversational artificial intelligence (AI), coupled with open-source access to AI-driven tools, have eroded much of the traditional skepticism about AI and its capabilities.

ChatGPT’s responses have been described as “an immediate echo of human speech,” and this human-like exposure to AI has, to some extent, softened the reluctance we once had towards using it. Our amusement at how ChatGPT mimics human artistic and abstract abilities has made us forgiving of its inaccuracies. Set against the proliferation of AI built specifically to support decision-making, however, those same inaccuracies underscore the need for regulation.

Do we trust AI too much?

AI-based systems are already outperforming human specialists, from writing poetry to helping run operations. Unsurprisingly, reliance on AI to support critical decisions has steadily increased—from determining creditworthiness and eligibility to ratifying hiring decisions based on AI-powered background checks. However, reliance on AI without regulation undermines the accountability we should expect from its developers.

Although AI convincingly demonstrates intelligence, it interprets situations only within the contours of the logic it was trained with, and its intelligence is shaped by baseline data sets that may not apply to every situation. In effect, AI takes learned logic and applies it within narrowly defined bounds.

Human decisions are constantly checked against parameters ranging from ethics to common sense to professional prudence. In stark contrast, society has relied confidently on AI-driven decisions, even though the logic used to arrive at them is shrouded in confidentiality and often shielded from scrutiny.


Although AI decisions are “data-driven,” they must remain open to scrutiny so that malicious logic, bias, or prejudice inherited from the architects and designers of that logic can be weeded out of the decision-making process.

Realize the limits of AI

Although seemingly data-driven and devoid of human weaknesses such as emotion or prejudice, AI decision-making tools can reinforce biases inherited from their human creators. For example, self-preservation or self-promotion logic taught to an AI-powered search engine may cause queries, articles, or content critical of that search engine to be suppressed or eliminated from results.

Another example is how app-based ride-hailing services use AI to determine surge pricing. There have already been allegations that certain apps used data to determine that users are more likely to accept surge pricing when their phone’s battery is low and they are not near public transport.

This is ironically reminiscent of the price gouging by taxi drivers, based on extraneous factors perceived by a human, that accelerated the adoption of app-based taxi services in the first place.

How AI can undermine itself

For unsavory decisions we would rather not have on our conscience, there has been increasing reliance on AI to absolve us of guilt. Yet in situations where the outcome carries amplified consequences, such as determining who should be prioritized in an emergency room, even the most staunch advocate of AI adoption would shy away from relying on it blindly.

While AI-based decision-making has been touted as a transformative tool to remove human oversight and bias (on the premise that AI-driven outcomes are overwhelmingly data-driven), the very biases humans are afflicted with often seep into the logic that drives AI-based decisions.


These exaggerated examples underscore only some of the insidious ways in which reliance on AI without accountability will eventually erode trust in AI-based decision-making.

With this ever-increasing dependency come challenges in regulating and setting standards for AI in a way that anticipates and discourages detrimental or illegal end-uses. AI-driven decisions are often based on opaque logic, and without regulation or auditability, ever-changing parameters allow anti-competitive or malicious logic to be introduced into AI and then withdrawn without consequence.

Liability for AI-based decisions

Until a clear consensus is reached on accountability and liability for reliance on AI-driven decisions, government regulators have the unenviable responsibility of shaping an evolving regulatory framework. The design and deployment of AI must be governed in a way that balances the need to encourage innovation with the need to ensure transparency and accountability.

Transparency in AI-based decision-making relies on traceability of developer input, and auditability of the source code used in AI development is key to instilling trust in AI. Any legal framework that is adopted must first address this need by holding AI developers accountable, ensuring they do not exploit the opacity of the logic that drives their tools.

Admittedly, regulating a technology when its end uses are not fully understood seems counterintuitive, even oppressive. However, the crypto meltdown has shown that public trust can be boosted by regulating tech products to prevent adverse consequences.

For example, adopting a framework that ensures AI is deployed while respecting privacy, ethical considerations, and the moral guard rails we take for granted could build trust and reduce resistance to AI adoption. Rather than legislating reactively after an incident, allowing AI to operate within an ecosystem of broad, pre-defined policies would limit the creative possibilities for abuse.


Akash Karmakar is a partner at the law firm Panag & Babu and leads the firm’s fintech and regulatory advisory practice. He tweets @akashxkarmakar. Views are personal and do not represent the stance of this publication.