Dealing with the risks of AI requires teamwork

A visitor looks at an AI sign on an animated screen at Mobile World Congress in Barcelona last month. (Photo: AFP)

The last few months might be remembered as the moment when predictive artificial intelligence went mainstream. While predictive algorithms have been around for decades, the release of applications like OpenAI’s ChatGPT — and its rapid integration with Microsoft’s Bing search engine — may have opened the floodgates for easy-to-use AI.

Within weeks of ChatGPT’s release, it had already attracted 100 million monthly users, many of whom have no doubt experienced its darker side: from insults and threats to disinformation and a demonstrated ability to write malicious code.

The chatbots making headlines are just the tip of the iceberg. AIs for creating text, speech, art, and video are evolving rapidly, with far-reaching implications for governance, commerce, and civic life.

Unsurprisingly, capital is pouring into the sector as governments and corporations alike invest in startups to develop and deploy the latest machine-learning tools. These new applications will combine historical data with machine learning, natural language processing, and deep learning to estimate the likelihood of future events.
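To make that mechanism concrete, the sketch below shows the basic shape of such a predictive pipeline: a model is fitted to historical records, then returns a probability for a future event. The data, feature values, and choice of logistic regression are illustrative assumptions, not a description of any particular deployed system.

```python
# Minimal sketch of a predictive-analytics pipeline: fit a model to
# historical records, then score the likelihood of a future event.
# All data and feature values below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical historical data: each row is a past case described by
# two features; the label marks whether the event of interest occurred.
X_history = rng.normal(size=(500, 2))
y_history = (X_history[:, 0] + 0.5 * X_history[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X_history, y_history)

# Score a new case: the model returns an estimated P(event).
new_case = np.array([[0.8, -0.2]])
print(f"Estimated likelihood of event: {model.predict_proba(new_case)[0, 1]:.2f}")
```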

Crucially, the adoption of new natural language processing and generative AI will not be limited to the wealthy countries and companies, such as Google, Meta, and Microsoft, that have spearheaded their development. These technologies are already spreading to low- and middle-income settings, where predictive analytics for everything from reducing urban inequality to tackling food insecurity offer cash-strapped governments, corporations, and NGOs tremendous potential to improve efficiency and deliver social and economic benefits.

The problem, however, is that insufficient attention has been paid to the potential negative externalities and unintended impacts of these technologies. The most obvious risk is that unprecedentedly powerful forecasting tools will strengthen the surveillance capacity of authoritarian regimes.

A much-cited example is China’s “social credit system,” which uses credit histories, criminal convictions, online behavior, and other data to assign a score to every person in the country. Those scores can then determine whether someone can get a loan, attend a good school, travel by train or plane, and so on. Although China touts the system as a tool for improving transparency, it also serves as an instrument of social control.

Yet even when deployed by seemingly well-intentioned democratic governments, social-impact-focused companies, and progressive nonprofits, predictive tools can produce subpar results. Design flaws in the underlying algorithms and biased training data can lead to privacy breaches and identity-based discrimination. This has already become a glaring problem in criminal justice, where predictive analytics routinely entrench racial and socioeconomic disparities. For example, an AI system developed to help U.S. judges estimate the likelihood of recidivism wrongly rated Black defendants as being at far greater risk of reoffending than white defendants.
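A toy simulation can make this mechanism visible. The sketch below uses entirely synthetic data and assumptions of my own: two groups with identical underlying behavior, but historical labels skewed against one of them. A simple risk classifier trained on those labels then flags genuinely low-risk members of the disadvantaged group as high risk more often. It is illustrative only and does not reproduce any real-world system.

```python
# Illustrative only: synthetic data showing how biased historical labels
# produce group-skewed false-positive rates in a risk classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, size=n)    # two demographic groups, 0 and 1
behavior = rng.normal(size=n)         # true risk, identical across groups

# Assumed bias for the demo: group 1 was recorded as "reoffended" more
# often for the same behavior (e.g., due to heavier policing).
recorded = (behavior + 0.8 * group + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([behavior, group])
model = LogisticRegression().fit(X, recorded)
flagged = model.predict(X).astype(bool)

# Compare the share of genuinely low-risk people flagged as high risk.
low_risk = behavior < 0
for g in (0, 1):
    mask = (group == g) & low_risk
    print(f"Group {g}: low-risk people flagged high-risk = {flagged[mask].mean():.2%}")
```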

There are also growing concerns about how AI could deepen inequalities in the workplace. To date, predictive algorithms have increased efficiency and profits in ways that benefit managers and shareholders, at the expense of ordinary workers (particularly in the gig economy).

In all of these examples, AI systems hold up a funhouse mirror to society, reflecting and magnifying our prejudices and injustices. As technology researcher Nanjira Sambuli notes, digitization tends to exacerbate rather than ameliorate pre-existing political, social, and economic problems.

Enthusiasm for adopting predictive tools needs to be balanced against an informed and ethical consideration of their intended and unintended impacts. Where the effect of powerful algorithms is disputed or unknown, the precautionary principle would advise against their use.

We must not allow AI to become just another area where decision makers seek forgiveness rather than permission. For this reason, the United Nations High Commissioner for Human Rights and others have called for moratoria on the adoption of AI systems until the ethical and human rights frameworks are updated to reflect their potential harms.

Developing suitable frameworks requires a consensus on the basic principles that should inform the design and use of predictive AI tools. Fortunately, the race toward AI has spawned a parallel wave of research, initiatives, institutes, and networks devoted to AI ethics. And while civil society has taken the lead, intergovernmental bodies like the OECD and UNESCO have also gotten involved.

The UN has been working to develop universal standards for ethical AI since at least 2021. Additionally, the European Union has proposed an AI Act — the first such effort by a major regulator — that would ban certain uses (for example, applications resembling China’s social credit system) and subject other high-risk applications to special requirements and oversight.

So far, this debate has mostly focused on North America and Western Europe. But low- and middle-income countries have their own pressing needs, concerns, and social inequalities to address. A wealth of research shows that technologies developed by and for markets in advanced economies are often ill-suited to less developed ones.

If the new AI tools are simply imported and deployed at scale before the necessary governance structures are in place, they could easily do more harm than good. All of these questions need to be considered if we are to develop truly universal principles for AI governance.

Recognizing these gaps, the Igarapé Institute and New America recently launched a global task force on Predictive Analytics for Security and Development. The task force will bring together digital-rights advocates, public-sector partners, technology entrepreneurs, and social scientists from the Americas, Africa, Asia, and Europe, with the aim of defining first principles for the use of predictive technologies in public safety and sustainable development in the Global South.

Formulating these principles and standards is only the first step. The greater challenge will be organizing the international, national, and subnational cooperation and coordination needed to translate them into law and practice. In the global rush to develop and deploy new predictive AI tools, harm-prevention frameworks are essential to ensure a safe, prosperous, sustainable, and human-centric future. ©2023 Project Syndicate