Neither a savior nor a Frankenstein monster

For the past five years, the techlash has focused on social media algorithms, mostly algorithmic recommendations. Now all we talk about is generative AI. Its potential impact has been the subject of many sensational headlines. While this craze feels novel, it is simply a repeat of previous AI hype cycles.

The genesis of the current hype goes back to a Google engineer who called Google’s text generator LaMDA “sentient” (a statement for which he was strongly criticized). Over the summer, the hype reached new heights as image generators like DALL-E, Stable Diffusion, and Midjourney allowed people to type text prompts and receive AI-generated illustrations in seconds.

Creative industries found clear benefits, including advertising, marketing, gaming, architecture, fashion, graphic design, and product design. Then came tools like Astria AI, MyHeritage’s “AI Time Machine,” or Lensa, which allowed people to create fantastical profile pictures – using AI and their own selfies. These products are easy to use and have quickly moved from the world of early adopters and geeks to the mainstream.

Now OpenAI’s new chatbot, ChatGPT, is causing a firestorm. This type of generative large language model (LLM) is trained to predict the next word for a given input, not to check whether a fact is correct. So we quickly found that it generates well-written explanations that mix facts with outright bullshit.
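To see what that means in practice, here is a minimal, hypothetical sketch in Python (with invented words and probabilities, not any real model’s code) of next-word prediction: the model samples a plausible continuation from a learned distribution, and nothing in that step checks whether the resulting statement is true.

```python
import random

# Hypothetical toy distribution: the probability of each next word
# after a given prompt, as a trained model might have learned it.
next_word_probs = {
    "The capital of Australia is": {
        "Canberra": 0.60,   # correct continuation
        "Sydney": 0.35,     # fluent but factually wrong
        "Melbourne": 0.05,  # also fluent, also wrong
    }
}

def predict_next_word(prompt: str) -> str:
    """Sample the next word in proportion to its learned probability."""
    dist = next_word_probs[prompt]
    # The sampling step optimizes for plausibility, not truth: roughly
    # 4 times out of 10, this toy model confidently names the wrong city.
    return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

print("The capital of Australia is", predict_next_word("The capital of Australia is"))
```

Scaled up to billions of parameters and trained on the open web, the same sampling principle produces fluent paragraphs in which correct and incorrect statements are delivered with equal confidence.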

Reactions to these new tools have ranged from hype (this technology is awesome and will improve our lives) to criti-hype (this technology is awesome and will ruin our lives). There are also some anti-hype voices who claim it’s all a bubble (this technology is not that awesome). Overall, the current reporting sounds very familiar – because it is. It follows the same patterns as previous hype cycles, moving from utopian to more specific, dystopian descriptions.

This pattern of AI media coverage is well documented in academic research. Studies on the framing of AI found that the media discusses AI in terms of either “risks and challenges” or “benefits and opportunities.” Its potential impact on society is framed as either desirable or threatening.

Furthermore, the risks of AI are discussed in more specific terms (loss of jobs, embedded bias, privacy, ethical concerns, and misuse) than the benefits of AI (economic benefit, improvement of human life and well-being, reduction of human prejudice or social inequality). This means that dystopian frames are described with more examples and metaphors than utopian frames. These frames range from “gateway to heaven” (a win-win situation for humanity) to the most alarming “Frankenstein’s monster” (loss of human control).

The two most common frames used by the media when talking about AI are “social progress” (a means of improving quality of life or solving problems) and “Frankenstein’s monster/Pandora’s box” (possible catastrophe and out-of-control, unintended consequences). The extreme ends of these narratives pose a problem: overpromising advances can create unrealistic expectations and a hype bubble. As news discourse becomes more critical, false fears can likewise mislead public debate.

When I asked ChatGPT to explain how artificial intelligence is portrayed in media coverage, its response reflected the academic studies mentioned above:

“Artificial intelligence (AI) is often described in media coverage as both utopian and dystopian. On the one hand, AI is often portrayed as a technology that has the potential to bring great benefits to society by solving complex problems, increasing efficiency and improving people’s quality of life.

On the other hand, AI is also often portrayed in a more dystopian light, as a technology that has the potential to disrupt society, destroy jobs and pose a threat to humanity. From this perspective, AI is seen as a potential enemy of humanity rather than a tool that can improve people’s lives.”

Listening to the latest generative AI hype, you would think it will benefit us by 1) making our work more efficient and helping with tasks, and 2) creating content (texts, images) of great brilliance (masterpieces).

Listening to the generative AI criti-hype, you would think it will harm us by 1) making our work dispensable and threatening jobs, and 2) creating convincing false information and imagery (AI-generated BS).

As expected, the resulting coverage generated exaggerated headlines as it introduced generative AI to the masses. “Will ChatGPT kill the student essay?” The answer was “Yes.” AI-generated art was presented as another deadly conflict: “Will AI image generators kill the artists?” on one side and “Angry artists try to kill AI” on the other. Even thoughtful people can fall victim to sensationalism.

In response to the generative AI hype and criti-hype, AI experts expressed frustration at how such headlines distort their nuanced scientific discussion. “Generative AI shouldn’t be framed as ‘humans versus machines.’ What we actually see is humans and AI working together,” an AI scientist told me.

Grady Booch, chief scientist for software engineering at IBM Research, wrote that it is a case study in “why we in the scientific field are suspicious of you in the media, because you come to us with a point of view you seek support for, instead of listening first.” Alfred Spector, a visiting scholar at MIT’s Department of Electrical Engineering and Computer Science, wrote: “The press may want to write about a polarized debate, when technological development and human adaptation should be thought through more carefully.” He proposed a middle ground that reflects what most scientists think: generative AI will deliver elements of both camps, “as technology always has positive and negative effects.”

Similarly, Roy Bahat, head of Bloomberg Beta, commented: “We love picking teams. Reality has no teams. The answer is [almost] always ‘both/and’ instead of ‘either/or.’” Neil Turkewitz, CEO of Turkewitz Consulting Group, summarized: “There are no two sides to the debate. The utopians and dystopians may be consuming a lot of oxygen, but most of us involved are realists interested in ensuring that decisions about technology are made through conscious and thoughtful human decision-making.”

Nonetheless, generative AI is treated as if the machines rule us, rather than us using them. This is the essence of technological determinism: if you believe that technology is deterministic, you will view every new technology as the determinant of society. When it comes to generative AI, people’s imaginations run wild, as if we were a bunch of helpless muppets in the hands of a mind-controlling Skynet.

The less Hollywood-like view is that social forces shape technology: it is society that influences technology (and not the other way around). It is still possible for humans to exercise control over their lives (human agency) and to influence the design and use cases of technology.

Unfortunately, current technology coverage is deterministic, and so is our perceived control (or lack thereof). While technological advances are impressive, these tools are built by humans and used (and abused) by humans. A more realistic narrative should treat them as such.

The release of GPT-4 next year will likely intensify the AI debate. Now is the time to elevate it and emphasize the common ground in the scientific conversation rather than the extreme edges. There is a vast gray area between utopian dreams and dystopian nightmares.

The key is to break through the hype, look at the complex reality, and see people, not machines, in the lead. Various societal forces are at play here: researchers, politicians, industry leaders, journalists and users who continue to shape the technology.

How should the media report on Generative AI?

As a technology still in the process of being designed, with a set of decisions to be made and problems to be solved together. We can still set guardrails and establish norms, such as standard procedures for consent and transparency around data sources (e.g., the development of watermarking tools), guidelines for oversight and accountability, and better education for AI literacy. It’s going to be a long road, and we have time to shape our partnership with AI.