How ChatGPT breathed new life into the search wars on the web

ChatGPT, the viral chatbot by artificial intelligence startup OpenAI, has sparked a renewed battle for AI supremacy as tech giants Google and Microsoft compete to use the technology to transform the way the world uses search engines. Here’s everything you need to know:

How important is ChatGPT?

Since its debut in late November, ChatGPT, OpenAI’s generative chatbot, has grown into a global phenomenon. Just five days after its release, 1 million people reportedly signed up to try the chatbot, which can quickly and seamlessly generate advanced content like essays, poetry, and fiction. As of January, ChatGPT had 100 million monthly active users, making it “one of the fastest-growing software products in memory,” reports The New York Times. For comparison, it took TikTok nine months to reach 100 million users, and Instagram more than two years, according to Reuters.

How did ChatGPT reignite the search wars on the web?

The software’s popularity “has prompted a rush of investors trying to participate in the next wave of the AI boom,” writes the Times. For example, OpenAI recently signed a reported $10 billion deal with Microsoft and will also partner with BuzzFeed, which plans to use OpenAI’s technology to help build its signature lists and quizzes. The announcement caused BuzzFeed’s stock price to more than double.

Amid the growing frenzy surrounding ChatGPT, other tech companies have begun announcing competing chatbots. Google executives declared a “code red” in response to OpenAI’s software, accelerating the development of numerous AI products to close the growing gap between the company and its emerging rivals. Shortly thereafter, Google introduced its own chatbot, Bard, and began giving select users an early look. Like ChatGPT, Bard uses information from the internet to generate text responses to user queries.

Then, in February, Microsoft announced it would integrate ChatGPT into its Bing search engine and other products, to which Google responded by announcing that it would also integrate generative AI into its own search capabilities.

“The search wars on the internet are back,” says Richard Waters of the Financial Times. Generative AI has “opened the first new front in the battle for search dominance since Google fended off a concerted challenge from Microsoft’s Bing more than a decade ago.” And for Google in particular, this arms race could pose a serious threat to its core search business, which relies heavily on digital ads. “Google has a business-model problem,” Amr Awadallah, a former Google employee who now runs Vectara, an LLM-based search platform, told the Times. “If Google gives you the perfect answer to each query, you won’t click on any ads.”

Why else is Google at a disadvantage?

The fact that Google has been forced to play catch-up is ironic, especially since the tech company “was early in the advanced conversational AI game,” says CNBC. In fact, since 2016, CEO Sundar Pichai has striven to rebrand Google as an AI-first company.

In 2018, Google debuted Duplex, “an amazingly human-sounding AI service” programmed to mimic human verbal tics while making automated calls to restaurants that don’t take online reservations. While many were “rightfully impressed” by the program, others were “a little worried and unsettled,” reports Forbes. Media outlets raised concerns over the ethics of a program that intentionally deceives the people it calls; at the time, NYU professors Gary Marcus and Ernest Davis called it “a bit creepy” in an op-ed for the Times. And sociologist and writer Zeynep Tufekci tweeted: “Silicon Valley is ethically lost, rudderless and has not learned a thing.”

Although Google had faced similar criticism over its Google Glass smart glasses, which debuted in 2012, “the Duplex debacle threw a spanner in the works,” notes Forbes. Under Pichai, rather than flaunting its new pivot toward AI, Google instead became “a monument to Silicon Valley’s ignorance: cool tech tied to a lack of human foresight.” Two former company executives told Forbes that the negativity surrounding Duplex’s launch is “one of many factors that contributed to an environment where Google has been slow to ship AI products.” You may also remember LaMDA, Google’s Language Model for Dialogue Applications, which became embroiled in controversy after a company engineer claimed the program was sentient. His claims were later dismissed by members of the AI community. (LaMDA is the underlying technology at the heart of Bard.)

Controversy within Google’s AI division also played a role in the company’s now-cautious approach. After signing a deal with the Pentagon in 2018 to develop technology for Project Maven, an effort to use AI to improve drone strikes, Google faced criticism from its employees. After the pushback, the company declined to extend the contract and published an ethical guide to developing AI technology called its “AI Principles.” In late 2020, Timnit Gebru, co-lead of the company’s Ethical AI team, was fired after co-authoring a paper criticizing bias in the AI language technology that underpins Google’s search engine; her co-lead, Margaret Mitchell, was fired a few months later. Jeff Dean, head of Google Research, later acknowledged that the AI unit suffered “reputational damage” after the firings.

“It is quite clear that Google was [once] positioned in a way where it might have dominated the kind of conversations we now have with ChatGPT,” Mitchell told Forbes. However, she added that a series of “short-sighted” decisions has brought the company “to a place where there is now so much concern about any kind of pushback.”

What are the ethical and legal implications of AI-integrated search engines?

Despite ChatGPT’s viral popularity, questions remain about the ethics of the powerful text generator, “especially as it’s being rolled out at breakneck speed,” writes CNN analyst Oliver Darcy. “We are re-living the social media era,” Beena Ammanath, head of Trustworthy Tech Ethics at Deloitte and executive director of the Global Deloitte AI Institute, told Darcy. She warned that AI chatbots will have “unintended consequences” unless serious precautions are taken, and equated the rapid advent of AI integration with “building Jurassic Park, putting some danger signs on the fences, but leaving all the gates open.” She pointed out that scientists have yet to solve AI’s bias problems, and that the technology also tends to mix misinformation with fact.

“The challenge with new language models is that they mix fact and fiction,” Ammanath continued. “It’s effectively spreading misinformation. It can’t understand the content. So it can spit out perfectly logical-sounding content, but be wrong. And it delivers it with complete confidence.” Case in point: Bard incorrectly answered a search query in a widely viewed promotional video released as part of its launch. The botched response wiped $100 billion off the market value of Google’s parent company, Alphabet, according to Reuters. Company employees also criticized the incident, calling it “rushed,” “botched,” and “un-Googley” in an internal forum.

With their recent tit-for-tat announcements, both Google and Microsoft show they “well understand that AI technology has the power to reshape the world as we know it,” says Darcy. But with so many vulnerabilities yet to be ironed out, he wonders, “Are they going to follow the Silicon Valley maxim that’s caused so much uproar in the past?”

The looming problem of misinformation could also become a liability for Google and Microsoft as they change how search engine results are presented, says John Loeffler in a column for TechRadar. By using AI to rewrite responses to queries, search engines “ultimately become the publishers of that content, even if they cite someone else’s work as the source.” By integrating AI tools and assuming the role of publisher, tech companies take on the legal responsibility that comes with potentially publishing misinformation. “The legal dangers of being a publisher are as infinite as the opportunities to defame someone or to spread dangerous, unprotected speech,” writes Loeffler, “so it’s impossible to predict how damaging AI integration will be.”
