FOCUS-For tech giants, AI like Bing and Bard represents a billion-dollar search problem

By Jeffrey Dastin and Stephen Nellis

MOUNTAIN VIEW, Calif., Feb 22 (Reuters) – As Alphabet Inc looks past a chatbot error that helped slash its market value by $100 billion, another challenge is emerging from its effort to add generative artificial intelligence to its popular Google Search: the cost.

Executives across the technology sector are talking about how to run AI like ChatGPT while accounting for the high cost. OpenAI’s wildly popular chatbot, which can compose prose and answer search queries, has “eye-watering” computing costs of a few cents or more per conversation, the startup’s CEO Sam Altman said on Twitter.

In an interview, Alphabet Chairman John Hennessy told Reuters that an exchange with AI known as a large language model likely costs 10 times more than a standard keyword search, though fine-tuning will help bring that expense down quickly.

Even with revenue from potential chat-based search ads, the technology could chip into Alphabet’s bottom line with several billion dollars of extra costs, analysts said. Its net income was nearly $60 billion in 2022.

Morgan Stanley estimated that Google’s 3.3 trillion search queries last year cost roughly a fifth of a cent each, a number that would rise depending on how much text the AI must generate. Google, for example, could face a $6 billion increase in expenses by 2024 if ChatGPT-like AI were to handle half of its incoming queries with 50-word answers, the analysts projected. Google is unlikely to need a chatbot to handle navigational searches for sites like Wikipedia.
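To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch using only the figures quoted above; treating the $6 billion as purely incremental serving cost is an assumption made for illustration.

```python
# Back-of-the-envelope sketch of the Morgan Stanley figures cited above.
# The volumes and per-query costs come from the article; everything else
# (e.g., treating the $6 billion as purely incremental) is an assumption.

searches_per_year = 3.3e12          # Google searches last year, per the article
baseline_cost_per_search = 0.002    # "about a fifth of a cent" = $0.002

baseline_annual_cost = searches_per_year * baseline_cost_per_search
print(f"Baseline serving cost: ${baseline_annual_cost / 1e9:.1f}B/year")  # ~$6.6B

# Scenario: ChatGPT-like AI answers half of queries with ~50-word responses,
# adding $6 billion of cost (the analysts' 2024 projection).
ai_queries = searches_per_year / 2
added_cost = 6e9
incremental_cost_per_ai_query = added_cost / ai_queries
print(f"Implied extra cost per AI-handled query: "
      f"{incremental_cost_per_ai_query * 100:.2f} cents")  # ~0.36 cents
```

On these numbers, each AI-handled query would cost roughly 0.36 cents on top of the 0.2-cent baseline, in the same ballpark as Hennessy’s “10 times more” once longer answers and conversations are factored in.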

Others arrived at a similar bill by different means. SemiAnalysis, a research and advisory firm focused on chip technology, said adding ChatGPT-style AI to search could cost Alphabet $3 billion, an amount kept in check by Google’s in-house chips, called Tensor Processing Units (TPUs), along with other optimizations.

What makes this form of AI pricier than conventional search is the computing power involved. Such AI depends on billions of dollars’ worth of chips, a cost that has to be spread over their useful life of several years, analysts said. Electricity likewise adds costs and pressure for companies with carbon-footprint goals.

The process of handling AI-powered search queries is known as “inference,” in which a “neural network” loosely modeled on the biology of the human brain infers the answer to a question from prior training.

By contrast, in a traditional search, Google’s web crawlers have scanned the internet to compile an index of information. When a user types in a query, Google serves up the most relevant answers stored in that index.
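The difference can be sketched in a few lines of code. This is a schematic toy, not Google’s or OpenAI’s actual stack; the index entries, the stub model, and the 50-token loop are all assumptions for illustration.

```python
# Schematic contrast between index lookup and generative inference.

# Traditional search: crawlers have already built an inverted index,
# so answering a query is mostly a cheap, precomputed lookup.
inverted_index = {
    "wikipedia": ["https://www.wikipedia.org"],  # navigational query
}

def keyword_search(query: str) -> list[str]:
    # No model runs at query time; results were stored during indexing.
    return inverted_index.get(query.lower(), [])

# Chat-style search: every query triggers a forward pass through a large
# neural network for each generated token, which is what drives up cost.
class StubModel:
    """Stand-in for a large language model with billions of parameters."""
    def next_token(self, query: str, so_far: list[str]) -> str:
        return "word"  # a real model would run on accelerator chips here

def generative_answer(query: str, model: StubModel) -> str:
    tokens = []
    for _ in range(50):  # ~50-word answer, per the analysts' scenario above
        tokens.append(model.next_token(query, tokens))  # one pass per token
    return " ".join(tokens)

print(keyword_search("Wikipedia"))      # cheap: a dictionary lookup
print(generative_answer("why is AI search costly?", StubModel())[:20])
```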

Alphabet’s Hennessy told Reuters, “It’s the cost of inference that you need to bring down,” calling it “a two-year problem at worst.”

Alphabet is under pressure to take on the challenge despite the expense. Earlier this month, its competitor Microsoft Corp held a high-profile event at its headquarters in Redmond, Wash., to unveil plans to embed AI chat technology into its Bing search engine, with top executives taking aim at Google’s 91% share of the search market, by Similarweb’s estimate.

A day later, Alphabet talked about plans to improve its search engine, but a promotional video for its AI chatbot, Bard, showed the system inaccurately answering a question, leading to a stock slide that eroded its market value by $100 billion.

Microsoft later came under its own scrutiny when its AI reportedly made threats or declared love for test users, prompting the company to limit long chat sessions that allegedly “provoked” unintended replies.

Microsoft’s chief financial officer Amy Hood told analysts that the benefits from user acquisition and advertising revenue will outweigh the expenses as the new Bing rolls out to millions of consumers. “That’s additional gross margin dollars for us, even at the costs we’re discussing,” she said.

And at another Google competitor, You.com, Chief Executive Richard Socher said adding an AI chat experience, along with applications for charts, videos and other generative technology, raised expenses by 30% to 50%. “Technology is getting cheaper at scale and over time,” he said.

A source close to Google warned that it’s still early to pinpoint exactly how much chatbots might cost, given that efficiency and usage vary widely by technology, and AI is already powering products like search.

Still, footing the bill is one of two key reasons search and social media giants with billions of users haven’t launched an AI chatbot overnight, said Paul Daugherty, Accenture’s chief technology officer.

“One is accuracy, and the second is you need to scale this in the right way,” he said.

DOING THE MATH

For years, researchers at Alphabet and elsewhere have been studying how to train and run large language models more cheaply.

Larger models require more chips for inference and therefore cost more to run. The AI dazzling consumers with its human-like authority has ballooned in size, reaching 175 billion so-called parameters, or different values that the algorithm takes into account, for the OpenAI model updated into ChatGPT. Cost also varies with the length of a user’s query, measured in “tokens,” or pieces of words.
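A common rule of thumb holds that transformer inference takes about two floating-point operations per parameter per generated token, which shows why both model size and answer length drive cost. The parameter count below comes from the article; the token count, hardware throughput and price are illustrative assumptions.

```python
# Rough scaling sketch: ~2 FLOPs per parameter per generated token.

params = 175e9        # parameters in the ChatGPT-class model cited above
tokens_out = 70       # ~50 words is roughly 70 tokens (assumption)

flops_per_token = 2 * params
total_flops = flops_per_token * tokens_out
print(f"~{total_flops:.2e} FLOPs per answer")  # ~2.45e13

# Hypothetical accelerator: 1e14 FLOP/s sustained, rented at $2.00/hour.
sustained_flops_per_sec = 1e14
dollars_per_second = 2.00 / 3600
cost = total_flops / sustained_flops_per_sec * dollars_per_second
print(f"~{cost * 100:.3f} cents of raw compute per answer")  # ~0.014 cents

# Real deployments pay far more per conversation: long prompts and chat
# history multiply the token count, utilization is well below peak, and
# capacity is overprovisioned for redundancy -- which is how costs reach
# the "few cents or more per conversation" Altman described above.
```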

A senior technology executive told Reuters that such AI remains prohibitively expensive to get into the hands of millions of consumers.

“These models are very expensive, and so the next stage of invention will be to reduce the cost of both training these models and inferring them so that we can use them in any application,” the executive said on condition of anonymity.

So far, computer scientists within OpenAI have figured out how to optimize inference costs through complex code that makes chips run more efficiently, said a person familiar with the effort. An OpenAI spokesman did not immediately comment.
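The article does not say which optimizations OpenAI applied. One standard, generic way to make chips run more efficiently is request batching, sketched below purely as an illustration; the function names and batch size are assumptions, not OpenAI’s code.

```python
# Request batching: amortize each expensive model call over several users.

def answer_batch(queries: list[str]) -> list[str]:
    # Stand-in for one batched forward pass through a model; on real
    # accelerators, a batch of N queries costs far less than N single calls.
    return [f"answer to: {q}" for q in queries]

def serve(incoming: list[str], batch_size: int = 8) -> list[str]:
    answers = []
    for i in range(0, len(incoming), batch_size):
        answers.extend(answer_batch(incoming[i:i + batch_size]))
    return answers

print(serve([f"query {n}" for n in range(20)])[:2])
```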

A longer-term problem is to reduce the number of parameters in an AI model by 10x or even 100x without losing accuracy.

“How to most effectively weed out parameters is still an open question,” said Naveen Rao, who formerly led Intel Corp’s AI chip effort and is now working to reduce AI computational costs through his startup MosaicML.
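One common way to weed out parameters, offered here as a generic illustration rather than any named company’s method, is magnitude pruning: zero out the weights closest to zero, then fine-tune the model to recover accuracy.

```python
# Magnitude pruning sketch: remove the smallest-magnitude weights.
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the fraction `sparsity` of weights closest to zero."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))              # toy weight matrix
pruned = magnitude_prune(w, sparsity=0.9)  # keep only the largest ~10%
print(f"{(pruned == 0).mean():.0%} of weights removed")
```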

Meanwhile, some have considered charging for access, like OpenAI’s $20-per-month subscription for better ChatGPT service. Tech experts also said a workaround is applying smaller AI models to simpler tasks, an approach Alphabet is exploring.

The company said this month that a “smaller model” version of its massive LaMDA AI technology will power its chatbot Bard, which “requires significantly less processing power, allowing us to scale to more users.”
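In code, the smaller-models idea amounts to routing: try a cheap model first and escalate only when needed. The models, confidence heuristic, and threshold below are hypothetical illustrations, not a description of Bard’s design.

```python
# Routing sketch: send each query to a cheap model, escalate hard ones.

def small_model(query: str) -> tuple[str, float]:
    # Stand-in for a lightweight model returning (answer, confidence);
    # short queries are treated as easy purely for demonstration.
    confidence = 0.9 if len(query.split()) < 6 else 0.3
    return f"short answer to: {query}", confidence

def large_model(query: str) -> str:
    # Stand-in for the expensive, full-size model.
    return f"detailed answer to: {query}"

def route(query: str, threshold: float = 0.7) -> str:
    answer, confidence = small_model(query)
    if confidence >= threshold:
        return answer              # cheap path handles most simple queries
    return large_model(query)      # pay for the big model only when needed

print(route("capital of France"))
print(route("compare the long-run costs of chat-based and keyword search"))
```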

When asked about chatbots like ChatGPT and Bard at a conference called TechSurge last week, Hennessy said more focused models, rather than one system that does it all, would help “tame the costs.” (Reporting by Jeffrey Dastin in Mountain View, California and Stephen Nellis in Sunnyvale, California; Additional reporting by Greg Bensinger; Editing by Kenneth Li and Claudia Parsons)