Google CEO tells employees to spend hours working out ‘Bard’ AI’s kinks

Google CEO Sundar Pichai told employees that “thousands” of workers are already “dogfooding” their Bard AI. Photo: Anna Moneymaker (Getty Images)

Google knows its AI just isn’t ready for prime time, so it has a new plan to iron out all the problems: have thousands of its employees spend hours poking and prodding the poor AI until it’s no longer embarrassing to the company once it’s finally released.

Business Insider reported, based on a leaked company-wide email, that Google is asking all of its employees to set aside two to four hours a day to test Google’s “Bard” AI, the same system the company wants to integrate into its search chat function. It’s unclear whether all Googlers around the world received the same request. The company recently announced it would cut 12,000 jobs from its global workforce, but Google still employs over 170,000 people around the world, not counting the rest of its parent company Alphabet.

In that memo, Google CEO Sundar Pichai said he would “appreciate” it if all employees “contributed more deeply” and took two to four hours to pressure-test Bard. Anyone who’s read a “suggestion” email from their boss knows that this is more of an assignment than anything else. Based on the body of the email, it’s unclear whether the two to four hours are expected every day or spread out over a longer period of time.

Google unveiled Bard last week in a bid to keep pace with Microsoft, which has introduced its own chatbot AI into Bing search. During a recent introductory presentation, the AI presented an incorrect statement about the Webb Space Telescope, an error that reportedly wiped $100 billion off the company’s market value.


According to the memo, Google has already begun internal testing, dubbed “dogfooding,” as of Tuesday, with Pichai saying the company already has “thousands” of external and internal testers messing around with Bard. These testers are reportedly investigating quality and security concerns with the search AI, as well as its “groundedness,” which could relate to whether the text responses generated by the AI read as “human.”


A Google spokesperson told Gizmodo in an email that “Testing and feedback from Googlers and external trusted testers are important aspects of improving Bard to ensure it’s ready for our users. We often ask Googlers for input to improve our products, and that’s an important part of our internal culture.” The company didn’t respond to questions about how long and how often employees should stress test the AI.

Google has been smarting ever since, especially as comparisons between Bard’s rather lackluster showing and the long list of new features in Bing’s search AI have made the company look like it’s falling behind. Demonstrations of Google’s Bard, unlike Bing search, did not provide citations for the content displayed. However, citations are not the be-all and end-all for lending credibility to AI answers. Margaret Mitchell, the senior ethics researcher at Hugging Face who was previously fired from Google’s AI team, told MIT Technology Review that “a lot of people don’t check citations” and that surfacing citations could simply lend credence to incorrect information.


Bing Search had a lot more bells and whistles at launch than Google’s offering, but it suffers from the same issues that other AI chatbots have long had, namely that they’re absolutely riddled with inaccuracies and, well, weird responses to user input.

And getting the AI to stop sharing horrible content – be it xenophobia, racism or anti-Semitism, as chatbots are known to do – can take many hands working many hours to knock it into halfway decent shape. OpenAI, the creator of ChatGPT, which helped Microsoft build its Bing AI, contracted low-wage workers in Kenya to sift through thousands of samples of horrible content. This included child sexual abuse content, murder, torture, suicide and more.

It’s unclear whether Googlers will be exposed to any of this, but it probably won’t be anywhere near fun for the thousands of employees expected to stress-test the AI with prompts. Google recently invested nearly $400 million in OpenAI competitor Anthropic, a company that is now hiring a “prompt engineer” to develop ways to get large language models to perform specific tasks.