From marketing to design: How big brands are adopting AI tools despite the risks

Even if you haven’t tried artificial intelligence tools that can write essays and poetry or conjure up new images on command, the companies that make your household products probably already have.

Mattel has put the AI image generator DALL-E to work coming up with ideas for new Hot Wheels toy cars. Used-car seller CarMax is summarizing thousands of customer reviews using the same “generative” AI technology that powers the popular chatbot ChatGPT.

Meanwhile, Snapchat is bringing a chatbot to its messaging service. And grocery delivery company Instacart is integrating ChatGPT to answer customer questions about groceries.

Coca-Cola plans to use generative AI to create new marketing content. And while the company hasn’t explained exactly how it intends to use the technology, the move reflects growing pressure on companies to use tools that many of their employees and consumers are already trying out for themselves.

“We have to accept the risks,” Coca-Cola CEO James Quincey said in a recent video announcing a partnership with startup OpenAI, the maker of DALL-E and ChatGPT, forged through an alliance led by consulting firm Bain. “We need to be smart about taking those risks, experimenting, building on those experiments, driving scale, but not taking those risks is a hopeless starting point.”

But some AI experts warn that companies should carefully assess potential harms to customers, society, and their own reputations before rushing to introduce ChatGPT and similar products in the workplace.

“I want people to think hard before using this technology,” said Claire Leibowicz of The Partnership on AI, a nonprofit group founded and sponsored by major tech providers that recently published a set of recommendations for companies producing AI-generated synthetic images, audio, and other media. “People should absolutely mess around and tinker, but we should also be asking: what are these tools for, anyway?”

Some companies have been experimenting with AI for some time. Mattel announced in October that it was using OpenAI’s image generator as a customer of Microsoft, whose partnership with OpenAI allows it to integrate OpenAI’s technology into its cloud computing platform.

But it wasn’t until the release of OpenAI’s ChatGPT, a free public tool, on November 30 that widespread interest in generative AI tools began to seep into workplaces and boardrooms.

“ChatGPT really showed how powerful they were,” said Eric Boyd, a Microsoft executive who leads its AI platform. “That’s changed the conversation in a lot of people’s minds, where they really understand it on a deeper level. My kids use it and my parents use it.”

However, there is reason for caution. While text generators like ChatGPT and Microsoft’s Bing chatbot can make writing emails, presentations, and marketing pitches faster and easier, they also tend to confidently present misinformation as fact. Image generators trained on a vast trove of digital art and photography have drawn copyright concerns from the original creators of those works.

“For companies that are really in the creative industries, it’s still an open question whether they want to make sure they have copyright protection for (the results of) these models,” said Anna Gressel, an attorney at the law firm Debevoise & Plimpton who advises companies on the use of AI.

A safer approach, Gressel said, is to treat the tools as a brainstorming “thought partner” whose output won’t be the final product.

“It helps create models that are then turned into something more concrete by a human,” she said.

That human role also helps allay fears of people being replaced by AI. Forrester analyst Rowan Curran said the tools should speed up some of the “little things” in office work, much as earlier innovations like word processors and spell checkers did, rather than put people out of work, as some fear.

“Ultimately, it’s part of the workflow,” Curran said. “It’s not like we’re talking about a large language model just generating a whole marketing campaign and launching it without experienced senior marketers and all sorts of other controls.”

It gets a little more difficult for consumer-facing chatbots that integrate with smartphone apps, Curran said, as safeguards are needed for technologies that can respond to user queries in unexpected ways.

Public awareness has fueled growing competition between cloud computing providers Microsoft, Amazon and Google, which sell their services to large organizations and have the massive computing power needed to train and run AI models. Microsoft earlier this year said it would invest billions more into its partnership with OpenAI, though it also competes with the startup as a direct provider of AI tools.

Google, which pioneered generative AI but has been cautious about introducing it to the public, is now racing to catch up commercially, including with its upcoming Bard chatbot. Facebook parent Meta, another AI research leader, builds similar technology but doesn’t sell it to businesses in the same way as its big tech peers.

Amazon has taken a more muted tone, but it makes its ambitions clear through its partnerships, most recently an expanded collaboration between its cloud computing division AWS and the startup Hugging Face, maker of a ChatGPT rival called Bloom.

Hugging Face decided to double down on its Amazon partnership after seeing the explosion in demand for generative AI products, said Clement Delangue, the startup’s co-founder and CEO. But Delangue contrasted his company’s approach with that of competitors like OpenAI, which doesn’t disclose its code or datasets.

Hugging Face hosts a platform where developers share open-source AI models for text, image, and audio tools, which can form the basis for a range of products. That transparency is “really important because it allows regulators, for example, to understand and regulate these models,” he said.

It’s also a way for “underrepresented individuals to understand where the biases may be (and) how the models were trained,” so that the biases can be mitigated, Delangue said.