Explainer: What is Generative AI, the technology behind OpenAI’s ChatGPT?

March 17 (Reuters) – Generative artificial intelligence has become a buzzword this year, capturing public attention and sparking a rush at Microsoft (MSFT.O) and Alphabet (GOOGL.O) to bring to market products with technology they believe will change the nature of work.

Here’s everything you need to know about this technology.

WHAT IS GENERATIVE AI?

Like other forms of artificial intelligence, generative AI learns from past data to take action. Rather than simply categorizing or identifying data like other AIs, it creates brand-new content – text, an image, even computer code – based on that training.

The most well-known generative AI application is ChatGPT, a chatbot released by Microsoft-backed OpenAI late last year. The AI powering it is known as a large language model because it takes a text prompt and writes a human-like response to it.

GPT-4, a newer model announced by OpenAI this week, is “multimodal” in that it can recognize images as well as text. OpenAI’s president demonstrated on Tuesday how it could take a photo of a hand-drawn mock-up for a website and turn it into a working one.

WHAT IS IT GOOD FOR?

Aside from demonstrations, companies are already using generative AI.

For example, the technology is helpful for creating a first draft of marketing copy, though it is not perfect and may need cleanup. One example comes from CarMax Inc (KMX.N), which used a version of OpenAI’s technology to summarize thousands of customer reviews and help buyers make used-car purchase decisions.

Generative AI can also take notes during a virtual meeting. It can design and personalize emails and create slide presentations. Microsoft Corp and Alphabet Inc’s Google each demonstrated these features in product announcements this week.


WHAT’S WRONG WITH IT?

(Photo: A response from ChatGPT, an AI chatbot developed by OpenAI, is seen on its website in this illustration taken February 9, 2023. REUTERS/Florence Lo/Illustration/File Photo)

Plenty. There are concerns about the potential misuse of the technology.

School systems have fretted about students turning in AI-drafted essays, undermining the hard work required to learn. Cybersecurity researchers have also raised concerns that generative AI could allow bad actors, even governments, to produce far more disinformation than before.

At the same time, the technology itself is prone to mistakes. Factual inaccuracies touted confidently by the AI, called “hallucinations,” and responses that seem erratic, like professing love to a user, are all reasons why companies have sought to test the technology before making it widely available.

IS IT JUST GOOGLE AND MICROSOFT?

These two companies are at the forefront of research and investment in large language models and are the largest to integrate generative AI into widely used software such as Gmail and Microsoft Word. But they are not alone.

Large companies like Salesforce Inc (CRM.N) as well as smaller ones like Adept AI Labs are either developing their own competing AI or packaging technology from others to bring generative AI to users through their software.

HOW IS ELON MUSK INVOLVED?

He was one of the co-founders of OpenAI along with Sam Altman. But the billionaire left the startup’s board in 2018 to avoid a conflict of interest between OpenAI’s work and the AI research of Tesla Inc (TSLA.O) – the electric vehicle maker he heads.


Musk has raised concerns about the future of AI and has advocated for a regulator to ensure the technology’s development serves the public interest.

“It’s quite a dangerous technology. I’m afraid I did some things to speed it up,” he said near the end of Tesla Inc.’s (TSLA.O) Investor Day event earlier this month.

“Tesla is doing good things in AI, I don’t know, it’s stressing me out, I’m not sure what else to say.”

(This story has been refiled to correct the dateline to March 17th.)

Reporting by Jeffrey Dastin in Palo Alto, California and Akash Sriram in Bengaluru; Editing by Saumyadeb Chakrabarty

Our standards: The Thomson Reuters Trust Principles.