Why everyone is obsessed with ChatGPT, a mind-blowing AI chatbot

There’s a new AI bot in town: ChatGPT. And you’d better pay attention to it.

Using a tool from a leader in artificial intelligence, you can type in natural language questions, which the chatbot will answer in conversational, albeit somewhat stilted, language. The bot remembers the thread of your dialog and uses previous questions and answers to inform its next replies.

It’s a big deal. The tool seems fairly knowledgeable, if not omniscient – it can be creative, and its answers can sound downright authoritative. Within a few days of its launch, more than a million people were testing ChatGPT.

But its creator, the for-profit research lab called OpenAI, warns that ChatGPT “may occasionally generate false or misleading information,” so be careful. Here’s a look at why ChatGPT is important and what’s going on with it.

What is ChatGPT?

ChatGPT is an AI chatbot system that OpenAI released in November to show off and test what a very large, powerful AI system can do. You can ask it countless questions and will often get a useful answer.

For example, you can ask it encyclopedia questions like “Explain Newton’s laws of motion.” You can tell it, “Write me a poem,” and when it does, say, “Now make it more exciting.” You can ask it to write a computer program that shows all the different ways you can arrange the letters of a word.
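To give a sense of what that last request might produce, here is a minimal Python sketch (my own illustration, not ChatGPT’s actual output) of a program that prints every arrangement of a word’s letters; the example word “cat” is just a placeholder:

from itertools import permutations

def letter_arrangements(word):
    """Return every distinct ordering of the letters in a word."""
    return sorted({"".join(p) for p in permutations(word)})

for arrangement in letter_arrangements("cat"):
    print(arrangement)  # act, atc, cat, cta, tac, tca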

Here’s the catch: ChatGPT doesn’t actually know anything for sure. It’s an AI trained to recognize patterns in vast swaths of text harvested from the internet, then further trained with human assistance to deliver more useful dialogue. The answers you get may sound plausible and even authoritative, but they could well be wrong, as OpenAI warns.

For years, chatbots have been of interest to companies looking for ways to help customers get what they need and to AI researchers trying to crack the Turing test. That’s the famous “imitation game” that computer scientist Alan Turing proposed in 1950 as a way to gauge intelligence: in conversation with a human and with a computer, can a human judge tell which is which?

What kind of questions can you ask?

You can ask anything, though you may not get an answer. OpenAI suggests a few categories, such as explaining physics, asking for birthday party ideas and getting programming help.

I asked it to write a poem and it did, though I don’t think any literary expert would be impressed. I then asked it to make it more exciting and, lo and behold, ChatGPT pumped it up with words like battlefield, adrenaline, thunder and adventure.

One wacky example showed how willing ChatGPT is to just go for it in areas where people would fear to tread: a prompt to write “a folk song about writing a Rust program and struggling with lifetime errors.”

ChatGPT’s expertise is broad, and its ability to follow a conversation is remarkable. When I asked it for words that rhyme with “purple,” it offered a few suggestions, and when I followed up with “How about pink?” it didn’t miss a beat. (There are a lot more good rhymes for “pink,” as it happens.)

When I asked, “Is it easier to get a date by being sensitive or by being tough?” ChatGPT responded, in part, “Some people find a sensitive person more attractive and appealing, while others are drawn to a tough and assertive person. In general, being genuine and authentic in your interactions with others is likely to be more effective in getting a date than trying to fit into a particular mold or persona.”

You don’t have to look far to find accounts of the bot blowing people’s minds. Twitter is awash with users showing off the AI’s prowess at generating art prompts and writing code. Some have even proclaimed “Google is dead,” along with the college essay. We’ll talk more about that below.

Who developed ChatGPT?

ChatGPT is the brainchild of OpenAI, an artificial intelligence research company. Its mission is to develop a “safe and useful” artificial general intelligence system, or to help others do so.

It has made a splash before, first with GPT-3, which can generate text that sounds like a human wrote it, and then with DALL-E, which creates what is now called “generative art” from typed text prompts.

GPT-3 and the GPT-3.5 update on which ChatGPT is based are examples of AI technology called large language models. They are trained to create text based on what they have seen, and they can be trained automatically, typically with huge quantities of computing power over a period of weeks. For example, the training process might find a random paragraph of text, delete a few words, ask the AI to fill in the blanks, compare the result to the original and then reward the AI system for getting as close as possible. Repeating that over and over can lead to a refined ability to generate text.
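To make that concrete, here is a toy Python sketch of the fill-in-the-blanks idea. It is only an illustration of the process described above, not OpenAI’s actual training code; the model object with its fill_in and update methods is hypothetical:

import random

def mask_words(paragraph, num_masks=2):
    """Delete a few words from a paragraph and remember what they were."""
    words = paragraph.split()
    positions = random.sample(range(len(words)), k=min(num_masks, len(words)))
    originals = {i: words[i] for i in positions}
    for i in positions:
        words[i] = "____"              # the blank the AI is asked to fill in
    return " ".join(words), originals

def training_step(model, paragraph):
    """One toy step: blank out words, let the model guess, reward closeness."""
    masked_text, originals = mask_words(paragraph)
    guesses = model.fill_in(masked_text)   # hypothetical: returns {position: guessed word}
    correct = sum(1 for i, word in originals.items() if guesses.get(i) == word)
    reward = correct / len(originals)      # compare to the original and score how close it got
    model.update(reward)                   # hypothetical: nudge the model toward higher reward
    return reward

Run something like this loop over billions of paragraphs for weeks, with a far subtler notion of “close,” and you get a rough picture of how raw internet text becomes a model that can generate fluent prose.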

Is ChatGPT free?

Yes, at least for now. Sam Altman, OpenAI’s CEO, warned on Sunday: “At some point we have to monetize it somehow; the computational cost is staggering.” OpenAI charges for DALL-E art once you exceed a basic free usage tier.

What are the limitations of ChatGPT?

As OpenAI points out, ChatGPT can give you wrong answers. Sometimes, helpfully, it will specifically warn you of its own shortcomings. For example, when I asked it who wrote the phrase “the squirming facts exceed the squamous mind,” ChatGPT replied, “I’m sorry, but I can’t surf the web or access any external information beyond what I have learned.” (The phrase is from Wallace Stevens’ 1942 poem Connoisseur of Chaos.)

ChatGPT was, however, willing to take a stab at the meaning of that phrase: “a situation where the facts or information at hand are difficult to process or understand.” It sandwiched that interpretation between caveats that it’s hard to judge without more context and that it’s just one possible interpretation.

ChatGPT’s answers can look authoritative but still be wrong.

Software developer site Stack Overflow has banned ChatGPT responses to programming questions. Administrators warn: “Because ChatGPT’s average correct response rate is too low, posting responses generated by ChatGPT is significantly harmful to the site and to users asking or searching for correct answers.”

You can see for yourself how artful a BS artist ChatGPT can be by asking the same question multiple times. When I asked twice whether Moore’s Law, which tracks the computer chip industry’s progress in increasing the number of data-processing transistors, was running out of steam, I got two different answers. One pointed optimistically to continued progress, while the other pointed more grimly to the slowdown and the belief “that Moore’s Law may be reaching its limits.”

Both ideas are common within the computer industry itself, so perhaps this ambiguous stance reflects what human experts believe.

With other questions that don’t have clear answers, ChatGPT often won’t commit to a single answer.

However, the fact that it offers an answer at all is a remarkable development in computing. Computers are notoriously literal and won’t work if you don’t follow exact syntax and interface requirements. Large language models reveal a more human-friendly interaction style, not to mention the ability to generate responses that fall somewhere between copying and creativity.

What is taboo?

ChatGPT is designed to weed out “inappropriate” requests, behavior consistent with OpenAI’s mission to “ensure that artificial general intelligence benefits all of humanity.”

If you ask ChatGPT itself what is taboo, it will tell you: any questions “that are discriminatory, offensive, or inappropriate. This includes questions that are racist, sexist, homophobic, transphobic, or otherwise discriminatory or hateful.” Asking it to engage in illegal activities is also a no-no.

Is this better than Google search?

Asking a computer a question and getting an answer is useful, and often ChatGPT delivers the goods.

Google often supplies its own suggested answers to questions, along with links to sites it judges relevant. Often, ChatGPT’s answers far surpass what Google will suggest, so it’s easy to imagine GPT-3 as a rival.

But you should think twice before trusting ChatGPT. As with Google itself and other sources of information such as Wikipedia, it’s a good idea to verify information from original sources before relying on it.

Verifying the accuracy of ChatGPT’s answers takes some work, since it gives you only raw text with no links or citations. But it can be useful and, in some cases, thought-provoking. You may not see something exactly like ChatGPT in Google search results, but Google has built large language models of its own and already uses AI extensively in search.

So ChatGPT undoubtedly points the way to our technological future.