What can the extraordinary AI chatbot actually do?


Since its launch in November last year, ChatGPT has become an exceptional hit. Essentially a souped-up chatbot, the AI program can answer life’s biggest and smallest questions, and write college essays, fictional stories, haikus, and even cover letters. It does this by drawing on what it has gleaned from a staggering amount of text on the web, with careful guidance from human experts. Ask ChatGPT a question, as millions have done over the last few weeks, and it will do its best to answer, unless it knows it can’t. The answers are written confidently and fluently, even if they are sometimes spectacularly wrong.

The program is the latest from OpenAI, a research lab in California, and is based on the team’s earlier AI, GPT-3. Known in the field as a large language model, or LLM, the AI is fed hundreds of billions of words in the form of books, conversations, and web articles, from which it builds a statistical model of which words and phrases are most likely to follow the preceding text. It’s a bit like text prediction on a cell phone, but massively scaled up to generate whole answers instead of single words.
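The principle can be illustrated with a toy sketch: count which word tends to follow each word in a small corpus, then predict by picking the most frequent continuation. This is only an illustration of the statistical idea, not OpenAI’s actual model, which uses a neural network trained on billions of words.

```python
from collections import Counter, defaultdict

# Tiny corpus; a real LLM trains on hundreds of billions of words.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # the most frequent continuation of "the"
```

Chaining such predictions word by word is, in a vastly simplified sense, how the model generates whole passages of text.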

The significant advance in ChatGPT lies in the additional training it has received. The initial language model was refined by feeding it a variety of questions and answers written by human AI trainers, which were then incorporated into its data set. Next, the program was asked to produce several different answers to a variety of questions, which human experts ranked from best to worst. This human-led fine-tuning means that ChatGPT is often very impressive at figuring out what information a question is really after, gathering the right information, and phrasing an answer naturally.
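The ranking step can be sketched in a few lines: each best-to-worst ranking from a human trainer is expanded into pairwise “this answer is preferred over that one” examples, which is the kind of signal a reward model can learn from. This is a toy illustration of the idea, not OpenAI’s actual training code, and the example answers are invented.

```python
# Hypothetical example: three model answers to one question,
# ranked best to worst by a human trainer.
ranked_answers = [
    "Paris is the capital of France.",        # ranked best
    "I think the capital might be Paris.",
    "France's capital is Lyon.",              # ranked worst
]

def ranking_to_pairs(ranking):
    """Expand one best-to-worst ranking into (preferred, worse) pairs."""
    pairs = []
    for i, preferred in enumerate(ranking):
        for worse in ranking[i + 1:]:
            pairs.append((preferred, worse))
    return pairs

for better, worse in ranking_to_pairs(ranked_answers):
    print(f"prefer: {better!r} over {worse!r}")
```

A ranking of n answers yields n·(n−1)/2 such preference pairs, so a single human judgment produces several training examples.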


The result is “scary good,” according to Elon Musk, as many early users (including college students who see it as a savior for overdue essays) will attest. It is also harder to corrupt than previous chatbots: ChatGPT was designed to reject inappropriate questions and to avoid making things up by churning out answers to problems it wasn’t trained on. For example, ChatGPT knows nothing about the world after 2021, as its training data has not been updated since then. It also has other, more fundamental limitations. ChatGPT has no concept of truth, so there is no guarantee its answers are correct, even when they are fluent and plausible.

As OpenAI states, ChatGPT “will sometimes write plausible-sounding but incorrect or nonsensical responses” and “will sometimes respond to harmful instructions or exhibit biased behavior.” Its answers can also be excessively long, a problem its developers attribute to trainers who “prefer long answers that look more expansive”.

One of the biggest problems with ChatGPT is how confidently it can come back with untruths. You should absolutely not rely on it blindly; you have to check what it says.

We’re nowhere near AI’s Hollywood dream. ChatGPT can’t tie shoelaces or ride a bike. If you ask it for an omelette recipe, it will probably do a good job, but that doesn’t mean it knows what an omelette is. It’s very much a work in progress, but a transformative one nonetheless.