Flashback to 2022, ChatGPT & GPT-3
In 2023, AI no longer hides behind office building walls, IT systems or user interfaces in apps. It has crept into the hearts and minds of entire populations, including people who have no interest in technology.
Last year, OpenAI’s text-to-image model DALL-E 2 took the Internet by storm, followed by the equally impressive Midjourney and the open-source Stable Diffusion model. Then Lensa AI selfies flooded social media feeds. But it was OpenAI’s ChatGPT that sank AI deep into the collective consciousness. Now anyone with an internet connection can easily access the raw computer intelligence of a sophisticated chatbot, for free.
ChatGPT is a refined version of GPT-3, a general-purpose language model released by OpenAI in 2020. GPT-3 never had a commercial breakthrough like ChatGPT’s, as it was initially available only in a private beta via a waiting list (I never got access to it), and was then exclusively licensed to Microsoft. GPT-3 was trained on far more data, and contains far more parameters, than its predecessors. All of the articles on Wikipedia were included in the GPT-3 training set, yet they accounted for only about 0.5 percent of it.
As powerful and impressive as GPT-3 is, it can also be racist and terrible. It tends to indulge in prejudice and discrimination based on race, gender and religion, spewing obscenities, making racist jokes, condoning terrorism and accusing people of being rapists. Compared to GPT-3, ChatGPT is impressively good at sidestepping controversial topics, refusing to respond to inappropriate prompts and admitting when it doesn’t know the answer to a tricky question.
AGI & GPT-4
ChatGPT is still far from AGI (Artificial General Intelligence), the holy grail for AI researchers. OpenAI CEO Sam Altman says ChatGPT doesn’t even come close.
In short, AGI marks the point where AI achieves a certain level of autonomy and can surpass humans across many fields. Imagine a single AI model that could beat you at chess, write emails for you at work, and drive your car home. We have domain-specific AIs that can do each of those tasks well individually, but no superintelligence that can manage them all at will. AGI is still far away. Or is it?
In the summer of 2022, public debates about AGI gained momentum after The Washington Post reported that Google engineer Blake Lemoine believed Google’s chatbot LaMDA had come to life. Google responded that there was no evidence to support Lemoine’s claims and fired him shortly thereafter. But judge for yourself: here are his conversations with it.
Speaking of AGI, GPT-4 is slated for release in spring 2023. Nothing substantial has been revealed about it yet, but expectations are sky-high. The few people who have tried it are reportedly not allowed to talk about it due to NDAs. It should be as exciting a leap as GPT-3 was.
(Human) AI author Gary Marcus is certain that when GPT-4 is finally released, minds will be blown: it will be bigger and better than all the impressive things we’ve seen from OpenAI so far. However, Marcus predicts that GPT-4 will retain some of the same flaws as GPT-3, since it’s essentially built on the same architecture. For example, he predicts that GPT-4 will still make “head-shaking stupid mistakes”, mix truth with lies, hallucinate, and remain untrustworthy and unreliable.
We can probably assume that GPT-4 will be the world’s best bullshitter, capable of mimicking human thinking almost perfectly. What it will still lack is common sense, “the dark matter of intelligence”, as computer scientist Yejin Choi describes it:
One way to describe it is that common sense is the dark matter of intelligence. Normal matter is what we see, what we can interact with. For a long time we thought that was all there was in the physical world. It turns out that’s only 5 percent of the universe (..) It’s the unspoken, implicit knowledge that you and I share. It’s so obvious that we often don’t talk about it. For example, how many eyes does a horse have? Two. We don’t talk about it, but everyone knows it.
“Common sense” in humans is developed through physical experience; e.g., we don’t put our hands on a hot stove because we know it will hurt. AIs are “innately” unable to put two and two together like that, simply because they can’t experience the world the way we do. We can therefore expect GPT-4 to be impressive, but not autonomous or sentient: not Frankenstein’s monster or a rocket ship, but a great source of entertainment and wonder.