ChatGPT: USC experts explain what you need to know

Since its launch in November 2022, ChatGPT has grown tremendously in popularity and widespread usage, with millions of users around the world turning to generative AI technology to spark conversations that range from practical to creative.

But while it shows promise for uses like writing cover letters, debugging code, and even writing screenplays and song lyrics, the application's popularity also raises ethical dilemmas. How accurate are its answers? How was it trained? How could it change the way we live, for better or for worse? And would you trust it to act as your therapist?

To get a sense of the potential promises and dangers of ChatGPT, we reached out to a group of USC computer scientists and natural language processing experts.

Overall, how do you rate ChatGPT’s performance?

“I’m impressed with its ability to generate quick, coherent, and relevant responses and to maintain conversations. Some specific areas that are particularly impressive are its ability to generate code, debug code, and summarize web content, all in a multi-turn conversation where it can remember previous exchanges of information. I’m also reassured by its ability to enforce safeguards against some potentially toxic content, although people have found workarounds since its launch.” — Swabha Swayamdipta, Gabilan Assistant Professor and Assistant Professor of Computer Science

“I’m definitely thrilled with its performance. I am sure that many NLP researchers did not expect that this level of performance could be reached so quickly. The general idea behind it isn’t complicated; the devil is in the implementation details and the compute. Because of this, for researchers, it’s less of a scientific advance and more of a major win for the idea that ‘scaling can bring us much more.’” — Xiang Ren, Andrew and Erna Viterbi Early Career Chair and Assistant Professor of Computer Science


“In general, ChatGPT and the large pre-trained language models that have launched over the past few years have been surprisingly good at unconstrained language generation. ‘Good’ here means generated content that is relevant to the prompt or question and is syntactically correct and locally coherent. All in all, it’s hard to say what is and isn’t surprising in a fully closed model.” — Jesse Thomason, Assistant Professor of Computer Science

How is ChatGPT different from other language generation models?

“ChatGPT and its variants are specially trained to handle instructions well. ChatGPT was also designed to incorporate human feedback over multiple rounds of conversation. Typical language models are simply trained on text; that is, given some text, they predict which words should follow it. ChatGPT builds on the extremely extensive language modeling that went into its predecessor, GPT-3, which was trained on almost 45 TB of data. Of course, OpenAI has not released the exact details of how ChatGPT was trained, so much remains unknown.” — Swabha Swayamdipta
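The "typical language model" objective described in the quote above, predicting which words should follow a given text, can be illustrated with a toy sketch. This is a deliberately simplified bigram counter for illustration only; ChatGPT itself uses a large neural network, not frequency counts, but the training objective is the same in spirit.

```python
# Toy next-word predictor: for each word, count which words follow it,
# then predict the most frequent follower. A minimal sketch of the
# "predict the next word" objective, not how ChatGPT actually works.
from collections import Counter, defaultdict


def train_bigram(corpus: str):
    """Count, for each word, which words follow it in the corpus."""
    words = corpus.split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following


def predict_next(model, word: str):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = model.get(word)
    return counts.most_common(1)[0][0] if counts else None


model = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # prints "cat" ("cat" follows "the" twice)
```

Large models replace the frequency table with billions of learned parameters, which is what lets them generalize to text they have never seen.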

What are some of its limitations? Can we trust this kind of system?

“From my perspective, the most worrying issue is what we call ‘hallucination’ during ChatGPT’s conversations with humans. An answer can seem fairly believable to a layperson in tone and wording, but be dead wrong on the facts. This could be harmful in educational scenarios, and could tempt decision makers to base their predictions on the wrong evidence.” — Xiang Ren

“Although it hasn’t been trained to solve math problems, some of the basic mistakes it makes are quite disappointing. At a higher level, ChatGPT’s most fundamental limitation is its unreliability. For some questions it gives relevant, concise, and appropriate answers, while for others it is simply wrong. And it cannot predict when it is wrong. This will be a fundamental obstacle to its deployment.” — Swabha Swayamdipta


“I think ChatGPT and related models will make it much easier for government actors and malware or scam operations to flood user-contributed content pages and inboxes with large volumes of coherent, harder-to-detect spam.” — Jesse Thomason

Do you have concerns about the potential to generate fake content and how it might impact society? (Could it fuel a crisis of scholarly integrity, for example?)

“This is definitely a risk inherent in any language model — the tendency to ‘hallucinate’ new information that may seem real but isn’t. However, I believe that pretty soon we will get better at recognizing ChatGPT’s generations as distinct from human-written language, or at least we may be able to develop technology that can. And sure, it can fool peer reviewers, but it can’t appear at conferences, run experiments, or do field research. Still, there may be some instances where it successfully misleads people before we learn to recognize these counterfeits.” — Swabha Swayamdipta

Are there areas where you see an opportunity for ChatGPT to help people with their jobs?

“I think it could herald a different era of writing, one in which writers learn to use ChatGPT as an assistant that gives them ideas rather than writing from scratch. The same could be said of programmers. I think many are excited about ChatGPT’s potential to act as an AI therapist, which is something I’m not entirely comfortable with. For one thing, it cannot easily replace human therapists, especially for high-risk patients who are prone to harmful behavior. I think there are many risks associated with this capability, and safeguards need to be in place before this type of functionality is made widely available.” — Swabha Swayamdipta


Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of press releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.