When I was studying journalism, we had an assignment called News Day that was meant to recreate a day in the life of a reporter. We arrived at school in the morning and were each assigned a story to file by the end of the day. I’ve forgotten what my specific assignment was – it was 12 years ago – but it had something to do with climate change. What I remember with painful clarity is an interview with an academic who had agreed to help me.
After about 10 minutes he correctly guessed from my questions that I didn’t understand the topic at all. He told me to call him back after I’d done more research. It’s been 12 years and I still remember that incident every time I do an interview. Nothing like it has ever happened again, probably because fear of a repeat has forced me to do more than the minimum research.
Journalism units made up a third of my communications degree. These units were designed to be as practical as possible, to mimic real journalism. Aside from being thoroughly schooled by that environmental science professor, I remember getting practical tips on who is and isn’t a good source, how to structure long-form articles, and how to pitch stories to an editor.
On the other hand, I remember almost nothing from my communication classes, which consisted of impractically dense texts and essays on topics such as “language and discourse”. It’s a blur.
ChatGPT would have easily passed these communication classes. On Tuesday, OpenAI announced that thanks to a new update, its chatbot ChatGPT is now smart enough not only to pass the bar exam, but to finish in the top 10%.
It is easy to react to this with fear or awe. I certainly had a moment of unfocused fear. But that ChatGPT is technically qualified to practice law isn’t necessarily a sign of the AI apocalypse. It can even be a good thing.
It’s worth starting with the premise of a large language model like ChatGPT. It’s an artificial intelligence trained on vast amounts of data, from web pages to scientific texts, which then uses a complex prediction mechanism to generate human-sounding text. That’s amazing, and ChatGPT is impressive for meeting a challenge as demanding as the bar exam. But it’s not the paradigm shift it might sound like.
Exams are feats of endurance. They don’t test how knowledgeable students are – they test how much students can cram into their brains and regurgitate for a few hours. Much of that information is later forgotten, often joyfully. Exams are tests of dedication rather than knowledge. Of course, humans cannot compete with software when it comes to aggregating information. But that’s not the point of an exam.
Exams as a format are still relatively safe, especially when schools stick with good old pen and paper. Homework and college essays are on shakier ground.
ChatGPT has scared many educators. Students in New York and Los Angeles public schools are banned from using the app, as are students in many Australian high schools. Prestigious universities including Oxford and Cambridge have also banned your friendly neighborhood chatbot.
That educators fret about new technologies is an old story. In the 1970s, some worried that calculators would ruin math; in fact, they made it easier to teach more complicated equations. AI like ChatGPT can do a lot of homework, but so can Google. AI is flashier and more sophisticated, but it’s hardly a new threat. Teachers are now asking kids not to cheat with ChatGPT, just like teachers told me not to use Wikipedia.
I did not listen. I used Wikipedia anyway. I still use Wikipedia. Daily.
Asking me not to do so was unrealistic, just as it is unrealistic to expect students not to use ChatGPT and the deluge of services being unleashed by Google, Microsoft and Meta. Even if such a rule could be enforced, it would be counterproductive. If AI is going to be part of life, it’s better for students to figure out how best to work with it rather than against it. We all have a bit of a Luddite spirit in us, but the actual Luddite struggle was futile.
After years of artificial intelligence hype, ChatGPT has solidified the idea that AI is likely to disrupt many industries. But disruption can mean change rather than destruction, and in many cases that change will be for the better. Education is a prime candidate for improvement: the industry is notoriously slow-moving, and it could be energized by AI without the risk of mass layoffs, since teachers’ jobs tend to be safer than most.
The question is whether high schools, colleges and universities will rise to the challenge. Elon Musk tweeted that AI could herald the end of homework. Perhaps. But what if education adapts? Essays could be replaced by presentations tied to the coursework, or by practical assignments. For the essay assignments that remain, students could be graded on how persuasive they are rather than on their basic ability to repeat course concepts.
And if some university courses cannot be made more practical, that may be a sign that they never taught anything practical in the first place.
If I could travel back in time and use ChatGPT in my degree, it would only have helped me cheat on the assignments that were the least instructive of all. It would not have changed anything about the genuinely useful journalism tasks that taught me to prepare for interviews, always apply the rule of three and end opinion articles with a nod back to where they began.
Editors’ note: CNET uses an AI engine to create some personal finance explainers, which are edited and fact-checked by our editors. See this post for more information.
First published March 17, 2023 at 5:00 PM PT.