It could be argued that in our 24/7 news culture, where access to information is instantaneous and seemingly limitless, we have more reason to worry than ever.
Globally there are ongoing conflicts (e.g. in Ukraine, Sudan or Yemen) and the now ubiquitous issues of migration and internal displacement.
Closer to home, there is the fear and hardship surrounding the cost of living crisis and its associated ramifications — and a growing disillusionment with the ability of conventional political parties to resolve these issues.
In such an environment, the role of responsible, accurate journalism is paramount. This is why there is currently such concern about artificial intelligence (AI) and the ability of applications like ChatGPT to “use machine learning algorithms to process massive amounts of text data, including books, news articles, Wikipedia pages, and millions of websites.”
A chatbot, or “artificial conversational entity”, is, at its most basic level, a computer program that allows a user to interact with a computer as if it were a human being. This means that an individual request can generate information drawn and interpreted from the vast amount of text available online.
The problem, as Emily Bell of the Columbia University Graduate School of Journalism writes, is that such applications “have absolutely no obligation to the truth. Just think how quickly a ChatGPT user could flood the internet with fake messages that appear to have been written by humans.”
Therein lies the problem of authenticity. On May 1, the online anti-misinformation group NewsGuard reported that in April it had identified 49 news and information sites that appeared to be generated almost entirely by AI. NewsGuard states that these websites publish large volumes of content across topics that would be instantly recognizable from legitimate news sites. Importantly, these sites “often do not disclose ownership or control,” and some carried content promoting “false narratives.”
This is the tip of the iceberg, and concerns are being raised about the rapid rise in AI capabilities. Elon Musk, who recently bought the social media site Twitter and was an early investor in OpenAI, the company behind ChatGPT, stated earlier this month: “Even a benign reliance on AI/automation is dangerous to civilization if it goes so far that we eventually… forget how the machines work.”
His words were soon echoed by the so-called “godfather of AI,” Geoffrey Hinton, who resigned from his post at Google with a warning about the “existential risk” of creating true digital intelligence.
As I write, on May 3, it is the 30th annual UNESCO World Press Freedom Day. Part of this year’s focus is on the impact of online disinformation and misinformation. Reporters Without Borders emphasizes that journalism is under threat from the “fake content industry,” and that “the ability of artificial intelligence to create what looks like journalism meant that the principles of rigor and reliability were easily circumvented.”
However, concerns about the impact of AI extend beyond journalism and into the rest of society. Sir Patrick Vallance, the UK’s outgoing chief scientific adviser, told the Commons’ Science, Innovation and Technology Committee that “there will be a big impact on jobs and that impact could be as big as the industrial revolution… There will be jobs that can be done by AI, which can either mean a lot of people don’t have jobs, or a lot of people have jobs that only a human could do.”
But is this view too dystopian? Isn’t the fear of new technology a recurring feature of modern society? I remember reading as a student that television was corrupting the minds of the young. Before that, it was the wireless: radio, it was said, was partly responsible for children’s declining interest in school, its broadcasts “upsetting the balance of their excitable minds.”
Artificial intelligence is already part of our lives in ways that we might underestimate. If we started enumerating the ways in which computerization has changed society, in the smallest and most significant ways, we might never stop.
One fact is inescapable: AI already makes many things in life easier and better. As the technology evolves and becomes more sophisticated at an accelerating pace, governments need to recognize its potential impact on society and monitor the behavior of the corporations developing it.
As US President Joe Biden noted, AI has enormous potential to help, for example, in the fight against climate change. However, technology developers need to be aware of the potential dangers to citizens.
The words of the Alliance for Universal Digital Rights (AUDRi) are apt: “We believe in a future in which all citizens of the global digital ecosystem, no matter who they are or where they live, can enjoy equal rights to security, liberty and dignity. But such a future can only come about when governments and leaders around the world come to mutual and binding agreements to uphold these rights, minimize the chances of their violation, and develop universal mechanisms to hold violators accountable.”
That is the spirit we need. For can humans not do so much more than machines?
* Dr Jewell is Director of Undergraduate Studies at Cardiff University’s School of Journalism, Media and Culture.