Published on February 24, 2023
As an artificial intelligence researcher, Cynthia Rudin has watched the technology’s recent explosive growth with a keen, concerned eye.
She sees both enormous potential and overwhelming risk in the current state of the AI industry, a wild west of unbridled experimentation, investment and expansion. The recent rise of ChatGPT, an AI-based tool that lets users request and edit written content generated by an algorithm, has shed new light on the technology, and Rudin says lawmakers need to get a grip on it all, and fast.
Rudin is the Earl D. McLean, Jr. Professor of Computer Science, Electrical and Computer Engineering, Statistics, Mathematics and Biostatistics, and Bioinformatics at Duke University, where she directs the Interpretable Machine Learning Lab. She recently spoke to Duke Today about her many concerns about the growth and power of artificial intelligence and the industry that builds tools with it.
Here are excerpts:
You feel that artificial intelligence technology is currently out of control. Why?
AI technology is currently like a runaway train and we are trying to follow it on foot. I feel this way because the technology is advancing very quickly. It’s amazing what it can do now compared to a year or two ago.
Misinformation can be generated very, very quickly. Recommendation systems that push content to people can also steer us in directions we don’t want. And I feel like people haven’t had a chance to weigh in yet. It’s really tech companies forcing it on us instead of giving people the power to choose what they want.
Are there incentives for tech companies to act ethically when it comes to AI?
They have an incentive to make a profit, and if they are monopolies, they have no real incentive to compete with other companies on ethics or anything else that people want. The problem is that when they say things like, “We want to democratize AI,” it’s really hard to believe when they’re making billions and billions of dollars. So it would be better if these companies weren’t monopolies and people had a choice about how to use this technology.
Why do you think it is so important for the federal government to regulate tech companies?
The government should definitely step in and regulate AI. It’s not that they haven’t been warned enough. The technology has been developing for years. The same technology used to create ChatGPT was used in the past to create chatbots, which were actually pretty good. Not as good as ChatGPT, but pretty good. So we’ve had enough warning. Content recommendation systems have been in use for many years, and we have yet to regulate them in any way. One reason is that the government does not yet have a mechanism to regulate AI. There is no (federal) commission for AI. There are commissions for many other things, but not for AI.
How could this AI revolution most affect people in their daily lives? What should people pay attention to?
AI affects ordinary people every day of their lives. When you go to a website, the advertisements on that page are tailored specifically to you. Every time you watch content on YouTube, recommendation systems decide what to suggest next based on your data. When you read Twitter, the content you see and the order in which you see it is determined by an algorithm. All of these are AI algorithms that are essentially unregulated. So ordinary people are constantly interacting with AI.
Do people have a real say in how this technology is forced upon them?
Generally, no. You have no way of tweaking the algorithm to deliver the content you want. If you know you’re happier when the algorithm is tuned a certain way, there’s no real way for you to change it. It would be nice if you could choose among a variety of companies for these different types of recommender systems. Unfortunately, there aren’t many companies out there, so you don’t really have much to choose from.
What is the worst case scenario you can imagine if there is no regulation?
Misinformation is not innocent. It does real harm to people on a personal level. It was the cause of wars in the past. Think World War II, think Vietnam. What I’m really concerned about is that in the future, misinformation will lead to a war that AI will be at least partly to blame for.
Many of these companies simply claim that they are “democratizing” artificial intelligence with these new tools.
One thing I’m concerned about is that you have these companies developing these tools, and they’re very excited about releasing them to people. And certainly the tools can be useful. But I think if they were victims of AI-based bullying, or had fake pictures of themselves created online that they didn’t want, or if they were about to become victims of a massacre fueled by AI-powered misinformation, they might feel differently.
How does content moderation fit into all of this?
There is a lot of very dangerous content out there, and a lot of dangerous misinformation that has claimed many lives. I’m talking specifically about misinformation around the Rohingya massacres, the January 6, 2021 insurrection, and vaccines. While it’s important that we have free speech, it’s also important that content is moderated rather than amplified. Even if people say things we don’t agree with, we don’t have to use algorithms to spread those things. When trolls from other countries try to influence politics or have some kind of social impact, they can take over our algorithms and plant misinformation.
We really don’t want that. And child abuse content, for example, is something we need to be able to filter off the internet.