Several world governments are becoming increasingly nervous about the Pandora’s box of advanced artificial intelligence that has been opened wide with OpenAI’s public release of ChatGPT. As they ponder possible regulations, it’s unclear if the genie can even be forced back into the bottle.
On Tuesday, Canada’s privacy commissioner said he was investigating ChatGPT, joining counterparts in a growing list of countries – including Germany, France, and Sweden – that have raised concerns about the popular chatbot after Italy banned it outright on Sunday.
“AI technology and its impact on privacy is a priority for my office, as well as one of my main areas of focus as Commissioner,” Philippe Dufresne, Canada’s Privacy Commissioner, said in a statement.
Italy’s ban stemmed from a March 20 incident in which OpenAI acknowledged a flaw in the system that exposed users’ payment information and chat history. OpenAI briefly took ChatGPT offline to fix the bug.
“We don’t need a ban on AI applications, but ways to secure values such as democracy and transparency,” a spokesman for Germany’s Federal Ministry of the Interior told the newspaper Handelsblatt on Monday.
But is banning software and artificial intelligence even possible in a world where virtual private networks (VPNs) exist?
A VPN is a service that allows users to access the internet securely and privately by creating an encrypted connection between their device and a remote server. This connection masks the user’s real IP address, making it appear as if they are accessing the internet from the remote server’s location rather than their actual location.
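The masking effect described above can be sketched with a toy simulation (no real networking is involved, and the IP addresses are illustrative placeholders): a website only ever sees the address of whoever connects to it directly, so routing traffic through a relay hides the client’s own address.

```python
def serve(connecting_ip: str) -> str:
    """A web server records the IP address it sees on the incoming connection."""
    return connecting_ip

def via_vpn(client_ip: str, vpn_server_ip: str) -> str:
    """The client reaches the site through an encrypted tunnel to the VPN
    server, which forwards the request from its own address. The client's
    real IP never reaches the destination."""
    return serve(vpn_server_ip)

client_ip = "203.0.113.7"    # placeholder: user's real address
vpn_ip = "198.51.100.42"     # placeholder: VPN exit server in another country

print(serve(client_ip))           # direct connection: site sees the real IP
print(via_vpn(client_ip, vpn_ip)) # via VPN: site sees only the VPN server's IP
```

This is why geographic bans are hard to enforce: the destination site has no reliable way to tell a VPN server’s address apart from an ordinary user’s.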
Additionally, “an AI ban may not be realistic as many AI models are already in use and more are being developed,” Jake Maymar, vice president of innovation at AI consultancy Glimpse Group, told Decrypt. “The only way to enforce an AI ban would be to ban access to computers and cloud technology, which is not a practical solution.”
Italy’s attempt to ban ChatGPT comes amid growing concerns about artificial intelligence’s impact on privacy and data security, and its possible misuse.
One AI think tank, the Center for AI and Digital Policy, filed a formal complaint with the U.S. Federal Trade Commission last month, accusing OpenAI of deceptive and unfair practices, after an open letter signed by several high-profile members of the tech community called for a slowdown in the development of artificial intelligence.
While some are sounding the alarm over ChatGPT, others say the chatbot itself isn’t the problem, but rather how society intends to use it.
“What this moment offers is an opportunity to reflect on what kind of society we want to be – what rules we want to apply equally to all, AI-enabled or not – and what kind of economic rules best serve society,” Barath Raghavan, associate professor of computer science at USC Viterbi, told Decrypt. “The best policy responses will not be those that target specific technological mechanisms of today’s AI (which will quickly become obsolete), but rather behaviors and rules that we would like to see universally applied.”