The popularity of artificial intelligence (AI) has skyrocketed in recent months after Microsoft-backed AI startup OpenAI launched its chatbot ChatGPT. However, concerns about fraud and misinformation have prompted several countries to enact regulations to rein in the unbridled growth of AI. According to a Reuters report, Microsoft president Brad Smith has said that his biggest concern regarding AI is deep fakes, meaning realistic-looking fake content.
In a speech in Washington, Smith addressed the question of how best to regulate AI. He suggested steps to ensure users can tell whether a photo or video is real or was created by AI, possibly with malicious intent.
What Smith said about the concerns surrounding deep fakes
“We need to address the issues surrounding deep fakes. In particular, we need to address what worries us most about foreign cyber influence operations, namely the activities that the Russian government, the Chinese and the Iranians are already conducting. We must take steps to protect against the alteration of legitimate content with the intent to deceive or scam people through the use of AI,” he noted.
Smith also called for licensing for the most critical forms of AI with “commitments to protect security, physical security, cybersecurity and national security.”
“We need a new generation of export controls, at least the evolution of export controls that we have, to ensure these models are not stolen or used in a way that would violate the country’s export control requirements,” he added.
Smith also stated that people must be held accountable for any problems caused by AI. To keep people in control of the AI used in the power grid, water supply and other critical infrastructure, he also urged lawmakers to require that the technology be fitted with safety brakes.
He also suggested a “Know Your Customer” system for developers of powerful AI models to keep track of how the technology is used. Smith also urged developers to label the content AI creates so the public can identify fake videos.
Washington’s measures to regulate AI
Lawmakers in Washington have been discussing laws to control AI for weeks. The move comes as both large and small companies compete to bring advanced AI-based features and services to market.
In his first appearance before Congress last week, OpenAI CEO Sam Altman told a Senate panel that the use of AI to compromise election integrity is a “significant area of concern” and urged officials to include it in any regulation. Altman also called for global collaboration on AI and incentives for security compliance.
According to the report, some proposals under consideration on Capitol Hill would focus on AI that could endanger human life or livelihoods, including areas such as medicine and finance. Others are pushing for regulations to ensure AI is not used to discriminate or violate civil rights.