New developments in artificial intelligence are unstoppable, despite a call from some experts to put the brakes on, business lawyer David Miller said.
“AI is already embedded in so many businesses — from gas compression monitoring to medical advances,” Miller said. “The problem with the pause is, if we find a way in the US to pause it, capital and labor will move elsewhere.”
An open letter from the Future of Life Institute last month sparked a national debate over whether AI development is moving too fast to be safe.
“Powerful AI systems should only be developed when we are sure that their effect is positive and their risks are manageable,” says the letter, which had more than 24,000 signatures as of Thursday.
“The problem with AI is that it’s difficult to define, so it’s difficult to regulate. Taking a break without defining it will be impossible and not beneficial,” said Miller, a University of Oklahoma graduate student and supporter who practices in Dallas. “We could spend 10 years defining AI and by then it could be something else.”
The legal boundaries of specialties like AI are usually defined by the courts, Miller said. More than 100 AI-related lawsuits were filed in 2022, 10 times as many as five years earlier, he said. “They will set some parameters as they move through the system.”
Industry associations can set standards for the use of AI. That would be much faster, but would require the cooperation of competitors, Miller said.
“We’re going to go through some very challenging times,” he said. “It’s a fascinating problem that we all have to grapple with.”
The results of a global survey released Wednesday show that 65% of business and IT leaders believe their organization suffers from data bias, and 78% believe data bias will become a bigger problem as the use of artificial intelligence and machine learning increases.
“Data Bias: The Hidden Risk of AI” was published by Progress, a company that helps its clients use data intelligently to improve business outcomes. Conducted by research firm Insight Avenue, the survey was based on interviews with more than 640 business and IT professionals at director level and above who use data to make decisions and use, or plan to use, AI and ML to support their decision making.
With AI and ML, the algorithms are only as good as the data used to create them. If data sets are flawed — or worse, biased — incorrect assumptions will feed into any resulting decision, the report says.
“Every day, data bias can negatively impact business operations and decision-making — from corporate governance and loss of customer trust to financial repercussions and potential legal and ethical risks,” said John Ainsworth, executive vice president and general manager of Progress.
Business practices based on biased AI data can have serious consequences for those negatively affected, the survey found, citing examples in retail, finance and healthcare.
A well-known retailer found that a flawed hiring algorithm selected only men for open technology positions, excluding otherwise qualified female candidates.
A financial institution incorrectly rejected qualified loan applicants because a flawed AI tool discriminated based on the applicant’s zip code.
A company using AI to determine health care eligibility incorrectly assigned Black patients a lower health-risk status, denying them care to which they were entitled and leading to negative medical outcomes.
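The zip-code case above illustrates the mechanism the report describes: a model trained on biased historical decisions reproduces that bias. A minimal sketch in Python, using entirely hypothetical data, zip codes, and a deliberately naive "model":

```python
# Hypothetical sketch: a model trained on biased loan history learns
# to reject applicants by zip code, regardless of qualification.
from collections import defaultdict

# Invented historical decisions: (zip_code, qualified, approved).
# Applicants in zip "75001" were historically denied across the board.
history = [
    ("75001", True,  False),
    ("75001", True,  False),
    ("75001", False, False),
    ("75002", True,  True),
    ("75002", False, False),
    ("75002", True,  True),
]

# "Train" a naive model: approve only if most past applicants
# in the same zip code were approved. Note the model never even
# looks at qualification -- only at the biased outcome history.
approvals = defaultdict(list)
for zip_code, _, approved in history:
    approvals[zip_code].append(approved)

def model(zip_code):
    past = approvals[zip_code]
    return sum(past) > len(past) / 2

# A qualified applicant from 75001 is rejected purely by zip code.
print(model("75001"))  # False: the model inherits the historical bias
print(model("75002"))  # True
```

The point of the sketch is that nothing in the code is malicious; the discrimination enters entirely through the training data, which is why flawed data sets feed incorrect assumptions into every resulting decision.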
In the legal space, Miller sees the benefit of using AI to search for relevant case law. He also knows of half a dozen cases in which AI was used to create a will or contract; the documents proved inadequate and raised the question of who is responsible — the user or the provider of the document?