A New York attorney filed a brief citing six non-existent court decisions generated by ChatGPT. Experts warn that lawyers must ensure competence and confidentiality when using AI.
May 30 (Reuters) – A New York lawyer faces possible sanctions over an error-ridden brief he drafted with the help of ChatGPT.
It’s a scenario that legal ethics experts have been warning about since ChatGPT burst onto the scene in November, ushering in a new era of AI capable of generating human-like responses based on vast amounts of data.
Steven Schwartz of Levidow, Levidow & Oberman faces a sanctions hearing before U.S. District Judge P. Kevin Castel on June 8 after admitting he used ChatGPT to help research a brief in his client’s personal injury case against Avianca Airlines. The brief cited six non-existent court decisions.
Schwartz said in a court filing that he “greatly regrets” his reliance on the technology and was “unaware of the possibility that its contents could be false.”
Attorneys representing Avianca brought the nonexistent cases cited by Schwartz to the court’s attention. Schwartz did not respond to a request for comment Tuesday.
The American Bar Association’s Model Rules of Professional Conduct do not specifically address artificial intelligence. However, several existing ethical rules apply, experts say.
“Ultimately, you are responsible for the representations you make,” said Daniel Martin Katz, a professor at Chicago-Kent College of Law who teaches professional responsibility and studies artificial intelligence in law. “It’s your bar card.”
COMPETENCE

This rule requires lawyers to provide competent representation and keep abreast of changes in technology. They need to ensure the technology they use delivers accurate information – a significant challenge given that tools like ChatGPT have been shown to fabricate information. And lawyers must not lean so heavily on the tools that they make mistakes.
“Blindly relying on generative AI to provide you with the text you use to deliver services to your client is not going to work,” said Andrew Perlman, dean of Suffolk University’s law school and a leading expert on legal technology and ethics.
Perlman posits that competency rules may ultimately require some level of proficiency with artificial intelligence technology. AI could transform the practice of law so much that not using it could one day be akin to refusing to use computers for legal research, he said.
CONFIDENTIALITY

This rule requires attorneys to “make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.” Attorneys who feed client information into programs such as ChatGPT or Bing Chat risk having the AI companies use that data to train and improve their models, which could violate confidentiality rules.
“It’s one of the reasons why some law firms have specifically told attorneys not to use ChatGPT and similar programs for client matters,” said Holland & Knight partner Josias Dewey, who is developing internal artificial intelligence programs at his firm.
Some law-specific artificial intelligence programs, including Casetext’s CoCounsel and Harvey, address the confidentiality issue by shielding client data from outside AI vendors.
RESPONSIBILITIES REGARDING NON-LAWYER ASSISTANCE
Under this rule, attorneys must supervise the lawyers and non-lawyers who assist them to ensure their conduct conforms to professional rules. The ABA clarified in 2012 that the rule also applies to non-human assistance.
This means that lawyers must oversee the work of AI programs and understand the technology well enough to ensure it meets the ethical standards that lawyers are required to adhere to.
“You must make reasonable efforts to ensure that the technology you use is consistent with your own ethical responsibilities to your clients,” Perlman said.
Read more:

Startup backed by OpenAI brings chatbot technology to first major law firm

Will ChatGPT make lawyers obsolete? (Hint: be afraid)

Some law professors fear the rise of ChatGPT, while others see opportunities
Reporting by Karen Sloan
Our standards: The Thomson Reuters Trust Principles.