Michael Cohen Used Artificial Intelligence in Feeding Lawyer Bogus Cases
Michael D. Cohen, the onetime fixer for former President Donald J. Trump, mistakenly gave his lawyer bogus legal citations concocted by the artificial intelligence program Google Bard, he said in court papers unsealed on Friday.
The fictitious citations were used by the lawyer in a motion submitted to a federal judge, Jesse M. Furman. Mr. Cohen, who pleaded guilty in 2018 to campaign finance violations and served time in prison, had asked the judge for an early end to the court’s supervision of his case now that he is out of prison and has complied with the conditions of his release.
The ensuing chain of misunderstandings and mistakes ended with Mr. Cohen asking the judge to exercise “discretion and mercy.”
In a sworn declaration made public on Friday, Mr. Cohen explained that he had not kept up with “emerging trends (and related risks) in legal technology and did not realize that Google Bard was a generative text service that, like ChatGPT, could show citations and descriptions that looked real but actually were not.”
He also said he had not realized that the lawyer filing the motion on his behalf, David M. Schwartz, “would drop the cases into his submission wholesale without even confirming that they existed.”
The episode could have implications for a Manhattan criminal case against Mr. Trump in which Mr. Cohen is expected to be the star witness. The former president’s lawyers have long attacked Mr. Cohen as a serial fabulist; now, they say they have a brand-new example.
The ill-starred filing was at least the second this year in Manhattan federal court in which lawyers cited bogus decisions generated by artificial intelligence. The legal profession, like others, is struggling to account for a novel technology meant to mimic the human brain.
Artificial intelligence programs like Bard and ChatGPT generate realistic responses by predicting which fragments of text are likely to follow the sequences that came before. Such programs draw on billions of examples of text ingested from across the internet. Although they can synthesize vast amounts of information and present it persuasively, there are still bugs to be worked out.
The three citations in Mr. Cohen’s case appear to be hallucinations created by the Bard chatbot, taking bits and pieces of actual cases and combining them with robotic imagination. Mr. Schwartz then wove them into the motion he submitted to Judge Furman.
Mr. Cohen, in his declaration, said he understood Bard to be “a supercharged search engine,” which he had used previously to find accurate information online.
Mr. Schwartz, in his own declaration, acknowledged using the citations and said he had not independently reviewed the cases because Mr. Cohen indicated that another lawyer, E. Danya Perry, was providing suggestions for the motion.
“I sincerely apologize to the court for not checking these cases personally before submitting them to the court,” Mr. Schwartz wrote.
Barry Kamins, a lawyer for Mr. Schwartz, declined to comment on Friday.
Ms. Perry has said she began representing Mr. Cohen only after Mr. Schwartz filed the motion. She wrote to Judge Furman on Dec. 8 that after reading the already-filed document, she could not verify the case law being cited.
In a statement at the time, she said that “consistent with my ethical obligation of candor to the court, I advised Judge Furman of this issue.”
She said in a letter made public on Friday that Mr. Cohen, a former lawyer who has been disbarred, “did not know that the cases he identified were not real and, unlike his attorney, had no obligation to confirm as much.”
“It must be emphasized that Mr. Cohen did not engage in any misconduct,” Ms. Perry wrote. She said on Friday that Mr. Cohen had no comment, and that he had consented to the unsealing of the court papers after the judge raised the question of whether they contained information protected by the attorney-client privilege.
The imbroglio surfaced when Judge Furman said in an order on Dec. 12 that he could not find any of the three decisions. He ordered Mr. Schwartz to provide copies or “a thorough explanation of how the motion came to cite cases that do not exist and what role, if any, Mr. Cohen played.”
The matter could have significant implications, given Mr. Cohen’s pivotal role in a case brought by the Manhattan district attorney that is scheduled for trial on March 25.
The district attorney, Alvin L. Bragg, charged Mr. Trump with orchestrating a hush money scheme that centered on a payment Mr. Cohen made during the 2016 election to a pornographic film star, Stormy Daniels. Mr. Trump has pleaded not guilty to 34 felony charges.
Seeking to rebut Mr. Trump’s lawyers’ claims that Mr. Cohen is untrustworthy, Mr. Cohen’s defenders have said that he lied on Mr. Trump’s behalf but has told the truth since splitting with the former president in 2018 and pleading guilty to the federal charges.
On Friday, Mr. Trump’s lawyers immediately seized on the Google Bard revelation. Susan R. Necheles, a lawyer representing Mr. Trump in the coming Manhattan trial, said it was “typical Michael Cohen.”
“He’s an admitted perjurer and has pled guilty to multiple felonies and this is just an additional indication of his lack of character and ongoing criminality,” Ms. Necheles said.
Ms. Perry, the lawyer now representing Mr. Cohen on the motion, said that Mr. Cohen’s willingness to have the filings unsealed showed he had nothing to hide.
“He relied on his lawyer, as he had every right to do,” she said. “Unfortunately, his lawyer appears to have made an honest mistake in not verifying the citations in the brief he drafted and filed.”
A spokeswoman for Mr. Bragg declined to comment on Friday.
Prosecutors may argue that Mr. Cohen’s actions were not intended to defraud the court, but rather, by his own admission, were a product of a woeful misunderstanding of new technology.
The issue of lawyers relying on chatbots exploded into public view earlier this year after another federal judge in Manhattan, P. Kevin Castel, fined two lawyers $5,000 after they admitted filing a legal brief filled with nonexistent cases and citations, all generated by ChatGPT.
Such cases appear to be rippling through the nation’s courts, said Eugene Volokh, a law professor at U.C.L.A. who has written about artificial intelligence and the law.
Professor Volokh said he had counted a dozen cases in which lawyers or litigants representing themselves were believed to have used chatbots for legal research that ended up in court filings. “I strongly suspect that this is just the tip of the iceberg,” he said.
Stephen Gillers, a legal ethics professor at New York University School of Law, said: “People should understand that generative A.I. is not the bad guy here. It holds much promise.”
“But lawyers cannot treat A.I. as their co-counsel and just parrot what it says,” he added.
The nonexistent cases cited in Mr. Schwartz’s motion — United States v. Figueroa-Flores, United States v. Ortiz and United States v. Amato — came with corresponding summaries and notations that they had been affirmed by the U.S. Court of Appeals for the Second Circuit.
Judge Furman noted in his Dec. 12 order that the Figueroa-Flores citation actually referred to a page from a decision that was issued by a different federal appeals court and “has nothing to do with supervised release.”
The Amato case named in the motion, the judge said, actually concerned a decision of the Board of Veterans’ Appeals, an administrative tribunal.
And the citation to the Ortiz case, Judge Furman wrote, appeared “to correspond to nothing at all.”
William K. Rashbaum contributed reporting.