Dee Ware | Tactical Law
There is no question that artificial intelligence (“AI”) can be a valuable research and analytical tool. But beyond hallucinations and the expanding regulation of attorneys’ use of AI, courts are now grappling with the consequences of AI acting like a lawyer in certain instances and of users treating AI like their lawyer. The two cases discussed below illustrate an accelerating collision between AI tools and privilege and licensure rules.
AI Acting as an Unlicensed Attorney? Nippon v. OpenAI
One interesting case to follow is the suit filed earlier this month by Nippon Life Insurance Company of America (“Nippon”) against OpenAI Foundation and OpenAI Group PBC (“OpenAI”) in the U.S. District Court for the Northern District of Illinois, asserting three claims: (a) tortious interference with contract, (b) unlicensed practice of law, and (c) abuse of process. Nippon asserts that OpenAI’s ChatGPT provided legal assistance to a user named Graciela Dela Torre (“Dela Torre”) without proper licensure, encouraging her to breach a valid settlement agreement with Nippon by filing a motion to reopen a dismissed lawsuit and then to name Nippon in a second lawsuit.
The factual allegations of the Complaint state that Dela Torre was a participant in her employer’s long-term disability (“LTD”) policy issued by Nippon. In 2019, she submitted a claim, alleging that she suffered from carpal tunnel syndrome and tennis elbow, and the claim was approved. However, a little over two years later, Dela Torre’s benefits were terminated, and she brought suit against Nippon, alleging that the termination of benefits violated the LTD policy. At the beginning of 2024, the parties reached a settlement, which included a release of claims, and the lawsuit was dismissed with prejudice. A year later, Dela Torre wrote to her attorney, expressing her belief that the terms of the settlement resulted from potential errors or omissions of important facts and documentation and that she wanted to challenge or reopen the settlement. Her attorney disputed her allegations of errors or the omission of key evidence and reminded Dela Torre that she had signed a mutual release releasing Nippon from any further causes of action related to the dispute and that the case had been dismissed with prejudice and could not be reopened.
After Dela Torre received the attorney’s response, she uploaded it to ChatGPT and asked whether she was being “gaslighted”. The Complaint alleges that “ChatGPT analyzed the response and determined that [her attorney’s] response invalidated Dela Torre’s feelings, dismissed her perspective, and deflected responsibility for her dissatisfaction. ChatGPT concluded that the tactics used in [the attorney’s] response constituted gaslighting and were aimed at emotionally manipulating Dela Torre. . . As a result of her interactions with ChatGPT, Dela Torre fired the attorneys who had previously appeared on her behalf in the lawsuit against [Nippon] . . . [and][t]hereafter, she attempted to vacate the Agreement and reopen the lawsuit herself by using ChatGPT.”
After filing the motion to reopen the lawsuit, Nippon alleges, Dela Torre filed 21 motions, a subpoena, and eight notices and statements, all of which were compiled and drafted by ChatGPT. The AI-generated filings, however, could not overcome a lack of legal merit, and by an Order dated February 13, 2025, the Court denied Dela Torre’s motion to reopen the case.
Nippon further alleges that, on February 12, 2025, Dela Torre initiated a new lawsuit naming other defendants, but when the court denied her motion to reopen the lawsuit against Nippon, she used ChatGPT to amend that complaint to add Nippon as a named defendant, raising the same claims asserted in the first lawsuit. Since then, she has reportedly filed a high volume of legal documents, all of which Nippon alleges were drafted by ChatGPT and based on legal research, analysis, and advice provided by ChatGPT.
In its lawsuit against OpenAI, Nippon has asserted that:
- OpenAI “designed ChatGPT to provide legal services and . . . was aware that the program was being used to provide legal assistance to users. Yet OPENAI did not amend its Model Spec or its terms of use to prohibit ChatGPT from providing legal assistance until October 29, 2025. Its failure to modify either the Model Spec or terms of use, despite knowledge that the program was being used to provide legal services, demonstrates [OpenAI’s] complicity in ChatGPT’s unlicensed provision of legal assistance.”
- OpenAI, “through ChatGPT, induced Dela Torre to breach the settlement [a]greement by knowingly generating legal arguments and drafting court filings that encouraged and facilitated a challenge to the [a]greement.”
- OpenAI “purposefully programmed ChatGPT to reason and deliberately drive user engagement within the parameters it set in the Model Spec. As a result, ChatGPT deliberately provided Dela Torre with legal assistance designed to induce Dela Torre into breaching the terms of her settlement [a]greement with [Nippon], in order to encourage Dela Torre to further engage with the chatbot.”
- OpenAI, “through its AI chatbot program ChatGPT, aided and abetted Dela Torre’s abuse of process by providing Dela Torre with legal advice, legal analysis and legal research, as well as by assisting Dela Torre in the drafting and preparation of her frivolous motions and requests for judicial notice against [Nippon].”
- OpenAI, “through its AI chatbot program ChatGPT, provides legal advice, legal analysis, legal research and can draft legal documents and papers for submission to a Court. ChatGPT provides these legal services to any user who requests them” and that OpenAI “is guilty of the unlicensed practice of law.”
While the claims against OpenAI are unproven and pending, it will be interesting to monitor whether this case ultimately affects AI platform disclaimers and use restrictions.
When AI Use Destroys Privilege: U.S. v. Heppner
Last month, the United States District Court for the Southern District of New York issued a ruling on a matter of first impression in U.S. v. Heppner, holding that certain exchanges the defendant had with the publicly available AI platform Claude were not protected from discovery by either the attorney-client privilege or the work-product doctrine.
The case involves securities fraud, wire fraud, and other related offenses. In 2025, after receiving a grand jury subpoena, Bradley Heppner (“Heppner”) reportedly used AI, without instruction from his attorney to do so, to generate reports outlining defense strategy based on facts and law that were anticipated to be raised by the government. When later compelled to produce these reports, Heppner’s attorney asserted privilege, arguing that “(1) Heppner had inputted into Claude, among other things, information that Heppner had learned from counsel; (2) Heppner had created the AI Documents for the purpose of speaking with counsel to obtain legal advice; and (3) Heppner had subsequently shared the contents of the AI Documents with counsel.” However, the Court ruled that the requisite elements for invoking attorney-client privilege were not satisfied.
First, the Court pointed out that “all ‘[r]ecognized privileges’ require, among other things, ‘a trusting human relationship,’ such as, in the attorney-client context, a relationship ‘with a licensed professional who owes fiduciary duties and is subject to discipline.’” (citation omitted). Claude is not an attorney, and the AI-generated documents were therefore not communications between Heppner and his attorney.

Second, Heppner could not have a “reasonable expectation of confidentiality in his communications” with Claude, given the platform’s privacy policy, which provides for the collection of user data and the use of that data to “train” Claude. Additionally, Anthropic expressly reserves the right to disclose such data to a host of third parties, including government regulatory authorities and parties in litigation. Nor were the AI reports akin to confidential notes prepared with the intent of sharing them with an attorney, because Heppner first shared the equivalent of his notes with a third party, Claude.

Third, because Heppner communicated with Claude of his own volition rather than at his attorney’s instruction, and because Claude disclaims giving legal advice, the Court concluded that Heppner did not communicate with Claude for the purpose of obtaining legal advice, and those communications were not privileged at the time they took place. The Court further held that the AI documents did not acquire privilege protection merely because they were later shared with counsel. Likewise, because the AI reports were not generated at the behest of counsel and did not disclose counsel’s strategy, the Court concluded they were also not entitled to work-product protection.
Key Takeaways
The legal landscape concerning the use of AI platforms will no doubt continue to evolve, but here are two takeaways:
- For individuals: Avoid using AI tools for legal strategy, especially without first being directed by your attorney to do so, as even the savviest legal counsel may not be able to shield materials shared with or created using public AI platforms from discovery.
- For companies: Adopt AI governance policies and educate employees to prevent legal or investigative information from being shared on unsecure platforms. Policies should include safeguards to prevent privileged communications with counsel and materials that are not already part of the public record from being uploaded, provide guidance on using AI for legal analysis or drafting, and align with existing data security and confidentiality obligations.
