Are Your AI Prompts Discoverable? Recent Cases Every Company and Law Firm Should Know

Are your ChatGPT, Claude, and Gemini prompts and responses safe from discovery? Recent decisions reveal when your AI chat history can be targeted in discovery and when it remains off-limits. Below, we look at how the current rules are being applied to frontier generative AI and what every company, in-house counsel, and law firm must do to avoid an AI discovery disaster.

To understand the latest rules of the road for AI discovery, you have to look at the copyright infringement lawsuit between The New York Times (NYT) and OpenAI. NYT sued OpenAI alleging massive copyright infringement, claiming OpenAI’s Large Language Models (LLMs) were improperly trained on millions of NYT’s copyrighted articles without permission or compensation. The core questions are whether OpenAI’s data-scraping qualifies as “fair use” and whether the resulting ChatGPT models improperly regurgitate NYT’s copyrighted content.

Against this backdrop, two recent discovery decisions by U.S. District Judge Ona T. Wang offer a masterclass on when AI prompts and logs are potentially discoverable.

In the first ruling, OpenAI sought to compel discovery of the New York Times’ internal “ChatExplorer” logs, including NYT employees’ prompts to an internal tool that used OpenAI models and the outputs they received.[1] OpenAI argued the logs were relevant to fair use and substantial non-infringing uses. The court disagreed and denied the motion to compel because the defendants had not shown that the ChatExplorer logs were relevant or proportional to the needs of the case. Judge Wang explained that the fair-use inquiry focuses on OpenAI’s use of NYT’s copyrighted works, not NYT’s downstream use of OpenAI’s tools. The court also held that NYT’s use of ChatExplorer was not probative of market harm in the relevant copyright sense because NYT cannot be a “competing substitute” for itself. Even if there were some marginal relevance, the burden of reviewing more than 80,000 log entries for privilege made the request disproportionate.

Judge Wang’s subsequent decision in the same litigation shows the other side of the coin. This time, instead of OpenAI seeking NYT’s internal AI-use logs, NYT sought production of OpenAI’s own consumer ChatGPT output logs to prove that the AI actually regurgitates the newspaper’s copyrighted articles.[2] In that ruling, Judge Wang denied OpenAI’s motion for reconsideration and required the company to produce a de-identified sample of 20 million consumer ChatGPT logs. The court held that these logs were highly relevant to the core merits of the case (whether the outputs reproduce NYT works) and proportional, as 20 million logs represent less than 0.05% of the tens of billions of logs OpenAI retains, or fewer than one in every 2,000. The outputs could also be relevant to OpenAI’s own defenses, including fair use and substantial non-infringing uses. The court further found that de-identification was already largely complete and that privacy protections were already in place.

Outside of the copyright infringement context, the recent ruling in United States v. Heppner sheds light on what happens when attorneys are not yet involved. Bradley Heppner is a defendant in a federal criminal case involving alleged securities and wire fraud.[3] Before his arrest, Heppner used Anthropic’s Claude to generate documents about the government’s investigation and possible defense strategy. U.S. District Judge Rakoff held that these generated documents were not protected by attorney-client privilege or the work-product doctrine. The court reasoned that Claude is not a lawyer, the communications were not confidential, and the documents were not prepared by or at the direction of counsel. The opinion also noted Anthropic’s terms and privacy disclosures, including that inputs and outputs could be collected, used to train Claude, and disclosed to third parties. The court did note that if counsel had directed the client to use Claude, a closer privilege question might have been presented, but that was not the case here. The ruling shows that a client’s use of AI without a lawyer in the loop may well be discoverable when privilege and work-product arguments collapse.

These cases tell us something important: AI prompts are not automatically discoverable just because they exist. A party still has to show that the prompts, outputs, or logs matter to the actual claims or defenses and that the burden of collecting and reviewing them is justified. Broad requests for AI logs can fail when they are too attenuated from the merits or too burdensome to review. On the other hand, AI prompts are more likely to be discoverable when 1) they are relevant to a claim or defense, 2) the request for them is proportional to the needs of the case, 3) they are not protected by attorney-client privilege or the work-product doctrine, 4) they were not created at counsel’s direction, and 5) the AI tool’s terms or settings undermine confidentiality. At bottom, traditional discovery rules apply, even to data generated by cutting-edge LLMs and frontier AI platforms.

These cases are a glaring reminder that professional responsibilities extend squarely to how a firm and its clients use AI. Prompts and outputs are electronically stored information (ESI) and are subject to the same ethical and discovery obligations as emails or texts. Model Rule 1.1, Comment 8, of the American Bar Association’s Model Rules of Professional Conduct requires attorneys to understand the benefits and risks associated with relevant technology. If a lawyer inputs sensitive client data into a public, consumer-facing generative AI tool that trains on user inputs or shares information with third parties, they risk violating the ethical rules. Worse, they could waive the attorney-client privilege entirely, meaning those prompts would lose their protection and become discoverable by opposing counsel.

Model Rules 3.3 and 3.4 also come into play because lawyers may not submit false AI-generated material to a tribunal, and they may not obstruct access to evidence or permit potentially relevant material to be lost or destroyed. If AI prompts or outputs are relevant to a claim, attorneys must take immediate steps to preserve them when litigation is reasonably anticipated. This means actively advising clients to turn off auto-delete functions for AI chat logs (an issue that arose in the OpenAI litigation, where the court had to intervene to stop the routine deletion of consumer output logs). And as recent sanctions decisions involving AI-hallucinated case law have demonstrated, law firms must independently verify any output generated by an LLM before relying on it.

In addition, ABA Model Rules 5.1 and 5.3 require firms and supervising lawyers to implement reasonable measures to ensure that both lawyers and nonlawyer assistants use AI in a manner consistent with professional obligations. Ignorance is not a defense: supervising lawyers and law firms must adopt clear, written policies that protect client data and comply with discovery obligations.

AI prompts are not automatically discoverable, but they are not automatically shielded, either. They are discoverable when they directly touch the merits of a claim, when the request for them is proportional to the needs of the case, and when they are not protected by privilege. Companies should implement strict AI retention and usage policies today, before they are forced to turn over their chat history tomorrow.

[1] In re OpenAI, Inc., Copyright Infringement Litig., 800 F. Supp. 3d 602, 606 (S.D.N.Y. 2025).

[2] In re OpenAI, Inc., Copyright Infringement Litig., No. 25-MD-3143 (SHS) (OTW), 2025 WL 3468036, at *1 (S.D.N.Y. Dec. 2, 2025).

[3] United States v. Heppner, No. 25 Cr. 503 (JSR), 2026 WL 436479, at *1 (S.D.N.Y. Feb. 17, 2026).