SDNY Rules Communications With a Public Generative AI Platform Are Not Protected by Attorney-Client Privilege or Work Product Doctrine

Key Points
- In a question of first impression, the U.S. District Court for the Southern District of New York (SDNY)[1] ruled that a criminal defendant’s written exchanges with a publicly available generative AI platform are not protected by the attorney-client privilege or the work product doctrine.
- The court found that no attorney-client relationship can exist with a public AI platform and that users lack a reasonable expectation of confidentiality due to the platform’s privacy policy, which discloses the use of “inputs” and “outputs” for AI training and reserves the right to share such data with third parties.
- The court also determined that the work product doctrine does not apply to materials prepared by a client of their own volition, without the direction or involvement of counsel.
- Whether privilege is protected within a closed, enterprise-grade AI system, where counsel directs the use of AI platforms and/or where inputs and outputs are not used to train models, remains an open question.
Background
On October 28, 2025, a grand jury indicted Bradley Heppner on multiple charges, including securities fraud and wire fraud, stemming from his time as an executive of several corporate entities. During a search of Heppner’s home upon his arrest, the FBI seized numerous documents and electronic devices, including approximately 31 documents that memorialized communications between Heppner and the generative AI platform “Claude,” which is operated by Anthropic.
Heppner’s counsel asserted that these “AI Documents” were privileged because Heppner input information learned from counsel to generate defense strategy reports in anticipation of an indictment, which he later shared with counsel. Heppner’s counsel conceded, however, that they did not direct Heppner to run the searches on the AI platform. In a “Privilege Protocol Stipulation,” the parties agreed that the Government would not inspect the AI Documents pending resolution of Heppner’s privilege claims, and Heppner’s counsel listed the AI Documents in a privilege log.
The Government subsequently moved for a ruling that the AI Documents were neither protected by the attorney-client privilege nor the work product doctrine.
The Court’s Decision
On February 17, 2026, U.S. District Judge Jed S. Rakoff issued a memorandum detailing his bench ruling granting the Government’s motion and determining that the AI Documents lacked the essential elements of both the attorney-client privilege and the work product doctrine.
Attorney-Client Privilege: The court held that the AI Documents were not shielded by the attorney-client privilege for three reasons. First, the AI platform is not a licensed attorney and therefore cannot form a fiduciary attorney-client relationship. Second, the communications were not confidential because the platform’s privacy policy expressly notifies users that it collects data on users’ “inputs” and Claude’s “outputs” to train its tools and may disclose that data to third parties, destroying any reasonable expectation of privacy. Third, the communications were not made for the purpose of obtaining legal advice: Heppner communicated with Claude of his own volition, not at the direction of counsel, and the AI tool explicitly disclaims the ability to provide formal legal advice or recommendations. The court further noted that sharing unprivileged documents with counsel after the fact does not retroactively shield them.
Work Product Doctrine: The court found that the AI Documents were not prepared by or at the behest of counsel. Because Heppner initiated the AI communications of his own volition, the documents did not reflect the mental processes or strategy of his attorneys at the time of their creation. In reaching this conclusion, the court disagreed with a prior SDNY magistrate judge decision, Shih v. Petal Card, Inc., 565 F. Supp. 3d 557 (S.D.N.Y. 2021), which had extended work product protection to materials generated by non-lawyers without an attorney’s direction, emphasizing that the core purpose of the doctrine is to protect lawyers’ mental processes.
Practical Implications
The Heppner decision serves as a critical warning for companies and individuals using generative AI tools in connection with potential or ongoing litigation. Because the court ruled that using a public AI platform destroys confidentiality and falls outside the protection of both the attorney-client privilege and the work product doctrine, organizations should take steps to mitigate these risks:
- Update Acceptable Use Policies: Companies should consider implementing AI usage policies that prohibit employees and executives from inputting sensitive, confidential or litigation-related information into publicly available generative AI platforms.
- Recognize the Threat of Privilege Waiver: Employees should be trained to understand that sharing legal strategies, facts of a case or advice previously provided by counsel with a public AI tool could act as a waiver of privilege, akin to sharing that information with any other third party. The court emphasized that AI platforms’ terms of service, which often allow the collection, retention and sharing of user prompts with third parties and regulatory authorities, destroy any reasonable expectation of privacy.
- Counsel Should Direct Litigation Prep: If AI tools are to be used for organizing thoughts or generating strategy related to anticipated litigation, that work should be explicitly directed by counsel. The court noted that because the defendant acted of his own volition, the work product doctrine did not apply. While the court left open the possibility of AI functioning as a “lawyer’s agent” if counsel directs its use, its ruling makes reliance on public AI tools highly risky.
- Vet Enterprise AI Solutions: The court’s reasoning relied heavily on the public nature of the AI tool and its specific privacy policy. Businesses should consult with legal counsel to evaluate whether closed, enterprise-grade AI systems with strict “zero data retention” and non-training agreements might offer stronger arguments for maintaining confidentiality.
- Do Not Seek “Legal Advice” from AI: Courts may continue to look to platforms’ own disclaimers. Because AI platforms generally disclaim the ability to provide formal legal advice, attempting to use them for this purpose risks a court finding that the information cannot be protected by the attorney-client privilege.
[1] United States v. Heppner, No. 1:25-cr-00503-JSR (S.D.N.Y. Feb. 17, 2026), Dkt. No. 27.