The Bluebook’s New AI Citation Rule Misses the Mark

In case you missed it, the latest edition of the Bluebook has introduced a new rule on citing AI-generated content. While a rule addressing GenAI was certainly needed, Rule 18.3's approach unfortunately misses the mark.

University of Idaho law professor Jessica Gunder has published a thorough critique of the new rule titled “Yikes! The Bluebook’s Generative AI Rule is Flawed” that lays out exactly why Rule 18.3 is problematic. Her analysis raises serious concerns that legal educators, practitioners, scholars, and courts should consider carefully.

When Should You Even Cite AI?

While Rule 18.3 provides detailed formatting requirements for citing GenAI, it fails to address the question of when a citation is appropriate in the first place. Gunder distinguishes between two very different scenarios: citing AI as a source of evidence (like pointing to an AI response that recommended adding glue to a pizza recipe to demonstrate the tool's unreliability) versus citing GenAI when it's been used as a research or drafting tool.

Traditional citation serves specific purposes: helping readers locate sources, communicating the weight of authority, demonstrating credibility, and avoiding plagiarism. As Gunder points out, none of these rationales support citing AI when it’s used merely as a tool rather than as a source of evidence.

An Unreasonable Technological Burden

When citing GenAI, Rule 18.3 requires users to “save a screenshot capture of that output as a PDF to be stored on file.” Not only is this cumbersome, but many students and legal professionals may lack the technical skills needed for this task. The problem becomes even more complex when you consider that meaningful AI interactions often span multiple screens, requiring scrolling screenshots to capture conversations that extend beyond a single page view. This technological hurdle is likely to result in poor compliance with the rule, undermining the Bluebook’s goal of creating uniform citation practices.
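For readers curious what compliance would actually entail, here is a minimal sketch in Python (assuming the Pillow imaging library, and using blank placeholder images in place of real screen captures) of stitching a multi-screen conversation capture into the single PDF the rule contemplates:

```python
from PIL import Image

# Illustrative stand-ins for real screen captures: a long AI conversation
# typically spans several screens, each saved as its own image.
segments = [
    Image.new("RGB", (800, 600), color)
    for color in ("white", "lightgray", "white")
]

# Stitch the captures vertically into one tall page image.
width = max(img.width for img in segments)
stitched = Image.new("RGB", (width, sum(img.height for img in segments)), "white")
y = 0
for img in segments:
    stitched.paste(img, (0, y))
    y += img.height

# Save the stitched capture as a single-page PDF to be "stored on file."
stitched.save("conversation.pdf", "PDF")
```

Even this automated route presumes scripting skills that most law students and practitioners do not have, which rather underscores Gunder's point.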

Misunderstanding How AI Is Actually Used

Rule 18.3 assumes simple, one-sentence prompts, as demonstrated by its sample citations. But effective AI use involves complex, iterative conversations. A skilled user might upload documents, provide detailed context, refine responses through multiple rounds of feedback, and engage in lengthy back-and-forth exchanges.

The rule's simplistic approach to prompt citation makes it unworkable for how AI is actually employed in legal practice. The Bluebook's examples reflect a rudimentary level of AI usage that delivers little practical value, while more sophisticated uses of AI become effectively impossible to cite under this framework.

Serious Ethical Concerns

Most troubling is that a strict interpretation of Rule 18.3 could force attorneys to violate their ethical obligations. If the rule requires citing AI use whenever generative tools assist with drafting or research, attorneys might need to disclose confidential client information that was included in their prompts. As Gunder notes, this could breach the duty of confidentiality and waive work product protections.

Providing context is essential to using AI effectively. With proper safeguards – such as secure, in-house AI systems or AI services with data-protection features – attorneys can supply factual details about cases, client circumstances, and strategic thinking to get better output.

But here’s the catch: while using AI with proper safeguards might be ethically permissible, requiring disclosure of those AI interactions through citations in court filings creates an entirely different problem. The citation requirement may force public disclosure of information that confidentiality rules are designed to protect.

The Challenge Ahead

While I applaud the Bluebook editors for recognizing that AI citation guidance was needed, Rule 18.3's execution unfortunately falls short of what the legal profession requires. Legal writers will likely need to navigate these contradictions and choose their own approaches for the foreseeable future, which undermines the Bluebook's central purpose of creating uniform citation standards.

See Gunder’s article for a more detailed analysis. It’s worth reading for anyone grappling with these issues in practice, scholarship, or teaching.