A Practical Guide on AI Prompting for Legal Professionals

The November issue of Wisconsin Lawyer features a practical guide I wrote on using generative AI effectively in legal practice. “AI Prompting for Legal Professionals: The Art of Asking the Right Question” offers a framework for attorneys looking to improve their interactions with AI tools while maintaining professional standards.

The central premise is straightforward: the same critical thinking skills lawyers use when interviewing clients or examining witnesses apply equally when working with AI. Vague information from a client won’t help you build a strong case, and vague prompts to AI won’t produce useful results. Context matters as much with AI as it does with people.

The 7 Ps Framework

The article introduces a systematic approach called the “7 Ps Framework” for crafting effective AI prompts:

  1. Persona – Give AI a role to play; help it understand the perspective and expertise level you need.
  2. Product – Tell AI exactly what format you want for the response.
  3. Prompt – State the specific task you want AI to perform, using a clear action verb.
  4. Purpose – Explain the “why” behind your request.
  5. Prime – Provide the legal and factual context that guides AI’s analysis.
  6. Privacy – Never include client names, confidential details, or privileged information in your prompts.
  7. Polish – Refine the conversation through follow-up prompts until you get a useful result.

Not every prompt requires all seven elements, but understanding each component helps lawyers make deliberate choices about what information to include.
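
To see how the elements fit together, here is an illustrative prompt of my own (a hypothetical sketch, not an example taken from the article):

  “Act as an experienced Wisconsin employment lawyer (Persona). Draft a short plain-English memo (Product) analyzing the retaliation risks (Prompt) so I can prepare for an initial client meeting (Purpose). Assume a hypothetical 45-year-old employee who was terminated three days after filing a harassment complaint (Prime); include no real client names or details (Privacy).”

The seventh element, Polish, happens afterward, as you refine the response through follow-up prompts until it is genuinely useful.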

Privacy Considerations

The privacy component deserves special attention. The article stresses that attorneys must never include client names, confidential details, or privileged information in AI prompts. Instead, create hypothetical scenarios that capture the essential legal issues without compromising attorney-client privilege. For example, rather than using real client information, describe “a 45-year-old employee terminated three days after filing a harassment complaint.”

Advanced Techniques

Beyond the basic framework, the article offers techniques for handling several common AI pitfalls:

  • Sycophancy – AI systems want to please and may tell you what they think you want to hear. Prompt them to be critical or play devil’s advocate.
  • Hallucinations – Always ask for sources and verify them independently. Asking “is this case real?” doesn’t work because AI may confidently confirm citations that don’t exist.
  • Drift – Long conversations can cause AI to lose track of context. Limit exchanges to no more than 15 follow-up questions before starting fresh.
  • “Pink elephants” – Avoid telling AI what not to do. Just as hearing “don’t think of a pink elephant” makes you immediately picture one, AI systems often struggle with negative instructions. Instead of “don’t cite other states,” say “focus only on Wisconsin cases.”

The Bottom Line

The key takeaway is that generative AI amplifies legal expertise but doesn’t replace professional judgment. Lawyers remain responsible for verifying results, maintaining ethical standards, and exercising critical thinking. The technology works best when guided by someone who understands what questions to ask and how to evaluate the answers.


Disclosure: I used Claude AI to help me develop this post, following the same critical curation and review process described in my earlier post on AI best practices.
