Generative AI has transformed legal practice, improving efficiency and accelerating research. But with these advances comes a dangerous new phenomenon we’re calling “BotNap Lawyering” — lawyers falling into a euphoric trance, mesmerized by AI’s capabilities and letting these tools do the heavy lifting without the rigorous human oversight that defines competent legal practice.

Like someone caught napping on the job, lawyers experiencing BotNap become passive consumers of AI output rather than active legal professionals. They’re lulled into complacency by AI’s confident tone and impressive-looking results, forgetting their fundamental duty to verify, analyze, and exercise professional judgment.
Recent high-profile sanctions show the real dangers when lawyers get caught BotNapping. The risk isn’t just the law firm’s — it’s the client’s.
Understanding AI sycophancy
In Garcia v. Character Technologies, Inc., et al., a federal court allowed a lawsuit against AI company Character Technologies to proceed after a teenager, allegedly encouraged by a manipulative chatbot, tragically took his own life. The case highlights AI sycophancy: the tendency of models to tell you what they think you want to hear, not what is objective or safe.
For legal professionals caught in the BotNap trap, this creates serious risks. AI may generate compelling arguments supporting your client’s position, complete with fabricated case citations that look authentic. In business contexts, AI might produce glowing market analyses supporting preferred strategies while downplaying concerns, or generate financial projections justifying risky investments because that’s the outcome you seemed to want.
The euphoria of having an AI assistant that appears to understand complex legal concepts can be intoxicating. But remember: GenAI tools are sometimes wrong, never in doubt, always confident — and lawyers in a BotNap state lose the critical skepticism needed to catch these errors.
Real-world consequences
Recent sanctions demonstrate that BotNap Lawyering isn’t a theoretical concern — it’s happening now with devastating results:
- Morgan & Morgan attorneys: In February 2025, three lawyers were sanctioned after submitting motions citing eight non-existent cases generated by their in-house AI platform. The court found they failed to conduct a “reasonable inquiry into the law” as required by Rule 11.
- MyPillow litigation: Lawyers faced disciplinary proceedings after filing an AI-generated brief with nearly 30 defective citations, misquotes, and non-existent cases.
- ChatGPT confabulations: Two lawyers and their firm were fined US$5,000 for submitting a brief containing fictitious case law, then doubling down when questioned by the court.
- California sanctions: In May 2025, two firms were fined US$31,000 after submitting briefs with fake citations from AI tools including Google Gemini and Westlaw’s AI. The judge called their actions “professionally reckless.”
Negative client impact
When lawyers fall into BotNap Lawyering, clients bear the consequences, which can include:
- court sanctions
- exclusion of evidence
- reputational harm
- claim dismissals
- professional negligence claims
- regulatory scrutiny
- lost credibility with courts and business partners
- financial penalties
Breaking free from BotNapping
Understanding how AI systems work is crucial to using them safely while avoiding the BotNap trap. The best AI users know when to trust the machine and when to be skeptical, maintaining the professional vigilance that separates competent lawyers from those caught napping. Here are a few ways to de-risk your use of GenAI tools.
- Understand it: GenAI tools are statistical prediction engines, designed to generate the most probable next words based on patterns in their training data and the prompt you provide. Unless specifically designed for fact-based use cases, generic, off-the-shelf consumer models do not provide objective truth; a probable answer is not the same as a true one (see the first sketch after this list).
- Prompt for neutrality: Ask the tool to analyze the weaknesses in your legal position, or ask for the strongest counterarguments to your business strategy.
- Assign adversarial roles: Direct the AI to act as opposing counsel and tear apart your argument, or to take on the persona of a CFO and explain why an investment is too risky (the second sketch after this list shows a prompt along these lines).
- Use multiple perspectives: Generate both supporting and opposing analyses for critical decisions.
- Verify everything: Cross-reference all citations, data points, and factual claims using different tools than those that generated them. AI excels at generating plausible-sounding but false information. It is sometimes wrong, never in doubt, and always confident!
- Document your process: When AI is used in processes that will result in “high risk” decisions, i.e., those with legal or significant personal or professional impact, meaningful human review is required under laws like Article 22 of the EU GDPR, the EU AI Act, and various US state privacy regulations. Document your processes to confirm that AI output underwent human review and verification.
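To make the “prediction engine” point concrete, here is a toy sketch of what next-word prediction does. Everything in it, including the probabilities and the citations, is invented for illustration: the point is that the engine ranks plausibility, not truth, and has no concept of “I don’t know.”

```python
import random

# Toy "model": invented probabilities over continuations of the prompt
# "The leading case on this point is ...". None of these citations are
# real authorities; the engine simply picks what sounds most likely.
next_tokens = {
    "Smith v. Jones, 123 F.3d 456 (9th Cir. 1997)": 0.55,             # fluent, fictitious
    "Acme Corp. v. Doe, 456 F. Supp. 2d 789 (S.D.N.Y. 2006)": 0.35,   # fluent, fictitious
    "[no authority found; verify before citing]": 0.10,               # rarely the likeliest
}

# The output reads just as confidently whether or not the case exists.
choice = random.choices(list(next_tokens), weights=list(next_tokens.values()))[0]
print(choice)
```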
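And here is a minimal sketch of the adversarial-role technique, assuming the published OpenAI Python SDK; the model name, prompts, and placeholder draft are illustrative assumptions, not a recommendation of any particular product or configuration.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

# Illustrative system prompt: push the model out of sycophancy by
# assigning it the role of opposing counsel.
ADVERSARIAL_ROLE = (
    "You are opposing counsel. Identify every weakness, missing authority, "
    "and counterargument in the draft below. Do not soften your critique, "
    "and flag any factual or legal claim that requires independent verification."
)

draft_argument = "..."  # the brief or memo section you want stress-tested

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": ADVERSARIAL_ROLE},
        {"role": "user", "content": draft_argument},
    ],
)
print(response.choices[0].message.content)
```

The same pattern covers the neutrality and multiple-perspectives tips: swap the system prompt for one that asks for the strongest counterarguments, or run the request twice with supporting and opposing roles and compare the outputs.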
The fundamentals still matter
No matter how advanced the tool, AI cannot replace fundamental lawyering skills like Shepardizing cases to confirm validity and current standing. Courts and regulators expect lawyers to verify every citation, fact, and legal principle, regardless of how it was generated.
The American Bar Association reinforced this expectation in July 2024 with Formal Opinion 512, its first ethics guidance on generative AI. The ABA emphasized that lawyers using GenAI must “fully consider their applicable ethical obligations,” including duties to provide competent legal representation, protect client information, communicate with clients, and charge reasonable fees. Importantly, the ABA stated that lawyers need not become AI experts, but must have a reasonable understanding of the capabilities and limitations of the specific generative AI technology they use.
This guidance serves as a wake-up call for lawyers caught in BotNap states — professional competence requires understanding your tools’ limitations, not just their capabilities.
As one federal court noted: “At the very least, the duties imposed by Rule 11 require that attorneys read, and thereby confirm the existence and validity of, the legal authorities on which they rely.”
Final practice pointer
GenAI is a powerful tool, but not a substitute for human judgment, diligence, and professional responsibility. As in-house counsel, we must demand that our legal teams and outside counsel use AI wisely, with robust supervision and commitment to fundamentals — avoiding the seductive trap of BotNap Lawyering.
Let’s harness AI’s promise while staying vigilant and alert. The risks of getting caught BotNapping are too great, and the consequences are all too real.