Even the Smartest AI Gets Anxious: Why Lawyers Must Verify

Lawyers Cannot Afford to Trust AI Blindly

A Fortune article this week revealed something fascinating: Claude Sonnet 4.5, one of the most advanced AI models available, experiences what researchers call “context anxiety.” When approaching its processing limits, it starts cutting corners, rushing through tasks, and leaving work unfinished—even when it actually has plenty of capacity left.

Sound familiar? It should. Because if the most sophisticated AI gets nervous and makes mistakes under perceived pressure, what does this tell us about relying on AI for mission-critical work?

This is exactly why AI should augment your work, never replace your judgment.

We’ve already seen the consequences when lawyers treat AI as a magic button. Remember the attorney who submitted a brief filled with completely fabricated case citations generated by ChatGPT? Or the lawyers sanctioned for filing motions citing non-existent cases? These weren’t hypothetical risks—they were real sanctions, real embarrassment, and real damage to client relationships.

The Fortune article’s findings about Claude’s context limitations add another layer to what we already know: AI tools have limits, blind spots, and failure modes that aren’t always obvious. Even when they appear confident, they can be wrong. Even when they seem thorough, they might be cutting corners you can’t see.

This is why I’ve upgraded my AI tool subscriptions: not to rely on them more, but to reduce the risks of using them to augment my research and due diligence. Larger context windows and advanced features help, but they don’t eliminate the need for human oversight.

For my fellow attorneys (and business leaders), here’s what this means practically:

  • Always verify. Every case citation, every legal principle, every factual assertion. No exceptions. AI doesn’t have a law license—you do, and your name is on the work.
  • Use AI as a starting point. Let it draft that first outline, summarize that deposition, or brainstorm arguments. Then apply your expertise to refine, verify, and improve it.
  • Understand the tool’s limitations. Know what your AI can and can’t do. Check those context windows. Be aware that quality can degrade in long documents or complex tasks.
  • Stay current on competence requirements. Professional responsibility requires that we understand the technology we use. Ignorance isn’t a defense when AI goes wrong.
  • Build verification into your workflow. Create systems where AI output goes through review checkpoints before anything client-facing goes out the door.

The Fortune article is a powerful reminder of why the human element remains irreplaceable. Use these powerful tools. Let them make you faster and more efficient. But never, ever let them replace your professional judgment.

AI is here to stay. The question is, are you using it wisely?

Read the full Fortune article here.
