So, is legal AI actually working, or is it just another expensive tech fad destined for the corporate graveyard? We asked Greg Lambert, the man behind The Geek in Review podcast and Chief Innovation Officer at Jackson Walker. His answer? It’s complicated.
Lambert, a seasoned observer of legal tech, didn’t pull any punches in his recent interview. He’s seen the promises, and he’s seen the reality. And the reality, it turns out, is a lot messier than the slick marketing materials suggest.
The Onboarding Abyss
Getting new AI systems into a law firm isn’t like plugging in a toaster. It’s a monumental undertaking. Lambert points to the friction at every level. It’s not just the tech; it’s the people. And the processes. And the deeply ingrained habits that resist change like a stubborn mule.
And then there’s the rampant tool duplication. Every shiny new AI product promises to revolutionize practice. Firms end up with a buffet of similar tools, none fully utilized, all draining budgets. It’s a recipe for inefficiency, not innovation.
Productivity overload. It sounds like a contradiction, doesn’t it? But Lambert explains it well. Too many tools, too many notifications, too much data flying around. Lawyers get swamped. They’re not more productive; they’re just busier. It’s like giving a chef a thousand gadgets but no clear recipe.
“We’re seeing a lot of organizations, they’ve adopted so many different tools and they’re not getting the benefit out of any of them.”
Research Errors: A Lingering Ghost
Legal research is where AI was supposed to shine. Faster, more accurate, groundbreaking insights. But Lambert highlights the persistent issue of errors. AI can hallucinate. It can miss nuances. It can present outdated or incorrect information with utter confidence.
This isn’t just a minor glitch. It has real-world consequences. A flawed research memo can sink a case. The long time horizons for true change mean these aren’t problems that will vanish overnight. We’re talking years, maybe decades, for AI to become truly reliable in the core functions of legal practice.
Lambert also touched upon developments like Claude for agents and Newcode.ai, hinting at the next wave of AI applications. He sees potential for truly automated contract negotiation, a concept that has long been the holy grail of legal tech. But even there, he’s cautious. The devil, as always, is in the details.
Is It All Just Hype?
This interview cuts through the relentless optimism often peddled by AI vendors. Lambert isn’t saying AI is useless. Far from it. But he’s urging a dose of pragmatism. Adoption is a marathon, not a sprint.
His insights echo a broader trend: the gap between AI’s potential and its practical implementation in established industries like law. It’s a familiar story. Early adopters often face the steepest learning curves and the highest costs. The rest of the pack waits, watches, and hopefully learns.
What’s missing from the vendor narrative is often the sheer complexity of integrating these tools into existing workflows. It requires a fundamental rethinking of how legal work gets done. And that’s a heavy lift for any organization, especially one as tradition-bound as a law firm.
Lambert’s view isn’t pessimistic; it’s realistic. It’s the kind of grounded analysis Legal AI Beat readers have come to expect. We need to understand the pitfalls before we can truly celebrate the successes. The real progress won’t be in the flashy demos; it will be in the quiet, difficult work of making AI truly useful and trustworthy for lawyers and their clients.
For those attending Legal Innovators California in San Francisco (June 10-11) or Legal Innovators Europe in Paris (June 24-25), expect more of these frank discussions. The industry is still figuring itself out. And Greg Lambert will likely be there, cutting through the noise with that signature blend of expertise and skepticism.