The AI Hyperproductivity Trap Is Proving Itself—But It's Not Too Late
Where I explore how Berkeley researchers just proved what I warned about nine months ago and why legal leaders need to act before the trap snaps shut
Welcome back, legal rebels! 🚀
Back in May 2025, I wrote a piece called “The Hyperproductivity Trap: How AI May Reshape Our Expectations, and Ourselves.” In it, I told the story of Morgan, a fictional Manhattan associate who discovered that getting more productive with AI didn’t mean working less; it meant her supervising partner filled every freed hour with new assignments and higher expectations. I warned about Parkinson’s Law getting a 21st-century upgrade: “Work expands to fill your new capacity generated by AI.”
This week, Berkeley researchers proved it’s not just theory. It’s empirical fact.
A new study published in Harvard Business Review tracked how generative AI changed work habits over eight months at a 200-person technology company.
Their finding: AI tools didn’t reduce work. They intensified it.
Workers worked faster, took on broader responsibilities, blurred the lines between work and rest, and multitasked aggressively, all without being explicitly asked to do so.
If you read my May piece, none of this will surprise you. But having empirical research confirm the pattern, and reveal just how deep it runs, should sharpen the urgency for every legal organization adopting AI right now.
If this sounds interesting to you, please read on…
This substack, LawDroid Manifesto, is here to keep you in the loop about the intersection of AI and the law. Please share this article with your friends and colleagues and remember to tell me what you think in the comments below.
What the Berkeley Researchers Found
The study identified three distinct forms of work intensification. Each one maps directly onto dynamics I described in “The Hyperproductivity Trap,” but now we have data, not just a compelling narrative.
1. Task Expansion: The Morgan Effect, Quantified
In the study, workers started absorbing responsibilities that previously belonged to other people. Product managers began writing code. Researchers took on engineering tasks. People attempted work they would have outsourced or deferred entirely. The AI made unfamiliar tasks feel accessible, so people just… did them.
This is exactly what happened to Morgan, my then-fictional Manhattan associate. And it’s exactly what we’re seeing across the legal profession right now. Associates use AI to tackle matters outside their experience level because it feels manageable. Paralegals draft substantive documents that previously required attorney oversight. Solo practitioners take on practice areas they wouldn’t have touched a year ago. Vibe-coding lawyers are taking on coding projects previously outsourced to programmers.
Here’s what the researchers added that I hadn’t fully anticipated: the knock-on burden on experts. Engineers ended up spending more time reviewing and correcting AI-assisted work from colleagues who were “just trying things.” They became informal quality controllers for work that shouldn’t have been attempted in the first place. In law, that burden falls squarely on senior attorneys and partners, adding to their workload while everyone else feels more productive.
2. Blurred Boundaries: The BlackBerry Cycle, Reloaded
In my May 2025 piece, I drew a direct line from the BlackBerry era to AI: “What started as a competitive advantage became an industry norm.” The BlackBerry promised liberation from the office. What it actually delivered was the expectation of 24/7 availability. Those who forget history are doomed to repeat it.
The Berkeley researchers found the same dynamic with AI, but with a twist that makes it even more insidious. Because prompting an AI feels more like chatting than working, employees slipped work into lunch breaks, commutes, and evenings without registering it as additional effort. Work became what the researchers called “ambient”: something that could always be advanced a little further.
For lawyers, this is the BlackBerry saga on steroids. At least with email, you knew you were working. With AI, the conversational interface is seductive precisely because it doesn’t feel like labor. A quick prompt during dinner. A research query while watching TV. A “last prompt” before bed. It’s all cognitive effort, and it all compounds, but it never quite feels like you’re “at work.”
3. Constant Multitasking: Parkinson’s Law, Amplified
The third pattern was pervasive multitasking. Workers managed multiple AI threads simultaneously, drafting documents while AI generated alternatives, running parallel queries, reviving long-deferred tasks because the AI could “handle them” in the background.
This is my updated Parkinson’s Law in action. The AI doesn’t give you a breather; it gives you capacity. And that capacity immediately gets filled. Not with strategic thinking. Not with client development. Not with the novel you always wanted to read. It gets filled with more tasks, more threads, more cognitive load.
The researchers found that workers felt more productive but not less busy, and in many cases, busier than before. For lawyers juggling multiple client matters, the cost of this fragmented attention isn’t just fatigue. In a profession where a missed detail can be malpractice, it’s a liability.
The Hyperproductivity Trap Snaps Shut
What makes this truly dangerous is that the three patterns reinforce each other. AI accelerates tasks → expectations for speed increase → workers rely more heavily on AI → reliance widens scope → wider scope creates more work → more work demands more AI. The cycle tightens.
“You had thought that maybe, oh, because you could be more productive with AI, then you save some time, you can work less. But then really, you don’t work less. You just work the same amount or even more.” — Engineer in the Berkeley study
That quote could have come straight from Morgan’s mouth. And it should be printed on every legal AI vendor’s sales deck, right next to the one about “freeing lawyers for strategic work.”
From Prediction to Prescription
In “The Hyperproductivity Trap,” I offered strategies to head off this cycle: redefining success beyond speed, setting realistic boundaries, creating “focus windows,” and leveraging AI for quality rather than quantity. The Berkeley researchers arrived at strikingly similar prescriptions, which they call an “AI practice” of intentional pauses, sequencing, and human grounding.
I’m glad the research validates those recommendations. But nine months later, I think we need to push further. For legal organizations specifically:
—> Audit reality against the promise. If your firm adopted AI expecting to reduce workloads, measure whether that’s actually happened. Not task completion speed: total hours worked, scope of responsibilities assumed, and wellbeing indicators. The Berkeley research suggests you probably won’t like what you find.
—> Build governance for human behavior, not ideal behavior. The study found that work intensification was entirely voluntary and self-directed. Nobody told these workers to do more. Top-down policies about “approved AI use cases” will miss the problem entirely. You need norms around when not to use AI, when to stop expanding scope, and when to protect unstructured thinking time.
—> Protect the work that matters most. Judgment. Empathy. Ethical reasoning. Creative problem-solving. The kind of focused, reflective attention that AI-driven intensification erodes. If your AI strategy doesn’t explicitly protect space for that work, you’re optimizing for speed at the expense of the thing that actually makes lawyers valuable.
—> Name the trap out loud. One of the most powerful things leaders can do is simply acknowledge the dynamic. Tell your team: “We adopted AI to work smarter, not to pile on more work. If you’re busier than before, that’s a signal we need to recalibrate.” The Berkeley research shows that without this kind of intentional conversation, intensification happens silently.
Closing Thoughts
I’ll be honest with you: there’s a part of me that read the Berkeley study and thought, “I told you so.” But, the bigger feeling was concern. Because if this dynamic is already measurable in a tech company where people are relatively AI-savvy, imagine what it looks like in a profession that’s already drowning in overwork, where burnout is endemic, and where the consequences of impaired judgment aren’t a buggy feature: they’re a client’s liberty, livelihood, or life savings.
I’ve spent over a decade building AI tools for lawyers. I believe in this technology deeply. I’ve seen it expand access to justice for people who have nowhere else to turn. I’m not sounding the alarm because I think AI is the problem. I’m sounding it because how we adopt AI is the problem.
The narrative that AI will free lawyers up for higher-value work isn’t just optimistic. It’s a misunderstanding of how these tools interact with human psychology. AI doesn’t create leisure. It creates capacity. And without intentional systems, that capacity gets filled, not with strategic thinking, but with more of everything.
This is why I keep coming back to purposeful integration.
Not AI for AI’s sake. Not AI because the vendor promises productivity gains. But AI deployed within a strategic framework that starts with why: why are we using this tool, what human values does it serve, and what guardrails will prevent it from quietly consuming the very space it was supposed to create?
Nine months ago, I called it the Hyperproductivity Trap. Now Berkeley has the data to prove the trap is real. The question is: what are we going to do about it?
The legal profession doesn’t need more speed. It needs more wisdom about how to use speed wisely. And that’s the kind of work no AI can do for us.
Tom Martin is CEO & Founder of LawDroid, Adjunct Professor at Suffolk University Law School, and Author of the forthcoming AI with Purpose: A Strategic Blueprint for Legal Transformation (Globe Law and Business). He is “The AI Law Professor” and writes his eponymous column for the Thomson Reuters Institute.
Related:
The Hyperproductivity Trap: How AI May Reshape Our Expectations, and Ourselves (May 6, 2025)
Aruna Ranganathan & Xingqi Maggie Ye, AI Doesn’t Reduce Work—It Intensifies It, Harvard Business Review (February 9, 2026)