The Intelligence Razor: Choose the Dumbest Technology That Solves Your Problem
Where I offer a bite-size reflection on why the latest and greatest is not always the best option for your legal AI project
So, this isn’t my usual full-length thought-piece, but more of a bite-size reflection on a topic I thought you might find interesting: the hype around AI Agents and Agentic Workflows. I hope you like it!
This substack, LawDroid Manifesto, is here to keep you in the loop about the intersection of AI and the law. Please share this article with your friends and colleagues and remember to tell me what you think in the comments below.
How choosing the least sophisticated technology that solves your problem can revolutionize your approach to law firm automation
In the 14th century, William of Ockham gave us a principle that has guided scientific thinking for over 700 years: when faced with competing explanations, choose the simplest one. Occam's Razor cuts through complexity to find truth.
Today, as businesses rush headlong into an AI-first, AI-everything world, we need a new razor. One that cuts through technological complexity to find the right solution, not just the most impressive one.
I call it the Intelligence Razor: When choosing between AI solutions, select the least sophisticated option that adequately solves the problem.
The Seduction of Sophistication
I've been fielding a lot of questions about AI Agents and Agentic Workflows on podcasts lately, and there's a familiar pattern emerging. The same breathless excitement that once surrounded ChatGPT as an all-knowing oracle has now shifted to AI Agents as digital genies, autonomous problem-solvers that can somehow tackle any challenge you throw at them.
Walk into any legal technology conference today and you'll be bombarded with promises of AI agents that can think, reason, and act autonomously. The demos pitch sophisticated AI agents that can "replace junior associates" and "think through complex legal issues." The possibilities seem endless, and the marketing materials practically glow with promise.
This magical thinking misses the point entirely. AI Agents aren't mystical fixers; they're tools. Sophisticated tools, yes, but tools nonetheless. And like any tool, their value depends entirely on whether you're using the right one for the job at hand.
But here's what they don't tell you: most legal work doesn't need artificial general intelligence. It just needs reliable, understandable solutions that work consistently, maintain client confidentiality, and don't create malpractice risks.
The Four Levels of Technological Sophistication
Not all problems require the same technological firepower. Think of it as choosing the right tool for the job:
Level 1: If-Then Workflows
The humble if-then statement is law practice's equivalent of a well-drafted checklist: simple, reliable, and surprisingly powerful (a minimal sketch follows below). These rule-based systems excel at predictable, structured tasks.
Perfect for: Statute of limitations tracking, conflict checking, standard motion deadlines, client intake routing, basic compliance calendaring.
Why it works: Zero ambiguity, complete audit trail, minimal malpractice risk, rock-solid reliability that satisfies professional responsibility requirements.
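To make this concrete, here is a minimal sketch of what a Level 1 rule can look like in Python. The matter types and limitation periods are placeholders, not legal advice; the point is that every rule is explicit, auditable, and fails loudly rather than guessing.

```python
from datetime import date, timedelta

# Illustrative deadline rules: each entry maps a matter type to a limitations
# period measured from the triggering event. The numbers are placeholders.
LIMITATIONS_RULES = {
    "personal_injury": timedelta(days=2 * 365),
    "breach_of_contract": timedelta(days=4 * 365),
}

def limitations_deadline(matter_type: str, trigger_date: date) -> date:
    """Return the filing deadline for a matter, or fail loudly if no rule exists."""
    if matter_type not in LIMITATIONS_RULES:
        # Unknown matter types go to a human, never to a guess.
        raise ValueError(f"No limitations rule configured for {matter_type!r}")
    return trigger_date + LIMITATIONS_RULES[matter_type]

def needs_urgent_review(matter_type: str, trigger_date: date, today: date) -> bool:
    """Flag matters whose deadline falls within the next 90 days."""
    return limitations_deadline(matter_type, trigger_date) - today <= timedelta(days=90)
```

Every outcome here traces back to a rule someone at the firm wrote down, which is exactly what an audit trail requires.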
Level 2: AI Workflows
Structured AI assistance that recognizes patterns and produces well-defined outputs. Think of it as having a very good paralegal who follows clear procedures and never misses details. An LLM is in the loop, but only for specific, narrowly scoped tasks (a sketch follows below).
Perfect for: Document categorization for discovery, citation checking and formatting, contract clause identification, time entry optimization, basic legal research organization.
Why it works: Handles tedious work lawyers hate, provides consistent results, remains explainable to clients and courts, maintains necessary audit trails.
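Here is a minimal sketch of a Level 2 workflow for document categorization. The call_llm hook, the category list, and the ReviewItem fields are illustrative assumptions; the design point is that the steps are fixed in code and the LLM is confined to a single labeling task, with anything unexpected routed to a human.

```python
from dataclasses import dataclass
from typing import Callable

# The allowed categories are fixed in code, not left to the model.
CATEGORIES = ["contract", "correspondence", "pleading", "invoice", "other"]

@dataclass
class ReviewItem:
    doc_id: str
    category: str
    needs_human_review: bool

def categorize_document(doc_id: str, text: str,
                        call_llm: Callable[[str], str]) -> ReviewItem:
    """Fixed-step workflow: the LLM handles exactly one bounded task (labeling)."""
    prompt = (
        "Classify the following document into exactly one of these categories: "
        f"{', '.join(CATEGORIES)}. Reply with the category name only.\n\n"
        f"{text[:4000]}"  # truncate long documents for this illustration
    )
    label = call_llm(prompt).strip().lower()
    # Deterministic guardrail: anything outside the approved list is routed
    # to a person instead of being guessed at.
    if label not in CATEGORIES:
        return ReviewItem(doc_id, "other", needs_human_review=True)
    return ReviewItem(doc_id, label, needs_human_review=False)

# Example with a stand-in for whichever LLM provider the firm has approved:
if __name__ == "__main__":
    print(categorize_document("DOC-001", "This Agreement is made between...",
                              call_llm=lambda _prompt: "contract"))
```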
Level 3: Agentic Workflows
Multi-step AI processes that can use multiple legal databases and make sequential analytical decisions. Like having a diligent first-year associate who can follow complex research protocols across multiple sources. An LLM is used for planning and making choices (a sketch follows below).
Perfect for: Multi-jurisdiction research projects, complex due diligence with cross-referencing requirements, comprehensive discovery document review, regulatory compliance analysis across multiple authorities.
Why it works: Handles genuinely complex legal analysis while maintaining attorney oversight and the ability to explain reasoning to clients and courts.
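Here is a rough sketch of what that planning loop can look like. The tool names, the JSON reply format, and the call_llm hook are illustrative assumptions; the key design point is that the LLM proposes each step while ordinary code enforces a whitelist of tools, a step limit, and an audit log the supervising attorney can review.

```python
import json
from typing import Callable

# Whitelisted research actions the planner may choose from; these stubs
# stand in for real database integrations and are purely illustrative.
TOOLS = {
    "search_case_law": lambda query: f"[case law results for {query!r}]",
    "search_statutes": lambda query: f"[statute results for {query!r}]",
    "summarize_findings": lambda query: f"[summary of findings on {query!r}]",
}

def run_research(question: str, call_llm: Callable[[str], str],
                 max_steps: int = 5) -> list[dict]:
    """The LLM plans each step; the surrounding code enforces the guardrails."""
    audit_log: list[dict] = []
    tool_names = ", ".join(TOOLS)
    for _ in range(max_steps):
        prompt = (
            f"Research question: {question}\n"
            f"Steps taken so far: {json.dumps(audit_log)}\n"
            f'Reply as JSON: {{"tool": "<one of: {tool_names}>", '
            f'"query": "...", "done": true or false}}'
        )
        plan = json.loads(call_llm(prompt))
        if plan.get("tool") not in TOOLS:
            break  # Unknown tool: stop and hand the matter back to the attorney.
        result = TOOLS[plan["tool"]](plan.get("query", ""))
        audit_log.append({"tool": plan["tool"], "query": plan.get("query"),
                          "result": result})
        if plan.get("done"):
            break
    return audit_log  # Every step is recorded for attorney review.
```

The difference from Level 4 is that the system never acts outside that whitelist and never decides anything the attorney has not expressly delegated.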
Level 4: True AI Agents
Autonomous systems that can adapt, learn, and make independent legal judgments. The technological equivalent of hiring a contract attorney with decision-making authority. An LLM is used for planning, making choices, and adapting to changed circumstances.
Perfect for: Situations requiring genuine legal judgment and independent decision-making... which create serious professional liability concerns and are rarer than vendors suggest.
Why it's often problematic: Legal practice demands predictability, explainable reasoning, and attorney accountability, not AI creativity.
The Psychology of Advanced Technology
There's a cognitive bias at play in legal AI selection that I call "sophistication signaling," the unconscious belief that more advanced technology signals more competent lawyering. It's the legal tech equivalent of citing every case in the jurisdiction to sound more authoritative.
But sophisticated technology often creates sophisticated problems for law practices:
Malpractice Risk: Advanced AI systems that make legal judgments create new liability exposures that may not be covered by professional liability insurance.
Explainability Problem: The more sophisticated the system, the harder it becomes to explain to clients, opposing counsel, or courts why it reached a particular legal conclusion, a serious issue when your professional duty requires clear reasoning.
Ethical Compliance: Complex AI systems raise thorny questions about Model Rule 5.3 (supervision of nonlawyer assistants) and Rule 1.1 (duty of competence) that simpler systems avoid entirely.
The Intelligence Razor cuts through these problems by forcing a simple question: "What's the least sophisticated technology that reliably solves this specific legal problem while maintaining professional responsibility compliance?"
The Intelligence Razor: A Practical Framework
Here's how to apply the Intelligence Razor to your next technology decision:
Step 1: Define Success
What specific legal outcome do you need? Avoid vendor feature lists. Focus on value: measurable improvements in client service, risk reduction, or practice efficiency.
Step 2: Start with Simple Rules
Can basic if-then logic handle this legal task? You'd be surprised how many "complex" legal processes are just multiple simple rules that reflect existing firm policies.
Step 3: Escalate Only When Necessary
Move to higher levels only when the simpler solution clearly cannot handle the legal requirement. Document why the simpler solution won't work; this analysis often reveals that you don't need the complexity after all.
Step 4: Maintain Professional Oversight
Regularly audit whether your technology choices still comply with evolving ethical requirements. Bar associations are still developing guidance on AI use, and your simple solution today may need adjustment tomorrow.
Step 5: Resist Sophistication Creep
Just because legal tech vendors offer AI doesn't mean you need it. Ask: "Does this additional complexity solve a real legal problem, improve client service, or just sound more impressive in marketing materials?"
The Counterintuitive Truth About Innovation
Here's what surprised me most about applying the Intelligence Razor to my own development of intelligent systems: it often leads to more innovation, not less.
The most innovative law firms I work with have the most boring, reliable core systems. They've applied the Intelligence Razor ruthlessly to their routine legal processes (document review, deadline management, conflict checking) which frees them to innovate where it actually matters: client relationships, legal strategy, and business development.
William of Ockham would never have imagined his razor would need updating for the age of artificial intelligence and algorithmic legal practice. But the principle remains sound: the simplest explanation that accounts for all the facts is usually correct.
In legal technology, the simplest solution that reliably solves the problem while maintaining professional responsibility compliance is usually best.
The Intelligence Razor isn't about avoiding advanced legal technology; it's about using the right tool for the legal job. Sometimes that tool is a sophisticated AI system for complex research. More often, it's a well-designed if-then workflow that just works, protects client confidentiality, and lets lawyers focus on practicing law.
The sharpest solution is often the simplest. Cut accordingly: your clients, your malpractice carrier, and your sanity will thank you.
By the way, did you know that I now offer a daily AI news update? You get 5 🆕 news items and my take on what it all means, delivered to your inbox, every weekday.
Subscribe to the LawDroid AI Daily News and don’t miss tomorrow’s edition:
LawDroid AI Daily News is here to keep you up to date on the latest news items and analysis about where AI is going, from a local and global perspective. Please share this edition with your friends and colleagues and remember to tell me what you think in the comments below.
If you’re an existing subscriber, you can read the daily news here. I look forward to seeing you on the inside. ;)
Cheers,
Tom Martin
CEO and Founder, LawDroid