The Intractability of Law: Why Lawyers Will Matter Even When AGI Arrives
Where I explore why computer science’s hardest unsolved problem reveals what no machine can take from lawyers
If we achieve artificial general intelligence, will we still need lawyers?
The typical answers fall into two camps. The doomers say no: lawyers are just information processors, and AGI will process information better. The deniers say of course, because law is special and machines could never understand it. Both camps are wrong. And the reason they’re wrong isn’t philosophical. It’s mathematical.
There’s a concept in computer science called tractability that gives us a far more precise, and far more useful, answer to this question than any amount of hand-waving about “the human touch.” It tells us exactly what remains for human lawyers, and why, even in a world with superintelligent machines.
If this sounds interesting to you, please read on…
This substack, LawDroid Manifesto, is here to keep you in the loop about the intersection of AI and the law. Please share this article with your friends and colleagues and remember to tell me what you think in the comments below.
What Is Tractability?
In computer science, problems fall into two buckets. Tractable problems can be solved efficiently: as the problem gets bigger, the time it takes to solve it grows at a manageable rate. Sorting a list of names or searching a database? Tractable. A computer handles these gracefully, even at enormous scale.
Intractable problems are different. They aren’t impossible, they’re just impossible to solve efficiently as they grow. Think of the difference between cleaning a room and counting every grain of sand on a beach by hand. You could technically do the latter, but you’d run out of time before you finished.
The classic example is the Traveling Salesperson Problem: given a list of cities, what’s the shortest possible route that visits each city once and returns home? With 10 cities, your laptop can figure it out over lunch. With 100 cities, the number of possible routes exceeds the number of atoms in the observable universe. The problem didn’t change; it just scaled beyond any computer’s reach.
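The arithmetic behind that claim is easy to check yourself. Here is a minimal Python sketch of brute-force TSP (the 4-city distance matrix is invented purely for illustration): it tries every possible route, and the factorial route count shows where the wall is.

```python
import math
from itertools import permutations

def shortest_route_brute_force(dists):
    """Exhaustive TSP: fix city 0 as home, try every ordering of the rest."""
    n = len(dists)
    best = None
    for perm in permutations(range(1, n)):
        tour = (0, *perm, 0)
        length = sum(dists[a][b] for a, b in zip(tour, tour[1:]))
        if best is None or length < best:
            best = length
    return best

# Toy 4-city distance matrix (illustrative numbers only).
dists = [
    [0, 2, 9, 10],
    [1, 0, 6, 4],
    [15, 7, 0, 8],
    [6, 3, 12, 0],
]
print(shortest_route_brute_force(dists))  # → 21

# The search space is (n-1)! routes:
print(math.factorial(9))   # 10 cities: 362,880 routes, feasible over lunch
print(math.factorial(99))  # 100 cities: roughly 10^156 routes, far more than
                           # the ~10^80 atoms in the observable universe
```

The algorithm never gets worse at its job; the search space simply outruns any conceivable hardware.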
One of the greatest unsolved questions in all of science, known as the P vs NP problem, asks whether every problem whose answer can be checked quickly can also be solved quickly. In plain English: if I hand you a solution and you can verify it’s correct in seconds, does that mean a fast method to find that solution must exist? Nobody knows. It’s literally a million-dollar question: the Clay Mathematics Institute has a prize waiting for whoever resolves it. Most experts suspect the answer is no, meaning some problems are fundamentally, permanently hard to solve efficiently, no matter how powerful the computer.
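The asymmetry P vs NP asks about can be made concrete. Verifying a proposed tour against a budget takes one linear pass; finding the best tour, as far as anyone knows, requires searching a factorial space. A sketch, with an illustrative matrix and a helper name of my own invention:

```python
def verify_tour(dists, tour, budget):
    """NP-style verification: one linear pass checks a proposed answer.
    True if `tour` visits every city exactly once and its total length
    (returning home) stays within `budget`."""
    n = len(dists)
    if sorted(tour) != list(range(n)):
        return False  # not a valid visit-each-city-once tour
    length = sum(dists[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))
    return length <= budget

# Illustrative 4-city matrix: checking takes n steps, finding takes (n-1)!.
dists = [
    [0, 2, 9, 10],
    [1, 0, 6, 4],
    [15, 7, 0, 8],
    [6, 3, 12, 0],
]
print(verify_tour(dists, [0, 2, 3, 1], 21))  # → True (cheap to confirm)
print(verify_tour(dists, [0, 1, 2, 3], 21))  # → False (this tour costs 22)
```

If someone proved that every cheaply checkable problem is also cheaply solvable, the gap between these two operations would vanish; most experts bet it never will.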
This matters enormously for law.
Mapping Tractability Onto Legal Work
Here’s where it gets interesting. Much of what lawyers do day-to-day is already tractable, or rapidly becoming so.
Document review? Tractable. Predictive classification can sort millions of files into “relevant” and “not relevant” at scale. Legal research? Tractable. Finding a specific precedent is fundamentally a search-and-index problem, and databases like Westlaw handle it beautifully. Contract management? Tractable. Tracking thousands of expiration dates and flagging non-standard clauses is exactly the kind of structured, repeatable work that machines eat for breakfast.
These are the tasks that AGI, or even today’s narrow AI, will continue to absorb. And we should let it. Automating tractable work frees lawyers to focus on what actually requires them.
Because the practice of law, at its core, is intractable.
Why Legal Judgment Resists Computation
Corporate litigation is the textbook real-world intractable problem. Consider the variables: millions of documents, thousands of precedents, multiple parties with competing interests, and the gloriously unpredictable variable of human behavior. Add more defendants and the number of possible cross-claims and settlement combinations doesn’t just double: it explodes exponentially.
Worse, litigation is a moving target. A witness changes testimony. A judge issues a surprise ruling. New regulations land mid-trial. In a tractable math problem, the rules stay fixed. In litigation, the problem itself keeps changing.
And then there’s the adversarial dimension. Unlike a math equation, litigation involves an opponent actively trying to make the problem harder for you. This creates layers of strategic recursion (“I think that they think that I think...”) that no algorithm can cleanly resolve.
In-house counsel face their own version of intractability. Balancing a CEO’s aggressive growth targets against a CFO’s budget constraints, a board’s fiduciary duties, and a regulator’s shifting rules is what physicists call an “N-body problem.” Calculating the gravitational pull between two planets is straightforward. Three or more? Famously, there is no general closed-form solution. There is no “correct” answer that satisfies everyone simultaneously. There is only judgment, applied under pressure, in real time.
And consider what I call the Prevention Paradox. In computer science, the Halting Problem tells us it’s impossible to build a program that can always predict whether another program will eventually stop or run forever. In-house counsel face an analogous challenge: their primary job is preventing things that haven’t happened yet. How do you calculate the value of a lawsuit that didn’t occur? How much risk is too much? The problem space is infinite, filled with unknown unknowns.
How Lawyers Already “Solve” Intractability
Lawyers have been managing intractable problems for centuries using the same strategies computer scientists use.
Settlement is a heuristic — a “good enough” solution that avoids the exponential cost of a full trial. If solving the problem costs more than the solution is worth, you settle. It’s not mathematically perfect, but it’s rational.
Summary judgment is scope restriction — pruning the decision tree, removing legal issues from dispute to make the remaining problem smaller and more manageable.
Standard contract templates are what I’d call “tractabilizing” — taking a complex negotiation with infinite possible variations and reducing it to a plug-and-play exercise with known parameters.
Lawyers have been acting as human heuristics, making the intractable manageable, all along. They just didn’t have the vocabulary for it.
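In computing terms, settlement behaves like a greedy heuristic: you trade the guarantee of optimality for an answer you can afford. A nearest-neighbor sketch for the Traveling Salesperson Problem makes the trade-off visible (the distance matrix is invented for illustration):

```python
def nearest_neighbor_tour(dists):
    """Greedy heuristic: from each city, hop to the closest unvisited one.
    Runs in O(n^2) steps instead of (n-1)!, but may miss the optimum."""
    n = len(dists)
    unvisited = set(range(1, n))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: dists[last][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(dists, tour):
    """Total length of a tour, including the hop back home."""
    return sum(dists[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

# Toy 4-city matrix (illustrative numbers only).
dists = [
    [0, 2, 9, 10],
    [1, 0, 6, 4],
    [15, 7, 0, 8],
    [6, 3, 12, 0],
]
greedy = nearest_neighbor_tour(dists)
print(greedy, tour_length(dists, greedy))  # → [0, 1, 3, 2] 33
```

On this matrix the greedy tour costs 33 while the true optimum is 21. Settlement works the same way: accept 33 when chasing 21 would cost more than the gap is worth.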
How Tractability Informs Our Response to AGI
When AGI arrives, the tractable work (the research, the review, the routine drafting) will be handled by machines, and much of it already is. Lawyers who define their value by those tasks are in trouble.
But the intractable work isn’t going anywhere. Strategic judgment in multi-variable, adversarial, constantly shifting environments. Ethical reasoning at the edges where rules conflict. The human capacity to weigh incommensurable values — justice against efficiency, risk against opportunity, the letter of the law against its spirit — under genuine uncertainty.
You can automate the process of law. You cannot automate the judgment of law.
AGI, when it comes, will be the most powerful tool lawyers have ever had for conquering tractable problems. But intractability isn’t a limitation of current technology. It’s a property of the problems themselves. More computing power doesn’t make the Traveling Salesperson Problem tractable. And more artificial intelligence won’t make the judgment calls of a general counsel or a trial lawyer computable.
Closing Thoughts
I’ve spent a decade building AI tools for the legal industry. I’ve watched AI go from a novelty to a necessity. And I’ll be the first to tell you: the vast majority of what lawyers currently bill for is tractable work that machines will do better, faster, and cheaper.
But the longer I work at this intersection, the more convinced I become that the core of lawyering (the part that actually matters) was never about processing information. It was about making judgments in the face of irreducible uncertainty. About standing in a room and saying, “This is what I believe is right, and here’s why, and I’ll be professionally responsible for it.”
That’s not a tractable problem. It’s not even an intractable one. It’s a human one.
And, that’s not a consolation prize. That’s the point of being a lawyer.
The machines will handle the sand-counting. You handle the judgment.
Tom Martin is CEO & Founder of LawDroid, Adjunct Professor at Suffolk University Law School, and Author of the forthcoming AI with Purpose: A Strategic Blueprint for Legal Transformation (Globe Law and Business). He is “The AI Law Professor” and writes his eponymous column for the Thomson Reuters Institute.



