Tom, this is very good and thought-provoking. I’ve been processing similar thoughts along different lines, but I think we’re in agreement that “reliable judgment under uncertainty” is the lawyer’s killer app — and that “trustworthy competence to form reliable judgment” ought therefore to be the only real requirement for lawyer licensure.
The only thing that concerns me is the possibility that people (individually and in their corporate form) might decide “judgment” is an unnecessary luxury, and that they’ll settle for “decisions with coverage”. It won’t matter to them that they get the *right* advice or the *best* call — they’ll only care that someone made a decision and someone else will pay compensation if it goes wrong.
I think back to when title insurance first arrived in the real estate market in the 1990s. Real estate lawyers were apoplectic: “Title insurance doesn’t prove you have title to the property you’re buying; it just papers over the cracks in the title system and pays out an insurance policy if title is invalid. Only a properly conducted lawyer search of title records can guarantee both the validity of your property ownership and the integrity of the title system itself.”
And it turned out that nobody really cared. People were more than content to pay a small fraction of a real estate lawyer’s fee in exchange for a promise that, in the unlikely event of a challenge to title, the insurance company would handle it. And from what I can tell, hardly anyone pays for lawyers’ title searches anymore, at least for residential and maybe for commercial too.
I suppose what I’m worried about today is not that AI will replace lawyers’ capacity for judgment. It’s that, in our increasingly coarsening and deadening world, people won’t care enough about judgment to pay for it.
Thank you, Jordan! With this piece, I was trying to define the "lawyer space" more precisely, since it's usually framed as hand-wavy "strategy" or "higher-value work." But I completely agree with you that this line is theoretical and requires client adherence as well, and that, especially for those who are unfamiliar with, and do not value, professional judgment, it is subject to cannibalization by "good enough" judgment.
Yeah, I keep circling "judgment" as the concept that's getting the most airtime in the "post-AI lawyer" value package these days. I wrote a whole article about it earlier this month: https://jordanfurlong.substack.com/p/how-to-surface-lawyers-professional.
And yet, I'm still not sure myself that "judgment" is the bedrock layer -- I think there's further down we can drill. Judgment (or discernment or evaluation or wisdom) isn't something people desire in and of itself. It's a means to an end -- a capacity that's applied to a situation in order to ... what? What's the outcome of applied judgment? What does it create, or change?
I think that the point of applying judgment is to bring an uncertain situation to a (positive) conclusion -- to replace uncertainty (which humans don't like) with clarity (which humans do like) and the capacity to proceed further, from a better position than before. Somewhere in there, I think, is the point of lawyers. I'm still mulling it over.
We'll have to dig into this on your podcast again sometime! Thanks again.
I think the bedrock for clients is feelings. The client deliverable from hiring a lawyer has always been "peace of mind" or "confidence." That doesn't mean that the lawyer delivers the outcome they want or expect, but that they feel (1) that the lawyer takes on the problem and makes it their own, (2) given that, the lawyer will pursue all available legal means to achieve the desired outcome, and (3) that should the lawyer make a mistake, they (the lawyer) are financially responsible for professional negligence. These 3 things engender that sense of peace of mind and/or confidence.
I agree with your observation that feelings are at the core of the attorney-client relationship, Tom. The relief a client feels upon hiring an attorney would quickly evaporate if the attorney’s emotional investment disappeared as soon as an arbitrary context window filled up. “CLAUDE: If I previously gave you the exact opposite advice from what I am now saying, it is unsurprising that you lost your case, were sanctioned, and were jailed for contempt. You are right to push back.” (Of course, AIs may soon give us a run for our money when it comes to social intelligence.)
Lawyers have a different capacity and tolerance for reasoning under uncertainty than regular people.
I don't think it's something we learn in law school. Instead, it's a feel that comes from the practice of hearing a good fact in an intake, exploring it in written discovery, and then watching it transform into a bad fact during a deposition with a skilled opponent.
I represent people who are very sophisticated in their own domain yet somehow lack a lawyer's appreciation for the fact that different people can view the same car wreck/traffic stop/family dynamics and see different things.
Clients believe their perception is uncontroverted fact and start reasoning from there. So they prompt an AI with a factual position that supports their outcome and then ask the lawyer to affirm their reasoning.
Meanwhile, the lawyer assumes that not everyone agrees that the traffic light was red, or green, or whatever.
This is a great way to put it.
A lot of clients treat their perception as settled fact and then reason forward. Lawyers, by contrast, instinctively treat perception itself as provisional, something that will be reframed, contested, and pressure-tested.
That’s also where AI gets interesting. A model will happily reason from whatever factual framing it’s given. It doesn’t naturally ask “what if the light wasn’t red?” unless prompted to.
So the real skill isn’t just reasoning under uncertainty, it’s remembering that the “facts” are often the first layer of uncertainty.
This was an insightful framing with real technical depth. What resonates for me is how you map tractability from computer science onto legal practice. It's a crisp way to explain why some legal problems are amenable to automation and others are not, even for powerful AI. That lens helps cut through a lot of vague hype about “AI replacing lawyers.”
Really appreciate how you grounded this in a well‑defined concept instead of abstract rhetoric. It opens up a more honest conversation about where AI fits in legal work.
One of the best articles I've read on the intersection between jurisprudence and artificial intelligence.
Thank you Manzi! 🙏
Your argument seems to be using different standards for humans and AGI, rather than comparing apples to apples.
No, AGI can't solve intractable problems efficiently, but neither can humans. Yes, human lawyers can manage intractable problems with heuristics, but by definition so can AGI.
It's logically inconsistent to say that AGI can't replace human lawyers because humans can manage intractable problems while AGI can't solve them.
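To make the heuristics point concrete, here's a toy sketch (my own illustration, not anything from the article): the 0/1 knapsack problem is NP-hard, so an exact search over all subsets is exponential, while a simple greedy heuristic returns a "good enough" answer in polynomial time. Nothing in that trade-off cares whether the reasoner is human or machine.

```python
from itertools import combinations

# Items as (name, weight, value); the knapsack holds at most CAPACITY weight.
ITEMS = [("A", 6, 30), ("B", 3, 14), ("C", 4, 16), ("D", 2, 9)]
CAPACITY = 10

def exact_best(items, capacity):
    """Brute force over all 2^n subsets: optimal, but exponential time."""
    best_value, best_combo = 0, ()
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            weight = sum(w for _, w, _ in combo)
            value = sum(v for _, _, v in combo)
            if weight <= capacity and value > best_value:
                best_value, best_combo = value, combo
    return best_value, best_combo

def greedy(items, capacity):
    """Heuristic: grab items by value/weight ratio. Fast, no optimality guarantee."""
    total_weight, total_value, chosen = 0, 0, []
    for item in sorted(items, key=lambda it: it[2] / it[1], reverse=True):
        if total_weight + item[1] <= capacity:
            chosen.append(item)
            total_weight += item[1]
            total_value += item[2]
    return total_value, tuple(chosen)

print("exact: ", exact_best(ITEMS, CAPACITY))  # value 46: optimal, exponential search
print("greedy:", greedy(ITEMS, CAPACITY))      # value 44: near-optimal, polynomial time
```

Run it and the greedy pass settles for 44 where the exhaustive search finds 46: exactly the "manage, don't solve" move, available to any reasoner that adopts the heuristic.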
We can teach both humans and machines a better grammar of thought and perception. The "AI can't feel and it's not empathic like a human" line is both a little funny and a little sad, because often both can be bad at it.
AGI (however defined, if it ever comes in the forms we think) might radically expand the set of tractable problems. But intractability in law and governance isn’t just a compute issue; it’s about value conflict under uncertainty. You can scale research, drafting, and synthesis, but you can’t compute away trade-offs between incommensurable goods.
More intelligence doesn’t dissolve normativity. It amplifies whatever decision grammar is already in play.
If anything, the closer we get to systems that can handle vast tractable workloads, the more pressure shifts onto judgment: how values are weighted, how uncertainty is tolerated, how escalation is constrained. That layer doesn’t disappear with capability; it just becomes more exposed.
So the real question isn’t just what AGI can automate, but what governs its use.
Can you explain why these problems are intractable in computational terms?
The halting problem is only part of it. I’ve been working in jurisprudence and information philosophy. It’s a rich story; I’m working on a monograph on it. Check out my series on mathematics and law.
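Since the question of what "intractable in computational terms" means came up just above, here is a minimal sketch (my own illustration, in Python) of the diagonal argument behind the halting problem. The `halts` oracle below is hypothetical by construction, and its impossibility is the whole point:

```python
def halts(program, argument) -> bool:
    """Hypothetical total decider: True iff program(argument) terminates.
    Turing's diagonal argument shows no such function can exist."""
    raise NotImplementedError

def diagonal(program):
    """Do the opposite of whatever the oracle predicts about program(program)."""
    if halts(program, program):
        while True:   # oracle said "halts", so loop forever
            pass
    return "halted"   # oracle said "loops", so halt immediately

# If halts() existed, diagonal(diagonal) would halt if and only if it does
# not halt: a contradiction. So no general decider exists. Any legal or
# governance question rich enough to encode this kind of self-reference is
# undecidable in principle, not just expensive to compute.
```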