Discussion about this post

Jordan Furlong:

Tom, this is very good and thought-provoking. I’ve been processing similar thoughts along different lines, but I think we’re in agreement that “reliable judgment under uncertainty” is the lawyer’s killer app — and that “trustworthy competence to form reliable judgment” ought therefore to be the only real requirement for lawyer licensure.

The only thing that concerns me is the possibility that people (individually and in their corporate form) might decide “judgment” is an unnecessary luxury, and that they’ll settle for “decisions with coverage”. It won’t matter to them that they get the *right* advice or the *best* call — they’ll only care that someone made a decision and someone else will pay compensation if it goes wrong.

I think back to when title insurance first arrived in the real estate market in the 1990s. Real estate lawyers were apoplectic: “Title insurance doesn’t prove you have title to the property you’re buying; it just papers over the cracks in the title system and pays out an insurance policy if title is invalid. Only a properly conducted lawyer search of title records can guarantee both the validity of your property ownership and the integrity of the title system itself.”

And it turned out that nobody really cared. People were more than content to pay a small fraction of a real estate lawyer’s fee in exchange for a promise that, in the unlikely event of a challenge to title, the insurance company would handle it. And from what I can tell, hardly anyone pays for lawyers’ title searches anymore, at least for residential and maybe for commercial too.

I suppose what I’m worried about today is not that AI will replace lawyers’ capacity for judgment. It’s that, in our increasingly coarsening and deadening world, people won’t care enough about judgment to pay for it.

Alex Freeburg:

Lawyers have a different capacity and tolerance for reasoning under uncertainty than regular people.

I don't think it's something we learn in law school. Instead, it's a feel that comes from practice: hearing a good fact in an intake, exploring it in written discovery, and then watching it transform into a bad fact during a deposition with a skilled opponent.

I represent people who are very sophisticated in their own domains yet somehow lack a lawyer's appreciation for the fact that different people can look at the same car wreck, traffic stop, or family dynamics and see different things.

Clients believe their perception is uncontroverted fact and start reasoning from there. So they prompt an AI with a factual position that supports their desired outcome and then ask the lawyer to affirm their reasoning.

Meanwhile, the lawyer assumes that not everyone agrees that the traffic light was red, or green, or whatever.

