The Selection Problem: Why AI Can Solve, But Can’t Choose, Problems Worth Solving
Where I explore why the most strategic act in law isn’t solving problems, but deciding which ones matter
Who chooses the problem we’re solving today?
You can build the most powerful search engine ever created, but someone still has to type in the query. You can build an AI that drafts a brilliant legal memo in seconds, but someone still has to decide that this memo, on this issue, for this client, is the thing that needs drafting right now, and not the forty-seven other things competing for attention.
We’ve spent the last three years getting very excited about AI’s ability to solve problems. We’ve spent almost no time asking who decides which problems get solved. And that silence tells you something about a profession that has, for too long, confused doing legal work with solving legal problems.
I call this the Selection Problem: the irreducibly human act of looking at the full landscape of possible problems and deciding which ones deserve our finite attention, resources, and care. It is, I believe, the domain where lawyers will create the most value in an AI-augmented world, and it is the domain that AI is least equipped to occupy.
If this sounds interesting to you, please read on…
This substack, LawDroid Manifesto, is here to keep you in the loop about the intersection of AI and the law. Please share this article with your friends and colleagues and remember to tell me what you think in the comments below.
The Objective Function
Here’s a concept from the world of algorithms that I think illuminates everything. When engineers build a system to find the “best” answer to a problem, they first have to define what “best” means. They write what’s called an objective function: a set of instructions that tells the system what to aim for. Minimize cost. Maximize speed. Find the shortest route. Each goal produces a different answer, and the system will faithfully pursue whichever goal you set. But, and this is the critical part, the system never chooses the goal for itself.
Want the cheapest solution? The system will find it. Want the fastest? It’ll find that too. But it will never look at your situation and say, “Actually, you’re optimizing for the wrong thing. Cost isn’t your real problem here.” That judgment, the act of deciding what matters, always comes from outside the system. Always.
A logistics AI doesn’t wake up one morning and conclude that carbon emissions matter more than delivery speed. A legal AI doesn’t decide that a client’s emotional wellbeing should outweigh the letter of the contract. Those are value judgments, and value judgments live upstream of computation.
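To make this concrete, here is a minimal sketch in plain Python (hypothetical routes and numbers, no real logistics system) of how an objective function works: the same candidates produce three different "best" answers, and in every case the goal itself is handed in from outside the optimizer.

```python
# A toy objective-function demo: the "best" route depends entirely on
# which goal a human hands the optimizer. (Hypothetical data.)

routes = [
    {"name": "highway",    "cost": 120, "hours": 2.0, "emissions": 40},
    {"name": "back roads", "cost": 80,  "hours": 3.5, "emissions": 25},
    {"name": "rail hub",   "cost": 95,  "hours": 5.0, "emissions": 10},
]

# Three different objective functions. The optimizer below never chooses
# among them; a person does.
objectives = {
    "minimize cost":      lambda r: r["cost"],
    "minimize time":      lambda r: r["hours"],
    "minimize emissions": lambda r: r["emissions"],
}

for goal, objective in objectives.items():
    best = min(routes, key=objective)  # faithfully optimizes whatever goal it was given
    print(f"{goal}: {best['name']}")

# Output:
# minimize cost: back roads
# minimize time: highway
# minimize emissions: rail hub
```

Swap the objective and the answer changes; nothing inside the loop ever asks whether cost, time, or emissions is the thing that actually matters. That question lives entirely outside the code.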
Now, to be fair, AI has become remarkably good at creating structure from unstructured information. Hand it a pile of disorganized client documents, contradictory witness statements, or a sprawling regulatory landscape, and it will impose order. It will categorize, cluster, summarize, and surface patterns you might have missed. That is genuinely valuable, and it would be dishonest to pretend otherwise.
But structuring information is not the same as deciding which structures matter. AI can organize the mess on your desk into neat piles. It cannot tell you which pile represents the problem your client is afraid to name, or which pile contains the seeds of a crisis that hasn’t yet become visible. It doesn’t know what counts as a problem, because “what counts as a problem” is a function of human values and worries (fear, sleepless nights, shame), institutional priorities, and contextual judgment, not data patterns.
The gap is between what can be structured and what is worth solving.
The Myth of the Liberated Lawyer
You’ve heard the pitch a hundred times. AI will handle the drudge work, the document review, the research, the first drafts. You’ll be free to “think more strategically.” You’ll “practice at the top of your license.” The tedious stuff drops away, and what remains is the good stuff: the high-level judgment, the creative strategy, the work you went to law school to do.
I want to take this claim seriously, because I think it contains a genuine insight buried inside a significant blind spot.
The insight is real: AI does, in fact, liberate time. When a task that took forty hours takes four, something has to fill the remaining thirty-six. And the optimistic case is that lawyers fill it with higher-order thinking.
The blind spot is that “think more strategically” is not a self-executing instruction. Strategy doesn’t simply appear when you clear calendar space for it. Strategy begins with problem identification, the act of surveying a complex, ambiguous, often contradictory landscape and deciding what the actual problem is. Not the obvious problem. Not the problem the client thinks they have. The real one.
And here is where the profession needs to have an honest conversation with itself. For too long, we’ve defined lawyers by their outputs: wills drafted, contracts reviewed, motions filed. We’ve treated legal work as a series of tasks to be completed, turning lawyers into, frankly, task rabbits. But the definition of a lawyer has never been “a person who produces legal documents.” A lawyer is a person who identifies and solves legal problems. The document is an artifact of the solution; it is not the solution itself.
When we reduce lawyering to task completion, we hand AI its easiest possible victory. Of course a machine can draft a will. The question was never whether it could produce a document. The question is whether it can sit across from a grieving widow, understand the family dynamics she’s too proud to articulate, and recognize that the real problem isn’t the will at all; it’s the estranged son and the business succession plan nobody wants to discuss. That’s problem identification. That’s lawyering.
Consider a general counsel with fifty matters on her desk. AI can help with every single one of them. It can draft motions, summarize contracts, flag regulatory changes, analyze discovery. But it cannot tell her which five of those fifty matters actually threaten the company’s strategic position. It cannot weigh the CEO’s appetite for risk against the board’s tolerance for ambiguity against the competitive dynamics of a shifting market. It cannot feel the political undercurrent in the organization that makes one seemingly minor compliance issue a powder keg and another a non-event.
Those are selection problems. And they require a human being standing in the middle of the mess, accountable for the consequences.
The Multiplication Paradox
Here is where the Selection Problem gets harder, not easier, with better AI.
Consider what happens when AI gets dramatically better at solving problems. Every task that used to take a team of associates a week now takes an afternoon. Contract review that consumed months of junior lawyer time happens in hours. Regulatory analysis across twelve jurisdictions, something that would have been a major staffing decision, becomes a Tuesday morning prompt.
This sounds like liberation. It is, in fact, a multiplication of the Selection Problem.
When your capacity to solve problems expands by an order of magnitude, the number of problems you could solve expands with it. That general counsel who had fifty matters on her desk? Now she can meaningfully act on all fifty. But she still has the same number of hours, the same budget, and the same board with the same risk tolerance. AI didn’t reduce the number of decisions she has to make; it increased them. Every problem that was previously too expensive to touch is now within reach, which means every one of them demands a selection decision that didn’t exist before.
A law firm that once had to triage aggressively because capacity was scarce now faces a different kind of scarcity: the judgment to know which of its newly solvable problems actually deserve solving. A legal aid organization that can suddenly process ten times the inquiries now confronts a question it used to answer by default through resource constraints: which of these clients’ problems do we prioritize when we can technically help all of them?
More capability, more choices. More choices, more consequential selection. The resource allocation question, who gets our finite attention, our finite hours, our finite best thinking, doesn’t get answered by better AI. It gets amplified by it.
And the counterargument, that AI will eventually learn to select problems through preference learning or value alignment, misses the point. Even if an AI could perfectly model a firm’s stated values and a client’s expressed preferences, it would still be optimizing against an objective function that someone had to define. The recursive problem remains: who decides what the AI should value? Who writes the function that tells the machine which problems are worth its attention? That’s still a human standing in the gap, making a judgment call, bearing the consequences.
The Selection Problem doesn’t shrink as AI improves. It grows.
The Exposed Gap
If the Selection Problem is as important as I’ve argued, you would expect the legal profession to have spent decades cultivating it. Training it. Rewarding it. Building institutions around it.
We haven’t.
Here’s what actually happened: AI didn’t create the Selection Problem. It exposed a gap that was already there. For decades, the economics of legal practice allowed us to avoid confronting it. When the billable hour was king, there was no incentive to ask whether a problem was worth solving; there was only an incentive to solve it and bill for the time. When task volume was the measure of a practice, selection was a luxury. You did the work in front of you. You didn’t ask whether it was the right work.
Law schools don’t teach problem selection. They teach issue spotting, which is not the same thing. Issue spotting is a bounded exercise: here is a fact pattern, find the legal issues. Problem selection is unbounded and human-centered: here is a client, a community, an institution embedded in a web of human relationships and competing pressures. What, among everything that could be a legal problem here, actually is one? And among those, which ones matter most?
The profession never built this muscle at scale because the old economic model didn’t require it. Task completion was profitable. Document production was measurable. Problem selection was invisible, done informally by senior partners with good instincts, never codified, rarely taught, impossible to bill for directly.
Now AI is stripping away the task layer. The work that used to fill our days and justify our billing is increasingly handled by machines. And what’s left, what AI cannot do, is the Selection Problem. The thing we should have been training for all along.
This is not a story about AI’s limitations. This is a story about ours. And it leaves us with a single, direct challenge: when the task work disappears, what remains underneath it?
Closing Thoughts
I started with a simple question: who decides what is worth solving? The answer, it turns out, is more revealing than I expected.
AI is the most powerful problem-solving engine humanity has ever built. It can draft, research, analyze, and synthesize faster and more reliably than any team of lawyers. And as it gets better, it doesn’t solve the Selection Problem; it makes the Selection Problem bigger, more urgent, more consequential. Every new capability is another fork in the road that requires a human being to choose a direction.
The profession’s future doesn’t belong to the lawyers who learn to use AI most efficiently. It belongs to the ones who can stand in a room full of newly solvable problems and say: this is the one that matters. Not that one. Not those. This one. And here’s why, and I’ll own the consequences of being wrong.
We’ve been task rabbits long enough. The machines are here for the tasks.
It’s time to choose wisely.
If you found this article useful, you’ll love the LawDroid AI Conference 2026. April 28–29, virtual, and completely free — two days of keynotes, panels, and workshops on AI and the legal profession. I’d love to see you there.



