The Jagged Fit: Why Your AI May Not Be That Into You
Where I explore how the irregular contours of AI capability and human intelligence might be made to interlock, and what that means for finding your perfect match
Have you ever felt out of sync with the AI you work with? That it does not quite get you the way another model used to? Maybe you think fondly of the easy back-and-forth you used to have with GPT-4o, before it was quietly put out to pasture, and the model that replaced it, for all its raw capability, just is not the same. The drafts come back a half-step off. The prompts that used to land now glance off. The rhythm has gone, and you cannot quite say why.
If that is your experience, you are not alone, and the explanation is probably not the one you have been reaching for. We tend to blame the model, or blame ourselves, or blame the prompt. “It’s not you, it’s me.” Most of the time, however, none of those is the real culprit.
The real culprit is fit.
The real question is whether you and your AI are a good fit.
I call this The Jagged Fit.
AI capability has an irregular contour, brilliant at some tasks, hopeless at adjacent ones. Human ingenuity also has an irregular contour, brilliant at some things, hopeless at adjacent ones. The two surfaces can lock together or they can grind against each other. The pairing, not the person and not the model, is the measure of capability.
If this sounds interesting to you, read on...
This substack, LawDroid Manifesto, is here to keep you in the loop about the intersection of AI and the law. Please share this article with your friends and colleagues and remember to tell me what you think in the comments below.
The Jagged AI Frontier
Ethan Mollick gave us the metaphor that started this. He called it the jagged frontier of AI capability. Picture a coastline, irregular and unpredictable. On one side of the line, the model is genuinely capable. On the other side, it fails, sometimes catastrophically. The line itself does not run where you expect it to run. Tasks that look hard turn out to be easy for the model. Tasks that look easy turn out to be hard. You cannot tell from the outside which is which.
The current generation of frontier models is remarkable. They can extract information and produce structure from unstructured text at speeds no human can match. They draft, summarize, translate, classify, and pattern-match across enormous bodies of material. For a profession that has been buried in documents for centuries, this is not a small thing.
But the jagged frontier is real. Inside the same conversation, a model that has just produced a useful contract analysis will confidently invent a citation. A model that has summarized a 90-page deposition with insight will fail at a piece of basic arithmetic. The capabilities of AI are like a coastline, with bays and inlets and the occasional cliff. Mollick’s contribution was to give us a way to see this clearly. AI is not uniformly competent or uniformly incompetent. It is jagged.
The natural conclusion most people draw from this is: learn the coastline. Find out where the model is strong and where it is weak, and stay on the strong side of it. That is good advice as far as it goes. But it stops one move short of the more interesting question.
Humans Are Jagged Too
Psychology has been telling us this for a century. The single-number version is IQ. One score, one point on a bell curve, one ranking. IQ has been useful for some things and disastrous for others. It is contaminated by cultural assumptions baked into the tests. But the deeper issue with IQ is that it pretends intelligence is one-dimensional.
Howard Gardner’s theory of multiple intelligences, whatever its empirical limits, points toward different dimensions: linguistic intelligence, logical-mathematical intelligence, spatial intelligence, musical intelligence, interpersonal intelligence, intrapersonal intelligence, bodily-kinesthetic intelligence. People are not equally strong across these dimensions. They are not even comparable across them. A great trial lawyer and a great patent lawyer are drawing on different intelligences, and the patent lawyer might be lost in front of a jury while the trial lawyer might be lost in describing an invention. Both are excellent. But their excellence has a different shape.
Personality has the same structure. The Big Five model gives us five independent axes: openness, conscientiousness, extraversion, agreeableness, neuroticism. Two people can have the same overall “score” and be unrecognizably different in how they work, what they notice, what frustrates them, what energizes them. Cognitive style is similar. Some people think in pictures, some in words, some in systems. Some need silence to concentrate. Some need pressure. Some are at their best at 6 a.m. Some are night owls.
Take the Myers-Briggs types as accessible shorthand, mindful of the instrument’s well-known limits. Picture an INFJ lawyer and an ISTJ lawyer working the same case. The INFJ thinks in patterns and meanings. She sees the matter as a story before she sees the facts as a list, and she is uneasy until the narrative comes together. The ISTJ thinks in ordered facts. He sees the matter as a record before he sees it as a story, and he is uneasy until the timeline is airtight and the citations are all squared away. Both can be excellent lawyers. Both can sit at desks across the hall from each other. Their best work has different shapes, and the work that drains one is often the work that energizes the other. Hand them the same memo to draft and you will get two unrecognizable products, each defensible, each shaped by the mind that produced it.
When you put all of this together, the picture is unavoidable. Human intelligence, like AI capability, is jagged. Each of us has a coastline. Each of us has bays where the work flows and cliffs where it stalls. The jaggedness is not a flaw to be smoothed; it is a feature of being a unique individual.
When Two Jagged Edges Meet
Now place the two maps side by side.
If you press two jagged edges together at random, they clash. There are gaps where neither side fills the space, and ridges where both sides claim the same territory. This is the bad fit. The lawyer’s strength overlaps with the model’s strength, so neither is leveraged. The lawyer’s weakness overlaps with the model’s weakness, so neither is covered. The pairing produces less than either party would produce alone.
But if you align two irregular surfaces with attention to their contours, something different happens. The peaks of one fit the valleys of the other. The pieces interlock. The lawyer’s weakness is met by the model’s strength; the model’s weakness is met by the lawyer’s strength. The pair becomes more capable than either party alone, and the increase is not modest. It is the difference between dragging a tool and dancing with a partner.
Consider two associates. The first associate has a strong intuitive sense for narrative and a weak appetite for procedural detail. Pair her with a model that is rigorous about citations, exhaustive in checking deadlines, and willing to fact-check her drafts without complaint. Her weak side is covered. Her strong side is amplified, because she now has time to do what she is best at. The fit works.
The second associate is the inverse. He is meticulous with detail, careful with the record, and slow to commit to a narrative arc. Give him the same model and you have doubled down on his strength and left his weakness exposed. He needs a different partner: one tuned to push him toward narrative commitment, to surface the human story buried in the documents, to draft boldly so that he can edit. Same firm, same case type, different fit.
This is why “which AI is best for lawyers” is the wrong question. The right question is which AI is best for this lawyer, on this kind of work, in this phase of her career.
I want to be careful because the dating metaphor in the title can be taken too far. I am not arguing that AI tools have personalities in any deep sense, or that we should anthropomorphize them. I am saying something different. Different models, with different training, different defaults, different reasoning styles, behave differently. Different lawyers, with different intelligences, different temperaments, different working styles, work differently. When you pair an AI with a lawyer, the combination has capabilities neither has on its own. Some work. Some don’t.
The Matchmaking Problem
Organizational psychology has worked on a version of this problem for fifty years under the name person-environment fit. The literature is large and the findings are robust: when a person’s strengths, values, and working style align with the demands and culture of their role, performance and well-being both rise. When they misalign, performance drops and burnout follows.
The same logic applies to person-AI fit. On the human side: cognitive style, domain expertise, personality profile, the specific tasks the lawyer actually performs in a typical week, the kinds of mistakes she is prone to making, the kinds of work that energize her, the kinds that drain her. On the AI side: model behavior under different prompt styles, default tone, willingness to push back, depth of reasoning, hallucination patterns by domain, latency, the shape of its strengths and weaknesses across the practice areas in question. None of this is mysterious. Most of it is measurable. We have not measured it yet in any organized way because we are still treating AI procurement as a software decision rather than as a partnership decision.
In machine learning, every serious model now ships with what its developers call a model card: a document describing the model’s intended uses, training data, performance characteristics, and known limitations. The cards exist precisely because models are not interchangeable. Read three cards side by side and you start to see why fit matters.
Model A, the Cautious Generalist. Strong at nuanced writing, careful with citations, willing to flag its own uncertainty, capable of sustained reasoning over long documents. Slower to commit to a thesis. Hedges when pressed. Best fit: lawyers who already have strong views and want a partner that will test them.
Model B, the Confident Synthesizer. Fast first drafts, broad general knowledge, willing to commit to a structure or a position. Confident even when wrong. More prone to inventing citations under pressure. Best fit: lawyers with deep domain expertise and strong editorial instincts who need a fast starting draft to react against.
Model C, the Citation-Anchored Specialist. Grounded in retrieval, will not invent cases, careful with procedural detail, deep in regulated practice areas. Narrow outside its domain. Less fluent at narrative or argument. Best fit: lawyers in heavily regulated areas where errors are catastrophic and retrieval discipline is the central virtue.
Now hand each of those models to the INFJ and the ISTJ from earlier and watch what happens. The INFJ paired with Model C is in a good marriage. The ISTJ paired with Model B is in a dangerous one. The INFJ paired with Model A may have found a sparring partner that finally pushes her ideas into shape. The ISTJ paired with Model A may find his work bogged down. The model cards and the personality types are crude tools, and yet even with crude tools the matching question becomes clear.
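To make the matching intuition concrete, here is a deliberately toy sketch of "fit as complementarity." Everything in it is an illustrative invention, not a real instrument: the dimensions, the 0-to-1 strength profiles, and the scoring rule are assumptions chosen to show the shape of the idea, namely that a pairing scores well where a model's peaks cover a lawyer's valleys, and poorly where both sides leave the same gap.

```python
# Toy sketch only: dimensions, profiles, and weights are invented for illustration.
DIMENSIONS = ["narrative", "citation_rigor", "procedural_detail", "speed_to_draft"]

def fit_score(lawyer: dict, model: dict) -> float:
    """Score a lawyer-model pairing, given 0..1 strength profiles.

    Reward dimensions where the model's strength covers the lawyer's
    weakness (peaks meeting valleys); penalize dimensions where both
    are weak (a gap neither side fills).
    """
    coverage = sum(model[d] * (1 - lawyer[d]) for d in DIMENSIONS)
    gaps = sum((1 - model[d]) * (1 - lawyer[d]) for d in DIMENSIONS)
    return (coverage - gaps) / len(DIMENSIONS)

# The narrative-minded associate from earlier: strong story, weak procedure.
infj = {"narrative": 0.9, "citation_rigor": 0.3,
        "procedural_detail": 0.2, "speed_to_draft": 0.6}

# Hypothetical profiles for two of the model archetypes above.
model_c = {"narrative": 0.3, "citation_rigor": 0.95,   # Citation-Anchored Specialist
           "procedural_detail": 0.9, "speed_to_draft": 0.5}
model_b = {"narrative": 0.85, "citation_rigor": 0.4,   # Confident Synthesizer
           "procedural_detail": 0.4, "speed_to_draft": 0.95}

# Model C covers her valleys; Model B doubles down on her peaks.
print(fit_score(infj, model_c) > fit_score(infj, model_b))  # prints True
```

With these made-up numbers, the specialist model scores well with the narrative-minded lawyer because it fills the dimensions she leaves empty, while the synthesizer duplicates her strengths and shares her weaknesses. A real version of this would need validated instruments on both sides; the point here is only that the matching question is computable in principle.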
I do not think we are far from being able to do this matching with AI, especially as models become more powerful. The signals are there. The instrumentation is there. What is missing is the recognition that the question matters.
Closing Thoughts
The first generation of AI in law has been dominated by a question of the form: which model is best? I think it’s the wrong question, the way “what is the best food?” is the wrong question. It assumes a universal palate, and there is none. There is no universal lawyer either.
The second generation will be dominated by a different question. Not which model, but which pairing. Not capability, but fit. The lawyer who flourishes with AI will not necessarily be the most technical or the most enthusiastic. She will be the one who has found, by luck or by design, an AI partner whose jagged edges meet hers.
The technology will keep getting better. The frontier will keep moving. The coastline will keep shifting. None of that changes the underlying point, which is that two irregular shapes, well matched, can do what neither could do alone.
So if your AI is not that into you, it might not be your fault. It might be that you are simply with the wrong partner.
The right fit is out there. Play the field.
Tom Martin is CEO & Founder of LawDroid, Adjunct Professor at Suffolk University Law School, and Author of AI with Purpose: A Strategic Blueprint for Legal Transformation (Globe Law and Business). He is “The AI Law Professor” and writes his eponymous column for the Thomson Reuters Institute.



