AI Blinders: Breaking Free from the Anchor of Generative AI's First Answer
Where I explore how relying on AI’s first answer can limit our legal imagination, and offer ways to keep our curiosity alive
Welcome back, curious minds and boundary-breakers! Are you ready to see how that oh-so-convenient AI-generated first answer can box us in without us even noticing? In this article, I’m exploring the phenomenon I like to call “AI Blinders,” where we trust our digital oracle a bit too readily and let its initial response shape our entire train of thought. ⚖️✨
Pulling threads from cognitive psychology, Daniel Kahneman’s Nobel-winning insights, and even Malcolm Gladwell’s Blink, we’ll explore how you, as a forward-thinking lawyer, can keep your curiosity alive and avoid snapping on those AI blinders. Ready to unlock the next level of critical thinking? Let’s get started!
This substack, LawDroid Manifesto, is here to keep you in the loop about the intersection of AI and the law. Please share this article with your friends and colleagues and remember to tell me what you think in the comments below.
We’ve all been there. You ask an AI system, maybe OpenAI’s Deep Research or another cutting-edge model, a question. It responds with a crisp, neatly packaged answer, complete with citations and analysis. Suddenly, you feel like the puzzle is solved. The answer seems thorough; you have what you need.
Before we go any further, let’s make one thing clear: this article isn’t about “AI hallucinations,” where a model invents facts or sources out of thin air. Even a perfectly accurate AI response can become a powerful mental anchor. In psychology, this is called “anchoring bias” or “first impression bias,” and it has a subtle but potent gravitational pull. When it latches on, curiosity can dwindle, eclipsed by a sense of finality that wasn’t there moments before.
If you’re a lawyer, especially a technologically savvy early adopter, you may already be integrating AI into your practice. It’s fast. It’s convenient. But whether you’re drafting a brief, conducting research, or brainstorming litigation strategy, the risk is that AI’s first salvo becomes the entire war. Let’s call this phenomenon “AI blinders.” You get an answer, and that becomes the lens through which all further discussion unfolds.
This article is a nudge, a gentle but insistent reminder, to keep exploring, keep questioning, and keep using your own expertise long after the AI has spoken.
The Sticky Power of First Impressions
Malcolm Gladwell, in his book Blink: The Power of Thinking Without Thinking, explores how quickly we form impressions. In mere seconds, our brains can conclude whether we like someone, whether a piece of art is genuine, or even who might win an election debate. Gladwell’s thesis is that these snap judgments have surprising accuracy but can also lead us astray if we don’t know when (and how) to engage deeper critical thinking.
In the realm of AI, these “blink” moments manifest as that instant where the machine’s first answer seems definitive. This often happens because:
Authority Bias: AI is shiny and new, and we subconsciously think, “If the computer said it, it must be right.”
Cognitive Ease: It’s much easier to accept a provided answer than to challenge it, particularly under tight deadlines or heavy workloads.
Confirmation Bias: If the AI’s response aligns with what we secretly hoped to hear, we’re more likely to accept it as fact.
But as any seasoned litigator knows, the truth is rarely revealed in a single, neatly packaged statement. If anything, the pat answer should make us inquisitive. The law itself is an endless labyrinth of precedents, exceptions, and evolving interpretations. Why should we expect an AI to pin it down perfectly on the first go?
Anchoring Bias Meets Artificial Intelligence
Anchoring bias is a classic concept in cognitive psychology and behavioral economics, famously outlined by Daniel Kahneman and Amos Tversky. In a well-known 1974 study, participants watched a wheel of fortune land on what appeared to be a random number, then estimated the percentage of African countries in the United Nations. Those who saw higher numbers on the wheel gave far higher estimates than those who saw lower numbers, despite the wheel’s obvious irrelevance to the question. The first piece of information (the number on the wheel) functioned as an anchor.
Transplant that logic into an AI-driven legal environment. You ask your AI to outline the three strongest defenses for your client. It gives you a set of arguments. Because those defenses arrived in the form of an AI-generated list, you run with them. You don’t dig deeper into obscure precedents that might yield a fourth, a fifth, or a sixth defense. In my experience, it is sometimes these paths least traveled that offer a new line of argument or a fresh way of framing the discussion. With AI as your initial drafter, you may never challenge the fundamental assumptions behind its response. If an AI’s first draft is an arbitrary “wheel spin,” it can become the anchor that steers your research down a narrow path, sometimes at the expense of discovering more robust arguments.
Why “AI Blinders” Are So Tempting
When you’re juggling a heavy caseload, bringing technology into the mix can feel miraculous. Large language models can comb through massive data sets in seconds, summarizing them in easy-to-digest formats. They can spar with you in brainstorming sessions, playing devil’s advocate one minute and a helpful paralegal the next.
However, that same allure can discourage deeper inquiry. Our minds and our businesses love efficiency, particularly when the alternative is hours spent poring over legal documents. It’s no surprise that once an AI-generated answer appears, we tend to adopt it as a baseline for all subsequent thought, effectively putting on “AI blinders.”
This phenomenon is reminiscent of the blinders worn by racehorses. They keep the horse focused on what’s right in front, preventing distractions from the sidelines. AI blinders serve a similar function: once fixated on AI’s “take,” we tune out the periphery. Perhaps that’s welcome if you’re galloping down the track at Churchill Downs. But in legal practice, seeing the bigger picture can make the difference between a winning and a losing argument. And our value as lawyers, especially now, lies in seeing that bigger picture, asking questions, and deciding strategically on the path forward.
The Lawyer’s Path to Curious Thinking
If “AI blinders” risk curbing your curiosity, what’s the antidote? Lawyers, by trade, are taught to keep digging, to keep asking critical questions, and to hypothesize “What if?” scenarios. So how do we retain that spirit in an AI-rich landscape?
Here’s what I recommend:
Ask the Next Question
Think of the AI’s first answer not as a period but as a comma, an opening salvo for your next, more nuanced question. For instance, if AI says, “The strongest defense is laches,” follow up: “Under what circumstances does laches fail?” or “What alternative defenses exist if the laches argument is dismissed?”
Encouraging yourself to push past the surface ensures you’re not anchored to a single perspective. Asking the right questions, rather than having all the answers, is a valuable skill lawyers still bring to the table. And knowing which questions to ask implies a deep familiarity with the subject matter; we cannot (nor should we want to) entirely delegate knowledge work to machines.
Play Devil’s Advocate
Turn your AI partner into your adversary by asking it to argue the opposite perspective. This fosters a healthy dialectic. You might discover that the best angle lies somewhere in the tension between two polar arguments. Also, use your AI partner to role-play a number of different viewpoints: devil’s advocate, neutral fact-finder, distracted juror, public opinion. By exploring all of these mental frames, you can avoid locking in on the first thought and instead explore the alternatives.
Cross-Reference with Human Expertise
AI can be a powerful tool, but it doesn’t replace peer collaboration or mentorship. Bouncing ideas off colleagues, especially those with deep domain expertise, can help identify blind spots that AI might miss or gloss over. Historical understanding, institutional knowledge, human relationships: all of these hard facts and soft skills, won over a career, are invaluable and should be mined. Today’s AI has an excellent horizontal understanding, but it cannot plumb the depths of vertical expertise that a person accumulates over a career, along with all the insights such work provides.
Keep Reading and Researching
At a minimum, instead of blindly trusting the AI’s first citation, check the actual source. Does the source actually support the factual or legal contention? Is it a stretch? Completely off base? With legal matters, it’s crucial to confirm citations are accurate, up-to-date, and on-point. This added step preserves that vital connection to the raw material of the law.
Adopt a “Second Opinion” Rule
Implement a process in which you always take at least one step beyond the AI’s answer. It could be an additional question, reading one extra case, or even running a separate query in a different AI tool to see if the results align. It could be taking a break and coming back to the issue after a walk with a (hopefully) refreshed set of eyes. Take a second look before you make a commitment.
Remembering the Human Element
Law is more than a compilation of rules; it’s a tapestry of human stories, conflicts, and resolutions. AI might be able to parse those narratives, but it doesn’t live them. It doesn’t experience the emotion of delivering a closing argument before a jury. It doesn’t feel the friction of negotiating a settlement that impacts a community.
That’s where you come in. Lawyers aren’t just technicians applying rules; you’re empathetic navigators of human conflict and resolution. The danger of “AI blinders” is that by chaining yourself to the anchor of the first algorithmic answer, you risk missing the human elements that may tilt the scales of justice in surprising ways.
Curiosity as a Professional Imperative
Curiosity is a core skill in the legal profession. The best trial lawyers keep prodding until they find a story’s hidden hinge. The sharpest transactional attorneys anticipate pitfalls two deals in advance.
In an AI-saturated environment, curiosity becomes even more valuable. Without it, we risk devolving into silent editors of AI output rather than creative, nimble problem-solvers. A monkey pushing the button. The very human capacity for wonder, skepticism, and imagination sets us apart from the machines, and ensures that the law remains a living, breathing enterprise.
There’s a famous quote by physicist Richard Feynman: “Science is the belief in the ignorance of experts.” Even the most prestigious sources (human or machine) can and should be questioned. If we’re to remain intellectually honest and inventive, we have to wonder what else might be out there, beyond the boundaries of the AI’s immediate, and possibly incomplete, view.
Closing Thoughts
As AI continues to reshape our professional and personal lives, “AI blinders” remain a real and pressing concern—especially in a field as intellectually rigorous as the law. Anchoring bias is not new, but AI can intensify it by presenting answers so swiftly and confidently that they appear irrefutable.
Your job is to resist the pull of that first, easy response. Question the answer, test its boundaries, and keep your curiosity alive. Remember that if we allow ourselves to stop after the first question, we may never discover the deeper layers of truth, the creative arguments, or the nuanced insights that give the law its texture and meaning.
Treat AI like an associate who’s brilliant but incomplete. You’d verify an associate’s work, right? You’d ask follow-up questions? The same standard applies to AI. Think of the first AI-generated answer as the starting block in a race. It gives you momentum, but you must run the lap on your own legs. Finally, embrace the unexpected, from wherever it may come. AI may produce an odd turn of phrase or an unconventional angle. Rather than dismissing it, mine that oddity for potential gold. It might unlock an unforeseen argument or creative negotiation tactic.
Let AI be your partner, not your warden. And always, always ask the next question.
By the way, as a LawDroid Manifesto reader, you are invited to an exclusive event…
What: LawDroid AI Conference 2025
Day 1 - 7 panel sessions, including top speakers like Ed Walters, Carolyn Elefant, Bob Ambrogi, and Rob Hanna, who know firsthand how to harness AI as a force multiplier.
Day 2 - 3 hands-on workshops from AI experts, plus demos from over a dozen legal AI companies where you can discover the latest and greatest technology to get ahead.
Where: Online and Free
When: March 19-20, 2025, 8am to 5pm PT
How: Register Now!
Click here to register for free and secure your spot. Space is limited. Don’t risk being left behind.
Cheers,
Tom Martin
CEO, LawDroid
P.S. Check out the Day 1 & Day 2 schedule—packed with panels, workshops, demos, and keynotes from the industry’s leading experts.