The Intelligence Paradox: How AI's Superhuman Vision Can Blind Us
Where I explore how intelligence is like screen resolution, why AI sees patterns we can't, and when less detail paradoxically yields more wisdom
Welcome back, high-definition thinkers and resolution-seeking legal minds! 🔍⚖️ Today we're adjusting our cognitive aperture to explore intelligence itself: that elusive quality that lets some minds see the world in 8K while others squint through standard definition fog. Grab your mental magnifying glass, calibrate those neural sensors, and prepare to question whether sharper vision always means clearer judgment.
We'll venture beyond the range of human perception, where AI superminds parse patterns we can't even detect, like digital bloodhounds hearing whistles pitched far above our comprehension. Let's focus this lens together and discover why sometimes the most profound insights come not from seeing more, but from knowing when we've seen enough.
This substack, LawDroid Manifesto, is here to keep you in the loop about the intersection of AI and the law. Please share this article with your friends and colleagues and remember to tell me what you think in the comments below.
As a child of the ’80s, one of my favorite movie characters was Superman, at the time played with earnest aplomb by the late Christopher Reeve. Superman is, of course, famous for having many superpowers, among them super-hearing and x-ray vision. Superpowers are, by definition, powers that we ordinary human beings do not and cannot possess. Superman uses his x-ray vision to see through walls, super-hearing to anticipate danger, and both, on occasion, to save Lois Lane from certain death.
Superman is fiction, but I learned then that certain animals can perceive sounds well above the range of human hearing. My family’s dog, for instance, seemed privy to a hidden world of high-pitched squeals, footsteps on distant sidewalks, or rustles in the bushes that my ears simply couldn’t detect. It was both surprising and intriguing. Here was a creature whose senses were, in some respects, more acute than mine. Yet I would argue that in many important ways, I had a more “detailed” understanding of reality. That contrast set me on a lifelong curiosity about intelligence, perception, and what it means to truly “see” the world.
If this sounds interesting to you, please read on…
Intelligence as Resolution
I’ve often found it helpful to think of intelligence as the resolution of a television screen. When you step up from a standard-definition TV to an HD screen, the clarity and detail of the image transform the entire viewing experience. You notice subtleties in shadows, depth in textures, and new gradations of color. Move further up to 4K, 8K, or beyond, and each increment in resolution helps you see finer and finer points. However, at some point, our eyes simply can’t appreciate the improvement. The human visual system has physical (and perhaps evolutionary) limits. Once you reach a certain pixel density, you no longer notice the difference.
In much the same way, intelligence, often measured by the imperfect yardstick of IQ, grants increased “resolution” on the world. In a high-resolution cognitive style, even complex scenarios become clearer: patterns emerge, nuances snap into focus, and strategic options reveal themselves that might remain invisible to minds operating at a lower resolution. Yet, like our eyes with ever-higher pixel densities, intelligence is subject to human limits that are not always clear. It may be that some individuals can operate at an intellectual “resolution” we’ll never directly experience or even comprehend. Think: Einstein developing the special theory of relativity at the age of 26 in his spare time while working as a patent clerk.
Thinking Fast, Slow, and in High Definition
Daniel Kahneman, in Thinking, Fast and Slow, famously contrasts two modes of thought: the immediate, intuitive System 1 and the slower, more deliberate System 2. High intelligence can manifest in both systems, but often we associate it with the more methodical, analytical side of the mind. Still, the line between these systems can blur. Indeed, some forms of expertise, like a seasoned trial attorney’s instinct for a judge’s demeanor or a chess grandmaster’s uncanny pattern recognition, highlight how mastery may look like lightning-fast intuition, even if it was built on long hours of slow, focused practice.
IQ tests try to measure these faculties, but they don’t always capture what’s truly happening behind the scenes. A more powerful mind can run repeated “mental simulations” at high speed (akin to high frame-rate video). It can juggle various angles of a problem at once, detect hidden structures, and recall relevant knowledge from memory with startling clarity. Like going from standard-definition to 8K, complexity emerges in crisp detail.
Yet, does ever-greater resolution always guarantee greater practical insight? If we can’t perceive or effectively use this super-fine resolution, is it really helpful? Dogs can hear ultrasonic tones we miss, but does that truly make them “smarter” than we are, or simply more attuned to different frequencies?
Dogs, Dials, and Limitations
Dog hearing reminds us that intelligence is not monolithic. It’s multifaceted: vision, hearing, pattern recognition, emotional acuity, contextual understanding, creative problem solving, etc. That’s why intelligence might be more fruitfully thought of as a “dial” that can be turned up to reveal more detail, but only if the rest of our mental apparatus can handle it. Imagine having a satellite feed of global legal news from every jurisdiction and in every language, but not having the language skills or time to sift through it. Extra data is not always extra useful.
Douglas Hofstadter, in Gödel, Escher, Bach, explores how the layering and recursion of symbolic systems can give rise to emergent structures of intelligence. Hofstadter uses fugues and visual illusions to highlight how the mind can perceive deeper levels of meaning that are missed by a narrower interpretative lens. But there’s a limit; beyond a certain threshold, the layers become so numerous and interwoven that they might exceed human capacity to parse them. Like trying to watch 8K footage on a 720p screen, we can’t make use of all the additional data.
The AI “Superscreen”
Then there’s the question of artificial intelligence, which Nick Bostrom tackles in his work, Superintelligence. AI can run at a resolution far beyond our natural hardware. It can crunch staggering volumes of data, detect patterns with superhuman speed, and even (as we’ve seen in the latest GPT models and specialized AI systems) produce insights that surprise experts in various fields. We’re increasingly living alongside these “superscreens,” which are effectively high-definition minds that may parse the world with far finer resolution than we can.
But here’s the twist: if an AI arrives at a conclusion that is correct, yet incomprehensible to us, how do we verify or even appreciate that correctness? In law, as in many fields, transparency and explainability are paramount. An AI that sees patterns no human does can be both an invaluable partner and a potential source of nightmares. It could exploit legal gray areas, interpret voluminous case histories, or weigh the subtle biases in a judge’s historical rulings, possibly outmaneuvering even the most skilled attorneys.
This is where we lawyers (and all professionals) must tread carefully. Could we fall behind the resolution threshold, becoming like a standard-definition set watching an 8K broadcast? We’d see something but remain ignorant of the nuance that the AI perceives. In the best scenario, we collaborate with these high-resolution systems, leveraging their fine detail while applying our distinctly human qualities of empathy, ethics, intuition, and moral reasoning to shape how AI’s insights are used.
Extended Minds and Cognitive Exoskeletons
Andy Clark, in Supersizing the Mind, posits that our mental processes aren’t confined to the wetware of our brains; rather, our tools, devices, and even social networks extend our cognition. In the modern world, your smartphone is effectively part of your mind. Now, with advanced AI systems, we are on the brink of a new era where each of us has access to “cognitive exoskeletons.” These superpowered external minds can boost our own abilities, letting us see a higher resolution of reality than any single human brain might handle alone.
But with great resolution comes great responsibility. As lawyers, we must remain vigilant about the ways in which increased cognitive clarity can be abused. We know from court proceedings how the subtleties of evidence, argumentation, and rhetorical skill can tip the balance of justice. An AI that can analyze thousands of previous cases, down to each judge’s semantic preferences, might craft hyper-tailored and uniquely compelling arguments. Is that a fair advantage or a threat to due process? Perhaps both. The difference will come from how we choose to wield such power.
The Limits of High Definition
We all want that crisp, detailed view of the world, just like we want the latest 4K screens in our living rooms. But ironically, more detail isn’t always the final goal. Sometimes the big picture can be lost in the minutiae. High intelligence can be paralyzing if we’re so overwhelmed by nuance that we can’t make a clear decision or see broad patterns. Practitioners often say, “When in doubt, zoom out,” reminding us that the best approach can be toggling between wide-angle and close-up views. Kahneman’s System 1 might tell us when something “feels off,” prompting System 2 to dive deeper.
Moreover, as intelligence moves toward “superintelligence,” we might not even grasp how to interpret its solutions. Much like the dog’s unperceived high-pitched squeals, the AI could be alerting us to truths outside our mental hearing range. The crucial question becomes: Can we develop translatable frameworks that allow us to make sense of the AI’s hyper-resolution? That’s a challenge for us all.
Closing Thoughts
For the legal profession, the resolution metaphor reveals an uncomfortable truth. We've spent centuries honing our ability to see finer distinctions: parsing precedents, distinguishing facts, detecting subtle shifts in testimony. With the prospect of AGI and beyond, we now face intelligences that can see details we literally cannot perceive, patterns that exist beyond our cognitive frequency range.
The temptation is to chase ever-higher resolution (read: the latest AI model), to augment ourselves with AI until we can match their superhuman clarity. But perhaps that's missing the point. The practice of law has never been about seeing the most detail; it's been about seeing what matters most to us as human beings. A good attorney knows when to zoom in on a critical inconsistency and when to pull back to show the broader narrative. That's wisdom, not processing power.
As AI systems become our cognitive partners, we must resist the assumption that higher resolution automatically yields better justice. These tools will undoubtedly reveal patterns we've missed, connections we couldn't make, efficiencies we couldn't achieve. But they may also overwhelm us with detail, paralyze us with possibility, or worse, make decisions based on patterns so complex we can't verify if they're just or merely optimal.
Our challenge isn't to match the AI's resolution. It's to maintain our focus on what makes law human: the ability to weigh competing values, to recognize when mercy should temper strict adherence to the law, to know when the technically correct answer isn't the right one.
The highest resolution image isn't always the truest picture. Sometimes, stepping back reveals what really matters.
This article is the fourth in a series on Machine Thinking, where I explore different aspects of how large language models “think.”
By the way, did you know that I now offer a daily AI news update? You get 5 🆕 news items and my take on what it all means, delivered to your inbox, every weekday.
Subscribe to the LawDroid AI Daily News and don’t miss tomorrow’s edition:
LawDroid AI Daily News is here to keep you up to date on the latest news items and analysis about where AI is going, from a local and global perspective. Please share this edition with your friends and colleagues and remember to tell me what you think in the comments below.
If you’re an existing subscriber, you can read the daily news here. I look forward to seeing you on the inside. ;)
Cheers,
Tom Martin
CEO and Founder, LawDroid