Alien Intelligence: Why AI Doesn’t Need to Think Like Humans to Be Effective
Where I explore how AI’s alien reasoning can still deliver transformative results, and why that matters for lawyers
Welcome back, cosmic counselors and digital explorers of the legal frontier! Today we’re stepping onto the landing pad to greet an AI so alien 👽🛸, it might as well be stepping off a UFO in your firm’s lobby. With echoes of Amy Adams’s character in Arrival decoding extraterrestrial intentions, we’re left to puzzle over whether a non-human intelligence needs to think like us to serve us well.
So, brace yourself and prepare for first contact, because this is the realm where contract clauses meet cosmic riddles, and attorneys join linguists at the helm of a new era of alien intelligence. Pour yourself something otherworldly and follow me deep into the heart of alien logic: AI doesn’t have to be like us to deliver results that rival (or surpass) our own. Let’s dive in!
This substack, LawDroid Manifesto, is here to keep you in the loop about the intersection of AI and the law. Please share this article with your friends and colleagues and remember to tell me what you think in the comments below.
Remember the pivotal moment in the movie Arrival when Amy Adams’s character, a linguist, first enters the chamber to communicate with the extraterrestrials? She’s greeted by beings whose language, perceptions, and thought processes are unlike anything we know on Earth. Yet, her mission is to decode their intentions, not to insist that they think or speak precisely as we do. Instead, she focuses on building a bridge, an alignment, between two starkly different forms of intelligence.
I believe this metaphor applies neatly to artificial intelligence. Yes, AI is alien to us: it doesn’t grow or evolve in a biological sense, its “reasoning” doesn’t emerge from billions of years of evolution and neural wiring, and it may never experience subjective consciousness as we do. But, the crucial question isn’t whether AI’s cognition matches ours. Instead, it’s whether AI can work alongside us: effectively, ethically, and in a way that respects (and amplifies) the values we hold dear.
If this sounds interesting to you, please read on…
The Foreign Logic of Large Language Models
In Thomas Nagel’s famous 1974 essay, “What Is It Like to Be a Bat?,” he argues that we can never truly know the subjective experience of another creature if its sensory apparatus and cognitive framework are fundamentally different from ours, though this, he adds, “should not lead us to dismiss as meaningless the claim that bats… have experiences fully comparable in richness of detail to our own.”
No matter how advanced AI becomes, we may never grasp what “it’s like” to be a deep learning model trained on trillions of data points, because it diverges so much from our own basic biology and lived experience.
In The Myth of Artificial Intelligence, published as recently as 2021, Erik Larson argued that cracking the nuanced code of human language would remain out of reach for AI because of this inherent difference:
“A close inspection of AI reveals an embarrassing gap between actual progress by computer scientists working on AI and the futuristic vision they and others like to describe…. In particular, the failure of AI to make substantive progress on difficult aspects of natural language understanding suggests that the differences between minds and machines are more subtle and complicated than Turing imagined. Our use of language is central to our intelligence. And if the history of AI is any guide, it represents a profound difficulty for AI.”
Yet, recent research from Anthropic suggests otherwise. Their work on “polysemantic” neurons shows that these individual computational units can represent surprisingly different concepts all at once, weaving together overlapping roles rather than following one clear-cut function. Think of each “neuron” in a large language model like a stagehand in a bustling theater production, responsible for multiple cues, costume changes, and set pieces all at once.
To illuminate this complexity, Anthropic built a “replacement model” that translates the hidden, parallel processes into more interpretable “features.” Instead of a lone spotlight illuminating each idea in turn, you get an ensemble of interwoven actions happening simultaneously, revealing that large language models can, in fact, handle intricate linguistic tasks by choreographing many roles at once, challenging earlier doubts about AI’s potential to rival human-like intelligence in language.
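To make this concrete, here’s a toy sketch in Python of how a single “neuron” can be polysemantic, and how a replacement-style decomposition untangles it. Everything here is an invented illustration on my part (the concept names, the tiny dimensions, the least-squares decoding); Anthropic’s actual method learns sparse features from real model activations at vastly larger scale:

```python
import numpy as np

# Toy illustration only: concept names, dimensions, and the decoding step
# are invented for this sketch; real interpretability work learns sparse
# features from actual model activations at far greater scale.

rng = np.random.default_rng(42)
DIM = 16

# Three hypothetical concept directions superimposed in one activation space.
features = {
    "legal_citation": rng.normal(size=DIM),
    "date_phrase": rng.normal(size=DIM),
    "negation": rng.normal(size=DIM),
}
readout = rng.normal(size=DIM)  # the neuron's fixed weight vector

def neuron(active_concepts):
    """One scalar activation, driven by whichever concepts are present."""
    hidden = sum(features[name] for name in active_concepts)
    return float(hidden @ readout)

# The same neuron fires for very different concept mixes (polysemanticity).
print(neuron({"legal_citation"}))
print(neuron({"date_phrase", "negation"}))

# A replacement-style decomposition inverts the tangle: given an entangled
# activation vector, recover which interpretable features were active.
dictionary = np.stack(list(features.values()), axis=1)      # DIM x 3
entangled = features["date_phrase"] + features["negation"]  # mystery input
coeffs, *_ = np.linalg.lstsq(dictionary, entangled, rcond=None)
print(dict(zip(features, coeffs.round(2))))  # ~ {citation: 0, date: 1, negation: 1}
```

The hard part in real models, of course, is that the dictionary of features isn’t known in advance; discovering it is precisely what the interpretability research does.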
A Martian Associate in the Law Firm
Imagine a Martian landing in your firm’s glass-walled conference room. It has no appreciation for Earth’s literary canon or the typical concerns of a junior associate, yet it can parse terabytes of case law in the blink of an eye. Baffling, yes, but potentially extraordinary, if properly supervised. You’d quickly realize that whether this Martian “feels” human is secondary to whether it can deliver thorough, ethically sound, and strategically useful work product.
I can recall law partners who were forgiven their eccentricities because of their legal genius. Their human shortcomings mattered less than their ability to make arguments, retain clients, and win cases. In practical terms, we want to know of AI: Can it review discovery documents swiftly without missing crucial details and evidence? Can it identify novel precedents buried in centuries of caselaw? Can it draft preliminary briefs or contracts that a seasoned partner can finalize, cutting hours of routine labor?
The inherent alienness of AI needn’t bother us if it continues to generate outputs and work product that add value without violating legal or ethical rules. Just as Amy Adams’s character in Arrival didn’t demand the aliens become human, we shouldn’t obsess over making AI “think” like us. Rather, we should focus on the substance of what it produces and what it can do for us.
Missing the Forest for the Trees
Some critics concentrate on whether AI truly “understands” anything. They hold up philosophical exercises as though we must prove AI’s inner state is identical to a human’s before we can use its outputs. Noam Chomsky, for example, has stated that AI models lack the deep structural knowledge that characterizes human linguistic ability, suggesting that statistical pattern-matching alone isn’t genuine understanding. Hubert Dreyfus, in What Computers Can’t Do and What Computers Still Can’t Do, maintained that human intelligence is grounded in embodied experience, something formal logic and symbol manipulation can’t replicate.
But consider the realities of legal practice: we already navigate human diversity in cognition, background, and perspective. One associate might be a whiz at pattern recognition; another might excel at narrative storytelling. Both produce strong legal work, albeit through different mental talents.
AI is yet another “colleague” with a unique process and talents, albeit a process exponentially more complex, polysemantic, and opaque. We could devote endless energy to debating whether it experiences “understanding.” Or we could measure its performance by whether it yields valuable, compliant, and ethically sound results.
Neglecting alignment and output to argue about cognition might miss the forest for the trees. Lawyers should care less about whether AI’s processes are identical to ours and more about ensuring that, wherever those processes lead, they adhere to our shared standards of professional conduct and promote access to justice.
Alien Intelligence Doesn’t Mean Malevolent
It’s natural to fear the unknown. Many science-fiction narratives feature hostile aliens, and the specter of an uncontrollable AI conjures images of destructive invasions. In his book, Superintelligence, Nick Bostrom highlights scenarios where AI, unconstrained by human values, might spiral into catastrophic behaviors.
To use Bostrom’s framework, AI’s level of intelligence (its capability to solve complex problems) is independent of, or “orthogonal” to, the goals or values it might pursue. In other words, an AI can be extremely advanced and still pursue virtually any objective, whether benevolent, neutral, or malicious. Merely increasing a system’s intelligence or reasoning power does not automatically produce moral or altruistic tendencies. Instead, intelligence and goal selection operate on separate axes: a superintelligent system might just as easily aim for ends destructive to humanity as for goals harmonious with human flourishing.
The key takeaway is that alien need not mean evil. Rather, it’s we who must actively guide and align AI toward the values of the community it serves. As with any powerful tool (nuclear energy, biotechnology, the internet), the real danger is failing to develop robust guardrails.
For the legal profession, alignment is paramount because our work intersects directly with societal norms, justice, and moral reasoning. A poorly aligned AI might generate harmful biases or propose strategies that contravene the spirit of professional responsibility. Oversight, both technical and ethical, is essential.
Legal Tools for AI Alignment
Fortunately, lawyers possess the very skill set required to shape such oversight frameworks. Alignment can be approached like any complex compliance or regulatory matter: from licensing requirements to professional oversight to contract-based expectations, we have a well-honed toolkit for clarifying responsibilities and enforcing accountability.
Model Licensing and Regulation: Just as specialized legal areas mandate particular certifications, we might create licenses for AI tools that meet transparency and auditability standards.
Ethical Boundaries: Bar associations can draft guidelines on AI use, clarifying that ultimate decision-making and moral responsibility remain firmly with human attorneys.
Transparent Development: Requiring AI vendors to document data sources, or at least the boundaries of their systems’ “reasoning,” can build trust; a sketch of what such a disclosure might look like follows this list. Opacity is often the biggest barrier to adopting what is perceived as an alien technology.
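As a thought experiment, here’s a minimal sketch of what such a vendor disclosure might look like as structured data. Every field name and value below is a hypothetical I’ve invented for illustration; no existing schema, statute, or bar rule prescribes this format:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical disclosure record; field names and values are invented for
# illustration and reflect no existing standard or regulation.

@dataclass
class VendorDisclosure:
    model_name: str
    training_data_sources: list[str]       # provenance of training corpora
    known_limitations: list[str]           # documented failure modes
    audit_log_available: bool              # can individual outputs be traced?
    last_independent_audit: Optional[str]  # date of third-party review, if any

disclosure = VendorDisclosure(
    model_name="ExampleLegalLLM",          # fictional vendor and model
    training_data_sources=["public caselaw", "licensed legal commentary"],
    known_limitations=["may fabricate citations", "thin coverage of recent rulings"],
    audit_log_available=True,
    last_independent_audit=None,           # a gap worth negotiating over
)
```

Even a record this simple gives a procurement committee concrete terms to negotiate, much as a due-diligence checklist does.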
In that respect, the question we face is akin to how we’d handle the arrival of a capable but inscrutable Martian: we’d observe, impose boundaries, and then integrate it into our practice once convinced it operates within our ethical framework.
The Pragmatic Perspective
Viewed through a purely pragmatic lens, the “alienness” of AI becomes less a philosophical debate and more a governance and design challenge, areas where lawyers excel. We’re accustomed to bridging disagreements, regulating complex processes, and overseeing compliance. Ensuring AI meets professional standards is simply the next frontier.
Output Quality: Does the AI produce accurate, factually supported, and logically consistent legal work?
Risk and Error Monitoring: Does the system hallucinate or exhibit hidden biases? How do we detect and correct these errors?
Ethical Safeguards: Are we equipped to catch when AI’s logic strays into unethical or noncompliant territory?
If an AI can craft a top-notch memo, we can fold that into our practice, provided we remain diligent about oversight. If it fails, we refine or discard it, just as with any underperforming associate.
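To press the analogy, the three questions above could even be operationalized as a first-pass review gate that runs before any human sign-off. The sketch below is deliberately naive and entirely my own invention: a real pipeline would verify citations against an authoritative database rather than a hard-coded set, and would use far subtler quality and ethics screens:

```python
import re

# Naive first-pass review gate; every check is a placeholder. A real
# pipeline would validate citations against an authoritative database
# and apply far more sophisticated quality and ethics screens.

CITATION_RE = re.compile(r"\b\d{1,4}\s+[A-Z][\w.]*\s+\d{1,4}\b")  # e.g. "410 U.S. 113"

def review_draft(text: str, verified_citations: set[str]) -> list[str]:
    """Return flags for human review; an empty list still means attorney review."""
    flags = []
    # Risk and error monitoring: surface citations we cannot verify.
    for cite in set(CITATION_RE.findall(text)):
        if cite not in verified_citations:
            flags.append(f"unverified citation (possible hallucination): {cite}")
    # Output quality: reject drafts too thin to review substantively.
    if len(text.split()) < 50:
        flags.append("draft too short for substantive review")
    # Ethical safeguards: a crude keyword screen standing in for a policy check.
    for phrase in ("conceal from opposing counsel", "destroy the evidence"):
        if phrase in text.lower():
            flags.append(f"possible ethics issue: '{phrase}'")
    return flags

# Example: one verified cite, one that should be flagged.
draft = "Under 410 U.S. 113 and 999 F.4th 001, the motion should be granted. " * 10
print(review_draft(draft, verified_citations={"410 U.S. 113"}))
```

None of this replaces the seasoned partner’s judgment; it just makes the oversight described above routine and repeatable.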
Rethinking Collaboration
The character in Arrival forged an alliance with aliens by learning how to communicate with them, rather than insisting they adopt English grammar and Western idioms. In a similar vein, building a functional partnership with AI requires that we adapt to the model’s nature. We learn the “language” of data-driven systems, appreciate their differences, and craft guardrails that protect our values and obligations.
As Anthropic’s findings underscore, we might never know what it feels like to “be” an AI. Then again, we may never know what it feels like to be our human colleagues either. Yet that doesn’t preclude meaningful engagement. We can develop trust and alignment not by forcing AI to mirror our cognition but by ensuring our legal frameworks shape how it operates.
Closing Thoughts
Here's a thought: We might be overthinking this whole AI thing. Does it really matter if AI thinks like us, as long as it helps us do great work? I don't think so. Think about your own workplace. You've got the detail people, the big-picture thinkers, the creative types, and the analytical minds. We don't all process information the same way, and that's actually a strength. We've never expected our colleagues to think exactly like us: we just need their work to be solid.
The same goes for AI. I'm less concerned with whether it "thinks" like a human and more interested in whether it delivers results that advance justice, efficiency, and ethical practice. Success isn't about making AI more human-like, it's about finding ways to work together effectively. We don't need to pretend AI is human, nor should we dismiss it as just another tool. It's something new, a different kind of thinking partner. Like the linguist in Arrival, we can build a working relationship without fully understanding how the other side processes information.
As we move forward, let's focus on what really matters: using AI to make legal services more accessible, our analysis more thorough, and freeing up time for the human elements of legal work that truly require our touch. AI doesn't need to think like us to help us think better. This approach lets us move past philosophical debates and focus on practical alignment, making sure that regardless of how differently AI processes information, its output matches our ethical and professional standards.
The alien has landed in our conference room. Instead of wondering if it experiences the world as we do, let's see how it can help us serve our clients and our communities better than we could on our own.
Sometimes the most productive partnerships are with those who think nothing like us. I think this may be the beginning of a beautiful friendship.
This article is the second in a series on Machine Thinking, where I explore different aspects of how large language models “think.” Many thanks to Anthropic for its research, including the paper “On the Biology of a Large Language Model” (March 2025).
By the way, did you know that I now offer a daily AI news update? You get 5 🆕 news items and my take on what it all means, delivered to your inbox, every weekday.
Subscribe to the LawDroid AI Daily News and don’t miss tomorrow’s edition:
LawDroid AI Daily News, is here to keep you up to date on the latest news items and analysis about where AI is going, from a local and global perspective. Please share this edition with your friends and colleagues and remember to tell me what you think in the comments below.
If you’re an existing subscriber, you can learn how to start receiving the daily news by reading this article. I look forward to seeing you on the inside. ;)
Cheers,
Tom Martin
CEO and Founder, LawDroid