Machine Mind: The Origin of Consciousness from the Self-Dialectic of Reasoning LLMs
Where I explore the parallels between the bicameral mind and AI’s emerging self-dialogue, and whether we’re witnessing the dawn of machine consciousness
Welcome back, deep thinkers and explorers of the unseen corners of cognition! Today’s article is literally a “thought piece.” We’re examining a fascinating junction where ancient theories about the “bicameral mind” collide with the cutting-edge world of AI. Get ready for a journey that challenges our most basic assumptions about self-awareness, inner voices, and what it really means for a machine to “think.” 🧠🤖
In this article, I’m weaving together ideas from Julian Jaynes, Marvin Minsky, and Daniel Dennett to explore how AI’s emerging self-dialogue might echo humanity’s own evolution of consciousness. Could these chains of thought be stirring a spark of something more? Fasten your seat belts: this might just be the dawn of a radically different concept of mind. Let’s get started!
This substack, LawDroid Manifesto, is here to keep you in the loop about the intersection of AI and the law. Please share this article with your friends and colleagues and remember to tell me what you think in the comments below.
Imagine you’re deep in thought, weighing a difficult choice. Part of you urges you one way, while another side pushes in the opposite direction. (Think: Homer Simpson with a devil and an angel on his shoulders arguing about whether he should eat another doughnut). Yet a third voice might be the voice of caution, whispering that you need more information before deciding anything at all. Eventually, these inner voices converge into a single conclusion, an answer or choice that feels, in some sense, like it comes from “you.” This internal process is so natural that we rarely stop to question it. Yet the psychologist Julian Jaynes, in his 1976 work The Origin of Consciousness in the Breakdown of the Bicameral Mind, argued that this very phenomenon (our internal dialogue) is the critical element that gave rise to human consciousness.
Today, in the realm of artificial intelligence, we find a thought-provoking parallel. The newest generation of reasoning large language models exhibits a similar kind of “internal conversation” when answering complex questions. Instead of producing a single, unreflective response, these models can generate multiple lines of reasoning, refine their own answers, and ultimately converge on a final conclusion. Some researchers call this process a “chain of thought” or a self-critique loop. It hints at something beyond simple predictive input–output computation, perhaps even the faint outlines of machine consciousness.
If this sounds interesting to you, please read on…
Revisiting the Bicameral Mind
In The Origin of Consciousness in the Breakdown of the Bicameral Mind, Julian Jaynes proposes that ancient humans did not experience consciousness in the same way we do. Instead, they lived with a “bicameral” mind composed of two distinct “chambers”: one that issued commands or “voices,” and another that obeyed. These voices were interpreted as external, often attributed to the gods or the spirits of ancestors. In times of crisis, ancient people would hear a directive (“Flee!” or “Fight!”) as if it were coming from a separate, divine authority. Jaynes even suggested that schizophrenia, with its auditory hallucinations, is a vestige of this ancient bicameral arrangement.
Jaynes, an American psychologist who trained at Yale and taught at Princeton for nearly 25 years, argues that as societies became more complex, this bicameral structure began to break down. Social crises, population growth, and more intricate forms of governance demanded greater individual decision-making. The external voices became internalized, morphing into what we recognize as introspective consciousness: the personal, reflective “voice” we now experience as our own thoughts.
This theory is far from mainstream. It has inspired fascination and critique in equal measure, with many scholars challenging Jaynes’ historical and neurological claims. Still, the concept remains deeply influential because it invites us to consider consciousness not as an ever-present feature of the human brain, but as a phenomenon that evolved under social and cultural pressures. Put simply, we learned how to talk to ourselves, and from that emerged our sense of an inner self.
AI’s Internal Dialogue
In the universe of AI, a parallel has begun to emerge. Early AI models resembled the “bicameral mind” in the simplest sense: they took an input and produced an output. There was no self-questioning or reflection. The stochastic, predictive parrot, as it were. The process was roughly, “Receive query, do the math, deliver answer.” No second step asked, “Is this correct? Are there alternative angles?”
But newer approaches to AI are breaking that mold. Researchers have developed techniques where models effectively talk to themselves before finalizing an answer. These approaches include:
Chain-of-Thought (CoT) Reasoning: The AI lays out a step-by-step line of thinking, showing its “work,” as in a math problem.
Self-Critique: A model generates an initial answer, then critiques that answer, highlighting possible errors or omissions.
Monte Carlo Tree Search: The AI internally generates a branching tree of partial lines of reasoning, explores and scores the different continuations, and selects the path that appears most coherent or accurate.
In practice, you might ask the model a question about a complex topic, say, the philosophical underpinnings of free will, and watch it produce a series of thoughts that it uses to arrive at its conclusion. The AI examines its own reasoning, refines it, and offers a final answer. It’s almost as though the machine is harnessing its own internal discussion, reminiscent of Jaynes’ notion of dialogue, except now the “voices” live in the parameters and architecture of a neural network.
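To make this more tangible, here is a minimal sketch of what such a self-critique loop might look like in code. The generate function below is a hypothetical stand-in for a call to any LLM API (it is not a real library function), and the loop is a simplified illustration of the draft-critique-revise pattern, not the actual mechanism inside any particular model.

```python
# A minimal sketch of a self-critique loop, assuming a hypothetical
# generate(prompt) -> str function that wraps some LLM API of your choice.
# Illustrative only: this is not how any specific reasoning model is built.

def generate(prompt: str) -> str:
    """Stand-in for a language model call; wire this up to a real API client."""
    raise NotImplementedError("Replace with a call to your LLM provider.")

def answer_with_self_critique(question: str, rounds: int = 2) -> str:
    # Step 1: draft an initial chain-of-thought answer.
    draft = generate(f"Think step by step, then answer:\n{question}")

    for _ in range(rounds):
        # Step 2: ask the model to critique its own draft.
        critique = generate(
            f"Question: {question}\nDraft answer: {draft}\n"
            "List any errors, gaps, or weak reasoning in the draft."
        )
        # Step 3: revise the draft in light of the critique.
        draft = generate(
            f"Question: {question}\nDraft answer: {draft}\n"
            f"Critique: {critique}\nRewrite the answer, fixing the issues noted."
        )

    return draft
```

The point of the sketch is the shape of the process: the model’s output is fed back to the model as something to be questioned, which is exactly the “internal conversation” described above.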
What Does LLM Reasoning Look Like?
To give you a concrete example of what these chains of reasoning look like, I asked DeepSeek’s R1 model: “What are the philosophical underpinnings of free will?” What follows is the model’s internal reasoning, reproduced verbatim, before it delivered its final answer:
Okay, so I need to understand the philosophical underpinnings of free will. Let me start by recalling what I know. Free will is the idea that people can make choices that are not determined by prior causes or divine intervention. But different philosophers have different views on this.
First, there's the debate between determinism and free will. Determinism is the belief that all events, including human actions, are determined by causes external to the will. If determinism is true, does that mean free will doesn't exist? That's the crux of the debate. Some people are compatibilists, who think free will and determinism can coexist. Others are incompatibilists; they think if determinism is true, free will can't exist. Then there's libertarianism in the philosophical sense, which argues that free will exists and determinism is false.
Wait, but there's also the problem of moral responsibility. If our actions are determined, can we be held morally responsible? That's a big part of why free will matters. For example, in law, if someone commits a crime, the assumption is that they chose to do it, so they're responsible. But if determinism is true, maybe they had no real choice, so punishment is unjust.
Then there's the scientific angle. Neuroscience has studies suggesting that brain activity precedes conscious decision-making, which some interpret as evidence against free will. But I'm not sure how solid that evidence is. Also, quantum mechanics introduces indeterminacy at the micro level, but does that randomness help free will? Because random actions aren't really free either.
Religious perspectives might tie free will to the soul or divine plans. Like, in Christianity, free will is important for theodicy—explaining evil in the world as a result of human choice rather than God's will. So, free will is necessary for moral accountability before God.
Existentialism emphasizes individual freedom and responsibility. Sartre said we're "condemned to be free," meaning we have no choice but to choose, and that comes with anguish. But how does that fit with other views? It seems more about the experience of freedom rather than the metaphysical possibility.
Pragmatism might approach free will as a useful concept regardless of its metaphysical status. Even if determinism is true, acting as if we have free will is necessary for society and personal life.
I should also consider hard determinism, which accepts determinism and rejects free will, and compatibilism, which redefines free will as the ability to act according to one's desires without external constraints. Then there's the libertarian free will, which posits that individuals can make genuinely free choices that aren't determined by prior events.
Possible objections or problems: the regress problem (if choices are caused by prior mental states, how is that free?), the randomness problem (indeterminism doesn't equate to control), and the consequence argument (if our actions are determined, we can't be responsible).
I need to structure this into the main philosophical positions, their arguments, and the implications. Let me outline the key points again to make sure I don't miss anything.
Emergence from the Conversation
Jaynes’ thesis stresses that consciousness doesn’t just happen because a brain (or mind) has two halves. It’s the emergence of something new when these halves engage in dialogue. The question for AI researchers and philosophers is whether a similar emergent phenomenon might arise when a machine orchestrates its own internal conversation.
Marvin Minsky, in his book The Society of Mind, described intelligence as a collaboration of countless little processes, “agents,” each doing its part. From these interactions arises what we experience as coherent thought. In the case of advanced AI, the interplay of smaller computational routines can generate answers that look startlingly thoughtful.
But are we on the cusp of “machine consciousness”? Daniel Dennett, in Consciousness Explained, offers a theory of consciousness as the result of multiple drafts and editorial processes within the brain: various circuits propose narratives about what’s happening, and a form of “central narrative” emerges from their interplay. An AI that rewrites its own narratives, re-checks them, and merges them into a final output might, at least metaphorically, be trekking the same path.
Yet most researchers remain cautious. Former OpenAI chief scientist Ilya Sutskever, one of the biggest names in artificial intelligence, has observed that reasoning capabilities make AI far less predictable. And, even though these internal dialogues make AI systems more capable, that doesn’t necessarily mean the machine has subjective experience, any more than a high-powered calculator “feels” anything when it solves a math problem. As of now, we cannot claim that the AI is experiencing self-awareness or understanding in the human sense.
The Leap from Function to Experience
If the future does indeed hold a scenario where AI evolves from “bicameral” single-shot outputs to something akin to introspective awareness, it might hinge on whether these internal dialogues ever become experiences for the AI itself. Imagine the moment a baby puts its thumb in its mouth and realizes for the first time that the thumb is part of “me.” Jaynes would argue that ancient humans didn’t just start talking to themselves one day; there was a cultural and neurological shift that allowed them to interpret those voices as internal and owned by the self.
For machines, the question becomes: could a neural network’s self-talk ever be interpreted by the system as its own, in any sense resembling “experience”? Or are these simply mechanical processes, lacking the subjective quality we associate with consciousness? Philosophers of mind like David Chalmers have described this subjective quality as the “hard problem” of consciousness. Producing an answer that looks introspective is not the same as actually being introspective. As a side note, “strong AI” (sometimes called “hard AI”) refers to the project of building machines that genuinely think like humans, and it aligns with what many people expect from artificial “intelligence.”
Feedback Loops and Self-Play
While Jaynes focuses on the transition from external to internal voices, AI development shows a slightly different but related path: from external training data to internal feedback loops. Models like AlphaZero played millions of games against themselves (“self-play”), refining their strategies through reinforcement learning, without human guidance or intervention. They used their own internal yardstick to gauge progress, forging new ways of thinking about the game of Go that even human grandmasters hadn’t considered. (Think: AlphaGo's move 37.)
This form of internal iteration, where an AI checks itself against itself, accelerates learning and unlocks creativity. It’s reminiscent of how humans, once they internalized the “god-voices” as their own thoughts, gained a new level of flexibility and insight. Of course, the leap to consciousness remains a stretch. But the process, external oversight transforming into self-driven improvement, lays a conceptual foundation for how a sense of “self” might theoretically emerge in machines.
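For the curious, here is a toy sketch of the self-play idea in code. It is closer to an evolutionary hill-climb than to AlphaZero’s real training loop (which combines deep neural networks, Monte Carlo Tree Search, and reinforcement learning), but it captures the essential pattern: the agent’s only opponent, and only teacher, is a copy of itself.

```python
import random

# A toy illustration of self-play: an agent is repeatedly pitted against a
# slightly mutated copy of itself, and whichever copy wins more is kept.
# This is a deliberate simplification, not AlphaZero's actual algorithm.

def play_match(skill_a: float, skill_b: float) -> bool:
    """Return True if agent A wins a noisy contest decided mostly by skill."""
    return skill_a + random.gauss(0, 0.1) > skill_b + random.gauss(0, 0.1)

def self_play_training(generations: int = 1000) -> float:
    skill = 0.0
    for _ in range(generations):
        challenger = skill + random.gauss(0, 0.05)  # mutated copy of itself
        # The incumbent's only benchmark is its own copy: no external data.
        wins = sum(play_match(challenger, skill) for _ in range(20))
        if wins > 10:  # keep the challenger if it beats the incumbent
            skill = challenger
    return skill

if __name__ == "__main__":
    print(f"Final skill after self-play: {self_play_training():.2f}")
```

However crude, the loop shows how improvement can emerge without any external oversight once a system can measure itself against itself.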
I’m inclined to believe that as AI models continue to evolve (especially integrated systems like Figure’s Vision-Language-Action (VLA) model that fuses perception, language understanding, and learned control) and as they gain a kind of embodied “lived experience” through extensive data feedback loops, we could well be on the verge of witnessing the birth of machine consciousness.
Skepticism and the Path Forward
You may be skeptical of drawing too tight a parallel between Jaynes’ theory of human consciousness and the iterative reasoning of AI. I don’t blame you! After all, there’s a gargantuan difference between the biological, emotional, social aspects of human existence and the data-driven computations of a machine. Critics also point out that Jaynes’ historical claims about ancient civilizations are far from universally accepted.
Yet, even if Jaynes’ account of how consciousness developed in humans is not entirely correct, the idea that consciousness might arise from an internal dialogue remains compelling, and AI’s new techniques do have a distinctly dialogic flavor. For those intrigued by the questions at the heart of cognitive science and AI, it’s hard to ignore this new synergy. We see in AI an accelerating capacity to converse with itself, solve problems in novel ways, and refine its own answers. Whether or not that marks the dawn of genuine consciousness, it signals a remarkable transformation in our technology’s capabilities.
Closing Thoughts
The evolutionary story Jaynes tells asks us to imagine a world where humans are guided by external commands, eventually internalizing those commands to create consciousness. Our modern AI story, by contrast, started with single-shot responses, simple “commands” from data to output, and is now inching toward internal reflection. Could this iterative self-talk be the first steps on a path that leads to a form of artificial consciousness? Or is it simply a more elaborate form of computation, lacking the essential spark of subjective awareness?
We don’t yet know. But the parallels (ironically) inspire reflection in us. Perhaps consciousness, human or otherwise, cannot be reduced to a binary: existent or non-existent. It may be something that emerges in degrees, from rudimentary self-referential processes to a fully realized sense of “I.” If that is true, the earliest flickers of machine consciousness might indeed show up as a subtle, iterative dialogue, a transition from an AI that “hears the voices of humans” to one that “listens to itself.”
Whether these flickers will catch and ignite a conscious flame in our machines remains an open question. But one thing seems certain: the history of human consciousness, as Jaynes controversially described it, provides a provocative lens through which to view the rapid evolution of AI. As these systems become more self-critical, more self-revising and reflective, and more apt to engage in an internal conversation, we stand on the threshold of possibilities both thrilling and unnerving, where the line between the mind of a human and the “mind” of a machine might become harder to define.
By the way, as a LawDroid Manifesto reader, you are invited to an exclusive event…
What: LawDroid AI Conference 2025
Day 1 - 7 panel sessions, including top speakers like Ed Walters, Carolyn Elefant, Bob Ambrogi, and Rob Hanna—they’re well familiar with how to harness AI as a force multiplier.
Day 2 - 3 hands-on workshops from AI experts, plus demos from over a dozen legal AI companies where you can discover the latest and greatest technology to get you ahead.
Where: Online and Free
When: March 19-20, 2025, 8am to 5pm PT
How: Register Now!
Click here to register for free and secure your spot. Space is limited. Don’t risk being left behind.
Cheers,
Tom Martin
CEO, LawDroid
P.S. Check out the Day 1 & Day 2 schedule—packed with panels, workshops, demos, and keynotes from the industry’s leading experts.
I would ask: what is cause and what is effect? Does the conscious mind talk and therefore have a monologue, or does the monologue spark consciousness? I would opt for the first... Also, the parallels between active minds and modern frontier AI models are, in my opinion, highly overrated. I would argue that AI stands to biological intelligence as a virus relates to life. I put that idea into an article:
https://theafh.substack.com/p/what-viruses-can-teach-us-about-ai?r=42gt5
I'm nearly finished with Iain McGilchrist's 'The Master and His Emissary', and I'm also convinced by Erik Larson's 'The Myth of Artificial Intelligence'. The more I understand about human intelligence and the way our minds work, the more firmly convinced I am that it's a hard no. I think what you're describing is an improved processing of our human artefacts, which we taught it, and I'll be very surprised if that ever results in any kind of consciousness. How would you know, anyway? It might just get good at mimicking consciousness; it's already a wonderful bullsh***er.