Where I explore the parallels between the bicameral mind and AI’s emerging self-dialogue, and whether we’re witnessing the dawn of machine consciousness
I would ask: what is cause and what is effect? Does the conscious mind talk, and therefore have a monologue, or is it the monologue that sparks consciousness? I would opt for the first... Also, the parallels between active minds and modern frontier AI models are, in my opinion, highly overrated. I would argue that AI stands to biological intelligence as a virus stands to life. I put that idea into an article:
https://theafh.substack.com/p/what-viruses-can-teach-us-about-ai?r=42gt5
Thanks for your comment, Andreas. Your article is thought-provoking. We disagree on some things and agree on a lot. I think we'll have to see how things play out, but I believe it's incontrovertible that we're on the path to superintelligence, whether it turns out human-like or alien to human thinking; I believe intelligence is measured by the ends achieved, not limited by the method by which those ends are achieved.
I believe the path to superintelligence is far longer and harder than the big research labs portray. And yes, it could be measured by its output, but then we need to set higher standards. We need to measure things like open-endedness, agency, creativity, logic, self-learning, a semantic understanding of how things are related, intrinsic motivation, and consistency in its ability to find new ways, not only whether it is remixing its training corpus. While that is useful, it is only mimicry.
I've nearly finished Iain McGilchrist's 'The Master and His Emissary', and I'm also convinced by Erik Larson's 'The Myth of Artificial Intelligence'. The more I understand about human intelligence and the way our minds work, the more firmly convinced I am that it's a hard no. I think what you're describing is an improved processing of our human artefacts, which we taught it, and I'll be very surprised if that ever results in any kind of consciousness. How would you know, anyway? It might just get good at mimicking consciousness; it's already a wonderful bullsh***er.
First of all, thank you for commenting, Jennie! I don’t get a lot of comments, and I wish I got more, especially on my posts where I wax philosophical. I definitely understand where you’re coming from, and I have Erik Larson on my reading list. But I don’t think we can gauge machine consciousness by human standards. It will be a difference in kind, not degree, but that doesn’t mean it won’t be as real or effective. I take your point: “how will we know?” I guess, taking Descartes’s point of view, we will never know whether another is experiencing consciousness the way we are, and that applies to machines as well as humans. Suffice it to say that Julian Jaynes’s book had an enormous impact on me 30 years ago, and I think it bubbles up a lot of questions that are equally compelling today.
Thank you for the post! Very thought-provoking.
I think consciousness is a peculiarly human, embodied phenomenon, a function of the brain rather than the language and things it contains. I just can't imagine what criteria you might cook up to measure whether it is conscious - or indeed if we have sufficient understanding of consciousness itself to know what we are talking about!
I heard a speaker last year say confidently that AI has learned 70% of all human knowledge, which I thought was kind of ridiculous. How do you quantify the implicit, barely understood knowledge that every human being has, our common sense, our embodied knowledge and all the historical cultural elements of knowledge that are so deeply buried in our consciousness as to be barely recognisable? There is so much to being human that is beyond our ability to comprehend, and I would love to see a little more humility among our tech overlords - some of the stuff they say about artificial intelligence makes me think they don't understand much about human intelligence, let alone intelligence in any other form.