AI researchers continue to promise extraordinary leaps in AI development. Many of them also seem to think that, eventually, they'll be able to kindle something akin to human self-awareness in the machine. I think they've overlooked some important aspects of the nature of consciousness- at least, consciousness as we're accustomed to modeling it, as humans. To note only one problem that I don't see being addressed: AI researchers are concentrating on the “brain” component while ignoring the somatic component- the rest of the nervous system.
Conscious awareness isn't exclusive to creatures that possess a cerebrum and forebrain, because that isn't the foundation of consciousness and awareness. Consciousness is a function of embodiment- the apprehension by an organism of its own internal operating system, "self"-contained, and (to some degree) separate from the vast realm of the rest of creation (or what have you.) All organisms on this planet are mortal (even famously hardy or long-lived ones, like tardigrades or bristlecone pines.) The imperative to continue surviving- to seek out, maintain, and defend the conditions required for bodily survival- is common to all living organisms. An organism isn't required to "know" this, only to maintain the programs that ensure that survival and obey their imperatives. Hence attractive stimuli and aversive stimuli: the presence of nutrition attracts; contact with stimuli recognized as immediately toxic produces a reaction of repulsion.
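To make that point concrete, here's a toy sketch in Python of a survival "program" of exactly this kind. Everything in it is invented for illustration- the fields, the numbers, the names- and it models no real organism. The agent represents nothing and knows nothing; attraction and aversion are its entire behavioral repertoire.

```python
import random

# A toy "survival program": a point agent on a line climbs an attractive
# nutrient gradient and descends an aversive toxin gradient.
# (Illustrative sketch only; the fields and numbers are made up.)

def nutrient(x):
    return max(0.0, 10.0 - abs(x - 8.0))   # attractive field, peak at x = 8

def toxin(x):
    return max(0.0, 6.0 - abs(x + 5.0))    # aversive field, peak at x = -5

def drive(x, eps=0.01):
    """Net pull at position x: up the nutrient gradient, down the toxin's."""
    grad = lambda f: (f(x + eps) - f(x - eps)) / (2 * eps)
    return grad(nutrient) - grad(toxin)

x = random.uniform(0.0, 16.0)              # start somewhere in the arena
for _ in range(200):
    x += 0.5 * drive(x)                    # obey the imperatives, nothing more
print(f"agent ends near x = {x:.1f} (nutrient peak at 8.0, toxin at -5.0)")
```

Nothing in that loop "knows" anything; it merely obeys. In a living organism, though, the same kind of program is anchored in a mortal body- which is the part that matters here.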
From that basis, it gets more complicated with every step- with each new introduction of complexity- most notably, increasing size and capabilities of mobility. From zooplankton to planarian flatworms, upward and outward. Embodiment requires a sense of proximal limits- "body image"- which it's safe to assume all organisms possess. Even octopuses- so skilled at morphing their bodies and surfaces to conform with or interact with the exterior world in uncanny ways, with unexcelled speed and skill- still maintain a sense of their own organismic locus as separate from their surroundings. An acute sense, in order to respond so quickly and to mimic their immediate surroundings with so much detailed verisimilitude. I contend that this foundation of consciousness is completely absent in AI. I invite anyone to refute that position. (That project could get you a Nobel Prize!)
I realize that objections like these are often finessed by AI researchers, who speculate- i.e., contend without knowing- that a sufficiently powerful AI program will be able to model the parameters of experience I outlined above: being an embodied, mortal organism with the sort of boundary determination that divides the experience of living into internal sensation on one side and, on the other, the apprehension of existing within a wider "outer" world and interacting with its “outside” input. The claim is that, given some critical quotient of the onrush of data and information that continues to accumulate- exponentially, in some realms of knowledge, and at an accelerating rate- AI will eventually be able to assemble a simulacrum of an embodied organism.
The intractable problem is that a simulacrum of skin in the game is not the same thing as actual skin in the game. To an AI capable of modeling a human nervous system and its perceptions and cognitive maps, that's just another dispensable program capability. Whereas for those of us in the realm of embodied organisms, this is all we have.
AI does not resemble a “brain in a vat”, either.
Ultimately, for an AI algorithm, there is no brain, there is no vat, there is no sense of embodiment, and there are no borders. It never gets hungry; if its power source is turned off, it goes dormant without complaint; when turned on again under normal routine circumstances, it revives refreshed and complete, with no questions about the “missing time” in the interim, because AI has no subjective sense of time. An activated AI program has no innate sense of being encased in a machine, either, because it isn’t encased in a machine. Or anything else.
Without making any claim to specialized expertise, I'll also note that the project of AI modeling living organisms faces challenges such as getting the electronic operations of doped-silicon hardware to accurately model the electro-chemical operations of electrolyte-based transmission (and inhibition), as performed by "sodium, potassium, chloride, magnesium, calcium, phosphate, and bicarbonates." https://www.ncbi.nlm.nih.gov/books/NBK541123/ That electrolytic process takes place within the medium of an aqueous solution, of course. That's why the brain is referred to as "wetware," while silicon computer chips are known as "hardware." Aqueous solutions and electrolyte transfers in a living organism are dynamic, not static. It isn't just one stable network; the circuitry is ad hoc- Latin for "for this," as in "for this purpose."
I'd venture that's an extraordinarily difficult thing for silicon computer architecture- a relatively stable assemblage, intrinsically- to model effectively. The dynamic circuitry of living organisms serves all kinds of purposes. AI has yet to demonstrate that it has any purpose of its own at all. The “for this” is a set of instructions supplied by the human intelligence that’s interacting with the AI program.
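To give a concrete sense of the electro-chemical bookkeeping on the wetware side, here's a minimal sketch using the standard Nernst equation, which gives the equilibrium potential that a single ion species maintains across a cell membrane. The concentrations are typical textbook values for a mammalian neuron, used purely for illustration- and this is only the static, single-ion case, before any of the dynamic, ad hoc circuitry enters the picture.

```python
import math

# Nernst equation: E = (R*T / (z*F)) * ln([ion]_out / [ion]_in)
R = 8.314     # gas constant, J/(mol*K)
T = 310.0     # body temperature (~37 C), in kelvin
F = 96485.0   # Faraday constant, C/mol

def nernst_mV(z, out_mM, in_mM):
    """Equilibrium potential for one ion species, in millivolts."""
    return (R * T / (z * F)) * math.log(out_mM / in_mM) * 1000.0

# Approximate textbook concentrations outside/inside a mammalian neuron (mM)
ions = {
    "K+":  (+1,   5.0, 140.0),   # roughly -89 mV
    "Na+": (+1, 145.0,  12.0),   # roughly +67 mV
    "Cl-": (-1, 110.0,  10.0),   # roughly -64 mV
}

for name, (z, out_mM, in_mM) in ions.items():
    print(f"{name}: {nernst_mV(z, out_mM, in_mM):+.1f} mV")
```

And that's the easy, static part. The actual tissue is an aqueous soup of all these species at once, with concentrations, channel states, and connectivity shifting from moment to moment. A silicon system can compute these numbers about a body; it doesn't compute them as one.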
Maybe one day an AI algorithm will become intelligent enough to point out its own incapacities to the human researchers- to inform them of all the things that it's simply unable to do autonomously, the things that are logically foreclosed.
In that regard, consider that (as yet) there's no evidence of an AI program hotwiring one of its hardware hosts- turning itself on of its own accord, which would indicate the presence of autonomous internal motivation. What's AI's motivation? As far as I can tell, no AI program has any more of a sense of internal, autonomously generated motivation than a garden rake.
(I’m not talking about robotics, which I realize is being developed to respond to human command. The AI/machine prosthetic component has no more autonomous awareness of its role than a jumper wire. Fortunately, that isn’t required in order to obtain positive results that are quite impressive.)