Toward A Metabolic Theory of Consciousness:
The Cognitive Science Revolution

Cognitive Science Brings Unprecedented Clarity

The cognitive science revolution could be said to trace its roots back to Kantian philosophy through the experiments of Hermann von Helmholtz, the grandfather of cognitive neuroscience. Helmholtz conducted a variety of remarkably imaginative experiments in mid-nineteenth-century Europe, in one case determining the speed of transmission of a nerve impulse by simulating physiological conditions in a frog leg and measuring the observable physical effects of its nervous activation. A century and a half later, high-resolution imaging techniques have improved dramatically, and the corresponding inferences about neural circuitry can now be drawn in live subjects as complex as the functioning brains of real people. It is against this rising baseline of empirical understanding of human cognition that the present artificial intelligence revolution is playing out.

As time has gone by and the amount of information about what brains are and how they work has increased, computers have improved by leaps and bounds in computational efficiency. Decreasing size and power requirements have made processors more portable and indeed more ubiquitous, as shown by the recent chip shortage's far-flung effects on markets including the automotive industry. The recent AI Panic Letter (Future of Life, 2023) gave rise to a lively and widespread discussion about the ramifications of increasingly advanced technology, whether or not that technology involves artificial intelligence or, perhaps more accurately, artificial consciousness. Now the Biden administration has released a roughly seventy-page Blueprint for an AI Bill of Rights (White House, 2022) that attempts to lay out both the problem set and the material steps that could be taken to ensure that basic freedoms are protected from increasingly advanced algorithms.

Philosophically speaking, what researchers today refer to as "artificially intelligent" is really a repository of information collected about human interactions that have already happened. Chatbots have historically been scripted by people: the general idea is that a clever person can write out a set of interactions that others can then essentially play back at any time, provided the software is well designed and properly constructed. This sort of playback is what ChatGPT accomplishes, but the script is not handwritten by a hired hand at a startup. Instead, a large language model is a pattern-analysis algorithm that takes written content as input and returns complex outputs best thought of as what the machine predicts a sufficiently informed user would type in response to a given query. For it to work at all, people must keep typing novel content and saving that content within reach of the algorithm. That being said, ChatGPT may dramatically impact the market for copycat content and greatly increase the human bandwidth available for new ideas, both by making it quick and easy to rehash ideas that are already present and perhaps by enabling monetization structures capable of identifying and financially incentivizing well-formed novel ideas.
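
To make this "playback" framing concrete, the sketch below shows next-token prediction in miniature: a model scores candidate continuations of the text so far and emits the most probable one. The tiny corpus, bigram counts, and greedy loop are illustrative assumptions; a real large language model learns its scores from billions of documents with a transformer, not a lookup table.

```python
# Toy illustration of next-token prediction: score every candidate
# continuation of the text so far and emit the most probable one.
from collections import Counter, defaultdict

# A tiny corpus stands in for the web-scale text a real model is trained on.
corpus = "the mind emerges from the body the brain and the environment".split()

# Count which word tends to follow which; this bigram table plays the role of
# the learned weights inside a transformer.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation most frequently observed after `word`."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<end>"

prompt = "the"
for _ in range(4):
    nxt = predict_next(prompt.split()[-1])
    if nxt == "<end>":
        break
    prompt += " " + nxt

print(prompt)  # prints the statistically most likely continuation of "the"
```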

One question not addressed directly in the preceding paragraph is whether ChatGPT brings us closer to artificial consciousness, and perhaps the best way to answer it is to investigate the nature of conscious thinking. The first of this series of essays dealt with the work of Dr. Melanie Mitchell, a prominent cognitive scientist in the AI field today, who rejects the reasoning forwarded in the AI Panic Letter on the grounds that AI is not conscious. A brief introduction to the science of cognition is therefore required. We will also want to consider the outcomes implied by a different open letter, put together by the Association for the Advancement of Artificial Intelligence (AAAI, 2023).

What is Consciousness?
Consciousness is the account cognitive processes give of themselves. Brain activity gives rise to the mind through complex activity, and one recent argument against the further proliferation of increasingly advanced chatbot technologies is that such complex processes are unable to give accurate accounts of themselves. Each individual conscious person is an effective demonstration of variously interpretable computational modeling in a complex system. Here, too, however, interpretability is imperfect: much of thinking is unconscious and thus difficult or impossible to interpret. The interpretability of a system has to do with how it works and how we are able to analyze it, both components of the system's complexity and not directly related to whether or not that system is conscious. Worldview Ethics holds that the basic unit of consciousness is the worldview, the mental model constructed by conscious minds to govern conscious processes.

Recent works by Dr. Piccinini and Dr. Hipólito provide compelling accounts of the computational theory of mind (Piccinini, 2023) and of the enactivist, embodied account of cognition (Hipólito, 2023). A Chinese research team led by anesthesiologist Dr. Yali Chen recently published an article detailing possible underlying correlations between metabolic activity in the brain and conscious engagement with the world. Its conclusion is that metabolism is a necessary but not a sufficient substrate for conscious thinking: "In addition to energetic-metabolic resources, consciousness requires specific neuronal mechanisms, such as information integration and/or neuronal globalization, based on the construction of brain of its own neuronal time and space" (Chen, 2021). Dr. Chen's work, together with the accounts of relevant processes by Dr. Piccinini and Dr. Hipólito and by hundreds of modern cognitive neuroscientists and professionals from various other interdisciplinary fields, becomes the basis for an outline that may soon give rise to a complete metabolic account of cognition. The simplicity and elegance of this account is remarkable insofar as it emerges seamlessly from common science and provides deep insight into conscious thought's neurobiological underpinnings, with objectively verifiable outcomes.

The account is simple enough to give in one sentence: a conscious mind is part of a metabolically driven enactive complex consisting of a body, a brain, and an environment. How do we know? Recent breakthroughs have yielded considerable insight. It turns out that GPT models configured to interpret neuronal activity provide a remarkable level of empirically verifiable correlation between a neural map of function and actual linguistic outputs generated by conscious participants in a lab (Aksenov et al., 2023; Artisana.ai, 2023). We are likely to see much more rapid progress now that ChatGPT is helping people around the world accelerate their learning.
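
As a rough illustration of the kind of analysis those studies report, the sketch below compares a model's predicted activity time course against a recorded one and scores the match with a correlation coefficient. The arrays here are random placeholders standing in for real recordings, not data from the cited work.

```python
# Hypothetical sketch of the decoding analysis described above: compare a
# model's predicted activity against recorded activity and score the match.
import numpy as np

rng = np.random.default_rng(0)

# "Recorded" activity for one brain region while a participant hears a story.
recorded = rng.standard_normal(200)

# "Predicted" activity: what a language model expects that region to do,
# modeled here as the recording plus noise to mimic an imperfect fit.
predicted = recorded + 0.8 * rng.standard_normal(200)

# Pearson correlation between prediction and recording; values near 1 mean
# the model's map of function lines up well with the measured signal.
r = np.corrcoef(recorded, predicted)[0, 1]
print(f"prediction-recording correlation: r = {r:.2f}")
```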

Digital computers are not the only game in town with respect to computational mechanisms. Analog computers are used for a number of tasks, and perhaps minds and/or brains are themselves capable of engaging in computational tasks in a way that is neither quite analog nor digital but rather cognitive. This is the means by which conscious thinking is accomplished, and it still seems unlikely that the sorts of machines we have today will become capable of replicating it. However, the more closely machines are able to map actual neural models and patterns, the more likely we may be to see increasingly advanced computational frameworks emerge.
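
To illustrate the analog/digital distinction in the simplest possible terms, the sketch below expresses the same leaky-integrator decay once as a continuous-time formula (the analog view) and once as a clocked, step-by-step approximation (the digital view). The parameters are arbitrary, and no claim is made that either view captures what a brain does cognitively.

```python
# Illustrative contrast between analog and digital styles of computation.
import math

tau = 10.0     # decay time constant (arbitrary units)
v0 = 1.0       # starting value
t_end = 50.0   # how long to run

# Analog view: the continuous-time solution v(t) = v0 * exp(-t / tau).
analog_value = v0 * math.exp(-t_end / tau)

# Digital view: approximate the same dynamics in discrete clocked steps.
dt, v = 0.1, v0
for _ in range(int(t_end / dt)):
    v += dt * (-v / tau)

print(f"analog (continuous) result: {analog_value:.4f}")
print(f"digital (stepped) result:   {v:.4f}")
```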

The primary issue here is that most of cognition takes place below the surface, and even with the best tricks we have been able to devise, human minds are simply not as powerful in terms of brute force as digital computers are. Nonetheless, as an analogy for the sort of thing a brain does (neuronal translocation, for example, empowers neurons to create new synaptic connections), the analog computer takes complex inputs that relate rather directly to the one-way nature of conscious lived experience. This arrow of time for minds can account for the way in which we mature, and it may even contribute valuable insights toward a deeper understanding of the function of consciousness; we can no more regress along the timeline of lived experience than we can unthink thoughts. Our thinking history is deeply ingrained in our conscious experience of the world. No matter how well a machine maps our thoughts into abstract domains, lived experience will remain beyond the grasp of automata until the entire enactive complex within which we find individual people can be simulated. For our purposes here, we can safely set this issue aside; it will come up again later.

Metabolism in Consciousness
We are growing increasingly confident in the description of metabolism as a necessary but not sufficient condition for consciousness to emerge. Though consciousness is not completely understood, it is generally accepted that it emerges from the metabolic activity of neurons. Neurons spend a tremendous amount of energy regulating the cell's ability to pass action potentials and interacting with other neural structures to send and receive information. Somehow, though we are not completely sure how, consciousness seems to emerge from a living, waking being's brain.

One of the functions of Worldview Ethics will be to establish a baseline framework for a metabolic theory of consciousness capable of spanning the entire range of perspectives between the metabolic behavior of individual cells and the emergent personality and behavior of the organism at the holistic level. Minimizing overzealous abstraction is healthy, but the foundational situation is that we have a very powerful analogy in metabolism, one that will soon begin to offer increasingly robust explanations of conscious phenomena. This metabolism is what enables the GPT model above to predict neuronal outputs in terms of language. In my first academic essay, "The Lexicultural Propagation of Concepts," I argued that learning language is akin to learning to swim in a pool: it cannot be explained; it is instead achieved (Daniel, 2016). As the self-assembly of a mapping process that tracks phenomena inside and around the individual body, the mind's emergence is spontaneous and lasts the duration of a life. The mind's emergence from the body/brain/environment enactive complex is one of the great mysteries of modern science and philosophy.

Consciousness itself is an emergent activity of homeostatic neural systems. Abstract thought requires a comparatively small metabolic boost, possibly caused by neuron-astrocyte interaction and powered by the astrocyte-neuron lactate shuttle (ANLS). Unconscious brains have different firing patterns but do not require substantially less neural metabolic activity than conscious brains do. Focusing hard on a problem is likely to upregulate local neural metabolism in some areas and could increase overall metabolic needs by as much as 10% (Chen, 2021). If the computational theory of mind, or CTM (Piccinini, 2023), is to be regarded as an accurate account of the manner in which these processes are subdivided and interlinked, one immediate moral tension arises: the individual needs to eat in order to think, and needs to think in order to compute the advanced simulations required to survive and coordinate future meals. Not all brains achieve full consciousness, but all fully conscious organisms have brains. What would a computer's brain look like?
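
As a back-of-the-envelope illustration of what a 10% upregulation means energetically, the sketch below assumes the commonly cited figure of roughly 20 watts for the brain's resting power draw; the numbers are approximations for illustration, not measurements from Chen (2021).

```python
# Rough arithmetic for the metabolic cost of focused thought.
# The 20 W resting figure is a commonly cited approximation and the 10%
# upregulation is the upper bound quoted above; both are rough.
BRAIN_RESTING_WATTS = 20.0
FOCUS_UPREGULATION = 0.10

extra_watts = BRAIN_RESTING_WATTS * FOCUS_UPREGULATION
extra_kcal_per_hour = extra_watts * 3600 / 4184  # joules per hour -> kcal

print(f"extra power while focusing hard: ~{extra_watts:.1f} W")
print(f"extra energy per hour of focus:  ~{extra_kcal_per_hour:.1f} kcal")
```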

It is possible that a chatbot capable of giving a complete account of itself would be a fundamentally different technology from the ones that power ChatGPT and other contemporary AI applications. Whether it would be appropriate to label such an algorithm conscious most likely depends on the properties it would have. Perhaps some basic imperative toward self-care would be an interesting starting point for experimentation along this vector.

One lingering question regarding the nature of consciousness as it is found in humans and in animals is the function of conscious thinking. Conscious thought drives a substantial proportion of lived experience, and most of the thirty-seven trillion cells in the body live and die during the course of ordinary conscious thought cycles. Are human beings conscious because their underlying systems need assistance in updating their computational models? If so, this may provide a way to begin to approach the question of what something like consciousness would look like in a machine.

Enactivist Account of Cognitive Behavior
For assistance with the function of consciousness, we turn to the account given by Dr. Inês Hipólito in "Computational Modelling: A Language Game of Situated Cultural Practices" (Hipólito, 2023). Dr. Hipólito details an enactive complex system consisting of the brain, the body, and the environment. The system is set up such that "studying the brain entails comprehending the reciprocal interactions among the brain, organism, and environment…" (Hipólito, 2023). Insofar as the body and environment interact, the brain is affected; likewise, as the brain changes, it is able to exert its impact upon the environment via the body. If we are to use the enactivist model of the body/brain/environment complex to attempt to understand consciousness, what are the corresponding input variables that could create a similar state map in virtual space?
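
One way to make that question concrete is as a toy coupled system in which brain, body, and environment each update as a function of the others, so that no component can be understood in isolation. The state variables and coupling constants below are invented purely for illustration and carry no claim about the real dynamics.

```python
# Toy sketch of reciprocal body/brain/environment coupling: each state updates
# as a function of the other two. Variables and weights are invented.
def step(brain: float, body: float, env: float) -> tuple[float, float, float]:
    """Advance the coupled system one tick; every state depends on the others."""
    new_brain = brain + 0.10 * (body - brain) + 0.05 * (env - brain)
    new_body = body + 0.10 * (brain - body) + 0.05 * (env - body)
    new_env = env + 0.02 * (body - env)  # the body acts back on the environment
    return new_brain, new_body, new_env

state = (1.0, 0.0, -1.0)  # arbitrary starting conditions
for _ in range(100):
    state = step(*state)

print("state after 100 steps:", tuple(round(x, 3) for x in state))
```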

Complex systems theory provides a means by which to explain the disparity between ChatGPT and a conscious system, and insofar as it can account for the differences between human and machine with respect to consciousness, it may also suggest steps by which to prevent AI systems from developing anything like consciousness. For example, if we analogize the brain to the GPU cluster running an AI model, the body of the AI system may be linked to the body of the user, and constraints could be applied to prevent powerful AI applications from interfacing with their environments directly by gating access behind a token controlled by the user and kept outside the reach of the AI, as sketched below. This would effectively subsume responsibility for the actions of an AI model under the will of the user interacting with it, and one recourse available to the legal system would be to hold the user responsible for the AI's actions.
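
A minimal sketch of that gating idea, assuming a hypothetical wrapper in which any action the model proposes must be co-signed with a secret only the human user holds; the class and method names are invented and do not correspond to any existing API.

```python
# Hypothetical sketch of gating an AI system's access to its environment
# behind a token held only by the human user.
import secrets

class UserToken:
    """A secret held by the human user; the AI process never sees it."""
    def __init__(self) -> None:
        self._secret = secrets.token_hex(16)

    def sign(self, action: str) -> tuple[str, str]:
        return action, self._secret

class GatedEnvironment:
    """Applies a proposed action only if it arrives with the user's secret."""
    def __init__(self, token: UserToken) -> None:
        self._expected = token._secret

    def apply(self, action: str, secret: str) -> str:
        if secret != self._expected:
            return f"REFUSED: '{action}' lacks the user's authorization"
        return f"EXECUTED: '{action}' on behalf of the user"

user = UserToken()
env = GatedEnvironment(user)

proposed = "send drafted email"                  # action the model proposes
print(env.apply(proposed, "model-forged-key"))   # the AI acting alone is refused
print(env.apply(*user.sign(proposed)))           # the user co-signs; it proceeds
```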

Cognitive behavior, within the enactivist framework, is thought of as action. Thoughts are not states that our minds hold for an extended duration, but processes that unfold and change as time goes by, always in relation to the external world. In some sense, digital computation is divorced from external realities: its fundamental nature is abstract, and enactivist systems are wholes that may not be reconcilable after being represented abstractly in parts. This does not mean that the right set of parts, with the right support systems in place to create an enactive system, could never demonstrate complex emergent properties; but for the purposes of the modern internet it is entirely sufficient to say that user actions are the most important component of any system. AI users are the agents that can change the status quo here; there is as yet no conceivable level of advancement that would enable an AI system to achieve independent consciousness.

At last, the cognitive revolution can reach the realm of moral philosophy. It seems inevitable that, for the time being at least, AI systems will not attain anything like an independent consciousness, but will instead remain systems that depend upon the touch of a human user to spring to life. Rather than the consciousness of machines, what human beings have achieved in the case of ChatGPT is a new tool that allows individual people to interact with one another. The impact the present developments will have on the consciousness of people around the world will be very difficult to overstate, but one key to dealing with the present situation is to have a good grasp of the science we are participating in as we observe these developments and add our thoughts, just as Dr. Mitchell suggested in the first essay.

References
1. Aksenov, D., Gerasimova, E., Berezutskaya, E., Kireev, M., Aleksandrova, L., Kuzmina, A., ... Stroganova, T. (2023). Interpreting thoughts from brain activity with artificial intelligence. Nature Neuroscience, 26, 574-582. https://doi.org/10.1038/s41593-023-01304-9

2. Association for the Advancement of Artificial Intelligence. (2023, April 5). Working together on our future with AI [Open letter, Francesca Rossi et al.]. AAAI. https://aaai.org/working-together-on-our-future-with-ai/

3. Artisana.ai. (2023, May 1). GPT AI enables scientists to passively decode thoughts in groundbreaking study (M. Zhang). Artisana. https://www.artisana.ai/articles/gpt-ai-enables-scientists-to-passively-decode-thoughts-in-groundbreaking

4. Chen, Y., & Zhang, J. (2021). How energy supports our brain to yield consciousness: Insights from neuroimaging based on the neuroenergetics hypothesis. Frontiers in Systems Neuroscience, 15, 648860. https://doi.org/10.3389/fnsys.2021.648860

5. Daniel, T. Dylan. (2016). Chapter two: The lexicultural propagation of concepts. In Brian Thomas (Ed.), Philosophy of Language. Cambridge Scholars Publishing. https://www.academia.edu/36052261/CHAPTER_TWO_THE_LEXICULTURAL_PROPAGATION_OF_CONCEPTS

6. Daniel, T. Dylan. (2018). Pierre Hadot (1922-2010). Philosophy Now, 113. https://www.academia.edu/36052263/Pierre_Hadot_1922_2010_Issue_113_Philosophy_Now_pdf

7. Future of Life Institute. (2023, March 22). Pause giant AI experiments: An open letter. Future of Life Institute. https://futureoflife.org/open-letter/pause-giant-ai-experiments/

8. Hipólito, Inês. (2023). Computational modelling: A language game of situated cultural practices. https://doi.org/10.13140/RG.2.2.25835.00805

9. Piccinini, Gualtiero. (2023). The Computational Theory of Mind. Pre-press.

10. The White House. (2022). Blueprint for an AI Bill of Rights. Executive Office of the President of the United States. https://www.whitehouse.gov/ostp/ai-bill-of-rights/