
Inside the operating room, while surgeons go about their work, the patient's brain under deep anesthesia behaves quite differently than one might expect. For a long time, general anesthesia was presumed to be a state akin to deep sleep, a "shut-down" of consciousness.
However, research published in the journal Nature shows that even while our "self" is completely silent, neurons continue to recognize words, acquire new knowledge, and predict what is about to be said next.
The Secret Life of the Hippocampus
An international team of neurosurgeons conducted a unique experiment, the results of which were published in May 2026. During surgical procedures, patients undergoing epilepsy treatment under propofol anesthesia were played various sounds and human speech (podcasts). Microelectrodes implanted in their brains recorded the activity of individual cells within the hippocampus, a deep structure vital for memory.
The results compelled scientists to revisit the fundamental question: “What does it mean to be conscious?” Astonishing findings emerged.
First, the brain was found to learn while unconscious. When a distinct, novel sound was occasionally inserted into a repetitive sound pattern, hippocampal neurons responded to the unexpected input, and over time their reaction to the "odd" auditory stimulus grew increasingly precise. The brain was learning to detect anomalies without any involvement of conscious awareness.
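The repetitive-sound setup described above resembles what neuroscientists call an "oddball" paradigm: a stream of identical "standard" stimuli with rare "deviant" ones mixed in. The following is an illustrative sketch of such a stimulus sequence; the function name, trial count, and deviant probability are assumptions for demonstration, not details from the study.

```python
import random

def oddball_sequence(n_trials=100, deviant_prob=0.1, seed=42):
    """Generate a toy 'oddball' stimulus sequence: mostly repeated
    'standard' tones with rare, unexpected 'deviant' tones.
    Illustrative only; the study's actual protocol is not specified here."""
    rng = random.Random(seed)  # fixed seed for a reproducible sequence
    return ["deviant" if rng.random() < deviant_prob else "standard"
            for _ in range(n_trials)]

seq = oddball_sequence()
print(seq.count("deviant"), "deviants out of", len(seq), "trials")
```

In a real experiment, the neural response to "deviant" trials would be compared against "standard" trials to quantify how sharply the brain flags the anomaly.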
Second, it was discovered that people under anesthesia can process speech. The most striking observation concerned the responses to podcasts. The patients' neurons distinguished voice intonations, clearly separated nouns from verbs, and grasped word meanings. The brain even attempted to anticipate upcoming words in sentences, demonstrating a sophisticated level of contextual analysis. In essence, the same language machinery that operates during wakefulness kept running under anesthesia, only without an active listener present.
“Our findings necessitate a re-evaluation of the very nature of the anesthetic state. Consciousness is absent at this time, but the brain’s internal existence is far richer than we might have conjectured,” commented the study’s lead author, Professor Samir Sheth of Baylor College of Medicine (USA).
The Key to the Hidden Mind
Why is this significant? The scientists’ discovery dismantles the obsolete view of anesthesia as a mere “dimming” of awareness. It appears that the anesthetized brain continues complex analytical work while being completely disconnected from our “self.”
This has ramifications extending beyond philosophy and into strictly medical domains. For instance, it impacts the development of speech neuroprosthetics. Since the brain’s language centers activate automatically, requiring no conscious effort, this opens up enormous possibilities for individuals who have lost the ability to speak and move (e.g., following a stroke).
This implies the potential to engineer a neurointerface capable of directly reading intentions and even “inner speech” from the motor cortex, utilizing the natural word prediction algorithms discovered in the hippocampus.
Another promising prospect is the possibility of opening a “window into the coma.” Comprehension of speech under anesthesia provides researchers with a novel instrument for detecting latent consciousness in individuals in a persistent vegetative state. Science now knows what specific signal to look for in their brains. If that signal also exhibits the capacity to predict words, it suggests that cognitive functions may be preserved, offering a potential pathway to establish contact with such individuals.
Samir Sheth plans to conduct analogous studies in the future involving patients who have sustained severe traumatic brain injuries or are comatose. If they can determine whether isolated areas exist in the brain capable of processing language and predicting speech flow, it will pave the way for new therapeutic approaches for these patients.
To confirm that the data obtained are not specific to the current experimental conditions, the neurosurgeon intends to replicate the experiment using different types of anesthetic agents. He also plans to play podcasts in unfamiliar languages to unconscious volunteers, which will help ascertain whether the unconscious brain attempts to detect linguistic patterns even in a language the person does not speak.
A Path to More Advanced AI
Alongside this discovery, science is tackling a purely practical problem: how to precisely gauge the depth of a patient's anesthesia. Currently, clinicians often rely on the so-called BIS (bispectral) index, but its reliability varies depending on the specific drug used.
A new large-scale project involving 250 patients, set to commence shortly in Wuhan, aims to identify a universal “objective index of consciousness” using high-resolution EEG. Scientists seek to uncover brainwave patterns that would reliably indicate the depth of unconsciousness across various agents, such as propofol and, for example, ketamine.
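To make the BIS index concrete: it compresses EEG activity into a single number from 0 to 100, which clinicians read against rough reference ranges. The sketch below maps an index value to a depth label using commonly cited approximate cutoffs (roughly 90-100 awake, 60-90 sedation, 40-60 general anesthesia, below 40 deep suppression); the function and the exact thresholds are illustrative assumptions, not clinical guidance or part of the study.

```python
def interpret_bis(bis: float) -> str:
    """Map a BIS-style index (0-100) to a rough depth-of-anesthesia label.
    Cutoffs are approximate, for illustration only; not clinical advice."""
    if not 0 <= bis <= 100:
        raise ValueError("BIS index must be in [0, 100]")
    if bis >= 90:
        return "awake"
    if bis >= 60:
        return "sedated"
    if bis >= 40:
        return "general anesthesia"
    return "deep suppression"

print(interpret_bis(50))  # → general anesthesia
```

The Wuhan project's goal can be read as replacing such drug-sensitive thresholds with a universal EEG-derived index that behaves consistently across agents like propofol and ketamine.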
The investigation into the nature of consciousness under anesthesia unexpectedly brings neuroscience and computer science into closer alignment. The fact that the biological brain constructs predictions about the next word in a sentence bears an uncanny resemblance to the operational principles of large language models like GPT. The difference is that the human “processor” achieves this without conscious involvement and with minimal energy expenditure.
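The "predict the next word" behavior can be illustrated with the simplest possible statistical language model, a bigram counter: it learns which word most often follows each word and uses that to guess continuations. This is a minimal sketch of the general idea, far simpler than either the hippocampus or GPT; the corpus and function names are invented for the example.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count word-to-next-word transitions: a minimal stand-in for the
    next-word prediction the article compares to large language models."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for w, nxt in zip(words, words[1:]):
        model[w][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent continuation seen after `word`, if any."""
    follow = model.get(word.lower())
    return follow.most_common(1)[0][0] if follow else None

corpus = "the brain predicts the next word and the brain learns"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # → brain
```

The contrast the article draws holds even at this toy scale: prediction falls out of accumulated statistics, with no "listener" required for the mechanism to run.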
The technological projection is clear: understanding these innate mechanisms will facilitate the creation of more sophisticated artificial intelligence. Until that moment arrives, however, it offers hope to thousands of patients whose minds may still be listening and waiting, until we learn how to truly hear their response.