2 Sources
[1]
Adversarial AI reveals mechanisms and treatments for disorders of consciousness - Nature Neuroscience
Understanding disorders of consciousness (DOC) remains one of the most challenging problems in neuroscience, hindered by the lack of experimental models for probing mechanisms or testing interventions. Here, to address this, we introduce a generative adversarial artificial intelligence (AI) framework that pits deep neural networks -- trained to detect consciousness across more than 680,000 ten-second neuroelectrophysiology samples and validated on 565 patients, healthy volunteers and animals -- against interpretable, machine learning-driven neural field models. This adversarial architecture produces biologically realistic simulations of both conscious and comatose brains that recapitulate empirical neurophysiological features across humans, monkeys, rats and bats. Without explicit programming, the AI model retrodicts known DOC responses to brain stimulation and generates testable predictions about the mechanisms of unconsciousness. Two such predictions are validated here: selective disruption of the basal ganglia indirect pathway, supported by diffusion magnetic resonance imaging in 51 patients with DOC, and increased cortical inhibitory-to-inhibitory synaptic coupling, supported by RNA sequencing of resected brain tissue from 6 human patients with coma and a rat stroke model. The model also identifies high-frequency stimulation of the subthalamic nucleus as a promising intervention for DOC, supported by electrophysiological data from human patients. This work introduces an AI framework for causal inference and therapeutic discovery in consciousness research, as well as in complex systems more broadly.
[2]
Dueling AI agents could reveal keys to restoring consciousness
Can a "friendly" rivalry between two artificial intelligence (AI) agents help reveal how the brain supports consciousness? That's the suggestion coma researcher Martin Monti and his colleagues at the University of California, Los Angeles, make in a paper published today in Nature Neuroscience. One of their two AI models generated realistic imitations of electrical patterns seen in conscious and unconscious brain states, from wakefulness to deep comas. Its counterpart had to identify these states. The results largely support established ideas about how the brain behaves during comas, vegetative states, and other disorders of consciousness. But they also suggest roles for a brain structure and a pattern of cell signaling not previously known to be involved in such disorders -- predictions the scientists were able to test. Monti spoke with Science about how the paper's two models, which he calls the "black box" and the "glass brain," could reveal new ways to restore consciousness after brain injury. This interview has been edited for clarity and length.

Q: You built two AI models, with one designed to interrogate the other. Can you explain how they talk to each other?

A: So here's the game: We have two friends. One -- let's call it the black box -- knows how to tell consciousness from unconsciousness. It's been trained on 680,000 snippets of EEG [electroencephalography] data from animals and people in different states of consciousness. The other -- think of it as a glass brain -- is a real, biologically plausible simulation of the human brain. We tell it, "Your job is to move all of your knobs, every single parameter you've got, to trick the other guy -- the black box -- into thinking that you're creating a real EEG of a conscious or unconscious state." Then we ask the glass brain, "Which brain parameters made the box think the EEG was unconscious?"

Q: And what did that game reveal?

A: The simulated brain created authentically unconscious-looking EEGs. I work a lot with clinicians. I see a lot of clinical data. They look like things that we knew and that we expected. And then, of course, the question is, how did you tweak your knobs to get these things? We got a lot of parameters that we'd expected. But there were two novel things that we did not expect, and those were just not on anybody's bingo card. One was the role of a part of the basal ganglia called the globus pallidus externa. The less connectivity between this and [another basal ganglia structure called] the striatum, the more likely it was that the EEG produced was an unconscious EEG. We confirmed this with imaging data from people with disorders of consciousness. The second finding is that there should be, in unconsciousness, more coupling between inhibitory neurons -- neurons that restrain the firing of other neurons. We were lucky enough to be able to validate this using data from another group comparing human brain tissue from individuals who died in a coma versus individuals who did not.

Q: Your model also predicted that stimulating one brain region -- called the subthalamic nucleus -- might trigger awakening from unconsciousness. Is there evidence for that in people?

A: Nobody has ever done deep-brain stimulation [DBS] to the subthalamic nucleus in disorders of consciousness. But Daniel Toker, the first author on the study, found a study of people with an implanted DBS device for cervical dystonia, a type of neck spasm. Some patients had stimulation to the subthalamic nucleus. We fed their EEG data to our neural network and we said, "Hey, give us a consciousness score for these EEGs." They were conscious to begin with, but the neural network scored them higher after stimulation. In patients who are not conscious, is stimulation going to wake them up? I don't know.

Q: There's only one way to find out.

A: There's only one way. And we're trying to set up a clinical trial for that.

Q: What other novel things could this method tell us about consciousness?

A: Maybe this is sci-fi, but we can look at other species now. Before, we could look at their EEGs and wonder about their degree of consciousness, but now we can also look at what mechanistically is going on.

Q: You also suggest that these findings are relevant for other neurological and psychiatric disorders. Could you create the same kind of opposing AI models to learn more about what the brain is doing in those cases?

A: Absolutely. It would be exactly the same. You would first train a network to recognize the features of, let's say, EEGs from individuals experiencing depression, alongside EEGs from healthy volunteers and from other mood states. I sincerely hope that people will apply this method to something else and discover something that was not already known about some other condition. I would be so proud if that happened.
Researchers at UCLA developed a generative adversarial artificial intelligence framework that pits two AI models against each other to simulate conscious and unconscious brain states. The system, trained on over 680,000 neuroelectrophysiology samples, identified previously unknown mechanisms behind coma and predicted that stimulation of the subthalamic nucleus could help restore consciousness in patients with brain injuries.
A team led by coma researcher Martin Monti at UCLA has introduced a generative adversarial artificial intelligence framework that uses dueling AI agents to investigate disorders of consciousness, one of neuroscience's most challenging problems [1]. Published in Nature Neuroscience, the research trained deep neural networks on more than 680,000 ten-second neuroelectrophysiology samples, validated on 565 patients, healthy volunteers, and animals, to detect conscious and unconscious brain states [1].

The adversarial architecture features two AI models working in opposition. The first, dubbed the "black box," learned to distinguish consciousness from unconsciousness using EEG data from animals and people in different states [2]. The second, called the "glass brain," generates biologically realistic simulations of brain activity by adjusting its parameters to trick its counterpart into identifying the generated patterns as real conscious or comatose states [2]. This approach produces authentic-looking unconscious EEGs that recapitulate empirical neurophysiological features across humans, monkeys, rats, and bats [1].
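The adversarial setup described above, a fixed classifier pitted against a tunable, interpretable simulator, can be sketched in miniature. Everything below is illustrative only: `black_box_score` and `glass_brain` are toy stand-ins (the study's real classifier is a deep network trained on hundreds of thousands of EEG samples, and its simulator is a neural field model), and the random hill climb merely gestures at how one might search the "knobs" for parameters that yield unconscious-looking output.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box_score(eeg):
    """Toy stand-in for the trained classifier: scores a simulated
    EEG snippet, with higher values read as 'more conscious'.
    Here we use broadband signal variance as a crude proxy."""
    return float(np.var(eeg))

def glass_brain(params, n=256):
    """Toy stand-in for the interpretable simulator: two hypothetical
    'knobs' (excitatory and inhibitory gains) shape the output."""
    t = np.linspace(0, 1, n)
    excit, inhib = params
    signal = excit * np.sin(2 * np.pi * 10 * t)   # fast (alpha-like) drive
    signal -= inhib * np.sin(2 * np.pi * 2 * t)   # slow inhibitory drive
    return signal + 0.01 * rng.standard_normal(n)  # measurement noise

# Adversarial search: perturb the knobs at random and keep any change
# that pushes the classifier toward the 'unconscious' (low-score) side.
params = np.array([1.0, 0.1])
best = black_box_score(glass_brain(params))
for _ in range(500):
    trial = params + 0.05 * rng.standard_normal(2)
    score = black_box_score(glass_brain(trial))
    if score < best:
        params, best = trial, score

# The knob settings that fooled the classifier are the interpretable
# output: in the study, these become mechanistic predictions.
print("knobs after adversarial search:", params)
```

In the real framework the same logic plays out at scale: the generator's parameters are biologically meaningful (connectivity strengths, synaptic couplings), so whichever settings fool the discriminator double as testable hypotheses about unconsciousness.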
Without explicit programming, the adversarial AI retrodicted known responses to brain stimulation in patients with disorders of consciousness and generated testable predictions about mechanisms of unconsciousness [1]. Two unexpected findings emerged from analyzing which parameters the glass brain adjusted to create unconscious-looking patterns. The first involved the basal ganglia: reduced connectivity between the globus pallidus externa and the striatum correlated with unconscious EEG patterns [2]. This prediction was validated using diffusion magnetic resonance imaging data from 51 patients with disorders of consciousness [1].
The second discovery revealed increased coupling between inhibitory neurons during unconscious states [1]. These inhibitory neurons normally restrain the firing of other neurons, and their enhanced interaction appears to play a role in maintaining coma states [2]. Researchers confirmed this mechanism through RNA sequencing of resected brain tissue from six human patients who died in coma and from a rat stroke model [1].
The model identified high-frequency stimulation of the subthalamic nucleus as a potential intervention for disorders of consciousness [1]. While no one has performed deep-brain stimulation targeting the subthalamic nucleus specifically to restore consciousness, first author Daniel Toker found supporting evidence in an unexpected place [2]: patients with cervical dystonia who received deep-brain stimulation to the subthalamic nucleus were scored as more conscious by the neural network after stimulation, even though they were already conscious [2]. Whether such stimulation could wake unconscious patients remains unknown, but Monti's team is working to establish a clinical trial to test the hypothesis [2].
This work introduces a framework for causal inference and therapeutic discovery in consciousness research that extends beyond treating coma patients [1]. The method could investigate consciousness across different species by examining not just their EEG patterns but also the underlying mechanisms [2]. Monti suggests the same adversarial approach could apply to other neurological and psychiatric conditions, such as depression, by training networks to recognize disease-specific EEG features and identify potential interventions [2]. The ability to generate and test hypotheses about complex brain states positions this adversarial AI system as a tool for understanding consciousness and developing treatments where experimental models have been lacking.

Summarized by Navi
11 Sept 2024
