Predicting Health from Sleep: Why EEG Shouldn’t Stand Alone

Sleep is one of the few moments in the day when the body’s language becomes more predictable across different modalities. Breathing rhythms stabilize, muscle tone changes, the heart cycles through distinct autonomic states, and the brain produces recognizable patterns, such as slow waves, spindles, and rapid eye movement (REM) sleep dynamics.

Multimodal sleep monitoring is a powerful tool to measure these dynamics. Electroencephalography (EEG) and electrooculography (EOG) capture brain state and arousal dynamics, electrocardiography (ECG) reflects cardiac function and autonomic regulation, electromyography (EMG) tracks muscle tone and movement, and respiratory signals capture airflow and effort.

In clinical sleep studies (polysomnography, PSG), these signals are already collected together, creating a dense physiological snapshot of health.

Wearable technology is now pushing sleep monitoring beyond specialized labs. But access alone isn’t enough: these recordings are rich, complex, and variable across devices and settings. The real bottleneck is interpretation. How do we translate streams of biosignals into information that matters for people in everyday life?

A recent paper (Thapa et al., 2025) published in Nature Medicine addresses this problem by proposing a multimodal sleep foundation model (SleepFM), a general-purpose AI system trained to learn the language of sleep from large-scale PSG and then adapt to many downstream tasks, including disease risk prediction.

Below is a summary of the model, what it suggests for real life, and why EEG remains central but works best when combined with other biomarkers.

SleepFM: training, architecture, scope, and results

The authors trained SleepFM on an exceptionally large dataset: ~585,000 hours of polysomnography from ~65,000 participants across multiple cohorts. Instead of relying on extensive manual labeling, they used self-supervised contrastive learning to align information across modalities, even when channels were missing or PSG setups differed across sites. This allowed the model to learn robust structure from heterogeneous, real-world sleep data.
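
To make the idea of cross-modal alignment concrete, here is a minimal sketch of a symmetric contrastive loss between embeddings of two modalities recorded in the same window. The embedding dimensions, the InfoNCE-style formulation, and the temperature value are illustrative assumptions, not the paper's exact training recipe.

```python
# Minimal sketch of pairwise contrastive alignment between two modality
# embeddings from the same 5-second window, in the spirit of self-supervised
# multimodal pretraining. Shapes, loss form, and temperature are assumptions.
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(brain_emb, resp_emb, temperature=0.07):
    """Pull embeddings of the same window together, push other windows apart."""
    brain = F.normalize(brain_emb, dim=-1)   # (batch, dim)
    resp = F.normalize(resp_emb, dim=-1)     # (batch, dim)
    logits = brain @ resp.t() / temperature  # (batch, batch) similarity matrix
    targets = torch.arange(brain.size(0), device=brain.device)
    # Symmetric loss: brain-to-respiration and respiration-to-brain
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Example with random embeddings standing in for encoder outputs
loss = contrastive_alignment_loss(torch.randn(32, 128), torch.randn(32, 128))
```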

Architecturally, SleepFM processes PSG in 5-second windows, using convolutional layers to extract local features and a transformer to capture longer-range temporal structure across minutes of sleep. A key design choice was channel-agnostic pooling, which lets the model handle variations in channel number and ordering, one of the main barriers to deploying sleep models across clinics.
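
As a rough illustration of what channel-agnostic pooling can look like, the sketch below encodes each available channel of a 5-second window with a shared 1D CNN, averages the channel embeddings so that channel count and order do not matter, and adds a small transformer for temporal context. Layer sizes, the mean-pooling choice, and all hyperparameters are assumptions for illustration, not the published architecture.

```python
# Minimal sketch of a channel-agnostic encoder: a shared per-channel CNN,
# pooling across channels, and a transformer over window embeddings.
import torch
import torch.nn as nn

class ChannelAgnosticEncoder(nn.Module):
    def __init__(self, emb_dim=128):
        super().__init__()
        # Shared per-channel feature extractor (input: 1 x samples)
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.proj = nn.Linear(64, emb_dim)
        # Transformer over the sequence of window embeddings (temporal context)
        layer = nn.TransformerEncoderLayer(d_model=emb_dim, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):
        # x: (batch, windows, channels, samples); channel count may vary per setup
        b, w, c, s = x.shape
        feats = self.cnn(x.reshape(b * w * c, 1, s)).squeeze(-1)   # (b*w*c, 64)
        feats = self.proj(feats).reshape(b, w, c, -1)
        window_emb = feats.mean(dim=2)      # pool over channels -> order-invariant
        return self.temporal(window_emb)    # (batch, windows, emb_dim)

# Example: 4 recordings, 24 windows of 5 s at 128 Hz, 6 channels
out = ChannelAgnosticEncoder()(torch.randn(4, 24, 6, 640))
```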

After pretraining, the authors fine-tuned the model on standard benchmarks, where it performed strongly on sleep staging, sleep apnea severity classification, and age and sex prediction from PSG signals. These results confirmed that the learned representations were physiologically meaningful.

The central question, however, was broader: does a single night of sleep contain predictive signals of future disease? By pairing PSG with electronic health records and mapping diagnoses to phecodes, the authors evaluated prediction across 1,041 disease categories. The results were striking. From one night of sleep, SleepFM predicted 130 conditions with strong performance, including all-cause mortality, dementia, myocardial infarction, heart failure, chronic kidney disease, stroke, and atrial fibrillation.
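
For intuition on how such a benchmark could be scored, the sketch below pairs synthetic per-night risk scores with binary future-diagnosis labels and computes an AUROC per condition. The data, the number of conditions, and the metric choice are placeholders; this does not reproduce the paper's cohorts or evaluation pipeline.

```python
# Minimal sketch of per-condition disease-prediction scoring: one binary label
# per phecode (future diagnosis yes/no) against the model's per-night risk
# score, evaluated with AUROC. All data here is synthetic and illustrative.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_participants, n_phecodes = 1000, 5                             # illustrative sizes
risk_scores = rng.random((n_participants, n_phecodes))           # model outputs
future_dx = rng.random((n_participants, n_phecodes)) < 0.1       # EHR-derived labels

aurocs = {
    f"phecode_{k}": roc_auc_score(future_dx[:, k], risk_scores[:, k])
    for k in range(n_phecodes)
    if future_dx[:, k].any() and not future_dx[:, k].all()       # need both classes
}
print(aurocs)
```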

Interestingly, modalities contributed differently depending on the outcome: EEG and EOG signals mattered more for neurological and mental health conditions, while respiratory and ECG signals were especially informative for metabolic and cardiovascular risk. Still, the best performance consistently emerged when all modalities were combined, highlighting the value of multimodal sleep data.

Real-life implications: what these findings mean beyond the lab

The paper makes a simple but powerful claim: sleep contains early, predictive signatures of future health, and foundation models can extract them at scale. The importance of this idea lies not in diagnosing disease from a single night of data, but in revealing changes well before symptoms become visible during the day.

Many conditions develop quietly. Subtle shifts in arousal regulation, breathing stability, or autonomic balance can emerge during sleep long before a person notices anything unusual. In this framing, SleepFM functions as a risk-stratification tool, designed to flag when physiology starts to drift rather than to deliver definitive diagnoses; sleep becomes an early signal that invites attention.

This naturally leads to personalization. Risk is not absolute; it is relative to an individual baseline. Sleep-based models work best when they learn what normal looks like for a given person and then track meaningful deviations over time. 
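
As a toy example of what baseline-relative tracking might look like, the sketch below compares each night's value of a metric (a model-derived risk score or a simple sleep feature) against that person's own rolling baseline and flags nights that drift beyond a z-score threshold. The window length and threshold are arbitrary illustrative choices, not a validated method.

```python
# Minimal sketch of personalized deviation tracking against an individual
# baseline. Window length and z-score threshold are illustrative assumptions.
import numpy as np

def flag_deviations(nightly_values, baseline_nights=30, z_threshold=2.5):
    values = np.asarray(nightly_values, dtype=float)
    flags = np.zeros(len(values), dtype=bool)
    for i in range(baseline_nights, len(values)):
        baseline = values[i - baseline_nights:i]
        mu, sigma = baseline.mean(), baseline.std()
        if sigma > 0 and abs(values[i] - mu) / sigma > z_threshold:
            flags[i] = True
    return flags

# Example: 90 nights of a stable metric with a drift in the final weeks
nights = np.concatenate([np.random.normal(0.20, 0.02, 75),
                         np.random.normal(0.35, 0.02, 15)])
print(np.where(flag_deviations(nights))[0])   # indices of flagged nights
```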

Seen this way, the role of sleep technology changes. The key question is no longer “How long or well did you sleep?” but “What does your sleep physiology suggest about your health trajectory?” Sleep shifts from a nightly score to a physiological window into future risk.

Notably, while the scientific implications of this work are substantial, the model is still far from ready for real-world deployment. Much of the data comes from sleep-clinic populations, which introduces selection bias, and individual-level interpretability remains a challenge. As the authors remind us, the near-term opportunity lies in screening, stratification, and guidance, with AI supporting the decision-making of clinicians and experts rather than replacing it.

This also reminds us that interpretability remains a key challenge in neurotechnology and the use of wearable devices in healthcare. If a system flags risk, people need to understand why and what to do next. Turning complex model outputs into actionable, responsible feedback is not a secondary concern; it is what will ultimately determine whether foundation models can move from research into everyday life.

Multimodal Integration Matters: EEG and other modalities

For neurotech products like BrainAccess, this paper poses some important considerations about brain signals and their combination with other modalities in sleep and disease-prevention research. First, it reminds us that EEG should not be treated as an isolated metric, but as part of an integrated physiological story that AI can use at scale. SleepFM demonstrates a scalable way to interpret complex sleep physiology despite heterogeneity in channels, montages, and recording setups, which closely mirrors the reality of wearable data.

The paper also reinforces a broader point about EEG itself. EEG is not just a tool for sleep staging; it acts as a general lens on brain state, capturing complex biomarkers linked to cognition, neurodegeneration risk, and overall brain health. Combining brain activity with other multimodal biomarkers improves prediction and is crucial for capturing the full picture.

That result points to an unavoidable conclusion: multimodal integration matters. Even for EEG-first devices, the strongest health models emerge when brain data is combined with autonomic and respiratory information. In practice, this means ecosystems, partnerships, and data-fusion pipelines should be encouraged rather than single-sensor solutions.
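
To illustrate what a simple fusion pipeline can look like in practice, the sketch below concatenates per-night EEG, cardiac, and respiratory features before a shared classifier. The feature sets, the synthetic data, and the logistic-regression head are assumptions for illustration, not SleepFM's design or any specific product pipeline.

```python
# Minimal sketch of feature-level data fusion across modalities: per-night EEG,
# ECG, and respiratory features are concatenated and fed to one classifier.
# All data and feature choices below are synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_nights = 500
eeg_feats = rng.normal(size=(n_nights, 8))    # e.g., band powers, spindle density
ecg_feats = rng.normal(size=(n_nights, 4))    # e.g., heart-rate variability summaries
resp_feats = rng.normal(size=(n_nights, 3))   # e.g., breathing-stability indices
labels = rng.integers(0, 2, size=n_nights)    # synthetic outcome labels

fused = np.concatenate([eeg_feats, ecg_feats, resp_feats], axis=1)
clf = LogisticRegression(max_iter=1000).fit(fused, labels)
print(clf.score(fused, labels))               # in-sample accuracy on synthetic data
```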

Conclusion

This study strengthens a powerful idea: sleep is not just a passive state, but a rich physiological signal of future health, and foundation models are finally capable of extracting that signal at scale. The value is not in replacing clinical judgment, but in enabling earlier risk detection, longitudinal monitoring, and more informed intervention.

Crucially, the results show that EEG is most powerful when it is not used alone. Brain activity provides a window into arousal, regulation, and brain health, but its predictive value increases substantially when combined with cardiac and respiratory signals. The future of sleep-based health monitoring is therefore not single-modality, but integrative by design.

At the same time, deployment hinges on interpretability and trust. Foundation models can surface meaningful patterns, but for real-world use those patterns must translate into clear, actionable insights. Making complex models understandable is not optional; it is the condition that turns powerful predictions into responsible products.

Reference

Thapa, R., Kjaer, M. R., He, B., Covert, I., Moore IV, H., Hanif, U., … & Zou, J. (2026). A multimodal sleep foundation model for disease prediction. Nature Medicine, 1-11.

Martina Berto, PhD

Research Engineer & Neuroscientist @ Neurotechnology.
