
Multimodal Integration: How Your Experiment Can Benefit from Multiple Modalities

In this article, we explore multimodal integration, the practice of combining signals from different sensors in the same experiment to build a richer, more reliable picture of brain activity, cognitive states, and physiology. At BrainAccess, this is a direction we are actively building toward, with two major developments in the pipeline: a new HALO headband combining EEG, fNIRS, and EOG/eye tracking in one wearable, and the MINI ExG system bringing multiple biosignal modalities together in a single, fully synchronised device.

Written by Martina Berto.

The human brain does not work in isolation from the body. Every thought, emotion, and mental state leaves traces across multiple biological systems simultaneously. Electrical signals ripple through the cortex, the heart rate shifts, muscles tense, sweat glands activate, and the eyes move in telling patterns. Yet for decades, most neuroscience and cognitive research has focused on just one of these signals at a time.

That is changing rapidly. Multimodal integration, the practice of recording and jointly analysing signals from multiple sensor types in the same experiment, is becoming one of the most powerful approaches in modern brain research. And as the hardware to support it becomes more accessible, it is no longer the exclusive domain of large research institutions with costly setups.

Multimodal Integration and Data Fusion

At its core, multimodal integration means combining information from different measurement sources — different modalities — to get a more complete picture of what is happening in a person’s mind and body. No single sensor captures everything. EEG gives you excellent temporal resolution of brain electrical activity, but tells you little about peripheral arousal. A heart rate monitor can track autonomic stress responses, but has no window into cortical dynamics. Fuse them together, and suddenly the picture becomes far richer.

The key concept here is complementarity. Each modality contributes information the others cannot provide on their own. When you combine them thoughtfully (data fusion), the whole genuinely becomes more than the sum of its parts: you can answer research questions that would simply be unanswerable with a single sensor, and you can do so with greater confidence and nuance.

This matters enormously for the kinds of questions researchers are now asking. Understanding cognitive workload, emotional states, fatigue, attention, or mental effort requires observing the interplay of brain activity, autonomic responses, and behaviour — not just one dimension in isolation. Several studies have demonstrated that analysing multiple signals concurrently produces more reliable classification of mental states than any single modality alone. The more channels of information you have, the harder it is for noise in one to mislead your conclusions.

EEG sits at the centre of most multimodal setups in cognitive neuroscience, and for good reason. It is non-invasive, portable, has millisecond-level temporal resolution, and directly reflects neural computation. Pairing it with other physiological signals amplifies its strengths and fills in its gaps.

Sensors and Data Types

A typical multimodal setup draws from some combination of the following modalities, depending on the research question. Each one can be used in combination with EEG or other neuroscientific techniques to deepen the exploration of different cognitive states. 

  • EEG (Electroencephalography) measures the brain’s electrical activity via electrodes placed on the scalp. It is the gold standard for temporal precision in non-invasive brain measurement — ideal for tracking attention fluctuations, cognitive load, sleep stages, and event-related responses.
  • fMRI (Functional Magnetic Resonance Imaging) offers the finest spatial resolution of all brain imaging techniques, mapping activity across the whole brain. Its main limitation is poor temporal resolution and the need for a fixed, expensive scanner — making it less accessible for real-world or longitudinal research. It is often combined with EEG in controlled lab settings to exploit both modalities’ strengths.
  • fNIRS (Functional Near-Infrared Spectroscopy) measures changes in oxygenated and deoxygenated haemoglobin in the cortex using near-infrared light. Where EEG captures fast electrical dynamics, fNIRS captures slower haemodynamic changes, providing complementary spatial information about cortical activation. Combined, EEG and fNIRS offer both the when and the where of brain activity in a way neither can alone.
  • Eye Tracking provides precise spatial information about where a person is looking, how long fixations last, and how the pupil dilates — a proxy for mental effort. Integrated with EEG, gaze data adds a crucial behavioural layer that links neural signals to moment-to-moment visual attention.
  • EOG (Electrooculography) tracks eye movements through electrical potentials generated by the corneoretinal potential difference. While dedicated eye trackers offer greater spatial precision, EOG is a lightweight way to detect blinks, saccades, and gaze shifts — all of which carry meaningful cognitive and attentional information.
  • ECG (Electrocardiography) records the electrical activity of the heart. Heart rate and heart rate variability (HRV) are well-established markers of autonomic regulation, stress, and cognitive load. ECG is often the most reliable peripheral measure of sustained mental effort over time.
  • EMG (Electromyography) records muscle electrical activity. It is particularly useful for detecting facial muscle movements (micro-expressions related to emotion), motor preparation and execution, or physical fatigue in occupational settings.
  • IMU (Inertial Measurement Unit) / Accelerometer measures and reports motion, orientation, and forces acting on the body; the accelerometer is one of the core sensors inside an IMU. In multimodal setups it is useful for monitoring head and limb movement, and for flagging movement artefacts in other signals.
  • GSR / EDA (Galvanic Skin Response / Electrodermal Activity) measures changes in skin conductance caused by sweat gland activity, which is driven by the sympathetic nervous system. GSR is a sensitive index of emotional arousal, stress, and cognitive demand — a valuable complement to EEG’s more nuanced neural picture.
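To make one of these markers concrete, a common HRV summary statistic, RMSSD, can be computed directly from the intervals between successive ECG R-peaks. The sketch below is a minimal illustration; the RR interval values are made up for demonstration and are not real data:

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """RMSSD: root mean square of successive differences
    between adjacent RR intervals, in milliseconds."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    diffs = np.diff(rr)               # beat-to-beat differences
    return np.sqrt(np.mean(diffs ** 2))

# Illustrative RR intervals (ms) derived from detected R-peaks
rr = [812, 798, 825, 840, 801, 793, 818]
print(round(rmssd(rr), 1))  # → 23.7
```

Higher RMSSD generally reflects greater parasympathetic activity; sustained mental effort tends to suppress it, which is why HRV pairs so naturally with EEG markers of workload.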

At a glance:

BRAIN ACTIVITY

  • EEG: gold standard for temporal precision in non-invasive brain measurement; tracks attention fluctuations, cognitive load, sleep stages, and event-related responses.
  • fMRI: finest spatial resolution of all brain imaging techniques; limited by poor temporal resolution and a bulky, fixed, expensive scanner.
  • fNIRS: changes in oxygenated and deoxygenated haemoglobin in the cortex via near-infrared light; slower dynamics than EEG.

EYES AND HEART

  • Eye Tracking: where a person is looking, how long fixations last, and how the pupil dilates.
  • EOG: eye movements via the corneoretinal potential difference.
  • ECG: electrical activity of the heart; the basis for heart rate and HRV measures.

MOTION AND AROUSAL

  • EMG: muscle electrical activity; facial movements, motor preparation and execution, physical fatigue.
  • IMU / Accelerometer: motion, orientation, and forces; useful for monitoring head, limb, hand, or wrist movement.
  • GSR / EDA: skin conductance changes driven by sympathetic sweat gland activity.

Advantages of Going Multimodal

If you are a researcher interested in studying the human brain, you may be asking: why would I complicate my research and involve other modalities?

I get it. Brain data is already extremely complex, noisy, and messy, even before you add anything else.

However, there are many cases where collecting information from different sensors will substantially deepen your insight into a specific function or application, and it will be worth the hassle.

Here are some of the advantages of the approach:

1. Richer and more reliable measurements. When multiple independent signals point to the same conclusion, confidence in that finding increases substantially. If EEG, GSR, and ECG all indicate elevated cognitive load at the same moment, that convergence is far more convincing than any single indicator. Equally important, when one signal is noisy or ambiguous, others can compensate.

2. Answering questions that single modalities cannot. Some research questions are simply not addressable with one sensor. Studying fatigue requires tracking both neural signatures and autonomic indicators over time. Distinguishing genuine emotional engagement from neutral cognitive processing often requires the combination of neural and peripheral signals. Multimodality opens doors that would otherwise stay closed.

3. Separating shared from modality-specific information. A sophisticated benefit that is often underappreciated: joint analysis of multiple data streams can help you distinguish processes that are common across the brain and body from those that are specific to one system. This can reveal underlying mechanisms that no single modality would expose.

4. Better classification and prediction. For applications in affective computing, mental health monitoring, human-computer interaction, and BCI (brain-computer interfaces), multimodal models consistently outperform single-modality models in classifying cognitive and emotional states. More inputs mean more discriminative power.

5. Supporting exploratory research. Many of the most interesting scientific discoveries come not from testing a specific hypothesis about one signal, but from exploring the relationships between signals. Multimodal recording enables exactly this kind of discovery-driven research.

Box 1. Challenges to be aware of

Multimodal integration is powerful, but it introduces genuine technical and methodological challenges that are worth understanding before designing your study.

Synchronisation and temporal alignment. When signals come from different devices with different clocks and sampling rates — EEG at 250 Hz, GSR at 4 Hz, eye tracking at 60 Hz — ensuring all streams share a common time reference is non-trivial. Misaligned data can introduce errors that are difficult to detect and can silently corrupt your analysis. This is probably the single most important practical challenge in multimodal experimental design.
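As a sketch of what temporal alignment involves in practice, the snippet below resamples streams recorded at different rates onto one common timeline. It assumes each device's timestamps have already been mapped onto a shared reference clock (for example via a common trigger event); the function name, sampling rates, and signals are all illustrative, not part of any particular API:

```python
import numpy as np

def align_streams(streams, t_start, t_end, fs_out=100.0):
    """Resample independently clocked streams onto one common
    timeline using linear interpolation.

    streams: dict of name -> (timestamps_s, samples), with timestamps
    already expressed in a shared reference clock.
    """
    n = int(round((t_end - t_start) * fs_out))
    t_common = t_start + np.arange(n) / fs_out
    return t_common, {
        name: np.interp(t_common, ts, xs)
        for name, (ts, xs) in streams.items()
    }

# Hypothetical streams: EEG sampled at 250 Hz, GSR at 4 Hz
t_eeg = np.arange(0, 2, 1 / 250.0)
t_gsr = np.arange(0, 2, 1 / 4.0)
streams = {
    "eeg": (t_eeg, np.sin(2 * np.pi * 10 * t_eeg)),        # fast oscillation
    "gsr": (t_gsr, np.linspace(2.0, 2.5, t_gsr.size)),     # slow drift
}
t, aligned = align_streams(streams, 0.0, 2.0)
print(aligned["eeg"].shape, aligned["gsr"].shape)  # both on the 100 Hz grid
```

Real systems often handle this step in hardware or via a streaming protocol, but the principle is the same: a single reference clock, then interpolation onto a shared grid.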

Different resolutions and data formats. Each modality produces data at its own temporal and spatial resolution. EEG captures millisecond dynamics; fNIRS works on a timescale of seconds; GSR changes over several seconds to minutes. Combining signals that operate at different speeds requires careful choices about how to align or aggregate them without losing the information that makes each modality valuable.
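One common way to combine signals operating at different speeds is to aggregate the fast signal into windows matched to the slow modality's timescale. A minimal sketch, with hypothetical sampling rates and window length:

```python
import numpy as np

def window_average(x, fs_in, win_s):
    """Aggregate a fast signal into non-overlapping windows of
    win_s seconds, matching a slower modality's timescale."""
    n = int(fs_in * win_s)           # samples per window
    n_win = len(x) // n              # drop any trailing partial window
    return x[: n_win * n].reshape(n_win, n).mean(axis=1)

# Hypothetical: 250 Hz EEG-derived band power aggregated into
# 1-second windows, roughly the timescale of fNIRS haemodynamics
eeg_power = np.random.rand(250 * 10)   # 10 s of samples
coarse = window_average(eeg_power, fs_in=250, win_s=1.0)
print(coarse.shape)  # → (10,)
```

The choice of window length is itself a methodological decision: too short and the slow modality contributes mostly noise, too long and the fast modality loses the temporal detail that motivated recording it.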

Increased noise and conflicting information. More sensors means more noise, more artefacts, and more opportunity for signals to tell apparently contradictory stories. Movement artefacts, electrode-skin contact issues, and environmental interference all multiply when you have more channels to manage. Robust preprocessing pipelines become essential.

Analytical complexity. Jointly analysing multiple data streams requires more sophisticated statistical and machine learning approaches than single-modality analysis. Choosing the right model — whether to fuse signals at the raw data level, at the feature level, or at the decision level — is a methodological decision with significant consequences for what you can conclude.
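The fusion levels mentioned above can be sketched in a few lines. The feature names and dimensions below are hypothetical, and random numbers stand in for real recordings, so only the shapes and the structure of the approach are meaningful:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-trial features from two modalities
n_trials = 200
eeg_feat = rng.normal(size=(n_trials, 8))   # e.g. per-band spectral power
gsr_feat = rng.normal(size=(n_trials, 2))   # e.g. tonic level, peak count
y = rng.integers(0, 2, size=n_trials)       # e.g. low vs high workload labels

# Feature-level (early) fusion: concatenate per-trial feature vectors
# into one joint representation for a single classifier.
X_fused = np.hstack([eeg_feat, gsr_feat])

# Decision-level (late) fusion would instead train one model per
# modality and combine their outputs, e.g. by averaging predicted
# probabilities; raw-data-level fusion would merge the signals
# themselves before any feature extraction.
print(X_fused.shape)  # → (200, 10)
```

Early fusion lets the model learn cross-modal interactions, while late fusion is more robust when one modality drops out mid-recording; which trade-off matters more depends on the study.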

Calibration and registration. Different sensors may need separate calibration procedures, and the physical placement of multiple devices on the same person requires care to avoid interference between them.

None of these challenges are insurmountable, but they require planning. The good news is that hardware and software ecosystems are evolving rapidly to address many of them — particularly the synchronisation problem, which can now be handled elegantly by systems designed with multimodality in mind from the start.

THE BRAINACCESS TAKE

Multimodality Is the Direction We Are Moving In

At BrainAccess, we have been paying close attention to where the field is going, and we are building toward it deliberately. Two of our upcoming developments reflect exactly this commitment to multimodal research.

The underlying philosophy behind both developments is the same: multimodal capability should not require a systems-integration project. It should be available out of the box, ready for research.

1. HALO — EEG + fNIRS, plus a built-in eye tracker

The next generation of the HALO headband, currently under testing, will combine EEG and fNIRS sensors in a single wearable device. The result is a portable system capable of simultaneously capturing the fast neural dynamics of EEG and the haemodynamic signals of fNIRS — a pairing particularly well suited for measuring cognitive workload, mental fatigue, stress, and sustained concentration. The combined signal gives researchers both the temporal precision of EEG and the spatial and metabolic context of fNIRS, without needing to manage two separate systems.

But that is not all. HALO’s electrode placement has been strategically designed with frontal and occipital coverage that maps eye movement activity. Combined with our deep learning algorithmic implementation, this transforms the headband into a portable EEG + eye tracker — two modalities in one device. This is particularly exciting for research in attention, visual cognition, and any domain where knowing both the neural state and where someone is looking matters. Stay tuned for more on this.

2. MINI ExG — Multiple physiological modalities in one unified system

The MINI ExG system is being developed to support a broad range of biosignal modalities from a single device: EEG, ECG, EMG, EOG, GSR, and more. The key advantage is that all signals are synchronised and temporally aligned by design. The challenge of data alignment — arguably the most frustrating aspect of building multimodal setups from separate devices — is handled for you. You get multiple modalities in one unified system, without the additional complexity that typically comes with integrating hardware from different manufacturers, each with its own clock, driver, and data format.

Conclusion

The case for multimodal integration in neuroscience and cognitive research is not theoretical; it is practical and increasingly well demonstrated. No single sensor tells the whole story of what a human brain and body are doing. Combining EEG with peripheral physiological signals, eye tracking, or haemodynamic measures gives you a more complete, more reliable, and ultimately more scientifically powerful view of mental states and cognitive processes.

The challenges are real (see Box 1 above for details):

  • Synchronisation,
  • Resolution mismatches,
  • Analytical complexity,
  • Increased noise.

But they are solvable, particularly as purpose-built multimodal systems become available. The era of single-sensor experiments as the default is giving way to something richer.

Whether you are studying cognitive workload in demanding environments, emotional responses in consumer research, sleep quality, attention in clinical populations, or the neural correlates of decision-making, there is almost certainly a multimodal approach that will yield deeper insights than a unimodal one.

The question is no longer whether to go multimodal: it is how to do it efficiently and with the right tools.

At BrainAccess, we are building those tools.

Follow BrainAccess on LinkedIn to stay updated on our upcoming releases and developments!

References

Lahat, D., Adali, T., & Jutten, C. (2015). Multimodal data fusion: An overview of methods, challenges, and prospects. Proceedings of the IEEE, 103(9), 1449–1477.

Bhatlawande, S., Shilaskar, S., Pramanik, S., & Sole, S. (2024). Multimodal emotion recognition based on the fusion of vision, EEG, ECG, and EMG signals. International Journal of Electrical and Computer Engineering Systems, 15(1), 41–58.

Razavi, M., Yamauchi, T., Janfaza, V., Leontyev, A., Longmire-Monford, S., & Orr, J. (2020). Multimodal-multisensory experiments: Design and implementation. bioRxiv.


Martina Berto, PhD

Research Engineer & Neuroscientist @ Neurotechnology.
