Can Your EEG Read Your Eyes? - BrainAccess

Can Your EEG Read Your Eyes?

Eye tracking usually requires a dedicated device. A growing body of research suggests that EEG alone — even from a small number of electrodes — can tell us a surprising amount about where you’re looking.

Every time your eyes move, a small but detectable electrical signal ripples through the scalp. For decades, researchers treated this as noise — an inconvenient artifact to be filtered away before the “real” EEG analysis could begin. What if that instinct has it backwards?

Recent work asks whether those ocular signals, and the neural activity surrounding them, can be turned into a useful output: a gaze estimate derived from EEG alone. The implications range from more capable brain-computer interfaces to richer neuroscience experiments — and the early results are more promising than many expected.

Why Would EEG Know Anything About Gaze?

The connection is actually quite direct. Eye movements generate strong, stereotyped electrical potentials that spread across the scalp. Horizontal saccades produce a characteristic voltage difference across frontal electrodes; vertical movements produce a distinct pattern along a front-to-back axis. These are the signals researchers have historically suppressed as contamination.

But suppression assumes the signals carry no useful information. That assumption turns out to be wrong. The amplitude of these saccade-related potentials scales systematically with gaze direction and distance — which means, with the right model, you can work backwards from the EEG to estimate where someone was looking.

Beyond the raw ocular artifact, there are also genuine neural correlates of gaze. Alpha-band oscillations — the 8–13 Hz rhythms strongly associated with spatial attention — shift in a direction-selective way during eye movements. This is part of why machine learning models trained on EEG can predict gaze direction even from data that has been aggressively artifact-cleaned: some of the information lives in the brain, not just in the eyes.
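As a concrete illustration, alpha bandpower for a single channel can be estimated from a one-sided periodogram. The sketch below is NumPy-only and uses a synthetic signal in place of real EEG; the function name and parameters are illustrative, not from any particular toolbox:

```python
import numpy as np

def alpha_bandpower(signal, fs, band=(8.0, 13.0)):
    """Bandpower in the alpha range from a one-sided periodogram.

    signal : 1-D array of samples; fs : sampling rate in Hz.
    """
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * n)  # periodogram estimate
    mask = (freqs >= band[0]) & (freqs <= band[1])
    df = freqs[1] - freqs[0]
    return float(psd[mask].sum() * df)  # integrate PSD over the band

# Synthetic check: a 10 Hz oscillation carries far more alpha power
# than white noise of comparable variance.
fs = 500
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
alpha_sig = np.sin(2 * np.pi * 10.0 * t)
noise_sig = rng.standard_normal(t.size)
print(alpha_bandpower(alpha_sig, fs) > alpha_bandpower(noise_sig, fs))  # True
```

In practice a Welch estimate (averaged over windowed segments) would give a less noisy PSD, but the band-integration idea is the same.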

The EEGEyeNet Benchmark: Establishing What’s Possible

Before claims about EEG-based eye tracking can be taken seriously, you need a rigorous benchmark — one large enough to train data-hungry models and structured carefully enough to produce meaningful comparisons. That is exactly what the EEGEyeNet dataset provides.

Released by researchers at ETH Zurich and the University of Zurich, EEGEyeNet pairs high-density 128-channel EEG with precise infrared eye-tracking across 356 healthy adults — totalling over 47 hours of synchronized recording. Three experimental paradigms were used: a prosaccade/antisaccade task, a large grid fixation task (participants fixating on dots at 25 screen positions), and a visual symbol search task. Together they cover a rich variety of eye movement patterns.

At a glance:

  • 356 participants, ages 18–80
  • 47+ hours of synchronized EEG and eye-tracking
  • 128 EEG channels, sampled at 500 Hz
  • 3 experimental paradigms

The benchmark defines three tasks of increasing difficulty, each probing a different aspect of gaze estimation from EEG.

Task 1: Left or Right?

The simplest task: classify whether a saccade went left or right. Even classical machine learning models (random forests, gradient boosting) reach accuracy above 96% here. Deep learning pushes it above 98%. The random baseline sits at 52%. For a binary direction judgment, EEG turns out to be highly informative.
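To see why the binary task is so tractable, consider a deliberately minimal decision rule: the sign of the mean left-minus-right frontal voltage during the saccade. This is a toy sketch with synthetic epochs and an assumed polarity convention — not one of the benchmark's actual models:

```python
import numpy as np

def classify_saccade_direction(epoch, left_ch=0, right_ch=1):
    """Toy left/right saccade classifier from two frontal channels.

    A saccade drives the cornea (the eye's positive pole) toward one
    frontal electrode, so the left-minus-right difference carries
    direction. epoch : (channels, samples) array for one saccade.
    The sign convention here is an illustrative assumption.
    """
    heog = epoch[left_ch] - epoch[right_ch]   # horizontal EOG proxy
    return "left" if heog.mean() > 0 else "right"

# Synthetic epochs: a sustained deflection of opposite polarity
# on the two frontal channels, plus noise.
rng = np.random.default_rng(1)
n = 250
left_epoch = np.vstack([np.full(n, 20.0), np.full(n, -20.0)]) \
    + rng.standard_normal((2, n))
right_epoch = -left_epoch
print(classify_saccade_direction(left_epoch))   # "left"
print(classify_saccade_direction(right_epoch))  # "right"
```

Real classifiers add features (amplitudes, latencies, band powers) and a learned decision boundary, but the frontal difference signal is doing much of the work.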

Task 2: Angle and Amplitude

Harder: estimate the continuous angle (in radians) and pixel amplitude of a saccade from the large grid paradigm. Classical models improve modestly over a naïve baseline; deep learning makes a much larger jump. The best-performing CNN achieves an angle RMSE of 0.33 radians (~19°) and amplitude RMSE of 64 pixels (~3.7 cm at the screen distance used). There is clearly signal here, but also clearly room to improve.
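Evaluating the angle estimate needs some care: angles wrap around, so a naive RMSE would score a prediction of 3.1 rad against a target of −3.1 rad as an error of more than 6 rad when the true angular error is about 0.08 rad. A wrapped RMSE along these lines handles this (a sketch; not necessarily the benchmark's exact evaluation code):

```python
import numpy as np

def angular_rmse(pred, true):
    """RMSE between predicted and true angles in radians,
    with errors wrapped into (-pi, pi]."""
    diff = np.asarray(pred, dtype=float) - np.asarray(true, dtype=float)
    err = np.angle(np.exp(1j * diff))  # wrap via the complex plane
    return float(np.sqrt(np.mean(err ** 2)))

print(angular_rmse([3.1], [-3.1]))  # ~0.083, not ~6.2
```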

Task 3: Absolute Screen Position

The hardest task: predict the XY coordinates of a fixation in pixels. The naïve baseline (predicting screen center) yields 246.6 pixels RMSE. The best CNN brings this down to 140.5 pixels — a meaningful reduction, though far from what a dedicated eye tracker achieves. Interestingly, classical models barely beat the baseline here, while deep learning makes substantial gains, suggesting that exploiting raw temporal EEG structure is important for this task.
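For reference, the position metric can be sketched as a root-mean-square Euclidean distance in pixels. The exact convention, and the 800×600 display used below, are illustrative assumptions rather than the benchmark's precise setup:

```python
import numpy as np

def position_rmse(pred_xy, true_xy):
    """Root-mean-square Euclidean distance (pixels) between predicted
    and true fixation positions; both arrays have shape (n, 2)."""
    d2 = np.sum((np.asarray(pred_xy) - np.asarray(true_xy)) ** 2, axis=1)
    return float(np.sqrt(np.mean(d2)))

# Center-of-screen baseline on a hypothetical 800x600 display:
# always predict (400, 300) regardless of the true fixation.
true_xy = np.array([[100.0, 100.0], [700.0, 500.0]])
center = np.tile([400.0, 300.0], (len(true_xy), 1))
print(position_rmse(center, true_xy))  # distance from center to the targets
```

The naive baseline in the benchmark is exactly this kind of constant center prediction, which is why beating it by a wide margin is meaningful.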

Model            Left-Right Accuracy   Angle RMSE (rad)   Amplitude RMSE (px)   Abs. Position RMSE (px)
Naïve baseline   52.3%                 1.90               149.4                 246.6
Random Forest    96.5%                 1.09               119.7                 233.5
XGBoost          97.9%                 1.11               122.6                 236.0
CNN              98.3%                 0.33               64.0                  140.5
EEGNet           98.6%                 0.70               92.1                  163.4
Xception         98.8%                 0.47               64.4                  157.5

Results from Kastrati et al. (2021), EEGEyeNet benchmark. The CNN achieves the best values on all three regression metrics; Xception the best left-right accuracy. Angle in radians, amplitude and absolute position in pixels.

One particularly noteworthy finding: when the dataset is maximally preprocessed — meaning eye artifact components are explicitly removed by ICA — models still perform significantly above chance on left-right classification (86%+ for deep learning). This is direct evidence that genuine neural correlates of gaze direction survive artifact cleaning and are captured in the EEG. The ocular signal helps, but it is not the only signal available.

Even after removing eye artifacts with ICA, deep learning models still predict gaze direction well above chance — the brain itself is part of the signal.

What Happens with Far Fewer Electrodes?

The EEGEyeNet results are impressive, but they rely on 128 channels of research-grade, gel-based EEG — not something you can slip into a lightweight consumer headset. The more pressing question for practical applications is: how much of this holds up when you shrink the channel count dramatically?

This is exactly the question explored in a project we informally called “EyeAccess”, which adapted the EEGEyeNet framework to a 4-channel wearable configuration — specifically mirroring the electrode layout of the BrainAccess HALO device (two frontal channels and two occipital channels). The channel reduction is radical: 97% fewer electrodes than the benchmark.

Why frontal + occipital?

Most of the gaze-relevant information in EEG lives at the extremes of the head. Horizontal eye movements create a voltage gradient across frontal electrodes (effectively a horizontal EOG). Vertical movements require comparing frontal to occipital signals. Four channels strategically placed can approximate both — though with less spatial resolution for the triangulation that amplitude estimation requires.

The approach re-trained the same CNN backbone — four residual blocks of 1D convolutions along the time axis — on the EEGEyeNet data, using only the four channels corresponding to the HALO layout. The results are instructive:

Model        Channels   Sampling Rate   Angle RMSE (rad)   Amplitude RMSE (px)
CNN — full   128        500 Hz          0.31               71.65
CNN — HALO   4          500 Hz          0.40               95.05
CNN — HALO   4          250 Hz          0.44               95.53

EyeAccess project. CNN trained on EEGEyeNet Large Grid paradigm. HALO configuration uses 4 frontal/occipital channels mirroring the BrainAccess HALO device. Angle in radians, Amplitude in pixels.
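The re-extraction step itself is simple in principle: select the wearable-like channel subset and downsample. A minimal sketch follows; the channel indices are illustrative (not the actual HALO-equivalent positions), and the every-other-sample decimation is naive — proper decimation would low-pass filter first to avoid aliasing:

```python
import numpy as np

# Hypothetical re-extraction of a 4-channel wearable-like montage
# from a high-density 128-channel recording.
rng = np.random.default_rng(2)
eeg_128 = rng.standard_normal((128, 500))   # 1 s of data at 500 Hz
halo_like_idx = [0, 1, 126, 127]            # 2 frontal + 2 occipital (assumed)
eeg_4 = eeg_128[halo_like_idx]              # (4, 500): channel subset
eeg_4_250 = eeg_4[:, ::2]                   # (4, 250): crude 500 -> 250 Hz
print(eeg_4.shape, eeg_4_250.shape)
```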

The performance drop is real but not catastrophic. For direction estimation — the most practically useful output — the model remains meaningfully predictive. For absolute position estimation, the degradation is more severe, which makes physical sense: with fewer electrodes there is simply less spatial information available for triangulation.

An important practical caveat: these results were obtained by training on the 128-channel dataset and evaluating on re-extracted 4-channel data. The harder challenge is testing on data actually recorded with a dry-electrode wearable, where signal quality differs from the gel-based reference in ways that matter: higher impedance, more motion artifact, less stable skin contact. That mismatch between training and deployment is one of the most significant open challenges in this space.

The Broader Pipeline: From Signal to Gaze

Whether you are working with 128 channels or 4, the general modelling approach follows a recognizable pattern:

  1. Raw EEG: continuous multichannel recording
  2. Preprocessing: filter, re-reference, epoch around events
  3. Feature extraction / model: alpha bandpower, or raw signal into a CNN
  4. Calibration: fine-tune on subject-specific trials
  5. Gaze estimate: direction, angle, or XY position
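The epoching step, for instance, reduces to cutting fixed windows around event markers. A minimal NumPy sketch, with window bounds that are illustrative defaults rather than values from any specific study:

```python
import numpy as np

def epoch_around_events(eeg, events, fs, tmin=-0.1, tmax=0.4):
    """Cut fixed-length epochs around event sample indices.

    eeg : (channels, samples) array; events : iterable of sample
    indices; returns (n_events, channels, epoch_samples).
    """
    start_off = int(round(tmin * fs))
    n_samp = int(round((tmax - tmin) * fs))
    epochs = []
    for ev in events:
        s = ev + start_off
        if 0 <= s and s + n_samp <= eeg.shape[1]:  # skip events at the edges
            epochs.append(eeg[:, s:s + n_samp])
    return np.stack(epochs)

rng = np.random.default_rng(3)
eeg = rng.standard_normal((4, 5000))   # 10 s of 4-channel data at 500 Hz
epochs = epoch_around_events(eeg, events=[1000, 2500, 4000], fs=500)
print(epochs.shape)  # (3, 4, 250)
```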

The calibration step deserves emphasis. EEG signals vary considerably between individuals — electrode placement relative to underlying anatomy, skull thickness, and skin conductance all differ. A model trained across subjects will generally perform better once it has seen even a modest amount of data from the target user. In the EyeAccess experiments, fine-tuning just the output layer of the CNN on 20–50 subject-specific trials improved amplitude estimates noticeably for some participants, though the gains were inconsistent across individuals.

This points to a practical deployment pattern: a short calibration routine (perhaps 2–3 minutes of structured fixations) before each session, used to adapt a pre-trained model to the current user. This mirrors what dedicated eye trackers already do — and suggests that the “extra” burden of EEG-based gaze calibration may not be as large as it first appears.
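One way to implement that calibration is to freeze the shared backbone and re-fit only a linear readout on the subject's calibration trials, e.g. by least squares. The sketch below uses synthetic features standing in for real network activations; all names and sizes are illustrative:

```python
import numpy as np

def calibrate_readout(features, targets):
    """Fit a per-subject linear readout by least squares.

    features : (n_trials, n_features) penultimate-layer activations
    (or handcrafted features) from calibration trials.
    targets : (n_trials, 2) known gaze positions during calibration.
    Returns a weight matrix with a bias row appended.
    """
    X = np.hstack([features, np.ones((len(features), 1))])  # bias column
    W, *_ = np.linalg.lstsq(X, targets, rcond=None)
    return W

def predict(features, W):
    X = np.hstack([features, np.ones((len(features), 1))])
    return X @ W

# Synthetic subject: gaze is a noisy linear function of the features.
rng = np.random.default_rng(4)
feats = rng.standard_normal((40, 8))        # ~40 calibration trials
W_true = rng.standard_normal((8, 2))
gaze = feats @ W_true + 5.0 + 0.01 * rng.standard_normal((40, 2))
W = calibrate_readout(feats, gaze)
err = np.abs(predict(feats, W) - gaze).mean()
print(err < 0.1)  # True: the readout recovers the subject-specific mapping
```

Re-fitting only the readout keeps the calibration data requirement small, which is what makes a 2–3 minute routine plausible.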

Where This Is Headed

It is worth being honest about the current state: EEG-based gaze estimation is not yet a drop-in replacement for a dedicated eye tracker. Absolute position estimation remains challenging, especially at low channel counts. The domain gap between research-grade wet-electrode data and real-world dry-electrode recordings has not been fully characterized. And most published results are for constrained laboratory paradigms — structured grids of fixation targets — rather than the free-viewing, naturalistic scenarios that matter most for applications.

But the trajectory is encouraging. Larger datasets like EEGEyeNet are providing the fuel for data-hungry deep learning models to demonstrate that EEG carries real gaze information. Work like EyeAccess is mapping out how much of that translates to minimal hardware. And the calibration-based adaptation approach provides a practical path towards personalized, deployable models.

The applications that benefit most from this direction are those where adding a dedicated eye tracker is impractical or impossible:

  • Long-term ambulatory monitoring
  • Clinical settings where hardware complexity is a barrier
  • Wearable BCI systems that already have EEG and need to augment their sensing capabilities without adding sensors.

In all of these contexts, a moderate-accuracy gaze estimate from EEG alone is far more valuable than no gaze estimate at all.

BrainAccess & EEG Eye Tracking

At BrainAccess, we have been exploring these questions in the context of our own wearable EEG hardware. The EyeAccess work described here used a 4-channel configuration mirroring our HALO device as a testbed for understanding how far minimal-electrode systems can go. Collecting matched data with the HALO in real-world conditions — and characterizing the dry-electrode domain gap directly — is an active direction. We expect to share more findings as that work matures.

The broader message is one of reframing: what looked like a problem (eye artifact in EEG) turns out to be a resource. The information was always there. The question was whether we had the models and the data to extract it. Increasingly, the answer is yes.

References

  1. Kastrati, A., Płomecka, M. B., Pascual, D., Wolf, L., Gillioz, V., Wattenhofer, R., & Langer, N. (2021). EEGEyeNet: a simultaneous electroencephalography and eye-tracking dataset and benchmark for eye movement prediction. arXiv preprint arXiv:2111.05100.
  2. Sun, R., Cheng, A. S., Chan, C., Hsiao, J., Privitera, A. J., Gao, J., … & Tang, A. C. (2023). Tracking gaze position from EEG: Exploring the possibility of an EEG-based virtual eye-tracker. Brain and Behavior, 13(10), e3205.

Martina Berto, PhD

Research Engineer & Neuroscientist @ Neurotechnology.
