How Well Can We Decode Vowels from Auditory EEG? A Rigorous Cross-Subject Benchmark with Honest Assessment

ArXiv q-bio

arXiv:2605.00865v1 Announce Type: cross Abstract: EEG-based phoneme decoding is promising for brain-computer interfaces, but many prior studies rely on within-subject evaluation, small cohorts, or weak leakage control. We present a reproducible cross-subject benchmark for five-class vowel decoding (a, e, i, o, u) from auditory EEG using OpenNeuro ds006104 (16 subjects, 61 channels, 256 Hz). Under strict leave-one-subject-out evaluation with training-only normalization and explicit anti-leakage c...
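The evaluation protocol the abstract names, leave-one-subject-out cross-validation with normalization statistics fit on the training subjects only, can be sketched as below. This is a minimal illustration, not the paper's pipeline: the classifier, feature shape, and synthetic data are all stand-in assumptions; only the subject/channel counts echo the dataset description.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-in for per-trial EEG features: 16 subjects,
# 50 trials each, 61 channel-level features, 5 vowel classes.
n_subjects, n_trials, n_feats, n_classes = 16, 50, 61, 5
X = rng.normal(size=(n_subjects, n_trials, n_feats))
y = rng.integers(0, n_classes, size=(n_subjects, n_trials))

accs = []
for held_out in range(n_subjects):
    train_subjects = [s for s in range(n_subjects) if s != held_out]
    X_tr = X[train_subjects].reshape(-1, n_feats)
    y_tr = y[train_subjects].reshape(-1)
    X_te, y_te = X[held_out], y[held_out]

    # Anti-leakage: fit normalization on training subjects only,
    # then apply the frozen statistics to the held-out subject.
    scaler = StandardScaler().fit(X_tr)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(scaler.transform(X_tr), y_tr)
    accs.append(clf.score(scaler.transform(X_te), y_te))

print(f"LOSO mean accuracy: {np.mean(accs):.3f} (chance = {1/n_classes:.2f})")
```

On random features like these, accuracy should hover near the 0.20 chance level; a benchmark's reported gain over chance is only meaningful under exactly this kind of split, where no test-subject statistics leak into preprocessing.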

