MoDAl: Self-Supervised Neural Modality Discovery via Decorrelation for Speech Neuroprosthesis

ArXiv q-bio

arXiv:2605.00025v1 Announce Type: new Abstract: Speech neuroprosthesis systems decode intended speech from neural activity in the absence of audible output, offering a path to restoring communication for individuals with speech-impairing conditions. Current approaches decode predominantly from motor cortical areas, discarding others -- such as area 44, part of Broca's area -- that may encode complementary linguistic information. We introduce MoDAl (Modality Decorrelation and Alignment), a framew...
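The abstract's core idea, learning modality representations that are aligned across views yet decorrelated across feature dimensions, can be illustrated with a Barlow Twins-style objective. This is a generic sketch of such a decorrelation-and-alignment loss, not the authors' actual MoDAl formulation (the paper's method is truncated above); the function name, the lambda weight, and the use of NumPy are all illustrative assumptions.

```python
import numpy as np

def decorrelation_alignment_loss(za, zb, lam=0.005):
    """Illustrative Barlow Twins-style loss (not the MoDAl objective):
    cross-correlate two batches of embeddings, pushing the diagonal
    toward 1 (alignment) and the off-diagonal toward 0 (decorrelation)."""
    # Standardize each feature dimension across the batch.
    za = (za - za.mean(0)) / (za.std(0) + 1e-8)
    zb = (zb - zb.mean(0)) / (zb.std(0) + 1e-8)
    n, d = za.shape
    c = za.T @ zb / n  # (d, d) cross-correlation matrix
    on_diag = ((np.diagonal(c) - 1.0) ** 2).sum()        # alignment term
    off_diag = (c ** 2).sum() - (np.diagonal(c) ** 2).sum()  # decorrelation term
    return on_diag + lam * off_diag
```

As a sanity check, feeding the same embeddings as both views yields a near-zero loss (perfect alignment), while independent random embeddings yield a large one, since every diagonal correlation collapses toward zero.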


Related Articles

Does Employment Slow Cognitive Decline? Evidence from Labor Market Shocks
littlexsparkee · Hacker News · 4d ago
Underwater robot tracks sperm whale conversations in real time
thedebuglife · Hacker News · 5d ago
Linking spatial biology and clinical histology via Haiku
Yan Cui, Jacob S. Leiby, Wenhui Lei, Dokyoon Kim, Yanxiang Deng, Aaron T. Mayer, Zhenqin Wu, Alexandro E. Trevino, Zhi Huang · ArXiv cs.LG · 3d ago
CGM-JEPA: Learning Consistent Continuous Glucose Monitor Representations via Predictive Self-Supervised Pretraining
Hada Melino Muhammad, Zechen Li, Flora Salim, Ahmed A. Metwally · ArXiv cs.LG · 3d ago
CellxPert: Inference-Time MCMC Steering of a Multi-Omics Single-Cell Foundation Model for In-Silico Perturbation
Andac Demir, Erik W. Anderson, Jeremy L. Jenkins, Srayanta Mukherjee · ArXiv q-bio · 3d ago