Feature-Enhanced Consensus Graph Model for EEG-based Imagined Word Recognition

Abstract:

For people with speech impairments, imagined speech offers a way to communicate by mentally formulating words without engaging the vocal organs. The high temporal resolution, portability, safety, and cost-effectiveness of electroencephalography (EEG) have made it a prominent non-invasive method for studying imagined speech. Recently, a multi-view graph fusion model named KGMV was proposed, which learns a consensus graph from multiple EEG feature views in a Reproducing Kernel Hilbert Space (RKHS). It reported an accuracy of 81.73% on 5-class imagined word classification using a limited set of temporal, statistical, and frequency-domain views, leaving much of the potentially informative feature space unexplored. In this work, we extend KGMV to a Feature-Enhanced KGMV (FE-KGMV) framework that (1) augments the feature pool with additional temporal, statistical, and frequency-domain features and (2) applies a subject-wise view-ranking step, based on a fused discriminability score (MV-Fisher, ANOVA F, CV-LDA), to retain only the most informative views before graph learning. On the same dataset, FE-KGMV reaches a mean accuracy of 86.27%, a 4.54 percentage-point improvement over the baseline KGMV.
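The view-ranking step described above could be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`rank_views`, `fisher_score`), the 3-fold CV-LDA setting, and the min-max normalization used to fuse the three criteria are all assumptions, and the plain per-view Fisher score below stands in for the paper's MV-Fisher formulation, whose exact definition is not given in the abstract.

```python
import numpy as np
from sklearn.feature_selection import f_classif
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score


def fisher_score(X, y):
    """Ratio of between-class to within-class variance, averaged over features.

    A simple stand-in for the MV-Fisher criterion (assumption)."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return float(np.mean(between / (within + 1e-12)))


def rank_views(views, y, top_k=2):
    """Rank feature views by a fused discriminability score.

    views: dict mapping view name -> (n_samples, n_features) matrix.
    Returns the names of the top_k views."""
    raw = {}
    for name, X in views.items():
        fisher = fisher_score(X, y)
        F, _ = f_classif(X, y)              # per-feature ANOVA F statistics
        anova = float(np.nanmean(F))
        # CV-LDA: mean cross-validated LDA accuracy on this view alone
        lda = float(cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=3).mean())
        raw[name] = (fisher, anova, lda)
    # Min-max normalize each criterion across views, then average the three
    # normalized scores into one fused discriminability score per view.
    arr = np.array(list(raw.values()))
    rng = arr.max(axis=0) - arr.min(axis=0)
    norm = (arr - arr.min(axis=0)) / np.where(rng > 0, rng, 1.0)
    fused = dict(zip(raw, norm.mean(axis=1)))
    return sorted(fused, key=fused.get, reverse=True)[:top_k]
```

On synthetic data with one discriminative view and one noise view, the discriminative view is ranked first, which is the behavior the subject-wise selection step relies on.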


Year: 2026
In session: Posters
Pages: 187–194