Generalizing EEG-Based Classification for User-Independent Brain-Computer Interface
DOI: https://doi.org/10.47611/jsrhs.v13i3.6986

Keywords: Electroencephalography, Machine Learning, Classification

Abstract
The emergence of brain-computer interface (BCI) technology has changed the way people interact with computers. Applied to individuals with impairments, it can help them regain mobility; accordingly, exoskeleton robots guided by electroencephalography (EEG) have been studied to assist these individuals. However, previous methods have struggled to classify user intentions accurately, often showing excessive sensitivity to input noise. There is therefore a need for methods that are robust to noise and yield highly accurate results. In this research, I propose a noise-robust system for classifying user intentions from EEG signals. The proposed system takes EEG signals as input and outputs commands that guide an exoskeleton robot in assisting individuals with impairments. These commands cover a range of fundamental movements, including running, forward and backward walking, maintaining a stationary position, and more. Comprehensive experiments show that the proposed method outperforms prior approaches. I expect this method to significantly aid individuals in need, particularly those with impairments or undergoing rehabilitation.
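The abstract describes a pipeline that maps windows of EEG signals to discrete exoskeleton commands. The sketch below illustrates that input-to-output shape with a minimal, assumed design: log band-power features in the mu/beta range and a nearest-centroid classifier. The command names, the feature choice, and the classifier are all illustrative stand-ins; the paper's actual architecture is not specified here.

```python
import numpy as np

# Hypothetical command set guiding the exoskeleton (mirroring the
# movements named in the abstract); the paper's real label set may differ.
COMMANDS = ["stationary", "walk_forward", "walk_backward", "run"]

def band_power_features(window, fs=250):
    """Mean log-power per channel in the 8-30 Hz (mu/beta) band.

    `window` has shape (channels, samples). This is a common
    hand-crafted EEG feature, used here only as a placeholder.
    """
    freqs = np.fft.rfftfreq(window.shape[-1], d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(window, axis=-1)) ** 2
    band = (freqs >= 8) & (freqs <= 30)
    return np.log(spectrum[..., band].mean(axis=-1) + 1e-12)

class CentroidDecoder:
    """Nearest-centroid stand-in for the paper's classifier."""

    def fit(self, X, y):
        # X: list/array of EEG windows, y: integer command labels.
        feats = np.stack([band_power_features(x) for x in X])
        self.centroids = np.stack(
            [feats[y == k].mean(axis=0) for k in range(len(COMMANDS))]
        )
        return self

    def predict(self, X):
        feats = np.stack([band_power_features(x) for x in X])
        dists = np.linalg.norm(
            feats[:, None, :] - self.centroids[None, :, :], axis=-1
        )
        return dists.argmin(axis=1)  # index into COMMANDS
```

In use, each incoming EEG window would be featurized and decoded to one of the command indices, which the exoskeleton controller would then act on; a noise-robust system as described in the abstract would replace both the features and the classifier with learned, denoising components.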
Copyright (c) 2024 Ian Baek; Sojung Min

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Copyright holder(s) granted JSR a perpetual, non-exclusive license to distribute & display this article.


