Professor X: Manipulating EEG BCI with Invisible and Robust Backdoor Attack
Abstract
Professor X is an invisible backdoor attack that manipulates the outputs of EEG BCIs via clean-label poisoning, combining per-class trigger selection, reinforcement-learning-based selection of injection electrodes and frequencies, and spectral-amplitude interpolation, across multiple EEG tasks.
While electroencephalogram (EEG)-based brain-computer interfaces (BCIs) have been widely used for medical diagnosis, health care, and device control, the safety of EEG BCIs has long been neglected. In this paper, we propose Professor X, an invisible and robust "mind-controller" that can arbitrarily manipulate the outputs of an EEG BCI through a backdoor attack, to alert the EEG community to this potential hazard. Existing EEG attacks mainly focus on single-target-class attacks, and they either require access to the training stage of the target BCI or fail to maintain high stealthiness. To address these limitations, Professor X exploits a three-stage clean-label poisoning attack: 1) selecting one trigger for each class; 2) learning, via reinforcement learning, the optimal EEG electrodes and frequencies at which to inject each trigger; 3) generating poisoned samples for each class by injecting the corresponding trigger's frequencies into the poisoned data, linearly interpolating the spectral amplitudes of the two signals according to the previously learned strategy. Experiments on datasets from three common EEG tasks demonstrate the effectiveness and robustness of Professor X, which also easily bypasses existing backdoor defenses.
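To make the third stage concrete, the following is a minimal sketch of frequency-domain trigger injection by linear interpolation of spectral amplitudes, as the abstract describes it. The function name `inject_trigger`, the electrode/frequency indices, and the interpolation weight `alpha` are illustrative assumptions; in the paper, the electrodes, frequencies, and injection strategy are learned with reinforcement learning rather than fixed by hand.

```python
# Hedged sketch: blend a trigger's spectral amplitude into a clean EEG sample
# at selected electrodes and frequency bins, keeping the original phase.
import numpy as np

def inject_trigger(eeg, trigger, electrodes, freq_bins, alpha=0.2):
    """Return a poisoned copy of `eeg` with the trigger's amplitude blended in.

    eeg, trigger : arrays of shape (n_channels, n_samples)
    electrodes   : channel indices to poison (assumed; learned in the paper)
    freq_bins    : rFFT bin indices to poison (assumed; learned in the paper)
    alpha        : interpolation weight for the trigger amplitude (assumed value)
    """
    poisoned = eeg.copy()
    n_samples = eeg.shape[1]
    for ch in electrodes:
        spec = np.fft.rfft(eeg[ch])            # complex spectrum of the clean channel
        trig_spec = np.fft.rfft(trigger[ch])   # complex spectrum of the trigger channel
        amp, phase = np.abs(spec), np.angle(spec)
        # Linearly interpolate the amplitude at the selected frequency bins,
        # keeping the clean phase so the perturbation stays inconspicuous.
        amp[freq_bins] = (1 - alpha) * amp[freq_bins] + alpha * np.abs(trig_spec[freq_bins])
        poisoned[ch] = np.fft.irfft(amp * np.exp(1j * phase), n=n_samples)
    return poisoned

# Example usage with random data standing in for real EEG recordings.
rng = np.random.default_rng(0)
clean = rng.standard_normal((32, 500))      # 32 channels, 2 s at 250 Hz
trigger = rng.standard_normal((32, 500))    # trigger sample for one target class
poisoned = inject_trigger(clean, trigger, electrodes=[3, 7],
                          freq_bins=np.arange(10, 20))
```

Keeping the original phase and only interpolating amplitudes at a small set of learned bins is what keeps the perturbation small in the time domain; the example above fixes `alpha` and the indices only for demonstration.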