Domain adaptation (DA) is considered an effective solution for unsupervised cross-session and cross-subject emotion recognition based on electroencephalogram (EEG). However, cross-domain shifts caused by individual and session differences severely limit the generalization ability of existing models. Moreover, existing models often overlook the discrepancies among task-specific subdomains. In this study, we propose an auxiliary classifier adversarial network (ACAN) to tackle these two key issues by aligning global domains and subdomains while maximizing subdomain discrepancies to enhance model effectiveness. Specifically, to address cross-domain discrepancies, we deploy a domain alignment module in the feature space to reduce inter-domain and inter-subdomain discrepancies. Meanwhile, to maximize subdomain discrepancies, an auxiliary adversarial classifier is introduced to generate distinguishable subdomain features by promoting adversarial learning between the feature extractor and the auxiliary classifier. Systematic experiments on three benchmark databases (SEED, SEED-IV, and DEAP) validate the model's effectiveness and superiority in cross-session and cross-subject settings. The proposed method outperforms other state-of-the-art DA methods, effectively addressing domain shifts in multiple emotion recognition tasks and promoting the development of brain-computer interfaces.
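The abstract does not specify the discrepancy measure used by the domain alignment module; a common choice in EEG domain adaptation is maximum mean discrepancy (MMD) between source and target features. The following is a minimal illustrative sketch of that idea, assuming an RBF-kernel MMD and synthetic feature matrices (the variable names and the `gamma` bandwidth are hypothetical, not taken from the paper):

```python
import numpy as np

def mmd_rbf(X, Y, gamma=0.05):
    """Squared maximum mean discrepancy between feature sets X and Y
    under an RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def kernel(A, B):
        # pairwise squared Euclidean distances via broadcasting
        sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * sq_dists)
    return kernel(X, X).mean() + kernel(Y, Y).mean() - 2 * kernel(X, Y).mean()

rng = np.random.default_rng(0)
src      = rng.normal(0.0, 1.0, size=(200, 8))  # "source session" features
tgt_near = rng.normal(0.0, 1.0, size=(200, 8))  # same distribution as source
tgt_far  = rng.normal(3.0, 1.0, size=(200, 8))  # shifted distribution

# A matched target distribution yields a much smaller discrepancy
print(mmd_rbf(src, tgt_near) < mmd_rbf(src, tgt_far))  # → True
```

In an alignment module, such a discrepancy term would be added to the training loss so that the feature extractor is driven to minimize it across domains (and, per the abstract, across subdomains as well).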
Keywords: Adversarial learning; Domain adaptation; Electroencephalogram; Emotion recognition.
© 2025. International Federation for Medical and Biological Engineering.