
Annu Int Conf IEEE Eng Med Biol Soc. 2019 Jul;2019:3071-3074. doi: 10.1109/EMBC.2019.8856563.

Multimodal Emotion Recognition from Eye Image, Eye Movement and EEG Using Deep Neural Networks.

Jiang-Jian Guo, Rong Zhou, Li-Ming Zhao, Bao-Liang Lu

PMID: 31946536 DOI: 10.1109/EMBC.2019.8856563

Abstract

Given the complexity of recording electroencephalography (EEG), researchers are seeking new feature sources for emotion recognition. To investigate the potential of eye-tracking glasses for multimodal emotion recognition, we collected eye images and used them, together with eye movements and EEG, to classify five emotions. We compare four combinations of the three data types and two fusion methods: feature-level fusion and the Bimodal Deep AutoEncoder (BDAE). With the three-modality fusion features generated by BDAE, the best mean accuracy of 79.63% is achieved. Analysis of the confusion matrices shows that the three modalities provide complementary information for recognizing the five emotions. Moreover, the experimental results indicate that classifiers trained on fused eye-image and eye-movement features achieve a comparable classification accuracy of 71.99%.
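The abstract compares feature-level fusion with BDAE-based fusion. Feature-level fusion can be illustrated as simple per-sample concatenation of modality feature vectors before classification. The sketch below is a minimal, hypothetical illustration; the feature dimensions and variable names are assumptions, not taken from the paper:

```python
import numpy as np

# Hypothetical per-sample feature matrices for the three modalities.
# Dimensions are illustrative only, not from the paper.
rng = np.random.default_rng(0)
n_samples = 4
eeg_feats = rng.standard_normal((n_samples, 310))        # e.g. EEG features
eye_move_feats = rng.standard_normal((n_samples, 33))    # eye-movement features
eye_image_feats = rng.standard_normal((n_samples, 64))   # eye-image features

def feature_level_fusion(*modalities):
    """Concatenate each sample's feature vectors across modalities."""
    return np.concatenate(modalities, axis=1)

fused = feature_level_fusion(eeg_feats, eye_move_feats, eye_image_feats)
print(fused.shape)  # (4, 407): 310 + 33 + 64 fused feature dimensions
```

The fused matrix would then be fed to an ordinary classifier; BDAE fusion instead learns a shared representation across modalities with an autoencoder, which is not shown here.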
