Affect recognition from face and body: Early fusion vs. late fusion

Publication Type:
Conference Proceeding
Citation:
Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics, 2005, vol. 4, pp. 3437-3443
Issue Date:
2005-11-30
This paper presents an approach to automatic visual emotion recognition from two modalities: face and body. First, individual classifiers are trained on each modality separately. We then fuse facial expression and affective body gesture information at the feature level, where the data from both modalities are combined before classification, and at the decision level, where the outputs of the monomodal systems are integrated using suitable criteria. We evaluate both fusion approaches against monomodal emotion recognition based on the facial expression modality alone. In the experiments performed, classification using the two modalities achieved higher recognition accuracy than classification using the facial modality alone; moreover, fusion at the feature level yielded better recognition than fusion at the decision level. © 2005 IEEE.
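
A minimal sketch of the two fusion strategies the abstract contrasts, not the paper's implementation: the SVM classifier, the feature dimensions, the synthetic stand-in data, and the sum rule used as the decision-level criterion are all illustrative assumptions. Feature-level (early) fusion concatenates the per-modality feature vectors before training a single classifier; decision-level (late) fusion trains one classifier per modality and combines their outputs.

    # Sketch under assumed settings; see caveats in the paragraph above.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n, d_face, d_body, n_classes = 200, 20, 12, 6

    # Synthetic stand-ins for face and body feature vectors with shared labels.
    X_face = rng.normal(size=(n, d_face))
    X_body = rng.normal(size=(n, d_body))
    y = rng.integers(0, n_classes, size=n)

    # Feature-level (early) fusion: concatenate modalities, train one classifier.
    early_clf = SVC(probability=True).fit(np.hstack([X_face, X_body]), y)

    # Decision-level (late) fusion: one classifier per modality, then combine
    # their class posteriors with a fixed criterion (sum rule, as an example).
    face_clf = SVC(probability=True).fit(X_face, y)
    body_clf = SVC(probability=True).fit(X_body, y)

    def late_fusion_predict(xf, xb):
        p = face_clf.predict_proba(xf) + body_clf.predict_proba(xb)
        return p.argmax(axis=1)

    print(early_clf.predict(np.hstack([X_face[:5], X_body[:5]])))
    print(late_fusion_predict(X_face[:5], X_body[:5]))

The structural trade-off the paper evaluates is visible here: early fusion lets one classifier model cross-modal feature correlations, while late fusion keeps the monomodal systems independent and only merges their decisions.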