Conference paper
Speech emotion recognition using affective saliency
Interspeech 2016, Vol.2016
Annual Conference of the International Speech Communication Association: INTERSPEECH (Hyatt Regency, San Francisco, 08/09/2016–12/09/2016)
2016
Abstract
We investigate an affective saliency approach for speech emotion recognition of spoken dialogue utterances that estimates the amount of emotional information over time. The proposed saliency approach uses a regression model that combines features extracted from the acoustic signal and the posteriors of a segment-level classifier to obtain frame- or segment-level ratings. The affective saliency model is trained using a minimum classification error (MCE) criterion that learns the weights by optimizing an objective loss function related to the classification error rate of the emotion recognition system. Affective saliency scores are then used to weight the contribution of frame-level posteriors and/or features to the speech emotion classification decision. The algorithm is evaluated for the task of anger detection on four call-center datasets in two languages, Greek and English, with good results.
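The final weighting step described above can be illustrated with a minimal sketch: utterance-level posteriors obtained as a saliency-weighted average of frame-level classifier posteriors. This is only an illustrative assumption of that step, not the authors' implementation; the saliency regression model and its MCE training are not shown, and the function name and example numbers below are hypothetical.

```python
import numpy as np

def saliency_weighted_decision(frame_posteriors, saliency_scores):
    """Combine frame-level class posteriors into an utterance-level decision,
    weighting each frame by its affective saliency score (illustrative sketch).

    frame_posteriors : (T, C) array of per-frame class posteriors
    saliency_scores  : (T,) array of non-negative saliency ratings
    """
    weights = np.asarray(saliency_scores, dtype=float)
    weights = weights / (weights.sum() + 1e-12)   # normalize weights to sum to 1
    utterance_posterior = weights @ np.asarray(frame_posteriors)
    return int(utterance_posterior.argmax()), utterance_posterior

# Hypothetical example: 4 frames, binary anger detection (class 0 = neutral, 1 = anger)
posteriors = np.array([[0.7, 0.3],
                       [0.4, 0.6],
                       [0.2, 0.8],
                       [0.6, 0.4]])
saliency = np.array([0.1, 0.9, 0.8, 0.2])   # emotionally salient frames receive more weight
label, posterior = saliency_weighted_decision(posteriors, saliency)
print(label, posterior)
```

In this sketch, frames the saliency model rates as carrying more emotional information dominate the utterance-level decision, which is the intended effect of the weighting described in the abstract.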
Details
- Title
- Speech emotion recognition using affective saliency
- Authors/Creators
- A. Chorianopoulou (Author/Creator) - Technical University of Crete
- P. Koutsakis (Author/Creator) - Murdoch University
- A. Potamianos (Author/Creator) - National Technical University of Athens
- Publication Details
- Interspeech 2016, Vol.2016
- Conference
- Annual Conference of the International Speech Communication Association: INTERSPEECH (Hyatt Regency, San Francisco, 08/09/2016–12/09/2016)
- Identifiers
- 991005541802307891
- Murdoch Affiliation
- School of Engineering and Information Technology
- Language
- English
- Resource Type
- Conference paper