Conference paper
Learning Spatiotemporal Features Using 3DCNN and Convolutional LSTM for Gesture Recognition
2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
IEEE International Conference on Computer Vision Workshops (ICCVW) 2017 (Venice, Italy, 22/10/2017–29/10/2017)
2017
Abstract
Gesture recognition aims to understand ongoing human gestures. In this paper, we present a deep architecture that learns spatiotemporal features for gesture recognition. The architecture first learns 2D spatiotemporal feature maps using 3D convolutional neural networks (3DCNN) and bidirectional convolutional long short-term memory networks (ConvLSTM). The learnt 2D feature maps encode global temporal information and local spatial information simultaneously. A 2DCNN is then applied to learn higher-level spatiotemporal features from these 2D feature maps for the final gesture recognition. Spatiotemporal correlation information is preserved throughout the feature learning process, which makes the architecture an effective spatiotemporal feature learner. Experiments on the ChaLearn LAP large-scale isolated gesture dataset (IsoGD) and the Sheffield Kinect Gesture (SKIG) dataset demonstrate the superiority of the proposed architecture.
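The pipeline described in the abstract can be sketched in PyTorch. This is a minimal illustration, not the authors' implementation: all layer widths, kernel sizes, and the single-layer unidirectional ConvLSTM (the paper uses a bidirectional one) are illustrative assumptions. The point it shows is how the ConvLSTM recurrence preserves the 2D spatial layout of the 3DCNN features while accumulating temporal context, so that a plain 2DCNN can consume the resulting 2D feature maps.

```python
import torch
import torch.nn as nn


class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: all four gates from one 2D convolution
    over the concatenated input and hidden state."""

    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c


class GestureNet(nn.Module):
    """Illustrative 3DCNN -> ConvLSTM (over time) -> 2DCNN -> classifier."""

    def __init__(self, n_classes=10):
        super().__init__()
        # 3DCNN: local spatiotemporal features from the raw clip.
        self.c3d = nn.Sequential(
            nn.Conv3d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)))          # pool space, keep time
        self.clstm = ConvLSTMCell(16, 32)
        # 2DCNN: higher-level features from the 2D feature maps.
        self.c2d = nn.Sequential(
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(64, n_classes)

    def forward(self, clips):                 # clips: (B, 3, T, H, W)
        feats = self.c3d(clips)               # (B, 16, T, H/2, W/2)
        b, _, t, h_, w_ = feats.shape
        h = feats.new_zeros(b, 32, h_, w_)
        c = feats.new_zeros(b, 32, h_, w_)
        for step in range(t):                 # recurrence keeps spatial layout
            h, c = self.clstm(feats[:, :, step], (h, c))
        return self.fc(self.c2d(h).flatten(1))


net = GestureNet(n_classes=10)
logits = net(torch.randn(2, 3, 8, 32, 32))    # 2 clips, 8 frames of 32x32 RGB
print(logits.shape)                           # torch.Size([2, 10])
```

The final hidden state `h` is the "2D spatiotemporal feature map" of the abstract: temporal information has been folded in by the recurrence, yet the tensor is still an image-like map the 2DCNN can process.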
Details
- Title
- Learning Spatiotemporal Features Using 3DCNN and Convolutional LSTM for Gesture Recognition
- Authors/Creators
- L. Zhang, G. Zhu, P. Shen, J. Song, S.A.A. Shah, M. Bennamoun
- Publication Details
- 2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
- Conference
- IEEE International Conference on Computer Vision Workshops (ICCVW) 2017 (Venice, Italy, 22/10/2017–29/10/2017)
- Identifiers
- 991005541687807891
- Murdoch Affiliation
- Murdoch University
- Language
- English
- Resource Type
- Conference paper