Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/133224
Type: Conference paper
Title: Word-level Deep Sign Language Recognition from Video: A New Large-scale Dataset and Methods Comparison
Author: Li, D.
Rodriguez Opazo, C.
Yu, X.
Li, H.
Citation: Proceedings of the 2020 IEEE Winter Conference on Applications of Computer Vision, 2020, pp.1448-1458
Publisher: IEEE
Issue Date: 2020
Series/Report no.: IEEE Winter Conference on Applications of Computer Vision
ISSN: 2472-6737
Conference Name: IEEE Winter Conference on Applications of Computer Vision (WACV) (1 Mar 2020 - 5 Mar 2020 : Snowmass Village, CO, USA)
Statement of Responsibility: Dongxu Li, Cristian Rodriguez Opazo, Xin Yu, Hongdong Li
Abstract: Vision-based sign language recognition aims at helping deaf people communicate with others. However, most existing sign language datasets are limited to a small number of words, and because of this limited vocabulary size, models learned from them cannot be applied in practice. In this paper, we introduce a new large-scale Word-Level American Sign Language (WLASL) video dataset containing more than 2,000 words performed by over 100 signers. The dataset will be made publicly available to the research community; to our knowledge, it is by far the largest public ASL dataset facilitating word-level sign recognition research. Based on this new large-scale dataset, we experiment with several deep learning methods for word-level sign recognition and evaluate their performance at large scale. Specifically, we implement and compare two different models: (i) a holistic visual appearance-based approach and (ii) a 2D human pose-based approach. Both models are valuable baselines that will benefit the community for method benchmarking. Moreover, we propose a novel pose-based temporal graph convolutional network (Pose-TGCN) that models spatial and temporal dependencies in human pose trajectories simultaneously, further boosting the performance of the pose-based method. Our results show that pose-based and appearance-based models achieve comparable performance of up to 62.63% top-10 accuracy on 2,000 words/glosses, demonstrating both the validity and the challenges of our dataset. Our dataset and baseline deep models are available at https://dxli94.github.io/WLASL/.
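Note: The abstract describes Pose-TGCN only at a high level. The sketch below is a minimal, illustrative spatio-temporal graph convolution layer over 2D pose keypoints written in PyTorch; the layer names, shape conventions, and identity-adjacency placeholder are assumptions for illustration only and do not reproduce the authors' released implementation (see https://dxli94.github.io/WLASL/ for the official code).

import torch
import torch.nn as nn

class TemporalGraphConv(nn.Module):
    # Illustrative spatio-temporal graph convolution over pose keypoints.
    # Input shape: (batch, channels, frames, joints). Not the authors' Pose-TGCN.
    def __init__(self, in_channels, out_channels, num_joints, temporal_kernel=9):
        super().__init__()
        # 1x1 convolution mixes feature channels at every (frame, joint) location.
        self.spatial = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        # Temporal convolution aggregates information across frames for each joint.
        pad = (temporal_kernel - 1) // 2
        self.temporal = nn.Conv2d(out_channels, out_channels,
                                  kernel_size=(temporal_kernel, 1),
                                  padding=(pad, 0))
        # Placeholder adjacency (identity); a real model would use the skeleton graph.
        self.register_buffer("A", torch.eye(num_joints))
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.spatial(x)                           # (N, C_out, T, V)
        x = torch.einsum("nctv,vw->nctw", x, self.A)  # propagate features along the joint graph
        x = self.temporal(x)                          # mix information over time
        return self.relu(x)

# Toy usage: 2 clips, 2D keypoints (x, y) for 55 joints over 64 frames.
layer = TemporalGraphConv(in_channels=2, out_channels=64, num_joints=55)
print(layer(torch.randn(2, 2, 64, 55)).shape)  # torch.Size([2, 64, 64, 55])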
Keywords: Assistive technology; Gesture recognition; Feature extraction
Rights: © 2020 IEEE
DOI: 10.1109/wacv45572.2020.9093512
Grant ID: http://purl.org/au-research/grants/arc/CE140100016
http://purl.org/au-research/grants/arc/DP190102261
http://purl.org/au-research/grants/arc/190100080
Published version: https://ieeexplore.ieee.org/
Appears in Collections: Australian Institute for Machine Learning publications

Files in This Item:
There are no files associated with this item.
