|Title:||Real-time human gaze estimation|
|Citation:||Proceedings of the International Conference on Digital Image Computing: Techniques and Applications (DICTA 2019), 2019 / pp.1-7|
|Conference Name:||International Conference on Digital Image Computing: Techniques and Applications (DICTA) (02 Dec 2019 - 04 Dec 2019 : Perth, Australia)|
|Authors:||Thomas Rowntree, Carmine Pontecorvo, Ian Reid|
|Abstract:||This paper describes a system for estimating the coarse gaze, or 1D head pose, of multiple people in a video stream from a moving camera in an indoor scene. The system runs at 30 Hz and can detect human heads with an F-score of 87.2% and predict their gaze with an average error of 20.9°, even when they are facing directly away from the camera. The system uses two Convolutional Neural Networks (CNNs), one for head detection and one for gaze estimation, and applies common tracking and filtering techniques to smooth predictions over time. This paper is application-focused and so describes the individual components of the system as well as the techniques used for collecting data and training the CNNs.|
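The abstract mentions smoothing the per-frame gaze predictions over time but does not name the specific filter used. A minimal sketch of one such common technique, exponential smoothing of a 1D head-pose angle with 360° wrap-around handling, is shown below; the function name, the smoothing rule, and the `alpha` value are illustrative assumptions, not the paper's method.

```python
def smooth_angle(prev_deg, new_deg, alpha=0.3):
    """Exponentially smooth a 1D head-pose angle in degrees.

    Illustrative only: the paper says it uses "common tracking and
    filtering techniques" without specifying them, so this rule and
    the default alpha are assumptions.
    """
    # Shortest signed difference between the two angles, in (-180, 180].
    # This keeps the filter stable when the pose crosses the 0°/360° seam.
    diff = (new_deg - prev_deg + 180.0) % 360.0 - 180.0
    # Move a fraction alpha of the way toward the new measurement.
    return (prev_deg + alpha * diff) % 360.0
```

For example, smoothing from 350° halfway toward a new reading of 10° gives 0°, rather than the 180° a naive average of the raw values would produce.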
|Rights:||© 2019 IEEE|
|Appears in Collections:||Computer Science publications|