Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/111268
Full metadata record
dc.contributor.author: Chen, Y.
dc.contributor.author: Shen, C.
dc.contributor.author: Wei, X.
dc.contributor.author: Liu, L.
dc.contributor.author: Yang, J.
dc.date.issued: 2017
dc.identifier.citation: Proceedings / IEEE International Conference on Computer Vision. IEEE International Conference on Computer Vision, 2017, vol.2017, pp.1221-1230
dc.identifier.isbn: 9781538610336
dc.identifier.issn: 1550-5499
dc.identifier.uri: http://hdl.handle.net/2440/111268
dc.description.abstract: For human pose estimation in monocular images, joint occlusions and overlapping upon human bodies often result in deviated pose predictions. Under these circumstances, biologically implausible pose predictions may be produced. In contrast, human vision is able to predict poses by exploiting geometric constraints of joint inter-connectivity. To address the problem by incorporating priors about the structure of human bodies, we propose a novel structure-aware convolutional network to implicitly take such priors into account during training of the deep network. Explicit learning of such constraints is typically challenging. Instead, we design discriminators to distinguish the real poses from the fake ones (such as biologically implausible ones). If the pose generator (G) generates results that the discriminator fails to distinguish from real ones, the network successfully learns the priors. To better capture the structure dependency of human body joints, the generator G is designed in a stacked multi-task manner to predict poses as well as occlusion heatmaps. Then, the pose and occlusion heatmaps are sent to the discriminators to predict the likelihood of the pose being real. Training of the network follows the strategy of conditional Generative Adversarial Networks (GANs). The effectiveness of the proposed network is evaluated on two widely used human pose estimation benchmark datasets. Our approach significantly outperforms the state-of-the-art methods and almost always generates plausible human pose predictions.
dc.description.statementofresponsibility: Yu Chen, Chunhua Shen, Xiu-Shen Wei, Lingqiao Liu, Jian Yang
dc.language.iso: en
dc.publisher: IEEE
dc.relation.ispartofseries: IEEE International Conference on Computer Vision
dc.rights: © 2017 IEEE
dc.source.uri: http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=8234942
dc.title: Adversarial PoseNet: a structure-aware convolutional network for human pose estimation
dc.type: Conference paper
dc.contributor.conference: IEEE International Conference on Computer Vision (ICCV 2017) (22 Oct 2017 - 29 Oct 2017 : Venice, Italy)
dc.identifier.doi: 10.1109/ICCV.2017.137
dc.publisher.place: Piscataway, NJ
pubs.publication-status: Published
dc.identifier.orcid: Shen, C. [0000-0002-8648-8718]
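
The abstract above describes a conditional-GAN training scheme: a stacked multi-task generator G predicts pose heatmaps together with occlusion heatmaps, and discriminators score whether a predicted pose looks real (biologically plausible) or fake. The following is a minimal sketch of such a training loop, assuming PyTorch; the network bodies, joint count, heatmap size, loss weights, and the train_step helper are all illustrative assumptions and do not reproduce the authors' published Adversarial PoseNet implementation.

# Minimal sketch of the conditional-GAN training scheme described in the abstract.
# All module layouts, sizes, and loss weights below are illustrative assumptions,
# not the authors' published Adversarial PoseNet implementation.
import torch
import torch.nn as nn

NUM_JOINTS = 16     # assumed MPII-style joint count
HEATMAP_SIZE = 64   # assumed output heatmap resolution


class PoseGenerator(nn.Module):
    """Stand-in for the stacked multi-task generator G: predicts per-joint
    pose heatmaps and occlusion heatmaps from an input image."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=4, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.pose_head = nn.Conv2d(128, NUM_JOINTS, 1)       # pose heatmaps
        self.occlusion_head = nn.Conv2d(128, NUM_JOINTS, 1)  # occlusion heatmaps

    def forward(self, images):
        feats = self.backbone(images)
        return self.pose_head(feats), self.occlusion_head(feats)


class PoseDiscriminator(nn.Module):
    """Stand-in discriminator: scores pose + occlusion heatmaps, conditioned on
    the input image, as real (plausible) or fake (implausible)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * NUM_JOINTS + 3, 64, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, images, pose_hm, occ_hm):
        # Condition on the image by resizing it to heatmap resolution and concatenating.
        small = nn.functional.interpolate(
            images, size=pose_hm.shape[-2:], mode="bilinear", align_corners=False)
        return self.net(torch.cat([small, pose_hm, occ_hm], dim=1))


def train_step(G, D, opt_g, opt_d, images, gt_pose, gt_occ):
    """One conditional-GAN step: update D on real vs. generated heatmaps,
    then update G with a regression loss plus an adversarial term."""
    bce = nn.BCEWithLogitsLoss()
    mse = nn.MSELoss()

    pred_pose, pred_occ = G(images)

    # Discriminator update: ground-truth heatmaps are "real", generated ones are "fake".
    d_real = D(images, gt_pose, gt_occ)
    d_fake = D(images, pred_pose.detach(), pred_occ.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator update: heatmap regression plus a term rewarding poses D accepts as real.
    d_fake = D(images, pred_pose, pred_occ)
    loss_g = (mse(pred_pose, gt_pose) + mse(pred_occ, gt_occ)
              + 0.1 * bce(d_fake, torch.ones_like(d_fake)))  # 0.1 is an assumed weight
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_g.item(), loss_d.item()


if __name__ == "__main__":
    # Smoke test with random tensors; input/output shapes are assumptions.
    G, D = PoseGenerator(), PoseDiscriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    images = torch.randn(2, 3, 256, 256)
    gt_pose = torch.rand(2, NUM_JOINTS, HEATMAP_SIZE, HEATMAP_SIZE)
    gt_occ = torch.rand(2, NUM_JOINTS, HEATMAP_SIZE, HEATMAP_SIZE)
    print(train_step(G, D, opt_g, opt_d, images, gt_pose, gt_occ))

The key point mirrored from the abstract is the two-player objective: the discriminator learns to separate ground-truth heatmaps from generated ones, while the generator combines a heatmap regression loss with an adversarial term that rewards poses the discriminator accepts as real, which is how the structural priors are learned implicitly.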
Appears in Collections:Aurora harvest 8
Computer Science publications

Files in This Item:
There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.