Please use this identifier to cite or link to this item: http://hdl.handle.net/2440/126991
Full metadata record
DC Field: Value [Language]
dc.contributor.author: Chen, Y. [en]
dc.contributor.author: Shen, C. [en]
dc.contributor.author: Chen, H. [en]
dc.contributor.author: Wei, X. [en]
dc.contributor.author: Liu, L. [en]
dc.contributor.author: Yang, J. [en]
dc.date.issued: 2020 [en]
dc.identifier.citation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020; 42(7):1654-1669 [en]
dc.identifier.issn: 0162-8828 [en]
dc.identifier.issn: 1939-3539 [en]
dc.identifier.uri: http://hdl.handle.net/2440/126991
dc.description.abstract: Landmark/pose estimation in single monocular images has attracted much attention in computer vision due to its important applications. It remains a challenging task when input images come with severe occlusions caused by, e.g., adverse camera views. Under such circumstances, biologically implausible pose predictions may be produced. In contrast, human vision is able to predict poses by exploiting geometric constraints of landmark point inter-connectivity. To address this problem, we incorporate priors about the structure of pose components and propose a novel structure-aware fully convolutional network that implicitly takes such priors into account during training of the deep network. Explicit learning of such constraints is typically challenging. Instead, inspired by how humans identify implausible poses, we design discriminators to distinguish real poses from fake (e.g., biologically implausible) ones. If the pose generator G produces results that the discriminator fails to distinguish from real ones, the network has successfully learned the priors. Training of the network follows the strategy of conditional Generative Adversarial Networks (GANs). The effectiveness of the proposed network is evaluated on three pose-related tasks: 2D human pose estimation, 2D facial landmark estimation, and 3D human pose estimation. The proposed approach significantly outperforms several state-of-the-art methods and almost always generates plausible pose predictions, demonstrating the usefulness of implicitly learning structure with GANs. [en]
dc.description.statementofresponsibility: Yu Chen, Chunhua Shen, Hao Chen, Xiu-Shen Wei, Lingqiao Liu, Jian Yang [en]
dc.language.iso: en [en]
dc.publisher: IEEE [en]
dc.rights: © 2019 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. [en]
dc.subject: Pose estimation; landmark localization; structure-aware network; adversarial training; multi-task learning; deep convolutional networks [en]
dc.title: Adversarial learning of structure-aware fully convolutional networks for landmark localization [en]
dc.type: Journal article [en]
dc.identifier.rmid: 1000022065 [en]
dc.identifier.doi: 10.1109/TPAMI.2019.2901875 [en]
dc.relation.grant: http://purl.org/au-research/grants/arc/CE140100016 [en]
dc.identifier.pubid: 447423
pubs.library.collection: Computer Science publications [en]
pubs.library.team: DS10 [en]
pubs.verification-status: Verified [en]
pubs.publication-status: Published [en]
dc.identifier.orcid: Shen, C. [0000-0002-8648-8718] [en]
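
The abstract describes a pose generator G trained with a conditional-GAN-style objective: a task loss on the predicted landmarks plus an adversarial term from a discriminator that separates real poses from implausible generated ones. The following is a minimal sketch of that training signal, not the authors' code; the keypoint shapes, the `adv_weight` trade-off parameter, and the toy discriminator scores are illustrative assumptions.

```python
import numpy as np

def generator_loss(pred_pose, gt_pose, d_score_fake, adv_weight=0.1):
    """Combined loss for the pose generator G.

    pred_pose, gt_pose : (K, 2) arrays of K 2D keypoints (toy shapes).
    d_score_fake       : discriminator's probability that G's pose is real.
    adv_weight         : hypothetical trade-off weight between the terms.
    """
    mse = np.mean((pred_pose - gt_pose) ** 2)   # regression (task) loss
    adv = -np.log(d_score_fake + 1e-8)          # reward fooling the discriminator
    return mse + adv_weight * adv

def discriminator_loss(d_score_real, d_score_fake):
    """Standard GAN loss for D: push real poses toward 1, generated toward 0."""
    return -np.log(d_score_real + 1e-8) - np.log(1.0 - d_score_fake + 1e-8)

# Toy usage with 3 keypoints and hand-picked discriminator scores:
gt = np.array([[0.2, 0.3], [0.5, 0.5], [0.8, 0.7]])
pred = gt + 0.05                                 # slightly-off prediction
g_loss = generator_loss(pred, gt, d_score_fake=0.4)
d_loss = discriminator_loss(d_score_real=0.9, d_score_fake=0.4)
```

A prediction that both matches the ground truth and looks plausible to D (a `d_score_fake` near 1) drives `g_loss` toward zero, which is the sense in which G "successfully learns the priors" once D can no longer separate its outputs from real poses.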
Appears in Collections: Computer Science publications

Files in This Item:
File            Description       Size     Format
hdl_126991.pdf  Accepted version  8.24 MB  Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.