Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/120134
Full metadata record
dc.contributor.author: Milan, A.
dc.contributor.author: Pham, T.
dc.contributor.author: Vijay, K.
dc.contributor.author: Morrison, D.
dc.contributor.author: Tow, A.W.
dc.contributor.author: Liu, L.
dc.contributor.author: Erskine, J.
dc.contributor.author: Grinover, R.
dc.contributor.author: Gurman, A.
dc.contributor.author: Hunn, T.
dc.contributor.author: Kelly-Boxall, N.
dc.contributor.author: Lee, D.
dc.contributor.author: McTaggart, M.
dc.contributor.author: Rallos, G.
dc.contributor.author: Razjigaev, A.
dc.contributor.author: Rowntree, T.
dc.contributor.author: Shen, T.
dc.contributor.author: Smith, R.
dc.contributor.author: Wade-McCue, S.
dc.contributor.author: Zhuang, Z.
dc.contributor.author: et al.
dc.date.issued: 2018
dc.identifier.citation: IEEE International Conference on Robotics and Automation (ICRA), 2018, pp.1908-1915
dc.identifier.isbn: 9781538630815
dc.identifier.issn: 1050-4729
dc.identifier.issn: 2577-087X
dc.identifier.uri: http://hdl.handle.net/2440/120134
dc.description.abstract: We present our approach for robotic perception in cluttered scenes that led to winning the recent Amazon Robotics Challenge (ARC) 2017. Besides small objects with shiny and transparent surfaces, the biggest challenge of the 2017 competition was the introduction of unseen categories. In contrast to traditional approaches, which require large collections of annotated data and many hours of training, the task here was to obtain a robust perception pipeline with only a few minutes of data acquisition and training time. To that end, we present two strategies that we explored. One is a deep metric learning approach that works in three separate steps: semantic-agnostic boundary detection, patch classification, and pixel-wise voting. The other is a fully-supervised semantic segmentation approach with efficient dataset collection. We conduct an extensive analysis of the two methods on our ARC 2017 dataset. Interestingly, only a few examples of each class are sufficient to fine-tune even very deep convolutional neural networks for this specific task.
dc.description.statementofresponsibility: A. Milan, T. Pham, K. Vijay, D. Morrison, A.W. Tow, L. Liu, J. Erskine, R. Grinover, A. Gurman, T. Hunn, N. Kelly-Boxall, D. Lee, M. McTaggart, G. Rallos, A. Razjigaev, T. Rowntree, T. Shen, R. Smith, S. Wade-McCue, Z. Zhuang, C. Lehnert, G. Lin, I. Reid, P. Corke, and J. Leitner
dc.language.iso: en
dc.publisher: IEEE
dc.relation.ispartofseries: IEEE International Conference on Robotics and Automation (ICRA)
dc.rights: © 2018 IEEE
dc.source.uri: http://dx.doi.org/10.1109/icra.2018.8461082
dc.title: Semantic segmentation from limited training data
dc.type: Conference paper
dc.contributor.conference: IEEE International Conference on Robotics and Automation (ICRA) (21 May 2018 - 25 May 2018 : Brisbane, Australia)
dc.identifier.doi: 10.1109/ICRA.2018.8461082
dc.publisher.place: online
dc.relation.grant: http://purl.org/au-research/grants/arc/CE140100016
pubs.publication-status: Published
dc.identifier.orcid: Reid, I. [0000-0001-7790-6423]
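
The abstract's closing claim, that a few examples per class suffice to fine-tune even very deep networks, corresponds to a standard transfer-learning recipe. The sketch below is illustrative only and is not the authors' code: it assumes a PyTorch/torchvision setup, a COCO-pretrained FCN-ResNet50 as the segmentation backbone, and a hypothetical class count; the paper's actual ARC 2017 pipeline may differ in architecture and training schedule.

```python
# Minimal sketch (assumptions, not the paper's implementation): fine-tune a
# pretrained segmentation CNN on only a few labelled examples per class.
import torch
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50, FCN_ResNet50_Weights

NUM_CLASSES = 41  # hypothetical: 40 item classes + background

# Start from pretrained weights and swap in a new classification head.
model = fcn_resnet50(weights=FCN_ResNet50_Weights.DEFAULT)
model.classifier[4] = nn.Conv2d(512, NUM_CLASSES, kernel_size=1)

# Freeze the backbone so the few training images only adapt the new head,
# keeping training time in the minutes range the competition allowed.
for p in model.backbone.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def finetune_step(images: torch.Tensor, masks: torch.Tensor) -> float:
    """One gradient step. images: (B,3,H,W) floats; masks: (B,H,W) class ids."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)["out"]  # (B, NUM_CLASSES, H, W)
    loss = criterion(logits, masks)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Freezing the backbone and training only the replacement head is one common way to meet a minutes-scale adaptation budget; whether the authors froze layers, and which ones, is not stated in the record above.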
Appears in Collections: Aurora harvest 4
Computer Science publications

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.