Please use this identifier to cite or link to this item:
https://hdl.handle.net/2440/120134
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Milan, A. | - |
dc.contributor.author | Pham, T. | - |
dc.contributor.author | Vijay, K. | - |
dc.contributor.author | Morrison, D. | - |
dc.contributor.author | Tow, A.W. | - |
dc.contributor.author | Liu, L. | - |
dc.contributor.author | Erskine, J. | - |
dc.contributor.author | Grinover, R. | - |
dc.contributor.author | Gurman, A. | - |
dc.contributor.author | Hunn, T. | - |
dc.contributor.author | Kelly-Boxall, N. | - |
dc.contributor.author | Lee, D. | - |
dc.contributor.author | McTaggart, M. | - |
dc.contributor.author | Rallos, G. | - |
dc.contributor.author | Razjigaev, A. | - |
dc.contributor.author | Rowntree, T. | - |
dc.contributor.author | Shen, T. | - |
dc.contributor.author | Smith, R. | - |
dc.contributor.author | Wade-McCue, S. | - |
dc.contributor.author | Zhuang, Z. | - |
dc.contributor.author | et al. | - |
dc.date.issued | 2018 | - |
dc.identifier.citation | IEEE International Conference on Robotics and Automation (ICRA), 2018, pp.1908-1915 | - |
dc.identifier.isbn | 9781538630815 | - |
dc.identifier.issn | 1050-4729 | - |
dc.identifier.issn | 2577-087X | - |
dc.identifier.uri | http://hdl.handle.net/2440/120134 | - |
dc.description.abstract | We present our approach for robotic perception in cluttered scenes that led to winning the Amazon Robotics Challenge (ARC) 2017. Alongside small objects with shiny and transparent surfaces, the biggest challenge of the 2017 competition was the introduction of unseen object categories. In contrast to traditional approaches, which require large collections of annotated data and many hours of training, the task here was to obtain a robust perception pipeline with only a few minutes of data acquisition and training time. To that end, we present two strategies that we explored. One is a deep metric learning approach that works in three separate steps: semantic-agnostic boundary detection, patch classification, and pixel-wise voting. The other is a fully supervised semantic segmentation approach with efficient dataset collection. We conduct an extensive analysis of the two methods on our ARC 2017 dataset. Interestingly, only a few examples of each class are sufficient to fine-tune even very deep convolutional neural networks for this specific task. | - |
dc.description.statementofresponsibility | A. Milan, T. Pham, K. Vijay, D. Morrison, A.W. Tow, L. Liu, J. Erskine, R. Grinover, A. Gurman, T. Hunn, N. Kelly-Boxall, D. Lee, M. McTaggart, G. Rallos, A. Razjigaev, T. Rowntree, T. Shen, R. Smith, S. Wade-McCue, Z. Zhuang, C. Lehnert, G. Lin, I. Reid, P. Corke, and J. Leitner | - |
dc.language.iso | en | - |
dc.publisher | IEEE | - |
dc.relation.ispartofseries | IEEE International Conference on Robotics and Automation ICRA | - |
dc.rights | © 2018 IEEE | - |
dc.source.uri | http://dx.doi.org/10.1109/icra.2018.8461082 | - |
dc.title | Semantic segmentation from limited training data | - |
dc.type | Conference paper | - |
dc.contributor.conference | IEEE International Conference on Robotics and Automation (ICRA) (21 May 2018 - 25 May 2018 : Brisbane, Australia) | - |
dc.identifier.doi | 10.1109/ICRA.2018.8461082 | - |
dc.publisher.place | online | - |
dc.relation.grant | http://purl.org/au-research/grants/arc/CE140100016 | - |
pubs.publication-status | Published | - |
dc.identifier.orcid | Reid, I. [0000-0001-7790-6423] | - |
Appears in Collections: | Aurora harvest 4; Computer Science publications |
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.