3D semantic mapping from arthroscopy using out-of-distribution pose and depth and in-distribution segmentation training

dc.contributor.author: Jonmohamadi, Y.
dc.contributor.author: Ali, S.
dc.contributor.author: Liu, F.
dc.contributor.author: Roberts, J.
dc.contributor.author: Crawford, R.
dc.contributor.author: Carneiro, G.
dc.contributor.author: Pandey, A.K.
dc.contributor.conference: 24th International Conference on Medical Image Computing and Computer Assisted Intervention – MICCAI 2021 (27 Sep 2021 - 1 Oct 2021 : Strasbourg, France)
dc.contributor.editor: de Bruijne, M.
dc.contributor.editor: Cattin, P.C.
dc.contributor.editor: Cotin, S.
dc.contributor.editor: Padoy, N.
dc.contributor.editor: Speidel, S.
dc.contributor.editor: Zheng, Y.
dc.contributor.editor: Essert, C.
dc.date.issued: 2021
dc.description.abstract: Minimally invasive surgery (MIS) has many documented advantages, but the surgeon’s limited visual contact with the scene can be problematic. Hence, systems that can help surgeons navigate, such as a method that can produce a 3D semantic map, can compensate for the limitation above. In theory, we can borrow 3D semantic mapping techniques developed for robotics, but this requires finding solutions to the following challenges in MIS: 1) semantic segmentation, 2) depth estimation, and 3) pose estimation. In this paper, we propose the first 3D semantic mapping system from knee arthroscopy that solves the three challenges above. Using out-of-distribution non-human datasets, where pose could be labeled, we jointly train depth+pose estimators using self-supervised and supervised losses. Using an in-distribution human knee dataset, we train a fully-supervised semantic segmentation system to label arthroscopic image pixels into femur, ACL, and meniscus. Taking testing images from human knees, we combine the results from these two systems to automatically create 3D semantic maps of the human knee. The result of this work opens the pathway to the generation of intra-operative 3D semantic mapping, registration with pre-operative data, and robotic-assisted arthroscopy. Source code: https://github.com/YJonmo/EndoMapNet.
dc.description.statementofresponsibility: Yaqub Jonmohamadi, Shahnewaz Ali, Fengbei Liu, Jonathan Roberts, Ross Crawford, Gustavo Carneiro, Ajay K. Pandey
dc.identifier.citation: Lecture Notes in Computer Science, 2021 / de Bruijne, M., Cattin, P.C., Cotin, S., Padoy, N., Speidel, S., Zheng, Y., Essert, C. (eds.), vol. 12902 LNCS, pp. 383-393
dc.identifier.doi: 10.1007/978-3-030-87196-3_36
dc.identifier.isbn: 9783030871956
dc.identifier.issn: 0302-9743
dc.identifier.issn: 1611-3349
dc.identifier.orcid: Liu, F. [0000-0003-0355-2006]
dc.identifier.orcid: Carneiro, G. [0000-0002-5571-6220]
dc.identifier.uri: https://hdl.handle.net/2440/133344
dc.language.iso: en
dc.publisher: Springer International Publishing
dc.publisher.place: Switzerland
dc.relation.grant: http://purl.org/au-research/grants/arc/DP180103232
dc.relation.grant: http://purl.org/au-research/grants/arc/FT190100525
dc.relation.ispartofseries: Lecture Notes in Computer Science
dc.rights: © Springer Nature Switzerland AG 2021
dc.source.uri: https://link.springer.com/book/10.1007/978-3-030-87196-3
dc.subject: 3D semantic mapping; endoscopy; deep learning
dc.title: 3D semantic mapping from arthroscopy using out-of-distribution pose and depth and in-distribution segmentation training
dc.type: Conference paper
pubs.publication-status: Published
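
The abstract above describes a pipeline that fuses per-frame depth and camera-pose predictions with semantic segmentation to build a 3D semantic map of the knee. The Python sketch below illustrates the general idea of that fusion step only: back-projecting segmented pixels into a common world frame using predicted depth, camera intrinsics, and pose. It is a minimal illustration, not the authors' EndoMapNet implementation; the function names, the concatenation-based fusion, and the class indices for femur/ACL/meniscus are assumptions.

import numpy as np

def backproject_labeled_frame(depth, labels, K, T_world_cam):
    """Back-project one segmented frame into world coordinates.

    depth       : (H, W) depth map in metres (depth-network prediction)
    labels      : (H, W) integer class map (assumed: 0=background,
                  1=femur, 2=ACL, 3=meniscus)
    K           : (3, 3) camera intrinsic matrix
    T_world_cam : (4, 4) camera-to-world pose (pose-network prediction)

    Returns (N, 3) world points and (N,) labels for the valid pixels.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    valid = (depth > 0) & (labels > 0)

    # Pixel -> camera coordinates: X_cam = depth * K^-1 [u, v, 1]^T
    pix = np.stack([u[valid], v[valid], np.ones(valid.sum())], axis=0)
    cam = np.linalg.inv(K) @ pix * depth[valid]

    # Camera -> world coordinates via the estimated pose
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])
    world = (T_world_cam @ cam_h)[:3].T
    return world, labels[valid]

def fuse_sequence(frames):
    """Accumulate per-frame labeled points into one semantic point cloud.

    `frames` is an iterable of (depth, labels, K, T_world_cam) tuples.
    Points are simply concatenated here; a practical system would also
    merge overlapping surface observations (e.g. voxel grid or TSDF).
    """
    pts, lbls = [], []
    for depth, labels, K, T in frames:
        p, l = backproject_labeled_frame(depth, labels, K, T)
        pts.append(p)
        lbls.append(l)
    return np.concatenate(pts, axis=0), np.concatenate(lbls, axis=0)

The concatenation-based accumulation is the simplest possible choice for a sketch; any real intra-operative mapping system would de-duplicate and regularise the resulting cloud before registration with pre-operative data.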
