|Title:||Combining deep learning and level set for the automated segmentation of the left ventricle of the heart from cardiac cine magnetic resonance|
|Citation:||Medical Image Analysis, 2017; 35:159-171|
|Author(s):||Tuan Anh Ngo, Zhi Lu, Gustavo Carneiro|
|Abstract:||We introduce a new methodology that combines deep learning and level set for the automated segmentation of the left ventricle of the heart from cardiac cine magnetic resonance (MR) data. This combination is relevant for segmentation problems where the visual object of interest presents large shape and appearance variations but the annotated training set is small, which is the case for various medical image analysis applications, including the one considered in this paper. In particular, level set methods are based on shape and appearance terms that use small training sets, but they present limitations in modelling the visual object variations. Deep learning methods can model such variations using relatively small amounts of annotated training data, but they often need to be regularised to generalise well. The combination of these methods therefore brings together the advantages of both approaches, producing a methodology that needs small training sets and produces accurate segmentation results. We test our methodology on the MICCAI 2009 left ventricle segmentation challenge database (containing 15 sequences for training, 15 for validation and 15 for testing), where our approach achieves the most accurate results in the semi-automated problem and state-of-the-art results in the fully automated challenge.|
|Keywords:||Cardiac cine magnetic resonance; Deep learning; Level set method; Segmentation of the left ventricle of the heart|
|Rights:||Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.|
|Appears in Collections:||Medicine publications|
Files in This Item:
There are no files associated with this item.