Please use this identifier to cite or link to this item: http://hdl.handle.net/2440/111525
Type: Journal article
Title: Linear-regression convolutional neural network for fully automated coronary lumen segmentation in intravascular optical coherence tomography
Author: Yong, Y.
Tan, L.
McLaughlin, R.
Chee, K.
Liew, Y.
Citation: Journal of Biomedical Optics, 2017; 22(12):126005-1-126005-9
Publisher: SPIE
Issue Date: 2017
ISSN: 1083-3668 (print)
1560-2281 (online)
Statement of Responsibility: Yan Ling Yong, Li Kuo Tan, Robert A. McLaughlin, Kok Han Chee, Yih Miin Liew
Abstract: Intravascular optical coherence tomography (OCT) is an optical imaging modality commonly used in the assessment of coronary artery diseases during percutaneous coronary intervention. Manual segmentation to assess luminal stenosis from OCT pullback scans is challenging and time consuming. We propose a linear-regression convolutional neural network to automatically perform vessel lumen segmentation, parameterized in terms of radial distances from the catheter centroid in polar space. Benchmarked against gold-standard manual segmentation, our proposed algorithm achieves average locational accuracy of the vessel wall of 22 microns, and 0.985 and 0.970 in Dice coefficient and Jaccard similarity index, respectively. The average absolute error of luminal area estimation is 1.38%. The processing rate is 40.6 ms per image, suggesting the potential to be incorporated into a clinical workflow and to provide quantitative assessment of vessel lumen in an intraoperative time frame.
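The abstract benchmarks segmentation quality with the Dice coefficient and Jaccard similarity index. These overlap measures are standard and can be computed directly from binary masks; the sketch below is illustrative only (it is not the authors' code, and the function name and toy masks are invented for the example):

```python
import numpy as np

def dice_and_jaccard(pred, truth):
    """Dice coefficient and Jaccard index between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2.0 * intersection / (pred.sum() + truth.sum())
    jaccard = intersection / union
    return dice, jaccard

# Toy example: two slightly offset circular "lumen" masks on a 100x100 grid
yy, xx = np.mgrid[0:100, 0:100]
truth = (xx - 50) ** 2 + (yy - 50) ** 2 < 30 ** 2
pred = (xx - 52) ** 2 + (yy - 50) ** 2 < 30 ** 2

dice, jaccard = dice_and_jaccard(pred, truth)
```

Note that the two measures are related by Dice = 2J / (1 + J), so a Dice of 0.985 and a Jaccard of 0.970, as reported, are mutually consistent.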
Keywords: coronary lumen; neural network; optical coherence tomography; optical diagnostics; pattern recognition; segmentation
Rights: © 2017 SPIE
RMID: 0030079780
DOI: 10.1117/1.JBO.22.12.126005
Grant ID: http://purl.org/au-research/grants/arc/CE140100003
http://purl.org/au-research/grants/arc/DP150104660
Appears in Collections:Medicine publications

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.