Choosing an NLP library for analyzing software documentation: a systematic literature review and a series of experiments
Files
(Accepted version)
Date
2017
Authors
Al Omran, F.
Treude, C.
Type
Conference paper
Citation
IEEE International Working Conference on Mining Software Repositories, 2017, vol.0, pp.187-197
Statement of Responsibility
Fouad Nasser A Al Omran, Christoph Treude
Conference Name
2017 IEEE/ACM 14th International Conference on Mining Software Repositories (MSR 2017) (20 May 2017 - 21 May 2017 : Buenos Aires, Argentina)
Abstract
To uncover interesting and actionable information from natural language documents authored by software developers, many researchers rely on "out-of-the-box" NLP libraries. However, software artifacts written in natural language differ from other textual documents due to the technical language used. In this paper, we first analyze the state of the art through a systematic literature review, in which we find that only a small minority of papers justify their choice of an NLP library. We then report on a series of experiments in which we applied four state-of-the-art NLP libraries to publicly available software artifacts from three different sources. Our results show low agreement between different libraries (only between 60% and 71% of tokens were assigned the same part-of-speech tag by all four libraries) as well as differences in accuracy depending on the source: for example, spaCy achieved the best accuracy on Stack Overflow data with nearly 90% of tokens tagged correctly, while it was clearly outperformed by Google's SyntaxNet when parsing GitHub ReadMe files. Our work implies that researchers should make an informed decision about the particular NLP library they choose, and that customizations to libraries might be necessary to achieve good results when analyzing software artifacts written in natural language.
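The agreement figure reported in the abstract (the share of tokens to which all four libraries assign the same part-of-speech tag) can be sketched as follows. The tag sequences below are hypothetical placeholders, not output from the actual libraries studied:

```python
# Hypothetical POS tags for one tokenized sentence, as produced by four
# taggers. The library names and tag values are illustrative only.
tags_by_library = {
    "lib_a": ["VB", "DT", "NN", "NN"],
    "lib_b": ["VB", "DT", "NN", "NN"],
    "lib_c": ["NN", "DT", "NN", "NN"],
    "lib_d": ["VB", "DT", "VB", "NN"],
}

def full_agreement(tags_by_library):
    """Fraction of token positions where every tagger assigns the same tag."""
    # Transpose: one tuple of tags per token position.
    columns = list(zip(*tags_by_library.values()))
    matches = [len(set(col)) == 1 for col in columns]
    return sum(matches) / len(matches)

print(full_agreement(tags_by_library))  # 2 of 4 positions agree -> 0.5
```

Computed over a whole corpus, this statistic would yield figures comparable to the 60%-71% range the paper reports.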
Rights
© 2017 IEEE