A case study on automated fuzz target generation for large codebases
Date
2019
Authors
Kelly, M.
Treude, C.
Murray, A.
Type
Conference paper
Citation
International Symposium on Empirical Software Engineering and Measurement, 2019, vol. 2019-September, pp. 1-6
Statement of Responsibility
Matthew Kelly, Christoph Treude, Alex Murray
Conference Name
13th International Symposium on Empirical Software Engineering and Measurement (ESEM) (19 Sep 2019 - 20 Sep 2019 : Porto de Galinhas, Brazil)
Abstract
Fuzz Testing is a largely automated testing technique that provides random and unexpected input to a program in an attempt to trigger failure conditions. Much of the research conducted thus far into Fuzz Testing has focused on improving the efficiency of available Fuzz Testing tools and frameworks. In this paper, however, we instead look at how to reduce the amount of developer time required to integrate Fuzz Testing into the maintenance of an existing codebase. We accomplish this with a new technique for automatically generating Fuzz Targets, the modified versions of programs on which Fuzz Testing tools operate. We evaluated three different Fuzz Testing solutions on the codebase of our industry partner and found that the fully automated solution uncovered significantly more bugs relative to the developer time required to implement it. Our research is an important step towards increasing the prevalence of Fuzz Testing by making it simpler to integrate a Fuzz Testing solution for maintaining an existing codebase.
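Note: the record above does not include the generated harnesses themselves. As a minimal sketch of what a Fuzz Target looks like in practice, the following libFuzzer-style entry point is illustrative only; the function under test, parse_config, is a hypothetical stand-in for any library routine that consumes untrusted input, and the paper's actual harness format may differ.

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical function under test: any API that parses untrusted input. */
    extern int parse_config(const uint8_t *buf, size_t len);

    /* libFuzzer entry point: the fuzzing engine repeatedly calls this with
     * mutated inputs and reports crashes, hangs, or sanitizer violations. */
    int LLVMFuzzerTestOneInput(const uint8_t *Data, size_t Size) {
        parse_config(Data, Size);
        return 0;  /* non-zero return values are reserved by libFuzzer */
    }

Automatically generating such targets removes the manual step of writing one harness per entry point, which is the developer-time cost the paper measures.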
Rights
©2019 IEEE