A case study on automated fuzz target generation for large codebases
International Symposium on Empirical Software Engineering and Measurement, 2019, vol. 2019-September, pp. 1-6
13th International Symposium on Empirical Software Engineering and Measurement (ESEM) (19 Sep 2019 - 20 Sep 2019 : Porto de Galinhas, Brazil)
Matthew Kelly, Christoph Treude, Alex Murray
Fuzz Testing is a largely automated testing technique that provides random and unexpected input to a program in an attempt to trigger failure conditions. Much of the research conducted thus far into Fuzz Testing has focused on improving the efficiency of available Fuzz Testing tools and frameworks. In this paper, however, we look at a way to reduce the amount of developer time required to integrate Fuzz Testing into the maintenance of an existing codebase. We accomplish this with a new technique for automatically generating Fuzz Targets, the modified versions of programs on which Fuzz Testing tools operate. We evaluated three different Fuzz Testing solutions on the codebase of our industry partner and found that the fully automated solution uncovered significantly more bugs relative to the developer time required to implement it. Our research is an important step towards increasing the prevalence of Fuzz Testing by making it simpler to integrate a Fuzz Testing solution for maintaining an existing codebase.
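To illustrate the concept of a Fuzz Target mentioned in the abstract, below is a minimal sketch in the libFuzzer convention, one common style of fuzz target; the paper's generated targets are not shown here, and `parse_header` is a hypothetical function under test invented for this example.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical function under test (assumption, not from the paper):
   returns 1 if the input begins with the two magic bytes "FZ". */
static int parse_header(const uint8_t *data, size_t size) {
    if (size < 2) return 0;
    return data[0] == 'F' && data[1] == 'Z';
}

/* The fuzz target: the entry point a fuzzing engine such as libFuzzer
   calls repeatedly with mutated inputs. It simply feeds the raw bytes
   to the code under test; crashes and sanitizer reports surface bugs. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_header(data, size);
    return 0; /* libFuzzer expects 0; other values are reserved */
}
```

Writing such wrappers by hand for every entry point of a large codebase is exactly the developer effort the paper's automated generation technique aims to reduce.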
Appears in Collections:
Aurora harvest 8
Computer Science publications