Green Fuzzing: A Saturation-Based Stopping Criterion using Vulnerability Prediction

Stephan Lipp, Daniel Elsner, Severin Kacianka, Alexander Pretschner, Marcel Böhme, Sebastian Banescu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Scopus citation

Abstract

Fuzzing is a widely used automated testing technique that uses random inputs to provoke program crashes that may indicate security flaws. A difficult but important question is when to stop a fuzzing campaign. Usually, a campaign is terminated when the number of crashes and/or covered code elements has not increased over a certain period of time. To avoid premature termination when a ramp-up time is needed before vulnerabilities are reached, code coverage is often preferred over crash count to decide when to terminate a campaign. However, a campaign might only increase the coverage of non-security-critical code or repeatedly trigger the same crashes. For these reasons, both code coverage and crash count tend to overestimate fuzzing effectiveness, unnecessarily increasing the duration and thus the cost of the testing process. The present paper explores the trade-off between the amount of saved fuzzing time and the number of missed bugs when stopping campaigns based on the saturation of covered, potentially vulnerable functions rather than triggered crashes or regular function coverage. In a large-scale empirical evaluation of 30 open-source C programs with a total of 240 security bugs and 1,280 fuzzing campaigns, we first show that binary classification models trained on software with known vulnerabilities (CVEs), using lightweight machine-learning features derived from findings of static application security testing (SAST) tools and proven software metrics, can reliably predict (potentially) vulnerable functions. Second, we show that our proposed stopping criterion terminates 24-hour fuzzing campaigns 6-12 hours earlier than the saturation of crashes and regular function coverage while missing, on average, fewer than 0.5 of the 12.5 bugs contained per program.
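The saturation-based stopping rule the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function name, the sampling scheme, and the saturation window are assumptions. The idea is to track, over time, the count of covered functions that a classifier has flagged as potentially vulnerable, and stop once that count has not grown for a fixed window of samples.

```python
def should_stop(coverage_counts, window):
    """Saturation check (illustrative sketch, not the paper's code).

    coverage_counts: counts of covered, predicted-vulnerable functions,
    sampled periodically during the campaign (oldest first).
    Returns True once the count has not grown over the last `window`
    samples, i.e., coverage of security-relevant code has saturated.
    """
    if len(coverage_counts) <= window:
        return False  # not enough history to judge saturation yet
    return coverage_counts[-1] <= coverage_counts[-(window + 1)]


# Hypothetical campaign trace: vulnerable-function coverage plateaus at 10.
trace = [0, 3, 7, 9, 10, 10, 10, 10, 10]
first_stop = next(
    (t for t in range(len(trace)) if should_stop(trace[: t + 1], window=3)),
    None,
)
```

In this toy trace the criterion fires at sample index 7, three samples after coverage of the predicted-vulnerable functions stopped growing; applying the same check to crash counts or full function coverage instead would reproduce the baseline criteria the paper compares against.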

Original language: English
Title of host publication: ISSTA 2023 - Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis
Editors: Rene Just, Gordon Fraser
Publisher: Association for Computing Machinery, Inc
Pages: 127-139
Number of pages: 13
ISBN (Electronic): 9798400702211
State: Published - 12 Jul 2023
Event: 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2023 - Seattle, United States
Duration: 17 Jul 2023 - 21 Jul 2023

Publication series

Name: ISSTA 2023 - Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis

Conference

Conference: 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2023
Country/Territory: United States
City: Seattle
Period: 17/07/23 - 21/07/23

Keywords

  • empirical study
  • fuzzing
  • stopping criterion
