Finding faults: Manual testing vs. random+ testing vs. user reports

Ilinca Ciupa, Bertrand Meyer, Manuel Oriol, Alexander Pretschner

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

24 Scopus citations

Abstract

The usual way to compare testing strategies, whether theoretically or empirically, is to compare the number of faults they detect. As a means of establishing that one testing strategy is definitively better than another, this is a rather coarse criterion: shouldn't the nature of the faults matter as well as their number? The empirical study reported here confirms this conjecture. An analysis of faults detected in Eiffel libraries through three different techniques (random tests, manual tests, and user incident reports) shows that each is good at uncovering significantly different kinds of faults. None of the techniques subsumes any of the others; each brings distinct contributions.

Original language: English
Title of host publication: Proceedings - 19th International Symposium on Software Reliability Engineering, ISSRE 2008
Pages: 157-166
Number of pages: 10
DOIs
State: Published - 2008
Externally published: Yes
Event: 19th International Symposium on Software Reliability Engineering, ISSRE 2008 - Seattle, WA, United States
Duration: 10 Nov 2008 – 14 Nov 2008

Publication series

Name: Proceedings - International Symposium on Software Reliability Engineering, ISSRE
ISSN (Print): 1071-9458

Conference

Conference: 19th International Symposium on Software Reliability Engineering, ISSRE 2008
Country/Territory: United States
City: Seattle, WA
Period: 10/11/08 – 14/11/08
