One evaluation of model-based testing and its automation

A. Pretschner, W. Prenninger, S. Wagner, C. Kühnel, M. Baumgartner, B. Sostawa, R. Zölch, T. Stauner

Research output: Contribution to conference › Paper › peer-review


Abstract

Model-based testing relies on behavior models for the generation of model traces: input and expected output (test cases) for an implementation. We use the case study of an automotive network controller to assess different test suites in terms of error detection, model coverage, and implementation coverage. Some of these suites were generated automatically with and without models, purely at random, and with dedicated functional test selection criteria. Other suites were derived manually, with and without the model at hand. Both automatically and manually derived model-based test suites detected significantly more requirements errors than hand-crafted test suites that were directly derived from the requirements. The number of detected programming errors did not depend on the use of models. Automatically generated model-based test suites detected as many errors as hand-crafted model-based suites with the same number of tests. A sixfold increase in the number of model-based tests led to an 11% increase in detected errors.
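
To make the abstract's first sentence concrete, the following sketch shows how a behavior model, here a toy finite state machine loosely inspired by a network controller, can drive the generation of model traces and the checking of an implementation against them. It is a minimal illustration under invented assumptions (the model, its states, inputs, and function names are hypothetical), not the tool chain evaluated in the paper.

```python
# Minimal, illustrative sketch of model-based test generation, assuming a
# behavior model given as a finite state machine. This is NOT the paper's
# tool chain; the model, states, and inputs are hypothetical.

from itertools import product

# Hypothetical behavior model of a tiny network controller:
# (state, input) -> (next_state, expected_output)
MODEL = {
    ("sleep", "wake"):  ("ready", "ack"),
    ("ready", "send"):  ("busy",  "frame"),
    ("busy",  "done"):  ("ready", "ack"),
    ("ready", "sleep"): ("sleep", "ack"),
}
INPUTS = ["wake", "send", "done", "sleep"]
INITIAL = "sleep"

def generate_tests(depth):
    """Enumerate every input sequence of length `depth` the model accepts,
    pairing each input with the model's expected output (a model trace)."""
    tests = []
    for seq in product(INPUTS, repeat=depth):
        state, trace = INITIAL, []
        for inp in seq:
            step = MODEL.get((state, inp))
            if step is None:         # model has no transition here: discard
                break
            state, out = step
            trace.append((inp, out))
        else:
            tests.append(trace)      # whole sequence accepted by the model
    return tests

def run_suite(tests, make_impl):
    """Run each trace against a fresh implementation instance and count
    test cases whose observed output deviates from the model's."""
    failures = 0
    for trace in tests:
        impl = make_impl()
        if any(impl(inp) != expected for inp, expected in trace):
            failures += 1
    return failures

def conformant_impl():
    """A trivially conformant 'implementation': it interprets the model."""
    state = INITIAL
    def step(inp):
        nonlocal state
        state, out = MODEL.get((state, inp), (state, "error"))
        return out
    return step

if __name__ == "__main__":
    suite = generate_tests(depth=3)
    print(len(suite), "generated tests;",
          run_suite(suite, conformant_impl), "failures")
```

An automated generator in the paper's sense would replace the exhaustive enumeration in generate_tests with random selection or a dedicated functional test selection criterion; comparing the error-detection power of such suites is what the study measures.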

Original language: English
Pages: 392-401
Number of pages: 10
DOIs
State: Published - 2005
Event: 27th International Conference on Software Engineering, ICSE05 - St. Louis, MO, United States
Duration: 15 May 2005 - 21 May 2005

Conference

Conference: 27th International Conference on Software Engineering, ICSE05
Country/Territory: United States
City: St. Louis, MO
Period: 15/05/05 - 21/05/05

Keywords

  • Abstraction
  • Automotive software
  • CASE
  • Coverage
  • Model-based development
  • Test case generation
