One evaluation of model-based testing and its automation

A. Pretschner, W. Prenninger, S. Wagner, C. Kühnel, M. Baumgartner, B. Sostawa, R. Zölch, T. Stauner

Research output: Contribution to journal › Conference article › peer-review

140 Scopus citations


Model-based testing relies on behavior models for the generation of model traces: input and expected output - test cases - for an implementation. We use the case study of an automotive network controller to assess different test suites in terms of error detection, model coverage, and implementation coverage. Some of these suites were generated automatically with and without models, purely at random, and with dedicated functional test selection criteria. Other suites were derived manually, with and without the model at hand. Both automatically and manually derived model-based test suites detected significantly more requirements errors than hand-crafted test suites that were directly derived from the requirements. The number of detected programming errors did not depend on the use of models. Automatically generated model-based test suites detected as many errors as hand-crafted model-based suites with the same number of tests. A sixfold increase in the number of model-based tests led to an 11% increase in detected errors.

Original language: English
Article number: 1553582
Pages (from-to): 392-401
Number of pages: 10
Journal: Proceedings - International Conference on Software Engineering
State: Published - 2005
Externally published: Yes
Event: 27th International Conference on Software Engineering, ICSE 2005 - Saint Louis, MO, United States
Duration: 15 May 2005 - 21 May 2005


  • Abstraction
  • Automotive software
  • CASE
  • Coverage
  • Model-based development
  • Test case generation


