Abstract
Model-based testing relies on behavior models for the generation of model traces: inputs and expected outputs, i.e., test cases, for an implementation. We use the case study of an automotive network controller to assess different test suites in terms of error detection, model coverage, and implementation coverage. Some of these suites were generated automatically, with and without models, purely at random, and with dedicated functional test selection criteria. Other suites were derived manually, with and without the model at hand. Both automatically and manually derived model-based test suites detected significantly more requirements errors than hand-crafted test suites that were derived directly from the requirements. The number of detected programming errors did not depend on the use of models. Automatically generated model-based test suites detected as many errors as hand-crafted model-based suites with the same number of tests. A sixfold increase in the number of model-based tests led to an 11% increase in detected errors.
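The derivation of test cases from a behavior model, as described above, can be illustrated with a minimal sketch. The state machine, input alphabet, and function names below are hypothetical examples for illustration only, not the study's actual controller model or tooling: a behavior model predicts an output sequence for each enabled input sequence, and enumerating such sequences yields input/expected-output pairs, i.e., test cases.

```python
# Minimal sketch (hypothetical model, not the authors' tooling):
# a Mealy-style behavior model maps (state, input) -> (next_state, output).
from itertools import product

MODEL = {
    ("idle", "wake"): ("active", "ack"),
    ("active", "send"): ("active", "data"),
    ("active", "sleep"): ("idle", "ack"),
}

def trace(inputs, start="idle"):
    """Run an input sequence through the model; return the expected
    output sequence, or None if the sequence is not enabled."""
    state, outputs = start, []
    for i in inputs:
        if (state, i) not in MODEL:
            return None
        state, out = MODEL[(state, i)]
        outputs.append(out)
    return outputs

# Exhaustively enumerate input sequences up to length 3 and keep the
# enabled ones; each (inputs, expected) pair is one test case.
alphabet = ["wake", "send", "sleep"]
suite = []
for n in range(1, 4):
    for inputs in product(alphabet, repeat=n):
        expected = trace(list(inputs))
        if expected is not None:
            suite.append((list(inputs), expected))

for inputs, expected in suite[:5]:
    print(inputs, "->", expected)
```

Random generation, as mentioned in the abstract, would sample input sequences instead of enumerating them; functional test selection criteria would filter the enumerated traces against coverage goals on the model.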
| Original language | English |
| --- | --- |
| Article number | 1553582 |
| Pages (from-to) | 392-401 |
| Number of pages | 10 |
| Journal | Proceedings - International Conference on Software Engineering |
| Volume | 2005 |
| Publication status | Published - 2005 |
| Event | 27th International Conference on Software Engineering, ICSE 2005, Saint Louis, MO, United States. Duration: 15 May 2005 → 21 May 2005 |