TY - CONF
T1 - Semi-Automatic Assessment of Modeling Exercises using Supervised Machine Learning
AU - Krusche, Stephan
N1 - Publisher Copyright:
© 2022 IEEE Computer Society. All rights reserved.
PY - 2022
Y1 - 2022
AB - Motivation: Modeling is an essential skill in software engineering. With rising student numbers, introductory courses with hundreds of participants are becoming standard. Grading all students' exercise solutions and providing individual feedback is time-consuming. Objectives: This paper describes a semi-automatic assessment approach based on supervised machine learning. It aims to increase the fairness and efficiency of grading and to improve the quality of the provided feedback. Method: While the first submitted models are assessed manually, the system learns which elements are correct or incorrect and which feedback is appropriate. In subsequent assessments, the system identifies similar model elements and suggests how to assess them based on the scores and feedback of previous assessments. While reviewing new submissions, reviewers apply or adjust these suggestions and manually assess the remaining model elements. Results: We empirically evaluated this approach in three modeling exercises of a large software engineering course, each with more than 800 participants, and compared the results with three manually assessed exercises. A quantitative analysis reveals an automatic feedback rate between 65% and 80%. Between 4.6% and 9.6% of the suggestions had to be adjusted manually. Discussion: Qualitative feedback indicates that semi-automatic assessment reduces grading effort and improves consistency. A few participants noted that the suggested feedback sometimes does not fit the context of the submission and that feedback selection should be improved further.
UR - http://www.scopus.com/inward/record.url?scp=85118750146&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85118750146
T3 - Proceedings of the Annual Hawaii International Conference on System Sciences
SP - 871
EP - 880
BT - Proceedings of the 55th Annual Hawaii International Conference on System Sciences, HICSS 2022
A2 - Bui, Tung X.
PB - IEEE Computer Society
T2 - 55th Annual Hawaii International Conference on System Sciences, HICSS 2022
Y2 - 3 January 2022 through 7 January 2022
ER -