Abstract
Assessing programming exercises requires time and effort from instructors, especially in large courses with many students. Automated assessment systems reduce this effort but impose a particular solution through their test cases, which can limit students' creativity and lead to a reduced learning experience. To verify code quality or evaluate creative programming tasks, manual review of code submissions is necessary. However, downloading the students' code, identifying their contributions, and assessing their solutions can require many repetitive manual steps. In this paper, we present Stager, a tool designed to support code reviewers by reducing the time needed to prepare and conduct manual assessments. Stager downloads multiple submissions and adds each student's name to the corresponding folder and project, so that reviewers can better distinguish between submissions. It filters out late submissions and applies coding style standards to prevent whitespace-related issues. Stager combines all changes of one student into a single commit, so that reviewers can identify the student's solution more quickly. Stager is an open-source, programming-language-agnostic tool with an automated build pipeline for cross-platform executables, and it can be used in a variety of computer science courses. We used Stager in a software engineering undergraduate course with 1600 students and 45 teaching assistants in three separate programming exercises. We found that Stager improves the code correction experience and reduces the overall assessment effort.
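The abstract does not show how Stager combines a student's changes into a single commit; the following is a minimal sketch of such a squashing step, assuming a Git-based workflow. The repository setup, file names, and student name are illustrative placeholders, not taken from the paper.

```shell
set -e

# Illustrative stand-in for a downloaded student repository:
# one instructor-provided template commit plus two student commits.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "stager@example.com"
git config user.name "Stager"

echo "template" > Main.java
git add Main.java
git commit -qm "Template"              # instructor-provided base

echo "step 1" >> Main.java
git commit -qam "WIP"                  # first student commit
echo "step 2" >> Main.java
git commit -qam "fix"                  # second student commit

# Squash everything after the template into one commit, so the
# reviewer sees the whole solution as a single diff:
git reset -q --soft HEAD~2
git commit -qm "Solution by Max Mustermann"
# History now holds two commits: the template and the combined solution.
```

A `git reset --soft` keeps the student's changes staged while moving the branch back to the template commit, so a single follow-up commit captures the entire contribution.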
Original language | English |
---|---|
Pages (from–to) | 34–43 |
Number of pages | 10 |
Journal | CEUR Workshop Proceedings |
Volume | 2358 |
Publication status | Published - 2019 |
Event | Proceedings of the 16th Workshop "Software Engineering im Unterricht der Hochschulen" (SEUH 2019) - 16th Workshop "Software Engineering in University Teaching", Bremerhaven, Germany, 21 Feb 2019 → 22 Feb 2019 |