This thesis presents an approach for the automated grading of UML use case diagrams. Many software engineering courses require students to model the behavioural features of a problem domain or an object-oriented design as a use case diagram. Because assessing UML assignments is time-consuming and labor-intensive, an automated grading strategy can help instructors speed up the grading process while maintaining uniformity and fairness in large classrooms. The effectiveness of this automated grading approach was assessed by applying it to two real-world assignments. The grades it produced were close to manual grades, differing by less than 7% on average; when additional strategies were applied, such as configuring the tool's settings and grading against multiple solutions, the average differences were even lower. The grading methods and the supporting tool are both proposed and empirically validated.
Author Keywords: Automated Grading, Model Comparison, Use Case Diagrams