Towards Comprehensive Assessment of Code Quality at CS1-Level: Tools, Rubrics and Refactoring Rules

Date

2024

Authors

Izu, C.
Mirolo, C.

Type

Conference paper

Citation

IEEE Global Engineering Education Conference, EDUCON, 2024, pp. 1-10

Statement of Responsibility

Cruz Izu, Claudio Mirolo

Conference Name

IEEE Global Engineering Education Conference (EDUCON) (8 May 2024 - 11 May 2024 : Kos Island, Greece)

Abstract

While most student code is assessed for correctness and functionality, recent work has looked at extending automatic assessment to include quality aspects. In software engineering, code reviews help developers increase the quality of a project by identifying and cleaning up poor structures, commonly referred to as code smells. Despite the availability of professional tools, evaluating the quality of small programs at CS1 level is quite different from evaluating a complex software system. Thus, identifying meaningful quality criteria for small programs written by novices, and either adapting current tools or designing new ones for that purpose, are topics worth investigating. The present work contributes to this aim by analysing the code produced by CS1 students from three different perspectives: (i) inspecting the feedback of automated tools, Hyperstyle and Pylint; (ii) matching the smells addressed by a set of refactoring rules; (iii) devising and applying a manual rubric. A comparative analysis highlights the strengths and weaknesses of these approaches. Overall, automatic quality feedback needs to be complemented with classroom instruction on manually detecting code issues and deciding whether they need refactoring. Moreover, such review activities have the potential to develop code comprehension by engaging novice programmers in reflecting on their own code.

Rights

© 2024, IEEE
