An evaluation of the impact of changes to assessment practice in a second-year object-oriented Java programming module

  • Project leader(s): Anton Dil, Sharon Dawes
  • Theme: Innovative assessment
  • Faculty: STEM
  • Status: Archived
  • Dates: April 2022 to February 2024

The Level 2 module Object-oriented programming with Java (M250) was rewritten for 2021, informed by student feedback and by changes to our Level 1 offerings.

In line with university policy[1] to make assessment engaging and authentic, and to help develop self-regulated and independent learners, M250 now incorporates more relatable examples, so that students can bring their own domain knowledge to programming tasks. Key to this was the adoption of a practical, third-party textbook, supported by our own online ‘Chapter companions’.

SEAM (Student Experience on a Module) survey feedback improved overall by 9.2%, and Appendix 1 indicates where our redesign had positive and negative impacts on this.

Our online programming activities are now supported by unlimited and penalty-free access to automated feedback using the CodeRunner (Lobb, 2016) environment. A more radical decision was to extend this testbed support to allow students to check their Tutor Marked Assignment (TMA) code. This means that students can get early feedback on their progress and refine their work before submission for marking by tutors, who can also see students’ progress online.
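As a rough illustration (not taken from the module materials), a testbed check in this general style calls a student-written class and reports a pass or fail message rather than deducting marks; the Account class, method names and expected values below are hypothetical.

    // Illustrative sketch only: the real M250 CodeRunner questions and tests are not reproduced here.
    // 'Account' stands in for a class a student might write; the check reports pass/fail feedback
    // without applying any penalty, so students can rerun it as often as they wish.

    class Account {
        private int balance;
        public void deposit(int amount)  { balance += amount; }
        public void withdraw(int amount) { balance -= amount; }
        public int getBalance()          { return balance; }
    }

    public class AccountCheck {
        public static void main(String[] args) {
            Account account = new Account();
            account.deposit(100);
            account.withdraw(40);
            int expected = 60;                    // value the testbed expects
            int actual = account.getBalance();    // value the student's code produces
            if (actual == expected) {
                System.out.println("Pass: balance is " + actual);
            } else {
                System.out.println("Fail: expected balance " + expected + " but was " + actual);
            }
        }
    }

Reporting a message rather than deducting marks reflects the unlimited, penalty-free approach described above: students can refine their code and rerun the checks before submitting their TMA for tutor marking.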

There was high participation in our online assessment. Predictably (Sambell and McDowell, 1998; Cain et al., 2020), students engaged more with summative activities than with formative ones. We report on engagement with summative feedback and demographic differences in Section 2, while Appendix 2 discusses engagement with formative assessment. Appendix 3 provides a summary of module demographic groups.

In a student survey (Appendix 4), 78.5% of respondents said that the online testbeds always or often helped them, and 81.4% reported that using automated feedback on summative assignments significantly increased their confidence.

We found that use of our feedback was strongly related to increased scores and successful module completion. There was also evidence of automated feedback driving discussions in the module forums.

Compared to the previous version of the module, a higher proportion of responding students reported that the module material met their expectations for Level 2. However, students on the non-specialist qualification Q67 (Computing and IT and a second subject), which includes a higher representation of female and Black students, were significantly less comfortable with the material than students on the specialist Q62 qualification.

Students whose code more frequently failed to compile averaged lower scores on assessments and were more likely to drop out. Deeper analysis of the kinds of errors that students encountered when coding and of student interaction with automated feedback will be the subject of a later report.

  • Section 1 explains our automated feedback approach.
  • Section 2 reports differences in student comfort levels with M250.
  • Section 3 explores how students used summative automated feedback.
  • Section 4 discusses demographic differences in achievement.

Appendix 5 provides an update on our feedback software.

Related Resources:

  • Project poster: Anton-Dil-Sharon-Dawes.pptx (125.65 KB)