Force feeding? What makes good written feedback and how much is too much?

It’s universally accepted that it is easier to keep an existing customer than to attract a new one, which is why ‘retention’ is the current HE zeitgeist. ‘One little thing’ initiatives and broader actions are being implemented to keep our audience captive and engaged. The Early Alerts Indicator Dashboard (EAID), which helps tutors to identify absent students, is another much-vaunted tool for keeping all passengers aboard for the duration of the voyage through the choppy seas of deadlines, with life rings deployed to those struggling to stay afloat.

For its part, feedback and its younger sibling, feed forward, don’t seem to be widely discussed features of the retention landscape, but perhaps they should be. In a grade-driven environment, we often wonder whether students are even looking at the comments, but it is perhaps in the packaging and delivery of our feedback that we may be able to make that marginal difference.

According to the HEA Feedback Toolkit (2013), “…year on year, the lowest scores received within the National Student Survey are for the area of feedback.” Fast forward to NSS 2023: although the score for how good staff are at explaining things sits at a lofty 90.6%, the response to the question about how often feedback helps you to improve your work languishes at 72.2% (NSS 2023). So, if tutors are so adept at explaining things, why the perceived mismatch when it comes to students translating feedback into improvements?

Perhaps a good place to start, before we look at the actual delivery of feedback, is with the conditions students need in order to benefit from it. Buckley et al. (2021) reference the large body of research exploring the effects of helping students gain a clear understanding of assignment expectations, and highlight the work of Sadler (1989). Nicol and Macfarlane-Dick (2006) also refer to Sadler’s work in this area, and in particular his three conditions that students require in order to get the most out of their tutor’s feedback. Sadler posited that the learner must be aware of:

  1. What good performance is;
  2. How current performance relates to good performance;
  3. How to act to close the gap between ‘current’ and ‘good’ performance.

Essentially, before we start playing the game, are the ground rules well established, and are we providing the outline that will allow coaching to improve performance? If we scaffold students’ study along ‘constructive alignment’ (Biggs, 1999) lines, then we have a framework for them to benchmark against and to which tutors can anchor their feedback.

At the Open University we pride ourselves (rightly so) on the level of feedback we give our students. As a provider of correspondence tuition, the importance of ‘tutoring through marking’, and by extension feedback, is elevated. Appreciating the cohort’s needs and recognising our students’ backgrounds makes us especially conscious of how to encourage student development. At this juncture, I should say that I am most definitely not going to suggest that there is only one way to provide written feedback to students. As a Student Experience Manager (SEM) academic responsible for facilitating the delivery of modules and for managing tutors, I have signed off countless peer-assessed monitoring forms. I have seen many variations on a theme that all provide evidence of good practice, but there are some universals which, particularly for tutors new to the Open University, give a solid starting point for development.

Of all the written feedback mechanisms we use at the Open University, the fabled ‘PT3’ form is fertile ground for deeper exploration. This form accompanies a student’s marked script and gives tutors an opportunity to develop further the key themes for feedback and feed forward that they have identified. As well as some amazing work, I have seen examples of extreme paucity on some forms juxtaposed with dense jungles of text on others. Sometimes this indicates a lack of confidence on the tutor’s part about what to do with the form. It does, however, demonstrate the gulf in approaches, and perhaps the uncertainty over whether students actually read it, let alone digest it, act on it and improve. So where is the happy medium, and what are the essentials of effective practice?

If we assume that feedback on the task (including challenge and praise where appropriate) sits in the margin comments of the script, then the PT3 is surely the place for drawing those themes together into a list of ‘strengths’ and ‘areas for development’ for the student. These are pretty much a must, and act as both a sense check for the student and a direction of travel. Next, learning outcomes: to include or not to include, that is the question. I would never suggest ‘shoehorning’ them in for the sake of it, but they can be helpful if they allow students to see whether they have aligned accurately with our assessment guidelines. Crucially, students should be able to identify the extent to which the outcomes have been met, and if not, why not. Then there is feed forward, the aforementioned younger sibling of feedback: highlighting upcoming deadlines, tutorials and available resources, and offering further support, is key to students understanding what comes next and where, or to whom, they can turn to address the points raised. There is a place for large-scale generic feedback to the tutor group, but perhaps this could be saved for a forum post rather than flooding the PT3 form and obscuring the pertinently personal points.

Finally, a focus on the self-efficacy of the student. None of this matters if the feedback is not packaged correctly. Tone, style, approachability and personalisation are essential considerations: comments should be framed as encouraging and developmental, and should avoid judgement. A student’s confidence in their ability to enact any improvements is key, and this is neatly summed up by Bouffard-Bouchard (1990), who stated:

“…despite the existence of requisite skills in an individual’s repertoire, perceived self-efficacy operates partially independently of these skills.”

By way of illustration: in my very first week at the Open University, back in early 2021, I dealt with a complaint from a student whose tutor had begun their PT3 feedback with the comment, ‘the script was very poor’. When I telephoned the student, I was met with floods of tears and a person on the verge of giving up their whole degree on the back of that one comment, which had absolutely floored them. It brought home to me starkly the importance of recognising our students’ starting points for coping with feedback, and the need to tailor it accordingly. This is not just the Open University ethos of widening participation in action; it is a vital component of using feedback to develop our learners’ self-efficacy.

We rely on tutors’ knowledge of their students’ individual needs, but beyond that we must acknowledge that how we package our feedback, its tone, and the way we deliver it to our students is key not only to their self-development, but also to their retention on the module and their overall attainment.


References

  • AdvanceHE (2013) HEA Feedback Toolkit. Available at: https://www.advance-he.ac.uk/knowledgehub/hea-feedback-toolkit (Accessed: 04/01/23).
  • Biggs, J. (1999) Teaching for Quality Learning at University: What the Student Does. Buckingham: The Society for Research into Higher Education and Open University Press.
  • Bouffard-Bouchard, T. (1990) ‘Influence of self-efficacy on performance in a cognitive task’, The Journal of Social Psychology, 130(3), pp. 353–363. Available at: https://doi.org/10.1080/00224545.1990.9924591.
  • Buckley, A., Brown, D., Potapova-Crighton, O. and Yusuf, A. (2021) ‘Sticking plaster or long-term option? Take-home exams at Heriot-Watt University’, in P. Baughan (ed.) Assessment and Feedback in a Post-Pandemic Era: A Time for Learning and Inclusion. AdvanceHE, pp. 127–137.
  • Nicol, D.J. and Macfarlane-Dick, D. (2006) ‘Formative assessment and self-regulated learning: a model and seven principles of good feedback practice’, Studies in Higher Education, 31(2), pp. 199–218. Available at: https://doi.org/10.1080/03075070600572090.
  • Office for Students (2023) National Student Survey data: provider-level. Available at: https://www.officeforstudents.org.uk/data-and-analysis/national-student-survey-data/provider-leveldashboard/ (Accessed: 04/01/24).
  • Sadler, D.R. (1989) ‘Formative assessment and the design of instructional systems’, Instructional Science, 18(2), pp. 119–144. Available at: https://doi.org/10.1007/BF00117714.
Daniel Russell

Daniel Russell biography

Daniel is a lecturer and Student Experience Manager at the Open University. He completed his PGDipEd (Advanced PGCE) and MA in Teaching in Lifelong Learning at the University of Huddersfield. He was previously an Academic Practice Tutor at Coventry University supporting widening participation students. Prior to that, he was a lecturer in ESOL and taught English for academic purposes and English skills for university study at the University of Huddersfield. He is a Senior Fellow of AdvanceHE.