This article has been written by a third-party author, who is independent of The Academic. It does not necessarily reflect the opinions of the editors or management of The Academic and solely reflects the opinions of its author.
The pandemic and the social measures implemented in response have caused a permanent change in our society. Many academics across the globe have been forced to adapt to unfamiliar online teaching, often implemented at short notice and without access to specialised equipment. The speed and dramatic nature of this shift will arguably be difficult for future generations to comprehend fully.
The pandemic placed unprecedented pressure upon staff and students alike. Yet performance management of academics, including Student Evaluation of Teaching (SET), persists. These pressures can be extreme, and the processes involved are sometimes likened to quasi-legal proceedings. Dealing with such pressures can be overwhelming, especially when one has done nothing wrong other than fulfil one's teaching duties to the best of one's ability.
For many faculty, SET is a grimly inevitable part of university life. Whilst SET is potentially informative about teaching, problems arise when it is used to review faculty performance. Some prefer the term “student perception data”, arguing that students are ill-equipped to judge teaching quality. SETs may contribute to grade inflation, display racial and gender biases, and discriminate against quantitative subjects. Low response rates and respondent anonymity may also encourage extreme responses, which could have serious and unintended consequences for individual faculty in what can resemble quasi-legal environments.
The pandemic has raised concerns over low student satisfaction levels. However, key sources of student dissatisfaction may lie outside instructors’ control – especially during a global pandemic. Common issues include library access, IT infrastructure and the effects of social restrictions. This adds to long-standing concerns about confounding factors associated with SET.
Analysing numerical teaching data
The above reflects a long-standing need to analyse numerical teaching data, a need highlighted by the pandemic. Numerical teaching data is not always analysed very accurately or, indeed, very well! Every academic will have their own stories to tell. My favourite is a departmental meeting where two faculty members, both holding mathematics PhDs, told a room full of mathematicians that the average mark for one of my coursework assignments was 0.8% too high. The difference was easily attributable to random sampling variation – something potentially explainable in terms of high-school mathematics. The serious point, however, is that numerical teaching data matters, and people clever enough to know better can make a hash of analysing it. The whole process is not always conducted nicely, and the stakes can be higher than readers might realise.
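The sampling-variation point in the anecdote above can be sketched in a few lines. The class size and the standard deviation of marks below are invented for illustration (the article gives neither); only the 0.8% deviation comes from the story itself.

```python
import math

# Hypothetical figures: the class size and mark standard deviation
# are assumptions for illustration, not taken from the article.
n = 50               # assumed class size
sd = 15.0            # assumed standard deviation of marks (out of 100)
observed_diff = 0.8  # deviation of the class average from its target, in marks

# Standard error of the mean: the typical random wobble expected in a
# class average even when nothing is amiss with the teaching or marking.
se = sd / math.sqrt(n)

# How many standard errors away is the observed deviation?
z = observed_diff / se
print(f"standard error of the mean ≈ {se:.2f} marks")
print(f"observed deviation ≈ {z:.2f} standard errors")
```

Under these assumptions the standard error is roughly 2.1 marks, so a 0.8-mark deviation sits well inside one standard error – exactly the kind of wobble random sampling produces on its own.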
Analysing available data and setting reasonable targets
The Chartered Association of Business Schools collects National Student Survey data for the UK. One key component of this data is the proportion of students reporting satisfaction with their course. The effect of the pandemic can be measured by comparing this data for institutions that submitted to both the 2019 and 2021 exercises (either side of the outbreak of the pandemic). Statistical analyses give significant evidence of a difference, with the pandemic leading to an inevitable drop of around 10% in student satisfaction.
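A paired comparison of the kind described above can be sketched as follows. The satisfaction percentages are invented for illustration – the real CABS/NSS figures are not reproduced here – and the test statistic is a standard paired t statistic computed with the standard library.

```python
import statistics as st

# Illustrative, invented satisfaction percentages for ten institutions
# that submitted in both 2019 and 2021 (not the actual NSS data).
sat_2019 = [84, 81, 88, 79, 86, 82, 90, 77, 85, 83]
sat_2021 = [74, 70, 79, 68, 77, 73, 80, 66, 76, 72]

# Paired differences: one change score per institution.
diffs = [a - b for a, b in zip(sat_2019, sat_2021)]
mean_drop = st.mean(diffs)

# Paired t statistic: mean difference divided by its standard error.
n = len(diffs)
t_stat = mean_drop / (st.stdev(diffs) / n ** 0.5)

print(f"mean drop ≈ {mean_drop:.1f} percentage points, t ≈ {t_stat:.1f}")
```

Pairing each institution with itself removes between-institution differences from the comparison, which is why this design can detect a common pandemic-era drop so sharply.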
What proportion of satisfied students is a reasonable target to aim for? A reasonable target in non-pandemic times might be 80%. This would be consistent with, though slightly higher than, some of the informal internal teaching targets used in some of the Universities I have previously worked in. The 80% satisfaction target is also marginally below average pre-pandemic satisfaction levels. Applying the inevitable 10% relative reduction caused by the pandemic to this target (0.9 × 80%) therefore suggests a reasonable threshold value of 72% satisfaction. An analysis using Bayesian statistics suggests that around 85% of UK universities broadly achieve this figure. This points to very high teaching performance in challenging circumstances and reflects the unprecedented efforts devoted to pandemic-era teaching.
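The threshold arithmetic, and a simple Bayesian estimate of the kind alluded to above, can be sketched as follows. The institution counts are invented for illustration – they are not the actual figures behind the article's roughly 85% estimate – and the posterior uses a textbook Beta-Binomial model with a uniform prior, which may differ from the analysis in the underlying paper.

```python
# Threshold arithmetic: a 10% relative reduction applied to an 80% target.
pre_pandemic_target = 80.0   # % satisfied, assumed non-pandemic target
pandemic_reduction = 0.10    # the ~10% relative drop in satisfaction
threshold = pre_pandemic_target * (1 - pandemic_reduction)
print(f"adjusted threshold ≈ {threshold:.0f}%")

# Beta-Binomial sketch: with a uniform Beta(1, 1) prior, observing k of n
# institutions at or above the threshold gives a Beta(k+1, n-k+1) posterior
# for the underlying proportion of institutions clearing it.
n, k = 100, 86               # assumed counts, for illustration only
posterior_mean = (k + 1) / (n + 2)
print(f"posterior mean proportion ≈ {posterior_mean:.2f}")
```

With these invented counts the posterior mean lands near 0.85, matching the flavour (though not the method or data) of the figure quoted in the text.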
Postscript: student evaluation of teaching in a post-COVID world
It seems inevitable that the current high-stakes nature of Student Evaluation of Teaching will continue for some time. As long as such exercises are genuinely informative about teaching, and faculty are not subjected to undue pressure, there may be nothing inherently wrong with them.
Being optimistic, perhaps the pandemic can give stakeholders renewed confidence in the ability of University faculty to teach and to teach well. Available numerical data suggest very high levels of performance from University teachers during the pandemic. Academics worldwide have also shown themselves to be very dedicated and have, at times, worked heroically to support learning during the pandemic. With enhanced stakeholder confidence, perhaps Student Evaluation of Teaching could be made more collaborative and less adversarial. After all, academics are people too!
We must also be realistic about what can be expected from Student Evaluation of Teaching following the pandemic. The pandemic has resulted in an inevitable 10% reduction in student satisfaction levels, and it may take time for these scores to return to pre-pandemic norms. Staff and students may also need time to recover from a challenging period, especially given the intensity of some pandemic-era teaching. Extra kindness and understanding will ultimately be needed from all concerned.
In conclusion, simply be kind. The pandemic has placed unprecedented pressure on staff and students alike. Before passing judgement, just remember there may be a more than 85% probability that your professor has been doing all they reasonably can in challenging (and likely unfamiliar) circumstances whilst teaching during the pandemic.
Fry, J. (2023). Revisiting student evaluation of teaching during the pandemic. Applied Economics Letters, 1-5. https://doi.org/10.1080/13504851.2023.2178623