
Student evaluations of teaching 'methodologically flawed'

Putting aside questions of sexism, racism and homophobia, Australian literature review finds that SETs are just poor science
April 8, 2021

Critics say student evaluations of teaching (SETs) are skewed by innate biases against minority groups, and their results should never be used for professional assessment purposes. But a new analysis has found that SETs are so susceptible to factors unrelated to teachers and courses that their results should be disregarded anyway.

A La Trobe University review of 183 SET-related studies has found that issues that have nothing to do with teachers' identity – such as class size, website quality, university cleanliness and even food options in the canteen – also skew the results. Student characteristics such as gender, age and disciplinary area influence the evaluations as well.

"That student demographics alone impact on SET results demonstrates just how flawed the system is," says the paper, published in the journal Assessment and Evaluation in Higher Education. "The existing literature makes it clear that SET results are strongly influenced by external factors unrelated to course content or teacher performance. This analysis raises the question of how any university [can] justify the continued use of SETs."

Author Troy Heffernan said researchers had spent decades exploring how SETs disadvantaged academics on the grounds of gender, racial background, disability and sexual orientation, with women and academics from minority groups routinely given less favourable evaluations than white, able-bodied males.


But the focus had now turned to even more basic methodological shortcomings, with evaluations influenced not only by the teachers' irrelevant characteristics but also by background traits of the students.

An estimated 16,000 higher education institutions around the world regularly conduct SETs, the review found. Dr Heffernan said their administrators might not appreciate the fundamental weaknesses of data that appeared "sound".


"On the surface, it seems like a great system. You have a class of 100. You ask them if they like the class or course. Over 100 students, you would think you're getting some form of objective answer."

Cost considerations also contribute to the continued use of SETs, he said. "The fact is, universities want this data – they want to understand how [to] improve classes – and student evaluations [are] a very quick, cheap way to get instant data."

Dr Heffernan said none of the reviewed studies had reported favourable findings about SETs, although they had differed on "how damaging" evaluations were. SETs appeared less slanted against minority academics in the humanities than in science-based subjects, for example.

Some academics say they value feedback from SETs, both positive and negative. Dr Heffernan said some institutions conducted evaluations without using the results for career progression purposes. "The main problem is when a majority of universities use this information for hiring, firing and promotion."


He said qualitative feedback sourced through student support teams would deliver more useful information than quantitative data from students. "Back and forth" dialogue about what "worked" in classes, and what students liked, would be better than "grading someone one to five".

"But that takes time and money," he noted. "In a post-Covid austerity-measure world, most universities probably aren't prepared to do that right now."

john.ross@timeshighereducation.com

Reader's comments (2)
The conclusions from this study are not new – MANY published studies (both reporting original data and systematic quantitative reviews) have reached the same conclusions as the paper reported in this article. The finding of poor validity of SETs has remained the same for decades, yet instructors/lecturers/professors in 'education' and policy makers still claim that SETs are valid. It makes you wonder about the hidden agendas, or the lack of evidence-based policymaking, these people engage in.
deheuty, I fear the reason is all too clear. The SETs give a single number against which managers with no particular expertise can make a judgement. It's the same with research metrics. The consequences of a poor decision made on the basis of numbers that are of limited value often do not fall on those making the decisions.