
Dissatisfied with poll 1

October 7, 2005

A number of methodological issues are cause for concern in the national student satisfaction survey ("Student poll puts staff under pressure", September 23).

First, the subject categories bear little relationship to degree programmes and thus to the undergraduate experience - for example, my discipline of civil engineering is grouped with chemical engineering and other engineering subjects. At my institution these two subjects are taught largely separately, and so the student experience differs. The methodology therefore averages across fundamentally different student samples.

Second, when the results for my subject are analysed, it can be shown that for each of the questions asked the average responses are very similar, with little spread between institutions. Any "differences" are almost certainly not statistically significant.
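A rough calculation illustrates the point (the sample size and spread used here are illustrative assumptions, not figures from the survey): with about 30 responses per institution on a five-point scale and a standard deviation of roughly 1, the standard error of each reported mean is

\[ \mathrm{SE} = \frac{\sigma}{\sqrt{n}} \approx \frac{1}{\sqrt{30}} \approx 0.18, \]

so a 95 per cent confidence interval spans roughly ±0.36 around each score - comfortably wider than the small gaps between institutional averages described above.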

Finally, the number of responses required for the results to be included in the analysis is at least 30, or more than half the students surveyed. A sample this small means the results can be adversely skewed by a small number of dissatisfied students - who may, for instance, have just had a rigorously marked piece of coursework returned that was not to their liking.
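To put illustrative numbers on that (again assumed purely for the sake of argument): in a sample of 30, a single respondent who awards a score of 1 where the rest average 4 shifts the subject mean by a tenth of a point,

\[ \frac{29 \times 4 + 1}{30} = \frac{117}{30} = 3.9, \]

a shift of the same order as the small differences between institutions noted above.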

While some of these problems may be less significant for institution-wide assessments, where the sample sizes are larger, the subject-level scores should be presented with at least some indication of their reliability in statistical terms.

The results at subject level are neither reliable nor statistically significant, and certainly not robust enough to be used for the formulation of league tables. I can think of many ways in which Higher Education Funding Council for England money could be much better spent.

Chris Baker, Birmingham University
