Overall score

Academic experience

University name | Rank by attribute | Overall ranking |
Harper Adams University | 1 | 2 |
University of Oxford | 2 | 6 |
University of St Andrews | 3 | =10 |
University of Cambridge | 4 | =18 |
University of Chichester | 5 | =10 |

University name | Rank by attribute | Overall ranking |
University of Surrey | 1 | =7 |
University of Bath | 2 | 4 |
University of Chichester | =3 | =10 |
Lancaster University | =3 | 15 |
University of St Andrews | 5 | =10 |

University name | Rank by attribute | Overall ranking |
Harper Adams University | 1 | 2 |
Loughborough University | 2 | 1 |
University of Chichester | 3 | =10 |
University of St Andrews | 4 | =10 |
University of Leeds | 5 | 3 |
Accommodation

University name | Rank by attribute | Overall ranking |
Lancaster University | 1 | 15 |
Loughborough University | 2 | 1 |
Harper Adams University | 3 | 2 |
Edge Hill University | 4 | 17 |
University of Sheffield | 5 | 5 |

University name | Rank by attribute | Overall ranking |
Loughborough University | 1 | 1 |
University of Leeds | 2 | 3 |
University of Sheffield | 3 | 5 |
Newcastle University | 4 | =7 |
University of Bath | 5 | 4 |
Industry connections

University name | Rank by attribute | Overall ranking |
Harper Adams University | 1 | 2 |
Loughborough University | 2 | 1 |
University of Bath | 3 | 4 |
Royal Veterinary College | 4 | =12 |
London School of Economics and Political Science | 5 | 104 |

University name | Rank by attribute | Overall ranking |
Loughborough University | 1 | 1 |
University of Leeds | 2 | 3 |
University of Bath | 3 | 4 |
University of Dundee | 4 | 14 |
Harper Adams University | 5 | 2 |
Figuring it out: the methodology behind the results
This year, the annual results are based on the responses of 20,251 undergraduates, who were asked to describe how their university contributed to a positive or negative experience on a seven-point scale, from "strongly agree" to "strongly disagree". Questions have remained unchanged since 2005 to allow comparisons across years.
Data are collected from October to June from full-time, UK-based undergraduates. There is a one-year time lag between collection and publication.
The survey is linked to YouthSight's student omnibus surveys, and respondents can participate once a year. Only full-time undergraduate members of YouthSight's 140,000-strong opinion panel community can take part.
For each measure, a mean "agreement score" is created per institution. Each score for the 21 formulated measures is then weighted according to how strongly it correlates with the 22nd measure, "recommendation".
For each institution, the sum of the weighted mean scores is divided by the sum of the weights to give a weighted mean. This is then indexed to provide an overall score out of 100.
A selection of composite scores has been created to allow institutions to see how they are performing in different areas of the student experience.
As the number of responses per institution broadly reflects institution size, it is not always possible to achieve a large enough sample to provide statistically robust data at all institutions. The compromise is to set a minimum threshold of 50 responses per institution before inclusion. In total, 116 institutions were included, with an average sample size of 175.
As with any survey, there is an "error bar" linked to each institution's sample size and variance. On average, for the SES, a difference of 3.7 points in the overall score is required to produce a significant difference between institutions. So, at the top and bottom of the rankings, institutions have to move only a few places to see a significant difference, but in the middle, where scores are bunched together, rankings need to shift by 30-40 places before the difference becomes significant.
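One standard way to test whether two institutions' scores differ significantly, given each sample's size and variance, is a two-sample z-test on the difference in means. The survey's own error-bar calculation may differ, and the scores, variances and sample sizes below are hypothetical.

```python
import math

# Generic two-sample significance check on mean scores; not necessarily
# the SES's exact method. All figures below are made up for illustration.

def significant_difference(score_a, var_a, n_a, score_b, var_b, n_b, z=1.96):
    """True if the gap between two mean scores exceeds 1.96 standard
    errors of the difference (roughly the 95% confidence level)."""
    se = math.sqrt(var_a / n_a + var_b / n_b)
    return abs(score_a - score_b) > z * se

# Two mid-table institutions with similar scores and typical sample sizes:
print(significant_difference(76.2, 85.0, 175, 74.8, 90.0, 175))  # → False
```

With samples of around 175 the standard error is large relative to the tightly bunched mid-table scores, which is why small rank movements there are not meaningful.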
For the first time this year, three-year rolling average data have been published, which aggregate results for a period of three consecutive years, starting with the years 2009 to 2011.
This allows for a very large sample, which reduces the impact of sampling errors in any one year. Each successive period adds one year and drops one - for example, the period 2009-11 is followed by 2010-12, and so on. In this way, we are able to show long-term trends at the level of individual higher education institutions and the overall sector.
We have supplied a rolling three-year average for the overall scores, for the composite scores and for each individual measure. Only institutions that achieve at least 150 interviews across a three-year rolling period have been included in the calculations.
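The rolling-aggregation rule can be sketched as follows. The yearly interview counts and scores are made up, and pooling the windows by an interview-weighted mean is an assumption about how the aggregation works.

```python
# Sketch of the three-year rolling average with the 150-interview
# threshold described above. Data and the pooling method are illustrative.

def rolling_three_year(yearly):
    """yearly: list of (year, n_interviews, mean_score), sorted by year.
    Returns {(start_year, end_year): pooled mean} for every consecutive
    three-year window totalling at least 150 interviews."""
    out = {}
    for i in range(len(yearly) - 2):
        window = yearly[i:i + 3]
        n_total = sum(n for _, n, _ in window)
        if n_total >= 150:  # minimum sample required for inclusion
            pooled = sum(n * s for _, n, s in window) / n_total
            out[(window[0][0], window[-1][0])] = round(pooled, 1)
    return out

data = [(2009, 60, 74.0), (2010, 55, 75.2), (2011, 70, 76.1),
        (2012, 40, 77.0)]
print(rolling_three_year(data))  # windows 2009-11 and 2010-12 both qualify
```

Because each window shares two years with the next, year-on-year noise is smoothed while genuine drift in an institution's score still shows through.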
Ashleigh Gillan
Research manager
YouthSight