The search for robust and accessible means of comparing the quality of different courses and institutions is becoming higher education's equivalent of the search for the Holy Grail ("Pressure grows to replace league tables", August). Institutional profiles and spidergrams are only the latest ideas for finding the elusive chalice, but it isn't likely to be found soon: the same diversity that makes such comparisons desirable also renders them impossible.
There are three main problems. First, the number of variables that need to be taken into account if measures are to be valid and reliable (not least what students themselves bring to their education). Second, the difficulty of making any measures easily accessible to the "two-clicks" generation. Third, the low probability that users will be any more "rational" in interpreting and acting on such measures than other consumers. Yet if students don't choose in a rational manner, how can institutions respond appropriately by modifying their "offer"?
The resources going into devising quality indicators would be better employed in a debate about the proper meaning of, and limits to, diversity in a mass system. In the meantime, universities (and especially vice-chancellors) should have nothing to do with league tables.
Roger Brown, Professor of higher education policy, Liverpool Hope University.