Analytic Quality Glossary
Citation reference: Harvey, L., 2004-20, Analytic Quality Glossary, Quality Research International, http://www.qualityresearchinternational.com/glossary/
This is a dynamic glossary and the author would welcome any e-mail suggestions for additions or amendments. Page updated 31 October 2020, © Lee Harvey 2004–2020.
The UNESCO (2004) definition covers both rankings and league tables:
Ranking/league tables: Ranking and league tables are an established technique for displaying the comparative ranking of organizations in terms of their performance. They are meant to supply information to interested stakeholders, consumers, and policy-makers alike on measurable differences in the service quality of several similar providers. Even if somewhat controversial, especially concerning the methodological aspects, they are quite popular and seen as a useful instrument for public information, while also providing an additional incentive to quality improvement. Ranking/league tables are generally published in the popular press and magazines, specialist journals and/or on the Internet. The ranking process starts with the collection of data from existing data sources, site visits, studies, and institutional research. Following collection, the type and quantity of variables are selected from the information gathered. Then, the indicators are standardized and weighted from the selected variables. Finally, the calculations are conducted and comparisons are made so that institutions are sorted into “ranking order”. Ranking/league tables make use, in the process of evaluation of institutions or programmes, of a range of different indicators. The results of ranking/league tables (the “scores” of each assessed institution) may thus vary from one case to another, depending on the number of indicators used or on the indicators themselves. Ranking indicators or criteria usually take into consideration scientific, pedagogic, administrative, and socio-economic aspects: student/staff ratio, A-level points (held by first-year students), teaching and research (as marks received in teaching and research assessments by individual departments), library and computer spending, drop-out rate, satisfaction, study conditions, employment prospects, etc. (Vlăsceanu et al., 2004, pp. 52–53)
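The pipeline described above (collect indicators, standardize, weight, calculate, sort into "ranking order") can be sketched in a few lines. This is a minimal illustration only: the institutions, indicator values and weights below are invented, z-score standardization is one common choice among several, and direction-of-benefit adjustments (e.g. a lower student/staff ratio usually being better) are omitted for brevity.

```python
# Illustrative sketch of the generic league-table pipeline:
# standardize each indicator, apply weights, sum, then sort
# institutions into "ranking order". All data here is invented.
from statistics import mean, stdev

raw = {  # institution -> raw indicator values (hypothetical)
    "Alpha": {"staff_ratio": 14.0, "spending": 820, "satisfaction": 3.9},
    "Beta":  {"staff_ratio": 18.5, "spending": 640, "satisfaction": 4.2},
    "Gamma": {"staff_ratio": 16.0, "spending": 710, "satisfaction": 3.5},
}
weights = {"staff_ratio": 0.3, "spending": 0.3, "satisfaction": 0.4}

def standardise(values):
    """z-score standardization so indicators on different scales are comparable."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Standardize each indicator column across institutions.
cols = {k: standardise([raw[i][k] for i in raw]) for k in weights}

# Weighted sum of standardized indicators per institution.
scores = {
    inst: sum(weights[k] * cols[k][n] for k in weights)
    for n, inst in enumerate(raw)
}

# Sort into "ranking order", highest score first.
ranking = sorted(scores, key=scores.get, reverse=True)
```

As the passage notes, changing the set of indicators or their weights can reorder the table: the score is an artefact of those choices, not a property of the institution alone.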
DAAD, 2004, comments on ranking in Germany:
Germany is catching up with the ranking mania seen in all walks of life in Britain and America. Soon Germans will be able to consult charts to choose the best universities in the same way they can choose the best restaurants or bars – by looking at a ranking list. But this has been a controversial move because many people have argued that university education across Germany cannot be standardised like other consumer goods.
In 2002, the most serious attempt to put a German university ranking system in place came to fruition. The Centre for University Development (CHE) compiled the study with the German weekly magazine "Stern". They looked at 242 nationally recognized universities and professional schools. More than 100,000 students and 10,000 professors took part in the process. Around 30 indicators were measured. Some variable data such as student numbers, the average study duration and the number of graduations were also considered. But judgements on the quality of teaching and specialist areas played a more decisive role than factors such as the atmosphere at the university or the library equipment.
The Alexander von Humboldt Foundation has introduced an additional research ranking, which measures how attractive German universities are to international scientists. Following the introduction of the Humboldt scholarship, successful international candidates can select which host universities are best suited to their needs. Therefore, the number of Humboldt scientists at an institution also allows people to draw conclusions on the research achievements and international prospects of the university.
Institute of Higher Education, Shanghai Jiao Tong University (2004) notes:
We rank universities by several indicators of academic or research performance, including alumni and staff winning Nobel Prizes and Fields Medals, highly cited researchers, articles published in Nature and Science, articles in Science Citation Index-expanded and Social Science Citation Index, and academic performance with respect to the size of an institution.
For each indicator, the highest scoring institution is assigned a score of 100, and other institutions are calculated as a percentage of the top score. The distribution of data for each indicator is examined for any significant distorting effect; standard statistical techniques are used to adjust the indicator if necessary.
Scores for each indicator are weighted … to arrive at a final overall score for an institution. The highest scoring institution is assigned a score of 100, and other institutions are calculated as a percentage of the top score. The scores are then placed in descending order. An institution's rank reflects the number of institutions that sit above it.
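The scoring scheme quoted above is concrete enough to sketch: each indicator's top scorer gets 100 and others a percentage of that top value, the indicators are combined with weights, the overall score is rescaled so the leader again scores 100, and an institution's rank is one more than the number of institutions above it. The institution names, indicator values and weights below are invented for illustration (the statistical adjustment for distorting distributions is omitted).

```python
# Minimal sketch of a top-score-100 normalization-and-weighting scheme.
# All names, values and weights are hypothetical.
indicators = {  # institution -> raw indicator values
    "U1": {"alumni": 40.0, "hici": 60.0, "pub": 90.0},
    "U2": {"alumni": 80.0, "hici": 30.0, "pub": 45.0},
    "U3": {"alumni": 20.0, "hici": 90.0, "pub": 60.0},
}
weights = {"alumni": 0.2, "hici": 0.4, "pub": 0.4}  # illustrative weights

def percent_of_top(column):
    """Top scorer gets 100; others a percentage of the top value."""
    top = max(column.values())
    return {inst: 100.0 * v / top for inst, v in column.items()}

scaled = {
    k: percent_of_top({inst: vals[k] for inst, vals in indicators.items()})
    for k in weights
}

# Weighted sum of scaled indicators, rescaled so the leader scores 100.
total = {inst: sum(weights[k] * scaled[k][inst] for k in weights)
         for inst in indicators}
top_total = max(total.values())
overall = {inst: 100.0 * t / top_total for inst, t in total.items()}

# Rank = one more than the number of institutions scoring above you.
rank = {inst: 1 + sum(o > overall[inst] for o in overall.values())
        for inst in overall}
```

Note how the percent-of-top rescaling makes every score relative: adding or removing one strong institution changes every other institution's score, even if their underlying data is unchanged.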
Harvey (2008), in the Editorial to Quality in Higher Education 14(3), provides a critical review of rankings.
An analysis of ranking systems by Usher and Savino (2006) makes, inter alia, the following key points:
It should come as no surprise to learn that different ranking systems use very different indicators in order to obtain a picture of “quality.” In some cases, these differences are clearly due to differing national standards or practices in the way data is collected or reported. In some cases, differences in indicators reflect genuine differences in the definition of “quality;” Shanghai Jiao Tong, for instance, uses research-related indicators far more than THES; the Washington Monthly has explicitly tried to generate indicators on “social responsibility” which do not exist in the US News and World Report; and so on. But the sheer number of individual indicators used in ranking systems worldwide runs well into the hundreds, making any kind of comparison grid too large to be useful. (Usher and Savino, 2006, p. 12)
Despite the vastly different choices of indicators and weightings evident throughout the world, certain patterns do appear when the studies are grouped together geographically. For instance, studies from China—which has four different ranking projects—place much more weight on research indicators than any other studies in the world. In the most extreme case—that of Shanghai Jiao Tong University’s Academic Ranking of World Universities—research performance is worth 90% of the total ranking. This is followed by Wuhan, where research measures are worth 48.2% of the final ranking, Netbig (45.2%), and Guangdong (42.1%). As we have seen, much of this weighting comes from counting papers and citations in bibliometric studies—studies which have a heavy bias towards the hard sciences. With the exception of Guangdong, which has a major focus on learning outputs (mostly graduation rates), Chinese systems also put significant emphasis on institutional reputation. In contrast, comparatively little weight is put on either resource inputs or on final outcomes. Whether this is because data on these issues is scarce or because Chinese experts genuinely consider indicators of these types to be unimportant is an open question. (Usher and Savino, 2006, p. 28)
Other regional patterns are also evident. Rankings of UK universities, for instance, completely eschew the use of reputation surveys as a means of determining quality (although THES places a 50% weighting on reputation issues). British league tables also put a much higher emphasis than league tables elsewhere on measures of staff and staff quality—on average, they put over 40% of their weighting in this area, as opposed to an average of just 5% in the rest of the world’s league tables combined. The two big North American surveys—Maclean’s rankings and America’s Best Colleges by the US News and World Report—are virtually identical in the distribution of weighting, except for the fact that the Canadian version puts more weight on resource inputs and the American version puts more weight on learning output (intriguingly, the general category weightings of Italy’s La Repubblica rankings are very similar in nature to those of Maclean’s and the US News and World Report, even though the specific indicators used are completely different). (Usher and Savino, 2006, p. 28)
...different ranking systems have very different definitions of quality. The notion of “quality” in higher education is clearly a very malleable one—some observers wish to look at outputs, while others focus on inputs. Among both inputs and outputs, there is very little agreement as to what kinds of inputs and outputs are important. Not only is no single indicator used across all ranking schemes, no single category of indicators is common either: remarkably, none of the seven basic categories of indicators are common to all university ranking systems. One of the only previous comparative examinations of league tables (Dill and Soo 2004) concluded, on the basis of an examination of four sets of league tables in four countries, that international definitions of quality were converging. Our findings, based on a larger sample, contradict their result. We acknowledge that part of the reason for the contradiction lies in the fact that we have divided indicators into seven categories instead of four and hence were always likely to find more variation. Methodological differences notwithstanding—and we believe our methodology to be the more refined of the two—the results still conflict. We believe that had Dill and Soo looked at Asian or international ranking schemes, they too would have seen these differences and revised their conclusions. (Usher and Savino, 2006, p. 29)
Deutscher Akademischer Austausch Dienst [German Academic Exchange Service] (DAAD), undated, Study and Research in Germany, Fachhochschulen / Universities of Applied Sciences, available at http://www.daad.de/deutschland/hochschulen/hochschultypen/00411.en.html , accessed 9 March 2011, not available 22 September 2012.
Harvey, L., 2008, 'Editorial: Rankings of higher education institutions: a critical review', Quality in Higher Education, 14(3), pp. 187–208, pre-corrected proof available as a pdf.
Institute of Higher Education, Shanghai Jiao Tong University, 2004, Academic Ranking of World Universities – 2004 http://ed.sjtu.edu.cn/ranking.htm, not at this address 24 January 2012
Vlăsceanu, L., Grünberg, L. and Pârlea, D., 2004, Quality Assurance and Accreditation: A Glossary of Basic Terms and Definitions (Bucharest, UNESCO-CEPES) Papers on Higher Education, ISBN 92-9069-178-6, available at http://www.aic.lv/bolona/Bologna/contrib/UNESCO/QA&A%20Glossary.pdf, accessed 20 September 2012, still available 29 December 2016.
Other sources (cited by Vlăsceanu et al., 2004, p. 53, mostly from Higher Education in Europe 27(4)):
Adab, P., Rouse, A., Mohammed, M. and Marshall, T., 2002, 'Performance league tables: the NHS deserves better', BMJ, 324(7329), pp. 95–98.
Clarke, M. 2002, ‘Some guidelines for academic quality rankings’, Higher Education in Europe 27(4), pp. 443–59.
Eccles, C., 2002, 'The use of university rankings in the United Kingdom', Higher Education in Europe 27(4), pp. 423–32.
Federkeil, G., 2002, 'Some aspects of ranking methodology — the CHE-ranking of German universities', Higher Education in Europe 27(4), pp. 389–97.
Filinov, N.B. and Ruchkina, S., 2002, 'The ranking of higher education institutions in Russia: some methodological problems', Higher Education in Europe 27(4), pp. 407–21.
Jobbins, D., 2002, 'The Times/The Times Higher Education Supplement — league tables in Britain: an insider's view', Higher Education in Europe 27(4), pp. 383–88.
Merisotis, J.P., 2002, 'On the ranking of higher education institutions', Higher Education in Europe 27(4), pp. 361–63.
Siwiński, W., 2002, 'Perspektywy — ten years of rankings', Higher Education in Europe 27(4), pp. 399–406.
Teixeira, I.C., Teixeira, J.P., Pile, M. and Durão, D., undated, Classification and Ranking of Higher Engineering Education Programmes and Institutions: The IST View, available at http://gep.ist.utl.pt/arquivos/Comunicacoes/Classification%20and%20Ranking%20of%20Higher%20Education.PDF.
Usher, A. and Savino, M., 2006, A World of Difference: A Global Survey of University League Tables. Toronto, ON: Educational Policy Institute.
Vaughn, J., 2002, 'Accreditation, commercial rankings, and new approaches to assessing the quality of university research and education programmes in the United States', Higher Education in Europe 27(4), pp. 433–41.
Yonezawa, A., Nakatsui, I. and Kobayashi, T., 2002, 'University rankings in Japan', Higher Education in Europe 27(4), pp. 373–82.