Social Research Glossary
Citation reference: Harvey, L., 2012-18, Social Research Glossary, Quality Research International, http://www.qualityresearchinternational.com/socialresearch/
This is a dynamic glossary and the author would welcome any e-mail suggestions for additions or amendments. Page updated 24 January 2018, © Lee Harvey 2012–2018.
Interchangeability of indicators
Interchangeability of indicators refers to the assertion, made by some quantitative practitioners who use multivariate analysis, that given the large number of items that could serve as indicators of a dimension of an operationalised concept, any one indicator is as good as any other; that is, they are interchangeable.
This does not mean that any indicator will do. The selection needs to be from a pool (potentially very large) of items that address the dimension of the operationalised concept in question. These items must conform to basic standards of item construction (much the same as general question-design criteria); for example, they must be unambiguous.
Furthermore, any one item from the pool will not necessarily classify a given individual respondent in the same way as any other item would. This does not matter, however, provided the indicators all classify the subject group in approximately the same way.
More important still, the whole point of multivariate analysis is to show relationships between different concepts. These concepts are operationalised via indicators. If the relationship between an independent and dependent variable remains the same when different indicators are used for the dependent variable then the indicators used for the dependent variable are said to be interchangeable.
The argument runs as follows. An indicator X1 of concept C will be as good as any other theoretically sound indicator X2 of concept C for the purposes of multivariate analysis because, empirically, the correlation of X1 with a dependent variable, Y, will be more or less the same as that of X2 with Y.
While individual people will be categorised differently in respect of concept C by each of the two indicators, the group as a whole will exhibit the same overall pattern for X1 as for X2. More important, the correlation of X1 with Y for the group will be more or less the same as that of X2 with Y.
Given sampling variation, this means that it does not really matter which of a potentially large (or even infinite) number of possible theoretically sound indicators one chooses. In short, the argument obviates the need to worry about the subjective process of indicator selection and thus appears, to some extent, to circumvent the problem of validity. However, the circumvention is illusory.
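The empirical claim behind the argument can be illustrated with a small simulation. This is an illustrative sketch, not from the source: two hypothetical noisy indicators x1 and x2 of the same latent concept C are each correlated with a dependent variable Y. The variable names, sample size, and noise levels are all invented for the example.

```python
import random
import statistics

random.seed(42)
n = 2000

# Latent concept C and a dependent variable Y related to it.
c = [random.gauss(0, 1) for _ in range(n)]
y = [ci * 0.6 + random.gauss(0, 0.8) for ci in c]

# Two theoretically sound but noisy indicators of C.
x1 = [ci + random.gauss(0, 0.5) for ci in c]
x2 = [ci + random.gauss(0, 0.5) for ci in c]

def pearson(a, b):
    """Pearson product-moment correlation of two equal-length lists."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    var_a = sum((ai - ma) ** 2 for ai in a)
    var_b = sum((bi - mb) ** 2 for bi in b)
    return cov / (var_a * var_b) ** 0.5

r1 = pearson(x1, y)
r2 = pearson(x2, y)

# Share of individuals the two indicators classify differently
# (above versus below the midpoint of the latent dimension).
disagree = sum((a > 0) != (b > 0) for a, b in zip(x1, x2)) / n

print(round(r1, 2), round(r2, 2), round(disagree, 2))
```

On this simulated data the two group-level correlations with Y come out close to each other, even though the two indicators classify a noticeable fraction of individuals differently; this is exactly the pattern the interchangeability argument appeals to. Of course, as the surrounding text points out, such a demonstration is empirical only and supplies no theoretical warrant.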
Researchers who adhere to the principle of the interchangeability of indicators tend to justify their position through 'common-sense' exhortations and empirical examples which demonstrate the interchangeability principle. The idea of the interchangeability of indicators is affirmed by empirical demonstration and is not underpinned by any theoretical explanation.
Indeed, the argument for the interchangeability of indicators is tenuous. It has no solid theoretical base of its own, resting merely on empirical observations that apply in some cases. It can also be seen as tautological, in as much as any indicator that is at variance with the others can be disregarded as theoretically unsound, or as unreliable and therefore invalid. It is a rather contrived way of legitimating the subjective element in what purports to be an objective process.
Commenting on the proposed national student survey, Harvey (2003) explained how the interchangeability of indicators was being misused:
I am all for institutions making their internal feedback available to prospective students. The proposed approach, though, is laughable in its pointlessness. The pilot, for example, assembled nine statements on teaching with which respondents might agree or disagree on a five-point scale. These are averaged and a teaching score generated ranging from one to five—a low score being more positive than a high one. There were five other scales and an overall rating. It is proposed that the post-pilot version will have fewer items per scale. What do the average scores show? What does 1.5 for teaching mean? Well, it means students quite strongly agree that teaching is... is what? Well, better than if it had scored 3.4, but maybe not quite as good as if it had scored 1.3. But what is it about teaching that this score represents? The whole scheme is based on the "interchangeability of indicators" thesis developed, pragmatically not theoretically, by sociologist Paul Lazarsfeld and colleagues in the early 1960s. It assumes that there is a concept called teaching and that any set of an unspecified subgroup of similar indicators is as good as any other for measuring the concept. Various statistical manipulations, such as factor analysis, "prove" this. But the whole process is based on an invalid presupposition - that the concept "teaching" is unidimensional. If it isn't - and it isn't - the average is meaningless. The point is that no prospective student is going to make a decision on what course to take based on whether a teaching score is 1.5 or 1.8.
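The averaging critique in the quotation above can be shown with a minimal sketch. The two response patterns below are hypothetical, not taken from the pilot survey: each is a set of nine answers on the five-point scale (1 = strongly agree, 5 = strongly disagree, low scores positive). Two very different patterns of responses yield an identical mean "teaching score", which is why the average is uninformative if the concept is not unidimensional.

```python
from statistics import fmean

# Nine five-point items; 1 is most positive, 5 is most negative.
# Student A agrees strongly with five items but disagrees strongly
# with four; Student B gives uniformly middling answers.
student_a = [1, 1, 1, 1, 1, 5, 5, 5, 5]
student_b = [3, 3, 3, 3, 3, 3, 3, 2, 2]

score_a = fmean(student_a)
score_b = fmean(student_b)

# Both produce the same "teaching score" despite opposite profiles.
print(round(score_a, 2), round(score_b, 2))
```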
Similarly, Bukodi et al. (undated) state:
The relationship between children’s social origins and their levels of educational attainment has been the subject of extensive research. However, views still diverge on the central issue of whether inequalities in educational attainment associated with social origins show any sustained historical decline. A major problem is that of determining how far differing findings are real or artefactual: i.e. how far they reflect actually existing differences across periods and places and how far simply differences in research procedures. Some progress has been made in standardising the conceptualisation and measurement of educational attainment. But a potentially far more serious – and far less considered – problem remains with social origins. The assumption seems often to have been made, if only implicitly, of the ‘interchangeability of indicators’: i.e. it has been assumed that in whatever way social origins might be conceptualised and measured, much the same results would be obtained as regards associated differences in children’s levels of educational attainment.
Our project starts out from a questioning of this assumption. Of late, a tendency has been apparent among sociologists to re-emphasise the multidimensional nature of the structuring of social inequality – i.e. of social stratification. In particular, there has been a move away from synthetic, one-dimensional notions of ‘socioeconomic status’ and a return to the Weberian recognition of social class and status as two qualitatively different forms of social stratification that pattern various social outcomes in distinctive ways. Following on in this line of research, we aim to re-examine the question of inequalities in educational attainment by treating social origins in terms of the separate components of parental class, parental status and also parental education.
Bukodi, E., et al., undated, 'Social inequalities in educational attainment', available at http://www.oisp.ox.ac.uk/res/education-and-social-policy/social-inequalities-in-educational-attainment.html, accessed 24 January 2013, not available 22 December 2016.
Harvey, L., 2003, 'Scrap that student survey now', Times Higher Education, 12 December 2003, available at http://www.timeshighereducation.co.uk/story.asp?storyCode=181752&sectioncode=26, accessed 24 January 2013, still available 22 December 2016.
copyright Lee Harvey 2012–2018