RESEARCHING THE REAL WORLD




© Lee Harvey 2012–2017

Page updated 29 May, 2017

Citation reference: Harvey, L., 2012–2017, Researching the Real World, available at qualityresearchinternational.com/methodology
All rights belong to author.


 

A Guide to Methodology

8. Surveys

8.1 Introduction to surveys
8.2 Methodological approaches
8.3 Doing survey research
8.4 Summary and conclusion

8.4.1 Critique of survey research and statistical analysis

8.4 Summary and conclusion
Section 8 has explored how social surveys are used to investigate social theory. The section outlines how to undertake a survey, avoiding as many pitfalls as possible. In the majority of cases the social survey attempts to reproduce, in a social setting, the classic positivistic approach, that is, to measure social facts to provide evidence for causal relationships (see Section 2.2). In some circumstances surveys are used in conjunction with critical or phenomenological approaches but these are relatively unusual.

It has been emphasised that the survey should be related to social theory.

Surveys aim ultimately to explain the relationship between variables. This requires that the survey has a clear aim, that hypotheses are explicit and concepts are operationalised (see Section 2.2.2.2). It is through careful operationalisation of concepts that the ‘right’ questions are devised to test the hypotheses (see Section 8.3.13).

Identifying how the survey instrument is to be distributed is a crucial stage, as it affects subsequent decisions: whether to use questionnaires or interviews; whether to hand the questionnaire out, mail it or send it electronically; and whether to interview in person or via some form of communication technology. The following considerations will guide the distribution approach:

  • dispersed nature of the sample;
  • time frame;
  • available resources (money and personnel involved in the research);
  • extent of personal contact needed (to explain the survey or to encourage response rates);
  • the availability of a database of respondents (including addresses, telephone numbers, email addresses, social media contacts);
  • the level of privacy required;
  • the flexibility necessary to encourage responses or tailor questions in light of earlier responses;
  • the preference for either instant or reflective responses.

Selecting a sample will need to be done with care and various sampling procedures have been outlined (see Section 8.3.9). A conventional social survey would normally aim for a representative sample. However, this may not be possible in practice and the researcher has to adopt a pragmatic approach. Good practice is to comment on the adequacy of the sample when presenting findings in a report.
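As an illustration, drawing a simple random sample from a sampling frame can be sketched in a few lines of Python; the frame of respondent identifiers below is entirely hypothetical.

```python
import random

# Hypothetical sampling frame: respondent identifiers 1..1000.
sampling_frame = list(range(1, 1001))

# Draw a simple random sample of 100 without replacement.
# A fixed seed makes the draw reproducible for the research record.
random.seed(42)
sample = random.sample(sampling_frame, k=100)

print(len(sample))       # 100
print(len(set(sample)))  # 100 (no respondent drawn twice)
```

Sampling without replacement, as here, ensures no respondent is selected twice; stratified or quota designs would require additional steps.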

Once the sample has been selected and a set of clear and accurate questions has been drawn up, it is usually expedient to undertake a pilot survey (see Section 8.3.8). This is necessary to avoid a lot of time and energy being wasted on the collection of data that does not aid the analysis of the hypotheses because the wrong questions were being asked. The main survey is usually only undertaken once so it is important to get it right. Only in rare situations would the researcher have the opportunity and time to go back to the respondents. The pilot survey provides a way of overcoming a lot of the errors and problems that are bound to come up in a full-scale survey.

The analysis should address the hypotheses. The preceding sections used a CASE STUDY example with several interrelated sub-hypotheses but only outlined a small fraction of the potential analysis.

An initial step in the analysis is to extract data from the schedules and questionnaires and create a data file (see Section 8.3.11.2). This helps enormously when it comes to analysing the hypotheses. Second, compile the frequency tables (see Section 8.3.12.2) for the variables in the sample as these will provide initial insights into testing the hypotheses. Most statistical programs will generate the measures of central tendency and dispersion for each variable when constructing the frequency tables from a data file.
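As a sketch of this step, the following Python fragment (standard library only, with invented responses to a hypothetical five-point attitude item) builds a frequency table and the usual measures of central tendency and dispersion:

```python
from collections import Counter
import statistics

# Hypothetical coded responses to a five-point attitude item
# (1 = strongly disagree ... 5 = strongly agree).
responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 1, 5, 4, 3, 4, 2]

# Frequency table with percentages.
freq = Counter(responses)
n = len(responses)
for value in sorted(freq):
    pct = 100 * freq[value] / n
    print(f"{value}: {freq[value]:2d}  ({pct:.1f}%)")

# Measures of central tendency and dispersion.
print("mean  :", statistics.mean(responses))
print("median:", statistics.median(responses))
print("mode  :", statistics.mode(responses))
print("stdev :", statistics.stdev(responses))
```

A statistical package would produce the same figures directly from the data file; the sketch simply makes the underlying calculations visible.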

Crosstabulations show differences between subgroups in the sample, and thus help to explain the results. For example, in the CASE STUDY example the attitudes and knowledge of the respondents were crosstabulated by gender and age. Inspecting the crosstabulation provides some interesting initial insights, provided the appropriate percentages from the crosstabulation are used for comparison purposes.
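The logic of a crosstabulation with the appropriate (column) percentages can be sketched in Python; the gender-by-attitude data below are invented purely for illustration:

```python
from collections import Counter

# Hypothetical case-by-case data: (gender, attitude) pairs.
cases = [
    ("female", "agree"), ("female", "agree"), ("female", "disagree"),
    ("male", "agree"), ("male", "disagree"), ("male", "disagree"),
    ("female", "agree"), ("male", "disagree"), ("female", "disagree"),
    ("male", "agree"),
]

# Build the crosstabulation: a count for each (gender, attitude) cell.
table = Counter(cases)
col_totals = Counter(gender for gender, _ in cases)

# Percentage within each column (gender) is the appropriate base
# when comparing attitudes across genders.
for attitude in ("agree", "disagree"):
    row = []
    for gender in ("female", "male"):
        pct = 100 * table[(gender, attitude)] / col_totals[gender]
        row.append(f"{gender}: {pct:.0f}%")
    print(attitude, "-", ", ".join(row))
```

Percentaging within the explanatory variable (here, gender) is what makes the subgroup comparison meaningful; percentaging the other way answers a different question.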

However, none of this takes account of sampling error, nor does it measure the extent of a relationship between two variables. Dealing with sampling error requires the use of confidence limits (see Section 8.3.12.10) and significance tests (see Section 8.3.12.11).
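To make these procedures concrete, the sketch below (with invented figures) computes 95 per cent confidence limits for a sample proportion using the normal approximation, and a chi-square test of independence for a 2x2 crosstabulation:

```python
import math

# Hypothetical result: 240 of 400 respondents (60%) agreed.
n, agreed = 400, 240
p = agreed / n

# 95% confidence limits for the proportion (normal approximation).
se = math.sqrt(p * (1 - p) / n)
lower, upper = p - 1.96 * se, p + 1.96 * se
print(f"95% CI: {lower:.3f} to {upper:.3f}")

# Chi-square test of independence for a 2x2 crosstabulation
# (rows: agree/disagree; columns: e.g. female/male).
observed = [[130, 110],   # agree
            [ 70,  90]]   # disagree
row_totals = [sum(r) for r in observed]
col_totals = [sum(c) for c in zip(*observed)]
total = sum(row_totals)
chi2 = 0.0
for i, r in enumerate(observed):
    for j, obs in enumerate(r):
        expected = row_totals[i] * col_totals[j] / total
        chi2 += (obs - expected) ** 2 / expected
print(f"chi-square = {chi2:.2f}")
# The critical value at the 5% level with 1 degree of freedom is 3.84;
# a larger chi-square indicates a statistically significant relationship.
```

A statistical package would also report the exact probability; the point of the sketch is that both procedures are simple arithmetic on the observed counts.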

Similarly, measuring the extent of the relationship between two variables involves measures of association.
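One common measure of association for a 2x2 crosstabulation is the phi coefficient, derived from the chi-square statistic. A minimal Python sketch, again using invented figures:

```python
import math

# Hypothetical 2x2 crosstabulation of observed counts.
observed = [[130, 110],
            [ 70,  90]]

row_totals = [sum(r) for r in observed]
col_totals = [sum(c) for c in zip(*observed)]
total = sum(row_totals)

# Chi-square statistic, then phi: for a 2x2 table phi = sqrt(chi2 / n),
# a measure of association running from 0 (none) to 1 (perfect).
chi2 = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / total) ** 2
    / (row_totals[i] * col_totals[j] / total)
    for i in range(2) for j in range(2)
)
phi = math.sqrt(chi2 / total)
print(f"phi = {phi:.3f}")  # a weak association in this illustrative case
```

For larger tables the same idea generalises to Cramér's V, which divides chi-square by n times the smaller of (rows - 1) and (columns - 1) before taking the square root.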

Reporting the findings is important, otherwise no one will know what the study has revealed. The data will need to be explained: it is not self-explanatory. The reader needs to be taken through the data, so do not include material that is not explained. Make sure that the report relates directly to the aims of the survey. When commenting on the hypotheses make sure not to overstate what has been found out about them; remember that statistical procedures are at best statements of probability and not of absolute proof.


8.4.1 Critique of survey research and statistical analysis
There is a tendency to think that once data have been quantified, the research is somehow more ‘objective’. This is a fundamental error of judgment on three levels: epistemological, theoretical and pragmatic.

First, epistemologically (see Section 1.6), the objectivity attributed to ‘social facts’ is a positivist perspective that is challenged by phenomenological and critical perspectives.

Second, theoretically, the construction of hypotheses and the operationalisation of concepts are based on preconceptions of the nature of the variables subsequently analysed. The validity of the operationalisations is rarely examined in detail; they are often convenience operationalisations that are presumed to address the theoretical issues.

Furthermore, the testing of fairly simple hypotheses is assumed to provide a solution to a complex theoretical issue on the grounds that all other relevant factors have been controlled, when, in practice, only those factors that the researcher presumed to be relevant have been taken into consideration in the research.

There is another dimension to the theoretical concern and that is the political nature of research. The Radical Statistics Health Group (1980), Stephen Fothergill and Jill Vincent (1985), Paul Connolly and Barry Troyna (1997), Daniel Dorling and Stephen Simpson (1999) and Suzanne Hood et al. (1999) all provided warnings in the last century about the political nature of social survey research and the hegemony of statistical analysis. All reveal how so-called objective quantitative analysis is politically biased, for example, by the terms of reference being limited so that only certain things are looked at, or by using convenience statistics that exclude sections of the population (such as how statistics that excluded women were used by a Conservative government in the United Kingdom to legitimate the privatisation of pensions). In essence, the message from these critics is that all statistics are political, and therefore one should always question the evidence and not accept it because it is couched in numerical terms: examine the methodology.

Third, pragmatically, there are a host of issues that are glossed over on a pragmatic basis that undermine any claims to objectivity. These include the following.

Non-representative samples: very few, if any, social research studies have truly random, unbiased samples, and claims of a lack of bias are often made on spurious grounds.

Data collection, such as the coding of answers by interviewers, and the subsequent transfer to appropriate data files, is often flawed, resulting in errors going unnoticed.

Interpretation of the questions may vary enormously between respondents and so answers that appear to be to a single question can actually be answers to a range of different questions depending on how the respondent interprets them.

Statistical tests are often applied to data of the wrong level of measurement, such as analysis of variance to ordinal-scale data, and caveats for the use of particular statistical tests are ignored (Morrison and Henkel, 1972). Most often, the statistical results are presumed to test the hypotheses, even when the data is clearly biased and the statistical tests are inapplicable.
