8.3.12 Analysis Having spent considerable time and effort operationalising concepts, devising questions, constructing an interview schedule or questionnaire, interviewing respondents or distributing and following up questionnaires, and constructing a data file of responses, it is important to make a good job of the analysis. Otherwise all the work will have been for nothing. There are some standard practices that you can follow to analyse sociological surveys. Analysis of surveys involves using quantitative techniques to test hypotheses (see also Section 8.3.13).
The following sections will explore basic descriptive statistics and introduce inferential statistical procedures. The focus is on what data analysis is about rather than on statistical computations. The computations are usually carried out by computer using a statistical package, so it is important to understand what you can do in any given circumstance and what the statistics show.
First, though, just how good is the survey at getting a response from sample members?
8.3.12.1 Response rate A major problem with surveys is ensuring a sufficient response rate; that is, the proportion of people in the sample who agree to participate in the survey. The response rate for postal questionnaires is often very low, between 10% and 40%. Online surveys also tend to have low response rates (often below 10%). For schedules administered by interviewers the response rate is usually higher, between 40% and 80%.
A sample is chosen to represent a population and if, say, only 30% return their questionnaires then the research becomes problematic. You have to ask whether the people who returned questionnaires differ from those who did not. In short, the problem with non-response is that you are unlikely to know whether the people who responded are a biased subgroup. The bigger the non-response, the bigger the risk of distortion in the data.
Response rates to mailed questionnaires can be improved by various techniques. Researchers should enclose stamped reply envelopes, and some pre-contact is likely to be advantageous. Follow-up procedures, such as sending reminders, usually lead to increased response rates. Economic incentives also tend to raise response rates but may increase costs to unacceptable levels. The questionnaire should be constructed in a way that is likely to appeal to the respondent. It should direct itself to arousing, rather than assuming, the interest of the respondent (Harvey, 1987a).
Many of the same techniques need to be employed for electronic questionnaires, not least pre-contact, which, given the huge amount of electronic junk mail, is essential to inform potential respondents that a questionnaire is imminent. The more personalised the communication containing the electronic questionnaire, the more likely the respondent is to take it seriously.
The other issue that arises with response rates is how they are calculated. To know what proportion of sample members have replied requires knowing how many people were in the sample in the first place. Sometimes online surveys are sent to unspecified membership lists or are forwarded by initial recipients, so there is no way of knowing how many people are in the sample, resulting in guesstimates of the response rate.
Another aspect of response rate calculation is what to count as a response. If a questionnaire is returned uncompleted, or only partially completed, does this count as a response? Most researchers note both the number of responses and the number of usable responses for the analysis; the latter is usually the figure used to calculate the response rate.
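The calculation described above can be sketched briefly. The counts below are hypothetical, chosen only to illustrate the distinction between returned and usable questionnaires; they are not figures from any actual survey.

```python
# Hypothetical survey counts (illustrative only).
sample_size = 500   # questionnaires sent to a known, specified sample
returned = 180      # questionnaires that came back in any state
usable = 150        # returned AND sufficiently complete to analyse

# The return rate counts everything that came back; the response rate,
# as usually reported, is based on usable responses only.
return_rate = returned / sample_size
response_rate = usable / sample_size

print(f"Return rate: {return_rate:.0%}")      # 36%
print(f"Usable response rate: {response_rate:.0%}")  # 30%
```

Note that neither figure can be computed when the sample size itself is unknown, as with forwarded or open-list online surveys.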