Locating respondents

In general, in-depth interviewing involves a much smaller sample of informants than survey interviewing (Section 8). Elizabeth Bott (1971) interviewed twenty married couples, while Hannah Gavron's (1966) study included a sample of ninety-six women. Life histories may involve only one subject (for example, The Jack Roller (Shaw, 1930)) and Maguire's (2005) micro study was based on a single interview.
Can very small samples provide valid evidence to test theory?
Under such circumstances it is clearly not possible to locate a representative or 'typical' sample from which generalisations can be made. However, where the purpose of the research is to discover the range of meanings constructed by people themselves, even a sample of one can be seen to contribute to this endeavour.
There are rarely straightforward sampling frames for in-depth interviews and researchers have to be imaginative in locating respondents. Gavron (1966) located a suitable sample of mothers, each with at least one child under the age of five, from the lists of general practitioners and from the Housebound Wives Register. Jane Ribbens McCarthy et al.'s (2003) study of parenting and step-parenting generated a sample by snowballing methods. Vladimir Andrle (2001) located the farmers in his study of Czech entrepreneurs from a landowners' association membership list plus some snowballing.
I selected most respondents from lists compiled with the help of my prior personal contacts (whom I did not interview) and by the snowball method. For restituent farmers, I had a landowners' association membership list acquired by a helpful interviewee. A 'snowball' of insiders of the last Communist government started with a yellow-pages search for a company that a respondent had mentioned in passing during her interview, as being run by a former Communist government man. (For these insiders' view of the old regime, see Andrle 2000a.) The overall response rate was high, with only five approaches failing to yield an interview. (Andrle, 2001, p. 816)
Similarly, in his study of hate-motivated violence, Doug Meyer (2010) conducted semi-structured, in-depth interviews with 44 people who experienced violence because they were perceived to be lesbian, gay, bisexual, or transgender. Interviews were between one and three hours in length (median 102 minutes). Participants were asked to describe their violent experiences in detail, with follow-up questions about their understanding of hate-motivated violence and their perception of its severity. He had to use some ingenuity to find the sample:
This interview sample was compiled through LGBT advocacy and service organizations in New York City. Seeking a diverse sample, I recruited participants from a wide range of organizations, many of which provide services for LGBT people of colour. At these organizations, recruitment fliers were placed on a bulletin board or in a waiting room. The flier read: 'Have you experienced violence because you are (or were perceived to be) lesbian, gay, bisexual or transgender?' A broad, open-ended question was used on the flier to attract participants with a variety of violent experiences and to allow respondents to define violence on their own terms. (Meyer, 2010, p. 984)
Respondents also provided basic demographic information via a short questionnaire. Interviews were transcribed shortly after they occurred and a coding scheme to organise the data was developed and refined, which led to the discovery of key themes for analysis. The results showed that:
Since poor and working-class LGBT people of colour were typically friends with individuals who had encountered a lot of violence, and white, middle-class LGBT people were not, the latter were more likely than the former to perceive their violent experiences as severe. (Meyer, 2010, p. 985)
Trying to identify 'typical' cases is another way of selecting a sample. In a Dutch study of quality management in higher education, Jan Kleijnen et al. (2014) undertook interviews within six teaching departments in several Universities of Applied Sciences. The selection of these six departments was based on the method of 'typical cases' with 'maximum variation', aiming for 'as wide a range of perspectives as possible to capture the broadest set of information and experiences' (Kuper et al., 2008, a1035, quoted in Kleijnen et al., 2014).
This suggests that sampling for in-depth interviews is rather haphazard and often simply a matter of convenience. Michelle Byrne (2001), though, argued that when sampling for qualitative research, the primary criterion should be the purpose of the research.
Amanda Wilmot (2005) went further, arguing that, for qualitative as well as quantitative research, 'a well-defined sampling strategy that utilises an unbiased and robust frame can provide unbiased and robust results' (see also Section 22.214.171.124.2.3).
She argued that 'it is as important to develop a robust sampling strategy, from a well-constructed sampling frame, for qualitative research practice'. She provided an account of how the Office for National Statistics 'puts theory into practice using a qualitative "Respondent Register", developed for use for sample frame construction for qualitative social research'.
She provided a list of issues that should inform any sampling strategy. These are:
What are the research objectives?
What is the target population?
Who should be excluded from the sample?
Who should be included in the sample?
What is the budget?
What is the reporting time period?
How many qualified researchers are available to work on the project?
What sampling technique(s) should be employed?
How are the data to be analysed?
What data collection methods should be employed?
What are the sample criteria?
How long will the interview be?
What size should the sample be?
What should be used as the sampling frame?
How should potential respondents/participants be recruited?