5.6.3 How to do content analysis

Content analysis is essentially a positivistic quantitative technique. In principle it has three positivistic characteristics, which it attempts to put into practice. Thus, content analysis is:
1. Replicable: the analysis is based on explicit rules for categorising the content of a document, which enable different researchers to obtain the same results from the same document.
2. Systematic: the explicit rules for categorisation ensure that the coding process is systematic and removes any bias on the part of the researcher.
3. Generalisable: the results obtained by the researcher can be applied to other similar situations.
The phases of a content analysis study are outlined in the following sections.
5.6.3.1 The research topic

As with all research, the researcher identifies the topic for the research study and then considers the most appropriate methodology. Section 5.6.2 illustrates the type of research topic for which content analysis has been used.
The researcher needs to specify the research objective closely. The research question is usually formulated as a question or a hypothesis derived from theory.
Such a question might be, for example, 'how have newspapers framed election coverage and influenced the debate?' Or, 'to what extent are ethnic minorities represented in children's television programmes?'
5.6.3.2 Selecting content for analysis

Having specified a research question, and established that the enquiry can be examined using content analysis, the next stage is to identify the content to be analysed.
The researcher needs to clearly define the content that is under scrutiny, such as, 'national daily newspaper articles on election coverage published in the month prior to the election' or 'television programmes aimed at pre-teen children broadcast on free-to-air mainstream channels over a 12 month period'.
However, even having specified the content area closely, the sheer amount of content may be more than the resources available can deal with in the time frame proposed for the research. This means the content needs to be sampled. So the researcher has to estimate how much time and resources are available and decide how much material could be analysed. Then a decision rule has to be created that would result in an unbiased sample.
Given that content analysis aims at generalisation, the sample needs to be representative of all the content specified (the body of content is referred to as the corpus).
The sampling frame from which the sample will be taken is based on the detailed specification, as in the examples above. The issue is how to sample to ensure an unbiased selection.
If, for example, your corpus is 'national daily newspaper articles on election coverage published in the month prior to the election', there may be several articles a day in a dozen different newspapers, which may easily add up to well over a thousand articles in a month. The researcher may not have the resources to analyse the total content specified and so might decide to sample as follows. First, select only the most prominent article in each newspaper each day, with prominence defined by size and location within the newspaper. Second, split the newspapers into left-wing and right-wing political opinion and then select half the left-wing newspapers on day 1 (the other half on day 2, and so on for the month), and similarly for the right-wing newspapers. This means that the most prominent article would be selected from each newspaper every other day. However, in the United Kingdom, for example, there are more right-wing newspapers, so the sample would contain more articles from right-wing newspapers. The researcher then has to decide whether the sample should have equal coverage from across the political spectrum or simply be representative of all available newspaper coverage. If the latter, nothing else needs to be done to the sample. If the former, the articles from the right-wing newspapers would need to be further selected on a random basis so that their number matches the number of articles from the left-wing newspapers.
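The alternating-day decision rule described above could be sketched as follows. The paper names, the left/right split and the seeded random subsampling for equal coverage are illustrative assumptions, not part of any actual study design:

```python
import random

# Hypothetical papers; the left/right split is an illustrative assumption.
LEFT_WING = ["Paper A", "Paper B", "Paper C", "Paper D"]
RIGHT_WING = ["Paper E", "Paper F", "Paper G", "Paper H", "Paper I", "Paper J"]

def papers_for_day(day):
    """Papers whose most prominent article is sampled on a given day.

    Half of each group is sampled on odd days, the other half on even
    days, so each paper contributes an article every other day.
    """
    offset = day % 2
    return LEFT_WING[offset::2] + RIGHT_WING[offset::2]

def balanced_papers_for_day(day, seed=0):
    """As above, but randomly subsample the right-wing picks so the
    counts match the left-wing picks (equal coverage of the spectrum)."""
    rng = random.Random(seed * 1000 + day)  # reproducible per-day draw
    left = LEFT_WING[day % 2::2]
    right = rng.sample(RIGHT_WING[day % 2::2], k=len(left))
    return left + right
```

Because there are more right-wing papers in this sketch, `papers_for_day` yields an unbalanced sample (the representative option), while `balanced_papers_for_day` implements the equal-coverage option.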
The principle, in selecting a sample, is to set out the selection process in line with the research question and then ensure that the sample is as representative and unbiased as possible. This means, amongst other things, being aware of regular patterns in media content and therefore avoiding systematic sampling (selecting every nth item) in circumstances where the sampling interval could coincide with such a pattern.
For example, if the population being sampled is 'television programmes aimed at pre-teen children broadcast on free-to-air mainstream channels over a 12 month period' then avoid selecting a sample that results in a television channel's programmes falling on the same day of the week. If there are, for example, five mainstream channels and pre-teen television is targeted between midday and six o'clock, the researcher might select half an hour per week at random for each channel, generating 130 hours of media content (5 channels × 52 weeks × half an hour). Using random sampling may mean that some times of day are not covered at all and some days are not covered on some channels. If all times on all channels are required, a scheme based on time would need to be devised and allocated randomly to each channel.
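A minimal sketch of this time-slot sampling scheme, assuming five hypothetical channels and the twelve half-hour slots between midday and six o'clock:

```python
import random

# Channel names are illustrative assumptions.
CHANNELS = ["Channel 1", "Channel 2", "Channel 3", "Channel 4", "Channel 5"]
# Half-hour slots between midday (12:00) and six o'clock (18:00).
SLOTS = [f"{hour:02d}:{minute:02d}" for hour in range(12, 18) for minute in (0, 30)]

def draw_sample(weeks=52, seed=1):
    """Pick one random half-hour slot per channel per week."""
    rng = random.Random(seed)
    return [(week, channel, rng.choice(SLOTS))
            for week in range(1, weeks + 1)
            for channel in CHANNELS]

sample = draw_sample()
total_hours = len(sample) * 0.5  # 5 channels x 52 weeks x 0.5 hours = 130 hours
```

As the text notes, an unconstrained random draw like this may leave some slots uncovered; guaranteeing coverage of every time on every channel would require a stratified allocation instead.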
5.6.3.3 Developing content categories

The researcher now has to define the categories of content that are going to be measured.
In defining the categories it is important not just to think about the content but to go back to the research question and ask what is being measured and why.
Take the example of children's television; the question was 'to what extent are ethnic minorities represented in children's television programmes?'. What does this question mean in operational terms? The content has been specified (half an hour per channel per week) but what is the researcher looking for? It may be that all that is required is to note the ethnicity of the people in each half-hour programme segment. Then a composite frequency table could be constructed for each television channel.
That might answer the simple question of extent of representation, although it would mean that someone appearing for a very short period during the half hour would carry the same weight as someone appearing for much longer. So maybe what is needed is to note both the ethnicity and the length of time each person appears in the programme. That might answer to what extent ethnic minorities are represented, but it does not address the type of representation: whether, for example, a character is portrayed positively or negatively. This may be going beyond the research brief, depending on how 'representation' is construed. It may be that, once under way, the research shifts focus a little and the nature of the representation, as well as the extent, seems an appropriate development.
Once one moves to the nature of representation rather than simply who is on screen and for how long, the measurement becomes more problematic because the type of representation has to be defined as well. It may be split into 'positive' and 'negative' categories but exactly what this means needs defining. It might be that the research requires more extensive categories than just positive and negative: categories such as 'leadership', 'helpfulness', 'controlling', or whatever. Each would need careful explicit definition if the research is to be replicable.
In general, format (structural elements such as space, time, location within the document, colour) is usually simpler to define (code) and measure than content, which, as the example above suggests, can be trickier to define and code.
In a famous study of gender stereotyping, Erving Goffman argued that facial prominence in photographs is associated with dominance (this is referred to as face-ism). Goffman noted that when men appear in photographs in print media the picture tends to be mostly of their face, while women more often have more of their body in the photograph, which Goffman claimed implied less dominance, among other things. He then defined a quantitative measure of face-ism as a ratio:

face-ism index = (distance from the top of the head to the lowest point of the chin) / (distance from the top of the head to the lowest visible part of the body)

The higher the ratio, the greater the proportion of the picture devoted to the face. A full-face (no body) picture would have a ratio of 1. A full head-and-body picture would have a ratio of about 0.15. A full-body picture with no head would have a ratio of 0.
This was a relatively simple definition of format and did not need any interpretation of content.
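The ratio is simple enough to express directly; the measurements below are illustrative values, not data from any study:

```python
def faceism_index(head_to_chin, head_to_lowest_visible):
    """Face-ism ratio: face height divided by total visible height.

    Both distances are measured in the same units (e.g. millimetres
    on the printed page).
    """
    if head_to_lowest_visible <= 0:
        raise ValueError("total visible height must be positive")
    return head_to_chin / head_to_lowest_visible

# A face-only portrait: the chin is the lowest visible point.
face_only = faceism_index(30, 30)     # 1.0
# A full-length shot where the head is about 15% of the visible height.
full_length = faceism_index(27, 180)  # 0.15
# A body shown with no head visible.
no_head = faceism_index(0, 180)       # 0.0
```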
So in all cases, it is important to use operational definitions to define categories so that they can be applied to the content to make it easy to code in a systematic and replicable way.
When analysing text, for example, is it specific words that are coded, sentences about a particular issue, sentiments expressed in a paragraph, or whole articles? If photographs, is it the whole photograph or part of it, or the people represented in the photograph, that are of interest? If an advertisement, is it the words, the picture, the juxtaposition of the two, the size of the advertisement, or its location in a newspaper? A clear statement of what is to be counted as an instance for the analysis is imperative.
In practice there are two types of category unit: recording units and context units. Recording units are the predetermined words or sentences or paragraphs or whole items that are instances of the category. In the example of the children's programmes, the recording unit would be the ethnicity of the character appearing in the programmes.
Context units are the bigger unit in which the recording unit appears, which characterises the nature of the recorded unit. Thus in the example, the nature of the representation of different ethnicities would be the outcome of the context unit. Whether a character is 'helpful', for example, may only emerge from viewing the whole episode, which would be the context unit.
Computer programs are available to aid the coding of recording units but are of limited use when coding context units.
What is also important for content analysis is that the defined categories are mutually exclusive: that a specific instance can only be placed in one category.
Categories also need to be comprehensive, that is cover all options, which sometimes requires an 'other' or 'not applicable' category. If the 'other' category becomes large, there may be a need for further re-categorisation. It also means that a non-instance category is required (a study of sexism might list different types of sexist behaviour but it also needs to include 'no sexist behaviour' as an option).
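A sketch of how the mutual-exclusivity requirement might be checked mechanically; the category labels are illustrative assumptions:

```python
# An assumed category scheme with catch-all entries for comprehensiveness.
CATEGORIES = {"positive", "negative", "neutral", "not applicable"}

def validate_coding(assignments):
    """assignments maps a unit id to the set of categories applied to it.

    Returns (unit, problem) pairs: each unit must receive exactly one
    category (mutual exclusivity) and that category must belong to the
    scheme.
    """
    problems = []
    for unit, categories in assignments.items():
        unknown = categories - CATEGORIES
        if unknown:
            problems.append((unit, f"unknown categories: {sorted(unknown)}"))
        if len(categories) != 1:
            problems.append((unit, "unit must fall in exactly one category"))
    return problems

coding = {"unit 1": {"positive"}, "unit 2": {"positive", "negative"}}
issues = validate_coding(coding)  # flags unit 2 only
```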
Berelson (1952, p. 147) argued that: 'Content analysis stands or falls by its categories. Particular studies have been productive to the extent that the categories were clearly formulated and well adapted to the problem and the content'. This comment emphasises how important it is to identify and define appropriate categories.
At this stage the researcher should review all of the categories and decide whether some categories can be merged or if some need to be broken down further. This process of reviewing categories will occur again after the initial attempts to code the content.
5.6.3.4 Coding the data in the document

Coding the data is the process of allocating elements of the document to specified categories. The operationalisation of the categories should mean that coding results in the same allocation whoever is doing the coding.
The coding process is helped by drawing up a coding schedule, which resembles a survey coding form. Coding normally involves using numeric codes; for example, in the children's programme analysis, different ethnic minorities would be represented by a number.
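A minimal coding-schedule sketch for the children's-programme example; the numeric codes and labels are illustrative assumptions, not a standard classification:

```python
# Hypothetical numeric codes for the ethnicity category.
ETHNICITY_CODES = {
    1: "White",
    2: "Black",
    3: "Asian",
    4: "Mixed",
    8: "Other",
    9: "Not identifiable",
}

def code_appearance(channel, week, slot, ethnicity_code, seconds_on_screen):
    """One row of the coding schedule: who appeared, where, for how long."""
    if ethnicity_code not in ETHNICITY_CODES:
        raise ValueError(f"unknown category code: {ethnicity_code}")
    return {"channel": channel, "week": week, "slot": slot,
            "ethnicity": ethnicity_code, "seconds": seconds_on_screen}

row = code_appearance("Channel 3", 12, "14:30", 2, 95)
```

Rejecting unlisted codes at entry time is one way of enforcing the coding rules in the schedule itself.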
It is important at this stage to undertake a pilot study with a small sub-sample to see: (a) whether the coding can be applied readily; (b) whether, and how, the coding needs adjusting (which it normally will); and (c) whether the coded data begin to answer the question, which requires some preliminary analysis to check that the correct material is being collected as data.
The pilot study should reveal where categories may need further refinement or where coding rules require additional clarity as well as whether the categories are going to answer the research question.
It is not unusual to adjust categories several times before finalising a set of categories that can be used for the study. Sometimes, category systems already developed by other researchers may prove useful.
5.6.3.4.1 Inter-rater reliability

Inter-rater reliability is the extent to which different people code the same text in the same way. The pilot can also be used to test the reliability of the coding: if more than one coder is used at the pilot stage, inter-rater reliability can be computed and adjustments made where ambiguity arises.
For straightforward categories or measurement there should be a high level of agreement between two or more coders. If the level of agreement is not high then the categories need rethinking: maybe they need to be more closely defined, or breaking into smaller categories or grouping into larger ones. Any changes must take into account the research question and the level and precision of analysis required. For more complex categories, it may be necessary to discuss and resolve differences between coders to see where the judgements are diverging.
Various statistical tests exist to assess inter-rater reliability. A simple method is to divide the number of units placed in the same categories by the total number of units coded (Chadwick et al., 1984).
So, for example, if there are 100 units and two coders place all 100 units in the same categories, the inter-rater reliability would be 100/100 = 1. If they agreed on only 50 units it would be 50/100 = 0.5 and, at its lowest, if they agreed on none it would be 0/100 = 0.
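The simple agreement measure can be computed directly. Cohen's kappa is included below as a commonly used chance-corrected alternative, not as part of the Chadwick et al. method; the coded values are invented for illustration:

```python
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Units placed in the same category divided by total units coded."""
    if len(coder_a) != len(coder_b):
        raise ValueError("both coders must code the same units")
    agreed = sum(a == b for a, b in zip(coder_a, coder_b))
    return agreed / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """The same agreement, corrected for agreement expected by chance."""
    n = len(coder_a)
    observed = percent_agreement(coder_a, coder_b)
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two coders assigning category codes to the same ten units.
coder_a = [1, 1, 2, 3, 2, 2, 1, 3, 3, 2]
coder_b = [1, 1, 2, 3, 1, 2, 1, 3, 2, 2]
agreement = percent_agreement(coder_a, coder_b)  # 8 of 10 agree -> 0.8
```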
5.6.3.5 Analysis of content

The research problem provides the basis for the data analysis, suggesting the initial frequencies to be calculated, patterns to be examined and relationships to be investigated. The coded data constitute the variables that relate to the hypotheses under investigation.
The way the data is subsequently analysed depends on the scope of the data and the complexity of the research question and related hypotheses. Simple descriptive analysis may be sufficient in some cases and, in others, complex multivariate analysis may be needed.
Britto and Dabney (2010) offer an example of the analysis stage of content analysis. They were interested in the amount of crime content in political talk shows and how crime was presented and characterised on three major cable networks in the United States. They recorded one show a week, at random, for 26 weeks. At the programme level they coded the number of segments in an episode, the speaking time given to guests, and the ethnicity and gender of offenders and victims in crime stories. They also coded the characteristics of guests on the show and their interactions with the show hosts. The statistical analysis included an account of guest profiles, comparing the demographic characteristics of guests with the general population profile, and a comparison of general show guests to guests in justice-related segments. The analysis described the amount of justice-related content on each show and compared the shows. Chi-square tests revealed differences between the guests' political persuasions. The analysis also presented ratios of offender and victim characteristics, comparing these ratios across shows and against official United States crime data.
5.6.3.6 Strengths and limitations of content analysis

Content analysis, it is argued, is a readily understood and inexpensive research method. It is usually relatively easy to gain access to the documents needed for the study, which makes building a representative sample relatively inexpensive.
It is an unobtrusive method, which does not involve the researcher interacting with subjects of the research. The researcher cannot, therefore, influence the behaviour of the people being studied.
The use of a coding schedule and coding manual in content analysis makes the process transparent. The coding rules mean that the study is replicable and systematic. It is thus claimed that content analysis is a reliable and objective method.
It produces a systematic account of events, themes or issues that may not be immediately apparent to a reader, viewer or general consumer.
Content analysis has its limitations. It may not be as objective as it claims, since it depends on the researcher selecting and recording data accurately. In some instances (such as a television programme) the researcher must make subjective choices about how to interpret particular forms of behaviour (for example, is the character playing a positive or negative role?). The researcher also decides which categories will be used and how they will be defined. This may make the study replicable but not objective.
The analysis relies on the accuracy of the coding in the first instance.
Content analysis attempts to quantify behaviour, opinions and action and in so doing it is more likely to describe, rather than explain, interpret or understand people's behaviour. Underlying motives are unlikely to be revealed. Content analysis does not necessarily provide insight into the underlying reasons for relationships and trends in data. In short, content analysis describes the 'what' not the 'why'.
Although documents may be easier and cheaper to obtain than responses from interviewees, for example, the coding may be very time-consuming. Furthermore, the analysis is limited to the available (sampled) documents. If the documents are drawn from the mass media, they may portray a rather different version of reality, as, for example, criminal, catastrophic and other high profile events receive more coverage than less dramatic occurrences.
A major problem for content analysis is the analysis of the ideological content of documents, which comes to the fore in media studies. Thus the positivist Harold Lasswell (1949, p. 6) wrote:
For a century, controversy has raged over the relative weight of "material" and "ideological" factors in the social and political process. This controversy has been sterile of scientific results, though the propaganda resonance of "dialectical materialism" has been enormous. Insofar as sterility can be attributed to technical factors in the domain of scholarship, the significant factor is failure to deal adequately with "ideological" elements. The usual account of how material and ideological factors interact upon one another leaves the process in a cloud of mystery.... So far as the material dimensions are concerned, operational methods have been worked out to describe them, not so with the ideological.
The problem with this view is that it fails to address the problem of ideology in any significant way. Ideology cannot be operationalised and measured because it is not a surface phenomenon. Either one ignores ideology altogether, which is effectively what conventional content analysis does, or else one addresses ideology within the framework of a critique of prevailing structures. Note that ideology, as a significant concept, means more than a set of presuppositions, as, for example, in a so-called political ideology. One approach to media analysis that engages with ideology is semiology (see Section 5.8).