Observation for triangulation

Observation can provide an alternative view that can be compared with, and can augment, data gathered in other ways, usually by more formal survey techniques (Lacey, 1966). This latter approach is known as triangulation (as discussed in Section 1.15). Used in this way, observation data can, to some extent, mediate the surface-level responses elicited by a formal survey. Lomax et al. (1999) used observation and interviews to complement data from a questionnaire survey of medical oncologists, and Byström (1997) used observation to complement the task diaries and interviews in a study of municipal administrators.
Observation can also be used to check data obtained by questionnaires. An example occurs in evaluation research undertaken in the US to explore the compliance of HIV counselling and testing services with Centers for Disease Control and Prevention (CDC) guidelines. Silvestre, Gehl, Encandela and Schelzel (2000) focused on staff-client interaction in a participant observation study designed to augment an evaluation of Pennsylvania's publicly-funded HIV counselling and testing sites. They were concerned that the standard evaluation methods, such as mail surveys and site visits, while useful for evaluating the existence of appropriate policies and protocols and for gathering baseline data, were inadequate when it came to evaluating staff-client interaction. The researchers used actors to explore this situation. The actors were trained as research assistants and sent to 30 randomly chosen sites to be tested and counselled for HIV disease. The results showed that compliance with CDC guidelines and state policy was far from perfect. For example, 10 of the 30 sites required signed consent forms despite a state policy allowing anonymous testing. Only five providers developed a written risk reduction plan, even though more than two-thirds of all sites surveyed by mail asserted that such plans were developed. Only two of the five HIV-positive actors were offered partner notification services, even though all sites visited by an interviewer claimed to offer such services. The conclusion was that standard evaluation tools are not sufficient for assessing actual staff-client interaction.