Using audio and video clips in Web surveys – feasibility and impact on data quality
Marek Fuchs & Frederik Funke
Paper presented at the 60th annual conference of the
World Association for Public Opinion Research
September 19-21, 2007 in Berlin
In recent years, Web surveys have become a standard survey mode. So far, online questionnaires resemble their paper counterparts to a great extent: online measurement instruments rely mostly on visually presented written questions with associated response categories. From a methodological point of view this was a desirable development, since many researchers doubted the comparability of results obtained by paper-and-pencil questionnaires on the one hand and Web surveys on the other. The similarity of paper-and-pencil and online questionnaires ensured that measurements obtained in either mode would not differ greatly.
However, compared to paper-and-pencil questionnaires, Web surveys allow for richer communication with the respondent: graphical elements, pictures, sound, and animated GIFs are used to enhance the appearance of web pages. Even though the use of such elements in Web surveys is still not widespread, several authors have assessed their impact on data quality. The corresponding body of literature dealing with such effects can be summarized under the heading of visual design effects.
In this paper we extend this line of research. We assess the use of audio and video to convey the meaning of the question. Thus, audio and video are no longer considered nuisance variables; rather, we treat them as content-bearing, meaningful elements of a Web survey.
The paper reports results from a field-experimental study on the impact of audio and video support in Web surveys on data quality. Within a Web survey among university students, a standard interactive online questionnaire was used. A random sub-sample answered a version of the questionnaire that consisted not only of written questions but also of corresponding audio files in which the questions were read to the respondent. In addition, a questionnaire version providing video segments showing an interviewer reading the questions to the respondent was tested. Data quality is assessed using standard indicators (item nonresponse, a social desirability scale, a social presence scale, non-differentiation of responses, and others) as well as the time needed to complete the survey.
Results will be discussed in light of the feasibility of multimedia Web surveys as well as in terms of human-computer interaction.