Abstract
In this article, the insertion of a two-stage, highly interesting question in an online, survey-based field experiment is shown to produce a better survey completion rate (decreasing completion refusal by 8%) and greater sample representativeness (increasing the number of moderate answer patterns by 12%) than placing the same highly interesting question only at the beginning of the survey. Using nonparametric tests and subgroup probability analysis, the measured effects include survey completion rates, response bias, and reported demographic differences. With regard to sample representativeness, the results also raise questions about the sensitivity of the conventional practice of comparing early- and late-respondent mean scores as a method of investigating nonresponse bias in marketing research. Alternative approaches to measuring potential nonresponse bias are compared with the traditional comparison of early-wave versus late-wave mean respondent differences. The results indicate that the conventional mean test fails to identify differences indicative of nonresponse bias: the scores of highly interested or strongly opposed respondents in the early waves produce means equivalent to those of less interested or less opposed respondents in the later wave (e.g., 1s and 5s versus 2s and 4s, both averaging 3), differences that are nonetheless identifiable through kurtosis and probability analysis.
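
To make the closing point concrete, the following is a minimal sketch (using hypothetical response distributions, not the article's data) of how two survey waves on a 5-point scale can share the same mean while differing sharply in shape. The wave sizes and the assumed probabilities (35/10/10/10/35 percent for the polarized early wave, 5/30/30/30/5 percent for the moderate late wave) are illustrative assumptions.

```python
# Minimal sketch (hypothetical data, not from the article): equal-mean waves
# that a conventional mean comparison cannot distinguish, but kurtosis and
# extreme-response probabilities can.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
scale = np.arange(1, 6)

# Hypothetical early wave: highly interested/opposed respondents cluster at 1 and 5.
early = rng.choice(scale, size=500, p=[0.35, 0.10, 0.10, 0.10, 0.35])
# Hypothetical late wave: less engaged respondents cluster at 2-4.
late = rng.choice(scale, size=500, p=[0.05, 0.30, 0.30, 0.30, 0.05])

# Conventional check: both wave means sit near 3, so a t-test suggests
# "no difference" and nonresponse bias appears absent.
t_stat, p_value = stats.ttest_ind(early, late, equal_var=False)
print(f"means: early={early.mean():.2f}, late={late.mean():.2f}, t-test p={p_value:.3f}")

# Shape-based check: excess kurtosis separates the U-shaped (polarized)
# early pattern from the peaked (moderate) late pattern despite equal means.
print(f"excess kurtosis: early={stats.kurtosis(early):.2f}, late={stats.kurtosis(late):.2f}")

# Subgroup probability check: share of extreme responses (1 or 5) per wave.
p_extreme = lambda x: np.isin(x, [1, 5]).mean()
print(f"P(extreme): early={p_extreme(early):.2f}, late={p_extreme(late):.2f}")
```

Under these assumed distributions the mean comparison returns no meaningful difference, while the kurtosis and extreme-response probabilities diverge clearly, which is the pattern the abstract attributes to early versus late respondent waves.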