
Square but straight: Measurement tool design to improve response fluency and certainty.

Academic research


The assessment of overall customer satisfaction is an important issue in market research. After each online purchase, customers are asked to rate the product or service they have paid for, usually on a five-point rating scale (e.g., Amazon, Trip Advisor). Like bipolar scales, these tools are effective at distinguishing polarized evaluations. However, the methodological literature reveals serious problems with the mid-point displayed on these continuums. Indeed, this mid-point inappropriately aggregates uncertain responses (difficult evaluation) with ambivalent (a combination of moderate-to-high positivity and negativity) or indifferent (low positivity and negativity) ones, even though these different responses have been shown to reflect distinct attitudes and drive distinct behavioral responses. The Evaluative Space Grid (hereafter, ESG), developed in psychology by Larsen and colleagues, could help address part of this methodological issue. The present article builds on previous research to investigate the influence of different formats of the ESG on response task fluency and certainty. To do so, an experiment specifically manipulating the ESG dimension and the presence of verbal labels in the cells was conducted on a sample of 105 undergraduate students. We demonstrate that the use of verbal labels, rather than a reduction in response alternatives, is a promising way to increase response task fluency and, in turn, improve individuals’ response certainty. This work advocates for careful measurement tool design to encourage responding behavior and reduce survey drop-out rates, which is especially challenging in self-administered electronic settings.

Co-written with Béatrice Parguel

Publication details

Publisher:
Electronic Journal of Information Systems Evaluation, 20 (2)