Short Research Article
On the Lack of Real Consequences in Consumer Choice Research and Its Consequences

Sina A. Klein and Benjamin E. Hilbig

Cognitive Psychology Lab, Department of Psychology, University of Koblenz-Landau, Germany
Abstract: Experimental tasks measure actual behavior when the consequences that follow actions and choices mirror those of real-life behavior. Consequently, choice tasks in consumer research would need to include both costs (losing a previously earned endowment) and gains (actually receiving what was chosen) to structurally resemble real-life consumer choices. A literature review of studies (k = 446) in consumer research confirms that full implementation of consequences is rare. The extent to which the presence versus absence of these consequences systematically affects observable behavior is tested in an experiment (N = 669) comparing a fully consequential (cost and gain consequences), a partially consequential (gain consequence only), and a hypothetical (no consequences) consumer choice task. Results show that consequences, once real, affect both the general willingness to purchase and the relative preferences for different products. Hence, it would seem advisable to more carefully consider the role of consequences in future consumer research.

Keywords: consumer choice, research practices, literature review, food choice
Many subfields of psychology ultimately aim to explain and predict behavior. That is, they intend to draw conclusions about what people might actually do in “real life” (and why they would do so) from different kinds of observations such as participants’ responses on a self-report questionnaire or responses in some laboratory task. As has been repeatedly argued (Baumeister, Vohs, & Funder, 2007; Funder, 2009a, 2009b; Furr, 2009), many of the observations psychologists predominantly rely on are more or less strongly removed from the to-be-explained behavior. For example, several groups of authors (Baumeister et al., 2007; Furr, 2009; Meredith, Dicks, Noel, & Wagstaff, 2017; Patterson, 2008; Patterson, Giles, & Teske, 2011) argue – and demonstrate in literature reviews – that vast portions of recent psychological research rely on observations that cannot be considered “actual behavior.” Thus, Baumeister et al. (2007) provocatively state that much of psychology has become “the science of self-reports and finger movements” (p. 396).

Importantly, the core argument is not that questionnaire responses or button presses are, per se, poor examples of behavior. Anyone who ever filled out an immigration or tax form (a questionnaire) or clicked a website’s “buy” button for a hugely expensive product will indubitably agree that these actions – while essentially being self-reports and finger movements – entail a lot of behavior. What then sets apart these examples from the omnipresent self-report personality questionnaires, hypothetical scenarios, or reaction time tasks that Baumeister et al. (2007) and others have convincingly argued do not represent observations of actual behavior? We argue that the core distinguishing aspect is whether the consequences a research participant faces (conditional on her and potentially others’ actions) match or at least approximate, in a structurally comparable way, the consequences faced by agents in the corresponding real-life situations. If the tasks given to participants “carry some form of consequence (e.g., social, financial, effort, time, self-efficacy)”, these will typically be “substantially more informative of real [...] behavior” (Morales, Amir, & Lee, 2017). Correspondingly, Lewandowski and Strohmetz (2009) have argued that consequences for the self or others are one defining element of behavioral choice: “Rather than ask participants to self-report what they believe they would choose, behavioral choice focuses on what participants actually select as the dependent variable” (p. 998). Similarly, Diederich (2003a, 2003b) argues that real consequences ought to be implemented to induce choice conflict in multi-attribute decision tasks. Indeed, this very principle – that well-specified consequences help transform researchers’ observations from some artificial task into truly behavioral observations – dates