Investigating the belief bias in everyday political reasoning
The belief bias is most often investigated with syllogisms varying on two dimensions: logical validity (valid vs. invalid) and believability (believable vs. unbelievable). Typically, participants can distinguish valid from invalid syllogisms (albeit imperfectly), but they are also more likely to rate syllogisms as logically valid when the conclusion is believable than when it is unbelievable. Additionally, the ability to distinguish between valid and invalid syllogisms can be reduced when conclusions are believable compared to when they are unbelievable. However, syllogisms are a formal reasoning task, unlike the arguments we typically encounter in everyday or informal reasoning.

We investigated the belief bias effect in the context of everyday arguments about controversial political topics of the kind encountered on (social) media (e.g., ‘abortion should be legal’). Arguments in our study differ in their (informal) argument quality: ‘good’ arguments provide an explanation for their conclusion, whilst ‘bad’ arguments provide no explanation and contain a reasoning fallacy (e.g., an appeal to authority). Participants rated their beliefs about a series of political claims on a scale from 1 to 7, and rated the strength of ‘good’ and ‘bad’ arguments about these claims on a scale from 1 (extremely bad argument) to 6 (extremely good argument). Participants exhibited the belief bias effect for everyday arguments: they consistently rated good arguments as stronger than bad arguments, but they also rated arguments in line with their beliefs as stronger than arguments that were not. Whether argument quality interacts with participants’ beliefs about the claims is less clear. If we assume that the belief and argument-strength rating scales are continuous and that the relationship between these variables is linear, a linear mixed model yields no evidence of such an interaction.
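The rating pattern described above can be illustrated with a toy computation. All numbers here are hypothetical, and belief ‘consistency’ is simplified to a binary split rather than the continuous 1–7 belief ratings used in the study:

```python
from statistics import mean

# Hypothetical strength ratings (1-6) for one participant,
# keyed by (argument quality, belief consistency).
ratings = {
    ("good", "belief-consistent"):   [5, 6, 5, 5],
    ("good", "belief-inconsistent"): [4, 5, 4, 4],
    ("bad",  "belief-consistent"):   [3, 4, 3, 3],
    ("bad",  "belief-inconsistent"): [2, 2, 3, 2],
}

for (quality, consistency), vals in ratings.items():
    print(f"{quality:4} / {consistency:19}: mean = {mean(vals):.2f}")

# The pattern mirrors the reported results: good arguments are rated
# higher than bad ones at each level of belief consistency, and
# belief-consistent arguments are rated higher at each quality level.
```

The open question in the abstract is whether these two effects merely add up or interact, which is what the linear mixed model tests.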
However, if we instead binarise the argument-strength ratings and analyse the data with a signal detection approach, we find evidence for an interaction, but in an unexpected direction: the ability to discriminate between good and bad arguments increases with the strength of participants’ beliefs about the claims. The divergence between these results may be due to the latter model’s assumption of a nonlinear relationship between the variables, and it raises questions about the most appropriate way to measure the belief bias in everyday reasoning.
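A minimal sketch of the kind of signal-detection computation involved, assuming ratings are binarised into ‘strong’ vs. ‘weak’ (the threshold, the log-linear correction, and the counts below are illustrative assumptions, not the authors’ exact analysis):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Discriminability index d' for telling good from bad arguments.

    A 'hit' is a good argument rated strong; a 'false alarm' is a bad
    argument rated strong. The log-linear correction (+0.5 per cell)
    keeps rates away from exactly 0 or 1, where z is undefined.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one participant: d' > 0 means good arguments
# are rated 'strong' more often than bad arguments are.
print(d_prime(hits=18, misses=6, false_alarms=7, correct_rejections=17))
```

The interaction reported above would then appear as d′ computed within belief-strength bins increasing as beliefs become more extreme.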