Modelling speeded random generation as sampling for inference
In a random generation task, participants are asked to produce a random sequence of items (e.g., numbers from 1 to 10). Past work has firmly established that human random generation is flawed, and that participants’ sequences become less random the faster they are asked to produce them (Towse, 1998). These findings have been interpreted as evidence that items are generated by simple schemas (e.g., add one or subtract one) whose typical outputs must be effortfully inhibited, so that faster production yields more stereotyped behaviour (Jahanshahi et al., 2006). However, we have recently reinterpreted random generation as drawing samples for inference: people’s internal sampling process resembles algorithms used in computer science, such as Markov Chain Monte Carlo (Castillo et al., 2023). One empirically verified prediction of this approach is that participants can randomly generate examples from non-uniform distributions, such as the distribution of UK heights. If so, what causes people’s more stereotyped random generation under speeded conditions? Do people generate fewer samples between utterances at higher production speeds, leading to differences in the resulting sequences? Or does the sampling process change qualitatively once a speed threshold is reached, in its parameters or even its structure? We asked participants to randomly generate UK lifespans at both 40 and 80 items per minute, and compared their sequences with several computational models. We assessed how characteristic features of the sampling algorithm that were informative in previous experiments changed under speeded conditions. We found large individual differences (which previous research focusing on average trends had not identified), with some participants producing more random sequences at the faster pace, contrary to previous findings.
Our results shed light on noise and individual variability in cognition, and will inform better computational models of human inference and decision-making.
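The sampling-for-inference account above can be illustrated with a minimal sketch. The following is not the authors’ model; it is a toy Metropolis sampler over a hypothetical Gaussian lifespan distribution (the mean, spread, and proposal width are illustrative assumptions) showing the first candidate explanation from the abstract: if speeded production leaves fewer internal sampling steps between utterances, successive outputs become more similar, and the emitted sequence looks less random.

```python
import math
import random


def metropolis_lifespans(n_utterances, steps_between, mu=80.0, sigma=10.0,
                         prop_sd=5.0, seed=1):
    """Toy Metropolis sampler over a hypothetical Gaussian lifespan target.

    Emitting one value every `steps_between` internal steps mimics the idea
    that faster production allows fewer samples between utterances.
    """
    rng = random.Random(seed)
    x = mu  # start the chain at the mode of the target
    out = []
    for _ in range(n_utterances):
        for _ in range(steps_between):
            prop = x + rng.gauss(0.0, prop_sd)  # random-walk proposal
            # Log acceptance ratio for a Gaussian target density
            log_ratio = ((x - mu) ** 2 - (prop - mu) ** 2) / (2 * sigma ** 2)
            if rng.random() < math.exp(min(0.0, log_ratio)):
                x = prop  # accept; otherwise keep the current state
        out.append(x)
    return out


def lag1_autocorr(xs):
    """Lag-1 autocorrelation: higher values mean a less random-looking sequence."""
    n = len(xs)
    m = sum(xs) / n
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(n - 1))
    den = sum((v - m) ** 2 for v in xs)
    return num / den


# "Slow" production: many internal steps per utterance; "fast": only one.
slow = metropolis_lifespans(200, steps_between=10)
fast = metropolis_lifespans(200, steps_between=1)
# With fewer internal steps between utterances, the fast sequence is
# typically more autocorrelated, i.e. more stereotyped.
```

Comparing `lag1_autocorr(fast)` against `lag1_autocorr(slow)` makes the prediction concrete; the alternative hypothesis in the abstract (a qualitative change in parameters or structure at high speed) would instead require changing the sampler itself, not just `steps_between`.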