Mathematical psychology offers the ability to implement psychological theories, that is, to translate a theory into a computer program. Such programmatic implementation is standard in mathematical psychology and helps to test theoretical mechanisms across laboratories, tasks, and data sources. Beyond theory testing, programmatic implementation can have other significant benefits: it enables the documentation, sharing, and broad reuse of a formalized psychological theory. Ready-to-use implemented theories can lead to efficiency gains for researchers, but only if the implementation works. Computer science has standards for the robust implementation of programs to ensure they are fail-safe, scale well, and are free of errors. My talk will discuss these standards for mathematical psychologists and show an example in the programming language R. It introduces how psychologists can benefit from, and achieve, robust implementations of mathematical theories. The principles of robust code, error-checking, unit tests, and documentation will be exemplified using well-known mathematical psychological models (exemplar-based categorization models and prospect theory). A specific software package (Jarecki & Seitz, 2020) will illustrate how to apply those robustness principles when writing, estimating, and evaluating psychological theories. If implemented robustly, computer-programmed models offer an excellent opportunity to make complex psychological theories widely available to a diverse group of researchers at all levels, boosting the speed of theory testing.
Jarecki, J. B., & Seitz, F. I. (2020). Cognitivemodels: An R Package for Formal Cognitive Modeling. In T. C. Stewart (Ed.), Proceedings of ICCM 2020. 18th International Conference on Cognitive Modelling (pp. 100–106). Applied Cognitive Science Lab, Penn State. https://iccm-conference.neocities.org/2020/papers/Contribution_229_final.pdf
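The robustness principles named in the abstract above (error-checking and unit tests) can be sketched briefly in R. This is a minimal illustration, not code from the cognitivemodels package; the function name and parameter defaults are illustrative, using the familiar prospect-theory value function as the toy model.

```r
# Illustrative sketch: a prospect-theory value function implemented with
# basic robustness practices (input validation and unit tests).
# pt_value() is a hypothetical name, not part of any published package.

pt_value <- function(x, alpha = 0.88, lambda = 2.25) {
  # Error-checking: fail fast and loudly on invalid input
  stopifnot(is.numeric(x), alpha > 0, lambda > 0)
  # Concave for gains, convex and loss-averse for losses
  ifelse(x >= 0, x^alpha, -lambda * (-x)^alpha)
}

# Unit tests: pin down expected behavior so later changes cannot
# silently break the model
stopifnot(pt_value(0) == 0)          # neutral outcome has zero value
stopifnot(pt_value(1) == 1)          # gain of 1 maps to 1
stopifnot(pt_value(-1) == -2.25)     # losses loom larger than gains
```

In practice such tests would live in a dedicated test suite (e.g., via a testing framework) rather than inline, but the pattern is the same: every theoretical commitment of the model gets a machine-checkable assertion.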
The psychological sciences are described as being in crisis. The source of this crisis is hotly debated, but it is most often described as a crisis of replication. The culture of psychological science (e.g., Nosek & Bar-Anan, 2012), statistical practice (e.g., Benjamin et al., 2018), effects-centric thinking (Broers, 2021), and flexible theory (Szollosi & Donkin, 2021) have all been implicated. That there is a crisis, and that it is, in some sense, a crisis of (un)replicability, is accepted by most commentators. The same issues were faced by chemistry as it developed from alchemy. Although highly replicable now, chemistry was not, and could not be, as reliable before theory was developed that defined experimental success (Boyle, 1668). Parts of psychological science are in a similar situation.
While psychology has an abundance of theories, many psychologists lament the lack of what I will call big "T" Theories. Theories such as Newton's, Einstein's, and Darwin's (augmented by formal developments in the 20th century) have provided unifying explanations of disparate phenomena and contributed to cumulative progress in their respective areas of science. In my presentation I point out that, in the writings of mathematical psychologists, the notions of explanation and understanding are themselves undertheorized. I support this case by briefly surveying three recent papers by Rich Shiffrin, Jim Townsend, and Danielle Navarro (the latter focusing on Roger Shepard's theory of generalization), all of which make a case for the importance of mathematical psychology to psychological theorizing. I find the philosophy of science appearing in these papers to be limited to 20th-century work, so I point toward more recent work in the philosophy of science, where a consensus is emerging about the diversity of scientific explanations. This literature could provide a robust set of ideas for discriminating among the various roles that formal mathematical psychology can play in psychological theorizing, hopefully leading to more Theories.
Paul Meehl's famous critique laid out in detail many of the problematic practices and conceptual confusions that stand in the way of meaningful theoretical progress in psychological science. By integrating many of Meehl's points, we argue that one of the reasons for the slow progress in psychology is the failure to acknowledge the problem of coordination. This problem arises whenever we attempt to measure quantities that are not directly observable but can be inferred from observable variables. The solution to this problem is far from trivial, as demonstrated by a historical analysis of thermometry. The key challenge is the specification of a functional relationship between theoretical concepts and observations. As we demonstrate, empirical means alone will not allow us to determine this relationship. In the case of psychology, the problem of coordination has dramatic implications in the sense that it severely constrains our ability to make meaningful theoretical claims. We discuss several examples and outline some of the solutions that are