Symposia and Workshops

The central theme of this symposium will be the many contributions made by William H. Batchelder (1940-2018). Batchelder was a leader in the field of mathematical psychology whose pioneering work spans not only mathematical psychology, psychometrics, and cognitive science, but also anthropology, sociology, and other social and behavioral sciences. He was one of the first to recognize the power of combining mathematical cognitive modeling with psychometrics and measurement, and much of his later work focused on the hybrid discipline he coined Cognitive Psychometrics. The symposium will feature research talks by students and collaborators of Batchelder’s, as well as presentations of novel research projects that follow one of the paths he began.

This symposium explores emerging ideas in theory, models, and methods for understanding how people use strategies to search for information and make decisions. In terms of theory, we are interested in questions like the stability of strategy use, individual differences, and decision making in dynamic and social environments. Is strategy use related to other psychological properties of individuals? Do people switch strategies and, if they do, how often, why, and when do they switch? How does strategy use change in various social settings, or in response to different individual or group goals? In terms of models, we are interested in strategies beyond the set usually considered in the literature. Interesting probabilistic variants on classic strategies like take-the-best and the weighted-additive rule have recently been proposed, and extensions of the tally heuristic like tally-N raise modeling challenges. Methodologically, there has been a recent burst of activity using Bayesian latent-mixture models and information-theoretic measures like Normalized Maximum Likelihood to make inferences about strategy use from behavioral data. Overall, we hope to share recent achievements and raise new challenges for the diverse set of cognitive modelers who all work on understanding the complexity underlying the way people make decisions.

Symmetry has played a fundamental role in physics ever since Emmy Noether (1918) connected symmetry with conservation laws under dynamics that obey a least-action principle. These concepts have been used in both classical and modern physics to explain the laws of nature. Symmetry also manifested itself in the Erlanger Program (1872), which arranged several geometries into a single hierarchy, from topology to projective, affine, and Euclidean geometry. This session will explore how symmetry and group invariants can be used to explain the visual perception of 3D space and the objects within it. Invariants are obviously relevant to theories of perceptual constancies, and the redundancy inherent in symmetry is instrumental in solving the inverse problem of reconstructing 3D space and recovering 3D objects by applying a priori intuitive physics to the projected 2D images. This session may also shed light on how higher cognition may be bootstrapped from a cognitive architecture of constrained optimization based on operations and representations constructed from vision.

Version control is the lab notebook of the digital world: it’s what professionals use to keep track of what they’ve done and to collaborate with other people. Every large software development project relies on it, and most programmers use it for their small jobs as well. And it isn’t just for software: books, papers, small data sets, and anything that changes over time or needs to be shared can and should be stored in a version control system. This is a 3.5h course at a beginner level. Experience with shell commands is useful, but not mandatory. The lesson plan includes these topics: “Automated Version Control”, “Setting Up Git”, “Creating a Repository”, “Tracking Changes”, “Exploring History”, “Ignoring Things”, “Remotes in GitHub”, “Collaborating”, “Conflicts”, and “Open Science, licensing and hosting.”

Bayesian nonparametric (BNP) models are becoming increasingly important in cognitive psychology, both as theoretical models of cognition and as analytic tools. However, existing expositions tend to be written at a level of abstraction largely impenetrable to non-technicians. This tutorial aims to explain BNP to curious non-technicians using the Dirichlet Process (DP), one of the most widely used BNP methods, as an illustrative example. A student researching these topics may encounter terms such as the DP and the Chinese Restaurant Process (CRP, one of the construction methods of the DP), but he or she may have only a vague impression as to the origin of these somewhat abstract concepts. This tutorial aims to make these concepts more concrete, explicit, and transparent.

This tutorial will (1) show what the DP and CRP look like; (2) explain the essential mathematical derivations often omitted in existing expositions you find online; and (3) demonstrate how to write a simple program in the statistical language R to fit a DP mixture model (DPMM).

The R program will be explained line by line so that you know precisely how the computational algorithm works. The mathematics involves no more than basic conditional probability and sampling from standard probability distributions. The overall goals are to help you understand the theory and its application more fully, so that you may apply the DP in your own work and leverage the technical details in this tutorial to develop novel methods. By working through the R program and simulated data, you will learn the key feature of the DP: the number of clusters is not required to be fixed in advance. The number of clusters used by a DP cognitive theory grows as data accrue and stops growing when additional clusters no longer explain the data. This tutorial should enhance your appreciation of other tutorials on the DP (e.g., Gershman \& Blei, 2012, J.\ Math Psych; Austerweil, Gershman, Tenenbaum, and Griffiths, 2015, in Busemeyer et al., Oxford Handbook of Computational and Mathematical Psychology).
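This growth property, with the number of clusters increasing as data accrue, can be illustrated by a short simulation of the CRP itself. The sketch below is in Python for self-containedness (the tutorial itself uses R); the function name `crp` and the concentration parameter `alpha` are illustrative choices, not part of the tutorial materials:

```python
import random

def crp(n_customers, alpha, seed=0):
    """Seat customers one at a time under the Chinese Restaurant Process.

    Customer i joins an existing table with probability proportional to
    the table's current size, or starts a new table with probability
    proportional to alpha. Returns the list of table (cluster) sizes.
    """
    rng = random.Random(seed)
    tables = []  # tables[k] = number of customers seated at table k
    for i in range(n_customers):
        # Total unnormalized weight: i customers already seated,
        # plus alpha for opening a new table.
        r = rng.uniform(0, i + alpha)
        cumulative = 0.0
        for k, size in enumerate(tables):
            cumulative += size
            if r < cumulative:
                tables[k] += 1  # join existing table k
                break
        else:
            tables.append(1)  # open a new table: a new cluster appears

    return tables

sizes = crp(1000, alpha=1.0)
print("customers:", sum(sizes))
print("clusters:", len(sizes))
```

With a concentration parameter of alpha = 1, the expected number of occupied tables grows only logarithmically with the number of customers, so 1,000 customers typically end up at a handful of tables: new clusters keep appearing, but ever more rarely as the data accumulate.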

Prerequisite knowledge: basic familiarity with R (e.g., comfortable with logistic regression in R). Experience with R programming also helps (unfortunately, the DP is not yet supported by statistical packages frequently used by behavioral scientists, such as SPSS, Mplus, Stata, or SAS), but the programming skills required are no more complicated than writing simple functions. Consider bringing a laptop with R already installed so that you can run the program right away.

Details available at http://mathpsych.org/wmp/professional/index.html

Details available at http://act-r.psy.cmu.edu/?post_type=workshops&p=31635

9:00am | Peter Pirolli | ACT-R Models of Health Behavior Change in Mobile Health Change |

9:20am | Andrea Stocco | Computational Psychiatry: Predicting Recovery Curves for PTSD |

9:40am | Mark Orr & Parantapa Bhattacharya | Scaling Social Systems with Cognitive Components |

11:00am | David M. Schwartz & Christopher L. Dancy | Building Environments for Simulation and Experimentation in Malmo |

11:20am | Nele Russwinkel | Towards Incorporating Cognitive Models in Applications |

11:40am | Robert L. West, Emily Greve, & Elisabeth Reid | Using Smart Phone Games to Validate ACT-R |

2:00pm | Frank E. Ritter, Farnaz Tehranchi, Jacob D. Oury, & Shan Wang | Testing the KRK Theory Breaks ACT-R and Pilot Data to Show it |

2:20pm | Andrea Stocco | Deriving an Architecture from Brain Data |

2:40pm | Niels Taatgen | Extending ACT-R’s Modeling Capabilities: One Level Below, and One Level Above |

4:20pm | Dan Bothell | Software Updates |

4:40pm | Everyone | Open Discussion |