How many instances come to mind when making probability estimates?
Dr. Joakim Sundh
Dr. Jianqiao Zhu
Prof. Adam Sanborn
Human probability judgments are variable and subject to systematic biases. Sampling-based accounts of probability judgment explain such idiosyncrasies by assuming that people remember or simulate instances of events and base their judgments on sampled frequencies. In the sampling-based framework, biases have generally been explained either by an additional noise process corrupting sampling (the Probability Theory + Noise account) or as a Bayesian adjustment to the uncertainty implicit in small samples (the Bayesian sampler). Both accounts explain data well, but because they can generally be expressed by the same equation, their predictions are very difficult to distinguish from each other despite describing qualitatively different processes. To address this, we have developed a method that uses a linear model of the relationship between the mean and the variance of repeated judgments. This model serves two purposes. First, it can be used to provide a crucial test between the two accounts, validating the Bayesian sampler account. Second, because the variance of a binomial variable depends directly on the number of samples, the model can be used to estimate (among other parameters) the number of samples underlying each judgment, which for probability judgments is found to be rather small (< 10). This is particularly important because, although sampling-based models have become increasingly popular, little attention has hitherto been paid to estimating the precise number of samples people use. We hope that, in the long run, the principle behind this simple model can be used to estimate sample sizes in a broader context.
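The mean-variance relation underlying the method can be illustrated with a small simulation. This is only a sketch under simplifying assumptions, not the authors' model: judgments are taken to be raw sampled proportions (no noise process and no Bayesian adjustment), and the sample size is an arbitrary choice for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
true_n = 8  # samples per judgment (arbitrary value for this demo)

# Each judgment is the proportion of successes in true_n Bernoulli draws,
# simulated for several underlying probabilities with many repetitions.
probs = np.linspace(0.1, 0.9, 9)
judgments = rng.binomial(true_n, probs[:, None], size=(9, 5000)) / true_n

means = judgments.mean(axis=1)
variances = judgments.var(axis=1)

# For a sampled proportion, Var = p(1 - p) / N, so regressing judgment
# variance on mean * (1 - mean) recovers 1 / N as the slope.
slope = np.polyfit(means * (1 - means), variances, 1)[0]
estimated_n = 1 / slope
print(estimated_n)  # close to true_n
```

Because the slope of the mean-variance line is 1 / N, repeated judgments alone suffice to estimate the number of samples, which is the principle the abstract describes.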
Differentiating dreams from wakefulness by automatic content analysis and support vector machines
Ms. Xiaofang Zheng
Dream content is connected to major concerns of the individual’s waking life (e.g., Domhoff & Schneider, 2008a, 2008b). Despite long investigation with laborious content-analysis coding, dreams are far from well understood. Automatic quantitative analysis techniques can be not only faster than traditional human hand-coding but also less prone to coding errors and bias, and they deserve further investigation. Linguistic Inquiry and Word Count (LIWC; Pennebaker, Boyd, Jordan, & Blackburn, 2015) is an automatic technique potentially useful for dream research. We analyzed dream reports and waking-life reports of individuals using LIWC and found differences in social content and other aspects. Furthermore, we used a machine learning technique, support vector machines, to detect whether a report described waking life or dreams, based on the LIWC word frequencies of various categories. Automatic content analysis techniques are promising for scientific research on dreams.
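The classification step can be sketched as follows. The LIWC categories and the authors' pipeline are not reproduced here; the features are synthetic stand-ins, and the classifier is a minimal linear SVM trained by subgradient descent on the hinge loss rather than an off-the-shelf solver.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for LIWC output: rows are reports, columns are
# word-category frequencies (purely illustrative, not real LIWC data).
n, d = 200, 10
dream = rng.normal(0.0, 1.0, size=(n, d))
waking = rng.normal(0.5, 1.0, size=(n, d))
X = np.vstack([dream, waking])
y = np.array([-1] * n + [1] * n)  # -1 = dream report, +1 = waking report

# Minimal linear SVM: subgradient descent on the regularized hinge loss.
w, b = np.zeros(d), 0.0
lam, lr = 0.01, 0.1
for _ in range(500):
    viol = y * (X @ w + b) < 1  # points violating the margin
    grad_w = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / len(y)
    grad_b = -y[viol].sum() / len(y)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = ((X @ w + b > 0) == (y > 0)).mean()
print(accuracy)
```

With features whose distributions differ between the two report types, a linear separator recovers the report type well above chance, which is the logic of the SVM analysis described above.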
Blind Man's Bluff: Formalizing theory-of-mind reasoning in a classic model of common knowledge
Prof. Jun Zhang
There are many variations of a classic example from game theory for differentiating knowledge and common knowledge. We revisit that example in the form of the "Blind Man's Bluff" game, which involves three players reasoning about the color (red or black) of a playing card they drew. Each player holds their card on their forehead to reveal it to the others but conceal it from themself. They reason about their own card based on the actions chosen by the others after a helpful announcement from a trustworthy friend. The primary mandate of the game is that a player will announce that their card is red upon deducing that fact with certainty, and thereby win the game. Suppose each card is red (the true state of the world). No player knows the color of their own card, and so none can yet win, but each player does possess the knowledge that not every card is black. However, only after their friend announces "not every card is black"—making that private knowledge common knowledge—does it become certain that at least one player will deduce their own card is red and, consequently, announce that fact to win the game. In this game, we formalize the Theory-of-Mind (ToM) reasoning involved in refining each player's possibility partition, which describes the sets of states of the world that are indistinguishable to them given the available information, following the friend's initial announcement and the subsequent action choices. We focus on how the refinement process does not require knowledge of any specific announcement or action—only common knowledge of the sequential information revelation process. Our framework applies the concept of a "rough approximation" (from Rough Set theory). We find that the upper approximation of a player's possibility partition defined by another player's possibility partition has a clear ToM interpretation, though the meaning of the lower approximation is less obvious.
We also consider the role of strategies, which map a player's information to a choice of action, and contrast the perception-based strategies used in the game with inference-based ones. To deal with common knowledge about strategies, we construct a modified, but informationally-equivalent game that involves repeated announcements from the friend instead of sequential action choices by the players. In this way—via a common knowledge device—our framework decouples, for the first time, the recursive ToM reasoning process from the information revelation process in a multi-stage game of incomplete information.
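The elimination dynamics in the all-red state can be sketched concretely. This is a generic enumeration of the standard puzzle, not the authors' rough-set formalism: states are triples of card colors, a player's possibility set contains the states consistent with the other two visible cards, and each public "pass" prunes the worlds in which that player could have deduced their own card.

```python
from itertools import product

COLORS = ("R", "B")
states = set(product(COLORS, repeat=3))  # 8 possible worlds
true_state = ("R", "R", "R")

def possibility(player, state, common):
    """States the player cannot distinguish from `state`: they see
    the other two cards but not their own."""
    return {s for s in common
            if all(s[j] == state[j] for j in range(3) if j != player)}

def can_deduce(player, state, common):
    """The player knows their card iff every indistinguishable state
    agrees on their own coordinate."""
    return len({s[player] for s in possibility(player, state, common)}) == 1

# The friend's announcement "not every card is black" removes the
# all-black world from the commonly possible states.
common = states - {("B", "B", "B")}

announcements = []
for player in range(3):
    if can_deduce(player, true_state, common):
        announcements.append((player, "my card is red"))
        break
    announcements.append((player, "pass"))
    # A pass is public: prune every world in which this player
    # would have been able to deduce their card.
    common = {s for s in common if not can_deduce(player, s, common)}

print(announcements)
```

Running this at the all-red state, the first two players pass and the third deduces and announces red, exactly as the abstract's argument predicts: the refinement uses only the commonly known protocol, never a privately observed card.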
Fast and flexible: Human program induction in abstract reasoning tasks
Wai Keen Vong
The Abstraction and Reasoning Corpus (ARC) is a challenging program induction dataset recently proposed by Chollet (2019). Here, we report the first set of results collected from a behavioral study of humans solving a subset of tasks from ARC (40 out of 1000). Although this subset of tasks contains considerable variation, our results showed that humans were able to infer the underlying program and generate the correct test output for a novel test input, with an average of 80% of tasks solved per participant and 65% of tasks solved by more than 80% of participants. Additionally, we find interesting patterns of behavioral consistency and variability in the action sequences produced during the generation process, in the natural language descriptions of each task's transformation, and in the errors people made. Our findings suggest that people can quickly and reliably determine the relevant features and properties of a task to compose a correct solution. Future modeling work could incorporate these findings, potentially by connecting the natural language descriptions we collected here to the underlying semantics of ARC.
Explaining away differences in face matching
Jordan W. Suchow
Unfamiliar face processing is often studied in the context of face matching, where an observer judges whether two images depict the same individual. On matching trials, the two images depict the same person but differ in appearance because of intervening factors. On non-matching trials, the two images depict different people, chosen in part because of their resemblance to each other. Accurate performance benefits from a representation of identity that is invariant both to state-based changes (e.g., in viewpoint, pose, and illumination) and to structural or surface-level changes to the faces themselves — e.g., those caused by aging or body modification. Here, we cast the problem of face matching as one of causal inference in which the observer infers whether the depicted person underwent a transformation or is a different person. We introduce a causal model of face matching in which the observer infers which factor best explains the observed differences between a pair of faces. Our model produces a classic phenomenon in causal inference — explaining away — whereby two independent causes become dependent conditioned on a common effect. We then provide support for the model in two experiments that asked participants to make face-matching determinations and explain them. We find that observers have a rich understanding of the causal mechanisms that affect identity and appearance and can use that knowledge to make accurate inferences unattainable by approaches that rely only on feature detection and comparison.
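The explaining-away effect can be demonstrated in a toy version of such a causal model. All quantities below are hypothetical, chosen only to illustrate the structure: two independent causes feed a noisy-OR effect, and learning that one cause is present lowers the posterior on the other.

```python
from itertools import product

# T = the person underwent a transformation (e.g., aging),
# D = the images depict different people,
# O = the observer notices a difference between the images.
# Priors and likelihood parameters are purely illustrative.
p_T, p_D = 0.3, 0.2
LEAK = 0.05  # chance of a noticed difference with neither cause present

def p_obs(t, d):
    """Noisy-OR likelihood that a difference is observed."""
    return 1 - (1 - LEAK) * (1 - 0.9) ** t * (1 - 0.95) ** d

def posterior_d(t_known=None):
    """P(D=1 | O=1[, T=t_known]) by brute-force enumeration."""
    num = den = 0.0
    for t, d in product((0, 1), repeat=2):
        if t_known is not None and t != t_known:
            continue
        w = (p_T if t else 1 - p_T) * (p_D if d else 1 - p_D) * p_obs(t, d)
        den += w
        if d:
            num += w
    return num / den

before = posterior_d()           # P(different person | difference observed)
after = posterior_d(t_known=1)   # ...additionally knowing T occurred
print(before, after)
```

Although T and D are independent a priori, conditioning on the observed difference makes them dependent: once a transformation is known to have occurred, it "explains away" the difference, and the posterior on a different person drops.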
Instance-based cognitive modeling: a machine learning perspective
The Cognitive Instance-Based Learning (CogIBL) model is a cognitive framework implemented within the constraints of ACT-R principles. This formulation, though, defined within the field of Cognitive Science, does not reveal the model's full strength and capabilities. In this work, we show that CogIBL essentially implements kernel smoothing, a non-parametric supervised-learning function approximation method. Under this perspective, abstracted from cognitive concepts and expressed as a statistical learning algorithm, we argue that all of CogIBL's implementations fall under two main learning paradigms: Supervised Learning and Reinforcement Learning. This new perspective has multiple benefits. First, it reveals CogIBL's structural differences from parametric approaches such as neural networks, links it with well-studied statistical learning theory that provides theoretical guarantees of convergence, reveals its properties in full, and establishes good evaluation practices, highlighting where the model should be expected to perform well and why. Second, under the new formulation, the model can be implemented with popular tensor libraries such as TensorFlow and PyTorch, making it scalable and fully parallelizable. This enables it to interact with prevalent Reinforcement Learning libraries such as OpenAI Gym and DeepMind Lab, be trained in parallel with synchronous updates, and output multiple decisions at the same time. Finally, we discuss what this new approach reveals about the strengths and weaknesses of the model and how a modeler can benefit from them.
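The statistical scheme the abstract identifies can be sketched generically. This is a textbook Nadaraya-Watson kernel smoother, not the authors' CogIBL implementation; the Gaussian kernel, bandwidth, and one-dimensional data are illustrative assumptions.

```python
import numpy as np

def kernel_smooth(queries, X, y, bandwidth=0.3):
    """Nadaraya-Watson kernel smoothing: each prediction is a
    similarity-weighted average over all stored instances."""
    # Gaussian similarity between every query and every stored instance.
    d2 = (queries[:, None] - X[None, :]) ** 2
    w = np.exp(-d2 / (2 * bandwidth**2))
    return (w * y).sum(axis=1) / w.sum(axis=1)

# Store noisy instances of a function, then blend them at new points:
# predictions come from remembered instances, not fitted parameters.
rng = np.random.default_rng(2)
X = rng.uniform(0, 2 * np.pi, 200)
y = np.sin(X) + rng.normal(0, 0.1, 200)

queries = np.array([np.pi / 2, np.pi, 3 * np.pi / 2])
preds = kernel_smooth(queries, X, y)
print(preds)  # roughly sin at each query point
```

The contrast with parametric approaches is visible in the code: no weights are trained; generalization comes entirely from the similarity-weighted blend over stored instances, and the whole computation is a few tensor operations that map directly onto libraries such as TensorFlow or PyTorch.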