Formal analysis
Choosing an element from an offered set of alternatives is arguably the most basic paradigm of preference behavior. Typically, if the same set is offered several times, the choice will not always be the same. This is often attributed to the participant's preference fluctuating over time due to the effect of the various alternatives being compared, or to the difficulty of distinguishing between similar alternatives. Theories of best-choice behavior try to account for the probability of choosing an alternative y from an offered set Y, a subset of a base set X. This intrinsic randomness leads naturally to postulating the existence of a random variable U(x), for each alternative x in Y, representing the momentary strength of preference for alternative x. Alternative y is chosen from Y if the momentary (sampled) value of U(y) exceeds that of every other alternative; this is known as a random utility model (RUM). Falmagne (1978) showed that nonnegativity of certain linear combinations of choice probabilities (Block-Marschak polynomials) is necessary and sufficient for the existence of a RUM representation of best-choice probabilities. Marley & Louviere (2005) proposed an alternative task, in which a participant is asked to select both the best and the worst option in the available subset of options Y. Let B(b,w,Y) be the probability that a participant chooses b as the best and w as the worst alternative in the set Y. Here I show that non-negativity of best-worst Block-Marschak polynomials, appropriately defined, is necessary and sufficient for the existence of a RUM representation of best-worst choice probabilities. The theorem is obtained by extending proof techniques for the corresponding result on best choices (Falmagne, 1978).
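As a concrete illustration of the random utility idea described above, the following minimal Python sketch estimates best-worst choice probabilities B(b, w, Y) by Monte Carlo, assuming independent Gumbel-distributed momentary utilities around fixed mean strengths (one convenient RUM). The mean strengths, function names, and sample sizes are illustrative assumptions, not part of the abstract or its theorem.

```python
import numpy as np

rng = np.random.default_rng(0)

def best_worst_probs(utilities_sampler, Y, n_samples=100_000):
    """Monte Carlo estimate of B(b, w, Y): the probability that b is chosen as
    best and w as worst from the offered set Y, under a random utility model
    in which a momentary utility value is sampled for each alternative."""
    Y = list(Y)
    counts = {(b, w): 0 for b in Y for w in Y if b != w}
    for _ in range(n_samples):
        u = utilities_sampler(Y)           # momentary strengths U(x), x in Y
        b = max(Y, key=u.__getitem__)      # best = largest sampled utility
        w = min(Y, key=u.__getitem__)      # worst = smallest sampled utility
        counts[(b, w)] += 1
    return {bw: c / n_samples for bw, c in counts.items()}

# One convenient RUM: independent Gumbel noise around fixed (made-up) mean strengths.
means = {"a": 1.0, "b": 0.5, "c": 0.0}
sampler = lambda Y: {x: means[x] + rng.gumbel() for x in Y}

B = best_worst_probs(sampler, {"a", "b", "c"})
print(B[("a", "c")])   # estimated probability that a is best and c is worst
```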
Dr. Janne Kujala
Víctor Hernando Cervantes Botero
Many if not all objects of research, be it in psychology, quantum physics, computer science, etc., can be represented by systems of random variables, in which each variable is identified by what it measures (what question it answers) and by its context, the conditions under which it is recorded. Systems can be contextual or noncontextual, contextuality meaning that contexts force random variables answering the same question to be more dissimilar than they are in isolation. There is a consensus that it is useful to measure the degree of contextuality when a system is contextual. Measures of noncontextuality, however, have not been proposed until very recently. We will outline a theory of contextuality measures and noncontextuality measures applied to an important class of systems, called cyclic. Using the example of a cyclic system of rank 2 (the smallest nontrivial system, formalizing, e.g., question order effects in psychology), we explain why measures of noncontextuality are as important as measures of contextuality. Literature: Dzhafarov, E.N., Kujala, J.V., & Cervantes, V.H. (2020). Contextuality and noncontextuality measures and generalized Bell inequalities for cyclic systems. Physical Review A 101:042119 (available as arXiv:1907.03328). Erratum: Physical Review A 101:069902.
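For concreteness, here is a small Python sketch of the standard criterion for (non)contextuality of cyclic systems of ±1-valued variables in the Contextuality-by-Default framework, specialized to rank 2 (two binary questions asked in two orders). The numerical expectations are made up for illustration, and the sketch does not reproduce the specific contextuality and noncontextuality measures defined in the cited paper.

```python
from itertools import product

def s_odd(xs):
    """Maximum, over sign patterns with an odd number of minus signs, of the signed sum."""
    best = -float("inf")
    for signs in product((1, -1), repeat=len(xs)):
        if signs.count(-1) % 2 == 1:
            best = max(best, sum(s * x for s, x in zip(signs, xs)))
    return best

def cyclic_contextuality(products, marg_pairs):
    """Delta = s_odd(within-context products) - (n - 2) - sum of |marginal differences|.
    products: list of <R_k^k R_{k+1}^k> for each context k (cyclically).
    marg_pairs: list of (<R_k^k>, <R_k^{k-1}>) for each content k.
    Delta > 0 indicates contextuality for a cyclic system of +-1-valued variables."""
    n = len(products)
    icc = sum(abs(a - b) for a, b in marg_pairs)   # inconsistent connectedness
    return s_odd(products) - (n - 2) - icc

# Rank-2 example (two yes/no questions, coded +-1, asked in two orders):
# context 1 = order (q1, q2), context 2 = order (q2, q1); all numbers are illustrative.
delta = cyclic_contextuality(
    products=[0.6, -0.1],                  # <R_1^1 R_2^1>, <R_1^2 R_2^2>
    marg_pairs=[(0.2, 0.1), (0.0, 0.05)],  # (<R_1^1>, <R_1^2>), (<R_2^1>, <R_2^2>)
)
print("contextual" if delta > 0 else "noncontextual", delta)
```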
Prof. Joost Vennekens
Prof. Walter Schaeken
Prof. Lorenz Demey
There is a great similarity in the knowledge modelling process between education and knowledge engineering. In education, psychometricians and educators work together to assess students' knowledge states and what they are ready to learn next. Knowledge Space Theory (KST) maps out the knowledge structure of the different concepts that a student can learn and the dependencies among these concepts. Meanwhile, in knowledge engineering, knowledge engineers and domain experts work together to extract business knowledge so that they can automate decisions according to the client's situation. Common business knowledge representation standards such as Decision Model and Notation (DMN) provide the industry with a modelling notation that supports decision management. The similarity of the collaborations among stakeholders in the knowledge extraction process motivates us to investigate the possibility of applying KST in the industrial setting. However, KST lacks the ability to model the learning of contingent information, such as learning whether or not a given client speaks English (e.g., to determine if a translator is needed). If one learns that a particular client does in fact speak English, it becomes impossible to later learn that this same client does not speak English. This violates KST's assumption that knowledge is always cumulative. We propose as a solution to use bitstring semantics to represent the contingent knowledge. Bitstring semantics is a recent logical formalism for exploring the meaning relations between different expressions. In this talk, we will illustrate how we can extend previous work on KST with bitstring semantics to construct contingent knowledge structures.
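The following Python fragment is a minimal sketch of one way contingent information of this kind could be encoded with bitstrings; the particular encoding (bit positions, learning as bitwise AND over a partition of possibilities) is an illustrative assumption of this sketch, not the authors' construction.

```python
# Partition for the contingent question "does the client speak English?":
# bit 1 = "speaks English" still possible, bit 0 = "does not speak English" still possible.
UNKNOWN    = 0b11   # both possibilities still open
ENGLISH    = 0b10
NO_ENGLISH = 0b01

def learn(state, observation):
    """Learning as bitwise AND: it narrows the set of open possibilities.
    A result of 0 would mean the observations contradict each other."""
    new = state & observation
    if new == 0:
        raise ValueError("contradictory observations")
    return new

state = UNKNOWN
state = learn(state, ENGLISH)   # now 0b10: later learning NO_ENGLISH would contradict
print(bin(state))

# A classical KST knowledge structure, by contrast, is a family of knowledge states
# (subsets of concepts) that only ever grows along a learning path:
structure = [frozenset(), frozenset({"a"}), frozenset({"a", "b"})]
```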
Dr. Ehtibar N. Dzhafarov
Many systems in which contextuality is studied have in common that their (non)contextuality is determined by particular configurations of pairwise correlations. Such systems are used to describe the question order effect in psychology, the Einstein-Podolsky-Rosen-Bohm paradigm in quantum physics, and many other situations. The prominence of pairwise correlations leads one to the incorrect intuitive idea that all contextuality appears on the level of pairwise associations, perhaps even only within cyclic subsystems. We present a new, hierarchical measure of (non)contextuality in which contextuality may arise at the level of pairwise, triple, quadruple, etc. associations of random variables. This measure allows one to look at (non)contextuality as varying not only in degree but also in pattern.
Prof. Timothy Brady
Edward Vul
Isabella DeStefano
In many decision tasks, we have a set of alternative choices and are faced with the problem of how to take our latent preferences or beliefs about each alternative and make a single choice. For example, we must decide which item is ‘old’ in a forced-choice memory study; or which cereal we prefer in a supermarket; or which color a word is in a Stroop task. Modeling how people go from latent strengths for each alternative to a single choice is thus a critical component of nearly all cognitive and decision models. Most models follow one of two traditions to establish this link. Modern psychophysics and memory researchers make use of signal detection theory, in the tradition of Fechner (1860) and Thurstone (1929), assuming that latent strengths are perturbed by noise, and the highest resulting signal is selected (e.g., Wixted, 2020). By contrast, many modern cognitive modeling and machine learning approaches use the softmax rule to give some weight to non-maximal-strength alternatives (Luce choice axiom; Luce, 1959). Despite the prominence of these two theories of choice, current approaches rarely address the connection between them, and the choice of one or the other appears more motivated by the tradition in the relevant literature than by theoretical or empirical reasons to prefer one theory to the other. The goal of the current work is to revisit this topic by elucidating which of these two models provides a better characterization of latent processes in K-alternative decision tasks, with a particular focus on memory tasks. In line with previous work (e.g., Luce and Suppes, 1966; Yellott, 1977), we find via both simulation and mathematical proofs that the softmax and signal detection link functions can mimic each other with high fidelity in all circumstances. However, we show that while the softmax parameter varies across task structures using the same stimuli (i.e., changes when K is varied), the parameter d’ of the signal-detection model is stable. The results of these studies are consistent with the results of Treisman and Faulkner (1985) in a novel suite of memory tasks. Together, our findings indicate that replacing softmax with signal-detection link models would yield more generalizable predictions across changes in task. More ambitiously, the invariance of signal detection model parameters across different tasks suggests that the mechanisms of these models (i.e., the corruption of signals by stochastic noise) may be more than just a mathematical convenience but reflect something real about human decision-making.
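The mimicry discussed above can be made concrete with a short simulation, sketched below in Python under added assumptions (particular latent strengths, unit temperature): with i.i.d. Gumbel noise the noisy-max (signal-detection-style) rule reproduces softmax exactly, while with Gaussian noise the two link functions differ slightly but can closely approximate each other for a fixed K.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax_choice_probs(strengths, temperature=1.0):
    """Softmax (Luce-style) link from latent strengths to choice probabilities."""
    z = np.asarray(strengths) / temperature
    e = np.exp(z - z.max())
    return e / e.sum()

def noisy_max_choice_probs(strengths, noise_sampler, n_samples=200_000):
    """Signal-detection-style link: perturb each latent strength by noise, pick the max."""
    strengths = np.asarray(strengths)
    noise = noise_sampler((n_samples, strengths.size))
    winners = np.argmax(strengths + noise, axis=1)
    return np.bincount(winners, minlength=strengths.size) / n_samples

strengths = [1.0, 0.3, -0.5]   # made-up latent strengths for K = 3 alternatives

# With i.i.d. Gumbel noise the noisy-max rule reproduces softmax exactly
# (extreme-value RUM <-> Luce/softmax; cf. Yellott, 1977).
print(softmax_choice_probs(strengths))
print(noisy_max_choice_probs(strengths, lambda size: rng.gumbel(size=size)))

# With Gaussian noise (classical signal detection) the probabilities differ slightly.
print(noisy_max_choice_probs(strengths, lambda size: rng.normal(size=size)))
```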
Akrenius (2020) proposed a novel probability weighting function, Valence-Weighted Distance (VWD), which builds on the notion that a reduction in uncertainty carries psychological utility. VWD presumes that a probability is evaluated relative to a plausible expectation (uniformity), and that the perceived distance between the probability and uniformity is influenced by the entropy of the distribution in which the probability is embedded. VWD reproduces the characteristic shape of existing probability weighting functions, makes novel predictions, and provides a parsimonious explanation for findings in probability- and frequency-estimation tasks. To account for individual differences, VWD can be complemented with the Sharma-Mittal (1975) family of entropies, which has previously been applied in models of information search and hypothesis testing (Crupi et al., 2018). I review the theory underlying VWD, introduce its extension with the Sharma-Mittal family, and present some of the theoretical and empirical implications that follow.
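Since the abstract does not give VWD's functional form, the Python sketch below implements only the Sharma-Mittal entropy family with which VWD is proposed to be complemented; the parameter names ("order", "degree") and the limiting cases follow the standard two-parameter formulation, and the example distributions are made up.

```python
import numpy as np

def sharma_mittal_entropy(p, order, degree):
    """Sharma-Mittal (1975) two-parameter entropy of a probability distribution p.
    Limiting cases: degree -> 1 gives Renyi entropy of the given order,
    order == degree gives Tsallis entropy, and order, degree -> 1 gives Shannon."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                                   # 0 * log 0 = 0 convention
    if np.isclose(order, 1.0) and np.isclose(degree, 1.0):
        return -np.sum(p * np.log(p))                        # Shannon entropy
    if np.isclose(degree, 1.0):
        return np.log(np.sum(p ** order)) / (1.0 - order)    # Renyi entropy
    if np.isclose(order, 1.0):                     # order -> 1 limit, degree != 1
        shannon = -np.sum(p * np.log(p))
        return (np.exp((1.0 - degree) * shannon) - 1.0) / (1.0 - degree)
    power = (1.0 - degree) / (1.0 - order)
    return (np.sum(p ** order) ** power - 1.0) / (1.0 - degree)

uniform = [0.25, 0.25, 0.25, 0.25]
skewed = [0.7, 0.1, 0.1, 0.1]
print(sharma_mittal_entropy(uniform, order=1.0, degree=1.0))  # maximal for uniform
print(sharma_mittal_entropy(skewed, order=2.0, degree=0.5))
```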