# Axiomatics and Formal Analysis

Dr. Yung-Fong Hsu

Writing ξ_{s}(x) for the stimulus intensity judged greater (louder, heavier, brighter) than stimulus intensity x with criterion s, Iverson (2006) proposed a law of similarity ξ_{s}(λx) = γ(λ,s)ξ_{η(λ,s)}(x) to model the dependence of ξ_{s}(x) on x. This model, with parameters η(λ,s) and γ(λ,s), is quite general and applies to a number of situations in psychophysics. Iverson (2006) analyzed this model assuming the representation s = u(ξ_{s}(x)) − u(x) and derived the possible functional forms for the scale u. In the present work, we extend the analysis to the more general representation s = u(ξ_{s}(x)) − w(x) and derive the forms of the scales u and w. We avoid the assumption of differentiability, replacing it with an assumption either of non-constancy or of dependence on only one input variable. We find that for some solutions w has the same form as u, reflecting the special case u = w, while for other solutions w takes a different form than u. Comparisons are made to Iverson (2006) and to other work.
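As a concrete illustration (not part of the abstract's derivation), consider the Fechnerian special case u = w with u(x) = a·ln(x). Solving s = u(ξ_{s}(x)) − u(x) then gives ξ_{s}(x) = x·exp(s/a), and the law of similarity holds with γ(λ,s) = λ and η(λ,s) = s. A minimal numeric check, with all parameter values chosen arbitrarily:

```python
import math

# Assumed special case: u(x) = a*ln(x), so that s = u(xi) - u(x)
# yields xi_s(x) = x * exp(s / a).
def xi(s, x, a=1.0):
    return x * math.exp(s / a)

lam, s, x, a = 2.0, 0.5, 3.0, 1.0
lhs = xi(s, lam * x, a)       # xi_s(lam * x)
rhs = lam * xi(s, x, a)       # gamma(lam, s) * xi_{eta(lam, s)}(x), gamma = lam, eta = s
print(abs(lhs - rhs) < 1e-12)  # True
```

The check confirms that the similarity law is satisfied in this special case; Iverson's and the present analysis characterize all scales u (and u, w) compatible with the law.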

I. R. Goodman

Hung T. Nguyen

Suppose that an agent is asked to rank the elements of a finite set U, starting with the most preferred and ending with the least. These rank orders may vary stochastically from occasion to occasion. Let PUrank denote the agent’s probability distribution of rankings of U. Duncan Luce’s well-known Choice Axiom, together with his Ranking Postulate, implies that PUrank will be a member of the Plackett-Luce family. We derive Luce’s Choice Axiom, rather than assuming it. Suppose that T and S are any disjoint sets whose union is U. Let Tspec and Sspec denote any specifications of the preference order over the elements of T and S, respectively. Let Sfirst denote the event that every element of S is preferred to every element of T. Our Axiom of Independence from the Past (IFP) states that the events Tspec and Sspec-intersection-Sfirst are independent under PUrank. This axiom implies that PUrank is a Plackett-Luce distribution. Our Rational Choice Axiom states that, when the agent chooses an element from a subset T of U, the agent consults its preference ranking over U and selects the element of T that is highest ranked. Together, this axiom and the IFP Axiom imply Luce’s Choice Axiom. In addition, we formulate a ranking mechanism, based on Goodman and Nguyen’s Product Space Conditional Event Algebra, whose behavior conforms to the IFP Axiom.
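The link between Plackett-Luce rankings and Luce's Choice Axiom can be verified numerically. The sketch below (with hypothetical strength parameters v) enumerates all rankings of a small U, applies the Rational Choice rule (pick the highest-ranked element of T), and checks that the resulting choice probabilities match the Choice Axiom's prediction v_i / Σ_{j∈T} v_j:

```python
from itertools import permutations

def pl_prob(ranking, v):
    # Plackett-Luce probability of a full ranking (best first):
    # at each stage, pick item i with probability v[i] / (sum over remaining)
    p, remaining = 1.0, sum(v[i] for i in ranking)
    for i in ranking:
        p *= v[i] / remaining
        remaining -= v[i]
    return p

v = {'a': 2.0, 'b': 1.0, 'c': 0.5, 'd': 1.5}  # hypothetical strengths
U = list(v)
T = {'a', 'c'}

# Rational Choice: choosing from T means selecting the element of T
# that appears first in the agent's ranking of U
choice_prob = {t: 0.0 for t in T}
for r in permutations(U):
    best_in_T = next(i for i in r if i in T)
    choice_prob[best_in_T] += pl_prob(r, v)

# Luce's Choice Axiom prediction: v_i / sum of v over T
for t in T:
    print(abs(choice_prob[t] - v[t] / sum(v[s] for s in T)) < 1e-9)  # True
```

This agrees with a standard property of Plackett-Luce models: the probability that i outranks all other elements of T equals v_i / Σ_{T} v_j.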

In 1969 Sternberg introduced the notion of experimental factors selectively influencing random variables representing mental processes. In the early 2000s this notion was extended to stochastically non-independent variables. Traditionally one uses it by postulating a pattern of selective influences and reconstructing "mental architectures" from the overall effect of the experimental factors on some overall measure of performance, e.g., response time. However, whenever one relates experimental factors to random variables that are directly observable, one finds that the pattern of influences is not selective: it invariably violates marginal selectivity, the crudest necessary condition of selectiveness. This prevents one from applying selective influences to such seemingly closely related issues as separability/integrality of perceptual tasks. In fact, the notion of selective influences seems to have no applications except to hypothetical variables one cannot observe — an intellectually unsatisfactory situation. The modern contextuality analysis, in which the theory of selective influences converges with the foundations of quantum mechanics, provides a way out, and offers a powerful mathematical language for addressing in a new way a variety of traditional issues, including the separability/integrality one. The notion of selective influences is a special case of a noncontextual system with marginal selectivity. However, in the theory dubbed Contextuality-by-Default, a system can be noncontextual or contextual irrespective of whether marginal selectivity is satisfied, and the degree of both contextuality and noncontextuality can be measured together with the degree of marginal selectivity. The theory also has prominent applications outside psychology, e.g., in quantum physics and computer science.
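For consistently connected cyclic systems of binary (±1) variables — i.e., when marginal selectivity is satisfied — a known contextuality criterion can be sketched in a few lines: a cyclic system of rank n is noncontextual iff s_odd of the n product expectations does not exceed n − 2, where s_odd is the maximum of Σ ±x_i over sign patterns with an odd number of minuses. The example values below are the standard CHSH-type (rank 4) cases, used purely for illustration:

```python
import math

def s_odd(xs):
    # max of sum(±x_i) over sign patterns with an odd number of minuses:
    # sum|x_i| if an odd number of the x_i are negative,
    # otherwise sum|x_i| minus twice the smallest |x_i|
    total = sum(abs(x) for x in xs)
    negatives = sum(1 for x in xs if x < 0)
    return total if negatives % 2 == 1 else total - 2 * min(abs(x) for x in xs)

# Cyclic system of rank 4 (CHSH-type), consistently connected:
# noncontextual iff s_odd(product expectations) <= n - 2 = 2
classical = [1.0, 1.0, 1.0, 1.0]                        # deterministic system
quantum = [math.sqrt(2) / 2] * 3 + [-math.sqrt(2) / 2]  # Tsirelson-bound correlations
print(s_odd(classical) <= 2)  # True  (noncontextual)
print(s_odd(quantum) > 2)     # True  (contextual, s_odd = 2*sqrt(2))
```

When marginal selectivity is violated, Contextuality-by-Default extends this criterion by subtracting a term measuring the marginal differences; that extension is omitted here.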

The Contextuality-by-Default theory describes contextual effects on random variables: how the identity of random variables changes from one context to another. Direct influences and true contextuality constitute different types of effects of contexts upon sets of random variables. Changes in the distributions of random variables across contexts define direct influences. True contextuality is defined by the impossibility of sewing all the variables of a system of random variables into a particular overall joint distribution where variables that correspond to the same property in different contexts are equal to each other as often as possible. For systems of binary random variables, the theory shows that, in cyclic systems, the two effects are in opposition. For the extension of the theory to systems with categorical random variables, I will present the nominal dominance theorem, which states a necessary condition for noncontextuality of systems where all dichotomizations of categorical variables are considered. This condition shows a case where direct effects may entail true contextuality. I will also illustrate the application of this theorem to the results of a psychophysical double-identification experiment.
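The phrase "equal to each other as often as possible" can be made concrete for a single pair of binary variables: if X (in one context) has P(X=1) = p and Y (in another context) has P(Y=1) = q, then in any coupling Pr[X = Y] ≤ 1 − |p − q|, with the bound attained by the maximal coupling. A brute-force sketch, with p and q chosen arbitrarily:

```python
def max_equal_prob(p, q):
    # maximal-coupling bound for two binary variables:
    # max over couplings of Pr[X = Y] equals 1 - |p - q|
    return 1.0 - abs(p - q)

p, q = 0.7, 0.4  # hypothetical context-dependent means
# A coupling is a 2x2 table with margins (p, 1-p) and (q, 1-q);
# its one free parameter is a = Pr[X=1, Y=1]. Search it on a grid.
best = 0.0
for k in range(100001):
    a = (k / 100000) * min(p, q)   # a must lie in [max(0, p+q-1), min(p, q)]
    d = 1 - p - q + a              # Pr[X=0, Y=0], forced by the margins
    if d >= -1e-12:
        best = max(best, a + d)    # Pr[X = Y] for this coupling
print(abs(best - max_equal_prob(p, q)) < 1e-6)  # True
```

Identifying, across the whole system, whether such maximal pairwise couplings can be combined into one overall joint distribution is exactly the (non)contextuality question; the nominal dominance theorem addresses the categorical case via dichotomizations.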

Ever since the inception of the notion of perceptual independence, questions related to the processing of perceptually independent or separable dimensions have been intertwined with assumptions made about perceptual distributions, informational overlap, and channel capacity. Even though several successful empirical protocols and theoretical frameworks have been proposed for recognizing violations of perceptual independence or separability, few of them dissociate between different kinds of violations or address the potentially separate cognitive processes underlying these violations. Furthermore, despite strong historical connections to Garner et al.’s work at the intersection of information theory and perceptual independence, few approaches take advantage of these connections. We revive Garner and Morton’s (1969) classic uncertainty analysis, combine it with contemporary information-theory-based tools and metrics, and reanalyze a set of simulated and empirically observed confusion matrices from modern studies. Our results shed light on the locus of interaction of perceptually integral dimensions, help build bridges between different notions of perceptual separability, and identify areas of research in which uncertainty analysis could complement existing methods for inferring perceptual processes.
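The core quantity in Garner and Morton's uncertainty analysis is the contingent uncertainty U(S:R) = U(S) + U(R) − U(S,R), i.e., the mutual information (in bits) between stimulus and response computed from a confusion matrix. A minimal sketch, applied to a hypothetical 2×2 confusion matrix:

```python
import math

def entropy(ps):
    # Shannon entropy in bits of a probability vector (zeros skipped)
    return -sum(p * math.log2(p) for p in ps if p > 0)

def contingent_uncertainty(cm):
    # cm[i][j]: count of stimulus i receiving response j;
    # returns U(S:R) = U(S) + U(R) - U(S,R) in bits
    n = sum(map(sum, cm))
    joint = [c / n for row in cm for c in row]
    stim_p = [sum(row) / n for row in cm]
    resp_p = [sum(col) / n for col in zip(*cm)]
    return entropy(stim_p) + entropy(resp_p) - entropy(joint)

cm = [[40, 10], [10, 40]]  # hypothetical confusion counts, 80% correct
print(round(contingent_uncertainty(cm), 3))  # 0.278
```

The full multivariate analysis partitions uncertainty further (e.g., into stimulus-contingent components across dimensions), but every component reduces to entropy differences of this kind.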
