Conjoint features and inductive category learning
In the traditional artificial classification learning paradigm, each training item is typically a single object composed of values along particular object features (e.g., shape, size, shading, length of tail). We investigate an alternative framework for inductive category learning in which stimuli consist of pairs of items and the diagnostic basis for classification is conjoint features: properties of the stimulus that arise from a relative evaluation of the traditional dimension values of the items in the pair. For example, if a pair consisted of a small white circle and a large black circle, the identity match between the items on the shape dimension would be a conjoint feature that might predict the category label. Under what conditions can people learn categories based on such features? Further, to what extent does this ability reflect common or distinct machinery relative to traditional inductive category learning? In a series of experiments, we trained subjects to categorize stimuli consisting of two fish that each varied along one traditional dimension: body length. Fish pairs of similar length belonged to one category, while fish pairs of different lengths belonged to the other. We found that subjects successfully leveraged the conjoint feature based on the relative comparison of alignable stimulus feature values (body length). Further, we tested generalization performance on novel items (previously unseen pairs) and found evidence of both graded and non-graded generalization gradients, depending on the category structure observed during training. We propose a modeling approach that accounts for these results in terms of neural networks incorporating a design principle of simple preprocessing layers that recode the input in terms of pairwise hypotheses such as 'same-value.'
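As an illustration of the final sentence's design principle, the sketch below builds a minimal network with a fixed preprocessing layer that recodes a pair of body lengths into a graded 'same-value' conjoint feature, followed by a single logistic output unit trained by gradient descent. The stimulus ranges, the exponential similarity recoding, and all function names are assumptions for illustration, not the paper's actual model or stimuli.

```python
import math
import random

random.seed(0)

# Hypothetical preprocessing layer: recode a stimulus pair (two fish body
# lengths) into conjoint features, here a graded 'same-value' unit that
# equals 1.0 when the lengths match and decays as they differ, plus a bias.
def conjoint_features(a, b):
    return [math.exp(-abs(a - b)), 1.0]

# Toy training set (illustrative, not the paper's stimuli): similar-length
# pairs belong to category 1, different-length pairs to category 0.
def make_pair(same):
    a = random.uniform(1.0, 5.0)
    if same:
        b = a + random.uniform(-0.2, 0.2)
    else:
        b = a + random.choice([-1, 1]) * random.uniform(1.5, 3.0)
    return (a, b), 1.0 if same else 0.0

data = [make_pair(i % 2 == 0) for i in range(200)]

# Single logistic output unit trained on the recoded features.
w = [0.0, 0.0]
for _ in range(300):
    for (a, b), label in data:
        x = conjoint_features(a, b)
        p = 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1])))
        for i in range(2):
            w[i] += 0.1 * (label - p) * x[i]  # cross-entropy gradient step

def classify(a, b):
    x = conjoint_features(a, b)
    p = 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1])))
    return int(p > 0.5)

print(classify(3.0, 3.1))  # similar lengths -> 1
print(classify(2.0, 4.5))  # dissimilar lengths -> 0
```

Because the preprocessing layer makes the 'same-value' hypothesis linearly separable, the output unit can learn the same/different category structure that raw length inputs alone would not support with a single linear unit.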