
Using cross-validation to determine dimensionality in multidimensional scaling

Authors
Dr. Russell Richie
University of Pennsylvania ~ Psychology
Steven Verheyen
Erasmus University Rotterdam, The Netherlands ~ Department of Psychology, Education & Child Studies
Abstract

Multidimensional scaling (MDS) is a popular technique for embedding items in a low-dimensional spatial representation from a matrix of the dissimilarities among items (Shepard, 1962). MDS has been used simply as a visualization aid or dimensionality reduction technique in statistics and machine learning applications, but in cognitive science, MDS has also been interpreted as a cognitive model of similarity perception or similarity judgment, and is often part of a larger framework for modeling complex behaviors like categorization (Nosofsky, 1992) or generalization (Shepard, 2004).

However, a persistent challenge in the application of MDS is selecting the latent dimensionality of the inferred spatial representation; the dimensionality is a hyperparameter that the modeler must specify when fitting MDS. Perhaps the most well-known procedure for selecting dimensionality is constructing a scree plot of residual stress (the difference between empirical dissimilarities and the dissimilarities implied by the model) as a function of dimensionality, and then looking for an elbow: the dimensionality where stress has decreased dramatically but then plateaus. This elbow is taken to indicate that extending the space with additional dimensions does not substantially improve the fit of the model to the input similarities. Unfortunately, this procedure is highly subjective. Often such elbows do not exist, and instead the scree plots show a smooth decrease in stress as MDS increasingly overfits to noise at higher dimensionalities.

In response, various more principled statistical techniques for model selection have been proposed that account for the trade-off between model complexity (dimensionality) and model fit (stress), including likelihood ratio tests (Ramsay, 1977), BIC (Lee, 2001), and Bayes factors (Gronau & Lee, in press). While such techniques are valuable, they can be prohibitively computationally complex for novice MDS users, and they rely on a number of assumptions that are not necessarily met (e.g., Storms, 1995).

An alternative technique that may avoid such problems is cross-validation. Under this approach, MDS of a given dimensionality would be fit to some subset of the available dissimilarity data, the model’s predicted distances for held-out dissimilarity data would be evaluated, and the dimensionality that maximizes performance on the held-out data would be selected. Despite its simplicity and generality as a model selection procedure, cross-validation has seen relatively little application to MDS or related methods (Steyvers, 2006; Roads & Mozer, 2019; Gronau & Lee, in press), with no systematic exploration of its capabilities, as there has been for likelihood ratio tests, BIC, and Bayes factors (Ramsay, 1977; Lee, 2001; Gronau & Lee, in press). In the present work, we therefore examine the usefulness of cross-validation over cells of a dissimilarity matrix in simulations and applications to empirical data.
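To make the cross-validation procedure concrete, the following is a minimal sketch in Python. It is not the authors' implementation: it assumes a simple gradient-descent fit of raw stress with a binary mask over matrix cells, and the function names (mds_stress_fit, cv_select_dimensionality), fold assignment, and optimizer settings are illustrative choices of this sketch, not details from the presentation.

```python
# Sketch: cross-validation over cells of a dissimilarity matrix for MDS
# dimensionality selection. Assumes a hand-rolled gradient-descent MDS.
import numpy as np

def mds_stress_fit(delta, mask, n_dims, n_iter=2000, lr=0.01, seed=0):
    """Fit MDS coordinates by gradient descent on raw stress over masked cells.

    delta : (n, n) symmetric dissimilarity matrix
    mask  : (n, n) boolean array; True = cell used for training
    """
    rng = np.random.default_rng(seed)
    n = delta.shape[0]
    X = rng.normal(scale=0.1, size=(n, n_dims))
    w = mask.astype(float)
    np.fill_diagonal(w, 0.0)                      # ignore self-dissimilarities
    for _ in range(n_iter):
        diff = X[:, None, :] - X[None, :, :]      # pairwise coordinate differences
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, 1.0)               # avoid division by zero
        resid = w * (dist - delta)
        grad = 2.0 * (resid / dist)[:, :, None] * diff
        X -= lr * grad.sum(axis=1)
    return X

def heldout_error(delta, X, test_mask):
    """Mean squared error between model distances and held-out dissimilarities."""
    diff = X[:, None, :] - X[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    return np.mean((dist - delta)[test_mask] ** 2)

def cv_select_dimensionality(delta, dims=range(1, 6), n_folds=5, seed=0):
    """Assign upper-triangle cells to folds; pick the dimensionality with lowest CV error."""
    rng = np.random.default_rng(seed)
    n = delta.shape[0]
    rows, cols = np.triu_indices(n, k=1)
    folds = rng.integers(0, n_folds, size=len(rows))
    cv_error = {}
    for d in dims:
        errs = []
        for f in range(n_folds):
            test = np.zeros((n, n), dtype=bool)
            test[rows[folds == f], cols[folds == f]] = True
            test |= test.T                        # keep the mask symmetric
            X = mds_stress_fit(delta, ~test, d, seed=seed)
            errs.append(heldout_error(delta, X, test))
        cv_error[d] = np.mean(errs)
    return min(cv_error, key=cv_error.get), cv_error
```

In this sketch the held-out cells are simply excluded from the stress being minimized, and the model's Euclidean distances for those cells are then compared against the withheld dissimilarities; the dimensionality with the lowest mean held-out error across folds is selected.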

Discussion
Great presentation!

Hi Russell, that's a really good presentation, clear and well-paced! A quick question: did you test how well cross-validation does in recovering the dimensionality, compared with other methods? If cross-validation can do at least equally well and is simpler to implement, that'll be really good IMO. Lisheng

Dr. Lisheng He
A couple of questions

Interesting work! 1) There's no reason the cross-validation could not be applied to non-Euclidean metrics, right? 2) Is it a problem if noisy data generated in one dimensionality are inferred to come from a lower dimensionality? I don't see any reason for the inference to have to match the "truth". In the limit of very large levels of noise, ...

Michael Lee
Cite this as:

Richie, R., & Verheyen, S. (2020, July). Using cross-validation to determine dimensionality in multidimensional scaling. Paper presented at Virtual MathPsych/ICCM 2020. Via mathpsych.org/presentation/166.