Towards a method for evaluating convergence across modeling frameworks
Model convergence offers an alternative approach to evaluating computational models of cognition. Convergence occurs when multiple models provide similar explanations for a phenomenon. In contrast to competitive comparisons, which focus on model differences, identifying areas of convergence can provide evidence for overarching theoretical ideas. We proposed criteria for convergence that require models to be high in both predictive and cognitive similarity. We then used a cross-fitting method to explore the extent to which models from distinct computational frameworks---quantum cognition and the cognitive architecture ACT-R---converge on explanations of the interference effect. Our analysis revealed the models to be moderately high in predictive similarity but mixed in cognitive similarity. Though convergence was limited, the analysis suggests that interference effects emerge from interactions between uncertainty and the degree to which an individual relies on typical cases when making decisions. This result demonstrates the utility of convergence analysis as a method for integrating insights from multiple models.