A comparison of retrofitting multiple knowledge structures to a cognitive diagnostic assessment
Cognitive Diagnostic Models (CDMs) are widely used psychometric models that assume the probability an examinee answers an exam item correctly depends on the examinee's binary-valued latent skills. The skill requirements are formalized by the examiner in a Q-matrix, which specifies the skills needed to answer an exam item correctly with high probability. Because the Q-matrix may not be known a priori, several studies have evaluated ways to retrofit a Q-matrix to existing assessments (see Ravand and Baghaei, 2019, for a review). In the current experiment, we examined the model fit of two different approaches to constructing a Q-matrix for an undergraduate course (n = 79). In the top-down approach, each course-level learning objective is used as a skill by itself or broken into subcategories, and groups of exam items are then associated with the relevant subcategories. In the bottom-up approach, skills associated with individual exam items are identified, and only the most frequently used skills are included in the final analysis. Using a bootstrap simulation methodology, three model selection criteria were used to compare model fits for the two Q-matrices: the Generalized Akaike Information Criterion (GAIC), the Bayesian Information Criterion (BIC), and the Cross-Entropy Bayesian Information Criterion (XBIC) (Golden, 2020). Across variations in sample size and regularization, all three measures consistently selected the bottom-up model as the better model. The results have implications for guiding the development of methods for constructing Q-matrix specifications (i.e., skill-to-exam-item mappings).
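The abstract does not state which specific CDM was fit. As one concrete illustration of how a Q-matrix constrains item-response probabilities, the following sketch uses the DINA model, a common CDM; the Q-matrix entries and the guess/slip parameter values are hypothetical, not taken from the study:

```python
import numpy as np

# Hypothetical Q-matrix: 4 items x 3 skills; Q[j, k] = 1 means item j
# requires skill k. Entries are illustrative only.
Q = np.array([
    [1, 0, 0],
    [1, 1, 0],
    [0, 1, 1],
    [0, 0, 1],
])

def dina_prob(alpha, Q, guess, slip):
    """P(correct) for each item under the DINA model, given an
    examinee's binary skill profile `alpha`.

    eta[j] = 1 iff the examinee masters every skill item j requires;
    P(correct on item j) = 1 - slip[j] if eta[j], else guess[j].
    """
    eta = np.all(Q <= alpha, axis=1)   # mastery of all required skills
    return np.where(eta, 1.0 - slip, guess)

guess = np.full(Q.shape[0], 0.2)   # per-item guessing parameters (assumed)
slip = np.full(Q.shape[0], 0.1)    # per-item slip parameters (assumed)

alpha = np.array([1, 1, 0])        # examinee masters skills 1 and 2 only
print(dina_prob(alpha, Q, guess, slip))   # high P only on items 1 and 2
```

Under this sketch, retrofitting a Q-matrix amounts to choosing the 0/1 pattern in `Q`; the top-down and bottom-up approaches described above are two ways of generating candidate patterns, which the information criteria then compare.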