Educational Measurement and Evaluation Studies
Center of Research, Evaluation, Accreditation and Quality Assurance of Higher Education
ISSN 2476-2865 · 2016 · Journal Article · Article ID 22178 · Language: Persian (FA)

Investigating the Comparability of Ability Parameter Estimation in Computerized Adaptive and Paper-Pencil Tests

Negar Sharifi Yeganeh, MohamadReza Falsafinejad, Ali Delavar, NoorAli Farrokhi, Ehsan Jamali

Abstract:
This study investigated the comparability of ability parameter estimation in computerized adaptive testing (CAT) and paper-pencil testing, and sought the optimal CAT algorithm across two ability estimation methods (maximum likelihood and expected a posteriori) and two test termination criteria (fixed standard error and fixed test length) in a high-stakes setting. The target population consisted of examinees in the mathematics and engineering subgroup of the 2013 Iranian university entrance exam. One thousand examinees were selected by random sampling, and the mathematics items were calibrated with the three-parameter logistic (3PL) model. Forty data sets, equal in size to the real data, were simulated, and a post hoc simulation of computerized adaptive testing was applied. The results indicated a strong correlation between ability estimates from the CAT and paper-pencil administrations of the mathematics subscale. Furthermore, bias values, the average absolute difference between CAT and paper-pencil ability estimates, and the root mean squared difference showed that CAT estimation using expected a posteriori is consistent with estimation based on the whole exam, indicating that CAT can recover ability on the mathematics subscale. It was concluded that expected a posteriori estimation combined with a stopping rule of a fixed 0.3 standard error was the optimal algorithm, yielding suitable reliability, appropriate test length, and good recovery of the ability estimates.

PDF: https://jresearch.sanjesh.org/article_22178_3590b6fca8d3e7d96ae2da2779cd5bea.pdf
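The post hoc (real-data) CAT simulation described in the abstract can be sketched as follows: each examinee's recorded paper-pencil answers are re-administered adaptively under the 3PL model, ability is updated by expected a posteriori (EAP) estimation, and the test stops once the posterior standard error falls to 0.3 or the maximum length is reached. This is a minimal illustrative sketch, not the authors' implementation; the function names, the quadrature grid, the standard-normal prior, and the use of maximum Fisher information for item selection (which the abstract does not specify) are all assumptions.

```python
import numpy as np

def p3pl(theta, a, b, c):
    """3PL probability of a correct response at ability theta."""
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

def eap(responses, a, b, c, grid=None):
    """EAP ability estimate and posterior SD on a quadrature grid,
    assuming a standard-normal prior (an illustrative choice)."""
    if grid is None:
        grid = np.linspace(-4, 4, 81)
    prior = np.exp(-0.5 * grid ** 2)
    like = np.ones_like(grid)
    for u, ai, bi, ci in zip(responses, a, b, c):
        p = p3pl(grid, ai, bi, ci)
        like *= p if u == 1 else (1 - p)
    post = prior * like
    post /= post.sum()
    theta = (grid * post).sum()
    se = np.sqrt(((grid - theta) ** 2 * post).sum())
    return theta, se

def fisher_info(theta, a, b, c):
    """3PL item information at theta."""
    p = p3pl(theta, a, b, c)
    return a ** 2 * ((1 - p) / p) * ((p - c) / (1 - c)) ** 2

def post_hoc_cat(full_responses, a, b, c, se_stop=0.3, max_items=40):
    """Re-administer one examinee's recorded answers adaptively:
    pick the most informative unused item at the current EAP estimate,
    update theta, and stop when SE <= se_stop or max_items is reached."""
    used, answers = [], []
    theta = 0.0
    while len(used) < max_items:
        info = fisher_info(theta, a, b, c)
        info[used] = -np.inf                 # exclude administered items
        item = int(np.argmax(info))
        used.append(item)
        answers.append(full_responses[item])
        theta, se = eap(answers, a[used], b[used], c[used])
        if se <= se_stop:
            break
    return theta, se, len(used)
```

A typical evaluation run would compare `theta` from `post_hoc_cat` against the EAP estimate computed from all items, accumulating bias, mean absolute difference, and root mean squared difference across examinees, mirroring the comparison criteria reported in the study.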