Item Parameter Drift (IPD) in the Trends in International Mathematics and Science Study (TIMSS) among Iranian 8th Grade Students and Its Impact on Estimating the Ability Parameter

Document Type: Original Article




This study investigated Item Parameter Drift (IPD) in the TIMSS trend items administered to Iranian 8th grade students and its impact on estimating their ability parameter. The statistical population comprised all Iranian 8th grade students who participated in TIMSS 2003, 2007, and 2011; the sample consisted of 1,224 students (408 per test cycle) who responded to the 40 trend items. The presence or absence of IPD was tested with the logistic regression method in SPSS (version 23). Of the 40 trend items, two were flagged with uniform IPD and one with non-uniform IPD; however, according to Jodoin and Gierl's effect-size criteria, the flagged items showed only negligible effect sizes. To assess the impact of IPD on ability estimation, the students' ability (θ) was first estimated with all trend items included; the items flagged with IPD were then identified and removed, and θ was re-estimated, both times using the BILOG-MG software. A dependent (paired-samples) t-test was then used to examine the difference between the two sets of mean estimates and its significance. The results indicated that the observed drift had no significant impact on the estimation of the students' ability.
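The logistic regression screen described in the abstract regresses item responses on an ability proxy, a test-cycle indicator, and their interaction: a significant cycle term signals uniform IPD, a significant interaction signals non-uniform IPD, and a change in pseudo-R² between nested models serves as the effect size. A minimal NumPy-only sketch on simulated data illustrates the idea (the Newton-Raphson fitter, variable names, and simulated drift of 0.8 logits are all illustrative assumptions, not the study's SPSS procedure):

```python
import numpy as np

def logit_fit(X, y, iters=25):
    """Fit a logistic regression by Newton-Raphson; return (coefficients, log-likelihood)."""
    X = np.column_stack([np.ones(len(y)), X])     # prepend intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        H = X.T @ (X * (p * (1.0 - p))[:, None])  # information matrix
        beta += np.linalg.solve(H, X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return beta, np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

rng = np.random.default_rng(0)
n = 800
theta = rng.normal(size=n)                        # ability proxy (e.g., standardized total score)
cycle = rng.integers(0, 2, size=n).astype(float)  # 0 = earlier TIMSS cycle, 1 = later cycle

# Simulate one trend item whose difficulty drifts by 0.8 logits between cycles (uniform IPD).
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(1.2 * theta - 0.2 + 0.8 * cycle)))).astype(float)

_, ll1 = logit_fit(np.column_stack([theta]), y)                        # Model 1: ability only
_, ll2 = logit_fit(np.column_stack([theta, cycle]), y)                 # Model 2: + cycle
_, ll3 = logit_fit(np.column_stack([theta, cycle, theta * cycle]), y)  # Model 3: + interaction

lr_uniform = 2.0 * (ll2 - ll1)     # 1-df chi-square test for uniform IPD
lr_nonuniform = 2.0 * (ll3 - ll2)  # 1-df chi-square test for non-uniform IPD

ll0 = logit_fit(np.zeros((n, 0)), y)[1]  # intercept-only model, baseline for pseudo-R^2
delta_r2 = (ll3 - ll1) / -ll0            # McFadden delta pseudo-R^2 between Models 3 and 1
print(f"uniform IPD LR = {lr_uniform:.2f}, non-uniform IPD LR = {lr_nonuniform:.2f}, "
      f"delta pseudo-R^2 = {delta_r2:.4f}")
```

Each likelihood-ratio statistic is compared with the chi-square critical value (3.84 at α = .05, 1 df). Note that the published effect-size cutoffs of Jodoin and Gierl are defined for a different pseudo-R², so the McFadden ΔR² shown here is only an analogous illustration.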


Embretson, S. E., & Reise, S. P. (2000). Item response theory for psychologists (H. P. Sharifi, V. Farzad, M. Habibi, & B. Izanlou, Trans., 2009). Tehran: Roshd Publications. [In Persian]
Seif, A. A. (2011). Educational measurement, assessment, and evaluation (6th ed.). Tehran: Dowran Publications. [In Persian]
Mirzakhani, A., & Farzad, V. (2013). Constructing a TIMSS 2007 problem-solving scale and examining the problem-solving success of Iranian third-grade guidance school students in TIMSS 2007. Quarterly Journal of Education, 29 (2), 145-164. [In Persian]
Kabiri, M. (2013). Applying cognitive diagnostic assessment to determine the science skills acquired by Iranian 8th grade students based on TIMSS 2011 data (Doctoral dissertation). University of Tehran. [In Persian]
Minaei, A. (2013). Assessing construct comparability and analyzing differential item functioning (DIF) and differential block functioning (DTF) of the TIMSS 2007 8th grade science test between Iranian and American students. Quarterly of Educational Measurement, 4 (11), 109-146. [In Persian]
Minaei, A. (2012). Cognitive diagnostic modeling (CDM) of the TIMSS 2007 mathematics items for Iranian 8th grade students using the reparameterized unified model (RUM) and a comparison of male and female students' mathematical skills (Doctoral dissertation). Allameh Tabataba'i University. [In Persian]
Minaei, A., & Falsafinejad, M. R. (2010). Methods of assessing the unidimensionality of items in dichotomous IRT models. Quarterly of Educational Measurement, 3, 79-98. [In Persian]
Adedoyin, O. O. (2010). Investigating the invariance of person parameter estimates based on classical test and item response theories. International Journal of Educational Sciences, 2 (2), 107-113.
Bock, R. D., Muraki, E., & Pfeiffenberger, W. (1988). Item pool maintenance in the presence of item parameter drift. Journal of Educational Measurement, 25 (4), 275-285.
Donoghue, J. R., & Isham, S. P. (1998). A comparison of procedures to detect item parameter drift. Applied Psychological Measurement, 22 (1), 33-51.
Ercikan, K. (1998). Translation effects in international assessments. International Journal of Educational Research, 29 (6), 543-553.
Fraser, C. (1988). NOHARM: An IBM PC computer program for fitting both unidimensional and multidimensional normal ogive models of latent trait theory [Computer software]. Armidale, Australia: The University of New England.
Gessaroli, M. E., & De Champlain, A. F. (1996). Using an approximate chi-square statistic to test the number of dimensions underlying the responses to a set of items. Journal of Educational Measurement, 33 (2), 157-179.
Goldstein, H. (1983). Measuring changes in educational attainment over time: Problems and possibilities. Journal of Educational Measurement, 20 (4), 369-377.
Guo, F., & Wang, L. (2003, April). Online calibration and scale stability of a CAT program. Paper presented at the annual meeting of the National Council on Measurement in Education, Chicago, IL.
Hambleton, R. K., & Rovinelli, R. J. (1986). Assessing the dimensionality of a set of test items. Applied Psychological Measurement, 10 (3), 287-302.
Hambleton, R. K., & Traub, R. E. (1973). Analysis of empirical data using two logistic latent trait models. British Journal of Mathematical & Statistical Psychology, 24, 273-281.
Jodoin, M. G., & Gierl, M. J. (2001). Evaluating type I error and power rates using an effect size measure with the logistic regression procedure for DIF detection. Applied Measurement in Education, 14 (4), 329-349.
Kolen, M. J., & Brennan, R. L. (2004). Test equating, scaling, and linking: Methods and practices (2nd ed.). New York, NY: Springer-Verlag.
Lee, Y. S., Park, Y. S., & Taylan, D. (2011). A cognitive diagnostic modeling of attribute mastery in Massachusetts, Minnesota, and the US national sample using the TIMSS 2007. International Journal of Testing, 11 (2), 144-177.
Lord, F. M., & Novick, M. R. (1968). Statistical theories of mental test scores. Reading, MA: Addison-Wesley.
Ludlow, L. H., & O’Leary, M. (1999). Scoring omitted and not-reached items: Practical data analysis implications. Educational & Psychological Measurement, 59 (4), 615-630.
McCoy, K. M. (2009). The impact of item parameter drift on examinee ability measures in a computer adaptive environment. (Unpublished doctoral dissertation). University of Illinois at Chicago, Chicago, IL.
McDonald, R. P. (1967). Nonlinear factor analysis. Psychometric Monographs, 15, 32.
McDonald, R. P. (1982). Linear versus nonlinear models in item response theory. Applied Psychological Measurement, 6, 379-396.
McDonald, R. P. (1997). Normal-ogive multidimensional model. In W. J. van der Linden & R. K. Hambleton (Eds.), Handbook of modern item response theory (pp. 257-269). New York, NY: Springer.
McLeod, L. D., Swygert, K. A., & Thissen, D. (2001). Factor analysis for items scored in two categories. In D. Thissen & H. Wainer (Eds.), Test scoring. Mahwah, NJ: Lawrence Erlbaum.
Mullis, I. V. S., Martin, M. O., Foy, P., & Arora, A. (2012). TIMSS 2011 international results in mathematics. Amsterdam, The Netherlands: International Association for the Evaluation of Educational Achievement (IEA).
Mullis, I. V. S., Martin, M. O., Gonzalez, E. J., & Chrostowski, S. J. (2004). TIMSS 2003 international mathematics report: Findings from IEA's Trends in International Mathematics and Science Study at the fourth and eighth grades. Chestnut Hill, MA: TIMSS & PIRLS International Study Center, Boston College.
Park, Y. S., Lee, Y. S., & Xing, K. (2016). Investigating the impact of item parameter drift for item response theory models with mixture distributions. Frontiers in Psychology, 7, 255.
Reckase, M. D. (1979). Unifactor latent trait models applied to multifactor tests: Results and implications. Journal of Educational Statistics, 4, 207-230.
Risk, N. M. (2015). The Impact of Item Parameter Drift in Computer Adaptive Testing (CAT). Doctoral dissertation, University of Illinois at Chicago.
Shultz, K. S., Whitney, D. J., & Zickar, M. J. (2013). Measurement theory in action: Case studies and exercises. Routledge.
Swaminathan, H., & Rogers, H. J. (1990). Detecting differential item functioning using logistic regression procedures. Journal of Educational Measurement, 27, 361-370.
Tanaka, J. S. (1993). Multifaceted conceptions of fit in structural equation models. In K. A. Bollen & J. S. Long (Eds.), Testing structural equation models (pp. 10-39). Newbury Park, CA: Sage.
Van der Linden, W. J., & Glas, C. A. (2010). Elements of adaptive testing. New York, NY: Springer.
Wei, X. E. (2013). Impacts of Item Parameter Drift on Person Ability Estimation in Multistage Testing. Technical Report.
Wells, C. S., Subkoviak, M. J., & Serlin, R. C. (2002). The effect of item parameter drift on examinee ability estimates. Applied Psychological Measurement, 26 (1), 77-87.
Wells, C. S., Hambleton, R. K., Kirkpatrick, R., & Meng, Y. (2014). An examination of two procedures for identifying consequential item parameter drift. Applied Measurement in Education, 27, 214-231.
Wu, A. D., Li, Z., Ng, S. L., & Zumbo, B. D. (2006). Investigating and comparing the item parameter drift in the mathematics anchor/trend items in TIMSS between Singapore and the United States. Paper presented at the 32nd Annual Conference of the International Association for Educational Assessment, Singapore.
Zumbo, B. D. (1999). A Handbook on the theory and methods of differential item functioning (DIF): Logistic regression modeling as a unitary framework for binary and Likert-type (ordinal) item scores. Ottawa, ON: Directorate of Human Resources Research and Evaluation, Department of National Defense.