Evaluation of the Chemistry Questions of the 1396 (2017) National University Entrance Exam for the Experimental Sciences Group Using Item-Response Theory

Article type: Research Article

Authors

1 Assistant Professor, Department of Chemistry, Faculty of Basic Sciences, Shahid Rajaee Teacher Training University, Tehran, Iran

2 M.Sc., Department of Chemistry, Faculty of Basic Sciences, Shahid Rajaee Teacher Training University, Tehran, Iran

3 Associate Professor, Department of Educational Psychology, Islamic Azad University, Sanandaj, Iran

10.22034/emes.2021.248201

Abstract

Objective: The present study evaluates the chemistry questions of the 1396 (2017) national university entrance exam (Konkur) for the Experimental Sciences group using item-response theory.
Methods: This is an applied, descriptive-survey study. The statistical population comprises all candidates of the 2017 national entrance exam in the Experimental Sciences group; the sample consists of 5,000 answer sheets drawn from the candidates who took part that year. Computations were carried out with the NOHARM, IRTPRO, and EXCEL software packages.
Results: Based on classical test theory and item-response theory, the parameters of each question were calculated. The results showed that 7 questions fit both the one-parameter and two-parameter models, 21 questions fit the three-parameter model, and 13 questions fit none of the item-response models. Analysis with the three-parameter model showed that 26 questions have strong discrimination (a > 1.3), 6 are moderate, and 3 are weak (a < 0.65). In terms of difficulty, 11 questions were very hard (b > 1.2) and 24 were appropriate (-1.2 < b < 1.2). Questions with c ≥ 0.2 were observed, indicating the high guessability of the exam's chemistry questions.
Conclusion: This 35-question test has a difficulty coefficient of 0.3 and a guessing parameter of about 0.2, and it provides the most information and the least error for ability levels in the range -0.5 to 2. Given the frequency of candidates in this ability range, the test appears able to distinguish among candidates and to select them fairly for universities. Cronbach's alpha of the test is 0.934, indicating excellent reliability.
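The reliability figure quoted in the conclusion (Cronbach's alpha = 0.934) follows from a standard formula over the item-score matrix. A minimal sketch of that computation, using a simulated 0/1 answer matrix rather than the exam's actual data (all names and values below are hypothetical):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (examinees x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of examinees' totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 0/1 answer matrix for 200 examinees and 35 items,
# simulated from a single latent ability (NOT the exam's real data).
rng = np.random.default_rng(0)
ability = rng.normal(size=200)
answers = (ability[:, None] + rng.normal(size=(200, 35)) > 0).astype(int)
print(round(cronbach_alpha(answers), 3))
```

Because every simulated item is driven by the same latent ability, the toy matrix yields a high alpha, mirroring how a 35-item test measuring one trait can reach values like the 0.934 reported here.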

Title [English]

Evaluation of Chemistry Questions in the 2017 University Entrance Exam (Konkur) Using Item Response Theory

Authors [English]

  • Ali Reza Karami Gazafi 1
  • Sara Mehrabi 2
  • Farzad Zandi 3
1 Assistant Professor, Department of Chemistry, Faculty of Basic Sciences, Shahid Rajaee Teacher Training University, Tehran, Iran
2 M.Sc., Department of Chemistry, Faculty of Basic Sciences, Shahid Rajaee Teacher Training University, Tehran, Iran
3 Associate Professor, Department of Educational Psychology, Islamic Azad University, Sanandaj, Iran.
Abstract [English]

Objective: The aim of this study is to evaluate the chemistry questions of the 2017 university entrance exam (Konkur) using item-response theory.
Methods: This is an applied, descriptive-survey study. A total of 5,000 answer sheets of participants in the Experimental Sciences group were selected randomly as the sample. All parameters were calculated with the NOHARM, IRTPRO, and EXCEL software packages.
Results: First, the initial assumptions of item-response theory (unidimensionality and local independence) were examined. The parameters of each question, such as the difficulty and discrimination coefficients, were calculated based on classical test theory and item-response theory. The results showed that 7 questions fit well with the one- and two-parameter models, 21 questions are compatible with the three-parameter model, and 13 questions do not fit any of the IRT models. The three-parameter analysis showed that the discrimination coefficients of 26 questions are strong (a > 1.3), 6 are moderate, and 3 are weak (a < 0.65). Also, 11 questions were very difficult (b > 1.2) and 24 questions were appropriate (-1.2 < b < 1.2).
Conclusion: The results showed that the three-parameter IRT model has the best fit with the test. All questions are efficient in terms of the discrimination coefficient; the test has a high level of difficulty, and the guessing parameter is also high. The test's questions provide the highest information and the least error for a high level of ability (range -1 to 2).
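The a, b, and c parameters discussed above come from the three-parameter logistic (3PL) model. A minimal sketch of its item characteristic curve and item information function (the study itself used NOHARM and IRTPRO; the item parameters below are illustrative values at the thresholds the abstract cites, not estimates from the exam data):

```python
import numpy as np

D = 1.7  # conventional scaling constant for the logistic IRT models

def p_3pl(theta, a, b, c):
    """3PL item characteristic curve: probability of a correct response
    at ability theta, given discrimination a, difficulty b, guessing c."""
    return c + (1 - c) / (1 + np.exp(-D * a * (theta - b)))

def info_3pl(theta, a, b, c):
    """Item information function of the 3PL model."""
    p = p_3pl(theta, a, b, c)
    return (D * a) ** 2 * ((1 - p) / p) * ((p - c) / (1 - c)) ** 2

# Illustrative item at the thresholds the abstract cites:
# strong discrimination (a > 1.3) and a guessing level of c = 0.2.
for t in np.linspace(-3, 3, 7):
    print(f"theta={t:+.1f}  P={p_3pl(t, 1.4, 0.5, 0.2):.3f}  "
          f"I={info_3pl(t, 1.4, 0.5, 0.2):.3f}")
```

At theta = b the curve passes through c + (1 - c)/2, and as theta decreases the probability floors at c rather than 0, which is why a large c (here 0.2) signals guessable items.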

Keywords [English]

  • Evaluation
  • University Entrance Exam
  • Chemistry Education
  • Classic Theory
  • Item-Response Theory