Research Synthesis of Used Components, Tools and Methods in E-Learning and Presenting a Comprehensive Model for E-Learning Evaluation

Document Type: Original Article



With the development of new approaches in education and the progress of e-learning, evaluation has become a considerable challenge in this area. The purpose of this study is to extract the most important components, tools and methods for evaluating e-learning, to identify the common points among existing studies, and to present a comprehensive framework for the evaluation of e-learning systems. The researchers used the research synthesis method: from a statistical population of 78 studies on e-learning evaluation, 25 samples were selected by purposive sampling; the evaluation components, tools and methods used in these studies were extracted, and a framework for e-learning evaluation was developed through the analysis of these cases. The findings include the components, tools and methods used in e-learning evaluation across the reviewed studies, together with a comprehensive framework for this purpose. The results showed that the most important components are usability, content and information quality, accessibility, communication, interaction and interfaces, management and controllability, technical system, and services and support; the most important evaluation tools are the questionnaire, checklist and interview; and the most important methods are the case study, survey and documentary analysis.


–      Abdellatief, M.; Sultan, M.; Jabar, M. & Rusli, A. (2011). A Technique for Quality Evaluation of E-Learning from Developers Perspective. American Journal of Economics and Business Administration, 3 (1): 157-164.
–      Arbaugh, J. B. & Fich, R. B. (2007). The importance of participant interaction in online environments. Decision Support Systems. 43 (2): 853–865.
–      Ardito, C. et al. (2006). An Approach to Usability Evaluation of e-Learning Applications. Universal Access in the Information Society, 4 (3): 270-283.
–      Attwell, G. (2006). Evaluating e-learning: A guide to the evaluation of e-learning. Evaluate Europe Handbook Series, Volume 2.
–      Bias, R. G. & Mayhew, D. J. (1994). Cost-justifying usability. Boston: Academic Press.
–      Caramihai, M. & Severin, I. (2009). ELearning Tools Evaluation based on Quality Concept Distance Computing. A Case Study. World Academy of Science, Engineering and Technology, 29: 569-573.
–      Chua, B. B. & Dyson, L. E. (2004). Applying the ISO 9126 model to the evaluation of an e-learning system. In R. Atkinson, C. McBeath, D. Jonas-Dwyer & R. Phillips (Eds), Beyond the comfort zone: Proceedings of the 21st ASCILITE Conference (184-190). Perth, 5-8 December.
–      Clark, H. (2009). Handbook on planning, monitoring and evaluating for development results.
–      Cooper, H. & Hedges, L. V. (2009). Research Synthesis as a Scientific Process. In: The Handbook of Research Synthesis and Meta-Analysis, Second Edition. Russell Sage Foundation.
–      Comerchero, M. (2006). E-learning concepts and techniques. Institute for Interactive Technologies, Bloomsburg University of Pennsylvania, USA.
–      Darab, B. & Montazer, Gh. A. (2011). An eclectic model for assessing e-learning readiness in the Iranian universities. Journal of Computers & Education, 56: 900–910.
–      Deepwell, F. (2007). Embedding Quality in e-Learning Implementation through Evaluation. Educational Technology & Society, 10 (2): 34-43.
–      Douglas, E. & Van Der Vyver, G. (2004). Effectiveness of e-learning course materials for learning database management systems: An experimental investigation. Journal of Computer Information System. 41 (4): 41–48.
–      Dumas, J. S. & Redish, J. C. (1993). A practical guide to usability testing. Greenwich: Ablex.
–      Eccles, R. G. (1991). The performance measurement manifesto. Harvard Business Review. 69 (1): 131–137.
–      ECOTEC. (2007). Final Evaluation of the eLearning Programme: Annex to the Joint Report. ECOTEC eLearning Evaluation.
–      Erdogan, Y.; Bayram, S. & Deniz, L. (2008). Factors that influence academic achievement and attitudes in web based education. International Journal of Instruction, 1 (1).
–      Gilbert, J. (2007). E-learning: The student experience. British Journal of Educational Technology, 38 (4): 560–573.
–      Hand, A. (2012). Evaluating the suitability of current Authoring Tools for developing e-learning Resources. Dissertation submitted as part of the requirements for the award of the degree of MSc in Information Technology (Business).
–      Hassanzadeh, A.; Kanaani, F. & Elahi, Sh. (2012). A model for measuring e-learning systems success in universities. Expert Systems with Applications.
–      Islam, M. A. (2011). Effect of Demographic Factors on E-Learning Effectiveness in a Higher Learning Institution in Malaysia. International Education Studies, 4 (1).
–      Islas, E.; Perez, M.; Rodriguez, G.; Paredes, I.; Avila, I. & Mendoza, M. (2007). E-learning tools evaluation and roadmap development for an electrical utility. Journal of Theoretical and Applied Electronic Commerce Research (JTAER), 2 (1): 63–75.
–      Kay, R. (2011). Evaluating learning, design, and engagement in web-based learning tools (WBLTs): The WBLT Evaluation Scale. Computers in Human Behavior.
–      Kirkpatrick, D. L. (1999). Evaluación de acciones formativas: los cuatro niveles [Evaluating training activities: the four levels]. Barcelona: EPISE-Gestión.
–      Liaw, S. S.; Huang, H. M. & Chen, G. D. (2007). Surveying instructor and learner attitudes toward e-learning. Computers & Education, 49 (4): 1066–1080.
–      Liu, G.-Z.; Liu, Z.-H. & Hwang, G.-J. (2011). Developing multi-dimensional evaluation criteria for English learning websites with university students and professors. Computers & Education, 56: 65–79.
–      McArdle, G. E. (1999). Training Design and Delivery. Alexandria, VA. American Society for Training and Development.
–      Mehlenbacher, B. et al. (2005). Usable E-Learning: A Conceptual Model for Evaluation and Design. In Proceedings of HCI International 2005: 11th International Conference on Human-Computer Interaction, Volume 4: Theories, Models, and Processes in HCI. Las Vegas, NV: Mira Digital, pp. 1-10.
–      Monarri, M. (2005). Evaluation of Collaborative Tools in Web-Based E-Learning Systems. Master’s Degree Project Stockholm, Sweden.
–      Ozkan, S. & Koseler, R. (2009). Multi-dimensional students’ evaluation of e-learning systems in the higher education context: An empirical investigation. Computers & Education, 53: 1285–1296.
–      Oztekin, A. (2011). A decision support system for usability evaluation of web-based information systems. Expert Systems with Applications, 38: 2110–2118.
–      Oztekin, A.; Delen, D.; Turkyilmaz, A. & Zaim, S. (2013). A machine learning-based usability evaluation method for eLearning systems. Decision Support Systems. Available on ScienceDirect.
–      Oztekin, A.; Kong, Z. J. & Uysal, O. (2010). UseLearn: A novel checklist and usability evaluation method for eLearning systems by criticality metric analysis. International Journal of Industrial Ergonomics, 40: 455-469.
–      Oztekin, A.; Nikov, A. & Zaim, S. (2009). UWIS: An assessment methodology for usability of web-based information systems. The Journal of Systems and Software, 82: 2038–2050.
–      Ozyurt, O.; Ozyurt, H.; Baki, A.; Guven, B. & Karal, H. (2012). Evaluation of an adaptive and intelligent educational hypermedia for enhanced individual learning of mathematics: A qualitative study. Expert Systems with Applications, 39: 12092–12104.
–      Pazalos, K.; Loukis, E. & Nikolopoulos, V. (2012). A structured methodology for assessing and improving e-services in digital cities. Telematics and Informatics. 29: 123–136.
–      Pohl, M.; Rester, M.; Judmaier, P. & Stöckelmayr, K. (2008). Ecodesign – Design and Evaluation of an E-Learning System for Vocational Training. Universal Access in the Information Society, 7 (4): 259-272.
–      Savić, S.; Stanković, M. & Janaćković, G. (2011). Hybrid model for e-learning quality evaluation. Belgrade, 29-30 September 2011.
–      Stake, R. (2004). Standards-based & responsive evaluation, Thousand Oaks, CA: Sage.
–      Stalling, D. (2002). Measuring success in the virtual university. The Journal of Academic Librarianship, 28 (1): 47–53.
–      Wang, Y. S. (2003). Assessment of learner satisfaction with asynchronous electronic learning systems. Information and Management, 41: 75–86.
–      Wang, Y. S.; Wang, H. Y. & Shee, D. Y. (2007). Measuring e-learning systems success in an organizational context: Scale development and validation. Computers in Human Behavior, 23: 1792–1808.
–      Zhang, W. & Cheng, Y. L. (2012). Quality Assurance in E-Learning: PDPP Evaluation Model and its Application. The International Review of Research in Open and Distance Learning, 13 (3).
–      Zhang, W. & Jiang, G. (2007). Evaluation research in distance education. Higher Education Press.