
Evaluation of the Usability of the NAVID Learning Management System by Cognitive Walkthrough Method

Author(s):
Samin Alihosseini 1, Taha Samad-Soltani 2, Katayoun Katebi 3, Ahmad Pourabbas 4,*
1Department of Medical Education, Education Development Center, Tabriz University of Medical Sciences, Tabriz, Iran
2Department of Health Information Technology, Faculty of Management and Medical Informatics, Tabriz University of Medical Sciences, Tabriz, Iran
3Department of Oral and Maxillofacial Medicine, Faculty of Dentistry, Tabriz University of Medical Sciences, Tabriz, Iran
4Medical Education Research Center, Health Management and Safety Promotion Research Institute, Tabriz University of Medical Sciences, Tabriz, Iran

Shiraz E-Medical Journal: Vol. In Press, Issue In Press; e161772
Published online: Aug 16, 2025
Article type: Research Article
Received: Apr 09, 2025
Accepted: Aug 03, 2025
How to Cite: Alihosseini S, Samad-Soltani T, Katebi K, Pourabbas A. Evaluation of the Usability of the NAVID Learning Management System by Cognitive Walkthrough Method. Shiraz E-Med J. 2025;In Press(In Press):e161772. https://doi.org/10.5812/semj-161772.

Abstract

Background:

NAVID is one of the widely used learning management systems employed by medical universities in Iran. The usability of such systems is important for students’ learning.

Objectives:

This study aimed to evaluate the usability of the NAVID electronic learning (e-learning) system using the cognitive walkthrough method.

Methods:

In this cross-sectional study, validated and reliable checklists, confirmed by an expert panel, were used to measure three key dimensions of usability (effectiveness, efficiency, and satisfaction) from the perspectives of 30 medical informatics graduate students, 29 professors from the basic and clinical science departments of the faculty of medicine, and 2 system experts at Tabriz University of Medical Sciences in the first semester of the 2022 - 2023 academic year. The effectiveness of tasks was calculated using the completion rate formula, and efficiency was calculated using a time-based efficiency formula. Satisfaction was measured on a seven-point Likert scale, with scores above 5.5 considered good and scores below 5.5 considered poor.

Results:

From the professors' perspective, the task completion rate was 87.5%. The overall efficiency for this group was 85%, with a satisfaction level of 75.2%. Among students, the task completion rate was 83.3%, with an efficiency of 85% and a satisfaction score of 81.1%. Experts reported a 100% task completion rate and efficiency, with an overall satisfaction level of 81.3%. Suggestions for improvement included enhancing interaction options, improving system performance during exams, providing full-time system support, and addressing technical issues such as microphone quality. A positive correlation was observed between effectiveness and satisfaction, as well as between effectiveness and efficiency, for both students and professors. However, there was no significant correlation between efficiency and satisfaction in these groups. In contrast, all three dimensions were correlated in the experts' group.

Conclusions:

While the system demonstrated overall acceptable usability, particularly in task completion rates, the findings reveal critical areas that require significant enhancement. The challenges faced by users, particularly in accessing essential system functions such as password recovery and course management, indicate a need for a more user-friendly design. Additionally, correlations among the various usability dimensions were noted, indicating that enhancements in one area are likely to result in overall improvements in system usability.

1. Background

Educational institutions globally have increasingly adopted electronic learning (e-learning) tools to meet growing educational demands (1). This approach makes education more accessible by allowing learners to engage from any location and at any time, saving both time and money. Consequently, e-learning systems have extended educational opportunities to a global audience (2). Failure to identify the factors that influence the acceptance of e-learning systems could render substantial investments in this area ineffective, preventing students from achieving the desired learning outcomes (3).

One of the most widely used and comprehensive systems in Iranian medical universities is the NAVID e-learning system. Despite the robust virtual education infrastructure of the NAVID system, several challenges have been identified (4). Addressing these shortcomings is essential to improving the quality of virtual education. Neglecting usability concerns may result in poor system adaptation and a failure to meet user expectations (5).

A systematic approach to evaluating the structure of virtual education systems involves assessing their usability. Usability reflects a system's capacity to operate effectively and efficiently while ensuring user satisfaction (6). Various methods have been developed to assess the usability of e-learning systems (7). One expert-based method is the cognitive walkthrough, which analyzes the user's thought process when interacting with the system (8). It is a task-based evaluation that identifies problems through simulation, measures all three dimensions of usability (effectiveness, efficiency, and satisfaction), and reveals usability issues (9). The technique rests on the assumption that evaluators can adopt the user's viewpoint and apply this perspective to specific task scenarios to pinpoint design flaws.

2. Objectives

Given the extensive, nationwide use of the NAVID system and reports concerning its effectiveness, efficiency, and user satisfaction, this study aimed to evaluate the system's usability using the cognitive walkthrough method. The goal was to identify areas for improvement and enhance the overall user experience of the NAVID e-learning system.

3. Methods

This study evaluated the usability of the NAVID national e-learning system from the perspectives of its users. The population consisted of 29 professors from the basic and clinical science departments of the faculty of medicine, 30 medical informatics graduate students, and 2 information technology experts from Tabriz University of Medical Sciences in the first semester of the 2022 - 2023 academic year. To ensure consistency, all participants completed their evaluations on a desktop computer connected to the university's wireless network.

3.1. Validation of Checklists

Custom checklists were developed through a literature review and an expert panel. For the assessment of face validity, the preliminary questionnaire was evaluated by eight medical education professionals, who assessed the difficulty, generality, and ambiguity of the items. The content validity of the questionnaire was evaluated using both qualitative and quantitative methods: The same professionals were asked to comment on the grammar, vocabulary, item placement, and scoring, and the content validity ratio (CVR) and content validity index (CVI) were calculated. For the CVR, participants rated each item on a 3-point scale (essential, useful but not essential, not essential). For the CVI, the same experts rated the "simplicity", "relevance", and "clarity" of each item on a 4-point Likert scale. Items with a CVR above 0.85 and a CVI above 0.70 were considered valid. Reliability was assessed with the Cronbach's alpha coefficient. After validity and reliability were confirmed, the checklists were finalized. The three checklists are presented in Appendix 1 of the Supplementary File.

3.2. Usability Evaluation

Participants evaluated the NAVID system by performing specific tasks. Effectiveness (completeness) was measured by the degree to which goals were achieved, efficiency was assessed by the time taken to complete tasks, and satisfaction was gauged using a 7-point Likert scale.

3.3. Statistical Analysis

The effectiveness of tasks was calculated using the completion rate formula:

Effectiveness = (Number of tasks completed successfully / Total number of tasks undertaken) × 100

Efficiency was calculated using a time-based efficiency formula, which accounts for the number of users, the success of each task, and the time taken to complete it:

$$\text{Time-based efficiency}=\frac{\sum_{j=1}^{R}\sum_{i=1}^{N}\frac{n_{ij}}{t_{ij}}}{N\times R}$$

Where N is the total number of tasks; R is the number of users; n_ij is the result of task i by user j (n_ij = 1 if the user completes the task successfully, and n_ij = 0 otherwise); and t_ij is the time spent by user j to complete task i. If a task was not completed successfully, time was measured until the moment the user quit the task.
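To make the two formulas concrete, the following Python sketch (not part of the original study; all numbers are hypothetical) computes the completion rate and time-based efficiency for a small result matrix:

```python
# Minimal sketch of the study's two performance metrics.
# results[j][i] = (n_ij, t_ij): success flag (1/0) and time in seconds
# for task i performed by user j. All values here are hypothetical.

def effectiveness(results):
    """Completion rate: successfully completed tasks / all tasks, in percent."""
    total = sum(len(user_tasks) for user_tasks in results)
    successes = sum(n for user_tasks in results for n, _ in user_tasks)
    return 100.0 * successes / total

def time_based_efficiency(results):
    """Sum of n_ij / t_ij over all users and tasks, divided by N * R."""
    R = len(results)      # number of users
    N = len(results[0])   # number of tasks
    return sum(n / t for user_tasks in results for n, t in user_tasks) / (N * R)

results = [
    [(1, 12.0), (1, 30.5), (0, 45.0)],  # user 1: completed tasks 1-2, quit task 3 after 45 s
    [(1, 10.0), (0, 60.0), (1, 25.0)],  # user 2: completed tasks 1 and 3
]
print(f"Effectiveness: {effectiveness(results):.1f}%")                     # 66.7%
print(f"Time-based efficiency: {time_based_efficiency(results):.3f} goals/s")
```

Failed tasks contribute zero to the numerator (n_ij = 0) but still count toward N × R, so they lower the efficiency score.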

Satisfaction was assessed using a seven-point Likert scale, on which participants indicated their level of satisfaction from "very dissatisfied" (assigned a score of 0) to "very satisfied" (assigned a score of 6). The overall satisfaction score was then reported as a percentage.
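The text reports satisfaction only as a percentage; one plausible normalization, shown below as an assumption rather than the paper's documented procedure, is the mean Likert score divided by the maximum score of 6:

```python
# Hedged sketch: converting 7-point Likert ratings (0 = very dissatisfied,
# 6 = very satisfied) into a percentage. The mean/max normalization is an
# assumption; the ratings below are hypothetical, not study data.
ratings = [5, 6, 4, 5, 3, 6]
satisfaction_pct = 100.0 * (sum(ratings) / len(ratings)) / 6
print(f"Satisfaction: {satisfaction_pct:.1f}%")  # 80.6%
```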

Spearman's correlation analysis was performed using SPSS 16. A significance level of P < 0.05 was set.
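For readers reproducing the analysis outside SPSS, scipy.stats.spearmanr performs an equivalent rank correlation; the per-task score lists below are hypothetical placeholders, not study data:

```python
# Spearman's rank correlation between two usability dimensions,
# e.g., per-task effectiveness and satisfaction scores (hypothetical values).
from scipy.stats import spearmanr

effectiveness_scores = [100, 100, 10, 13, 100, 100, 10, 100]
satisfaction_scores = [100, 5, 34, 26, 100, 100, 34, 100]
rs, p = spearmanr(effectiveness_scores, satisfaction_scores)
print(f"rs = {rs:.2f}, P = {p:.3f}")  # significant if P < 0.05
```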

4. Results

4.1. Checklist Validation and Participant Characteristics

After validation, the checklists were finalized with a CVR of 0.86 and a CVI of 0.83, and reliability was confirmed with a Cronbach's alpha of 0.80. Of the 30 students, 20 were female and 10 were male. Among the 29 professors, 5 were female and 24 were male. The expert group comprised 1 female and 1 male expert. The mean age was 24.2 ± 2.7 years for students, 48.5 ± 8.4 years for professors, and 36.1 ± 5.6 years for experts. The students were, on average, in their 3.1 ± 1.4 semester of study. The mean length of professional experience was 21.3 ± 5.9 years for the professors and 9.1 ± 1.4 years for the experts.

4.2. Usability Evaluation by Professors

Professors reported an overall task completion rate of 87.5%. Tasks 9 (viewing the list of students of a course) and 11 (editing lesson information and curriculum plan) had the lowest effectiveness. The overall efficiency was 85%, with the lowest efficiency related to the same tasks due to difficulties in task navigation and icon placement. Satisfaction was also lower for these tasks, with an overall satisfaction score of 75.2% (Table 1). Suggestions for improvement included enhancing interaction with students, improving system performance during exams, and addressing technical issues like microphone quality.

Table 1. The Results of the Professors Regarding the Usability of the NAVID System

| Task Number | List of System Tasks | Achieving the Desired Goal, No. (%) | Time-Based Efficiency | Satisfaction (%) |
|---|---|---|---|---|
| 1 | Viewing the list of courses for the current semester | 29 (100) | 0.095 | 100 |
| 2 | Selecting one of the lessons and adding a new source from the computer | 29 (100) | 0.062 | 100 |
| 3 | Choosing one of the lessons and adding a source from Arman | 29 (100) | 0.063 | 100 |
| 4 | Creating a new assignment with a deadline of 10 days | 29 (100) | 0.047 | 60 |
| 5 | Creating a quiz for tomorrow at 10 a.m. | 29 (100) | 0.041 | 100 |
| 6 | Creating a new forum for discussion and problem-solving with no time limit | 29 (100) | 0.050 | 100 |
| 7 | Creating a virtual class and its link | 29 (100) | 0.046 | 60 |
| 8 | Sending a message to all students with information about the final exam | 29 (100) | 0.052 | 100 |
| 9 | Viewing the list of students of a course | 0 (0) | 0.039 | 0 |
| 10 | Copying two resources from another lesson into the selected lesson | 29 (100) | 0.066 | 60 |
| 11 | Editing lesson information | 0 (0) | 0.035 | 0 |
| 12 | Viewing assignment answers and giving feedback | 29 (100) | 0.050 | 100 |
| 13 | Removing a resource from the selected lesson | 29 (100) | 0.055 | 100 |
| 14 | Logging in to the report page and seeing the activity of students in doing homework | 29 (100) | 0.087 | 100 |
| 15 | Viewing previously uploaded files in the repository | 29 (100) | 0.088 | 59 |
| 16 | Activating/deactivating the content of one of the lesson sessions | 29 (100) | 0.075 | 64 |


For the professors, Spearman's correlation test showed a strong positive correlation between effectiveness and satisfaction scores (rs = 0.63, P = 0.008). However, there was no correlation between efficiency and satisfaction scores (rs = 0.31, P = 0.23). There was a moderate positive correlation between effectiveness and efficiency (rs = 0.57, P = 0.02).

4.3. Usability Evaluation by Students

Students reported an overall task completion rate of 83.3%. Effectiveness was lower for tasks related to password recovery and logging in through the University’s education management system (SAMA), primarily due to issues with phone number and email registration. Efficiency was also lower for these tasks, with an overall efficiency score of 85%. The overall satisfaction score was 81.1%, with lower satisfaction for tasks related to system support and password management (Table 2). Students suggested improvements such as better support communication, increased file upload limits, and system notifications for assignments.

Table 2. The Results of the Usability Assessment of the NAVID System by Students

| Task Number | List of System Tasks | Achieving the Desired Goal, No. (%) | Time-Based Efficiency | Satisfaction (%) |
|---|---|---|---|---|
| 1 | Viewing the answer to one of the questions through the frequently asked questions section | 30 (100) | 0.15 | 100 |
| 2 | Viewing the support contact information of the site through the support section | 30 (100) | 0.12 | 4.5 |
| 3 | Recovering your password through the forgotten password link | 3 (10) | 0.021 | 34 |
| 4 | Entering your NAVID account by the login button through "SAMA" | 4 (13) | 0.035 | 25.5 |
| 5 | Searching for one of your current lessons through the search button | 30 (100) | 0.11 | 100 |
| 6 | Opening the guide to work with the NAVID system through the help button | 30 (100) | 0.11 | 100 |
| 7 | Changing your password in the profile section | 3 (10) | 0.046 | 34 |
| 8 | Receiving the student activity report in all courses in Excel, through the reports section | 30 (100) | 0.067 | 100 |
| 9 | Receiving the report of all course assignments in Excel, through the reports section | 30 (100) | 0.071 | 100 |
| 10 | Getting the report of grades in all course tests in Excel, through the reports section | 30 (100) | 0.070 | 100 |
| 11 | Receiving the report on all lesson discussions in Excel, through the reports section | 30 (100) | 0.071 | 100 |
| 12 | Opening one of the contents and confirming the reading of the resource | 30 (100) | 0.075 | 100 |
| 13 | Viewing one of the assignments of your courses and responding by sending text and files, through the assignments section | 30 (100) | 0.057 | 100 |
| 14 | Seeing the guide for participating in the exam, through the self-exams section | 30 (100) | 0.090 | 100 |
| 15 | Sending a question to your classmate through the messages section | 30 (100) | 0.066 | 100 |
| 16 | Sending a thank-you message to the course teacher through the messages section | 30 (100) | 0.061 | 100 |


For the students, Spearman's correlation test showed a strong positive correlation between effectiveness and satisfaction scores (rs = 0.72, P = 0.002). However, there was no correlation between efficiency and satisfaction scores (rs = 0.33, P = 0.22). There was a strong positive correlation between effectiveness and efficiency (rs = 0.69, P = 0.004).

4.4. Usability Evaluation by Experts

Experts reported 100% task completion but noted that many tasks were time-consuming due to difficulties in accessing task menus and unclear icon text. The overall efficiency and satisfaction scores were 100% and 81.3%, respectively (Table 3). Suggestions included improving the user interface and providing up-to-date user guides.

Table 3. The Results of the Experts Regarding the Usability of the NAVID System

| Task Number | List of System Tasks | Achieving the Desired Goal, No. (%) | Time-Based Efficiency | Satisfaction (%) |
|---|---|---|---|---|
| 1 | Entering the system management page | 2 (100) | 0.10 | 100 |
| 2 | Viewing the system users (teacher and student) | 2 (100) | 0.05 | 50 |
| 3 | Accessing the new user creation form | 2 (100) | 0.20 | 100 |
| 4 | Creating a new user as a professor | 2 (100) | 0.02 | 100 |
| 5 | Creating a new degree | 2 (100) | 0.10 | 100 |
| 6 | Viewing the defined semester | 2 (100) | 0.11 | 100 |
| 7 | Extracting the total number of students for the current semester | 2 (100) | 0.03 | 100 |
| 8 | Adding a new course to the system | 2 (100) | 0.02 | 100 |
| 9 | Choosing a course in the master's degree and extracting a report of the professor's activity in that course | 2 (100) | 0.01 | 25 |
| 10 | Extracting the students' activity in the above-selected course | 2 (100) | 0.01 | 50 |
| 11 | Extracting a comprehensive report on the status of a particular professor's courses | 2 (100) | 0.01 | 100 |
| 12 | Analyzing the status of submitted assignments in nursing school | 2 (100) | 0.02 | 50 |
| 13 | Correcting the "contact us" information in the system | 2 (100) | 0.05 | 100 |
| 14 | Adding a new teaching assistant to one of the selected courses of the rehabilitation faculty | 2 (100) | 0.01 | 100 |
| 15 | Adding a new field of study to the system and defining a student for it | 2 (100) | 0.01 | 100 |
| 16 | Preparing the activity log of one of the courses from the first of the semester to the end of the year | 2 (100) | 0.01 | 25 |


For the experts, Spearman's correlation test showed a strong positive correlation between effectiveness and satisfaction scores (rs = 0.66, P = 0.01). Additionally, there was a moderate positive correlation between efficiency and satisfaction scores (rs = 0.52, P = 0.05). There was a moderate positive correlation between effectiveness and efficiency (rs = 0.53, P = 0.05).

5. Discussion

The effectiveness of the NAVID system, as evidenced by the task completion rates, was generally high. This suggests that the system can support users in achieving their objectives. However, the lower effectiveness rates for specific tasks, particularly for students, indicate areas where the system's design does not fully support user needs. Tasks such as recovering forgotten passwords or logging in through SAMA presented significant challenges. Since accessing a user account is foundational to system use, the observed difficulties in these areas could have broader implications, potentially deterring consistent use and reducing overall system engagement. To address these issues, more robust integration between NAVID and SAMA is essential, alongside a more intuitive password recovery process that does not burden users with prerequisites they might not have fulfilled.

The findings of the study align with previous research on the usability of e-learning systems. A study found that while students generally view e-learning systems positively, there are significant issues with system features, including spontaneous and unwarranted log-outs (10).

Efficiency reflects how quickly and effortlessly users can complete tasks. While the system performed well for simple tasks, such as viewing the list of current courses, more complex tasks, like editing course information or presenting lesson plans, showed significant inefficiencies. These tasks were not only time-consuming but also required navigating through less intuitive parts of the system, which likely contributed to the lower satisfaction scores observed for these tasks. The inefficiencies in these areas can be linked to poor information architecture, where essential functions are buried under multiple layers, making them less accessible to users (11). This indicates a need for a redesign of the system's user interface, prioritizing ease of access to frequently used functions and reducing the cognitive load on users (12).

User satisfaction is perhaps the most subjective yet critical dimension of usability, as it encapsulates the overall user experience and willingness to continue using the system. The satisfaction levels, particularly among professors and students, were relatively low for specific tasks, such as managing course information and interacting with students during online sessions. These findings align with prior studies, which suggest that e-learning systems often fall short in providing seamless interaction and robust evaluation tools, especially in environments that heavily rely on offline methods (13). The dissatisfaction with the system's exam and evaluation capabilities, frequent disconnections, and limited interaction options during non-office hours point to significant areas needing improvement. Enhancing the system's reliability, particularly during high-stakes tasks like exams, and expanding support options to include real-time assistance through multiple channels, would likely increase overall user satisfaction (14).

In the present study, the reasons cited for the low satisfaction with tasks 9 and 11 were the difficulty of locating these functions and the inappropriate design of their icons. Ennam indicated that the lack of success in online learning is primarily due to inadequate web accessibility and affordability and insufficient training in distance education (15). For both students and professors, a strong positive correlation was found between effectiveness and satisfaction, and between effectiveness and efficiency, underscoring the importance of task success in shaping both their satisfaction and their perception of system usability. However, no significant correlation was observed between efficiency and satisfaction, which may reflect a tolerance for slower system interactions as long as tasks are ultimately completed successfully. Among experts, by contrast, efficiency and satisfaction were also correlated. These results indicate that for experts, who may have higher expectations and greater system knowledge, task performance influences both perceived efficiency and satisfaction.

The correlations observed among effectiveness, efficiency, and satisfaction in this study underscore the interdependence of these usability dimensions, suggesting that when users can complete tasks effectively, their satisfaction with the system increases (16). Conversely, when tasks are inefficient or complex, effectiveness and satisfaction decline, as seen with tasks involving password recovery or system login. This interdependence suggests that improvements in one area, such as making key tasks more efficient, are likely to yield broader gains in overall system usability (17).

The Online Learning Consortium considers student satisfaction with online learning in higher education to be an essential element for measuring the quality of online courses (18). However, satisfaction is multidimensional, and different factors influence learner satisfaction, such as their digital literacy levels, social and professional engagements, perceived stress levels, and the course learning design (19).

Moreover, the frustration students expressed with limited support options during exams, especially when issues arise outside of office hours, suggests a need for more robust, round-the-clock support mechanisms (20). This could include automated solutions such as AI-driven chatbots for common issues or an expanded support team available during peak usage times. A study by Kim et al. showed a strong correlation between student satisfaction, the acceptance of online learning, and the effectiveness of online support services for both American and Korean students (21).

Future research could focus on integrating more advanced technologies, such as AI-driven support, to further elevate the system's usability. This study involved various groups of e-learning stakeholders, including students, professors, and administrators, enriching the research by providing multiple perspectives. Although the participants came from diverse backgrounds, cultures, and cities, they were all affiliated with a single university in Iran; the generalizability of the findings could be improved by surveying participants from different universities across the country. Additionally, as technology and e-learning continue to evolve, longitudinal research exploring how the usability factors identified in this study change over time could yield further insights.

5.1. Conclusions

While the system demonstrates overall acceptable usability, particularly in task completion rates, the findings also reveal critical areas that require significant enhancement. The challenges faced by users, particularly in accessing and interacting with essential system functions such as password recovery and course management, indicate a need for a more user-friendly design. Furthermore, relationships were observed between different usability dimensions, suggesting that improvements in one aspect are likely to lead to overall improvements in system usability.


References

1. Zhang C, Khan I, Dagar V, Saeed A, Zafar MW. Environmental impact of information and communication technology: Unveiling the role of education in developing countries. Technol Forecasting Soc Change. 2022;178. https://doi.org/10.1016/j.techfore.2022.121570.
2. Jiang H, Islam A, Gu X, Spector JM. Online learning satisfaction in higher education during the COVID-19 pandemic: A regional comparison between Eastern and Western Chinese universities. Educ Inf Technol (Dordr). 2021;26(6):6747-69. [PubMed ID: 33814959]. [PubMed Central ID: PMC8010491]. https://doi.org/10.1007/s10639-021-10519-x.
3. Chakraborty P, Mittal P, Gupta MS, Yadav S, Arora A. Opinion of students on online education during the COVID-19 pandemic. Hum Behav Emerg Technol. 2020;3(3):357-65. https://doi.org/10.1002/hbe2.240.
4. Nachvak M, Sadeghi E, Mohammadi R, Rezaei M, Abdollahzad H, Soleimani D. User Experience of NAVID E-learning System in the School of Nutrition and Food Technology of Kermanshah University of Medical Sciences, Iran (2020). Educ Res Med Sci. 2021;10(1). https://doi.org/10.5812/erms.117418.
5. Pourabbas A, Amini A, Fallah F, Asghari Jafarabadi M. The status of accountable education in the Surgery Department, Tabriz, Iran. Res Dev Med Educ. 2019;8(1):31-7. https://doi.org/10.15171/rdme.2019.006.
6. Nik Ahmad NA, Hamid NIM, Mohd Lokman A. Performing Usability Evaluation on Multi-Platform Based Application for Efficiency, Effectiveness, and Satisfaction Enhancement. Int J Interactive Mobile Technol. 2021;15(10). https://doi.org/10.3991/ijim.v15i10.20429.
7. Oluwadele D, Singh Y, Adeliyi T. An Explorative Review of the Constructs, Metrics, Models, and Methods for Evaluating e-Learning Performance in Medical Education. Electron J e-Learn. 2023;21(5):394-412. https://doi.org/10.34190/ejel.21.5.3089.
8. Farzandipour M, Nabovati E, Sadeqi Jabali M. Comparison of usability evaluation methods for a health information system: heuristic evaluation versus cognitive walkthrough method. BMC Med Inform Decis Mak. 2022;22(1):157. [PubMed ID: 35717183]. [PubMed Central ID: PMC9206256]. https://doi.org/10.1186/s12911-022-01905-7.
9. Khajouei R, Hajesmaeel Gohari S, Mirzaee M. Comparison of two heuristic evaluation methods for evaluating the usability of health information systems. J Biomed Inform. 2018;80:37-42. [PubMed ID: 29499315]. https://doi.org/10.1016/j.jbi.2018.02.016.
10. Salmani N, Bagheri I, Dadgari A. Iranian nursing students experiences regarding the status of e-learning during COVID-19 pandemic. PLoS One. 2022;17(2):e0263388. [PubMed ID: 35108327]. [PubMed Central ID: PMC8809553]. https://doi.org/10.1371/journal.pone.0263388.
11. Garrido A, Morales L, Serina I. On the use of case-based planning for e-learning personalization. Expert Syst Applications. 2016;60:1-15. https://doi.org/10.1016/j.eswa.2016.04.030.
12. Younas A, Faisal C, Habib MA, Ashraf R, Ahmad M. Role of Design Attributes to Determine the Intention to Use Online Learning via Cognitive Beliefs. IEEE Access. 2021;9:94181-202. https://doi.org/10.1109/access.2021.3093348.
13. Liu M, Yu D. Towards intelligent E-learning systems. Educ Inf Technol (Dordr). 2022:1-32. [PubMed ID: 36532790]. [PubMed Central ID: PMC9742041]. https://doi.org/10.1007/s10639-022-11479-6.
14. Wang J, Yang Y, Li H, van Aalst J. Continuing to teach in a time of crisis: The Chinese rural educational system's response and student satisfaction and social and cognitive presence. British J Educ Technol. 2021;52(4):1494-512. https://doi.org/10.1111/bjet.13129.
15. Ennam A. Assessing Covid-19 pandemic-forced transitioning to distance e-learning in Moroccan universities: an empirical, analytical critical study of implementality and achievability. J North African Stud. 2021;29(1):153-77. https://doi.org/10.1080/13629387.2021.1937138.
16. Al-Fraihat D, Joy M, Masa'deh R, Sinclair J. Evaluating E-learning systems success: An empirical study. Comput Hum Behav. 2020;102:67-86. https://doi.org/10.1016/j.chb.2019.08.004.
17. Bossman A, Agyei SK. Technology and instructor dimensions, e-learning satisfaction, and academic performance of distance students in Ghana. Heliyon. 2022;8(4):e09200. [PubMed ID: 35399373]. [PubMed Central ID: PMC8987389]. https://doi.org/10.1016/j.heliyon.2022.e09200.
18. Dziuban C, Moskal P, Thompson J, Kramer L, DeCantis G, Hermsdorfer A. Student Satisfaction with Online Learning: Is it a Psychological Contract? Online Learning. 2015;19(2). https://doi.org/10.24059/olj.v19i2.496.
19. Tere T, Bayu Seta H, Nizar Hidayanto A, Abidin Z. Variables Affecting E-Learning Services Quality in Indonesian Higher Education: Students' Perspectives. J Inform Technol Educ: Res. 2020;19:259-86. https://doi.org/10.28945/4489.
20. Ahmed V, Opoku A. Technology supported learning and pedagogy in times of crisis: the case of COVID-19 pandemic. Educ Inf Technol (Dordr). 2022;27(1):365-405. [PubMed ID: 34462626]. [PubMed Central ID: PMC8387665]. https://doi.org/10.1007/s10639-021-10706-w.
21. Kim S, Lee J, Yoon S, Kim H. How can we achieve better e-Learning success in the new normal? Internet Res. 2022;33(1):410-41. https://doi.org/10.1108/intr-05-2021-0310.
