Abstract
Background:
Evaluation is a critical stage in nursing education and an integral part of the learning process. The clinical performance evaluation of nursing students is essential to ensure that, as future nurses, they are capable of delivering competent and safe nursing care. Evaluation methods that rely on a single source cannot provide a comprehensive view of a student's performance.
Objectives:
This study aimed to provide a 360-degree evaluation of the clinical performance of nursing students.
Methods:
This analytical-observational study was conducted cross-sectionally during the first semester of the 2023 - 2024 academic year. The study included all 8th-semester nursing students at Jahrom University of Medical Sciences through census sampling (30 students). Throughout the semester, the students completed rotational clerkships in medical-surgical and critical care wards. The data collection tool was a checklist used to evaluate the clinical performance of nursing students, which was completed by the students themselves, their peers, clinical instructors, and head nurses. Additionally, the objective structured clinical examination (OSCE) score, obtained at the end of the semester, was used as another evaluation source. Data were analyzed using SPSS version 21 software, with descriptive and analytical statistics such as repeated measures ANOVA applied.
Results:
Of the 30 participants, 17 (53.3%) were female, and 13 (46.7%) were male, with a mean age of 24.21 ± 12.1 years. The highest mean scores were from self-assessment (95.03 ± 6) and peer evaluation (95 ± 7.01), both at an excellent level, while the lowest mean scores were from clinical instructors (77 ± 5) and head nurses (78 ± 6), at a good level. There was no statistically significant difference between the mean scores of self-assessment and peer evaluation (P = 0.851). Similarly, no significant difference was found between the mean scores of clinical instructors and head nurses (P = 0.816). However, a statistically significant difference was observed between students' self-assessment and other evaluation sources such as clinical instructors, head nurses, and the OSCE (P < 0.001).
Conclusions:
Given the discrepancy between students' self-assessments and evaluations from other sources, the use of a 360-degree evaluation method can provide a more realistic assessment and increase student satisfaction.
Keywords
Program Evaluation; Clinical Competence; Student Performance Appraisal; Nursing Education Research; Nursing Evaluation Research
1. Background
Clinical education is the most crucial part of nursing education, and clinical performance is a key element in the nursing curriculum for acquiring essential skills. The primary goal of internships is to develop professional competence and skills in nursing care, which necessitates assessing students' mastery of basic skills. In other words, identifying the key dimensions and main indicators of the performance of nursing students at the bachelor's level is critical (1). Performance evaluation is a process used to assess individuals at specified intervals (2). Evaluating students' knowledge and performance is a significant indicator of successful educational planning (3).
Performance evaluation is the most effective way to improve the quality of education and student performance (4). Student evaluation is essential to educational activities, as it helps identify strengths and weaknesses. By reinforcing positive aspects and addressing deficiencies, evaluation leads to the transformation and enhancement of the educational system. Due to the impact of evaluation methods on professional skill development, accurate methods for clinical evaluation are necessary across various fields in medical sciences (3). Clinical performance evaluation is challenging in many medical professions, including nursing. Clinical instructors often worry whether their evaluations accurately reflect students' actual clinical performance. Meanwhile, some studies report student dissatisfaction with clinical evaluations, citing unfairness and a lack of authenticity in evaluations conducted by clinical instructors (5).
Previous research indicates that the quality of clinical evaluation is often unsatisfactory, with several deficiencies identified. These include neglecting appropriate clinical teaching and evaluation, a lack of coordination between faculty instructors and hospital facilities, and insufficient time to engage with cases for thorough bedside learning. Additionally, discrepancies between theoretical and clinical evaluations and inconsistencies in the scores reported by clinical instructors contribute to student dissatisfaction (6).
There is currently no specific evaluation method that can accurately assess the knowledge, skills, and clinical abilities of nursing students. Traditionally, faculty instructors evaluate nursing students, but this method has limitations as it excludes evaluations by patients, nursing staff, and peers (7). Despite the importance of clinical evaluation, it is often considered a subjective, time-consuming, and ambiguous process, with many evaluations being unclear and lacking in detail (1). Instructors may find it challenging to evaluate all aspects of behavior and individual skills, while evaluations from multiple perspectives offer more comprehensive insights than those from a single viewpoint.
The 360-degree evaluation is considered one of the best methods for evaluating skills. This method involves gathering feedback from individuals who interact with the student in the workplace, providing a comprehensive understanding of the student's skills and abilities (8). Since 1980, the 360-degree evaluation has been widely used to assess processes and professional competencies in various settings (9).
In nursing education, the 360-degree evaluation can be highly effective. Learners often behave differently when interacting with peers, staff, and patients compared to their behavior in the presence of nursing faculty instructors (7). In the traditional evaluation method, the student is at a lower hierarchical level, while the instructor holds a higher position. In contrast, the 360-degree evaluation places the student at the center of a network that includes the instructor, staff, peers, and patients, allowing performance to be evaluated from multiple perspectives and in different situations (10).
2. Objectives
Despite the advantages of the 360-degree evaluation, limited studies have explored its use in nursing education (7, 9, 11). Given the importance of clinical evaluation for senior nursing students and the need for continuous monitoring and assessment to improve teaching and learning, this study aims to implement a 360-degree evaluation of the clinical performance of senior nursing students and compare the mean scores from various evaluations.
3. Methods
3.1. Study Design and Setting
This study was an analytical observational study conducted during the first semester of the 2023 - 2024 academic year at Jahrom University of Medical Sciences, Iran. The students completed their clinical placements over fifteen weeks in the medical-surgical and critical care units of two teaching hospitals affiliated with the university. In reporting this observational study, we adhered to the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines (12).
3.2. Participants
The study population comprised all senior (8th-semester) nursing students at the university. All 30 students were included through a census method. The inclusion criteria were completion of the clerkship course and consent to participate in the study. Exclusion criteria were absence from more than four sessions during the clerkship period, absence from the objective structured clinical examination (OSCE), or failure to complete the clinical performance evaluation checklist. All students met the inclusion criteria, and none were excluded from the study.
3.3. Implementation Method
In the orientation session held before the start of the clinical internship, with all students and clinical instructors present, the responsible instructor familiarized the students with the course learning objectives and the structure of training in each section. Students were informed about the process of conducting research and evaluation methods. Informed consent forms for participation in the study were then completed. The students were randomly assigned (by lottery) into 10 groups of three individuals each.
During the academic semester, the students completed an 80-day clerkship rotation covering adult and geriatric care, home care, and specialized nursing care in the medical-surgical and critical care units. At the end of the clerkship, each student used the clinical performance evaluation checklist to assess their own performance and that of their group members over the semester. The clinical instructors evaluated the students at the end of the semester using the same checklist, and the head nurses evaluated the students' clinical performance with the same checklist at the end of the clerkship in each ward. The scores given by peers, instructors, and head nurses were each averaged to determine the overall score from that group of evaluators.
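As an illustration only (not the authors' procedure or code), this aggregation step can be sketched in Python, assuming a simple tabular layout in which each completed checklist is one row; the column names and values are hypothetical:

```python
# Minimal sketch of averaging several evaluators' checklist totals into one
# score per student and evaluation source; data layout and values are assumed.
import pandas as pd

# One row per completed checklist: the student rated, the evaluator group
# (peer, clinical instructor, or head nurse), and the total checklist score.
ratings = pd.DataFrame({
    "student": [1, 1, 1, 1, 2, 2],
    "source": ["peer", "peer", "head_nurse", "head_nurse", "peer", "head_nurse"],
    "score": [262, 258, 221, 215, 270, 230],
})

# Average the ratings within each evaluator group for every student
per_source = ratings.groupby(["student", "source"], as_index=False)["score"].mean()
print(per_source)
```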
Furthermore, another source of information for the 360-degree evaluation was the OSCE score. At the end of the semester, an OSCE was conducted to evaluate the clinical skills of the students. This exam consisted of 15 stations in various domains and was conducted in the nursing school's practice room at Jahrom University of Medical Sciences, taking into account the students' learning priorities.
3.4. Instrument
The data collection tool used was the clinical performance evaluation checklist for nursing students (13). This tool’s face, content, and construct validity had been previously examined in Iran, and its internal consistency was confirmed with a Cronbach's alpha of 0.92. The checklist consisted of 28 items divided into three categories: Nursing process (12 items), professionalization (9 items), and ethical principles (7 items). Each item was rated on a scale of 1 - 10, with the overall score ranging from a minimum of 28 to a maximum of 280. For analysis, scores were converted to percentages: 85 - 100% was rated as excellent, 75 - 84% as good, 70 - 74% as satisfactory, 60 - 69% as average, 50 - 59% as the minimum passing score, and below 50% indicated failure (14).
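For readers who wish to reproduce the scoring logic, the conversion from raw checklist scores to the rating bands above can be sketched as follows. The bands come from the text, while the exact normalization to a percentage (here, the raw total as a share of the maximum of 280) is an assumption, since the article does not state the formula:

```python
# Sketch of the checklist scoring described above; the percentage conversion
# (raw total / 280) is an assumption, the rating bands are from the article.

def checklist_percentage(item_ratings):
    """Convert 28 item ratings (each 1-10) into a percentage of the maximum score."""
    assert len(item_ratings) == 28 and all(1 <= r <= 10 for r in item_ratings)
    raw = sum(item_ratings)      # total ranges from 28 to 280
    return raw / 280 * 100       # assumed normalization

def rating_band(percent):
    """Map a percentage to the rating bands reported for the checklist."""
    if percent >= 85:
        return "excellent"
    if percent >= 75:
        return "good"
    if percent >= 70:
        return "satisfactory"
    if percent >= 60:
        return "average"
    if percent >= 50:
        return "minimum passing score"
    return "fail"

print(rating_band(checklist_percentage([9] * 28)))  # 90.0% -> "excellent"
```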
The OSCE scores were reported on a 0 - 100 scale based on the average performance across the stations. Each student thus had five evaluation scores: self-assessment, peer evaluation, clinical instructor evaluation, head nurse evaluation, and the OSCE. The mean scores from these evaluation sources were compared using repeated measures ANOVA to address the study's objectives.
3.5. Data Analysis
Data analysis was conducted using SPSS version 21. Descriptive statistics (mean, standard deviation, frequency) and analytical statistics (repeated measures ANOVA) were employed, with the significance level set at 0.05.
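A minimal sketch of this analysis in Python is given below. It is not the authors' code; the column names, the file name, and the use of Bonferroni adjustment for the pairwise comparisons are assumptions (the article reports adjusted P-values without naming the adjustment method):

```python
# Illustrative repeated measures ANOVA across the five evaluation sources,
# followed by pairwise paired t-tests with Bonferroni adjustment (assumed).
from itertools import combinations

import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multitest import multipletests

# Expected layout: one row per student per evaluation source, e.g.
# student, method (self / peer / instructor / head_nurse / osce), score (0-100)
df = pd.read_csv("evaluation_scores.csv")  # hypothetical file name

# Omnibus repeated measures ANOVA
print(AnovaRM(data=df, depvar="score", subject="student", within=["method"]).fit())

# Pairwise comparisons with Bonferroni-adjusted P-values
wide = df.pivot(index="student", columns="method", values="score")
pairs = list(combinations(wide.columns, 2))
pvals = [stats.ttest_rel(wide[a], wide[b]).pvalue for a, b in pairs]
adjusted = multipletests(pvals, method="bonferroni")[1]
for (a, b), p in zip(pairs, adjusted):
    print(f"{a} vs {b}: adjusted P = {p:.3f}")
```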
3.6. Ethical Considerations
This study was conducted in accordance with a research protocol approved by the ethics committee (code: IR.JUMS.REC.1401.013). Informed consent was obtained from all participating students, with assurance that their participation or lack thereof would not affect their evaluation. Additionally, the results were reported anonymously, without identifying individual students.
4. Results
In the present study, the mean age of the participating students was 24.21 ± 12.1 years, and their grade point average (GPA) was 15.38 ± 6.24. The participants included 17 female students (53.3%) and 13 male students (46.7%).
The analysis of the total clinical performance evaluation scores from different evaluators revealed that the highest mean scores were reported in self-assessment and peer evaluation, both rated at an excellent level. In contrast, the lowest mean scores were reported by clinical instructors and head nurses, who rated performance at a good level (Table 1).
Table 1. The Mean and Standard Deviation of Clinical Performance Evaluation Scores of Nursing Students in Different Methods (n = 30)

Evaluation Methods | Min - Max | Mean ± SD
---|---|---
Peer evaluation | 66 - 100 | 95 ± 7.01
Self-evaluation | 76 - 100 | 95.03 ± 6
Clinical instructor | 59 - 88 | 77 ± 5
Head nurses | 61 - 88 | 78 ± 6
OSCE | 75 - 93 | 82 ± 4
The results from the repeated measures analysis showed no statistically significant difference between the mean scores of self-assessment and peer evaluation (P = 0.851). Additionally, no statistically significant difference was found between the mean scores given by clinical instructors and head nurses (P = 0.816). However, there was a statistically significant difference in the clinical performance evaluation scores between students' self-assessments and other evaluation methods (P < 0.001) (Table 2).
Table 2. Pairwise Comparison of the Mean Score of Clinical Performance Evaluation of Nursing Students in Different Methods

Evaluation Methods | Mean Difference | Adj. P-Value a | 95% CI, Lower Bound | 95% CI, Upper Bound
---|---|---|---|---
OSCE | | | |
Clinical instructor | 4.110 | < 0.001 | 2.03 | 6.18
Head nurses | 4.318 | < 0.001 | 2.23 | 6.40
Self-evaluation | -12.206 | < 0.001 | -14.82 | -9.58
Peer evaluation | -12.488 | < 0.001 | -15.08 | -9.89
Clinical instructor | | | |
OSCE | -4.110 | < 0.001 | -6.18 | -2.03
Head nurses | 0.208 | 0.816 | -1.59 | 2.01
Self-evaluation | -16.316 | < 0.001 | -18.97 | -13.65
Peer evaluation | -16.598 | < 0.001 | -19.43 | -13.76
Head nurses | | | |
OSCE | -4.318 | < 0.001 | -6.40 | -2.23
Clinical instructor | -0.208 | 0.816 | -2.01 | 1.59
Self-evaluation | -16.524 | < 0.001 | -18.88 | -14.16
Peer evaluation | -16.806 | < 0.001 | -19.98 | -13.63
Self-evaluation | | | |
OSCE | 12.206 | < 0.001 | 9.59 | 14.82
Clinical instructor | 16.316 | < 0.001 | 13.66 | 18.97
Head nurses | 16.524 | < 0.001 | 14.17 | 18.88
Peer evaluation | -0.282 | 0.851 | -3.30 | 2.73
Peer evaluation | | | |
OSCE | 12.488 | < 0.001 | 9.89 | 15.08
Clinical instructor | 16.598 | < 0.001 | 13.77 | 19.43
Head nurses | 16.806 | < 0.001 | 13.63 | 19.98
Self-evaluation | 0.282 | 0.851 | -2.73 | 3.30
5. Discussion
The present study aimed to conduct a 360-degree evaluation of nursing students' clinical performance, incorporating assessments from the students themselves, their peers, clinical instructors, ward head nurses, and the OSCE. The results demonstrated that the clinical performance of the nursing students was rated as excellent or good across these various perspectives. Specifically, the highest mean scores were reported in self-assessments and peer evaluations, while the lowest mean scores came from clinical instructors and head nurses.
In agreement with this study, Gonzalez-Gil et al. evaluated the effectiveness of the 360-degree method for assessing competencies in third-year nursing students. Their findings revealed that the highest scores were assigned by peers, and students generally received higher scores with the 360-degree evaluation compared to traditional evaluation methods (11). Similarly, R and Shakuntala reported that in the 360-degree evaluation of final-year nursing students' competencies, self-assessment and peer evaluations yielded higher scores than those from other evaluators. They also observed a positive correlation between self-assessment and peer assessments (7).
Kajander-Unkuri et al. also found that students’ self-assessments tended to result in higher scores than those given by instructors (15). In a study by Samadi et al., self-assessment scores among nursing students were significantly higher than those given by instructors. This difference can be attributed to the strictness of instructors when evaluating students' clinical skills, as instructors often hold higher expectations, considering the critical nature of students' future responsibilities (16). Takashima et al. suggested that achieving congruence in instructors' expectations is crucial for clinical evaluations, and they emphasized the need for a standardized evaluation process to improve student performance (17).
Reflecting on these findings, it is clear that differing perspectives between students and instructors can impact evaluation outcomes, highlighting the importance of incorporating various viewpoints to achieve a more comprehensive and balanced assessment.
The results of the present study revealed no statistically significant difference between the mean clinical performance evaluation scores of students and their peers, nor between those of clinical instructors and head nurses. However, a statistically significant difference was observed between the students' self-assessment scores and the evaluations provided by instructors and head nurses. Similarly, Mehrdad et al. found a correlation between self-assessment and peer assessment, but no correlation between self-assessment, peer assessment, and the evaluation by instructors. Based on these findings, it has been suggested that self-assessment and peer evaluation should be considered complementary educational tools rather than formal evaluation measures (18).
In contrast to this study, another investigation examining the "professional behavior" and "clinical skills" of students in the pediatrics department found a significant correlation between peer evaluations, clinical instructors' assessments, and students' self-assessments. However, no significant correlation was observed between nurses' and clinical instructors' evaluation scores (9). This discrepancy might be due to the limited scope of evaluation in the pediatrics department and the differences in evaluation tools used in the present study.
Some students expressed concerns that their performance evaluations were influenced by personal biases and the preconceived opinions of instructors, leading to perceived discrimination within groups. They also noted that the evaluations did not seem to be based on actual competence, as there were no standardized rules for assessment, and instructors often emphasized their personal views on the type of learning (whether theoretical or clinical) during clerkship (8, 19). Studies in this field suggest that clinical evaluation faces challenges such as inconsistency, subjective assessments, variations in evaluation methods among instructors, and the lack of stable evaluation tools. One of the major challenges in clinical education within the healthcare system is the absence of appropriate evaluation methods and a clear, consistent criterion shared by all instructors. Therefore, implementing a comprehensive, multidimensional evaluation system with constructive feedback could have a positive educational impact (20).
Sadeghi and Kazemi found that the 360-degree evaluation method fosters a dynamic atmosphere and promotes active learning during clerkship. Students reported that this method increased their interest and responsibility, and they appreciated how it removed subjectivity, leading to a more reality-based assessment (10). Similarly, Mousavi and Kamali showed that the 360-degree evaluation method was effective in improving the clinical self-efficacy of final-year nursing students (21). In assessing the clinical performance of undergraduate nursing students, it is essential to use evaluation tools and methods that are clear, concise, and reflect the comprehensive clinical experiences of instructors (22).
A multilateral evaluation approach, incorporating multiple evaluators, differing perspectives, continuous communication between evaluators and learners, self-assessment, peer assessment, and the OSCE, is a strength of this study. However, potential challenges include resistance from some instructors, the need for cooperation from head nurses, unfamiliarity with this method, time constraints, and the difficulty of aggregating diverse opinions. Given the unique characteristics of each department, it is necessary to design department-specific assessment forms. Further studies with larger sample sizes and across different educational settings could provide more insights into the accuracy, validity, and limitations of the 360-degree evaluation method for assessing nursing students' clinical skills. Additionally, future research could benefit from qualitative methods to gather more in-depth information on this topic.
5.1. Conclusions
In the 360-degree assessment method, taking into account the differing perspectives of assessors and offering opportunities for self-assessment and peer evaluation can actively engage instructors, head nurses, and students in a more realistic and holistic approach to education and evaluation. This method helps enhance student performance, reduce the halo effect among evaluators, and provide students with more accurate and comprehensive feedback in clinical settings. It also allows for a clearer reflection of their strengths and areas for improvement.
The findings from this study can serve as a valuable resource for stakeholders, clinical educators, and nursing instructors. By adopting this evaluation approach and incorporating students' perspectives, efforts can be made to enhance clinical assessments and boost student satisfaction. Furthermore, applying the 360-degree assessment method in clinical practice could lead to improvements in the quality of care provided to patients by nursing students, contributing to better outcomes in both education and patient care.
References
1. Esmaeili M, Toloie Eshlaghy A, Afshar Kazemi M, Motadel M. Improving the Performance Indicators of Nursing Students in the Field: Regular Review of Studies. Adv Nurs Midwifery. 2020;29(4):3-7. https://doi.org/10.29252/anm-290403.
2. Ayaz-Alkaya S, Yaman-Sozbir S, Bayrak-Kahraman B. The effect of nursing internship program on burnout and professional commitment. Nurse Educ Today. 2018;68:19-22. [PubMed ID: 29870870]. https://doi.org/10.1016/j.nedt.2018.05.020.
3. Esmaeili R, Esmaeili M. Performance Evaluation of Nursing Students in the Clinical Area. Pak J Med Health Sci. 2021;15(5):1623-8. https://doi.org/10.53350/pjmhs211551623.
4. Cook S, Watson D, Webb R. Performance evaluation in teaching: Dissecting student evaluations in higher education. Stud Educ Eval. 2024;81:101342. https://doi.org/10.1016/j.stueduc.2024.101342.
5. Chen LC, Lin CC, Han CY, Huang YL. Clinical Instructors' Perspectives on the Assessment of Clinical Knowledge of Undergraduate Nursing Students: A Descriptive Phenomenological Approach. Healthcare (Basel). 2023;11(13). [PubMed ID: 37444685]. [PubMed Central ID: PMC10340473]. https://doi.org/10.3390/healthcare11131851.
6. Riahi Roohi Z, Salehi S. Quality of Clinical Evaluation from Viewpoint of Nurse Interns and Nursing Unit Clerks; Nursing Students of the School of Nursing and Midwifery. Asian J Pharm Res Health Care. 2016;9(1):17-21. https://doi.org/10.18311/ajprhc/2017/6129.
7. R H, Shakuntala BS. Using Multiple Assessors to Evaluate Core Competencies of Nursing Students: A 360° Evaluation Approach. Nitte Univ J Health Sci. 2013;3(3):13-7. https://doi.org/10.1055/s-0040-1703669.
8. Seyed Bagheri SH, Sadeghi T. Challenges of teacher-based clinical evaluation from nursing students' point of view: Qualitative content analysis. J Educ Health Promot. 2017;6(1):72. https://doi.org/10.4103/jehp.jehp_109_16.
9. Sadeghi T, Loripoor M. Usefulness of 360 degree evaluation in evaluating nursing students in Iran. Korean J Med Educ. 2016;28(2):195-200. [PubMed ID: 26913770]. [PubMed Central ID: PMC4951738]. https://doi.org/10.3946/kjme.2016.22.
10. Sadeghi T, Kazemi M. [Nursing Student's Viewpoints and Experiences about Clinical Evaluation by 360 Degree Approach]. J Qual Res Health Sci. 2016;5(3):273-82. Persian.
11. Gonzalez-Gil MT, Parro-Moreno AI, Oter-Quintana C, Gonzalez-Blazquez C, Martinez-Marcos M, Casillas-Santana M, et al. 360-Degree evaluation: Towards a comprehensive, integrated assessment of performance on clinical placement in nursing degrees: A descriptive observational study. Nurse Educ Today. 2020;95:104594. [PubMed ID: 32979748]. https://doi.org/10.1016/j.nedt.2020.104594.
12. Ghaferi AA, Schwartz TA, Pawlik TM. STROBE Reporting Guidelines for Observational Studies. JAMA Surg. 2021;156(6):577-8. https://doi.org/10.1001/jamasurg.2021.0528.
13. Karayurt O, Mert H, Beser A. A study on development of a scale to assess nursing students' performance in clinical settings. J Clin Nurs. 2009;18(8):1123-30. [PubMed ID: 19320782]. https://doi.org/10.1111/j.1365-2702.2008.02417.x.
14. Esmaeili M, Valiee S, Parsa-Yekta Z, Ebadi A. [Translation and Psychometric Evaluation of Clinical Performance Assessment Scale among Nursing Students]. Strid Dev Med Educ. 2013;10(2):288-97. Persian.
15. Kajander-Unkuri S, Leino-Kilpi H, Katajisto J, Meretoja R, Räisänen A, Saarikoski M, et al. Congruence between graduating nursing students' self-assessments and mentors' assessments of students' nurse competence. Collegian. 2016;23(3):303-12. https://doi.org/10.1016/j.colegn.2015.06.002.
16. Samadi N, Varei S, Ghiyasvandian S, Allahyari I, Moshfeghi S. [Effect of 360-Degree Feedback on the Evaluation of the Clinical Skills of Nursing Students of Ardabil University of Medical Sciences]. J Clin Nurs Midwifery. 2019;7(4):250-7. Persian.
17. Takashima M, Burmeister E, Ossenberg C, Henderson A. Assessment of the clinical performance of nursing students in the workplace: Exploring the role of benchmarking using the Australian Nursing Standards Assessment Tool (ANSAT). Collegian. 2019;26(4):502-6. https://doi.org/10.1016/j.colegn.2019.01.005.
18. Mehrdad N, Bigdeli S, Ebrahimi H. A Comparative Study on Self, Peer and Teacher Evaluation to Evaluate Clinical Skills of Nursing Students. Procedia Soc Behav Sci. 2012;47:1847-52. https://doi.org/10.1016/j.sbspro.2012.06.911.
19. Norozi S, Mogadam F. [Exploring Experiences of Nursing Student's Clinical Evaluation: A Qualitative Content Analysis]. J Med Educ Dev. 2016;11(2):134-45. Persian.
20. Torabizadeh C, Ghodsbin F, Javanmardifard S, Shirazi F, Amirkhani M, Bijani M. The Barriers and Challenges of Applying New Strategies in the Clinical Evaluation of Nursing Students from the Viewpoints of Clinical Teachers. Iran J Nurs Midwifery Res. 2018;23(4):305-10. [PubMed ID: 30034492]. [PubMed Central ID: PMC6034533]. https://doi.org/10.4103/ijnmr.IJNMR_17_17.
21. Mousavi SK, Kamali M. Clinical self-efficacy of final-year nursing students: A comparison of a 360-degree evaluation method with a conventional method. J Med Educ Dev. 2022;15(47):27-35. https://doi.org/10.52547/edcj.15.47.27.
22. Rajabpour M, Karimi Moonaghi H. [Effective Tools and Methods for Evaluating the Clinical Performance of Medical Sciences Students: A Systematic Review]. Horizon Med Educ Dev. 2024;15(2):69-81. Persian. https://doi.org/10.22038/hmed.2024.73743.1283.