1. Context
Faculty members are the most costly workforce in universities; thus, a professor evaluation system should mirror a professor's performance commensurate with their responsibilities in the areas of educational duties, research, service delivery, management, and collegial and extramural behaviors (1, 2). A comprehensive system, besides ensuring fairness, helps achieve more favorable results in the educational system (3, 4).
While a dentist or engineer can see the result of their work immediately, the result of a teacher's work is not easy to observe or measure in the short term. Moreover, much of what learners gain builds on their previous learning; it is therefore challenging for a teacher to see the impact of their own work (5, 6).
Teacher evaluation can serve different functions (7). One of its most tangible goals and applications is its role in managerial decisions, including hiring, renewing contracts, requiring remediation, and even dismissing a professor. This use of evaluation has attracted much attention from both managers and professors. A good evaluation also gives teachers adequate information about how they work, so they know whether they have done good and valuable work. Besides reassuring teachers, a good evaluation increases their job satisfaction (1, 7-9).
According to the latest decree of the Iran Ministry of Health, the duties of professors are classified into seven areas: education; research; personal development; executive and managerial activities; providing specialized health services and health promotion; specialized activities outside the university; and cultural activities (10).
Educational tasks in medical universities cover a wide range of activities. These activities include theoretical and practical teaching, counseling and guidance for students, supervising clinical and educational dissertations, active participation in morning rounds and reports, night watch, on-call services, journal clubs, and workshops for professors, students, and staff (9-11).
Educational appraisal is essential for identifying and promoting educational quality and ensuring continuous improvement. The performance of faculty members, one of the primary building blocks of universities, contributes significantly to the output of an educational system. The importance of understanding and recognizing the performance of clinical faculty members is therefore undeniable (12).
However, evaluation of clinical faculty members faces some challenges. One of the major problems of faculty members' promotion regulations is the incompatibility of scores given to some of their activities (13). Also, professor performance assessment requires collecting data on educational activities, comparing these data with specific and designed standards, and judging the extent to which predetermined goals have been achieved (14).
Evaluating faculty members in this area certainly has its difficulties and complexities. Therefore, the existence of a comprehensive and inclusive system that includes all professional aspects of medical professors seems necessary. The primary purpose of this study was to systematically review the models, tools, and challenges of evaluating the performance of clinical faculty members.
2. Methods
2.1. Definitional Concepts
The present study was a systematic review of articles and documents evaluating the performance of clinical faculty members. This systematic review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines.
2.2. Search Strategy and Selection Criteria
Eight international (EMBASE, ProQuest, Science Direct, Web of Knowledge, Scopus, PubMed, Ovid, and Google Scholar) and four national (Civilica, Irandoc, Magiran, and SID) electronic databases were searched to find published studies and grey literature on evaluation models and methods for clinical faculty members' performance. The search was conducted in 2019 and limited to studies published from 1990 to 2019. The key terms were identified and selected by consulting research experts in this field, and the search strategy was developed in partnership with a research team. The search terms were:
(Assess* OR Measure* OR Judge OR Estimate* OR Evaluate* OR Appraise* OR Rank OR Categorize* OR Grade OR Status OR Classify* OR Report OR Metric OR Model OR Investigate* OR Promote* OR Develop* OR System OR Plan OR Implement* OR Affair OR Level OR Perform OR Calculate* OR Outcome) AND (Clinic* OR Medic* OR Therapy* OR Curate* OR Health OR Hospital) AND (Professor OR Member OR Fellow OR Trainer OR Mentor OR Tutor OR Lecturer OR Teacher OR Staff OR Researcher OR Activity OR Mission OR Workload OR Contribution OR Effort) AND (College OR Institution OR School OR Department OR Faculty OR Campus OR Academia OR Academy OR Academe OR University OR Academic OR Education).
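Although screening in this review was managed manually in EndNote, the Boolean structure above maps directly onto database query syntax. The following is a minimal sketch of how an abbreviated form of the search could be run programmatically against PubMed via NCBI E-utilities (using Biopython's Entrez module); the contact email and the shortened query are illustrative assumptions, not the review's actual workflow.

```python
# Minimal sketch: running an abbreviated form of the Boolean search
# against PubMed via NCBI E-utilities (Biopython's Entrez module).
# The query below is a shortened illustration, not the full strategy.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # hypothetical; NCBI requires a contact address

query = (
    "(Assess* OR Evaluate* OR Appraise*) AND "
    "(Clinic* OR Medic* OR Hospital) AND "
    "(Professor OR Lecturer OR Teacher) AND "
    "(Faculty OR University OR Academic)"
)

# Restrict to the review's 1990-2019 window on publication date.
handle = Entrez.esearch(
    db="pubmed", term=query,
    mindate="1990", maxdate="2019", datetype="pdat",
    retmax=100,
)
record = Entrez.read(handle)
handle.close()

print(f"Hits: {record['Count']}")   # total matching records
print(record["IdList"][:10])        # first PubMed IDs for export and screening
```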
In the initial search process, we reviewed reputable journals in this field. The references of identified articles were also independently hand-searched to find further specific and related articles and studies. We used EndNote software version X9 to manage the search library, screen out duplicates, and remove irrelevant articles. The number of documents retrieved from each database is indicated in Table 1.
| Database | Number of Documents |
|---|---|
| EMBASE | 1327 |
| ProQuest | 429 |
| Science Direct | 293 |
| Web of Knowledge | 6954 |
| Scopus | 1003 |
| PubMed | 4280 |
| Ovid | 791 |
| Google Scholar | 44 |
| Civilica, Irandoc, Magiran, and SID | 22 |
| Total | 15143 |

Table 1. The Number of Articles/Abstracts Generated from the Databases
2.3. Study Screening and Selection
This study undertook a three-stage screening process to select relevant studies and documents. Initially, the authors conducted independent searches in different databases based on the search strategy. Secondly, the titles and abstracts of identified articles and documents were screened independently by the authors to assess their eligibility for inclusion in the review, applying the inclusion and exclusion criteria. Finally, the available full texts of the selected articles were reviewed to confirm whether the studies met the research question of this review. A standard quality assessment of the retrieved articles was conducted using the Critical Appraisal Skills Programme (CASP). Two authors independently reviewed each article for risk of bias; any disagreements were resolved through discussion or consultation with a third author. The process of selecting and reviewing the articles is indicated in Figure 1.
2.4. Inclusion and Exclusion Criteria
All studies, regardless of design or methodology, that evaluated the performance of clinical faculty members were included, provided they were published between 1990 and February 2019. The exclusion criteria were studies with no data within the scope of the research question, books, guidelines, peer reviews, conference papers, and reports. Articles whose full texts were unavailable or that were written in languages other than English and Persian were also excluded. Articles published in Persian were handled according to the type of article.
2.5. Data Extraction
The authors screened and summarized the full texts of eligible studies and documents using designed descriptive and thematic analysis forms. The forms captured the author, the country in which the study was carried out, the study year, the study design, and key results. Thematic analysis was used to analyze the data: the findings of the final studies were coded line by line, the codes were grouped, and the initial themes were obtained. The themes and subthemes were then examined and compared across studies for similarities and differences, yielding the final themes. MAXQDA software was used for the analysis. Finally, the manuscript was checked against the PRISMA checklist.
2.6. Data Analysis
A thematic synthesis approach was used to gather information, and two authors performed the inductive analysis. The authors extracted and coded each study's findings, grouped the codes by similarity, and classified the grouped findings into four main themes. The findings were analyzed using MAXQDA software; 1506 codes were identified through thematic analysis and categorized into the four main themes. Both authors checked the accuracy and completeness of the extracted data.
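To make the grouping step concrete, the following is a minimal sketch of how line-by-line codes can be rolled up into subthemes and then into main themes. It is not the authors' actual codebook or MAXQDA pipeline; all code labels and the mapping below are illustrative.

```python
# Minimal sketch of the code-grouping step in a thematic synthesis:
# line-by-line codes are mapped to subthemes, and subthemes roll up
# into main themes. Labels here are illustrative, not the real codebook.
from collections import Counter, defaultdict

# Hypothetical extract of line-by-line codes taken from included studies.
codes = [
    ("360-degree evaluation", "System"),
    ("relative value of activities", "System"),
    ("evaluation time", "Process"),
    ("complaints review committee", "Structure"),
    ("educational scope", "Indicators"),
    ("fear of disclosing peer review results", "Challenges"),
]

# Subthemes belonging to the evaluation-model theme, per Table 3.
MODEL_SUBTHEMES = {"System", "Structure", "Indicators", "Process"}

themes = defaultdict(list)
for code, subtheme in codes:
    theme = ("Model of evaluating performance"
             if subtheme in MODEL_SUBTHEMES else subtheme)
    themes[theme].append(code)

counts = Counter({theme: len(cs) for theme, cs in themes.items()})
for theme, n in counts.most_common():
    print(f"{theme}: {n} code(s)")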
3. Results
The search yielded a total of 15143 documents and 20 grey literature documents (stage 1). After duplicates were removed, 13791 studies remained; title and abstract review then excluded 484 studies that were not relevant to evaluating the performance of clinical faculty members (stage 2). A total of 145 articles proceeded to full-text review (stage 3), of which 139 were discarded for not meeting the inclusion criteria (Figure 1). Finally, 25 studies were included in the final analysis (Table 2).
| Authors | Year | Method | Main Results |
|---|---|---|---|
| Troncon (15) | 2004 | Descriptive, semi-quantitative study | A shortage of resources, organizational problems, cultural aspects, and the lack of a better educational climate are the weaknesses of traditional medical schools. |
| McVey et al. (16) | 2015 | Observational cohort study | Assessing technical and communication skills as part of a national continuing education process is recommended. Devoting further resources to objective skills evaluation is essential for the educational system. |
| Moore et al. (17) | 2018 | Synthesis | Results show that practitioners will have a more explicit approach to helping clinicians and providers. |
| Vaughan et al. (18) | 2015 | Survey | It is essential to assess and accredit local surgical specialization programs and the training of non-physician surgical practitioners. |
| O'Keefe et al. (19) | 2013 | Cross-sectional surveys | This study observed differences in staff education, training, and competencies, suggesting that enhanced epidemiologic training might be needed in local health departments serving smaller populations. |
| McNamara et al. (20) | 2013 | Qualitative design | Each participant's current role and everyday practice are essential when using mentoring, coaching, and action learning interventions. This method helps participants develop and demonstrate clinical leadership skills. |
| Cantillon et al. (21) | 2016 | Qualitative survey | Becoming a clinical teacher entails negotiating one's identity and practice between two potentially conflicting planes of accountability. Clinical CoPs are primarily conservative and reproductive of teaching practice, whereas accountability to institutions is potentially disruptive of teacher identity and practice. |
| Savari et al. (22) | 2018 | Multimethod research | Three general themes were identified: clarifying and determining healthy dietary behaviors and actions, teaching life skills and adopting healthy diet behaviors, and utilizing social norms for adopting healthy diet patterns. |
| Haghdoost and Shakibi (23) | 2006 | Cross-sectional study | Some differences were found between students' perceptions of their lecturers and staff perceptions of their colleagues. Students were more concerned with the personality of their lecturers. |
| Horneffer et al. (24) | 2016 | Cross-sectional study | An intensified didactic training program for student tutors may help them improve. More studies should be done to optimize the concept regarding time expenditure and costs. |
| Mohammadi et al. (25) | 2011 | Short communication | The study provided reliable information about department chairs' concerns about and reactions to this system. The researchers identified strengths of, and threats to, developing a faculty member activity measurement system. |
| Shahhosseini and Danesh (26) | 2014 | Qualitative study | This study focused on effective measures to improve faculty members' situation and increase their efficiency, effectiveness, and productivity. |
| Vieira et al. (27) | 2014 | Exploratory study | The strategy used in this study was partially effective but could be improved, mainly through more research on its duration, including a discussion of actual cases. |
| Boerboom et al. (28) | 2011 | Questionnaire | The MCTQ is a valid and reliable instrument for evaluating clinical teachers' performance during short rotations. |
| Young et al. (29) | 2014 | Developing the form | Respectful interaction with students was the most influential item in the global rating of faculty performance. The method used in this document is a moderately reliable tool for assessing the professional behaviors of clinical teachers. |
| McQueen et al. (30) | 2016 | Grounded theory approach | The barriers to effective assessment and feedback identified in this study should be addressed to improve postgraduate medical training. |
| Ipsen et al. (31) | 2010 | Nominal group process consensus method | The documents of this study suggest that it is possible to develop standardized measurements of educational work. The studied faculty emphasized developing the work schedule. |
| Guraya et al. (32) | 2018 | Single-stage survey-based randomized study | This study found time constraints and insufficient support for research to be critical barriers to medical professors' research productivity. The authors recommended financial and technical support and a lighter administrative workload. |
| van Roermund et al. (33) | 2011 | Qualitative study | The critical role played by teachers' feelings and expectations regarding their work was studied. The authors recommended that, when developing a new teaching model and faculty development programs, attention be paid to teachers' existing identification model, culture, and context. |
| Wang et al. (34) | 2012 | Non-experimental research | The authors found that faculty members are not satisfied with the evaluative process and emphasize the need for improvement and development of evaluation tools. |
| Tsingos-Lucas et al. (35) | 2016 | Mixed-method study | Students and professors perceive the RACA as an effective educational tool that may increase skill development for future clinical practice. |
| Shaterjalali et al. (4) | 2018 | Delphi | The results indicated the necessity of forming a teaching team, paying attention to selection criteria, and planning requirements for assigning teaching responsibilities. |
| Roos et al. (36) | 2014 | Mixed-method evaluation | Findings showed the success of a 5-day education program in embedding knowledge and skills to improve the performance of medical educators. Using qualitative and quantitative measures, this approach could serve as a framework to assess the effectiveness of comparable interventions. |
| Nandini et al. (37) | 2015 | Descriptive study | Absenteeism of students, overcrowding of wards, and lack of uniformity of study materials were essential factors. |
| Colletti et al. (38) | 2010 | Survey | The authors designed a five-domain instrument that consistently accounted for variations in faculty teaching performance as rated by resident physicians. This instrument may be useful for the standardized assessment of instructional quality. |
| Oktay et al. (11) | 2017 | Cross-sectional study | The evaluator groups and residents accepted the 360-degree assessment, and the method was readily adopted in the studied university residency training program. However, only evaluations by faculty, nurses, self, and peers were sufficiently reliable. |
Table 2. Characterization of Studies
Four main themes were identified: models of evaluating the performance of clinical faculty members, education, data gathering tools, and challenges of evaluating the performance of clinical faculty members. The main subthemes and categories of the model of evaluating the performance of clinical faculty members are displayed in detail in Table 3. The other themes and their findings are indicated in Table 4.
| Theme | Subtheme | Category |
|---|---|---|
| Model of evaluating the performance of clinical faculty members | System | Necessary features in system design |
| | | Computation systems for faculty activities |
| | | Evaluation resources |
| | | Shoaa system |
| | | 360-degree evaluation |
| | | Balanced scorecard |
| | Indicators | Clinical scope |
| | | Research scope |
| | | Educational scope |
| | | Executive scope |
| | | Research in education |
| | | Individual development |
| | | Citizenship |
| | | Informal roles |
| | Structure | Individuals or units responsible for data collection and analysis |
| | | Individuals or units responsible for judging the performance |
| | | Individuals or units responsible for reviewing the reports |
| | | Competency committee for review of documents |
| | | Complaints review committee |
| | Process | Method of collecting work data of the faculty members |
| | | Identification of the feedback system |
| | | Evaluation time |
| | | Confidential or anonymous assessments and non-confidential assessments |
| | | Committee rating |
| | | Analysis of available output data |
| | | Design and certification standards for continuing professional education |
| | | Developmental and aggregate two-dimensional evaluation |
| | | Developing impact mapping |
| | | Awards by geographic impact level |
Table 3. Subthemes and Categories of the Model of Evaluating the Performance of Clinical Faculty Members
| Theme | Findings |
|---|---|
| Education | Motivate the students and colleagues |
| | Availability |
| | Communication skills |
| | Provide and use educational facilities |
| | Educational planning |
| | Creating a favorable educational environment |
| | Features of being a role model |
| | Guidance and advice |
| | Student participation |
| | Class management |
| | Attention to educational rules |
| | Evaluation of learners' performance |
| | Recognizing students |
| | Teaching skills |
| | Content mastery |
| | Personality characteristics |
| Data gathering tools | Developing and applying the evaluation system for educational activities |
| | Assessment of the professor in the emergency medicine program |
| | Calculation of American educational performance |
| | American clinical education assessment |
| | Evaluation of clinical dentistry professors |
| | The peer evaluation system |
| | The effectiveness of clinical education in assessing the developmental evaluation of faculty members |
| | Assessment of resident anesthesia supervision |
| | Evaluation of anesthesia training quality |
| | Evaluation of clinical education residents |
| | Systematic evaluation of the educational quality of medical faculty members |
| | Evaluation of the educational performance of medical school faculty members |
| | Clinical education evaluation tool related to CanMEDS roles |
| | Canadian clinical educational evaluation |
| | Dutch surgeon self-assessment and resident assessment |
| | Seeing a colleague (US) |
| | Evaluation of Dutch resident professors |
| | Assessing the supervision of anesthesia residents |
| | Evaluation by medical students |
| | Australian clinical education quality questionnaire |
| | Self-assessment questionnaire of Dutch resident education quality |
| | Evaluation of radiology professors by residents |
| | Residents' evaluation of clinical professor performance |
| | Questionnaire of clinical educational effectiveness |
| | Educational framework questionnaire for evaluating clinical professors |
| | Features required for tool preparation |
| | Training effectiveness calculation tool |
| | Evaluation tool with stakeholder opinion |
| | Calculation of clinical and educational activities |
| | Evaluation of the quality of teaching theoretical courses |
| | The Master's Clinical Training Questionnaire |
| | Self-assessment criteria |
| Challenges | All-or-nothing ("zero and one") judgments by some managers |
| | Seeking an ideal computing system that never materializes |
| | Fear of being manipulated through statistics |
| | Lack of an information and data culture |
| | Lack of trust in evaluation systems |
| | Not applying a specific framework to all groups |
| | Making no distinction between active and inactive members |
| | Unclear responsibilities of faculty members |
| | Differences between clinical and non-clinical groups |
| | The need to provide infrastructure |
| | Low motivation of faculty members |
| | The challenges of cultural change |
| | The possibility of the system being gamed by faculty members |
| | The possibility of faculty members chasing scores |
| | Not considering the quality of work |
| | Possible interference between different performance calculation systems |
| | Lack of control questions to avoid random responses |
| | Failure to cover all factors affecting teacher evaluation |
| | Lack of training for people involved in the evaluation process |
| | Excessive attention to research output relative to educational activities |
| | Lack of attention to religious values in the evaluation system |
| | Fear of disclosing peer review results |
| | Lack of trust between faculty members |
| | Inaccurate use of results |
| | Lack of appropriate evaluation tools |
| | Unnecessary bureaucratic requirements |
| | Focus on the number of articles in evaluation |
| | Inadequate quantitative and qualitative indicators |
| | The subjectivity of some promotion indicators |
| | Lack of a unified protocol |
Table 4. Main Themes and the Findings of the Systematic Review
3.1. Models of Evaluating the Performance of Clinical Faculty Members
The findings of this study showed four main subthemes of the model of evaluating the performance of clinical faculty members: systems, structure, indicators, and process (Table 5).
| Systems | Structure | Indicators | Process |
|---|---|---|---|
| Shoaa system; balanced scorecard; 360-degree evaluation | Individuals or units responsible for data collection and analysis; individuals or units responsible for judging performance; individuals or units responsible for reviewing reports; competency committee for evidence; complaints review committee | Clinical scope; research scope; educational scope; executive scope; research in education; individual development; citizenship; informal roles | Identification of the feedback system; evaluation time; confidential or anonymous assessments and non-confidential assessments; committee rating; analysis of available output data; design and certification standards for continuing professional education; developmental and aggregate two-dimensional evaluation; developing impact mapping; awards by geographic impact level |
Table 5. Clinical Faculty Performance Evaluation Models
3.2. Systems
The categories under the system subtheme were necessary features in system design, computation systems for faculty activities, evaluation resources, the Shoaa system, 360-degree evaluation, and the balanced scorecard.
Many articles have focused on the necessary features in system design. Examples of such features considered by other researchers include the participation of professors in the design and implementation of the evaluation system (39); identification of the standard time spent on various activities (40); item resolution, attention to the professor's personal characteristics and departmental differences, an appropriate application format, a completion guide, and ranking of professors within the department, faculty, and university (41); vision setting, verifiers, the analysis interval, minimum expectations, evaluation time (within one month), web-based operation, self-reporting, and contingency design to fit each department (42); the existence of written procedures and policies for clinical evaluations and explicit performance evaluation objectives (43); and indigenous standards for system design (44).
Regarding computation systems for faculty activities, the relative-value method (9, 45, 46) has been described by various researchers. Its steps include forming a working group; identifying the main areas of professors' activities for comparison; listing all specific activities; determining the relative-value range; determining the average relative value of activities; valuing other activities in proportion to the average activity; determining the time dimension of activities; identifying the activities of senior faculty members in each field and selecting the score of that activity as the excellence criterion; normalizing with respect to deprivation pay; piloting the system with several professors; modifying the system; implementing it temporarily with all faculty members; making further corrections; and finalizing it for full implementation.
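As a concrete illustration of the relative-value logic described above, the sketch below computes a time-weighted workload score and normalizes it against a senior benchmark. It is a minimal sketch under assumed numbers: the activity names, relative values, hours, and benchmark are hypothetical, not taken from the reviewed studies.

```python
# Minimal sketch of a relative-value workload computation: each activity
# gets a relative value (RV) against an anchor activity, is weighted by
# time spent, and the total is normalized to a senior benchmark.
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    relative_value: float  # agreed RV vs. the anchor activity (anchor = 1.0)
    hours: float           # time dimension of the activity

def workload_score(activities: list[Activity]) -> float:
    """Time-weighted sum of relative values for one faculty member."""
    return sum(a.relative_value * a.hours for a in activities)

# Hypothetical activity log for one clinical faculty member.
member = [
    Activity("theoretical teaching (anchor)", 1.0, 40),
    Activity("morning report supervision",    0.8, 30),
    Activity("thesis supervision",            1.2, 15),
    Activity("on-call service",               0.6, 50),
]

raw = workload_score(member)
# Normalize against the department's top (senior) score, so 1.0 marks
# the excellence criterion mentioned in the reviewed method.
senior_benchmark = 120.0  # assumed departmental maximum
print(f"raw={raw:.1f}, normalized={raw / senior_benchmark:.2f}")
```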
Evaluation resources, the Shoaa system (20), 360-degree evaluation (11, 47, 48), and the balanced scorecard (1) were the other categories under the system subtheme.
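For the 360-degree evaluation mentioned above, a common aggregation pattern is to average ratings within each rater group and then combine the group means with weights. The sketch below illustrates this pattern; the rater groups echo those reported by Oktay et al. (11), but the ratings and weights are invented for illustration.

```python
# Minimal sketch of aggregating a 360-degree evaluation: ratings from
# several rater groups are averaged per group, then combined with weights.
from statistics import mean

ratings = {  # hypothetical 1-5 ratings for one clinical teacher
    "peers":     [4.5, 4.0, 4.2],
    "nurses":    [3.8, 4.1],
    "self":      [4.6],
    "residents": [3.9, 4.3, 4.0, 4.1],
}

weights = {"peers": 0.3, "nurses": 0.2, "self": 0.1, "residents": 0.4}

group_means = {g: mean(rs) for g, rs in ratings.items()}
composite = sum(weights[g] * m for g, m in group_means.items())

for g, m in group_means.items():
    print(f"{g:10s} mean={m:.2f}")
print(f"composite (weighted) = {composite:.2f}")
```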
3.3. The Structure
Individuals or units responsible for data collection and analysis, individuals or units responsible for judging performance, individuals or units responsible for reviewing reports, the competency committee for evidence, and the complaints review committee were the main categories of the structure subtheme of the model of evaluating the performance of clinical faculty members.
3.4. Indicators
Indicators were categorized into eight categories: Clinical scope, research scope, educational scope, executive scope, research in education, individual development, citizenship, and informal roles.
The clinical scope had subcategories such as specialty and medical knowledge, system-based learning, clinical skills, clinical responsibility, clinical awards, quality of professors' services, resource management, clinical/hospital monitoring, recognition of faculty members as medical elites, new medical services, case reports, clinical activity, role modeling, contractual services, on-call shifts, and regular shifts.
The subcategories of the research scope were the number of research projects, grants and rewards, lectures and conferences, publications, serving as a journal referee or editor, guideline development, number of inventions, job awards or foreign certificates, faculty research reputation, faculty support for the research mission, and thesis advising.
The results of this study showed that the educational scope covers mentoring and consulting, training hours, resource management and cost-effectiveness of training, role modeling, educational awards, quality of education, journal clubs, areas of clinical education, learners' scores, educational impact scores, assessment of learners, training place, evidence-based medical education, number of learners, internships, educational innovations, laboratory activity, non-clinical education, educational evaluation scores, and adult education.
The executive scope subcategories were heading a department, deputyship, or school; managing committees; and educational leadership and training management.
Research in education is another indicator, with 10 subcategories: community-based education and research, research opportunities, strategic planning of a field of study, curriculum review and development, refereeing, grants for educational research, educational scholarship products, a grant index, educational awards, and personnel development.
According to the research findings, another indicator was individual development, whose subcategories were facilitation skills, formal teaching skills courses, advanced degrees, certificates and renewal of specialized and subspecialized certificates, and participation in educational activities and workshops.
Citizenship was another indicator that focuses on establishing a proper working relationship with colleagues, facilitating personal and professional development, role modeling cooperation, facilitating respect, effectiveness, and interaction with the team, paying attention to personnel training, and supporting staff.
3.5. The Process
The main categories of the process were the method of collecting faculty members' work data, identification of the feedback system (49), evaluation time (41), confidential or anonymous versus non-confidential assessments (50), committee rating, analysis of available output data (51), design and certification standards for continuing professional education (52), developmental and aggregate two-dimensional evaluation (49), development of impact mapping (53), and awards by geographic impact level (54).
4. Discussion
Evaluating the performance of clinical faculty members means taking action toward a better education system. The aim is to design a fair, equitable, and practical evaluation system that ensures clinical teaching effectiveness; such evaluation is an ongoing process in medical universities (1). The current systematic review identified four main themes: the model of evaluating the performance of clinical faculty members, education, data gathering tools, and challenges of evaluating the performance of clinical faculty members. The main categories of the evaluation model are systems, structure, indicators, and process. Several studies have evaluated the status of clinical faculty members with different methods; based on their reported results, all methods have their challenges (REF). According to the findings of this systematic review, the main challenges of systems for evaluating clinical faculty members' performance are the lack of an information and data culture, lack of trust in evaluation systems, not applying a specific framework to all groups of faculty members, unclear responsibilities of faculty members, low motivation of faculty members, the challenges of cultural change, not considering the quality of work, failure to cover all factors affecting faculty member evaluation, lack of training for people involved in the evaluation process, fear of disclosing peer review results, lack of trust between faculty members, and inaccurate use of results.
Numerous studies worldwide have evaluated the performance of clinical faculty members. Most have been quantitative, examining professors' performance cross-sectionally through questionnaires. Some have focused on developing and psychometrically validating tools for measuring teachers' performance, each tool using different dimensions. Review studies have also introduced the areas and items involved in the performance evaluation of clinical professors. In one study conducted in Iran in 2018, factors affecting the evaluation results of university faculty members were examined from the perspective of university professors. Accordingly, universities use two sources, students and administrators, to evaluate professors, and two dimensions, the educational system and faculty members' characteristics, were used to measure the effectiveness of the evaluation (2). Another study, conducted in Germany in 2015, developed a framework of basic competencies for clinical professors; the final model comprised six core competencies, covering reflection on and advancement of personal training and the use of systems related to teaching and learning (55). A study conducted in Denmark in 2010 identified the elements that clinical faculty members considered essential; the faculty members introduced six such elements, ranking the participation of residents and clinical faculty members, time for management and development, and formal educational activities such as occasional evening lectures (56). A study conducted in the United States in 2010 mapped the scholarly field for clinical professors, including formal educational research (such as new educational technology, grants for educational research, and clinical trials), advanced degrees (such as a master's in public health), referee board membership, serving as a journal referee, educational grants, and journal editorship (57).
Some tools and checklists for evaluating clinical faculty members' performance have been developed by different organizations (11, 29, 58, 59). The findings of this systematic review identified the primary data-gathering tools and checklists for evaluating the performance of clinical faculty members. Ahmadi et al. (as cited by Haghdoost and Shakibi) conducted a study adapting the personnel evaluation standards for monitoring and continuously improving a faculty evaluation system in the context of medical universities in Iran; that study attempted to assess multiple faculty roles, including educational, clinical, and healthcare services (23). Our findings revealed further items for evaluating the performance of clinical faculty members. Evaluating a faculty member requires a multidimensional approach, yet many studies focus on just one aspect of evaluation. For example, Kamran designed a teaching quality assessment form in Lorestan, Iran (60), and Chandran designed a novel analysis tool to assess the quality and impact of educational activities (59). The findings of this study showed that all checklists and data-gathering tools have strengths and weaknesses.
5. Conclusions
Educational tasks in medical universities cover a wide range of activities, including theoretical and practical teaching, counseling and guidance for students, supervising clinical and educational dissertations, active participation in morning rounds and reports, night watch, on-call services, journal clubs, and workshops for professors, students, and staff. Given the breadth and variety of these activities, evaluating faculty members in this field certainly has its difficulties and complexities. Therefore, a comprehensive and inclusive system covering all professional aspects of medical professors is necessary.
In general, evaluation systems have consequences for the performance of faculty members. Policymakers and educational managers play an essential role in designing evaluation mechanisms and managing their effects on faculty members. Using a fair and comprehensive evaluation system is crucial, and any neglect can harm a faculty member's performance. In this systematic review, we provided a comprehensive discussion and summarized all aspects of the models, tools, and challenges of evaluating the performance of clinical faculty members. In conclusion, the present study's findings could help policymakers design an appropriate model for performance evaluation.