
Assessment of Otoscopy: How Does Observation Compare to a Review of Clinical Evidence


Simon Jonathan Davis 1 , * , Jon Viljar Norvik 2 , Kristin Elisa Ruud Hansen 3 , Ingrid Vognild 4

1 MBChB, OSCE director & general practitioner, UiT, Norway

2 Department of Medical Biology, UiT, Norway

3 OSCE administrator & civil engineer (biotechnology), UiT, Norway

4 Radiologist in training, Department of Radiology, UNN hospital, Tromsø, Norway

How to Cite: Davis SJ, Norvik JV, Ruud Hansen KE, Vognild I. Assessment of Otoscopy: How Does Observation Compare to a Review of Clinical Evidence. J Med Edu. 2016;15(3):e105526.
doi: 10.22037/jme.v15i3.13866.


Journal of Medical Education: 15 (3); e105526
Published Online: January 11, 2017
Article Type: Research Article
Received: September 19, 2016
Accepted: November 08, 2016


Background and Purpose: To investigate how closely the method of observation agrees with a standardised review of evidence from clinical examination, for the assessment of clinical otoscopic competence.

Methods: 65 medical students took part in an Objective Structured Clinical Examination (OSCE) station using patients with real pathology. Examiners assessed otoscopic competency in tympanic membrane examination solely by distant observation. An external examiner later reviewed candidates’ documented findings on a schematic drawing of the tympanic membranes. Observed agreement between the two methods and Cohen’s kappa coefficient were calculated.

Results: Mean otoscopy scores for examiner 1 and examiner 2 were 67.7% and 29.4%, respectively; the difference was significant by the Mann-Whitney U-test. OSCE observation declared 47.7% of candidates (31/65) to be clinically competent, whereas drawing-based analysis deemed only 4.6% (3/65) to have achieved this competency. This represents a more than ten-fold overestimation of clinical competency by OSCE assessment. Observed agreement between the assessment methods was 59.6%, and Cohen’s kappa coefficient was 0.1.

Conclusions: OSCE observational assessment of otoscopic clinical competency correlates poorly with review of evidence from clinical examination. If evidence review is accepted as a better marker of competency, observation should not be used alone in OSCE assessment. Evidence review itself is vulnerable to candidate guesswork. OSCE could explore candidate demonstration with explanation of findings, using digital otoscopy to offer a shared view of the tympanic membranes, as an improved standard of clinical competency assessment.
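The agreement statistics above follow the standard Cohen's kappa formula, kappa = (p_o − p_e) / (1 − p_e), where p_o is the observed agreement and p_e the agreement expected by chance from the raters' marginal totals. The sketch below illustrates the calculation on a hypothetical 2×2 pass/fail table; the individual cell counts are assumptions chosen only to be consistent with the reported marginals (31/65 and 3/65 passes) and are not the study's actual data.

```python
# Illustrative Cohen's kappa calculation for two raters on a pass/fail decision.
# The cell counts are hypothetical (only the marginals 31/65 and 3/65 come
# from the paper); the study reports observed agreement 59.6% and kappa 0.1.

def cohens_kappa(table):
    """table[i][j]: count of candidates rater A scored i and rater B scored j."""
    n = sum(sum(row) for row in table)
    # Observed agreement: proportion on the diagonal.
    p_o = sum(table[i][i] for i in range(len(table))) / n
    # Chance agreement from the marginal totals of each rater.
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    p_e = sum(r * c for r, c in zip(row_totals, col_totals)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Rows: OSCE observer (pass, fail); columns: drawing review (pass, fail).
table = [[3, 28],
         [0, 34]]
print(round(cohens_kappa(table), 2))  # → 0.1
```

With these hypothetical counts, observed agreement is about 57% yet kappa is only about 0.1, showing how a modest-looking raw agreement can reflect near-chance concordance once the raters' very different pass rates are accounted for.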


The body of the article can be found in the PDF file.


References are available in the PDF file.

© 2016, Journal of Medical Education. This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License, which permits copying and redistributing the material for noncommercial use only, provided the original work is properly cited.