
Optometric Education

The Journal of the Association of Schools and Colleges of Optometry

Optometric Education: Volume 46 Number 2 (Winter-Spring 2021)

PEER REVIEWED

Student Satisfaction with an Objective Structured Clinical Examination in Optometry

Patricia Hrynchak, OD, MScCH (HPTE), FAAO, DipOE, Jenna Bright, BSc, MSc, OD, Sarah MacIver, BSc, OD, FAAO, and Stanley Woo, OD, MS, MBA, FAAO

Abstract

An objective structured clinical examination (OSCE) is a performance-based examination in which students rotate through a series of stations where they are expected to perform specified tasks to demonstrate competency. This paper reports on student satisfaction with the OSCE administered at the end of a Doctor of Optometry professional program. Students were very positive about the interactions with standardized patients and the organization of the examination. They felt the examination used realistic scenarios. They had a mixed response to the use of simulators for skills assessment and the length of time available in each station to perform those tasks. Overall, the optometry students were very positive about the opportunity to participate in the OSCE prior to sitting for the entry-to-practice Optometry Examining Board of Canada examinations, which also use OSCEs.

Key Words: education, optometry, objective structured clinical examination, program evaluation, satisfaction, summative

Introduction

Graduates of schools and colleges of optometry must have the knowledge, skill and judgement to practice effectively and safely. Optometric education requires a system of assessment to determine whether students have reached the desired competencies1 for entry into practice. A system of assessment can include multiple assessment types including clinic-based assessment methods (mini-clinical evaluation exercise, global rating scales), written examinations (multiple-choice questions or key feature questions) and performance-based assessments.2

Figure 1. Miller’s pyramid of clinical assessment with assessment method examples at each level.

Miller proposed a framework of four levels of clinical assessment in healthcare education.3 Learners demonstrate increasing ability moving upward in a pyramid from knows (knowledge), knows how (application of knowledge), shows how (performance) to does (action). Each level of the pyramid requires the use of different assessment methods3 that combine to form a system of assessment.4 The objective structured clinical examination (OSCE) assesses at the “shows how” level (Figure 1).

The OSCE is a performance-based assessment in a simulated environment where students rotate from one station to the next and are expected to perform a particular clinical task or series of tasks in each station.5 Typically, one or two examiners in each station rate the performance on checklists, global rating scales or both.6 A standardized patient (SP) in the station may act to portray a relevant clinical scenario.7 The SPs can also rate student performance.5 The stations are timed, usually five to ten minutes in length, and the student moves from one task to the next through a predetermined number of stations.7 Rest stations are often interspersed through the examination.5

The OSCE is able to assess competencies such as communication, professionalism and patient-centeredness as well as the demonstration of higher-order reasoning skills in real time.6 Reliability can be increased by conducting the examination in a controlled environment, which makes it more objective than practice-based assessments.1 Reliability is a necessary component of a valid examination.8 The OSCE extends the psychomotor skills examination (proficiencies) by using cases and requiring active problem-solving, diagnosis and planning.6

The OSCE as an assessment format was originally developed by Ron Harden in 1975.9 It is considered the gold standard in high-stakes assessment in healthcare education.10 It is used extensively in undergraduate and postgraduate healthcare education as well as in national board examinations.11

The Optometry Examining Board of Canada (OEBC) is the Canadian equivalent to the National Board of Examiners in Optometry (NBEO) in the United States. “OEBC establishes a psychometrically valid and defensible assessment to establish entry-to-practice competence in optometry in Canada.”12 The results of the examination are used in registration (licensure) decisions by the provinces. There are two parts to the OEBC examinations: 1) a case-based, written multiple-choice examination and 2) an OSCE. The OSCE was first used as an assessment format by the OEBC in 2017. Although the success rate of University of Waterloo students completing the OEBC did not change, the students’ feedback to the school was that they generally felt unprepared when completing the OSCE. It was therefore important to introduce optometry students from the University of Waterloo School of Optometry and Vision Science to this assessment format.

Education program evaluation is the process of collecting, analyzing and interpreting data.13 Some models evaluate outcomes and others evaluate the processes of the education program.13 Kirkpatrick developed a four-level outcomes-based model. The levels are reaction (satisfaction), learning (knowledge or skill acquired), behavior (transfer of learning to the workplace) and results (impact on society).13 Student satisfaction surveys assess the reaction level of the model, with the results used to improve the program.

In this pilot project, we developed an OSCE to improve on the system of assessment used to determine end-of-program competency in the University of Waterloo Doctor of Optometry program. An additional reason for the development was to expose the students to this type of assessment within the program before they attempted the high-stakes OEBC examinations. Research has shown that practicing an OSCE can lead to improved confidence and lowered stress.14

Here we report on the satisfaction of the students immediately after taking an OSCE at the end of their formal optometric education and before taking their OEBC OSCE. The aim is to use information gathered from the surveys to inform future administrations of the OSCE as part of the evaluation process.

Methods

We give a brief overview of the development of the OSCE, followed by a description of the satisfaction survey administration. The OSCE was designed and developed by closely following the evidence-based principles outlined in AMEE Guide No. 81. Part II: organisation & administration.5 A team of four faculty members worked on the design, development and administration of the OSCE over a period of more than one year. One team member had previous experience with the OSCE at another institution.

The examination content was mapped to the OEBC’s national entry-level competency profile. Creating this blueprint is a way of defining what is to be measured by mapping to entry-level abilities.8 The domains of competence assessed were communication, professionalism, patient-centered care, assessment (skills), diagnosis and planning, and patient management. The content areas were refractive care, binocular vision and ocular disease. The proportion of competencies in the cases was determined by the combination of frequency and importance of the competencies as described by OEBC in the development of its blueprint.15

We developed 11 stations: four on refractive care, three on binocular vision and four on ocular disease. Six of the stations were interactive with SPs and five were not. To improve reliability, simulators (e.g., Eyesi Indirect Ophthalmoscope, VRmagic, Mannheim, Germany) were used for all skills that would otherwise have been tested on a person. Three rest stations were interspersed among the active stations. Each station was 10 minutes long: two minutes to read the case description and required tasks before entering, and eight minutes in the station. This was the maximum number of stations that could be administered in a reasonable timeframe.

Two of the developers tested the stations with one acting as an assessor and the other as a student. The stations were then modified to fit within targeted time and difficulty level. Subsequently, we piloted the examination with seven volunteer students who had completed the International Optometric Bridging Program at the University of Waterloo School of Optometry and Vision Science. The results of the pilot helped to refine the stations for the next iteration.

The assessment tools developed for the stations were a combination of global rating scales and check sheets depending on the competencies assessed in the station. The assessors were volunteer faculty members and optometric residents at the School of Optometry and Vision Science. The 15 assessors were trained to grade performance consistently during a three-hour session. Specifically, they were trained to accurately and consistently distinguish between a “pass” (optometry candidate addressing all criteria determined necessary to be a minimally competent practicing optometrist) and a “fail” (optometry candidate not meeting the minimum requirement of competence). They assessed different levels of performance while viewing videos of mock performances (specifically a “pass,” “borderline” and “failing” candidate). These videos were of the development team acting out different levels of student performance. The assessment results were compared and discussed to ensure grading consistency.

The McMaster University Standardized Patient Program was hired to provide the SPs. We worked with the trainer from McMaster University to train the SPs to portray the character in the clinical scenario for each station that used them. If more than one SP was used for the same station they were trained together to ensure uniform performance. The SPs were very experienced and compensated for their expertise.

The examination was conducted in four sessions over two days. Ideally, candidates would have no contact with one another before taking the examination. However, this was not possible because the circuit took approximately two and a half hours to complete, and we lacked enough assessors to run concurrent circuits for the number of students taking the examination. Students signed a confidentiality agreement requiring them not to disclose the content of the stations. We did not provide the content of the stations to the students; they did not have access to the assessment tools (check sheets and global rating scales), and they did not have advance knowledge of the psychomotor skills to be tested.

We invited the 90 students in the graduating class to volunteer to take the OSCE via an electronic survey four months before the examination was administered. Because this was a pilot, the OSCE was not a required element of assessment in the program, and no incentive was given for participation. Written consent to use the students’ anonymized examination results was obtained during class time by a third party who was not involved with the project and did not instruct the students. The study received ethics clearance from the University of Waterloo Office of Research Ethics, with standards that follow the Declaration of Helsinki. The students then voluntarily completed a survey regarding their perceptions of the examination immediately after the OSCE.

The satisfaction survey was developed by the School of Pharmacy at the University of Waterloo, which uses OSCEs throughout its professional program, and was modified for use in optometry. The survey comprises nine statements about components of the OSCE, to which students respond on a five-point rating scale of strongly disagree, disagree, neither agree nor disagree, agree and strongly agree. It also collects open-ended comments, prompted by the items “Things done particularly well” and “Anything that may have hindered the performance.” The percentage of students choosing each category for each statement was calculated. Two of the investigators independently delineated themes from the open-ended comments by reading the comments and identifying recurring concepts; their results were compared and discussed until agreement was reached.

The response categories were assigned a number from one to five corresponding to strongly disagree to strongly agree. Cronbach’s α was calculated to determine the internal consistency of the survey.
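The calculation can be illustrated with a short sketch. The following Python snippet is not the authors’ analysis code and uses a small hypothetical response matrix; it simply shows how five-point responses coded 1 to 5 yield Cronbach’s α from the item and total-score variances, and how α can be recomputed with each item deleted, the check referred to in the Results.

```python
# Minimal sketch (not the authors' analysis code): Cronbach's alpha for a
# nine-item survey scored on a five-point Likert scale (1 = strongly disagree,
# 5 = strongly agree). The response matrix below is hypothetical.
import numpy as np


def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores coded 1 to 5."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)


# Hypothetical data: 6 respondents x 9 survey statements
responses = np.array([
    [5, 4, 5, 5, 4, 4, 5, 4, 4],
    [4, 4, 4, 5, 4, 3, 4, 4, 3],
    [5, 5, 5, 5, 5, 4, 5, 5, 4],
    [3, 3, 4, 4, 3, 3, 3, 4, 3],
    [4, 4, 5, 4, 4, 4, 4, 4, 4],
    [5, 4, 4, 5, 5, 4, 5, 4, 4],
])

print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")

# "Alpha if item deleted": recompute alpha with each question removed in turn.
# Values no higher than the overall alpha suggest each item contributes.
for i in range(responses.shape[1]):
    reduced = np.delete(responses, i, axis=1)
    print(f"  alpha without item {i + 1}: {cronbach_alpha(reduced):.2f}")
```

The sketch uses the sample variance (ddof=1) throughout, matching the usual formulation of the coefficient.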

Results

Ninety students were eligible to take the OSCE. Of those eligible, 54 volunteered for the examination and all attended. All 54 students completed the survey. The survey statements and results are presented in Table 1. Cronbach’s α for the internal consistency of the survey was good at 0.86. Deleting any single question did not improve the value, indicating that each question contributed to the reliability of the survey.

Students agreed or strongly agreed that the examination was well-organized (100%) and that the staff members were helpful (94%). Students agreed or strongly agreed that the cases were representative of clinical optometric practice (98%). Students agreed or strongly agreed that the SPs were realistic (100%), provided the information they needed (92%) and resembled what they see in practice (98%). Students agreed or strongly agreed that information provided before the examination was sufficient (78%), equipment and resources provided were adequate (87%) and instructions provided in the non-interactive stations were clear (74%).

The themes delineated from the open-ended comments are reported in Table 2. Many of the comments echoed the survey results. Additionally, while there were some positive comments about the simulations (three responses), there were more comments about the difficulty of using unfamiliar simulators for skills normally performed on a person (16 responses).

The students were required to state what they were doing while performing a procedure and to state their diagnosis and plan while recording it (18 responses). They found these tasks difficult because they were unfamiliar. Students felt that the case information and station tasks posted outside the station were insufficient and would have preferred more information (nine responses). Time was also an issue: some students felt they had insufficient time to complete the task(s) (16 responses). Students also felt the instructions were not clear (five responses) and were unsure how to prepare for the examination (three responses).

The remaining comments were non-themed and included positive remarks about enjoying the counseling stations and about the good balance of material covered in the assessment. The comments about what hindered performance were mostly specific to individual stations (e.g., not knowing how to use the 20D lens in the indirect ophthalmoscope station, not knowing what to expect and knowing the examiners).

Discussion

We report on student satisfaction with a pilot OSCE in optometric education as the first level in Kirkpatrick’s program evaluation model.13 The survey was internally consistent, with a good Cronbach’s α of 0.86 and all questions contributing to the reliability. The results of the survey have helped to determine which practices to continue in future iterations and where improvements can be made. In addition, because the OEBC examinations include an OSCE, we felt it was necessary to develop our own OSCE to give students experience with the examination format before they take the high-stakes assessment. Experience with the assessment method was intended to reduce anxiety and increase the feeling of preparedness.14

The students responded very positively to the SPs in the examination. This was the first time in the optometry program that they had been exposed to SPs. The positive response was likely because the SPs were hired from a professional SP program at McMaster University and were very experienced. We developed scripts for the SPs and trained them to respond to questions appropriately, including what affect to portray in each station.16 The practice of using trained SPs will be continued in future administrations of the examination.

Students were positive about the organization of the assessment. Developing and administering an OSCE is a challenging task requiring a considerable amount of administrative time and coordination of students, assessors, administrators, support staff and information technology.8

The students felt that the stations represented authentic cases from optometric practice. The case writers have extensive experience in clinical practice in the school environment and private practice. Each case was reviewed by our team and refined. One of our team has experience with OSCE case writing for an external educational organization. This is in line with best practices in OSCE development.16 Drawing on case writing experience will continue in the next iteration of our efforts to enhance the assessment.

We decided to use simulators for some of the skills testing that would typically be performed on a person. Simulators increase the reliability of an assessment because they are standardized and reduce the variability found in clinical encounters.17 Simulators can be programmed to portray a variety of complex presentations and to perform the same way across many assessments.17 The simulators used were either free or inexpensive, with the exception of the VRmagic indirect ophthalmoscope simulator, which was already available in the program. No device malfunctions occurred in this examination. However, students questioned the fidelity of some simulations that they felt did not represent their real-world experience. In addition, not all students had an opportunity to practice on the indirect ophthalmoscope simulator, and none had an opportunity to practice with the cover test, oculomotor or retinoscopy simulators. This was a perceived barrier to performance even though the simulators were straightforward to use and the students were given standardized instructions at the beginning of the station. In the future, assessor training will be improved so that assessors can better support students in using the simulators. Allowing students to train on the simulators before the examination could be considered in future assessments,18 but this approach will need to be weighed against the security of the examination content.

Students did not feel they were adequately prepared for the OSCE, in that they felt they were not provided with adequate information. This is not surprising given that the students had not experienced this assessment format before. In addition, as determined by the Office of Research Ethics, we were not allowed to be present during the consent process, when questions could have been asked. In an OSCE, students need to integrate information and actively solve problems, in contrast to a checklist-based skills test. This poses a challenge for traditional studying, as students do not know how to prepare for the assessment. Student-run mock OSCEs have been used to prepare students for high-stakes assessment with some success.14,19 More information will be provided to the students in advance of future assessments.

Students felt that the time was too short in stations that required them to perform skills, whether using a simulator or standard equipment. It was surprising how long it took the students to perform skills that were expected to be at the level of “unconscious competency”20 at the end of their program. Perhaps some basic skills (e.g., manual lensometry) have been replaced in practice with automated devices, making assessment of those skills questionable as an entry-level competency. In future administrations, the full case details will be posted outside the examination room during the two minutes students are reading the stem, which should help alleviate the time pressure. In addition, the one-minute warning will be changed to a two-minute warning to help students better manage their time.

There were limitations to this study. The student sample was biased toward those who volunteered for the examination. One-third of the class was finishing an on-site clerkship, but the other two-thirds were off campus and would have had to return to campus to take the examination, making them less likely to attend. In addition, the sample size was small and limited to one institution. The results are therefore not generalizable to all professional optometry programs.

Conclusions

The overall satisfaction of optometry students on the first administration of an OSCE was high. The SPs and the organization were both rated favorably. Students noted some possible improvements to the stations that used simulators for skill performance and felt that the time in those stations was insufficient. The results indicate promise for the OSCE being a feasible tool to add to a system of assessment for determining the competencies of graduating optometry students.

Future work will report on the utility of the assessment, which includes the validity, reliability, equivalence, feasibility, educational effect, catalytic effect and acceptability of the OSCE in our context.14 The examination results of this assessment will be analyzed and reported.

Acknowledgement

This study was funded by the Center for Teaching Excellence, University of Waterloo, Learning Innovation and Teaching Enhancement Grant.

References

  1. van der Vleuten CP, Schuwirth LWT, Scheele F, Driessen EW, Hodges B. The assessment of professional competence: building blocks for theory development. Best Pract Res Clin Obstet Gynaecol. 2010 Dec;24(6):703-19.
  2. Hrynchak P, Takahashi SG, Nayer M. Key-feature questions for assessment of clinical reasoning: a literature review. Med Educ. 2014;48(9):870-83.
  3. Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990 Sep;65(9 suppl):S63-7.
  4. Norcini J, Brownell Anderson M, Bollela V, et al. 2018 consensus framework for good assessment. Med Teach. 2018 Nov;40(11):1102-09.
  5. Khan KZ, Gaunt K, Ramachandran S, Pushkar P. The objective structured clinical examination (OSCE): AMEE Guide No. 81. Part II: organisation & administration. Med Teach. 2013 Sep;35(9):e1447-63.
  6. Casey PM, Goepfert AR, Espey EL, et al., Association of Professors of Gynecology and Obstetrics Undergraduate Medical Education Committee. To the point: reviews in medical education–the objective structured clinical examination. Am J Obstet Gynecol. 2009;200(1):25-34.
  7. Newble D. Techniques for measuring clinical competence: objective structured clinical examinations. Med Educ. 2004;38(2):199-203.
  8. Khan KZ, Ramachandran S, Gaunt K, Pushkar P. The objective structured clinical examination (OSCE): AMEE Guide No. 81. Part I: an historical and theoretical perspective. Med Teach. 2013;35(9):e1437-46.
  9. Harden RM, Stevenson M, Downie WW, Wilson GM. Assessment of clinical competence using objective structured examination. Br Med J. 1975 Feb 22;1(5955):447-51.
  10. Harden RM. Misconceptions and the OSCE. Med Teach. 2015;37(7):608‐610.
  11. Hastie MJ, Spellman JL, Pagano PP, Hastie J, Egan BJ. Designing and implementing the objective structured clinical examination in anesthesiology. Anesthesiology. 2014;120(1):196-203.
  12. Optometry Examining Board of Canada [Internet]. Accessed 2019 May 21. Available from: http://www.oebc.ca
  13. Mohanna K, Cottrell E, Wall D, Chambers R. Teaching Made Easy: A manual for health professionals. 3rd Ed. New York, NY: Radcliffe Publishing; 2011.
  14. Young I, Montgomery K, Kearns P, Hayward S, Mellanby E. The benefits of a peer-assisted mock OSCE. Clin Teach. 2014 Jun;11(3):214-18.
  15. Cane D, Penny M, Marini A, Hynes T. Updating the competency profile and examination blueprint for entry-level optometry in Canada. Canadian Journal of Optometry. 2018;80(2):25-34.
  16. Daniels VJ, Pugh D. Twelve tips for developing an OSCE that measures what you want. Med Teach. 2018;40(12):1208-1213.
  17. Scalese R. Simulation-based assessment. In: A Practical Guide to the Evaluation of Clinical Competence. 2nd ed. Elsevier; 2018.
  18. Courteille O, Bergin R, Stockeld D, Ponzer S, Fors U. The use of a virtual patient case in an OSCE-based exam–a pilot study. Med Teach. 2008;30(3):e66-76.
  19. Bevan J, Russell B, Marshall B. A new approach to OSCE preparation – PrOSCEs. BMC Med Educ. 2019 May;19(1):126.
  20. Manthey D, Fitch M. Stages of competency for medical procedures. Clin Teach. 2012 Oct;9(5):317-9.

Dr. Hrynchak [patricia.hrynchak@uwaterloo.ca] is a Clinical Professor at the University of Waterloo School of Optometry and Vision Science. She has a Master of Science in Community Health degree in Health Practitioner Teacher Education and is a Diplomate in Optometric Education with the American Academy of Optometry.

Dr. Bright is Director of the International Optometric Bridging Program at the University of Waterloo School of Optometry and Vision Science. Prior to obtaining her Doctor of Optometry degree, she earned a Master of Science degree with a focus on patient-centered communication in an optometric setting. Dr. Bright has a special interest in optometric education and cultural competency.

Dr. MacIver is a Clinical Associate Professor at the University of Waterloo School of Optometry and Vision Science and has a special interest in the treatment and management of chronic ocular disease, specifically glaucoma and dry eye. She has been invited to speak at various education events including interprofessional conferences and public education events to disseminate knowledge about eye health and vision care. Her areas of research include glaucoma, dry eye disease, interprofessional collaboration with primary healthcare and optometric education.

Dr. Woo is Director of the University of Waterloo School of Optometry and Vision Science. He is a Diplomate in Low Vision with the American Academy of Optometry and a Diplomate with the American Board of Optometry. His research interests include vision rehabilitation, ophthalmic imaging, public health policy and systems for optimizing patient care outcomes.
