
Optometric Education

The Journal of the Association of Schools and Colleges of Optometry

Optometric Education: Volume 43 Number 2 (Winter-Spring 2018)

PEER REVIEWED

Virtual Patient Instruction and Self-Assessment Accuracy in Optometry Students

Bhavna R. Pancholi, PhD, MCOptom, and Mark C.M. Dunne, PhD, MCOptom, FHEA

Abstract

We tested the hypotheses that virtual patient instruction aimed at developing clinical decision-making skills in second-year optometry students in the United Kingdom (UK) is (1) associated with improved self-assessment accuracy, and (2) that any such improvement occurs regardless of academic ability, gender or learning style. Self-assessment accuracy, the difference between perceived mastery (questionnaire) and actual mastery (multiple-choice examination), was determined for five learning objectives: question selection, critical symptom recognition, test selection, critical sign recognition and referral urgency. Virtual patient instruction was not associated with improved self-assessment accuracy, which was generally poor across all learning objectives. There was evidence of over-estimated self-assessment in students with poorer academic performance, especially males. Opportunities for optometry students to develop self-assessment skills should start early in the academic program and be reinforced throughout the entire curriculum.

Key Words: self-assessment accuracy, clinical decision-making, virtual patient, academic ability, gender, learning style

Introduction

Decision-making in a clinical context is defined as “making choices between alternatives in order to decide what procedures to do, to make a diagnosis, or to decide what treatments to prescribe.”1 We developed virtual patient software to teach these skills to second-year optometry students at Aston University. The software was inspired by Pane and Simcock’s textbook “Practical Ophthalmology: a Survival Guide for Doctors and Optometrists,” which promotes a symptom-based approach.2 The intended learning objectives are shown in Table 1.

Clinical instructors are required to assess students’ clinical decision-making skills. This often takes place in training clinics where clinical instructors supervise groups of students. Group supervision makes it difficult, even when eye examinations are recorded on video, to provide timely feedback on every clinical procedure carried out by each student. Virtual patient software, which automatically records every student decision made via the keyboard and mouse, allows students to obtain immediate and consistent formative and summative feedback. Previous research has shown that assessments made by clinical supervisors can be inconsistent because assessment criteria can vary between supervisors, and the grades given by individual supervisors can vary on different occasions.3 Virtual patient software overcomes this by applying consistent grading criteria. Furthermore, virtual patient instruction has the advantage of allowing unlimited risk-free and self-paced opportunities to apply clinical decision-making skills. In addition, virtual patient software can be programmed to simulate any desired eye condition, ensuring that students are exposed to an ample variety of pathologies during their training.

Accurate self-assessment can be an indicator of students’ ability to understand their own strengths and weaknesses and to recognize areas in which further practice is required to achieve mastery. In this regard, we believe that teaching methods should be designed to promote accurate self-assessment regardless of learning style, academic ability or gender. In this study, we evaluated whether virtual patient instruction was associated with student variations in self-assessment accuracy regarding specific elements of clinical decision-making skills. In the context of the study, self-assessment accuracy refers to how well a student’s perceived confidence in his or her mastery of the learning objectives listed in Table 1 reflects actual mastery. Self-assessment accuracy has been studied in students of various disciplines,4-9 including optometry,5 and some evidence has been presented that it is influenced by learning styles,6 academic ability6,7 and gender.8,9 However, the effect of virtual patient instruction on the accuracy of self-assessment of clinical skills remains largely unknown.

In this study, for all learning objectives shown in Table 1, we tested the following hypotheses:

  1. Better self-assessment accuracy occurs with virtual patient instruction because it facilitates unlimited practice and exposure to feedback that encourages students to re-evaluate their perceived skills; and
  2. Self-paced virtual patient instruction gives all students an equal opportunity to improve self-assessment independently of student learning style, gender and academic ability.

Methods

Ethics

This study adhered to the tenets of the Declaration of Helsinki and was approved by Aston University’s Research Ethics Committee. Voluntary informed consent was obtained from all participants before any data were analyzed.

Study design

Two cohorts of second-year optometry students participated in the study. The first cohort entered the course the year before the second cohort. Both cohorts received the same course content in which 22 types of presentation were covered: 10 presenting symptoms, such as vision loss and diplopia, and 12 presenting signs, such as eyelid spasm and anisocoria. These presentations covered more than 100 eye conditions.

For the first cohort (no virtual patient instruction; control group), classes were organized into nine two-week blocks in which students (a) received a didactic lecture, (b) applied what they had been taught over a period of one week by completing an online quiz, and (c) attended a class tutorial to discuss the same cases (constituting formative assessment). The online quiz evaluated students’ ability to determine the most likely diagnosis and referral urgency for three cases (for which immediate summative assessment was given). As time did not allow for 22 teaching blocks corresponding to the 22 types of presentation, some of the nine blocks covered just one symptom or sign whereas others grouped several symptoms or signs into one block. The structure of each didactic lecture led students systematically through the five learning objectives.

For the second cohort (virtual patient instruction; intervention group), classes were organized into 20 one-week blocks in which students (a) received a tutorial covering the virtual patient “at a glance” guides, and (b) had unlimited practice on the virtual patient over a period of one week, in preparation for (c) online virtual patient assessments. This allowed enough time for each teaching block to cover just one symptom or sign, except for two blocks that covered two presenting signs each.

The virtual patient was designed to provide a more interactive environment that matched, as closely as possible, the “natural flow” of an eye examination. Students were required to choose questions to ask based on the patient’s chief complaint, and to choose clinical tests to look for signs that would lead them to the most likely diagnosis and appropriate referral urgency. The virtual patient responded to questions and revealed symptoms and signs in the form of text and images. All findings were available for review in a virtual eye examination record. At the end of the examination, the virtual tutor provided formative feedback on every decision entered by the student via the keyboard or mouse. Summative feedback was also provided based on the chosen questions, tests, diagnosis and urgency. Points were deducted for incorrect procedures, such as failure to carry out pre- and post-dilation checks or attempting to look for a sign before selecting the required test.

During the unlimited practice sessions, students in the second cohort could switch the virtual patient to teaching mode. Here, the virtual tutor showed the “at a glance” approach guides mentioned above before directing students through each step and demonstrating how this altered the list of differential diagnoses. The virtual tutor also presented “pop-up” messages explaining the significance of any critical symptoms and signs. The intention was for students to acquire background knowledge and clinical reasoning more as an apprentice does when observing a master than as a scholar does when reading a book. Students were able to choose a specific case or have one randomly selected from the database.

Participants

Second-year UK optometry students, most of whom had entered the program directly from high school, participated in this study. The first year of the degree program covers basic sciences including ocular biology, geometric optics and basic clinical techniques. Pre-clinical skills are developed during the second year, mainly on fellow students, and include the eye examination, contact lenses and advanced clinical techniques. Clinical practice dominates the third year and involves direct patient care at various clinics such as primary care, contact lenses, spectacle dispensing, binocular vision, low vision and ophthalmology. After graduation, students enter a pre-registration training program that lasts approximately a year. This postgraduate training is spent under the supervision of a qualified optometrist either in private practice or a hospital setting. During this last part of their training, graduates are required to pass further assessments to become registered as qualified optometrists.

The first study cohort (no virtual patient instruction; control group) consisted of 102 students (62 females and 40 males). The second cohort (virtual patient instruction; intervention group) consisted of 93 students (64 females and 29 males). All students in the classes to which both cohorts belonged were invited to participate (118 students in the first cohort; 120 students in the second cohort) but some refused consent (16 of 118, 13.6%, in the first cohort; 27 of 120, 22.5%, in the second cohort).

Perceived mastery

Perceived mastery was a self-assessed measure of students’ confidence in the five learning objectives (Table 1). Previous literature has referred to this as “self-efficacy.”10 A questionnaire was released two weeks before the end of the academic year with a one-week deadline. The questionnaire (Table 2) contained items that corresponded directly to the five learning objectives. Students responded to each question using a five-level Likert score, which was converted to a percentage such that 0% corresponded to a Likert score of 1 (“strongly disagree”) and 100% to a score of 5 (“strongly agree”).
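For illustration, this conversion can be sketched as follows (a minimal Python example of our own; the function name is illustrative and not part of the study):

```python
def likert_to_percent(score: int) -> float:
    """Map a 1-5 Likert score onto 0-100%, so that 1 ("strongly disagree")
    maps to 0% and 5 ("strongly agree") maps to 100%."""
    if not 1 <= score <= 5:
        raise ValueError("Likert score must be between 1 and 5")
    return (score - 1) / 4 * 100

# Intermediate scores map linearly: 2 -> 25%, 3 -> 50%, 4 -> 75%.
assert likert_to_percent(1) == 0.0
assert likert_to_percent(3) == 50.0
assert likert_to_percent(5) == 100.0
```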

Actual mastery

Following previous research,1 actual mastery was determined using end-of-year multiple-choice examinations. Both student cohorts were assessed by means of identical multiple-choice examinations. Aston University’s rules require that a proportion of multiple-choice examination questions be altered each year. This was adhered to but still allowed 25 multiple-choice questions, five per learning objective, to remain unchanged between the two examinations. Example questions are shown in Table 3, one for each learning objective. Actual mastery scores represented the percentage of the five questions correctly answered for each learning objective.

Self-assessment accuracy

Self-assessment accuracy was initially determined by subtracting the actual mastery percentage from the perceived mastery percentage.12 Self-assessment accuracy was then classified into three groups: over-estimation, for percentage differences greater than zero; under-estimation, for percentage differences less than zero; and accurate, for percentage differences equal to zero.
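A minimal sketch of this calculation and classification, assuming the percentage scores defined above (function names are illustrative, not from the study):

```python
def actual_mastery(n_correct: int, n_questions: int = 5) -> float:
    """Percentage of multiple-choice questions answered correctly for one
    learning objective (five questions per objective in this study)."""
    return n_correct / n_questions * 100

def classify_self_assessment(perceived_pct: float, actual_pct: float) -> str:
    """Classify the perceived-minus-actual difference into the three
    self-assessment accuracy groups used in the study."""
    difference = perceived_pct - actual_pct
    if difference > 0:
        return "over-estimation"
    if difference < 0:
        return "under-estimation"
    return "accurate"

# Example: perceived mastery of 75% but only 3 of 5 questions (60%) correct.
print(classify_self_assessment(75.0, actual_mastery(3)))  # over-estimation
```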

Academic ability

Academic ability was based on the average grade achieved by each student across all second-year modules in sessional examinations performed at the end of the academic year.13 Academic grading in the UK is typically defined as follows: first class (70-100%); upper second class (60-69%); lower second class (50-59%); and third class (40-49%). A score below 40% is considered a failing grade. Students must achieve a lower second class grade or higher to progress onto the pre-registration program. All students participating in this study had achieved first class, upper second class or lower second class grades.

Learning style

During the first half of the course, all students completed the established Index of Learning Styles questionnaire.13 Students were initially classified along the four learning style dimensions: active-reflective, sensing-intuitive, visual-verbal and sequential-global. The four dimensions were then combined so that each student was re-classified as falling into one of 16 possible learning style profiles.13
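To illustrate how the four two-pole dimensions yield 16 profiles, consider the following sketch (the pole labels follow Felder and Silverman;21 the code itself is ours):

```python
from itertools import product

# The four Index of Learning Styles dimensions, each with two poles.
dimensions = [("active", "reflective"),
              ("sensing", "intuitive"),
              ("visual", "verbal"),
              ("sequential", "global")]

# Taking one pole from each dimension yields 2^4 = 16 possible profiles.
profiles = list(product(*dimensions))
print(len(profiles))  # 16
print(profiles[0])    # ('active', 'sensing', 'visual', 'sequential')
```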

Statistical analyses

Decision trees, a form of multivariate analysis, were generated using SPSS 21.0 (IBM SPSS Statistics) and findings were tested for statistical significance at the 95% level (P<0.05). Multivariate analyses eliminate confounding by accounting for all variables at once. Decision trees adopt a hierarchical output, where independent variables (i.e., virtual patient tuition, academic ability, gender and learning style) are shown in order of the strength of their association with the dependent variable (i.e., self-assessment accuracy). The most and least influential variables appear at the top and bottom of the trees, respectively. Branches only form for statistically significant associations.

The Chi-squared automatic interaction detection (CHAID) tree-growing method was adopted. Other researchers in the field of optometry have reported using the same method.14 Our study variables were categorical; therefore, Chi-square was used as the splitting criterion for generating decision-tree branches, and Bonferroni adjustments were applied to P-values to account for multiple tests. Decision trees consist of parent nodes that branch into child nodes. In our study, the minimum sample size for parent and child nodes was set at 30 and 15, respectively. By default, SPSS sets the maximum tree branching to three levels. We increased this to five (one more than the number of independent variables) to ensure maximum tree growth was achieved.
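To illustrate the splitting criterion, the sketch below applies a Bonferroni-adjusted Chi-square test to a single candidate split (we used SPSS for the actual analyses; SciPy, the counts shown and the simplified adjustment are assumptions for illustration only):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: rows are higher vs lower academic ability; columns
# are over-estimated, accurate and under-estimated self-assessment.
table = np.array([[22, 30, 40],
                  [51, 28, 24]])

chi2, p, dof, _ = chi2_contingency(table)

# CHAID applies a Bonferroni correction; here we simply multiply by the
# number of candidate predictor variables (a simplification of SPSS's scheme).
p_adjusted = min(1.0, p * 4)
print(f"chi2 = {chi2:.2f}, df = {dof}, adjusted P = {p_adjusted:.4f}")
```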

We made power calculations using G*Power (version 3.1.0).15 In our case, because any changes to teaching would require significant resources, we argue that it would only be justifiable to base changes on statistically significant findings for large effects. The highest degrees of freedom (df) required in our study was 30: (3 levels of self-assessment accuracy [over-estimated, accurate and under-estimated] minus 1) multiplied by (16 learning style profiles minus 1), i.e., 2 × 15 = 30 df. We calculated that a total sample of 99 students was required to enable Chi-square tests with 30 df to detect large effects at the 95% level of statistical significance with 80% power, a conventionally acceptable level.16,17 Our total sample of 195 students far exceeded this.
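The same calculation can be reproduced outside G*Power; for example, the following sketch uses the statsmodels library (an assumption on our part, as the study itself used G*Power):

```python
from statsmodels.stats.power import GofChisquarePower

# Cohen's conventional "large" effect size for Chi-square tests is w = 0.5.
required_n = GofChisquarePower().solve_power(
    effect_size=0.5,  # large effect (Cohen's w)
    alpha=0.05,       # 95% level of statistical significance
    power=0.80,       # conventionally acceptable power
    n_bins=31,        # n_bins - 1 = 30 degrees of freedom
)
print(round(required_n))  # approximately 99 students
```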

Results

The decision trees in Figures 1 through 4 show which of the independent variables (i.e., student cohort, academic ability, gender and learning style) were associated with self-assessment accuracy for each learning objective. The “question selection” decision tree is shown in Figure 1, “critical symptom recognition” in Figure 2, “critical sign recognition” in Figure 3, and “referral urgency selection” in Figure 4. No decision tree is shown for “test selection” because none of the independent variables was associated with self-assessment accuracy for this learning objective.

Figure 1. Decision tree showing statistically significant (P < 0.05 after correction for multiple comparisons) associations between academic ability, gender and self-assessment accuracy for the “question selection” learning objective (Table 1) for second-year optometry students (n = 195). This is a form of multivariate analysis that removes confounding between the independent variables entered (i.e., student cohort, academic ability, gender and learning style) before showing the remaining associations in hierarchical order (strongest to weakest). Each node of the decision tree shows the number (n) and percentage (%) of students for whom self-assessment was over-estimated, under-estimated or accurate. Node 0 (the tree trunk) shows that accurate self-assessment was only found in 29.7% of the students for this learning objective. Academic ability showed the strongest association (first branching level leading to nodes 1 and 2, P = 0.002) in which self-assessment was over-estimated more often in students with lower grades (node 2: 49.5%) compared with those with higher grades (node 1: 23.9%). Decision tree analysis automatically governed assignment of students with first class, upper second class and lower second class grades as being of higher or lower academic ability. For students with lower grades, gender showed a slightly weaker association (second branching level leading to nodes 3 and 4, P = 0.024) in which self-assessment was over-estimated more often in males (node 4: 62.5%) compared with females (node 3: 41.3%).

Root nodes (node 0) in each decision tree allow for comparison of the self-assessment accuracy measured for all 195 students across each learning objective. In the case of “test selection” (decision tree not shown), accurate self-assessment occurred in less than one-third of the students (28.2%, 55 students); self-assessment was over-estimated (43.1%, 84 students) or under-estimated (28.7%, 56 students) in the remainder. This reflected a general trend across all learning objectives in which self-assessment was accurate for 26 to 31% of students and inaccurate for 69 to 74% of students (Figures 1 through 4).

The presence or absence of virtual patient instruction was only associated with variations in self-assessment accuracy for “critical symptom recognition” (Figure 2). Here, 61.3% of the cohort with virtual patient instruction showed over-estimated self-assessment compared with 35.3% of the cohort without this type of instruction. Academic ability was associated with self-assessment accuracy for all learning objectives except “test selection” (Figures 1 to 4). Here, over-estimation was more common in students with lower grades (37 to 76% of students depending on the learning objective) compared with those with higher grades (14 to 50% of students depending on the learning objective). Gender was only associated with self-assessment accuracy for “question selection” (Figure 1) in lower academic achievers, where over-estimation was more common in males (63%) than females (41%). Finally, learning style was not associated with self-assessment accuracy for any of the learning objectives considered.

Discussion

In this study, we investigated the potential effects of virtual patient instruction on student self-assessment accuracy. For that purpose, we examined specific elements of clinical decision-making (“question selection,” “critical symptom recognition,” “test selection,” “critical sign recognition” and “referral urgency”) in two student cohorts whose classes either lacked (control group) or included (intervention group) virtual patient instruction.

Our first working hypothesis was that, for the five learning objectives, better self-assessment accuracy would occur with virtual patient instruction. However, our results did not support this hypothesis. Virtual patient instruction was only associated with self-assessment accuracy for the learning objective “critical symptom recognition” and had the detrimental effect of increasing the proportion of students over-estimating their skills (Figure 2; 61.3% for teaching with virtual patient instruction [in node 1] compared with 35.3% for teaching without virtual patient instruction [in node 2]). Therefore, our notion that unlimited practice on the virtual patient would lead to improved self-assessment skills was not supported by the findings of the study.

Our second hypothesis was that self-paced virtual patient instruction would give all students an equal opportunity to improve self-assessment independently of student learning style, gender and academic ability. Our results did not support this hypothesis either. The associations detected between self-assessment accuracy and academic ability (Figures 1 to 4) and gender (Figure 1) were independent of the presence or absence of virtual patient instruction. In fact, our findings indicated that students exposed to virtual patient instruction were more likely to be left with an unrealistically high level of confidence in their ability to recognize symptoms of serious disease (Figure 2) and might, therefore, be unaware of their need for further study to improve this skill.

The lack of any positive association between self-assessment accuracy and virtual patient instruction was surprising in a generation of students who favor “concrete experience” and “active experimentation.”18 Despite this finding, virtual patient instruction remains part of our second-year clinical decision-making course. Student satisfaction scores for this course have ranged from 89% to 96% since it was introduced, with virtual patient instruction often placed at the top of a “what has worked best for you” list of effective learning resources. Interestingly, a previous study at the Rosenberg School of Optometry concluded that use of interactive learning material for first-year gross anatomy classes did not improve test scores but did increase motivation.19 Perhaps virtual patient instruction benefits student learning by motivating students rather than by improving self-assessment accuracy.

Figure 2. Decision tree showing statistically significant (P < 0.05 after correction for multiple comparisons) associations between student cohort (i.e., the presence or absence of virtual patient instruction), academic ability and self-assessment accuracy for the “critical symptom recognition” learning objective (Table 1) for second-year optometry students (n = 195). This is a form of multivariate analysis that removes confounding between the independent variables entered (i.e., student cohort, academic ability, gender and learning style) before showing the remaining associations in hierarchical order (strongest to weakest). Each node of the decision tree shows the number (n) and percentage (%) of students for whom self-assessment was over-estimated, under-estimated or accurate. Node 0 (the tree trunk) shows that accurate self-assessment was only found in 29.2% of the students for this learning objective. Student cohort showed the strongest association (first branching level leading to nodes 1 and 2, P < 0.001) in which self-assessment was over-estimated more often in students who received virtual patient tuition (node 1: 61.3%) compared with those who did not (node 2: 35.3%). Academic ability showed a slightly weaker association (second branching level leading to nodes 3 and 4, P = 0.041 and nodes 5 and 6, P = 0.023) in which, for both cohorts, self-assessment was over-estimated more often in students with lower grades (nodes 4 and 6 for the first and second cohorts: 75.6% and 65.0%, respectively) compared with those with higher grades (nodes 3 and 5 for the first and second cohorts: 50.0% and 28.0%, respectively). Decision tree analysis automatically governed grouping. Therefore, the assignment of students with first class, upper second class and lower second class grades as being of higher or lower academic ability differed for students with or without virtual patient instruction. Confounding variations in the academic ability, gender mix and learning styles in both cohorts (the first cohort studied the year before the second) were effectively accounted for in this analysis so that any associations remaining were most likely due to the presence or absence of virtual patient tuition.

An interesting finding of our study was that self-assessment accuracy, for clinical decision-making skills at least, seemed generally poor across all learning objectives: 29.7% for “question selection,” 29.2% for “critical symptom recognition,” 28.2% for “test selection,” 26.2% for “critical sign recognition” and 30.8% for “referral urgency.” These findings corroborate earlier studies involving practitioners (practicing physicians, nurse practitioners and physician assistants) attending a continuing medical education course on knee joint injection9 and junior medical officers carrying out routine skills during their first postgraduate year.20 However, accurate self-assessment has been reported in other studies involving third-year optometry students5 and computer engineering students.6 A systematic review7 of studies that included practicing physicians, residents or similar health professionals from the United Kingdom, Canada, United States, Australia or New Zealand concluded that physicians had a limited ability to accurately self-assess and that more advanced students and practitioners showed better self-assessment skills.

Our data suggested that academically stronger students were less likely to over-estimate their performance on four of the five learning objectives: “question selection” (Figure 1; 23.9% over-estimation in node 1 for stronger students compared with 49.5% in node 2 for others), “critical symptom recognition” (Figure 2; 50.0% and 28.0% over-estimation in nodes 3 and 5 for stronger students compared with 75.6% and 65.0% in nodes 4 and 6 for others), “critical sign recognition” (Figure 3; 41.3% in node 1 for stronger students compared with 61.2% in node 2 for others), and “referral urgency” (Figure 4; 14.1% in node 1 for stronger students compared with 36.9% in node 2 for others). Thus, our data were in agreement with some of the previously published work and suggested more developed metacognitive skills among the stronger students.

Nevertheless, a study of third-year students at the New England College of Optometry (NECO) showed that optometry students were competent at self-assessment of their clinical skills.5 These students were asked to self-assess their knowledge base and clinical skills. The clinical instructor supervising each student also evaluated the student using the same criteria. Students’ and instructors’ grades were significantly correlated (P < 0.05).5 We did not find statistically significant correlations between perceived mastery (self-assessment) and actual mastery (exam performance) for any of the five learning objectives we assessed (data not shown). So why did our study findings differ from those obtained at NECO? Several factors could have led to the differences observed. For instance, our students were in the second year of an undergraduate optometric program while the students from NECO were in the third year of a doctoral program. In addition, different methods may have been used to measure self-assessment accuracy in each study.

We found that males who were academically weaker were prone to over-estimate their performance for the learning objective “question selection” (Figure 1; 62.5% in node 4 for males with second class grades compared with 41.3% in node 3 for females with second class grades). This also corroborates previous research on medical students taking part in a third-year surgery rotation,8 practitioners undergoing continuing medical education,9 and a meta-analysis of self-assessment in medical students.12 The first of these studies8 was designed to determine the ability of medical students to perform self-assessment. Data collected on medical students in their third-year surgery clerkship indicated that women under-estimated their performance and yet outperformed men. The second of these studies9 investigated how confidence, background, education and skills influenced a practitioner’s belief that he or she was qualified to perform a knee joint injection during a continuing medical education session. Participants completed questionnaires gauging confidence and self-assessment before and after instruction. Self-assessments were compared with actual performance on a simulator. Instruction improved confidence, competence and self-assessment, but men disproportionately over-estimated their skills and this worsened as confidence increased. The meta-analysis12 was conducted to gain a greater understanding of self-assessment accuracy in medical students. Its findings underscored the importance of analyzing factors that influence self-assessment accuracy, including gender. The studies analyzed indicated that female students under-estimated their performance more than male students and that gender analyses were often unreported.

Figure 3. Decision tree showing the statistically significant (P < 0.05 after correction for multiple comparisons) association between academic ability and self-assessment accuracy for the “critical sign recognition” learning objective (Table 1) for second-year optometry students (n = 195). This is a form of multivariate analysis that removes confounding between the independent variables entered (i.e., student cohort, academic ability, gender and learning style) before showing the remaining associations in hierarchical order (strongest to weakest). Each node of the decision tree shows the number (n) and percentage (%) of students for whom self-assessment was over-estimated, under-estimated or accurate. Node 0 (the tree trunk) shows that accurate self-assessment was only found in 26.2% of the students for this learning objective. Academic ability showed the only association (one branching level leading to nodes 1 and 2, P = 0.013) in which self-assessment was over-estimated more often in students with lower grades (node 2: 61.2%) compared with those with higher grades (node 1: 41.3%). Decision tree analysis automatically governed assignment of students with first class, upper second class and lower second class grades as being of higher or lower academic ability.

In contrast to the previously mentioned study on computer engineering students,6 our results showed that learning style profile was not associated with self-assessment accuracy. It has been suggested that teaching methods that are adapted to include both poles of the four learning style dimensions would be close to providing the optimal learning environment for most students.21 Therefore, the lack of any associations could be a positive finding as it suggests that our course on clinical decision-making, with or without virtual patient instruction, catered well to all learning styles. On a cautionary note, however, the study on computer engineering students made use of a more objective self-assessment scale and could, therefore, have been better set up to detect subtle variations associated with learning style.6

Study Limitations

The size of each student cohort gave us enough statistical power to detect only associations with large effect sizes between virtual patient instruction, academic ability, gender, learning style and self-assessment accuracy. We believe, however, that only associations with large effect sizes would justify changes to teaching practice, which require substantial amounts of time and resources.

The first and second cohorts of students were recruited from classes that entered in different years. This is a potential flaw because the composition of classes can differ from year to year, confounding direct comparisons. The multivariate analyses carried out in this study do, however, provide protection against unsafe comparisons as they remove confounding. That is, yearly variations in the academic ability, gender mix and learning styles of both cohorts were effectively accounted for in our analyses. Therefore, any associations between virtual patient instruction and self-assessment accuracy represent only those that remain after removal of other confounding associations.

Like our virtual patient, the Ocular Disease Diagnostic Tutor (ODDT) software developed at NECO for fourth-year optometry students22 enabled self-paced study. The ODDT comprised five activities: (1) interactive topic files providing background knowledge, (2) recognition exercises introducing clinical terms, (3) diagnostic cases testing recall of background knowledge and clinical terms, (4) clinical reasoning cases requiring formulation of differential diagnoses and treatment plans, and (5) interactive quizzes. Similar to our virtual patient, the ODDT software was designed to encourage problem-solving rather than factual recall.22 As mentioned in the Methods section, our virtual patient provided background knowledge via “at a glance” guides and allowed students to interact with a virtual clinical environment in order to demonstrate: (1) application of background knowledge, (2) recognition of critical symptoms and signs, (3) clinical reasoning, and (4) the use of appropriate terminology when recording clinical findings and selecting the most likely diagnoses and appropriate treatment plans. Despite these similarities in design concept, the ODDT was designed for fourth-year optometry students. This may explain, at least in part, why the results from the studies at NECO and our school differ.

Figure 4. Decision tree showing the statistically significant (P < 0.05 after correction for multiple comparisons) association between academic ability and self-assessment accuracy for the “referral urgency selection” learning objective (Table 1) for second-year optometry students (n = 195). This is a form of multivariate analysis that removes confounding between the independent variables entered (i.e., student cohort, academic ability, gender and learning style) before showing the remaining associations in hierarchical order (strongest to weakest). Each node of the decision tree shows the number (n) and percentage (%) of students for whom self-assessment was over-estimated, under-estimated or accurate. Node 0 (the tree trunk) shows that accurate self-assessment was only found in 30.8% of the students for this learning objective. Academic ability showed the only association (one branching level leading to nodes 1 and 2, P = 0.001) in which self-assessment was over-estimated more often in students with lower grades (node 2: 36.9%) compared with those with higher grades (node 1: 14.1%). Decision tree analysis automatically governed assignment of students with first class, upper second class and lower second class grades as being of higher or lower academic ability.

Our questionnaire on perceived mastery was not validated on an independent student population; therefore, its reliability could not be determined. Interestingly, the systematic review on the accuracy of physician self-assessment7 found that most studies had used self-assessment questionnaires that had not been validated. This was also true for some of the studies on medical students or practitioners cited above.8,9 The study involving computer engineering students used an objective self-assessment scale based on Bloom’s Revised Taxonomy.6 Use of a similar scale in our study might have allowed detection of subtle variations associated with learning style, and such a scale may also be a valuable tool for future studies on the improvement of self-assessment in students as they progress through undergraduate and postgraduate optometric training.

We had developed a notion that unlimited self-paced virtual patient instruction would give all students an equal opportunity to improve self-assessment independently of student learning style, gender and academic ability. On reflection, we missed an opportunity to test this notion more thoroughly. Had we monitored how many times students accessed the online virtual patient for self-paced practice, we might have obtained the data to better explain our findings. For example, we might have found that students were not taking the opportunity to practice, or that gender and academic ability influenced the level of engagement in self-paced practice. This is a potentially valuable avenue for further study.

Conclusion

The findings of this study suggested that our second-year optometry students had poor self-assessment accuracy in relation to the clinical decision-making learning objectives shown in Table 1, and that the use of virtual patient instruction was not associated with an improvement in self-assessment accuracy. Student feedback, nevertheless, indicated that virtual patient instruction helped them to learn. We also observed that lower academic ability, especially in males, was associated with over-estimated self-assessment. Previous research carried out on improving self-assessment skills4,23 has led to the suggestion that curricula should include opportunities for students to develop self-assessment skills early in their degree programs, and this should be reinforced throughout the entire curriculum.4 Additional research is needed to evaluate the efficacy of different instructional methods in promoting self-assessment accuracy in students. Data generated through these studies will aid in the design of successful implementation protocols that could be adapted and incorporated into the optometric curriculum.

Acknowledgements

We thank Abdullah Bhamji, Pardeep Chohan, Hardish Dhillon, Sharon Lamson, Hui Lok, Ebrahim Lorgat, Nashreen Mulla, Andrew Munson and Taha M Jalal for testing and evaluating prototypes of our virtual patient. We are also very grateful to the reviewers of Optometric Education for helping us to improve this paper.

References

  1. Faucher C. Differentiating the elements of clinical thinking. Optometric Education. 2011;36(3):140-5.
  2. Pane A, Simcock P. Practical ophthalmology: a survival guide for doctors and optometrists. Elsevier Health Sciences; 2005.
  3. Sharaf AA, AbdelAziz AM, El Meligy OA. Intra-and inter-examiner variability in evaluating preclinical pediatric dentistry operative procedures. Journal of Dental Education. 2007;71(4):540-4.
  4. Mort JR, Hansen DJ. First-year pharmacy students’ self-assessment of communication skills and the impact of video review. American Journal of Pharmaceutical Education. 2010;74(5):78.
  5. Denial A. Accuracy of self-assessment and its role in optometric education. Poster #53. Optometry and Vision Science. 2001;78(12):266.
  6. Alaoutinen S. Effects of learning style and student background on self-assessment and course performance. In: Proceedings of the 10th Koli Calling International Conference on Computing Education Research 2010 (pp. 5-12). ACM.
  7. Davis DA, Mazmanian PE, Fordis M, Van Harrison R, Thorpe KE, Perrier L. Accuracy of physician self-assessment compared with observed measures of competence: a systematic review. JAMA. 2006;296(9):1094-102.
  8. Lind DS, Rekkas S, Bui V, Lam T, Beierle E, Copeland E 3rd. Competency-based student self-assessment on a surgery rotation. Journal of Surgical Research. 2002;105(1):31-4.
  9. Leopold SS, Morgan HD, Kadel NJ, Gardner GC, Schaad DC, Wolf FM. Impact of educational intervention on confidence and competence in the performance of a simple surgical task. J Bone Joint Surg Am. 2005;87(5):1031-7.
  10. Bandura A. Self-efficacy: toward a unifying theory of behavioral change. Psychological Review. 1977;84(2):191.
  11. Glittenberg C, Binder S. Using 3D computer simulations to enhance ophthalmic training. Ophthalmic Physiol Opt. 2006;26(1):40-9.
  12. Blanch-Hartigan D. Medical students’ self-assessment of performance: results from three meta-analyses. Patient Education and Counselling. 2011;84(1):3-9.
  13. Prajapati B, Dunne M, Bartlett H, Cubbidge R. The influence of learning styles, enrollment status and gender on academic performance of optometry undergraduates. Ophthalmic Physiol Opt. 2011;31(1):69-78.
  14. Rushton RM, Armstrong RA, Dunne M. The influence on unaided vision of age, pupil diameter and sphero‐cylindrical refractive error. Clinical and Experimental Optometry. 2016;99(4):328-35.
  15. Faul F, Erdfelder E, Lang AG, Buchner A. G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav Res Methods. 2007;39(2):175-91.
  16. Prajapati B, Dunne M, Armstrong R. Sample size estimation and statistical power analyses. Optometry Today. 2010;16(07):10-8.
  17. Cohen J. Statistical power analysis for the behavioral sciences. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum Associates; 1988.
  18. Eubank TF, Pitts J. A comparison of learning styles across the decades. Optometric Education. 2011;36(2):72-5.
  19. Sanchez-Diaz PC. Impact of interactive instructional tools in gross anatomy for optometry students: a pilot study. Optometric Education. 2013;38(3):100-5.
  20. Barnsley L, Lyon PM, Ralston SJ, et al. Clinical skills in junior medical officers: a comparison of self‐reported confidence and observed competence. Medical Education. 2004;38(4):358-67.
  21. Felder RM, Silverman LK. Learning and teaching styles in engineering education. Engineering Education. 1988;78(7):674-81.
  22. Sleight WE. Assessment of the ocular disease diagnostic tutor as a learning tool. Optometric Education. 2011;36(2):63-71.
  23. Hawkins SC, Osborne A, Schofield SJ, Pournaras DJ, Chester JF. Improving the accuracy of self-assessment of practical clinical skills using video feedback–the importance of including benchmarks. Medical Teacher. 2012;34(4):279-84.

Dr. Pancholi (nee Prajapati) carried out this study as part of her postgraduate research while she worked as a Clinical Demonstrator in the Optometry School at Aston University, UK. Previously, she had trained pre-registration students in private practice.

Dr. Dunne [m.c.m.dunne@aston.ac.uk] is Senior Lecturer in the Optometry School at Aston University, UK. He supervised Dr. Pancholi’s postgraduate research.