
Optometric Education: The Journal of the Association of Schools and Colleges of Optometry

Volume 45, Number 1 (Fall 2019)

PEER REVIEWED

Does Practice Make Perfect? Relating Student Performance to Training Hours

Yutaka Maki, OD, MS, FCOVD, and Brian K. Foutch, OD, PhD, FAAO

Abstract

We tested the hypothesis that a positive correlation exists between time practiced outside of scheduled laboratories and high-stakes proficiency performance in a preclinical optometry course. Practice hours for 130 first-year optometry students were recorded weekly and compared with the weighted average scores of two summative proficiency examinations (midterm and final). Due to scheduling constraints, the final proficiency test was administered during weeks 12 and 14, and students were randomly assigned to either week. The group scheduled for week 14 practiced significantly more hours (85.7±19.4 vs. 73.0±16.8 hours) and, on average, performed significantly better (89.7±6.1% vs. 85.9±7.3%) than the week-12 group. At the individual level within each group, however, we found a negative correlation between practice hours and performance, particularly for the week-14 group. This study showed that students did not necessarily perform better by practicing more. The results are discussed in the context of clinical learning aspects such as deliberate practice and self-awareness of current skill set.

Key Words: assessments, preclinical, laboratory, practice, self-awareness

Background

We have all heard the phrase “practice makes perfect.” The Latin proverb “usus est magister optimus” translates to “practice is the best master,” and Aristotle said, “For the things we have to learn before we can do them, we learn by doing them.”1 Emphasis on the importance of practice is seen clearly in every culture and century, and it seems safe to assume that the more one practices, the better one performs. If this assumption were true, it would make sense to assign students struggling in preclinical optometry laboratory courses additional practice outside of scheduled laboratory times. By extension, it would be reasonable to assume that additional practice should result in higher grades or evaluations on preclinical assessments. In this study, we examined whether students who practiced more hours outside of their assigned course laboratory performed better on high-stakes midterm and final proficiency assessments.

Methods

We retrospectively reviewed the course records of 130 first-year optometry students enrolled in the second-semester preclinical course during Spring 2015 and Spring 2016. The data were collected in the clinical laboratory facilities at the University of the Incarnate Word Rosenberg School of Optometry (UIWRSO), San Antonio, Texas, under the supervision of the course instructors.

The students learned new clinical skills (retinoscopy, refraction, binocular tests) each week in the three-hour course laboratory (Table 1). To ensure they were reviewing the newly learned skills, all students were required to practice a minimum of four hours in the laboratory outside of instruction time through week 10. The newly learned skills were evaluated in the form of checkouts (short clinical skill assessments) throughout the semester (Table 1). The primary purpose of the checkouts was to provide students with formative feedback to learn from their successes and mistakes. A secondary goal was for students to practice more if they performed poorly. To ensure this was taking place, two additional hours were mandated for each checkout with a grade below 85%. Thus, students who passed or failed a checkout were required to practice a minimum of four or six hours, respectively. Beyond that, no additional assignments were given. However, all students were encouraged to practice more than the minimum assigned hours, especially if they felt they had not reached expected skill levels.
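
The hour-assignment rule reduces to a simple threshold. The following minimal Python sketch (illustrative only; no such software was part of the course) encodes the policy described above:

    # Weekly minimum out-of-class practice rule applied through week 10.
    PASSING_CHECKOUT = 85.0  # percent; checkouts below this add remedial hours
    BASE_HOURS = 4           # baseline weekly minimum for all students
    REMEDIAL_HOURS = 2       # added after each checkout graded below 85%

    def required_practice_hours(checkout_grade: float) -> int:
        """Minimum out-of-class practice hours mandated for the week."""
        if checkout_grade < PASSING_CHECKOUT:
            return BASE_HOURS + REMEDIAL_HOURS  # 6 hours after a failed checkout
        return BASE_HOURS                       # 4 hours otherwise

    assert required_practice_hours(82.0) == 6
    assert required_practice_hours(90.0) == 4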

When students practiced outside of the scheduled laboratory, they were under the supervision of teaching assistants. Each student recorded the time he or she entered and exited the laboratory, and the teaching assistant confirmed the times. Each week’s logged hours were then submitted to the instructor of record in the subsequent course laboratory. The three scheduled hours they spent each week in the course laboratory were not included in the logged hours.

In addition to the checkouts, there was one midterm proficiency assessment (retinoscopy and refraction) and one final proficiency assessment (the entire refractive sequence: lensometry, entrance tests, retinoscopy, subjective refraction and phorometry). The rubrics used in checkouts and proficiency assessments were modeled after the Clinical Skills Exam evaluation forms used by the National Board of Examiners in Optometry.2 Unlike the weekly checkouts, the proficiency assessments carried no additional assigned practice hours for students who scored below 85%.

We recorded the number of hours each student practiced up to their scheduled proficiency assessments. Because the final assessments were administered during weeks 12 and 14 of the semester, students were randomly assigned to one of the two weeks, and we analyzed the two groups separately. The students’ cumulative grades, weighting the midterm (33%) and final (67%) proficiency assessments, were compared with the number of hours they practiced to determine whether the two were correlated (Pearson’s r reported). Checkout grades were excluded from the cumulative grade for this retrospective analysis because the additional two hours mandated after a failed checkout would inevitably have weakened any positive relationship between grades and practice hours. In our analyses, we defined statistical significance as a p-value less than 0.05.
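
As an illustration of this analysis, the sketch below computes the weighted cumulative grade and Pearson’s r for one group. The arrays are hypothetical placeholders, not study data, and the variable names are ours:

    import numpy as np
    from scipy import stats

    # Hypothetical per-student values for one group (placeholders only).
    midterm = np.array([84.0, 91.0, 88.0, 79.0, 86.0])  # midterm proficiency (%)
    final = np.array([90.0, 87.0, 93.0, 82.0, 88.0])    # final proficiency (%)
    hours = np.array([70.0, 95.0, 60.0, 88.0, 75.0])    # logged practice hours

    # The cumulative grade weights the midterm 33% and the final 67%.
    cumulative = 0.33 * midterm + 0.67 * final

    # Pearson's r between cumulative grade and practice hours;
    # significance was defined as p < 0.05.
    r, p = stats.pearsonr(cumulative, hours)
    print(f"r = {r:.2f}, p = {p:.3f}, significant = {p < 0.05}")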

Results

Overall, the second (week-14) group had a higher mean cumulative lab grade (89.7±6.1% vs. 85.9±7.3%) and logged more practice hours on average (85.7±19.4 vs. 73.0±16.8 hours) than the first (week-12) group. Both differences were statistically significant (p<0.01) (Tables 2 and 3).
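
The article does not name the test behind these p-values; a two-sample Welch t-test is one plausible choice for comparing group means. The sketch below uses simulated samples matched to the reported means and standard deviations (the individual data and the group sizes are assumptions):

    import numpy as np
    from scipy import stats

    # Simulated logged-hours samples matched to the reported group statistics
    # (placeholders; 65 students per group is an assumption).
    week12_hours = np.random.default_rng(0).normal(73.0, 16.8, 65)
    week14_hours = np.random.default_rng(1).normal(85.7, 19.4, 65)

    # Welch's t-test does not assume equal group variances.
    t, p = stats.ttest_ind(week14_hours, week12_hours, equal_var=False)
    print(f"t = {t:.2f}, p = {p:.4f}")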

Within each group, however, we found a negative correlation between cumulative grades and overall practice hours. The correlation was statistically significant for the week-14 group (r=-0.30, p=0.02) (Figure 1 and Table 4) but not for the week-12 group (r=-0.02, p=0.84) (Figure 2 and Table 4). Similar patterns emerged when comparing final proficiency grades with overall practice time (Table 4) and cumulative grades with voluntary practice time (Table 5).

There was no statistically significant difference in practice time between the week-12 and week-14 groups up to the midterm proficiency assessment or the last (week 10) checkout (p=0.718 and p=0.487 respectively, Table 2). Also, there was no statistically significant difference in checkout and midterm grades between the week-12 and week-14 groups (p=0.473 and p=0.052 respectively, Table 3).

Further analysis revealed a statistically significant negative correlation between post-midterm practice time and midterm proficiency grades for the week-14 group but not for the week-12 group (p<0.01 and p=0.25, respectively) (Table 6).

When correlations among the individual assessment grades were analyzed, the final proficiency grades of the week-12 group were significantly and positively correlated with their midterm proficiency and checkout grades (p<0.01 for each). This was not the case for the week-14 group (p=0.42 and p=0.05 for midterm proficiency and checkout grades, respectively) (Table 7).

Discussion

At the class level, the study results supported the conventional wisdom that “practice makes perfect”: the group given an additional two weeks increased their practice time and performed better. Although it could be argued that the earlier group gave the later group a “heads-up” on the proficiency assessments, we believe any such advantage was small. The final proficiency assessment comprised elements from all the checkouts, so there were few surprises. In addition, the same final proficiency assessment had been given in previous years; therefore, students in either group could have asked upperclassmen about it.

At the individual level, we found different results within each group. Higher-performing students in the week-14 group practiced less than lower-performing students. This was not the case for the week-12 group, where we found no statistically significant correlation between practice and performance.

Many factors could explain this apparent contradiction in the week-14 group. First, students start with different skill levels depending on prior work or academic experience. Second, some students acquire new skills more readily than others, and we have informally observed that it is often the struggling students who practice more to compensate for their current lack of skill. In fact, the statistically significant negative correlation between post-midterm practice time and midterm proficiency grades in the week-14 group indicates that the outcome of the midterm proficiency assessment affected their practice behavior (Table 6). While we had anticipated that the effort reflected in increased practice time would predict clinical performance, we instead found that current clinical progress predicted effort. That is, lower-performing students practiced more.

This type of self-adjusted practice behavior could be explained by emotional intelligence (EI), defined as the ability to monitor one’s own and others’ emotions, to discriminate among them, and to use this information to guide one’s thinking and actions.3 One model of EI proposed by Goleman and Boyatzis identifies four clusters of competencies: self-awareness, self-management, social awareness and relationship management.4 All of these clusters appear to be important predictors of clinical success, and several authors have observed positive associations between higher EI and better clinical performance in dental, medical and nursing students.5-9 In one of these studies, self-management competencies were significantly correlated with student clinical performance (as measured by mean clinical grade).5 A recent study of optometry students in the United Kingdom demonstrated a positive association between self-awareness and academic performance.10 The characteristics of self-management include adaptability, initiative, persistence in pursuing goals, taking responsibility for personal performance, and striving to meet a standard of excellence.11 This may also explain the practice pattern of our higher-performing students: perhaps they based their decision to practice less on positive feedback received during weekly checkouts, proper time management of overall curriculum demands, or awareness of their clinical skills.

While EI may offer insight into the week-14 group’s negative correlation between practice and performance, we observed no statistically significant correlations between assessment grades (midterm proficiency, final proficiency or cumulative) and practice time in the week-12 group (Tables 4-6). Because the major difference between the two groups was the amount of practice time available, this suggests that the availability of time is a key factor in behavioral change among students. It may also explain why statistically significant correlations appeared between checkout and proficiency assessment grades (midterm or final) in the week-12 group but not in the week-14 group (Table 7). Because the week-12 group lacked sufficient time to adapt their learning and close the gap in their achievement, midterm proficiency performance became the predictor of their final proficiency result.

This suggests the importance of making struggling students fully aware of their initial (or current) status and of giving them adequate support and time to close any achievement gap. Although we initially believed that weekly checkouts and the midterm proficiency assessment would provide sufficient awareness, some of the lower-performing students did not respond to this feedback as we expected. Among the bottom 20% of the week-12 and week-14 groups, 57.1% and 33.3% of students, respectively, practiced below their group’s average practice time. In the week-12 group, the group and bottom-20% average practice times were 73.0±16.8 hours and 72.9±16.1 hours, respectively; in the week-14 group, they were 85.7±19.4 hours and 93.5±20.3 hours. This indicates that identifying and guiding low-performing students who lack self-awareness and/or self-management early in the course is essential. Convincing students of their status might be challenging, however: at least two previous investigations of competence awareness have shown that lower-achieving students overestimate their competence while higher-achieving students underestimate theirs (an effect that persisted, though to a lesser extent, even after feedback was given).12,13
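
A minimal sketch of this bottom-20% check, assuming paired per-student arrays of grades and hours (hypothetical names and data, not the course records):

    import numpy as np

    def share_below_group_mean(grades: np.ndarray, hours: np.ndarray) -> float:
        """Fraction of the bottom 20% (by grade) who practiced
        less than the group's mean practice time."""
        cutoff = np.quantile(grades, 0.20)      # grade at the 20th percentile
        bottom_hours = hours[grades <= cutoff]  # hours of the bottom 20%
        return float(np.mean(bottom_hours < hours.mean()))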

When offering support to low-performing students, it is important to teach them how to practice. Studies show that expert performance is most effectively attained through deliberate practice (DP), in which focused training is designed and arranged by teachers and coaches to optimize improvement.14-16 This focused training also involves immediate feedback, time for problem-solving and evaluation, and opportunities for repeated performance to refine behavior. In a preclinical lab setting, lower-performing students could be given individualized assignments based on their current performance and/or be paired with a teaching assistant or a high-performing student to focus on specific areas of struggle.

In the current study, we did not provide any designed training, nor did we measure how actively each student engaged in DP. This limits any inferences about the effect of DP on performance. Further limitations of our study include a relatively small sample size and a retrospective period of only two years. Lastly, our study lacked any direct measures of EI or self-awareness. Further studies could and should include such measures, for example by integrating focus-group interviews with higher- and lower-achieving students and teaching assistants into the analysis.

Conclusion

Our study showed that individual performance depended on both the amount of practice and the feedback received from assessments. Although at the class level the group that had more time to practice performed better, at the individual level within each group students who practiced more did not necessarily perform better. Our findings suggest that students adjusted the number of hours they practiced according to their perception of their current skill set or a self-prescribed mastery goal. Ultimately, assessment performance and students’ practice behavior influenced each other. In addition, we found it essential to provide sufficient time for students to adapt and reach the expected competency level; otherwise, earlier assessment results become the predictor of the final outcome. Lastly, it is imperative to make struggling students aware of their status and to provide support as early as possible. While our findings have implications for medical and optometric education, they would be strengthened by more direct measurement of DP, self-awareness and EI. As a result of this analysis, we have considered providing all students at least three weeks to practice before major assessments. In addition, underperforming students will be prescribed a deliberate practice plan and scheduled in the later assessment week.

Acknowledgements

A portion of this work was presented at the 2016 annual meeting of the American Academy of Optometry in Anaheim, Calif. The authors acknowledge the University of the Incarnate Word Rosenberg School of Optometry classes of 2018 and 2019 who served as subjects of this study.

References

  1. Broadie S. Ethics with Aristotle. Oxford University Press; 1991.
  2. National Board of Examiners in Optometry – NBEO [Internet]. Candidate Eligibility – NBEO. [cited 2017 Dec 17]. Available from: https://www.optometry.org/nccto.cfm
  3. Mayer JD, Salovey P. What is emotional intelligence? In: Salovey P, Sluyter DJ, eds. Emotional development and emotional intelligence: educational implications. New York, NY: Basic Books; 1997:3-31.
  4. Goleman D, Boyatzis RE, McKee A. Primal leadership: realizing the power of emotional intelligence. Boston: Harvard Business School Press, 2002.
  5. Victoroff KZ, Boyatzis RE. What is the relationship between emotional intelligence and dental student clinical performance? J Dent Educ. 2013 Apr;77(4):416-26.
  6. Beauvais AM, Brady N, O’Shea ER, Griffin MT. Emotional intelligence and nursing performance among nursing students. Nurse Educ Today. 2011 May;31(4):396-401.
  7. Arora S, Ashrafian H, Davis R, Athanasiou T, Darzi A, Sevdalis N. Emotional intelligence in medicine: a systematic review through the context of the ACGME competencies. Med Educ. 2010 Aug;44(8):749-64.
  8. Satterfield J, Swenson S, Rabow M. Emotional intelligence in internal medicine residents: educational implications for clinical performance and burnout. Ann Behav Sci Med Educ. 2009;14(2):65-68.
  9. Stratton TD, Elam CL, Murphy-Spencer AE, Quinlivan SL. Emotional intelligence and clinical skills: preliminary results from a comprehensive clinical performance examination. Acad Med. 2005 Oct;80(10 Suppl):S34-7.
  10. Pancholi BR, Dunne MCM. Virtual patient instruction and self-assessment accuracy in optometry students. Optometric Education. 2018;43(2):1-16.
  11. Goleman D. Emotional intelligence [Internet]. 2015 Apr 21 [cited 2017 Oct 5]. Available from: www.danielgoleman.info/daniel-goleman-how-emotionally-intelligent-are-you/.
  12. Hodges B, Regehr G, Martin D. Difficulties in recognizing one’s own incompetence: novice physicians who are unskilled and unaware of it. Acad Med. 2001 Oct;76(10 Suppl):S87-9.
  13. Kruger J, Dunning D. Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. J Pers Soc Psychol. 1999 Dec;77(6):1121-34.
  14. Ericsson KA. Deliberate practice and acquisition of expert performance: a general overview. Acad Emerg Med. 2008 Nov;15(11):988-94.
  15. McGaghie WC, Issenberg SB, Cohen ER, Barsuk JH, Wayne DB. Does simulation-based medical education with deliberate practice yield better results than traditional clinical education? A meta-analytic comparative review of the evidence. Acad Med. 2011 Jun;86(6):706-11.
  16. Plant EA, Ericsson KA, Hill L, Asberg K. Why study time does not predict grade point average across college students: implications of deliberate practice for academic performance. Contemporary Educational Psychology. 2005;30(1):96-116.

Yutaka Maki, OD, MS, FCOVD, [maki@uiwtx.edu] is an Assistant Clinical Professor at the University of the Incarnate Word (UIW) Rosenberg School of Optometry and Chief of the Binocular Vision and Vision Therapy Service at UIW Eye Institute. His didactic responsibilities include teaching courses in optics, preclinical optometry, vision therapy and strabismus/amblyopia.

Brian K. Foutch, OD, PhD, FAAO, is an Associate Professor at the University of the Incarnate Word Rosenberg School of Optometry where he teaches public health and epidemiology and neuroanatomy.