
Optometric Education

The Journal of the Association of Schools and Colleges of Optometry

Optometric Education: Volume 46 Number 1 (Fall 2020)

PEER REVIEWED

Effects of Course Design Decisions on Optometry Student Outcomes: Analysis of Lecture Synchronicity and Flipped-Model Learning, 2013-2016

Emily J. Aslakson, OD, FCOVD, FAAO, and Daniel A. Taylor, OD, EdD, MS, FAAO, DipOE

Abstract

This study compared asynchronous lecture presentation to live lecture presentation, flipped-classroom course models to traditional lecture-based course models, and the interaction between concurrently implemented lecture delivery and course models. Statistical analysis of optometry students’ Final Course Grades from neuroscience courses taught from 2013 to 2016 at Southern College of Optometry and Michigan College of Optometry at Ferris State University revealed no significant differences attributable to lecture delivery, course design, or their combination. Significant variance between samples and subsamples existed on some summative assessments. The large proportion of variance attributable to error implied that knowledge mastery was improved by factors outside the study design, such as reinforcement activities and prior exposure to course material.

Key Words: flipped classroom, asynchronous presentation, optometric education, neuroscience education

Introduction

With emerging pedagogical technologies and methods of content delivery, the face of education is changing rapidly. These changes are seen at all levels of education, from preschool to doctoral programs. It is an educator’s duty to adapt to new technologies and methods to best serve students. But, is this embrace of technology always what is best for students? Do either the method of content delivery or the course model impact student success?

Though considerable research exists within optometric education and other education fields evaluating the effects of innovative pedagogy, few investigators have considered how the effects of multiple simultaneous pedagogical innovations intersect. With the multitude of classroom innovations currently available, consideration of these combined effects is clearly needed. Among the pedagogical innovations recently embraced in optometric education is the flipped-classroom course organization, which attempts to improve student mastery and critical understanding of classroom material by using the lecture hall as a venue for active work.1,2 Lecture-capture technology has also been studied in optometric education. This technology provides both a backup copy of live lectures for review purposes and the flexibility to record lectures for viewing at a later time (i.e., asynchronous content delivery). The latter ability has the potential to be quite useful for flipped-classroom organizations, as it permits facile movement of basic lecture-format material (e.g., statements of basic facts and concepts) out of the classroom for review before or after a classroom period, thus freeing classroom time for more active work.

The literature provides some context regarding these issues. For example, students tended to perform better on assessments of very difficult concepts when taught in a live lecture format rather than an asynchronous or online format.1,3 In courses that had asynchronously presented online learning modules (OLMs), students tended to perform better on short-term memory assessments (i.e., weekly quizzes) than those students who did not participate in OLMs (i.e., live lecture only). The OLMs did have interactive features in addition to recorded narratives and videos. However, there was no difference in performance on long-term memory assessments (i.e., Midterm and Final Examinations) between students who engaged with OLMs vs. those who attended only live lecture.4

This observation is consistent with further studies that explore how faculty presence affects online or flipped-model courses. Common themes in these studies are that a strong faculty presence in such settings increases student cognitive engagement and, furthermore, students expect a strong faculty presence in these courses.3,5,6 Smaller class sizes also correlated with increased student engagement, which was positively correlated with course grades.5 When both students and the faculty member were engaged in a course, students had a more positive experience with the course overall.

Interestingly, a common theme regarding lecture-capture technology and flipped-model classrooms is that while they offer significant benefits, they cannot and should not replace live lecture.4,6-11 Some of the positive attributes of flipped-classroom models and lecture-capture technology from a student perspective are that they allow students access to lecture material whenever and wherever they need it, which is extremely important to students with busy academic and non-academic schedules.2,7 These models also permit self-pacing of instruction and allow presentation of material in multiple modalities.2,4 Research indicates that all students benefit from multiple modalities of content delivery (e.g., live lecture, recorded lecture, online assignments) and that some may need to spend more or less time with the material before mastering it. Learning is not a “one-size-fits-all” activity, and the availability of multiple modalities of delivery is often more effective than a single modality. However, a student must have an intrinsic desire to learn to succeed in the flipped classroom, so these models do not tend to work well for students who typically come to class unprepared.2

Faculty opinions toward flipped-classroom course organization and lecture-capture software tend to be more negative than those of students. Particularly poor are faculty opinions of lecture capture as a supplement to live lecture courses. The largest concern is that attendance in live lecture formats would decrease if lectures were recorded and made available for on-demand viewing. However, one study found that the availability of online lecture material did not decrease attendance compared with when online lectures were restricted.10 Students reported that they predominantly used online lecture material to catch up on lectures they had missed. The most common reason students reported missing live lectures was conflict with assignments in other courses.7

Comparisons of overall course performance between live lecture and asynchronous delivery courses have yielded conflicting outcomes. Some studies found that students who attended live lectures performed better than students who relied on online lecture material only.7,8 One of these studies also found that weaker students were more likely to miss live lecture periods and rely only on online lecture capture for content delivery, which may explain why those students were poorer performers.6,8 However, most studies have found that live lecture and asynchronous delivery have equivalent effects on student course performance.7-9 Some of these studies did not separate students who attended live lectures and used available online lectures, which may explain some of this discrepancy. In general, it seems a reasonable assumption that students who use all available resources will perform better than those who only use one.

One study explored the use of podcasts as an alternative to live lecture for optometry students.12 Just over half of the students indicated they had downloaded and listened to at least one podcast and were classified as listeners. The rest of the students did not listen to any podcasts and were classified as non-listeners. Most listeners stated they used the podcasts to “fill in gaps” and revisit material from the live lecture, and the vast majority (94.6%) felt the podcasts were valuable in increasing understanding of the material. Among non-listeners, the most common reason for not using the podcasts was unfamiliarity with how to access the material (34.4%). When asked whether podcasts would be a suitable replacement for live lectures, majorities of both listeners (85.7%) and non-listeners (83.9%) felt they would not.

Purpose of the Study

Several studies have investigated the effects of flipped-classroom courses and lecture-capture technology on student outcome measures. However, only one of these studies specifically investigates optometric education,12 and none examines the combined effects of implementing these two classroom innovations simultaneously. Our study seeks to address these gaps by answering three research questions. First, do optometry students perform better on course assessments with live, in-person instruction as compared to asynchronous, distance instruction using lecture-capture technology (i.e., differences in distance)? Second, do optometry students perform better in a traditional, lecture-based educational format as compared to a flipped model that focuses more on textbook readings and assignments (i.e., differences in model)? Third, how do these differences in distance and in model affect optometry students’ academic performance when implemented simultaneously?

Methods and Results

Methods for this study were reviewed and approved by the Southern College of Optometry (SCO) and Ferris State University Institutional Review Boards on June 6, 2017 and June 29, 2017, respectively. A waiver of informed consent was obtained from the same bodies on June 27 and June 29 because the study involved only a retrospective analysis of existing academic records.

Data from subjects in the SCO and Michigan College of Optometry at Ferris State University (MCO) neuroscience courses (OPT 113 and OPTM 635, respectively) are included in this study. The data come from the course sections taught from Fall 2013 to Fall 2016. Due to differences in the curricular design of the two colleges, OPT 113 was taught during the first year of the SCO curriculum, while OPTM 635 was taught during the second year of the MCO curriculum. The Instructor of Record for the SCO neuroscience course recorded his lectures via the Tegrity lecture-capture software and made them available on the Tegrity web-based application to MCO students asynchronously. These lectures, as well as reading assignments, online assignments, examinations, quizzes and other course elements were shared so that the totality of the course material was the same for the SCO and MCO neuroscience courses. The lone exception to this practice was the last lecture of the semester, which routinely could not be captured from the live SCO presentation because the MCO Fall semester ended before the SCO Fall semester. Thus, for most years, this lecture was specially recorded via Tegrity for the MCO class, while a live version of the lecture was presented to the SCO class afterwards. In 2016, this lecture was converted to a self-paced, mastery-model online learning assignment for all students.

Various professors at MCO acted as course liaisons, responsible for the administrative elements of the course. These professors did not perform content delivery and received a small administrative workload allotment for their time as course facilitators only. Three other elements differed between the two courses. The first was the lack of the Instructor of Record’s physical presence on the MCO campus. The second was the inclusion of a one-hour-per-week live recitation session, designed to replace in-person office hours, in which the Instructor of Record took questions and reviewed material with the MCO students via live video-conferencing. The third was the examination format: examinations were given on paper and through the laptop computer-based ExamSoft platform for the SCO classes, and through the infrared remote-controlled Turning Point response system for the MCO classes. Though these evaluation methods had certain interfacial and aesthetic differences, other elements of the examinations, including content, time allotted and testing environment, were similar between sites. Aside from these three differences, students enrolled in the SCO and MCO courses had access to the exact same content and were evaluated using the exact same assessment items.

Asynchronous vs. synchronous course presentation (School)

There were 678 students who completed either OPT 113 or OPTM 635 between 2013 and 2016 and were included as subjects in this study. The records of students who withdrew from the course mid-term were not included among the subjects. The independent sorting variable — named School — was defined by whether a student took the live class (nlive) or the asynchronous class (nasyn). See the notes with individual tables for sample sizes.

Traditional vs. flipped classroom (Type)

For the 2013 and 2014 course administrations, the majority of the neuroscience course material was delivered as lecture, with suggested readings in the course textbook for enhancement, i.e., a traditional model. For the 2015 and 2016 administrations of the neuroscience courses, the Instructor of Record adjusted the course material (or, in common parlance, “flipped the classroom”) to make the material presentation more student-driven. Under this new course organization, the number of lectures was reduced by half. Lectures that remained were designed to enhance and expand the foundational knowledge the students had already obtained from their preparatory work. Practically, this meant that lectures contained complex and optometry-related course material but few factual or conceptual definitions. Basic course material was introduced via assigned textbook readings (for which students were given course time off) and reinforced through online learning modules developed by the Instructor of Record. Though it is beyond the scope of this report to describe these proprietary modules in detail, suffice it to say they followed the self-paced, mastery model of presentation and focused on detailed neuroanatomy subjects (specifically, cranial nerve anatomy and function, Horner’s syndrome, and, in the 2016 course administration, ascending and descending central pathways). Students were required to complete textbook readings covering the material contained in the modules prior to completing the modules, which expanded and reinforced the textbook information. Internal studies indicated that completion of these modules led to effective knowledge transfer and, compared to traditional lectures, may have improved student performance on related examination items.13,14

The flipped-model course organization was used for the 2015 and 2016 administrations of the courses. Thus, in addition to the School variable, the students who completed one of the neuroscience courses from 2013 to 2016 can be further classified by Type: whether they completed the lecture-heavy, traditional course (ntrad) or the reading-driven, flipped course (nflip).

Academic variables

Existing academic records for subjects were obtained from the files of the Instructor of Record of OPT 113 and OPTM 635, while subjects’ Optometry Admission Test (OAT) scores were obtained from the offices of Academic Affairs of SCO and MCO. OAT academic average (OATAA), OAT total science (OATTS), OAT subsection scores (OATBIO, OATGC, OATOC, OATPHYS, OATRC and OATQR), Midterm Examination Grades (Mid1 and Mid2), Final Examination Grades (FinalEx) and Final Course Grades (Course%) were gleaned from these records and sorted by the subjects’ school-issued identification numbers. Throughout this paper, these are defined as academic variables.

Student records were randomized using a List Randomizer tool (https://www.random.org/) and assigned a unique depersonalized ID number based on that random order (“ID Number”). These data were compiled in a spreadsheet and imported into IBM SPSS 24 for analysis.
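
As an illustration only, this depersonalization step can be reproduced in a few lines of code. The sketch below is not the authors’ actual workflow; the input file and column names are hypothetical. It shuffles the records and assigns sequential ID numbers in shuffled order, mirroring the List Randomizer procedure described above.

```python
# A minimal sketch of the depersonalization workflow described above, not the
# authors' actual procedure. The file name and column names are hypothetical.
import random

import pandas as pd

records = pd.read_csv("academic_records.csv")  # hypothetical academic records
rows = records.to_dict("records")
random.shuffle(rows)  # analogous to randomizing with the List Randomizer tool

depersonalized = pd.DataFrame(rows)
depersonalized["ID_Number"] = range(1, len(depersonalized) + 1)  # random-order IDs
depersonalized = depersonalized.drop(columns=["school_id"])  # remove identifiers

# Export for import into a statistics package (the authors used IBM SPSS 24)
depersonalized.to_csv("analysis_dataset.csv", index=False)
```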

Research question 1

The first research question asked whether optometry students achieved similar grades from live instruction and asynchronous instruction in a neuroscience course. To evaluate this, we compared the means, medians and distributions of academic variables across different values for School.

Descriptive statistics and normality assumptions for the School value distributions across different academic variables are found in Tables 1 and 2.

Independent-samples Student’s t-tests were performed on the parametric distributions across different values of School. Due to the uneven sample sizes between the live and asynchronous groups, effect size was determined with the Hedges’ g method. Several non-parametric tests were performed on the non-parametric distributions: the independent-samples median test, which compares medians from different populations, and the independent-samples Mann-Whitney U and Kolmogorov-Smirnov tests, which both evaluate the distributions from which samples are drawn.
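
As a hedged illustration of this battery of tests, the sketch below reproduces each named statistic with SciPy in place of SPSS. The sample arrays are simulated stand-ins for one academic variable split by School; the group sizes, means and standard deviations are invented for the example.

```python
# Sketch of the School-comparison tests named above, using SciPy in place of SPSS.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
live = rng.normal(84, 6, 500)  # hypothetical live-sample scores
asyn = rng.normal(83, 7, 178)  # hypothetical asynchronous-sample scores

# Parametric: independent-samples Student's t-test
t, p = stats.ttest_ind(live, asyn)

# Effect size: Hedges' g (Cohen's d with a small-sample correction factor,
# appropriate when group sizes are unequal)
n1, n2 = len(live), len(asyn)
pooled_sd = np.sqrt(((n1 - 1) * live.var(ddof=1) +
                     (n2 - 1) * asyn.var(ddof=1)) / (n1 + n2 - 2))
d = (live.mean() - asyn.mean()) / pooled_sd
g = d * (1 - 3 / (4 * (n1 + n2) - 9))

# Non-parametric alternatives for non-normal distributions
stat_med, p_med, _, _ = stats.median_test(live, asyn)  # independent-samples median test
u, p_u = stats.mannwhitneyu(live, asyn)                # Mann-Whitney U
ks, p_ks = stats.ks_2samp(live, asyn)                  # two-sample Kolmogorov-Smirnov

print(f"t = {t:.2f}, p = {p:.4f}, Hedges' g = {g:.2f}")
```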

Results

Full results of these analyses are presented in Table 3. Students in the live instruction sample had significantly higher OATAA, OATTS, OATRC, OATBIO and FinalEx scores (α = .05), while students in the asynchronous sample had significantly higher scores on Mid1 and Mid2 (α = .05). Differences between live and asynchronous sample students in other academic variables were not significant.

Research question 2

The second research question asked whether optometry students achieved similar grades in a traditional model (ntrad) and a flipped model (nflip) in a neuroscience course. To evaluate this, we compared the means, medians and distributions of our academic variables across different values for Type. Descriptive statistics and normality assumptions across different academic variables are found in Tables 4 and 5.

Independent-samples Student’s t-tests were performed on all distributions across different values of Type, except for Mid2, which had a non-parametric element. The similar sample sizes between the traditional and flipped samples allowed effect size to be calculated using the Cohen’s d statistic, except in cases where the standard deviations of the two samples’ performances were considerably different (defined here as a difference >3.0); in those cases, Glass’s Δ was used. The non-parametric independent-samples median, Mann-Whitney U and Kolmogorov-Smirnov tests were performed on the non-parametric Mid2 distribution.
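
For reference, the two effect-size statistics mentioned above differ only in the denominator. A minimal sketch, assuming NumPy arrays of scores, is shown below: Cohen’s d scales the mean difference by the pooled standard deviation of both samples, while Glass’s Δ uses only the comparison group’s standard deviation, which is why it is preferred when the two standard deviations diverge considerably.

```python
# Sketch of the two effect sizes discussed above; not taken from the study's
# SPSS output. Arguments are arrays of scores for the two samples.
import numpy as np

def cohens_d(a, b):
    """Mean difference scaled by the pooled standard deviation."""
    n1, n2 = len(a), len(b)
    pooled_sd = np.sqrt(((n1 - 1) * np.var(a, ddof=1) +
                         (n2 - 1) * np.var(b, ddof=1)) / (n1 + n2 - 2))
    return (np.mean(a) - np.mean(b)) / pooled_sd

def glass_delta(a, control):
    """Mean difference scaled by the control group's standard deviation only,
    used when the two samples' standard deviations differ considerably."""
    return (np.mean(a) - np.mean(control)) / np.std(control, ddof=1)
```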

Results

Full results of these statistical analyses are presented in Table 6. There were significant differences (α = .05) between traditional-model and flipped-model students in OATAA, OATQR and OATGC scores, in Mid1 and FinalEx scores, and in Kolmogorov-Smirnov testing for Mid2. Differences between traditional-model and flipped-model students across other academic variables were not significant.

Research question 3

Because both the School and Type variables affected the neuroscience courses during the same time period, any effect identified by answering research questions one and two could be caused by either School, Type or some combination of the two effects. To better understand the true causes of any detectable effect, the third research question explored how the interactions between live/asynchronous presentations and traditional/flipped models influenced neuroscience course grades among optometry students. To do this, we compared the means of classroom academic variables (i.e., Mid1, Mid2, FinalEx and Course%) across different values of both School and Type (i.e., nlive*trad, nasyn*trad, nlive*flip and nasyn*flip) using analysis of variance (ANOVA) testing. Descriptive statistics for the combination School/Type distributions across classroom academic variables are found in Table 7. ANOVA testing assumes that samples are taken from a normally distributed population and that all study groups have equal population variance (i.e., homogeneity of variance). Results of this testing are found in Tables 8 and 9.

Mid2 met both the assumption of normality and the assumption of homogeneity of variance, and thus standard ANOVA was run to analyze this academic variable. The Course% classroom variable failed to meet the assumption of normality only, so standard ANOVA was run with the understanding that results should be applied carefully, although the effect of non-parametric distributions upon the Type I error rate is minimal. For those variables that failed to demonstrate homogeneity of variance only (i.e., Mid1 and FinalEx), Welch’s ANOVA was run, which does not assume equal variances in return for somewhat reduced statistical power.
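
A brief sketch of the two procedures, using simulated stand-ins for the four School*Type subsamples (the group sizes, means and variances below are invented), might look as follows; standard one-way ANOVA comes from SciPy, while Welch’s variant is available in statsmodels.

```python
# Sketch only: standard ANOVA vs. Welch's ANOVA on simulated subsample scores.
import numpy as np
from scipy import stats
from statsmodels.stats.oneway import anova_oneway

rng = np.random.default_rng(1)
# Hypothetical scores for the four School*Type subsamples
groups = [rng.normal(mean, sd, n) for mean, sd, n in
          [(85, 5, 180), (84, 6, 80), (83, 8, 320), (88, 4, 98)]]

# Standard ANOVA (assumes homogeneity of variance), as used for Mid2 and Course%
f_stat, p_val = stats.f_oneway(*groups)

# Welch's ANOVA (no equal-variance assumption), as used for Mid1 and FinalEx
welch = anova_oneway(groups, use_var="unequal", welch_correction=True)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")
print(f"Welch: F = {welch.statistic:.2f}, p = {welch.pvalue:.4f}")
```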

Post-hoc testing

Because ANOVA and Welch’s ANOVA only report whether a difference exists or not, post-hoc testing is used to understand the implications of a rejection of the null hypothesis. We used Tukey’s method of post-hoc analysis on all significant findings to identify the pairs of means that differed significantly and those that did not (i.e., homogeneous subsets). Effect sizes for the ANOVA and Welch’s ANOVA results were determined by calculating η2 for School, Type, School*Type and statistical error (an SPSS-determined value) for each of the four classroom academic variables.
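
To make the post-hoc step concrete, the sketch below applies Tukey’s HSD to simulated School*Type groups and derives η2 for each factor, their interaction and error as that term’s sum of squares divided by the total sum of squares from a two-way ANOVA table. All data and column names are hypothetical.

```python
# Sketch of Tukey's HSD post-hoc testing and eta-squared effect sizes on
# simulated data; not the study's SPSS output.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "score": rng.normal(85, 6, 400),              # hypothetical exam grades
    "school": rng.choice(["live", "asyn"], 400),  # delivery method
    "model": rng.choice(["trad", "flip"], 400),   # course model
})
df["group"] = df["school"] + "*" + df["model"]

# Tukey's HSD: which pairs of subsample means differ significantly
print(pairwise_tukeyhsd(df["score"], df["group"]))

# Eta-squared for School, Type, School*Type and error:
# each term's sum of squares over the total sum of squares
fit = ols("score ~ C(school) * C(model)", data=df).fit()
table = sm.stats.anova_lm(fit, typ=2)
table["eta_sq"] = table["sum_sq"] / table["sum_sq"].sum()
print(table[["sum_sq", "eta_sq"]])
```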

Results

The third research question investigated how the interactions between live/asynchronous presentations and lecture/flipped models influenced neuroscience course grades among optometry students. Full results of ANOVA and Welch’s ANOVA tests are presented in Table 10. Significant differences at the α = .05 level were found between different groups for Mid1, Mid2, FinalEx and Course%. Effect sizes are available in Table 11. Post-hoc testing results are available in Table 12.

Overall, the results showed:

  1. Mean grade on Mid1 was significantly lower for students who took the live, flipped-model course than for students in other groups
  2. Mean grade on Mid2 was significantly higher for students who took the asynchronous, flipped-model course than for students in other groups
  3. Mean grade on FinalEx was significantly lower for students who took the asynchronous, flipped-model course compared to those who took the traditional course (live or asynchronous)
  4. Mean grade on FinalEx was significantly higher for students who took the live, traditional-model course compared to those who took the flipped course (live or asynchronous)
  5. Mean Course% was significantly lower for students who took the asynchronous, traditional-model course compared to those who took the asynchronous, flipped-model course

Discussion

Our findings are fairly consistent with the literature: in general, there is no significant difference in student neuroscience course grades between traditional and flipped models or between live and asynchronous delivery methods, though individual examination grades may vary significantly. This suggests the variations in performance found in this study are the result of factors that influence short-term performance, as measured by individual examinations, but not long-term mastery of knowledge over the course of a semester. Because material delivery methods and course organization models were course-wide changes in this study, it is unlikely these changes can satisfactorily explain the variance seen in individual examination scores. Also, η2 effect size analyses for ANOVA testing of School and Type intersectional subsamples for Mid1, Mid2, FinalEx and Course% values indicated that the vast majority of the variance seen in these values across subsamples is due to error; that is, to factors that were neither controlled for nor investigated by our study. Because most of the variance in individual examination scores was apparently caused by factors other than delivery method and teaching model, the discussion that follows is primarily concerned with possible influences on short-term student performance that are peripheral to our research questions.

Entering academic ability

Analysis of OAT academic averages across samples of School demonstrated that students in the live instruction sample had scored significantly higher than the students in the asynchronous sample. Analysis of OAT subsection scores showed a similar pattern: live-instruction-sample students had scored higher than asynchronous-instruction-sample students on all OAT subsections (either significantly or insignificantly so).

A similar analysis of OAT academic averages across samples of Type shows that students in the flipped-classroom sample had scored significantly higher than students in the traditional sample. Likewise, OAT subsection scores (except for the subsection score in organic chemistry) showed higher performances, whether significantly or insignificantly so, by students who would eventually enter the flipped sample compared to those who would enter the traditional sample.

These variations between the OAT scores and subscores of School and Type samples are explained by initial differences in admissions standards and applicant pool quality between the two institutions at the beginning of the study period, as well as increases in those same admissions standards and applicant pool quality over the four-year duration of the study. The larger question raised by the significant differences in OAT scores and subscores between samples is whether the studied populations are in fact comparable, or whether their variable OAT performances provide evidence of a considerable difference in entering academic ability.

Based on the results of the study, any such difference between samples in entering academic ability as measured by the OAT did not appear to correlate with performance on individual examinations or course grades. OAT performances would predict that the live delivery, flipped-model sample would have the best academic outcomes, when in fact this subsample showed no significant difference from several other groups in performance on all academic variables.

Effects of optometry college enrollment duration and familiarity with material

Students who received asynchronous content delivery performed better than those who received live content delivery on both Mid1 and Mid2. The effect size for Mid1 indicates this was a moderate-strength difference, which may be due to the fact that MCO students in OPTM 635 (asynchronous sample) were second-year optometry students, while SCO students in OPT 113 (live sample) were first-year, and in fact first-semester, optometry students. MCO students had also been exposed to some neuroanatomy material previously in their first-year general anatomy and physiology and ocular anatomy courses, so that portions of the content featured in both Midterms were “review” for the MCO students. We believe the additional year in optometry college and greater familiarity with some of the course material made the students in the asynchronous sample more academically mature and thus better-suited to succeed on assessments than those in the live sample.

Mid2 in particular showed a large discrepancy in scores between the two samples of School, with a resultant medium-to-large effect size greater than that of Mid1, though the non-parametric nature of the Mid2 distribution makes the effect size estimate somewhat inaccurate. Whatever effect is actually present between samples of School on Mid2 may stem in particular from the previously discussed “review” that MCO students enjoyed: asynchronous sample students had learned about cranial nerves in the aforementioned two courses completed during their first year in optometry college, as well as in the optometric procedures course taught concurrently in their second-year curriculum. Because cranial nerve anatomy and assessment constituted a major portion of the Mid2 Examination (approximately 55% of examination items), asynchronous sample students likely found a majority of questions on Mid2 more familiar than did live sample students, while no reviewed subject constituted such a large portion of the assessment items on either Mid1 or FinalEx. This idea of a higher baseline of knowledge as an explanation for differences in performance has been hypothesized in other studies.15

Final examination study strategies

Though students in the asynchronous sample performed significantly better on Midterm Examinations, live sample students performed significantly better on the FinalEx, resulting in no significant difference in Final Course Grade between the two samples. This inverse relationship between Midterm performance and FinalEx performance is probably not reflective of a true difference between the samples, but rather a difference in study strategies: students who entered the FinalEx having performed better on the Midterm Examinations likely prepared differently than those who were in a more precarious position.

Effects of reinforcement assignments on the flipped classroom

Across values of Type, students in the traditional sample performed better on Mid1 (with a moderate effect size) and the FinalEx (with a strong effect size), while students in the flipped sample performed statistically the same as traditional students on Mid2 and in their Final Course Grades.

The effect of content reinforcement may explain part of these differences. In the flipped model, the majority of the material assessed on all examinations was introduced through textbook readings. No learning modules were assigned to reinforce Mid1 material. In contrast, Mid2 material was reinforced by two online mastery-model learning modules, which together covered more than 60% of the Mid2 assessment items. FinalEx material included one reinforcing learning module for the flipped sample, covering less than 10% of the material assessed by the FinalEx. The literature suggests that the more students practice retrieving material from memory, the better they learn the material, particularly when the practice sessions are spaced out in time from one another.4,16 That students were assigned an active, self-assessing method to retrieve and reinforce material first learned days prior may explain the difference in scores.

Our findings suggest that flipping the classroom so that the majority of material is introduced by student-directed activities may actually be less effective in terms of student learning outcomes than the traditional lecture model, in the absence of reinforcing assignments. However, with reinforcing assignments, academic outcomes in flipped-classroom models approach and may exceed those of traditional lecture courses. Thus, more rigorously designed flipped-model schemes that include regular summative assessment of assigned readings and classroom-introduced materials are to be preferred to a more laissez-faire approach.

It is not clear from this study whether the use of reinforcing assignments would improve academic performance in a traditional lecture setting, where material is introduced primarily from lectures rather than textbooks. Cognitive learning science seems to indicate that carefully designed reinforcing activities should improve learning of material regardless of its mode of presentation, but whether reinforcement works better in traditional or flipped-model courses remains an open question.4,16

Effects of School combined with effects of Type

Though we studied the interaction between differences in material delivery and course model on neuroscience grades, it is difficult to draw implications from the extensive testing done on the intersectional subsamples of the School and Type variables, for the significant findings do not seem to tell a consistent story. As stated above, effect size analyses indicated that the majority of the variance found in classroom academic variables between subsamples was caused by factors that were not controlled or investigated by our study.

The subsample of students who received live, flipped-model instruction scored most poorly on both semester examinations (though only the performance on Mid1 was significantly worse). There does not appear to be an obvious explanation for this finding, and it is likely incidental.

FinalEx Grades for students in the asynchronous, flipped-model subsample were significantly lower than those of students who received traditional classroom instruction (regardless of live or asynchronous material delivery). Some of the effect could be due to the asynchronous/flipped sample’s exceptional performance on Mid2 (see below), which reduced the importance of the FinalEx toward the sample’s Final Course Grades.

Building mental maps to reduce cognitive load

The subsample of students who received flipped-style instruction asynchronously dramatically outperformed all other subsamples on Mid2 (scoring nearly 9% higher on average than the next nearest group) and earned significantly higher Final Course Grades than students in traditional course models. This subsample benefited from both reinforcing learning modules and relatively greater maturity and experience with neuroanatomical material, and saw substantial improvement in examination scores compared to student samples that benefited from only one, or none, of these effects (i.e., when Mid2 for the live/traditional subsample was compared to Mid2 for the live/flipped subsample, or to Mid2 for the asynchronous/traditional subsample). For these latter comparisons, the effects on Mid2 scores were less than 1% each. It may be, therefore, that the interaction between previous experience and the use of reinforcing learning modules is not merely additive but multiplicative. Though one hesitates to press the possibility of multiplicative effects too far based on one comparison, the positive potential of such an effect certainly invites additional investigation.

Cognitively, such a multiplicative effect can be explained by the mental map concept, which states that fluency in a particular complex skill (in this case, clinical assessment of cranial nerves) is based on the construction and refinement of a high-quality mental map of that material within a student’s long-term memory. For complex actions like clinical cranial nerve assessment, the many facts, concepts and deductions needed to arrive at a correct diagnosis can be cognitively exhausting for a novice. Each step must be intentionally and consciously recalled, implemented and assessed in its correct order. The potential for forgetting a step or making a simple mistake under such levels of cognitive load is high.

With repeated, high-quality practice, however, complex processes are automatized, as the mind builds a mental map that includes all the discrete elements of the process in one whole. Thus, for the student whose practice has moved her from novice to intermediate, performing cranial nerve assessment is a simpler process, involving mere activation of the mental map she has already built. The cognitive load is lower because there are fewer discrete parts to attend to, and the potential for error is lessened. The implication is that programmed reinforcement of previously learned and recently relearned material, by students who have a year’s practice mastering complex clinical concepts, could shepherd the development of a sophisticated mental map in a way that simply does not occur in the absence of one or both of these elements.16

Conclusion

There do not appear to be significant differences in Final Course Grades between live and asynchronous content delivery methods, traditional and flipped-course models, or combination samples. However, mastery of discrete skills or areas of knowledge may be influenced by many factors external to this study, including the assignment of reinforcing learning modules in addition to initial material presentation, and previous exposure to material in prior contexts. The combination of these two elements seems to produce a multiplying effect on retention and mastery, suggesting that repeated and varied practice is crucial for learning course material.

References

  1. Leung JYC, Kumta SM, Jin Y, Yung ALK. Short review of the flipped classroom approach. Med Educ. 2014;48(11):1127. doi:10.1111/medu.12576.
  2. Ozdamli F, Asiksoy G. Flipped classroom approach. World Journal on Educational Technology. 2016;8(2):98-105. doi:10.18844/wjet.v8i2.640.
  3. O’Reilly K. Faculty presence promotes quality of education in the online asynchronous classroom. Contemporary Issues in Education Research (CIER). 2011 Jan;2(3):53. doi:10.19030/cier.v2i3.1087.
  4. Phillips JA. Replacing traditional live lectures with online learning modules: effects on learning and student perceptions. Currents in Pharmacy Teaching and Learning. 2015 Nov-Dec;7(6):738-744. doi:10.1016/j.cptl.2015.08.009.
  5. Pilotti M, Anderson S, Hardy P, Murphy P, Vincent P. Factors related to cognitive, emotional, and behavioral engagement in the online asynchronous classroom. International Journal of Teaching and Learning in Higher Education. 2017;29(1):145-153.
  6. Jong ND, Verstegen DML, Tan FES, O’Connor SJ. A comparison of classroom and online asynchronous problem-based learning for students undertaking statistics training as part of a public health Masters degree. Advances in Health Sciences Education. 2012;18(2):245-264. doi:10.1007/s10459-012-9368-x.
  7. Simcock DC, Chua WH, Hekman M, Levin MT, Brown S. A survey of first-year biology student opinions regarding live lectures and recorded lectures as learning tools. Adv in Physiol Educ. 2017 Mar 1;41(1):69-76. doi:10.1152/advan.00117.2016.
  8. Demir EA, Tutuk O, Dogan H, Egeli D, Tumer C. Lecture attendance improves success in medical physiology. Adv in Physiol Educ. 2017;41(4):599-603. doi:10.1152/advan.00119.2017.
  9. Skylar AA. A comparison of asynchronous online text-based lectures and synchronous interactive web conferencing lectures. Issues in Teacher Education. 2009;18(2):69-84.
  10. Gosper M, Macneill M, Phillips R, Preston G, Woo K, Green D. Web-based lecture technologies and learning and teaching: a study of change in four Australian universities. Online Learning. 2011;15(4). doi:10.24059/olj.v15i4.200.
  11. Peska DN, Lewis KO. Uniform instruction using web-based, asynchronous technology in a geographically distributed clinical clerkship: analysis of osteopathic medical student participation and satisfaction. J Am Osteopath Assoc. 2010;110(3):135-142. doi:10.7556/jaoa.2010.110.3.135.
  12. Hamilton-Maxwell K. A pilot study of optometry student perceptions, acceptance and use of podcasting. Optometric Education. 2016;41(2):1-8.
  13. Taylor DA. Examination grades in OPT 113: neuroanatomy at Southern College of Optometry (2011-2014). Daniel Taylor Research. https://www.taylorphotoclub.com/research/?p=11. Published September 13, 2015. Accessed Sept. 9, 2019.
  14. Taylor DA. Understanding the cranial nerves: evaluation of a self-paced online module in optometric education [doctoral dissertation]. Memphis: University of Memphis; 2016.
  15. Jordan J, Jalali A, Clarke S, Dyne P, Spector T, Coates W. Asynchronous vs didactic education: it’s too early to throw in the towel on tradition. BMC Medical Education. 2013;13(1). doi:10.1186/1472-6920-13-105.
  16. Brown PC, McDaniel MA, Roediger HL. Make It Stick: The Science of Successful Learning. Cambridge, MA: Belknap Press of Harvard University Press; 2014.

Dr. Aslakson [emilyaslakson@ferris.edu] is an Assistant Professor at the Michigan College of Optometry at Ferris State University. She teaches clinical neuroanatomy, clinical neuro-optometry and physical examination, and is an instructor in the Vision Therapy Laboratory. Dr. Aslakson also oversees students in the Primary Care, Pediatrics and Binocular Vision, and Vision Rehabilitation clinics at the University Eye Center.

Dr. Taylor is the Associate Dean for Academic and Student Affairs at the Michigan College of Optometry at Ferris State University. A Fellow of the American Academy of Optometry, he completed his Diplomate in the Optometric Education Section in 2018.