Optometric Education

The Journal of the Association of Schools and Colleges of Optometry

Optometric Education: Volume 47 Number 1 (Fall 2021)


Standardized Tests as Predictors of Success
in Health Professions Education: A Scoping Review

Naida Jakirlic, OD, FAAO, Caroline Ooley, OD, FAAO, and Elizabeth Hoppe, OD, MPH, DrPH, FAAO


Standardized examinations were created to evaluate student academic aptitude and eliminate bias from college admissions. The validity of standardized examinations in predicting student success in doctoral health professions education has been minimally explored. A scoping review was conducted to determine what is known about the predictive value of standardized tests in doctoral health professions education in the United States and Canada. A total of 323 titles and abstracts were reviewed for inclusion and exclusion criteria. Fifteen full-text articles were ultimately chosen for inclusion in this study. The results indicate conflicting data and underscore the necessity for further research in this area.

Key Words: doctoral health profession, standardized exam, academic success


Standardized testing is used as a formal assessment of academic ability in order to predict student success in higher education at both the undergraduate and graduate levels. Standardized tests differ from in-class assessments of knowledge because they are administered in a controlled environment, thus allowing for comparison of student performance that is presumably independent of socioeconomic status (SES), gender and race. Two supporting arguments for standardized testing assume that the standardization process eliminates potential for bias and that standardized tests can accurately assess students’ intellectual ability.1

The first large-scale standardized tests were administered in 1901 by the College Entrance Examination Board, which is known today as the College Board.2,3 One reason for creating and administering the tests was to reduce the volume and variety of pre-matriculation exams required by each undergraduate institution.2,3 Due to a national push to mandate aptitude tests for college admission, the College Board administered the first Scholastic Aptitude Test (SAT) in 1926 to thousands of students.2,3 In 1959, the American College Testing Company administered the ACT for the first time.2,3 Shortly after the first administration of the SAT, American graduate institutions followed suit. The first Medical College Admissions Test (MCAT) was given in 1928, the first Law School Admissions Test (LSAT) in 1948, and the first Graduate Record Examination (GRE) in 1949.2,4 The first Optometry College Admission Test (OCAT) was developed in 1971 by the Psychological Corporation under the sponsorship of the Association of Schools and Colleges of Optometry and was administered for the first time in the fall of 1972.5

Despite attempts to eliminate bias from standardized examinations, many in academia argue that requiring standardized exams for admission into undergraduate and graduate programs significantly disadvantages female students, students of lower SES, and those in under-represented minority (URM) groups. Miller and Stassun argue that requiring the GRE significantly decreases the opportunities for women, URM students and lower SES students to enter the science, technology, engineering and math (STEM) professions.4 They point out that the Educational Testing Service, which administers the GRE, publicizes that women score 80 points lower in the physical sciences than men, and African Americans score 200 points below white test-takers on the exam.4

Moneta-Koehler and colleagues report that students with low SES perform worse on standardized exams, possibly due to lack of access to academic preparation and lack of funds to pay for exam retakes if the first score is low.6 Nankervis argues that the SAT underestimates future success of female test-takers because males average 35 points higher than females in the mathematics section.7 In another publication, Wilson discusses a study demonstrating that metrics-based file reviews of applicants excluded twice the number of applicants who identified as historically URM, and moving away from metrics-based admissions processes resulted in a remarkable increase in admission of URM students to a doctoral biomedical science program.8 Wilson also reiterates that women and URM students score lower on the GRE than white and Asian-American men; therefore, using GRE scores to stratify doctoral applicants significantly reduces the diversity of the applicant pool.8

In addition to the reported bias against women, URM students and students of lower SES, there are conflicting reports about the ability of standardized exams to predict academic success at both the undergraduate and graduate level. Kuncel and Hezlett argue that standardized exams are effective predictors of performance in graduate school, but that the combination of standardized exam scores and undergraduate grade point average (uGPA) gives the most accurate prediction of academic success.9 They also state that student motivation and interest are crucial for continued exertion throughout graduate school and cannot be measured with standardized exams.9

In contrast, Miller and Stassun argue that there is a weak correlation between the GRE and success in STEM fields.4 They point out that research from the Educational Testing Service shows that the predictive validity of the GRE is limited to first-year graduate grade point average (gGPA) but academic success is much broader than first-year gGPA.4 Academic success encompasses first-year gGPA, gGPA, degree attainment, licensing examination performance, faculty evaluation of students, residency attainment and completion, and numerous other outcomes that cannot be predicted by GRE scores.4

Similarly, Moneta-Koehler and colleagues found that GRE scores are moderate predictors of first-semester graduate grades and weak to moderate predictors of overall gGPA.6 They also found that the GRE does not predict other skills necessary to succeed in biomedical doctoral programs and concluded that the limited benefits of the GRE do not outweigh the expense of excluding URM students and students of lower SES from entering graduate biomedical doctoral programs.6

While a reasonable amount of literature about the ability of standardized exams to predict success in biomedical graduate programs exists, current studies exploring the validity of standardized exams in predicting academic success in doctoral health professions programs are relatively scant, particularly in the field of optometry.5,11-18 To gain insight into how many doctoral health professions programs require standardized exams as part of their admissions requirements, the authors first identified 12 health professions to explore, including optometry, dentistry, allopathic medicine, osteopathic medicine, podiatry, audiology, physical therapy, veterinary medicine, occupational therapy, pharmacy, acupuncture and chiropractic programs. Nursing was not included due to the wide range of doctoral-level nursing programs, the diverse pathways to attainment of doctoral degrees within the nursing profession, and the differing requirements for each program. Once the programs for inclusion were identified, their respective standardized test requirements were summarized (Table 1).10

As suggested by Arksey and O’Malley, a scoping review methodology is well-suited for four primary contexts.19 The review question undertaken in this research addresses three of the four circumstances identified: 1) to examine the extent, range and nature of research activity; 2) to summarize and disseminate research findings; and 3) to identify research gaps in the existing literature. This project utilized a scoping review methodology to gain a deeper understanding of the current status of standardized testing in health professions admissions processes, along with any research evaluating the predictive power of pre-admissions standardized test results for ultimate academic and/or professional success. Furthermore, this research seeks to summarize what is currently known by gathering and assessing published works relevant to this inquiry, and upon review of the summary, to identify needs for further research on this topic.

Methods

The scoping review methodological framework adheres to the guidelines suggested by Arksey and O’Malley and was conducted as an iterative process following the five suggested stages: Stage 1: identifying the research question; Stage 2: identifying relevant studies; Stage 3: study selection; Stage 4: charting the data; and Stage 5: collating, summarizing and reporting the results.19

The investigators determined the inclusion and exclusion criteria prior to searching the literature. The inclusion criteria focused specifically on scholarly, peer-reviewed indexed literature describing information related to health professions education, admissions to health professions education programs, and standardized testing. The time period for inclusion was limited to the past 10 years, and language was limited to English. Only articles published about health professions education in the United States and Canada were included due to potential differences in health professions education relative to governmental, regulatory, economic and cultural factors in countries outside North America.

Articles with a focus on non-health professions programs, such as biomedical sciences, and articles from gray literature sources were excluded. Gray literature was defined as a thesis, dissertation, non-peer-reviewed study, conference proceeding or editorial. It was noted that one article could have multiple reasons for rejection. To establish clearly defined guidelines for rejection, each reason was enumerated. In each case where a paper was excluded, the primary reason for rejection was recorded.

Literature search

A combination of methods was used to locate articles for this scoping review. Keywords were used to retrieve the broadest possible number of articles related to the research question, and controlled language was used to construct a narrow, defined search strategy. The investigators, serving as content experts, and two vision science librarians, serving as technical experts, reviewed the research question in depth; their discussion yielded the following keywords for a broad search: educational measurement, school admissions criteria, academic success, GRE admissions health professions, standardized testing graduate education. A review of the controlled language (Table 2) available in electronic databases was conducted using the keywords to locate appropriate terms to create focused search strategies. All keywords, controlled language terms and subsequent search strategies were vetted by both vision science librarians.

The databases included in the study were selected based on library subscription, availability of controlled language search option, and comprehensive coverage of the topic. The investigators and the consulting librarians determined which search filters (Table 3) would yield the best, most relevant results. The searches of the following databases were conducted in August 2019: PubMed/Medline, Embase, Cumulative Index to Nursing and Allied Health Literature (CINAHL), and Cochrane Library.

All searches for all databases were filtered for English-only articles published in the past 10 years. Embase had the following additional filters applied: Embase-only articles, articles in press, and review-only articles. After a review of the keywords, controlled language terms were chosen to retrieve search results more tightly focused on the topic being searched.

Inter-rater reliability calibration

After mechanical and manual de-duplication of all articles resulting from the full electronic search, all investigators participated in a calibration session to determine inter-rater reliability. In this session, 14 abstracts were reviewed to gauge inter-rater reliability regarding the application of the inclusion and exclusion criteria. This session took place prior to the title, abstract and full-text review. The decision-making process for each of the articles reviewed was documented on a case-by-case basis, then summarized to identify major categories for the reasons to include or exclude an article. Through discussion, the investigators reached 100% agreement on the title/abstract review decisions.
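For illustration only, the kind of raw-agreement figure used in this calibration can be computed alongside Cohen's kappa, a chance-corrected companion statistic often reported for screening reliability. The reviewer decisions below are invented for the sketch; they are not data from this review.

```python
# Illustrative sketch: percent agreement and Cohen's kappa for two raters
# making binary include/exclude screening decisions. The decision lists
# below are invented; they are not data from this scoping review.

def percent_agreement(rater_a, rater_b):
    """Fraction of items on which the two raters made the same decision."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement for two raters and binary decisions."""
    n = len(rater_a)
    p_o = percent_agreement(rater_a, rater_b)   # observed agreement
    pa = rater_a.count("include") / n           # rater A's include rate
    pb = rater_b.count("include") / n           # rater B's include rate
    p_e = pa * pb + (1 - pa) * (1 - pb)         # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

# A hypothetical 14-abstract calibration set with two disagreements:
a = ["include"] * 4 + ["exclude"] * 10
b = ["include"] * 3 + ["exclude"] + ["include"] + ["exclude"] * 9
```

With these invented decisions, raw agreement is 12/14 (about 85.7%) while kappa is 0.65; raw percent agreement, as used in this review, is the simpler and more common calibration metric.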

After calibration, each investigator reviewed two-thirds of the titles and abstracts, with two investigators randomly assigned to each article. If the two assigned investigators agreed to either include or exclude an article, that action was immediately taken. If they disagreed, the third investigator served as a tiebreaker to determine final inclusion or exclusion of the article. Once consensus was achieved for every article, each investigator was randomly assigned 11 full-text articles for a thorough review.

As suggested by Arksey and O’Malley, data extracted from each source were charted and entered into a data-charting form in Microsoft Excel. Data were charted independently by each investigator, with confirmation by co-investigators when questions arose. Data charting focused on summarizing each publication’s process and methodology, predictors and outcomes. For each source included in the scoping review, the following variables were recorded: authors, year of publication, study location (to ensure United States or Canada), health professions studied, study design utilized, standardized admissions test evaluated, outcome measures or indicators of success and means of measuring outcomes, statistical test, results and significance level, and the publication’s main conclusions.

Results

A total of 323 articles underwent title and abstract review by two authors to determine inclusion for full article review. Of the 323 articles reviewed, the two authors agreed on 305 (94.4%) of the articles, with 18 (5.6%) articles requiring a title and abstract review by the third author. Of the 18 titles and abstracts that underwent a review by the third author, five were accepted for the full article review, resulting in a total of 33 articles accepted for full article review and 290 rejected articles. The primary reasons for rejection are summarized in Table 4. After the full article review, 18 additional articles were excluded for the reasons summarized in Table 5. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flowchart20 for this review can be found in Figure 1.
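As a quick consistency check, the screening-flow counts stated above can be verified arithmetically; the sketch below introduces no new data, only the figures already reported.

```python
# Arithmetic check of the screening-flow counts reported in the text.
total_screened = 323                  # titles/abstracts reviewed by two authors
agreed = 305                          # articles on which both reviewers agreed
tiebreaks = total_screened - agreed   # sent to the third author: 18

accepted_full_text = 33               # advanced to full-text review
rejected_at_screen = total_screened - accepted_full_text   # 290

excluded_full_text = 18               # removed after full-text review
final_included = accepted_full_text - excluded_full_text   # 15

agreement_pct = round(100 * agreed / total_screened, 1)    # 94.4
```

Each derived value matches the counts reported in the text and in the PRISMA flowchart (Figure 1).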

The final scoping review was completed for a total of 15 papers (Table 6). The six different health professions represented in the 15 papers included in the scoping review were pharmacy (5 papers), dental medicine (3 papers), veterinary medicine (2 papers), physical therapy (1 paper), allopathic medicine (3 papers), and allopathic medicine combined with PhD (1 paper).

Four different standardized tests were represented in the papers included in the scoping review: GRE (four articles), Pharmacy College Admission Test (PCAT – five articles), Dental Admission Test (DAT – two articles), and MCAT (four articles).

The majority of papers included in the scoping review assessed more than one primary outcome variable. Eleven of the papers included some assessment of the association between standardized tests and gGPA, including the ability to predict the gGPA at different points in the program, such as at the time of graduation, in the first-year curriculum, multiple years in the program, or for specific courses such as basic science courses or clinical evaluations. Eleven of the papers evaluated the association between standardized tests and performance on board examinations. Three papers evaluated the predictive value of standardized exams on residency success. One paper evaluated the predictive value of the MCAT on gGPA, time to defend PhD, board scores, publication number and career outcomes.

Ten of the papers reported positive findings. The DAT, GRE, MCAT and PCAT were all found to be predictive of board examination results.21-26 The GRE, DAT and PCAT were found to be positive predictors of gGPA.25,27-29

Five of the papers reported somewhat mixed results. For example, it was found that the PCAT was predictive of gGPA; however, PCAT scores were inadequate when used alone.30 PCAT scores were also found to be less strongly predictive of pharmacy program GPA than the uGPA.31 One paper found that PCAT scores were predictive of pharmacy program GPA but not predictive of board examination results.28 Similarly, another paper found that the DAT was less predictive than the uGPA for the gGPA.27 One paper found that the GRE indirectly predicted board scores via the gGPA.32 Thus, in essence, the gGPA was more predictive of board scores than the GRE.

Three papers did not find an association between a standardized test and the outcome of interest. In two studies, the MCAT did not predict board examination results or gGPA.33,34 In one study, the GRE did not predict specialty board results.35

Several limitations are noted for this scoping review. The review did not yield any optometry-specific literature using the databases that the investigators chose; thus, optometry-specific literature was not included in the scoping review. Additionally, the review focused only on graduate, doctoral-level health professions in the United States and Canada. Research on non-doctoral-level professions, as well as research in other countries, may have provided additional insight into this research question. The authors decided not to include literature about nursing programs due to the wide range of programs and varying pathways to degree attainment within the profession, which differ greatly from the traditional academic trajectory of an optometry student. Including studies from doctoral nursing programs may add additional insight into the question at hand. The time frame for inclusion in the scoping review was limited, and publications before and after the period of the review may provide additional information. The scoping review used qualitative techniques for interpreting the data and did not employ quantitative methods. Additional research on this topic might have been found by including published abstracts, conference proceedings or other sources of gray literature. Despite these limitations, this scoping review seeks to shed light on a topic of great interest for the profession of optometry, particularly due to the paucity of current research on this topic as it relates directly to optometric education.

Discussion

Standardized tests have been used to assess academic aptitude in order to determine student preparedness for higher education. Initially designed to minimize potential for bias and increase accuracy in assessing students’ intellectual ability, standardized tests have come under increasing scrutiny due to possible bias against low-income, minority and female students. In addition to this drawback, many have questioned the true ability of standardized exams to predict student success, particularly in under-represented populations. The purpose of this study was to conduct a scoping review of the ability of standardized tests to predict success in doctoral health professions programs in order to shed light on the role of these exams for admission into optometry school.

The results of the scoping review suggest that health professions programs are invested in evaluating the predictive value of standardized testing as a tool to be utilized in the admissions process. Most of the publications included in this scoping review assessed the relationship between standardized testing and academic achievement within a specific health professions program. Few publications carried the assessments further into correlations with ultimate success in clinical practice. The publications included in this scoping review demonstrate disagreement about the value of standardized exams in predicting success in doctoral-level health professions education.

The limited number of articles included in this scoping review suggests that there is not an abundance of solid evidence to support the value of standardized testing for admissions decision-making in the health professions. There certainly appears to be some evidence of the value of standardized exams to predict academic success, but the magnitude of the potential benefits of testing has not been compared to the potential costs of limiting access to, and perpetuating bias against, under-represented student groups. Because the predictive value of standardized exams cannot be compared for applicants who did not matriculate into a health professions program, there is no means of knowing how many of those candidates would have been successful in the programs they were denied entrance into based on their exam scores. There is also no way to measure the ultimate impact on the pipeline of healthcare providers and the public who would have been served by the individuals who were denied entrance into the various programs.

Since the first administration of the OCAT in 1972 (which was renamed the Optometry Admission Test in 1987), few studies have explored the ability of the exam to predict success in optometry school.5,11-18 Of the eight studies that examined the predictive value of the OAT, four used first- and second-year optometry GPA as the main outcome measures.5,12,14,17 The other four examined additional outcome measures, including class rank at graduation, clinic performance, cumulative 4-year GPA, and National Board of Examiners in Optometry (NBEO) Part 1 performance.10,11,13,16,18 Three of these studies were published in the year 2000 or earlier, which makes extrapolating the results to today’s optometry students extremely problematic, as substantial changes have been made to both the OAT and individual program curricula since the studies were conducted. These circumstances highlight the necessity for updated research on this topic so that optometry admissions committees do not continue to rely on outdated information to guide their admissions policies. Until such studies come to fruition, it behooves optometric institutions to look at what is currently known in other doctoral health professions about the role of standardized exams in predicting academic success beyond first- and second-year gGPA. This scoping review aims to achieve exactly that, and its findings can be utilized by optometric institutions until new optometry-specific research comes to light.

Of particular interest to the authors of this study is the value of standardized examinations in admissions decisions for schools and colleges of optometry. While optometry programs continue to review the potential benefits and limitations of the OAT, until very recently all 23 schools of optometry in the United States required a graduate entrance exam as part of their admissions criteria. Historically, all 23 programs required the OAT, but in recent years many optometry schools have started to accept other graduate entrance exams in lieu of the OAT, including the GRE, DAT, PCAT and MCAT.10 While several studies suggest a positive predictive value of the OAT for academic success in optometry school,5,11-18,36 the shift away from the OAT by many optometry schools leads one to ask what predictive power the exam truly has for ultimate student success, and whether requiring an entrance exam negatively affects the diversity of the student body across all 23 programs. Furthermore, given the fact that many optometry schools are now accepting the GRE, DAT, PCAT and MCAT in lieu of the OAT, the results of this scoping review provide timely and valuable information for optometric institutions by looking at their predictive value for success in other doctoral health professions where the exams have been utilized for far longer.

There is an increasing move away from requiring standardized examinations across many disciplines, including the health professions, for several reasons. If the recent decision by Indiana University School of Optometry (IUSO) to move toward test-optional admissions requirements is any indication, the profession of optometry may start to follow suit. One of IUSO’s motives for the pivotal change is its finding that “test scores are becoming a weaker predictor of future academic success,”37,38 which is at least partially consistent with the results of this scoping review. In light of this, the glaring lack of optometry representation in the literature search on this topic is concerning, and it needs to be addressed with future research so that optometry admissions committees have sound, current data on which to base admissions decisions that ensure student success without affecting the diversity of their graduates.

Conclusion

This scoping review demonstrates the limited body of research on a critical topic and highlights the need for further exploration to fully understand the complexities of the value of standardized testing as part of the admissions process for doctoral health professions programs, particularly for the profession of optometry. The paucity of studies on this topic, particularly in regard to optometry, is troubling due to the ever-increasing demand for qualified and diverse healthcare providers. Understanding the framework of doctoral health professions education and the lessons that have been learned by other doctoral health professions will help guide decision-making by optometry admissions committees until new and relevant optometry-specific research is published. The current studies that exist on this topic suggest conflicting data about the ability of standardized exams to predict student success in doctoral health professions education. This finding makes it even more critical to invest in research on this topic as standardized exams can heavily disadvantage certain student populations, thus negatively impacting the profession of optometry by decreasing the diversity of the healthcare provider force.

Of particular interest to the authors is the ability of standardized exams to predict student success in optometry school and future clinical practice. There is a dire need for longitudinal research comparing students admitted with and without the OAT and their success in school and in future optometric careers. Retrospective analyses of longer-term outcomes, such as on-time licensure, NBEO success on first attempt, residency placement or measures of “practice success” linked back to pre-matriculation variables, including standardized test scores, would shed more light on the value of standardized exams in the optometry admissions process. Furthermore, because many institutions now accept the GRE, DAT, MCAT and PCAT in lieu of the OAT for optometry school admissions, it is imperative that similar research be conducted on the validity of those exams in predicting short- and long-term success outcomes for optometry students. Until such research is realized, the current scoping review provides vital information about the utility of these exams in predicting success in other doctoral health professions. Prospective comparison, within and between programs, of test-optional or test-agnostic admissions policies vs. traditional OAT requirements would yield tremendous information that admissions committees could utilize when selecting students for their programs. Additionally, case studies that further elaborate on perceived barriers and biases associated with the OAT are necessary to complete the picture of why this research needs to be conducted in the first place.

Acknowledgments

  1. Rudy R. Barreras, Assessment & Public Relations Librarian, Western University of Health Sciences
  2. Angela Lee, Associate Professor, Health Science Librarian, Pacific University

References

  1. Dotson L, Foley V. Common core, socioeconomic status, and middle level student achievement: implications for teacher preparation programs in higher education. Journal of Education and Learning. 2017;6(4):294-302.
  2. Nettles MT. History of testing in the United States: higher education. The ANNALS of the American Academy of Political and Social Science. 2019;683(1):38-55.
  3. Syverson S. The role of standardized tests in college admissions: test-optional admissions. New Directions for Student Services. 2007;118:55-70.
  4. Miller C, Stassun K. A test that fails. Nature. 2014;510:303-304.
  5. Wallace WL, Levine NR. Optometry college admission test. Am J Optom Physiol Opt. 1974 Nov;51(11):872-886.
  6. Moneta-Koehler L, Brown AM, Petrie KA, Evans BJ, Chalkley R. The limitations of the GRE in predicting success in biomedical graduate school. PLoS One. 2017 Jan 11;12(1):e0166742.
  7. Nankervis B. Gender inequities in university admission due to differential validity of the SAT. Journal of College Admission. 2011;213:24-30.
  8. Wilson MA, Odem MA, Walters T, DePass AL, Bean AJ. A model for holistic review in graduate admissions that decouples the GRE from race, ethnicity, and gender. CBE Life Sci Educ. 2019 Mar;18(1):ar7.
  9. Kuncel NR, Hezlett SA. Assessment. Standardized tests predict graduate students’ success. Science. 2007 Feb 23;315(5815):1080-1.
  10. Ooley C, Jakirlic N, Hoppe E. Review of standardized testing in doctoral health professions admission requirements. Optometric Education. 2021 Fall;47(1).
  11. Trick LR, Davis SL, Zipprich A. A nationwide evaluation of the academic performance of male and female optometry students. J Am Optom Assoc. 1990 Aug;61(8):609-12.
  12. Corliss DA. Statistical tools for predicting academic performance in optometry school. Optometric Education. Winter 1991;16(2):41-48.
  13. Wingert TA, Davidson DW, Davis S. Predictors of performance in optometry school – a study at the University of Missouri-St. Louis. Optometric Education. Fall 1993;19(1):18-21.
  14. Kramer GA, Johnston J. Validity of the Optometry Admission Test in predicting performance in schools and colleges of optometry. Optometric Education. Winter 1997;22(2):53-59.
  15. Spafford MM. Primary and secondary selection tools in an optometry admission process. Optometric Education. Summer 2000;25(4):116-121.
  16. Bailey JE, Yackle KA, Yuen MT, Voorhees LI. Preoptometry and optometry school grade point average and optometry admissions test scores as predictors of performance on the national board of examiners in optometry part I (basic science) examination. Optom Vis Sci. 2000 Apr;77(4):188-93.
  17. Goodwin D, Ricks JA, Fish BB, Kelsey NJ, Good AD, Remington LA, Bodner TE. Predicting academic success in optometry school. Optometric Education. Spring 2007;32(3):85-90.
  18. Buckingham RS, Bush SR. Predictors of academic success for students at the Michigan College of Optometry. Optometric Education. Summer 2013;38(3):92-99.
  19. Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19-32.
  20. Moher D, Liberati A, Tetzlaff J, Altman DG; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009 Jul 21;6(7):e1000097.
  21. Behar-Horenstein LS, Garvan CW, Bowman BJ, Bulosan M, Hancock S, Johnson M, Mutlu B. Cognitive and learning styles as predictors of success on the National Board Dental Examination. J Dent Educ. 2011 Apr;75(4):534-43.
  22. Cameron AJ, MacKeigan LD, Mitsakakis N, Pugsley JA. Multiple mini-interview predictive validity for performance on a pharmacy licensing examination. Med Educ. 2017 Apr;51(4):379-389.
  23. Zuckerman SL, Kelly PD, Dewan MC, et al. Predicting resident performance from preresidency factors: a systematic review and applicability to neurosurgical training. World Neurosurg. 2018 Feb;110:475-484.e10.
  24. Raman T, Alrabaa RG, Sood A, Maloof P, Benevenia J, Berberian W. Does residency selection criteria predict performance in orthopaedic surgery residency? Clin Orthop Relat Res. 2016 Apr;474(4):908-14.
  25. Fuentealba C, Hecker KG, Nelson PD, Tegzes JH, Waldhalm SJ. Relationships between admissions requirements and pre-clinical and clinical performance in a distributed veterinary curriculum. J Vet Med Educ. 2011 Spring;38(1):52-9.
  26. Hollman JH, Rindflesch AB, Youdas JW, Krause DA, Hellyer NJ, Kinlaw D. Retrospective analysis of the behavioral interview and other preadmission variables to predict licensure examination outcomes in physical therapy. J Allied Health. 2008 Summer;37(2):97-104.
  27. Ballard RW, Hagan JL, Cheramie T. Relationship between hand-skill exercises and other admissions criteria and students’ performance in dental school. J Dent Educ. 2015 May;79(5):557-62.
  28. Eiland LS, Gaillard PR, Fan S, Jungnickel PW. Differences in predictors of academic success using multi and individual year student admissions data. Pharmacy Education, 2018;18(1):255-8.
  29. Meagher DG, Pan T, Perez CD. Predicting performance in the first-year of pharmacy school. Am J Pharm Educ. 2011 Jun 10;75(5):81.
  30. Ferrante AB, Lambert J, Leggas M, Black EP. Predicting Student Success Using In-Program Monitoring. Am J Pharm Educ. 2017 Aug;81(6):111.
  31. Myers TL, DeHart RM, Vuk J, Bursac Z. Prior degree status of student pharmacists: is there an association with first-year pharmacy school academic performance? Curr Pharm Teach Learn. 2013 Sept-Oct;5(5):490-93.
  32. Danielson JA, Wu TF, Molgaard LK, Priest VA. Relationships among common measures of student performance and scores on the North American Veterinary Licensing Examination. JAVMA. 2011;238(4):454-61.
  33. Barber C, Hammond R, Gula L, Tithecott G, Chahine S. In search of black swans: identifying students at risk of failing licensing examinations. Acad Med. 2018 Mar;93(3):478-485.
  34. Bills JL, VanHouten J, Grundy MM, Chalkley R, Dermody TS. Validity of the Medical College Admission Test for predicting MD-PhD student outcomes. Adv Health Sci Educ Theory Pract. 2016 Mar;21(1):33-49.
  35. Grillo AC, Ghoneima AAM, Garetto LP, Bhamidipalli SS, Stewart KT. Predictors of orthodontic residency performance: An assessment of scholastic and demographic selection parameters. Angle Orthod. 2019 May;89(3):488-494.
  36. OAT Validity Study 2016-2018 Data [Internet]. Chicago, IL: Association of Schools and Colleges of Optometry; c2020 [cited 2021 Sept 30]. Available from: https://www.ada.org/~/media/OAT/Files/oat_validity_study.pdf?la=en.
  37. Bonanno JA. IU School of Optometry Moving to Admission Test-Optional Approach [Internet]. Bloomington, IN: Indiana University School of Optometry; July 6, 2020 [cited 2021 Sept 30]. Available from: https://blogs.iu.edu/iusonews/2020/07/06/iu-school-of-optometry-moving-to-admission-test-optional/.
  38. How to Apply to the Doctor of Optometry Program [Internet]. Bloomington, IN: Indiana University School of Optometry [cited 2021 Jan 23]. Available from: https://optometry.iu.edu/admissions/apply/doctor-of-optometry/index.html.

Dr. Jakirlic [njakirlic@westernu.edu] is an Assistant Professor at Western University of Health Sciences College of Optometry. She teaches various topics related to ocular disease and sees patients in clinic.

Dr. Ooley is an Associate Professor at Pacific University College of Optometry. She mainly teaches systemic disease and anatomy, and she sees patients in clinic.

Dr. Hoppe is the Founding Dean of the College of Optometry at Western University of Health Sciences. She teaches evidence-based decision-making.