
Optometric Education

The Journal of the Association of Schools and Colleges of Optometry

Optometric Education: Volume 42 Number 2 (Winter-Spring 2017)

Competency-based Assessment of Refractive Error Measurement in a Mozambique Optometry School

Kajal Shah, PhD, Kovin Naidoo, PhD, OD, Luigi Bilotto, OD, and James Loughman, PhD

 

Abstract

Background: The aims of this study were to develop a process for assessing refractive error management competence among the first two cohorts of students in a new optometry program at UniLúrio in Mozambique and to understand the effectiveness of implementing the process in the context of a low resource environment. Methods: The assessment methods were developed using information from a literature review and a focus group discussion and implemented with 15 students. Results: The exams consisted of direct observation of two patients, short-answer questions and a structured oral examination. Conclusion: The use of existing checklists and rating scales helped to identify areas of competence deficit. Areas for further development of the assessment process include increased assessor training and guidelines for patient standardization.

Key Words: competency assessment, refraction, optometry students, Mozambique

 

Introduction

The Mozambique Eyecare Project is a higher education partnership between the Dublin Institute of Technology (DIT), the Brien Holden Vision Institute (BHVI), the University of Ulster and Universidade Lúrio (UniLúrio), Nampula, for the development, implementation and evaluation of a model of optometry training at UniLúrio in Mozambique. The four-year optometry program was based on a curriculum developed by BHVI, with competencies drawn from the global competency-based model of the World Council of Optometry (WCO) and the Association of Regulatory Boards of Optometry (ARBO). The model allows for objective comparisons of scope of practice between countries. The global competency-based model provides a vertical career ladder for individuals seeking to expand their scope of clinical practice and includes four categories of clinical care: optical technology, visual function, ocular diagnostic and ocular therapeutic. Each category requires a set of competencies that includes those of the previous category.1 The minimum required for individuals to call themselves an optometrist is demonstrated competence in dispensing, refracting, prescribing and the detection of disease/abnormality.1 In Mozambique the exact scope of practice of optometry is not defined, but the curriculum enables at least the ocular diagnostic category to be met.

Competencies are seen as a framework for entry-level abilities in the profession of optometry in most countries. Students have to show by some means of assessment (a specific examination or some form of continuing assessment program) that they are competent in the areas listed.2 The definition of competence provided by the General Optical Council (GOC) in the United Kingdom (UK) is: “Competence has been defined as the ability to perform the responsibilities required of professionals to the standards necessary for safe and effective practice. A competency will be a combination of the specification and application of a knowledge or skill within the occupation, to the appropriate standard.”3

Literature on methods of assessing clinical competency has existed in medicine for many years; however, little published research exists for optometry.4-7 In the UK, the GOC describes the required competencies in detail, but it does not specify the method of assessment. This is left to the respective training institutions and professional organizations responsible for assessment and certification.4

An ideal assessment tool would have to be reliable and valid.8 Reliability is a measure of the reproducibility or consistency of a test, and is affected by many factors such as examiner judgments (inter-rater, examiner experience), inter-case (student) reliability, inconsistency of patient performance, and reliability of rating scales.6 Validity refers to the ability of the assessment to measure what it is supposed to measure. No valid assessment methods that measure all facets of clinical competence have been designed.6 Other factors, including the feasibility of running and resourcing the examination, are also important in a developing country context.8

Miller’s pyramid conceptualizes the essential facets of the assessment of clinical competence in four levels: ‘knows’ (basic facts), ‘knows how’ (applied knowledge), ‘shows how’ and ‘does’.7 The two base levels represent the knowledge components of competence and are assessed with written tests of clinical knowledge such as multiple-choice questions, short-answer questions, essays and oral examinations. These remain popular in the training of optometry students in the UK and Europe and in the entry-level examinations to the profession in the United States (US).9,10 Direct observation of students in clinics, the use of standardized patients (SPs) and objective structured clinical examinations (OSCEs) are commonly used to test the ‘shows how’ component.10-12 The final assessment of pre-registration optometry students in the UK is in the form of an OSCE, wherein students rotate through a series of stations to demonstrate clinical skills applied in a range of contexts.11 However, little literature exists on the assessment of exit-level competencies within an optometry program, which is the context in Mozambique, as opposed to entry level into the profession, even though the assessment strategies can be similar.

Uncorrected refractive error has been identified as a major cause of visual impairment in Mozambique.13 The only providers of refraction services within the national health system in Mozambique are ophthalmic technicians. However, a previous assessment of their refraction skills showed they needed upskilling to become competent at refraction.14 We did not set out to assess dispensing and contact lens fitting, both of which are part of the competency skill set of an optometrist.1 There were various reasons for this. For dispensing, the spectacle supply system at the university had not been established when the students graduated; therefore, their exposure to dispensing was restricted. Once students graduate and start working within the national health system, their access to contact lenses is limited outside the larger central and provincial hospitals. Hence, refractive error measurement was deemed the most important current responsibility of the Mozambican optometrist. For this paper, refractive error management includes the clinical judgment related to the patient’s age, symptoms, the accuracy of the subjective or objective refractive result, binocular vision status and disease.15 Moreover, there is little or no supervision of students once they have graduated. In the absence of alternative refractive care provision, emphasis had to be placed on ensuring they were competent in their refraction routine.

The aim of this study was two-fold: 1) to report on the development of a process for assessing refractive error management competence that is practical to implement and keeps staffing and resourcing costs at sustainable levels within the context of limited academic resources, and 2) to understand the effectiveness of implementation of the process in the context of a low resource environment, in terms of its reliability and validity.

Competence Assessment Development and Implementation

This article describes two components: 1) the development of the competency assessment methods and process, and 2) the implementation of the assessment process. The evaluative elements of this work were conducted according to the tenets of the Declaration of Helsinki and approved by the research ethics committee at the Dublin Institute of Technology.

Assessment Development: Methods

Information was gathered from a literature review of assessment methods in medicine6-8,16,17 and high-stakes optometry exams,9-11,18 the latter being the only literature available for optometry.

A focus group discussion was conducted with two lecturers from UniLúrio responsible for the clinics of the first cohort of students and three of the program developers, selected on the basis of their clinical and academic expertise. The investigator, acting as facilitator of the focus group, asked the participants to read and sign a consent form. The members of the group, two each from South Africa and Colombia and one from Canada, had an average of 16 years of clinical experience and an average of nine years of teaching experience in international undergraduate optometry education, particularly in curriculum design, teaching, and developing and conducting assessments.

The investigator informed the participants about the objective of the focus group. The primary intention was to develop the assessment methods for evaluating the competencies of the optometry students, concentrating on refraction, before they graduated. Using a grounded theory approach, qualitative data were captured on assessment methods that would be feasible given the challenges facing a new program in a low academic resource context, and on how those methods should be evaluated.19 The participants were asked how best to evaluate the optometry students’ refraction competencies against the standard necessary for entry into the profession in Mozambique. The discussion was recorded by the investigator, then read, coded, categorized and analyzed thematically. To improve the credibility of the data, member checking was used,20 whereby the data were presented to the focus group members to confirm the credibility of the themes and whether the overall account was realistic and accurate.

Assessment Development: Results

The key themes extracted from the focus group, which informed the development of the assessment methods, included: 1) exclusion of OSCEs, 2) practical assessment by direct observation, 3) theory exams, and 4) qualitative observations of the competency assessment process.

Exclusion of OSCEs

The existing literature on different assessment procedures suitable for use in medicine6-8,16,17 and optometry9-11,18 was discussed. The two most commonly cited methods of assessment of clinical competencies, identified from the literature review, are the direct observation of students performing these clinical skills and objective structured clinical examinations (OSCEs). However, there is little published literature on the use of OSCEs in Africa. In a review of the economic feasibility of OSCEs in undergraduate medical studies, only 17 of the 1,075 publications were from Africa.21 A study comparing six assessment methods for their ability to assess medical students’ performance and their ease of adoption with regard to cost, suitability and safety in South Africa revealed OSCEs to be the most costly.22 Hence, OSCEs and the use of standardized patients were ruled out due to lack of academic resources and examiners.

Recommendations were made on the most suitable methods for competency assessment in Mozambique, taking into account that integrating disease and binocular status with the refractive result is necessary for prescribing a refractive correction.15 The competencies would be assessed as follows.

Practical assessment by direct observation

This had to be constructed to maximize validity and reliability against the time and cost of running and resourcing the exams. Students undertook an eye examination of two real patients, a presbyope and a pre-presbyope, under the observation of two examiners per patient. Clinical performance was assessed on communication, history and symptoms, vision and visual acuity (with pinhole if necessary), pupillary distance, assessment of pupil responses, cover test, ocular motility, near point of convergence, externals, retinoscopy, best sphere, cross-cylinder refraction, binocular balance and near vision, final prescription, ophthalmoscopy, advice, recording, management and time-keeping. (Appendix A)

The WCO global competency model would be used as the framework for the assessment, with the assessment method mapped to the elements of competencies and performance criteria and the level of difficulty expected to be mastered by the student specified, to enhance content validity.

Direct ophthalmoscopy and an external exam using a slit lamp were also included because the presence or absence of pathology would account for the level of best-corrected visual acuity and help in the management of the patient. A pass-fail cut-off score of 75%, as stipulated by the university and backed by the literature, was endorsed by the participants of the focus group discussion.10 The skills were weighted according to their importance for safe, effective clinical practice, based on the literature and the clinical assessment experience of the focus group participants.10 The weightings and number of checklist items for every skill are reflected in Table 1, with the weightings summing to an overall score of 100%.
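
To make the weighted scoring scheme concrete, the minimal Python sketch below combines per-skill checklist scores into an overall mark and applies the 75% pass-fail cut-off. The skill names and weights shown are hypothetical placeholders; the actual weightings are those listed in Table 1.

```python
# Minimal sketch of the weighted practical scoring scheme.
# Skill names and weights are hypothetical placeholders; the actual
# weightings are those in Table 1 and sum to 100%.
WEIGHTS = {
    "history_and_symptoms": 10,
    "retinoscopy": 25,
    "subjective_refraction": 25,
    "binocular_balance": 10,
    "ophthalmoscopy": 15,
    "management_and_advice": 15,
}

PASS_MARK = 75.0  # pass-fail cut-off stipulated by the university


def overall_score(skill_scores: dict[str, float]) -> float:
    """Combine per-skill scores (each 0-100) into a weighted overall mark."""
    total_weight = sum(WEIGHTS.values())
    return sum(WEIGHTS[s] * skill_scores[s] for s in WEIGHTS) / total_weight


def passed(skill_scores: dict[str, float]) -> bool:
    return overall_score(skill_scores) >= PASS_MARK


# Example: a student scoring 80% on every skill passes (80 >= 75).
example = {skill: 80.0 for skill in WEIGHTS}
print(overall_score(example), passed(example))
```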

The time allowed was 50 minutes. If the examiner considered the examination difficult (due to a complex refraction, low vision, pathology, or a patient who was illiterate or unable to communicate in Portuguese), an additional 15 minutes could be allowed. Examiners were to take the difficulty of the patient into account when marking the student.

Theory exams

To cover the background knowledge required for the competent practice of refraction, two theory exams would be set: short-answer questions and a structured oral exam. Both exams would be double-marked using checklists. The overall pass mark for the theory component was set at 50%, as stipulated by the university, backed by the literature,5 and agreed upon by the focus group, with each section contributing equal weight.

  1. Short-answer questions (SAQ) (one hour): This consisted of six case slides. Five of the patient cases had a color photograph of an ocular condition, and one comprised a binocular vision scenario in which the patient history and clinical data were presented. The student was examined on recognition (signs and symptoms), judgment (differential diagnosis and extra tests necessary), refraction management and decision-making skills (e.g., referral, low vision appliances) for the five cases with a photograph, and on a diagnosis and treatment plan for the binocular vision case. The cases were standardized in terms of content (the elements of competencies and performance criteria assessed) and difficulty for both cohorts, taking into account the depth of coverage of a topic expected in the students’ answers and the amount of time required to answer a question to the appropriate standard. Model answers were prepared ranking the importance of the different components using guidance from best-practice tools in optometry, and graded using a checklist.10,23
  2. Structured oral exam (30 minutes): This consisted of an oral exam of three case studies from the students’ portfolios: one low vision, one binocular vision and one pathology patient, and the management of their refractive error. A checklist with a set of questions was used to elicit the students’ knowledge and rationale in the management of the topic under examination, as well as their ability to communicate this knowledge. The checklist included the competencies to be assessed and was adapted from checklists used in optometry registration exams in the UK.23

For both theory exams, each question/case was first marked independently out of 10 by two examiners and then averaged to give a final score. Students who passed both the theory and the clinical exam were deemed competent to refract.
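
As a minimal illustration of this decision rule, the following sketch averages the two examiners' marks per theory case and combines the theory and clinical results into the final competent-to-refract decision. The marks are hypothetical, not the study's actual data.

```python
# Sketch of the double-marking and final decision rule.
# Each theory case is marked out of 10 independently by two examiners.
def theory_percentage(examiner_a: list[float], examiner_b: list[float]) -> float:
    """Average the two examiners' marks per case, then express as a percentage."""
    case_scores = [(a + b) / 2 for a, b in zip(examiner_a, examiner_b)]
    return 100.0 * sum(case_scores) / (10.0 * len(case_scores))


def competent_to_refract(theory_pct: float, clinical_pct: float) -> bool:
    # Pass marks as set for this assessment: 50% theory, 75% clinical.
    return theory_pct >= 50.0 and clinical_pct >= 75.0


# Hypothetical example: six SAQ cases marked by two examiners.
a = [7, 6, 8, 5, 7, 6]
b = [6, 6, 7, 5, 8, 7]
print(competent_to_refract(theory_percentage(a, b), clinical_pct=78.0))  # True
```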

Qualitative observations of the competency assessment process

Qualitative observations of the competency assessment process were made by the examiners. These were used to provide the university and the faculty with contextual information regarding the results and to identify factors affecting student performance that the quantitative assessment results alone would not reveal. Feedback would be provided to the faculty, enabling them to understand the results of the students’ clinical assessments, learn from them and improve teaching as a consequence.

Overall, the methodology should be appropriate to provide an assessment of optometry students’ refraction knowledge, skills, behaviors, attitudes and values, undertaken in a clinical context of a complete eye examination. This would be a low-stakes assessment with the students’ performance not affecting their overall university end-of-year result. Before the clinical assessments were carried out, all the students had a portfolio that documented their refraction competencies including retinoscopy, sphero-cylindrical refraction and binocular balance tests. The students were eligible for the final examination when they had: a) been signed off on the relevant competencies in their portfolio, and b) successfully completed multiple-choice questions in the five courses (clinical optometry, low vision, binocular vision, optometry and clinical medicine and occupational optometry) in their seventh (penultimate) semester.

Assessment Implementation: Methods

Subjects

All 15 students (nine from the first intake, cohort A in 2012, and six from the second, cohort B in 2013) who had progressed to the final semester in their fourth year were invited to participate in the study. The students read and signed a consent form for their inclusion in the study, and confidentiality of the results was maintained throughout.

Equipment

The research equipment used in the study comprised:

  • visual acuity chart (3-meter phoropter chart with duochrome and cross-cylinder targets)
  • streak retinoscope
  • trial lens set and frames / phoropter
  • cross cylinders +/-0.25D and +/-0.50D
  • +/-0.25DS and +/-0.50DS flippers
  • torchlight
  • cover stick
  • slit lamp
  • ophthalmoscope

Data analysis

Data were entered into an SPSS database (version 21) and analyzed for inter-rater agreement. Consistency between the two examiners’ ratings of the students was analyzed with Cohen’s kappa statistic. Descriptive statistics were produced for the clinical competency assessments, and the difference in performance between the first and second cohorts was analyzed using a Mann-Whitney U test. A significance level of p < 0.05 was adopted throughout the analysis.
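
For readers who wish to reproduce this style of analysis, a minimal sketch follows using Python's SciPy and scikit-learn libraries on hypothetical ratings; the study itself used SPSS version 21, and the data below are illustrative only.

```python
# Illustrative re-creation of the analysis in Python (the study used SPSS 21).
# All ratings and scores below are hypothetical.
from scipy.stats import mannwhitneyu
from sklearn.metrics import cohen_kappa_score

# Inter-rater agreement between the two examiners (1 = pass, 0 = fail).
examiner_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1]
examiner_2 = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1]
kappa = cohen_kappa_score(examiner_1, examiner_2)
print(f"Cohen's kappa: {kappa:.2f}")  # > 0.6 indicates good agreement (Landis & Koch)

# Inter-cohort comparison of a skill score with a Mann-Whitney U test.
cohort_a = [72, 68, 80, 75, 61, 70, 66, 74, 69]   # nine students
cohort_b = [78, 82, 71, 85, 76, 80]               # six students
u_stat, p_value = mannwhitneyu(cohort_a, cohort_b, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")  # significant if p < 0.05
```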

Refractive error analysis

Based on the literature on the repeatability and reproducibility of refractive measurements, a tolerance of +/-0.75D in sphere and cylinder was set as the limit of acceptability for retinoscopy and subjective refraction.24
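
Stated as a simple rule, a student's result was acceptable when both the sphere and cylinder components fell within 0.75D of the reference value. A one-function sketch with hypothetical values:

```python
# Acceptability check for a refraction result against a reference
# (examiner) value: both sphere and cylinder must agree within +/-0.75D.
TOLERANCE = 0.75  # diopters


def within_tolerance(student_sph: float, student_cyl: float,
                     ref_sph: float, ref_cyl: float) -> bool:
    return (abs(student_sph - ref_sph) <= TOLERANCE
            and abs(student_cyl - ref_cyl) <= TOLERANCE)


# Hypothetical example: student -2.25/-0.50 vs. reference -2.00/-0.75.
print(within_tolerance(-2.25, -0.50, -2.00, -0.75))  # True: acceptable
```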

Examiners

The selection criteria for the external examiner were clinical and academic optometry experience, the ability to communicate in Portuguese, familiarity with the health context and availability for placement in Mozambique. The researcher, with 14 years of clinical and public health experience in optometry and four years of experience in the training and evaluation of pre-registration optometry students in the UK, met the criteria to carry out the evaluations.

Four of the UniLúrio lecturers, two for each cohort, were recruited as internal optometrist examiners. Two were from Colombia and two from Spain. Two had completed post-graduate studies, one in Spain and one in the UK. The internal examiners had an average of 10 years of clinical experience and four years of teaching experience.

All examiners had knowledge of the methods used and were provided with training by the program developers on the use of the standardized checklists, along with the performance criteria and competency standards necessary for the students to exhibit entry-level competency in refraction at graduation. Two of the internal examiners assessed the practical competency, and two assessed the theory exam consisting of the SAQs and the oral exam (one for each cohort), alongside the external examiner.

Assessment Implementation: Results

Clinical competency assessment

Thirty patients (mean age 37.6 years; standard deviation 18.4 years; age range 7 to 72 years; 16 male [53%] and 14 female [47%]) were examined by the nine students from the first cohort and the six students from the second cohort.

Fourteen patients had low refractive error (sphere within +/-0.75D), and seven had best-corrected decimal visual acuity <0.4. Refraction results from the two graders were averaged. The inter-rater kappa value was >0.6 for all skills, indicating a good strength of agreement between the two raters.25 The only significant inter-cohort difference was in binocular balance and near visual acuity. Table 1 summarizes the mean marks with standard deviations for both cohorts for every technique, the inter-cohort differences and the total number of students passing every skill.

Theory exam

Table 2 shows the number of students passing the two sections of the theory exam. The inter-rater kappa for the theory exam was >0.6, indicating good agreement.

Qualitative observations of clinical assessment

The examiners noticed certain factors in play during the assessments. Eleven students did not carry out binocular balance tests. For both retinoscopy and subjective refraction, students gave unclear instructions and presented poor fixation targets. The students did not detect a retinoscopy reflex in any of the patients with high myopia, and they could not control the subjective responses of patients whose response patterns were poor. They spent too much time on history and symptoms, which left less time for refraction and other tests. Overall, 10 students did not relate patient symptoms to management.

Discussion

The aim of this study was to evaluate the design of a competency assessment process and to gain an understanding of its effectiveness for assessing clinical competency in refraction. Before the clinical assessments were carried out, all the students had a portfolio and had been ‘checked off’ for all the refraction competencies. However, the results of these assessments suggest that the portfolio served to record that procedures had been performed and to audit skills acquisition rather than to check quality or proficiency.

Overall, only four students passed the clinical competency assessment. As this was a low-stakes assessment, there could have been a lack of motivation on the part of the students to perform well. The qualitative observations identified some of the factors that led to students failing, and these were communicated to the lecturers in a feedback session. This input has enabled the faculty, who are isolated in a developing country context, to learn how to refine student training.

There are several factors that need to be considered in assessing the implications of this study: the lack of standardization of patients; the methodology of direct observation of real clinic patients; the use of SAQs and an oral exam; the increasing importance of using OSCEs; the setting of competency standards; and the training and recruitment of examiners. These are all discussed below.

Seven students saw patients with severe, untreated pathology and complex refractive errors. The mix of patients being tested and the complexity of skills being assessed can result in a lack of reliability. SPs are people trained to simulate real patients according to defined criteria so as to provide students with consistent and equivalent assessment experiences.26,27 However, the high costs of the training and expertise needed to ensure reproducibility and consistency of scenarios could not be justified in the context of student assessment in Mozambique.6 The recommendation is to integrate a degree of standardization into future student assessment. A focus group discussion by the faculty to set the criteria for standardization is proposed. The criteria could include patient age, range of refractive error (if complex, then every student should get a complex case), best-corrected visual acuity, past experience of optometric examination, absence or presence of pathology, and ability to communicate in Portuguese. This would enable faculty to select patients who meet defined criteria for competency assessments without incurring an increase in cost, and would help ensure that marks on the assessment correlate well with students’ performance over their entire program.
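
As a sketch of how such standardization criteria might be operationalized when selecting exam patients, the fragment below filters a hypothetical patient pool; the field names and thresholds are illustrative assumptions, not criteria set by the study.

```python
# Hypothetical sketch: filtering an exam-patient pool against
# standardization criteria like those proposed above. Field names
# and thresholds are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Patient:
    age: int
    sphere: float            # spherical refractive error (D)
    decimal_acuity: float    # best-corrected decimal visual acuity
    has_pathology: bool
    speaks_portuguese: bool


def eligible(p: Patient) -> bool:
    """Keep patients who give every student a comparable, uncomplicated case."""
    return (18 <= p.age <= 60
            and abs(p.sphere) <= 4.0
            and p.decimal_acuity >= 0.4
            and not p.has_pathology
            and p.speaks_portuguese)


pool = [
    Patient(34, -1.50, 1.0, False, True),
    Patient(70, -8.00, 0.3, True, True),   # excluded: age, acuity, pathology
]
print([p for p in pool if eligible(p)])
```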

The methodology of direct observation of real clinic patients is increasingly challenged on the grounds of authenticity and unreliability due to examiner and patient variance.6 Inter-rater reliability measures the consistency of rating of performance by different examiners.6 The use of two trained raters, for every practical and theoretical exam, with good inter-rater agreement (kappa greater than 0.6) helped to increase consistency.25 Providing the examiners with a standardized checklist increased the reliability of direct observation, and this has been shown to be as reliable as an OSCE.26 A ‘Hawthorne’ effect occurs when a student or practitioner behaves differently because they are being observed. This effect can have a positive impact on student performance;12 however, the effect is inevitable with any methodology involving direct observation.12

Students were familiar with the test formats employed for the theory exams. SAQs were designed to assess problem-solving and data-interpretation skills when faced with common clinical management problems. The oral exam was based on the students’ case records, and examined the knowledge, values and attitudes that informed the students’ management of the patients. The issue of reliability and validity in this study was addressed by using two trained raters with good inter-rater agreement and checklists for both exams. The exams were mapped to the elements of competencies and performance criteria and the level of difficulty expected to be mastered by the student specified.

As a potential solution to the concerns of reliability and validity of the other assessment methods, the OSCE has gained increasing importance in the assessment of clinical competency in medicine and optometry in the UK and US.10,11,28,29 Wide sampling of cases and structured assessment improve reliability, but the OSCE is expensive and labor-intensive.4,6 In Mozambique, due to the lack of SPs and expertise among the faculty to implement and grade OSCEs, they were not considered a feasible assessment method for a new program in a low resource environment. In addition, students were not familiar with the format of OSCEs. Direct expenses of an OSCE include the cost of training standardized patients, examiners, support staff, development of scoring tools and venue costs dependent on the number of stations. However, these costs can be reduced by the use of volunteer faculty, volunteer patients and students as raters.21 Further research is required on the cost of implementing the OSCE (materials, examiners and patients or patient simulators) and the reliability and validity offered compared with the other methods, specifically in a low resource environment.

In this study, the setting of competency standards was stipulated by the university, backed by a literature review and agreed upon by the focus group (75% clinical and 50% theory).5,10 Absolute standards that are criterion referenced are most appropriate for tests of competence.30 In this case, the exams for the two cohorts were not identical as they contained different patients and cases. Hence, percentage scores did not reflect the same level of knowledge. In the long run, a more systematic, transparent approach to standard-setting and pass-fail criteria, supported by a body of published research, needs to be adopted. This involves evaluating the content and difficulty of the examination.30 Standards should be consistent with the purpose of the test and based on expert judgment informed by data about examinee performance.30

The examiners were all experienced and competent optometrists. The use of multiple examiners has been shown to enhance reliability.6 The examiners were all given explicit criteria and training in the use of checklists, performance criteria and competency standards based on good practice.5 The ideal proposed for an exit assessment is a group of external assessors, accredited for suitability by a professional body of optometrists, trained at the required level and experienced in competency teaching and assessment.31 They should all be competent in the area they are to assess and familiar with the competency standards. The selection of examiners in Mozambique will evolve over time as more students graduate, a professional body is formed and accreditation to become an assessor is offered.

There are certain limitations to this study of assessment methodology. Our sample of 15 students was small but represented 100% of the final-year optometry students. The study concentrated only on refraction because the spectacle supply system at the university had not been established and access to contact lenses is limited. Intraocular pressures were not assessed because the assessment concentrated on refractive error management competence. However, this assessment methodology could be expanded to include these additional elements in a more comprehensive “suitability to practice” exit competency assessment.

Conclusion

As optometry continues to move toward competency-based curricula, educators require appropriate tools to support the assessment of competencies. The use of existing checklists and rating scales helped to identify areas of competence deficit. Overall, the methodology of direct observation, SAQs and a structured oral exam showed good inter-rater reliability with the use of these standardized checklists. The main recommendations are the provision of clear guidelines to faculty for the standardization of patients during exams, so that the assessment is reliable and repeatable, and increased assessor training. More data on the use of OSCEs and on standard-setting to ensure case specificity and increase validity are required for this methodology to be adapted for use in optometry schools with similar academic resource limitations.

References

  1. Global Competency Model – World Council of Optometry [Internet]. 2005 [cited 2014 Feb 7]. Available from: www.worldoptometry.org.
  2. Kiely P, Horton P, Chakman J. Competency standards for entry-level to the profession of optometry 1997. Clin Exp Optom. 1998;81(5):210–221.
  3. General Optical Council. The General Optical Council Stage 2 Core Competencies for Optometry. Available at: https://www.optical.org/en/Education/core-competencies–core-curricula/.
  4. Siderov J, Hughes JA. Development of robust methods of assessment of clinical competency in ophthalmic dispensing–results of a pilot trial. Health Soc Care Educ. 2013;2(1):30–36.
  5. Kiely PM, Horton P, Chakman J. The development of competency-based assessment for the profession of optometry. Clin Exp Optom. 1995;78(6):206–218.
  6. Wass V, Van der Vleuten C, Shatzer J, Jones R. Assessment of clinical competence. The Lancet. 2001 Mar 24;357(9260):945–9.
  7. Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990 Sep;65(9 Suppl):S63–7.
  8. Wass V, McGibbon D, Van der Vleuten C. Composite undergraduate clinical examinations: how should the components be combined to maximize reliability? Med Educ. 2001 Apr 22;35(4):326–30.
  9. European Diploma in Optometry; Candidate Guidelines. European Council of Optometry and Optics [Internet]. [cited 2012 Jun 4]. Available from: https://www.ecoo.info.
  10. National Board of Examiners in Optometry: Exam information [Internet]. [cited 2014 Nov 14]. Available from: https://www.optometry.org/part_matrix.cfm.
  11. Pre-registration scheme [Internet]. [cited 2015 Mar 8]. Available from: https://www.college-optometrists.org/en/qualifying-as-an-optometrist/pre-registration-scheme/.
  12. Shah R, Edgar D, Evans BJ. Measuring clinical practice. Ophthalmic Physiol Opt. 2007;27(2):113–125.
  13. Loughman J, Nxele L, Faria C, Thompson SJ. Rapid assessment of refractive error, presbyopia and visual impairment and associated quality of life in Nampula, Mozambique. J Vis Impair Blind. 2014;in press.
  14. Shah K, Naidoo K, Chagunda M, Loughman J. Evaluations of refraction competencies of ophthalmic technicians in Mozambique. J Optom. 2016 Jul-Sep;9(3):148–57.
  15. Hrynchak PK, Mittelstaedt AM, Harris J, Machan C, Irving E. Modifications made to the refractive result when prescribing spectacles. Optom Vis Sci. 2012 Feb;89(2):155–60.
  16. Epstein RM. Assessment in medical education. N Engl J Med. 2007 Jan 25;356(4):387–96.
  17. Van Der Vleuten CP. The assessment of professional competence: developments, research and practical implications. Adv Health Sci Educ. 1996;1(1):41–67.
  18. OCANZ: Candidate Guide [Internet]. [cited 2014 Nov 20]. Available from: https://www.ocanz.org/candidate-guide.
  19. Patton MQ. Qualitative research. Thousand Oaks, California: Sage Publications; 2005.
  20. Creswell JW, Miller DL. Determining validity in qualitative inquiry. Theory Pract. 2000;39(3):124–130.
  21. Patrício MF, Julião M, Fareleira F, Carneiro AV. Is the OSCE a feasible tool to assess competencies in undergraduate medical education? Med Teach. 2013 Jun 1;35(6):503–14.
  22. Walubo A, Burch V. A model for selecting assessment methods for evaluating medical students in African medical schools. Acad Med. 2003 Sep;78(9):899–906.
  23. The College of Optometrists: Examiner and Assessor Training Workbook [Internet]. 2012. Available from: https://www.college-optometrists.org/.
  24. MacKenzie GE. Reproducibility of sphero-cylindrical prescriptions. Ophthalmic Physiol Opt. 2008 Mar;28(2):143–50.
  25. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977 Mar;33(1):159–174.
  26. Wass V, Jones R, Van der Vleuten C. Standardized or real patients to test clinical competence? The long case revisited. Med Educ. 2001;35(4):321–325.
  27. Barrows HS. An overview of the uses of standardized patients for teaching and evaluating clinical skills. AAMC. Acad Med. 1993;68(6):443–51.
  28. Newble D. Techniques for measuring clinical competence: objective structured clinical examinations. Med Educ. 2004;38(2):199–203.
  29. Swanson DB, van der Vleuten CPM. Assessment of clinical skills with standardized patients: state of the art revisited. Teach Learn Med. 2013;25(Suppl 1):S17–25.
  30. Norcini JJ. Setting standards on educational tests. Med Educ. 2003 May 1;37(5):464–9.
  31. Toohey S, Ryan G, Hughes C. Assessing the practicum. Assess Eval High Educ. 1996;21(3):215–27.
Appendix A [image]


Dr. Shah [kajshah@aol.com] is a Research Optometrist at Dublin Institute of Technology in Ireland. Her research has focused on evaluation of competence and developing competency frameworks for optometrists and mid-level eyecare personnel in Mozambique.

Dr. Loughman is a Professor of Optometry at Dublin Institute of Technology in Ireland. He has a specific academic and research interest in preventive eye health interventions for the most common causes of blindness and visual impairment.

Dr. Bilotto is the Human Resource Development Global Director for the Brien Holden Vision Institute in Durban, South Africa. His responsibilities include setting up sustainable and high-quality optometry training programs in the developing world.

Dr. Naidoo is the CEO of the Brien Holden Vision Institute and the Africa Vision Research Institute and an Associate Professor of Optometry at the University of KwaZulu Natal in Durban, South Africa.