Resident Selection Process and Prediction of Clinical Performance in an Obstetrics and Gynecology Program

Alexander Olawaiye, John Yeh, Matthew Withiam-Leitch
Department of Gynecology and Obstetrics, University at Buffalo School of Medicine and Biomedical Sciences, Buffalo, New York, USA

Background: To date, there are no published data showing a correlation between National Residency Matching Program (NRMP) rank list position and performance in residency.
Purpose: The purpose was to assess whether a residency applicant selection process can predict the subsequent performance of candidates during residency.
Methods: Over a period of 3 academic years, resident candidates were invited to a structured interview process. This interview process was used to generate a rank list for the NRMP. We evaluated the clinical performance of the residents who matched to our program at the end of the 1st postgraduate year. We then examined the correlation between NRMP rank position and the residents' clinical performance scores.
Results: The residents' mean total performance score was similar across the 3 academic years. There was a positive correlation between a resident candidate's NRMP rank list percentile and the subsequent 1st-year clinical performance evaluation score (r = .60, p < .001).
Conclusions: A structured selection process of residency applicants can predict their subsequent 1st-year clinical performance.

Teaching and Learning in Medicine, 18(4), 310-315

The transition from medical school to residency is a time-consuming process for both students and program directors.[1-3] The student is anxious to match into a program that, among other considerations, would enhance his or her chances of future success in academic or clinical practice. A program director's major interest, on the other hand, is to select candidates who will perform well in residency.
[4-6] The current system for recruiting medical students into American residency programs is the National Residency Matching Program (NRMP). Each residency program evaluates resident candidates and generates a rank list of candidates. The methodology used to evaluate and rank resident applicants varies by program. Programs aim to attract the most desirable candidates, and presumably, applicants high on the rank list are those the program feels would be the most successful residents. However, to date, there are no published data showing a correlation between NRMP rank list position and performance in residency. We standardized our resident selection process to be heavily weighted toward an interview process that includes interviewers of multiple backgrounds. The goal of our study was to investigate whether our selection process produced an applicant rank list that correlates with clinical performance in residency.

Copyright 2006 by Lawrence Erlbaum Associates, Inc.

Materials and Methods

In this study, we focused on the resident selection process of the Department of Obstetrics and Gynecology of the University at Buffalo. Our program has 9 first-year resident positions to fill per academic year. In this report, the selection process studied was for residents recruited for 3 academic years: 2002 to 2003, 2003 to 2004, and 2004 to 2005. This study was approved by the Institutional Review Board of the University at Buffalo.

Resident Selection Process and NRMP Rank List Development

Beginning with academic year 2002 to 2003, we implemented a structured selection process (Figure 1).

Correspondence may be sent to Matthew Withiam-Leitch, MD, PhD, University at Buffalo, Department of Gynecology-Obstetrics, 219 Bryant Street, Buffalo, NY 14222, USA. E-mail: alwl@buffalo.edu

M. Withiam-Leitch initially screened all applications (>200 per year) received through the Electronic Residency Application Service (ERAS). These ERAS applications use a standardized format that includes board scores, the dean's letter, the medical student transcript, a personal statement, letters of recommendation, and a curriculum vitae. From these ERAS applications, we then invited the screened candidates to interview.

During each academic year, there were a total of three interview dates, with eight evaluators per date. We made a conscious effort to have interviewers who represented the full spectrum of career opportunities available in obstetrics and gynecology. Thus, on each interview day, we included private practice obstetrician-gynecologists, academic generalist obstetrician-gynecologists, maternal-fetal medicine specialists, gynecologic oncologists, and reproductive endocrinologists. All invited candidates on all three dates were interviewed by the program director and the two associate program directors. Each candidate was interviewed by a total of six evaluators. Before the beginning of each interview day, the evaluators met to discuss the evaluation process and to maintain uniformity in the process.

Evaluators were asked to provide a score of 1 to 5 points for each of five categories (communication skills, insight into the specialty, motivation, compassion, and fit into the program). Each evaluator then provided a total composite score of 5 to 25 for each candidate. We then instructed the evaluators to use this composite score to rank the candidates they interviewed from best to worst. Thus, the composite score was used to generate each individual interviewer's own rank list. Because each candidate was interviewed by six attending physicians, each candidate received six rank scores. From these rank scores, we generated an average rank score for each candidate.
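The scoring, ranking, and percentile steps described in this section can be sketched as follows. The interviewer count is reduced to three and all names and scores are invented, but the rank-averaging and percentile conventions follow the text (e.g., the top of a 40-candidate list receives percentile 100 × 39/40 = 97.5, matching Table 1):

```python
from statistics import mean

# Hypothetical composite scores (possible range 5-25) from three interviewers;
# the program used six interviewers per candidate, and candidate IDs are invented.
composite = {
    "interviewer_1": {"A": 22, "B": 18, "C": 15},
    "interviewer_2": {"A": 20, "B": 21, "C": 14},
    "interviewer_3": {"A": 23, "B": 17, "C": 16},
}

def interviewer_ranks(scores):
    """Turn one interviewer's composite scores into ranks (1 = best)."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {cand: pos + 1 for pos, cand in enumerate(ordered)}

# Each candidate gets one rank per interviewer; average them across interviewers.
per_interviewer = [interviewer_ranks(s) for s in composite.values()]
avg_rank = {c: mean(r[c] for r in per_interviewer) for c in per_interviewer[0]}

# Order candidates best to worst by average rank, then express each position as
# a rank percentile so lists of different lengths are comparable across years.
final_order = sorted(avg_rank, key=avg_rank.get)
n = len(final_order)
percentile = {c: round(100 * (n - (i + 1)) / n, 1)
              for i, c in enumerate(final_order)}
```

The percentile step is what lets the study pool three years with 29, 38, and 40 interviewees on a common 0-100 scale.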
Finally, using the average rank scores, we ranked all the candidates from best to worst for that particular interview date. At the end of the interview season, a meeting involving all evaluators was held, and we generated a final rank list integrating all three interview dates. Two mechanisms facilitated the integration of the three rank lists: (a) the program director and the two associate program directors had interviewed all candidates on all interview dates, and (b) several other evaluators had participated on more than one interview date. We subsequently submitted this final rank list to the NRMP.

In each academic year, a different number of candidates was included on the final NRMP rank list. Therefore, for this study, we generated a rank percentile per candidate per year to express how high the candidate was ranked relative to the total number of candidates in that year. This rank percentile controlled for the differences in the number of candidates interviewed per academic year.

Figure 1. Summary of methods to generate the National Residency Matching Program (NRMP) rank list of resident candidates.

Resident Clinical Evaluation Process and Total Performance Scores

At the end of the first postgraduate year, the recruited residents were evaluated by attending physicians. Each resident's performance was evaluated using a Global Resident Evaluation Form that we developed to evaluate residents according to the standards recently implemented by the Accreditation Council for Graduate Medical Education (ACGME). The ACGME requires that residents be exposed to and trained in six core competencies: patient care, medical knowledge, practice-based learning and improvement, interpersonal communication skills, professionalism, and systems-based learning. Full-time and volunteer attending faculty scored each resident's performance on a scale ranging from 1 (poor) to 9 (exceptional) in each of the six ACGME core competency areas. For each resident, we then calculated a total performance score by averaging the scores across the six core competencies. We used this total performance score (potential range 1.0-9.0) in the analyses performed in this study.

Statistics

We present the data as mean ± standard deviation. We analyzed the data with a simple linear regression model, analysis of variance tests, t tests, and a variance ratio test using SAS Version 9 (SAS Institute, Inc., Cary, NC, USA). We considered p < .05 statistically significant.

Results

During the study period, we interviewed 107 candidates: 29 applicants for the academic year 2002 to 2003, 38 for 2003 to 2004, and 40 for 2004 to 2005. The NRMP rank list position and total performance score for each resident in each of the three academic years are shown in Table 1. The average total performance score did not differ significantly among the three academic years included in the study (p = .78). We therefore pooled data from these three academic years for our subsequent analysis.

Figure 2 shows the relation between the rank positions of all residents and their total performance scores as evaluated by attending faculty at the end of the 1st year in residency. There was a correlation between the rank percentile of the candidates and their total performance score: the higher the rank percentile on our NRMP rank list, the better the subsequent clinical performance of the resident. The correlation coefficient was statistically significant (r = .60, p < .001).

We further subdivided the residents into two groups: those ranked in the top half (rank percentile 50% to 99%) and those ranked in the bottom half (0% to 49%). For the top-ranked residents, 12 of 12 had a total performance score above 6.0 (Figure 3). On the other hand, only 1 of 14 residents in the bottom half had a total performance score above 6.0. The total performance score of the upper half was significantly higher than that of the lower half (6.4 ± 0.21 vs. 5.3 ± 0.98; p < .001). Additionally, a variance ratio test showed that there was significantly less variance in the total performance scores of residents ranked in the top half than of those ranked in the bottom half (Figure 3).

Conclusions

Our study indicates that the applicant selection process can predict the subsequent performance of candidates in residency. The NRMP rank list generated from our selection process correlated significantly with the strength of the candidates' clinical performance in residency as determined by attending physician evaluations. To our knowledge, this study is the first report to show a correlation between a program's NRMP rank list and subsequent residents' clinical performance.

Table 1. NRMP Rank List Position and Total Performance Score for Each of 3 Academic Years

                   2002-2003          2003-2004          2004-2005
Resident ID   Rank %a  Total PS   Rank %a  Total PS   Rank %a  Total PS
1              93.1      6.4       94.7      6.1       97.5      6.7
2              82.8      6.0       81.6      6.5       55.0      6.4
3              79.3      6.4       78.9      6.5       52.5      6.3
4              75.9      6.4       76.4      6.6       50.0      6.7
5              48.3      4.8       31.6      5.2       40.0      5.2
6              34.5      5.8       28.9      5.6       37.5      5.2
7              31.1      7.2       21.1      5.9       27.5      5.8
8              27.6      4.7       18.4      5.8        5.0      5.7
9b              --        --       10.6      4.5        2.5      2.8
M Total PS (± SD)  6.0 (0.85)         5.9 (0.69)         5.7 (1.21)

Note: NRMP = National Residency Matching Program. There was no statistically significant difference in the mean total performance score (PS) among the 3 academic years (analysis of variance; p = .78).
a Rank % = rank percentile. b The ninth resident transferred out of the program for 2002-2003.
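The across-year comparison in Table 1 can be checked directly from the listed scores. This standard-library sketch computes the group means and the one-way ANOVA F statistic (attaching the p value itself would require an F distribution, e.g. from SciPy, so only F is shown):

```python
from statistics import mean

# Total performance scores transcribed from Table 1, grouped by academic year.
scores = {
    "2002-2003": [6.4, 6.0, 6.4, 6.4, 4.8, 5.8, 7.2, 4.7],
    "2003-2004": [6.1, 6.5, 6.5, 6.6, 5.2, 5.6, 5.9, 5.8, 4.5],
    "2004-2005": [6.7, 6.4, 6.3, 6.7, 5.2, 5.2, 5.8, 5.7, 2.8],
}

groups = list(scores.values())
n = sum(len(g) for g in groups)
grand = mean(x for g in groups for x in g)

# One-way ANOVA: between-group and within-group sums of squares.
ssb = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
ssw = sum((x - mean(g)) ** 2 for g in groups for x in g)
f_stat = (ssb / (len(groups) - 1)) / (ssw / (n - len(groups)))

# Means land close to the reported 6.0, 5.9, 5.7 (the table scores are rounded),
# and F on F(2, 23) comes out near 0.25, consistent with the reported p = .78.
print([round(mean(g), 2) for g in groups], round(f_stat, 2))
```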

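The pooled correlation between NRMP rank percentile and total performance score (reported as r = .60) can likewise be recomputed from the Table 1 pairs; this minimal sketch implements Pearson's r with the standard library only (Python 3.10+ also offers statistics.correlation):

```python
from math import sqrt

# (NRMP rank percentile, total performance score) pairs from Table 1, all years pooled.
pairs = [
    (93.1, 6.4), (82.8, 6.0), (79.3, 6.4), (75.9, 6.4),
    (48.3, 4.8), (34.5, 5.8), (31.1, 7.2), (27.6, 4.7),
    (94.7, 6.1), (81.6, 6.5), (78.9, 6.5), (76.4, 6.6),
    (31.6, 5.2), (28.9, 5.6), (21.1, 5.9), (18.4, 5.8), (10.6, 4.5),
    (97.5, 6.7), (55.0, 6.4), (52.5, 6.3), (50.0, 6.7),
    (40.0, 5.2), (37.5, 5.2), (27.5, 5.8), (5.0, 5.7), (2.5, 2.8),
]

def pearson_r(data):
    """Pearson product-moment correlation of (x, y) pairs."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    sxy = sum((x - mx) * (y - my) for x, y in data)
    sxx = sum((x - mx) ** 2 for x, _ in data)
    syy = sum((y - my) ** 2 for _, y in data)
    return sxy / sqrt(sxx * syy)

# Close to the reported r = .60 (the table values are rounded).
print(round(pearson_r(pairs), 2))
```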
Figure 2. Simple linear regression analysis shows a statistically significant correlation between National Residency Matching Program rank percentile and total performance score (r = .60, p < .001).

Figure 3. Distribution of total performance scores between residents whose National Residency Matching Program (NRMP) rank percentile was in the top 50% and the bottom 50%. A variance ratio test showed a significant difference in the variances between the top 50th percentile and bottom 50th percentile residents (p < .001).

Our selection process is heavily weighted toward the interview. The initial screening of candidates from the ERAS system was simply to identify those candidates who presumably had the academic credentials to successfully complete a residency program in obstetrics and gynecology. These traditional credentials (medical school transcripts, board scores, and letters of recommendation) have been referred to as cognitive attributes.[7-10] Cognitive attributes have been shown to predict future success on standardized exams[10-12] but have not correlated with success in clinical performance.[4,5]

In contrast, the interview has been viewed as a tool to assess noncognitive attributes. Noncognitive attributes such as communication skills, professionalism, and ethics are important attributes of a clinician.[8,13-15] In fact, the new ACGME core competencies are intended to ensure that residents are exposed to and educated in these noncognitive attributes. This is why we chose to focus our selection process on the interview. Indeed, several studies have shown that residency program directors regard the interview as the most important component of the residency selection process.[16-18] However, none of these studies attempted to show a correlation between the interview process and subsequent resident clinical performance.
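The top-half versus bottom-half variance contrast shown in Figure 3 can also be recovered from the Table 1 scores. A brief sketch of the variance ratio (F); as above, a p value would additionally require an F distribution:

```python
from statistics import mean, variance

# Table 1 total performance scores, split at the 50th rank percentile
# (12 residents in the top half, 14 in the bottom half).
top_half = [6.4, 6.0, 6.4, 6.4, 6.1, 6.5, 6.5, 6.6, 6.7, 6.4, 6.3, 6.7]
bottom_half = [4.8, 5.8, 7.2, 4.7, 5.2, 5.6, 5.9, 5.8, 4.5,
               5.2, 5.2, 5.8, 5.7, 2.8]

# Variance ratio (F): larger sample variance over the smaller one.
f_ratio = variance(bottom_half) / variance(top_half)

print(round(mean(top_half), 1), round(mean(bottom_half), 1))  # 6.4 5.3
print(round(f_ratio, 1))  # roughly 21: far more spread below the median rank
```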
There are two possible reasons why our interview process predicts future resident performance. First, we controlled for interinterviewer variability by having the attending physicians rank their candidates from best to worst. The composite score of 5 to 25 points was used only by the individual interviewer to generate a personal rank for the candidates he or she interviewed. We then combined the ranks from these individual interviewers to calculate an average overall rank. This minimizes the potential for an individual interviewer's composite scoring to skew the overall rank list. In other words, this technique helps to control for bias from different interviewers' tendencies toward leniency or severity.[19]

Second, the interviewers were attending physicians from the full range of practice settings. The generalist obstetrician-gynecologist may be most interested in whether a candidate will fit in with others in the program, whereas a subspecialist may focus on the research potential of the candidate. A private practice obstetrician-gynecologist may concentrate on assessing whether the candidate has the communication skills to provide good care to patients. For a candidate to be ranked high on the NRMP rank list, multiple evaluators, regardless of their primary interests, would have had to rank the candidate highly. In our opinion, it is the breadth of viewpoints of the evaluators that has allowed our interview process to generate a rank list that predicts the performance of candidates in residency. It should be noted that our approach contrasts with a trend toward developing highly structured and standardized interview processes to reduce interrater variability.[19,20]

Other than our study, there is little published evidence that the interview process can predict future clinical performance.[8,19,21] For example, Komives et al.[21] were unable to show a significant correlation between the applicant interview and resident performance among internal medicine residents. In their study, the resident clinical performance evaluations were done by chief residents and may have been flawed by internal inconsistencies. In our study, the clinical evaluations were done by experienced attending physicians, and we used the average of all their scores to calculate a total performance score. This approach limited the potential bias of any one attending toward a resident.
Additionally, the total performance score we used was calculated on a scale that rated the ACGME core competencies. The ACGME core competencies evaluate many of the same noncognitive attributes as the interview, for instance, communication skills and professionalism. Thus, the qualities assessed in our interview process were similar to the qualities assessed in our total performance score. Bell et al.[22] likewise failed to show a correlation between their selection process and subsequent resident performance in an obstetrics and gynecology program. Unlike ours, their resident selection process was heavily weighted toward cognitive attributes such as U.S. Medical Licensing Examination scores and honors in medical school clerkships.

Given the preceding, our data suggest that our resident interview process is relatively successful in predicting subsequent resident clinical performance. However, there are limitations and weaknesses. First, our ranking process suggests that residents in the top 50th percentile of the NRMP rank list perform relatively well by our assessment standards, but we are less able to predict clinical performance in the lower 50th percentile; the variance in clinical performance was significantly greater among these candidates. Second, although our selection process was strongly weighted toward the interview and noncognitive attributes, prior knowledge of a candidate's cognitive attributes may have influenced the evaluators' ranking of the candidate to some degree. In the future, comparing the evaluators' preinterview and postinterview rank lists will help to delineate the potential influence of these cognitive and noncognitive attributes. Third, an attending who evaluated a resident may have also interviewed that resident as a candidate. This unblinded performance evaluation could introduce bias.
However, because the resident performance evaluations are done 18 months after the interview season, the evaluators would need excellent recall for this bias to occur. Finally, this study only addresses the selection process with regard to performance early in residency. We cannot yet address whether the selection process has any predictive value for resident performance at the end of residency training.

In summary, we have described a resident selection process heavily dependent on the interview that leads to an NRMP ranking system that correlated reasonably well with the 1st-year clinical performance of residents. It is unlikely that any two programs will have the same criteria for resident selection, because each program has different concerns and interests. It is, however, important for each program to formulate a uniform and objective selection process, an essential component of which is an interview process that may predict the future performance of its residents.

References

1. Wagoner NE, Suriano JR. Recommendations for changing the residency selection process based on a survey of program directors. Academic Medicine 1992;67:459-465.
2. Swanson AG. The "preresidency syndrome": An incipient epidemic of educational disruption. Journal of Medical Education 1985;60:201-202.
3. Weinberg E, Casty F. Results of a survey concerning application for residency training. Journal of Medical Education 1987;62:763-765.
4. Erlandson EE, Calhoun JG, Barrack FM, et al. Resident selection: Applicant selection criteria compared with performance. Surgery 1982;92:270-275.
5. Hojat M, Gonnella JS, Veloski JJ, Erdmann JB. Is the glass half full or half empty? A reexamination of the associations between assessment measures during medical school and clinical competence after graduation. Academic Medicine 1993;68(2 Suppl.):S69-S76.
6. Yager J, Strauss GD, Tardiff K. The quality of deans' letters from medical schools. Journal of Medical Education 1984;59:471-478.
7. Mitchell KJ. Traditional predictors of performance in medical school. Academic Medicine 1990;65:149-158.

8. Johnson EK, Edwards JC. Current practices in admissions interviews at U.S. medical schools. Academic Medicine 1991;66:408-412.
9. Streyffeler L, Altmaier EM, Kuperman S, Patrick LE. Development of a medical school admissions interview, Phase 2: Predictive validity of cognitive and non-cognitive attributes. Medical Education Online 2005;10:14. Available at: http://www.med-ed-online.org. Accessed August 1, 2005.
10. Rippentrop AE, Wong M, Altmaier EM. A content analysis of interviewee reports of medical school admissions interviews. Medical Education Online 2003;8:10. Available at: http://www.med-ed-online.org. Accessed August 1, 2005.
11. Walton H. Admissions procedures to medical schools. Medical Education 1994;28:263-264.
12. Collins JP, White GR, Petrie KJ, Willoughby EW. A structured panel interview and group exercise in the selection of medical students. Medical Education 1995;29:332-336.
13. Edwards JC, Elam CL, Wagoner NE. An admission model for medical schools. Academic Medicine 2001;76:1207-1212.
14. Rolfe IE, Pearson S, Powis DA, Smith AJ. Time for a review of admission to medical school? Lancet 1995;346:132-133.
15. Hojat M, Robeson M, Damjanov I, Veloski JJ, Glaser K, Gonnella JS. Students' psychosocial characteristics as predictors of academic performance in medical school. Academic Medicine 1993;68:635-637.
16. Wagoner NE, Suriano JR, Stoner JA. Factors used by program directors to select residents. Journal of Medical Education 1986;61:10-21.
17. Wagoner NE, Gray GT. Report on a survey of program directors regarding selection factors in graduate medical education. Journal of Medical Education 1979;54:445-452.
18. Taylor CA, Weinstein L, Mayhew HE. The process of resident selection: A view from the residency director's desk. Obstetrics and Gynecology 1995;85:299-303.
19. Edwards JC, Johnson EK, Molidor JB. The interview in the admissions process. Academic Medicine 1990;65:167-177.
20. Luke PE, Altmaier EM, Kuperman S, Ugolini K. A structured interview for medical school admission, Phase 1: Initial procedures and results. Academic Medicine 2001;76:66-71.
21. Komives E, Weiss ST, Rosa RM. The applicant interview as a predictor of resident performance. Journal of Medical Education 1984;59:425-426.
22. Bell JC, Kanellitsas I, Shaffer L. Selection of obstetrics and gynecology residents on the basis of medical school performance. American Journal of Obstetrics and Gynecology 2002;186:1091-1094.