A Measure of Medical Instructional Quality in Ambulatory Settings: The MedIQ


Paul A. James, MD; Jason W. Osborne, PhD

Background and Objectives: Concerns exist about the quality of medical student training outside the academic medical center, yet measures of this quality are lacking. This study introduces an instrument to measure instructional activities in primary care settings, based on a learner-centered model. The study also examines the instrument's ability to predict specific learner outcomes.

Methods: The MedIQ is a 25-item instrument designed to assess preceptor activities, environmental interactions, learning opportunities, and learner involvement in patient care. The MedIQ was administered in third-year generalist clerkships at one medical school, and the results were compared to extant measures of precepting effectiveness, student grades, specialty choice, and National Board of Medical Examiners scores.

Results: The results revealed strong reliability for all scales, moderate construct validity, and weak criterion-related validity. Additionally, some scores predicted specialty choice.

Conclusion: The MedIQ is a promising measure of instructional quality in ambulatory medical settings.

(Fam Med 1999;31(4):263-9.)

Teachers teach and learners learn clinical medicine through established patterns that have persisted since the time of Flexner.1 Scholarly interest has predominantly focused on strategies to improve teaching rather than on the more distal outcome of learning. Although there is general agreement that the medical educational system produces well-trained physicians, little is known about the effectiveness of the process of instruction, why it is effective, and what strategies could make it more effective. Research has offered few glimpses into the workings of clinical instruction, and it remains a black box in which good students interact with good teachers to produce good doctors.
However, new methods of understanding quality2,3 and the move to ambulatory education4 in community sites have encouraged educators to question the quality of medical education.5-8 Attempts to measure the quality of training often focus on programmatic evaluation rather than on quality improvement through validated benchmarks. Although many program directors have insights as to which clinical sites and teachers provide better training, there is no validated assessment of the process of instruction. No clear definition of effective training exists, and researchers have not developed a model to measure the effectiveness of medical training based on universally agreed-on outcomes. Academic departments have developed measures of precepting or teacher effectiveness in an ad hoc fashion, rather than using theoretically driven models. Few instruments for evaluating teachers have been tested for their psychometric properties (ie, reliability and validity), leading to the use of instruments that may be unreliable or invalid. Thus, there is a need for a valid and reliable tool that can assist in programmatic evaluation and provide benchmarks of quality over time to improve instruction in community-based practices.9

Experiential learning theory10 offers an applied approach to learning that is consistent with the clinical demands of community practices; it encouraged us to reexamine instruction from the perspective of the participants in the process, using a grounded-theory approach. Quality management theory enabled us to further conceptualize the process of instruction, using systems theory to itemize input variables and examine outcomes. Shipengrover and James have applied these theories and presented a model for measurement that provided the basis for instrument development for this study.11

This paper presents our instrument, the MedIQ (medical instructional quality), and includes an evaluation of its psychometric properties (reliability and validity) as a measure of medical instructional quality in ambulatory settings. The MedIQ is unique in its development, approach, and application in medical education measurement, and implications for the use of the instrument will be emphasized.

(From the Department of Family Medicine, State University of New York at Buffalo.)

Methods

Instrument Development
The MedIQ questionnaire emerged from studies of the teaching and learning environments in community practices.12,13 Three types of qualitative data contributed to the instrument's development: anthropological study of precepting interactions, structured diaries kept by medical students during their training, and interviews with preceptors. From these data, 4 themes emerged as important indicators of quality medical education: 1) the role of the preceptor in facilitating learning, 2) the role and context of the clinical environment, such as other staff and resource availability, 3) opportunities available to learn, and 4) active involvement by the learner in the care of patients. Based on this information, we constructed items designed to measure these 4 aspects of ambulatory medical education. Following instrument development, the content validity of the MedIQ was subjected to peer review by academic and community preceptors and by educators at national meetings. Our desire to measure activities that enhance learning, not characteristics of the preceptor or attributes of the environment that are less amenable to change, guided the development of the instrument.
The primary purpose of the MedIQ is to measure indicators of instructional quality (ie, processes pertaining to learning clinical medicine in ambulatory settings) rather than outcomes. However, an ultimate aim for such an instrument is to link specific processes of instruction with predictable outcomes, such as the achievement of clerkship goals, student satisfaction, and/or preceptor effectiveness. Because the MedIQ is a measure of clinical instruction, we tested it in a variety of settings within 3 primary care clerkships. These clerkships, although qualitatively similar, differed in duration: the family medicine rotation lasted 7-8 weeks, medicine 4 weeks, and pediatrics 2-3 weeks. During each clerkship, students were encouraged to experience and learn skills, attitudes, and knowledge pertinent to a generalist approach.

Sample
The instrument was administered to medical students rotating through family medicine, internal medicine, and pediatric clerkships during the second through sixth rotations of the third year of medical school at the State University of New York at Buffalo in 1996-1997. At the end of the rotation, students completed the MedIQ to assess their experiences in ambulatory medical education. All students worked in only one site for the duration of their clinical outpatient experience, but if they worked with more than one preceptor at the site, they were asked to respond with reference to the preceptor who had the greatest effect on their education. The final sample included a total of 156 observations from 131 students (28 from pediatrics, 67 from internal medicine, and 61 from family medicine). The instrument was not administered in every rotation, due to administrative and logistical factors beyond our control, thus reducing our sample size, and some students were assessed in more than one rotation.

Variables
Table 1 lists the items measured in the MedIQ instrument.
Items in the preceptor subscale were measured on a 6-point scale from strongly disagree (1) to strongly agree (6). Items on the learning opportunities and site environment scales were measured on a 6-point scale from not at all (1) to very much (6). Items on the participation scale were measured on a 4-point scale: no exposure (1), observation only (2), supervised participation with little responsibility (3), and supervised participation with shared responsibility (4). These items were analyzed for reliability and construct validity using principal components factor analysis of all 156 questionnaires.

A second instrument used for comparison with the MedIQ was the Student Assessment of Clinical Preceptors, an informal assessment of clinical preceptors traditionally used in our department. It is composed of 9 items asking the students to rate whether the preceptor was a good role model, provided adequate instruction, encouraged the student to take responsibility, provided direct supervision, suggested pertinent outside reading, reviewed the student's records, was prompt in appointments, showed tact and consideration in dealing with others, and provided feedback to the student. These items were measured on a 5-point scale from never (1) to always (5). When subjected to a principal components factor analysis with Varimax rotation, all items except "provided adequate time for instruction" and "provided sufficient direct supervision" loaded on a single factor. These 2 items also failed to correlate with each other and were excluded from further analyses. The remaining items, which had good internal reliability (α=.81), were averaged to form a precepting score.
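The internal-reliability statistic used throughout this study, Cronbach's alpha, together with the item-averaging that produces a scale score, can be sketched in a few lines. This is a minimal illustration of the standard formula, not the authors' original analysis; the rating matrix below is invented for demonstration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix.

    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of total score)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 6 students rating 3 items on a 1-5 scale.
ratings = np.array([
    [5, 4, 5],
    [4, 4, 4],
    [3, 3, 2],
    [5, 5, 5],
    [2, 3, 2],
    [4, 5, 4],
])
alpha = cronbach_alpha(ratings)
# Averaging the retained items per respondent yields a scale score,
# analogous to the precepting score described in the text.
precepting_score = ratings.mean(axis=1)
```

Because the three invented items move together across respondents, the resulting alpha is high (about .93), illustrating why consistent items yield alphas like the .81 reported for the precepting score.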

Table 1
MedIQ and Measures of Internal Consistency

A. Statements in this section describe your preceptor(s) in relation to your learning activities at the site. (Response scale: disagree [1] to agree [6]. Subscale Cronbach's alpha=.94; item-total correlations in parentheses.)

My preceptor(s):
1. Established an active role for me in the office (.65)
2. Prepared me for patient encounters (.75), by: reviewing patient's history with me; prioritizing pertinent issues; assigning pertinent topics to read; making follow-up appointments for when I was available; actually demonstrating the techniques/procedures to be performed; in other ways (please specify)
3. Listened well to: the patient; me (.67)
4. Instructed me at a level consistent with my knowledge and skill (.75)
5. Brought to my attention or reinforced physical findings that I had previously not seen (.74)
6. Made sure I learned something from every patient (.78)
7. Asked questions to enhance my learning (.78)
8. Created an environment in which I felt comfortable accepting challenges, even at the risk of making mistakes (.72)
9. Focused on improving my understanding of (.83): history taking; physical exams; use of laboratory tests; use of radiology; pathophysiology; decision-making process; treatment options; the role of the health care team; the importance of self-directed learning
10. Gave me specific feedback geared toward my skill improvement in (.71): presenting findings/cases; history taking; physical exam; assessment and clinical decision making; documentation; patient education
11. Demonstrated the value of respecting patient preferences even when they differed from my own (.66)

B. Items in this section refer to the learning opportunities that you were exposed to at the site. (Response scale: disagree [1] to agree [6]. Subscale Cronbach's alpha=.87; item-total correlations in parentheses.)

The opportunities:
12. Were diverse enough to let me learn from different interesting cases (.55)
13. Were too diverse, not allowing me to develop proficiency (.73)
14. Offered me the chance to develop proficiency through repeated practice (.73)
15. Were repetitive without offering new learning opportunities
16. Increased my independence in care for patients (.73)
17. Improved my communication skills with (.70): doctors; other health care providers; patients
18. Included my participation in (.75): history taking; physical exams; assessment and clinical decision making; documentation; patient education
19. My involvement in the following can be best described as: (Response scale: no exposure [1], observation only [2], supervised participation with little responsibility [3], supervised participation with shared responsibility [4]. Subscale Cronbach's alpha=.93.) Acute diseases (.76); chronic diseases (.80); health maintenance (.78); psychosocial problems (.78); complicated cases (.76); simple cases (.79); patient education (.78); patient follow-up (.71); office procedures (.51)

C. The focus of this section is the practice site as your learning environment. (Response scale: not at all [1] to very much [6]. Subscale Cronbach's alpha=.87; item-total correlations in parentheses.)
20. The office environment was well suited to involve students in patient care. (.79)
21. I found that the office staff helped medical students learn. (.75)
22. I felt like an integral part of the health care team. (.71)
23. I felt that my time in the office was being wasted. (.62)
24. This practice modeled (.64): serving the underserved in the community; coordinating patient care among agencies, specialists, and hospital; responsibility for all patient health care needs; developing personal relationships with patients over time; examining the social and cultural context of illness
25. In relation to effective learning, I felt that the pace of patient care was: (Response scale: too slow [1] to too fast [6].)*

* Item excluded from further analysis

One additional question asked students whether they would recommend their preceptor to other students, scored yes/no. This item was included as a separate variable in further analyses. We also obtained the final grade for each student's family medicine rotation. This grade is strongly influenced by preceptor ratings of the student (30% of the grade) as well as quiz performance (30% of the grade); the rest of the grade is based on case presentations and group activities. Other outcomes data we collected included student scores on the National Board of Medical Examiners (NBME) Part II and specialty choice based on National Resident Matching Program (NRMP) results.

One item, measuring pace of instruction (#25 in Table 1), did not load with the site environment factor and was excluded from further analysis. Having established the unitary nature of each subscale, we computed internal consistency (Cronbach's alpha) coefficients for each subscale. A summary of these analyses (Cronbach's alphas and item-total correlations) is shown in Table 1. As shown in Table 1, all subscales showed internal consistency, ranging from α=.87 (both learning opportunities and site environment) to α=.94 (preceptor). Additionally, examination of the item-total correlations indicated that all items were highly correlated with their total subscore.

Data Analysis
An important aspect of construct validity is demonstrating correlations between the MedIQ and other measures of the same construct. We computed correlations between the MedIQ and the Student Assessment of Clinical Preceptors. Another component of scale validation is providing evidence that a measure is related to theoretically predicted outcomes or consequences. In this case, one might expect instructional quality to be related to student outcomes, such as grades in the family medicine clerkship, NBME Part II scores for all students, and choice of specialty based on NRMP results in 1998.
Thus, we computed correlations between the MedIQ scales and these outcomes. To confirm that each subscale formed a single, strong factor, we subjected the items in each subscale to a principal components factor analysis with Varimax rotation. Assessments of ambulatory clerkship experience and grades were not available to us for the pediatric and medicine experiences. Criterion validity assessments using grades and preceptor evaluations were therefore performed on family medicine clerkship students only, while the entire sample was used for specialty choice and NBME scores.

Results

Subscale Factor Structure and Reliability
As expected, the items from each subscale formed a single factor when subjected to principal components factor analysis. In all cases, only one factor with an eigenvalue greater than 1.0 emerged (eigenvalue=7.05 for preceptor, 3.51 for learning opportunities, 5.88 for participation, and 3.19 for site environment); an eigenvalue of 1.0 is the customary cutoff for a meaningful factor.

Construct Validity
We examined subscale interrelationships to test construct validity. The results are shown in Table 2. All subscales correlated strongly (ranging from r=.58 to r=.79). In light of this, we questioned whether a single second-order factor might account for these intercorrelations. To test this, we computed another principal components factor analysis on the subscale scores. One strong factor emerged (eigenvalue 2.93), accounting for 73% of the shared variance. This finding justified computing a total score as an overall statistic. However, our decision to maintain the subscales independently was based on their theoretical importance10 and their usefulness for giving more-specific feedback to preceptors about how to improve their instructional quality. As expected, the MedIQ preceptor subscore correlated most strongly with students' assessment of clinical preceptors (r=.41, P<.002).
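The factor-retention rule used in these analyses, keeping only components whose eigenvalue exceeds 1.0 (the Kaiser criterion), can be illustrated with a short sketch. The responses below are fabricated to mimic a unidimensional subscale; none of this is the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated responses: one latent factor driving 5 items plus noise,
# mimicking a subscale whose items all measure the same construct.
latent = rng.normal(size=(200, 1))
items = latent + 0.5 * rng.normal(size=(200, 5))

# Principal components analysis operates on the item correlation matrix;
# each eigenvalue is the variance explained by one component (an eigenvalue
# of 1.0 equals the variance of a single standardized item).
corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]  # descending order

# Kaiser criterion: retain components whose eigenvalue exceeds 1.0.
n_factors = int((eigenvalues > 1.0).sum())
```

With a single latent factor, one large eigenvalue emerges and the rest fall well below 1.0, so `n_factors` comes out as 1, the same unifactorial pattern the subscale analyses report.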
Table 2
Relationships Among the MedIQ Subscales

Subscale (MedIQ items)                 Preceptor   Learning Opp.   Involvement   Site Env.
Preceptor (items 1-11)                   1.00          .75*           .44*         .66*
Learning opportunities (items 12-18)                  1.00           .64*         .79*
Involvement (item 19)                                                1.00         .58*
Site environment (items 20-24)                                                    1.00

The Pearson correlations represent the relationship between the total scores for 2 subscales. Specific items for each subscale are shown in Table 1.
* P<.0001

The learning opportunities subscale (r=.29, P<.036) and the total score (r=.29, P<.038) also correlated with the students' assessments of the preceptor. Additionally, students' willingness to recommend their preceptor to other students correlated strongly with the MedIQ preceptor subscale (r=.38, P<.006), site environment (r=.30, P<.030), and the total score (r=.30, P<.029). These findings provide evidence for construct validity of the MedIQ.

Criterion-related Validity
Because students' final grades were strongly weighted toward the preceptor's ratings, we expected significant relationships between the MedIQ preceptor subscale and grades. As predicted, the preceptor subscale and grades were significantly correlated (r=.32, P<.015), and the site environment subscale was marginally correlated with grades (r=.24, P<.073). These findings provide evidence of criterion-related validity. Another student outcome measure for comparison with the MedIQ was scores from Part II of the NBME certification exam. MedIQ scores were not significantly related to NBME scores. Finally, we hypothesized that experiences in the third year of medical school should predict choice of specialty. Comparisons of MedIQ scores indicated that students choosing family practice as a specialty rated the instructional quality of the family medicine clerkship significantly higher than those not choosing family practice (total score, P<.04; preceptor score, P<.085; environment score, P<.014; and learning opportunities score, P<.055, one-tailed). Students choosing internal medicine as a specialty rated their ambulatory internal medicine experience higher than those not choosing internal medicine (total score, P<.095; preceptor score, P<.10, one-tailed). Those choosing to specialize in pediatrics did not differ significantly from those not choosing pediatrics; however, the pediatric ambulatory experience lasted only 2-3 weeks.
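The statistics reported in this section, Pearson correlations with significance tests and one-tailed two-group comparisons, can be sketched as follows. The arrays are invented placeholders, not the study's data, and the group sizes and effect sizes are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Invented example: a subscale score and an outcome that partly tracks it,
# standing in for relationships such as preceptor subscale vs clerkship grade.
subscale = rng.normal(size=80)
grades = 0.4 * subscale + rng.normal(size=80)

# Pearson correlation with its (two-tailed) p-value.
r, p = stats.pearsonr(subscale, grades)

# One-tailed comparison of MedIQ-style scores between students who chose a
# specialty and those who did not (alternative: chose > did not choose).
chose = rng.normal(loc=4.8, scale=0.6, size=25)
did_not = rng.normal(loc=4.4, scale=0.6, size=55)
t_stat, p_one_tailed = stats.ttest_ind(chose, did_not, alternative="greater")
```

A one-tailed test is appropriate here because the hypothesis is directional (students choosing a specialty rate that clerkship higher), which is how the P values for specialty choice in this section were framed.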
Discussion
Efforts to redesign education in ambulatory settings have been hampered by a lack of rigorous and coherent research, and the need to establish valid and reliable measures of quality has been identified as a major issue facing the medical education research enterprise.9 The results of this study provide evidence that the MedIQ is a reliable and valid instrument for assessing instructional quality in ambulatory medical settings. Specifically, the instrument showed strong reliability and promising construct- and criterion-related validity. The subscales confirm our belief that 4 constructs are vital to effective instruction: 1) the activities of effective precepting, as opposed to preceptor characteristics, 2) the adequacy of the clinical environment for instruction, 3) salient learning opportunities driven by patient interactions, and 4) the student's experience of being allowed and encouraged to participate actively when confronted with patient problems.

The conceptual model on which the MedIQ is based emphasizes that instruction is a process dependent on numerous factors, including curricular goals, the teacher, the learner, the environment, and other resources used to gain clinical experiences.11,13 Other factors, as yet undetermined, may also contribute to the process; however, it is the elements of this process that predict specified outcomes, and these have thus far not been clearly articulated. The MedIQ is the first instrument to integrate these many components of instruction into one measurement tool.

Other published works have addressed these constructs individually. Researchers have focused on the assessment of teacher and site characteristics14,15 but with an emphasis on teaching rather than learning. These analyses have shown that the teaching environment has little impact on student perceptions of effective teaching.
However, this finding contrasts with what has been found in primary care settings, where the environment has been shown to be an important contributor to learning.16,17 Stritter and Baker emphasized the importance of teacher and learner preferences, along with site characteristics, in relation to the goals of instruction.18 Bligh and Slade developed a questionnaire that, while sensitive to learner attitudes and styles, does not provide specific feedback on teaching activities and site opportunities.19 Because that questionnaire's 6 subscales are specific to the learner only, it does not lend itself to comparison with the MedIQ.

There are limitations to this work. First, this research explores a new measurement model, which limits comparison to other similar models because no other standardized instruments exist. Second, our model seeks to predict well-established educational outcomes, yet little consensus exists as to what appropriate outcomes might be and how best to measure them. Course grades, NBME scores, preceptor assessments, and specialty choice are the few measurable outcomes available. While not ideal, these outcomes serve the purpose of evaluating the MedIQ. Finally, this work presents results from only 3 generalist departments within one large university and may not be representative of all outpatient educational experiences. Future work should explore the external validity of the MedIQ in other settings.

Conclusions
The MedIQ enables the evaluation of ambulatory training to be guided by a validated instrument. As a tool for quality management, it may contribute to improvements in measures of educational outcomes by providing the means to 1) assess processes of instruction in different settings, 2) monitor performance rates

(benchmarks) for educational sites and teachers over time, 3) provide specific feedback to clinicians based on standardized norms, and 4) identify strategies needed to improve instruction.

Acknowledgments: We acknowledge the contributions of Judy Shipengrover, PhD; Vathsala Stone, PhD; Carlos Jaen, MD, PhD; Denise Pikuzinski; Thomas Rosenthal, MD; Bruce Young, MA; and Robin Graham, PhD, MPH, for their conceptual and technical contributions at various points during this project. We also thank Erin Cleary for her invaluable assistance with manuscript preparation. This project was funded, in part, by a National Board of Medical Examiners (NBME) Medical Education Research Fund grant. This project does not necessarily reflect NBME policy, and NBME support provides no official endorsement.

Corresponding Author: Address correspondence to Dr James, State University of New York at Buffalo, Department of Family Medicine, SUNY Office of Rural Health, Erie County Medical Center, 462 Grider Street, Buffalo, NY 14215. 716-898-5273. Fax: 716-898-3536. E-mail: paj@acsu.buffalo.edu.

REFERENCES
1. Flexner A. Medical education in the United States and Canada. A report to the Carnegie Foundation for the Advancement of Teaching. New York: Publisher? 1910.
2. Deming WE. Out of the crisis. Cambridge, Mass: Massachusetts Institute of Technology, 1986.
3. Donabedian AL. The criteria and standards of quality. Ann Arbor, Mich: Health Administration Press, 1982.
4. Association of American Medical Colleges. Community-based, ambulatory care experiences for medical students: summary report of the Section on Medical Schools Work Group on Medical Education in an Evolving Health Care System. Washington, DC: Association of American Medical Colleges, 1995.
5. American Medical Association. Maintaining educational quality in the context of health system change. Chicago: American Medical Association, 1994.
6. Glasser M, Stearns JA.
How ambulatory care is different: a paradigm for teaching and practice. Med Educ 1993;27:35-40.
7. Irby DM. Teaching and learning in ambulatory care settings: a thematic review of the literature. Acad Med 1995;70:898-931.
8. Bowen JL, Stearns JA, Dohner C, Blackman J, Simpson D. Defining and evaluating quality for ambulatory care educational programs. Acad Med 1997;72:506-10.
9. Bordage G, Burack JH, Irby DM, Stritter FT. Education in ambulatory settings: developing valid measures of educational outcomes and other research priorities. Acad Med 1998;73:743-9.
10. Kolb DA. Experiential learning. Englewood Cliffs, NJ: Prentice-Hall, 1984.
11. Shipengrover JA, James PA. Measuring instructional quality in community-based medical education: looking into the black box. Med Educ 1999; in press.
12. Young BL, Graham RP, Shipengrover J, James PA. Components of learning in ambulatory settings: a qualitative analysis. Acad Med 1999; in press.
13. James PA, Schwartz DG, Rosenthal T, Feather J. Community-based teaching: a guide to developing and implementing education programs for medical students and residents in the practitioner's office. In: Deutsch SL, ed. State University of New York at Buffalo: community academic practice initiative. Philadelphia: American College of Physicians, 1997:186-96.
14. Irby DM, Ramsey PG, Gillmore GM. Characteristics of effective clinical teachers of ambulatory care medicine. Acad Med 1991;66:54-5.
15. MacDonald PJ, Bass MJ. Characteristics of highly rated family practice preceptors. J Med Educ 1983;58:882-93.
16. Biddle WB, Riesenberg LA, Darcy PA. Medical students' perceptions of desirable characteristics of primary care teaching sites. Fam Med 1996;28:629-33.
17. Gruppen LD. Implications of cognitive research for ambulatory care education. Acad Med 1997;72:117-20.
18. Stritter FT, Baker RM. Resident preferences for the clinical teaching of ambulatory care. J Med Educ 1982;57:33-41.
19. Bligh J, Slade P.
A questionnaire examining learning in general practice. Med Educ 1996;30:65-70.