Continuous Improvement Planning Document: Candidate and Completer Data Evidence

Using Data on Candidate Progress (Formative) and Performance (Summative)*

Lesson Plan Assessment
What does it assess? Standards 1-8: planning for instruction.
Who does the evaluating? Primary instructors of EDUC 220, 321, and 346/7.
Collected how often? In EDUC 220 (baseline for an assessment cohort), EDUC 321 (midpoint for the assessment cohort), and EDUC 346/7 (endpoint for the assessment cohort).
Analyzed how? Scores received by candidates on each indicator will be aggregated by class to see which indicators pose difficulty at each stage in the program. Each course assessment point has a different acceptable score because we anticipate growth through the program and therefore raise the bar for acceptability: candidates in EDUC 220 are expected to be between developing and proficient (average score of 2.5 on all indicators); candidates in EDUC 321 are expected to be proficient (average score of 3); and candidates in EDUC 346/7 are expected to be between proficient and exemplary (average score of 3.5). Scores will also be taken as a whole to see if patterns emerge across the program in indicators that pose difficulty. Lesson plans submitted by candidates in 321 and 346/7 will also be compared to their previous lesson plan scores to evaluate individual and group progress.
Who will participate in analysis? Faculty who deliver the program.
What do we hope to learn? Insight as to which standards (1-8) are most difficult for candidates. Analysis should also yield insight into the progress of individual candidates so that we may offer additional support for struggling learners.

Dispositions Assessment
What does it assess? Standards 9-10: professional dispositions.
Who does the evaluating? Candidates (self-assessment); instructors of EDUC 220, 321, and 346/7.
Collected how often? In EDUC 220, 321, and 346/7.
Analyzed how? Individual candidates' self-assessment scores on the indicators will be compared to the scores given by their instructor. If discrepancies exist, or if a candidate has scored unacceptable on any indicator, a meeting will be scheduled with two faculty members and the candidate to (1) discuss concerns and (2) write a plan of action. Each year, documentation of these meetings (along with any previous dispositional assessments submitted) will be shared at faculty meetings and with the Teacher Preparation Committee during application review and prior to student teaching.
What do we hope to learn? Information about individual candidates' dispositional performance so that we can address concerns with candidates and implement action plans. Consistent concerns (especially those that persist despite intervention) may lead to counseling the candidate out of the program.

Diversity Reflection
What does it assess? Standards 1-2: understanding of diversity as it relates to the profession of teaching.
Who does the evaluating? Instructors of EDUC 220, 321, and 346/7.
Collected how often? In EDUC 220, 321, and 346/7.
Analyzed how? Scores received by candidates on each indicator will be aggregated by course to see which indicators pose difficulty at each of three stages in the program (beginning, middle, and endpoint in a time series). Each course assessment point has a different acceptable score because we anticipate growth through the program and therefore raise the bar for acceptability: candidates in EDUC 220 are expected to be between developing and proficient (average score of 2.5 on all indicators); candidates in EDUC 321 are expected to be proficient (average score of 3); and candidates in EDUC 346/7 are expected to be between proficient and exemplary (average score of 3.5). Scores will be taken in aggregate to see if patterns emerge across the program in indicators that pose difficulty. Diversity assessments submitted by candidates in 321 and 346/7 will also be compared to their previous diversity assessment scores to evaluate individual and group progress.
What do we hope to learn? Insight as to which aspects of diversity (responsibility; kinds of diversity; skills, knowledge, and dispositions needed to teach diverse students well; and importance of self-awareness) are most difficult for candidates. Analysis should also yield insight into the progress of individual candidates so that we may offer additional support for struggling learners (or counsel them out of the program).

Clinical and Field Evaluation
What does it assess? Standards 1-10: planning and implementation of instruction as well as professional deportment.
Who does the evaluating? Host teachers (220, 321, 346/7) and college supervisors (346/7).
Collected how often? In EDUC 220, in 321, and twice in 346/7.
Analyzed how? Scores received by candidates on each of the 20 indicators will be aggregated by class to see which indicators pose difficulty at each stage in the program. Each class has a different acceptable score (see above). Scores will be taken in aggregate to see if patterns emerge across the program in indicators that pose difficulty. Field/clinical evaluations of candidates in 321 and 346/7 will also be compared to their previous field/clinical evaluations to evaluate individual and group progress.
What do we hope to learn? Insight as to which of the indicators are most difficult for candidates. Analysis should also yield insight into the progress of individual candidates so that we may offer additional support for struggling learners (or counsel them out of the program).

Electronic Portfolio Evidence
What does it assess? Competence in each of the 10 standards (summative).
Collected how often? Each spring in EDUC 346/7 (not a time-series assessment).
Analyzed how? Scores received by candidates on each of ten indicators will be aggregated to see if patterns emerge in indicators that pose difficulty. Candidates at this stage are expected to be at least proficient on all indicators.
What do we hope to learn? Insight as to which standards (1-10) are most difficult for candidates. Analysis should also yield insight into the progress of individual candidates so that we may offer additional support for struggling learners (or counsel them out).

Using Data on Completer Performance and Impact**

Employer Survey
Who does the evaluating? Employers.
Collected how often? Employers of completers are asked to complete this survey.
Analyzed how? Ratings on each of the questions will be aggregated to see if patterns emerge in indicators that pose difficulty. Data will be disaggregated by program area to see how completers from each program area perform.
What do we hope to learn? Analysis of survey data and interview notes should yield insight into areas of strength and weakness among completers of our program, and where we need to adjust our teaching approaches.

Completer Survey
What does it assess? Completers' impact on P-12 learning; program effectiveness.
Who does the evaluating? Completers (self-assessment).
Collected how often? Completers are asked to complete this survey.
Analyzed how? Ratings on each question will be aggregated to see if patterns exist in indicators that pose difficulty. Four of the questions specifically ask completers to reflect on the degree to which R-MC prepared them for success in each category. These responses are numerical and narrative; they will be compiled separately and used to reflect on program effectiveness.
What do we hope to learn? Analysis of survey data should yield insight into areas of strength and weakness among completers of our program, and where we need to adjust our teaching approaches. Analysis of the program-effectiveness questions should provide specific insight into the degree to which completers felt R-MC prepared them and yield specific recommendations for program improvement.

Completer Focus Group Interview
What does it assess? Completers' impact on P-12 learning (measuring student learning and using data to drive instruction), as well as R-MC program effectiveness.
Who does the evaluating? Completers.
Collected how often? Completers are asked to participate in a focus group interview.
Analyzed how? Questions focus on how completers measure student learning and use assessment to inform pedagogical changes. Notes from the focus group interview will be compiled in order to provide additional insight into strengths and areas for growth related to measuring student learning and using data to drive instruction, as well as R-MC program effectiveness.
What do we hope to learn? Analysis of focus group feedback should yield insight into how well completers are prepared to teach diverse learners and measure student learning. Areas of strength and weakness will be identified and used to make program changes.

Completer VA Teacher Summative Performance Evaluation (T-LIPES)
What does it assess? Completers' impact on P-12 learning.
Who does the evaluating? School-based administrators.
Collected how often? Completers are invited to submit copies of their final formal evaluations.
Analyzed how? Scores on final evaluations and final recommendations by employers (for continued employment, areas of growth) are considered alongside employer surveys and completer surveys to assess the preparedness of our completers for the work of teaching. Section 7 on Virginia Teacher Evaluations focuses specifically on teachers' impact on student learning; scores on this section will be compiled separately to assess completer impact on P-12 learning.
What do we hope to learn? Analysis of formal evaluations should yield insight into areas of strength and weakness among completers of our program, and where we need to adjust our teaching approaches. Analysis of scores on Section 7 will yield insight into completers' impact on student learning.

Completer Observations (Clinical/Field Evaluation)
What does it assess? Completers' impact on P-12 learning.
Who does the evaluating? Recent (last-year) completers are asked if we may send a faculty member to their classroom to observe their teaching.
Analyzed how? Scores on each of the indicators will be aggregated to see if there are patterns in the indicators that pose difficulty for completers. Data will be disaggregated by program to see how completers perform.
What do we hope to learn? Analysis of evaluations should yield insight into areas of strength and weakness among completers of our program, and where we need to adjust our teaching approaches.

*Progress Note: Because we spent AY 2017-2018 designing, validating, establishing inter-rater reliability for, and implementing new assessments, our pilot cycle of data was completed in May 2018. Actual data gathering and analyses to be used for program improvements begin in AY 2018-2019.

**Data-Gathering Note: Because the state of Virginia does not allow the release of K-12 student data, we are unable to assess the impact of our completers on student learning as measured by end-of-year exams (SOLs). We believe that the combined data we collect on completer effectiveness (employer surveys and interviews, formal evaluations, completer surveys and focus group interview, and observations) will shed light on their impact on student learning.