Assessing speaking skills: a workshop for teacher development

Ben Knight


Speaking skills are often considered the most important part of an EFL course, and yet the difficulties in testing oral skills frequently lead teachers into using inadequate oral tests or even not testing speaking skills at all. This article describes a workshop used in teacher development programmes to help teachers with one aspect of the problem of oral testing: what should we look for when we assess a student's ability to speak English? The workshop looks first at the range of criteria that teachers might use in such assessment. Then it examines how the selection and weighting of those criteria should depend on the circumstances in which the test takes place. The article also discusses issues raised by the workshop, and considers its applicability to people working in different circumstances.

Reasons for the workshop

Assessment of speaking skills often lags far behind the importance given to teaching those skills in the curriculum. We recognize the importance of relevant and reliable assessment for providing vital information to the students and teachers about the progress made and the work to be done. We also recognize the importance of backwash (the effect of the test on the teaching and learning during the course). Most teachers would accept that 'if you want to encourage oral ability, then test oral ability' (Hughes, 1989: 44). But the problems of testing oral ability make teachers either reluctant to take it on or lacking in any confidence in the validity of their assessments. Such problems include: the practical problem of finding the time, the facilities, and the personnel for testing oral ability; the problem of designing productive and relevant speaking tasks; and the problem of being consistent (on different occasions, with different testees, and between different assessors).
Another problem, which is the focus of the workshop framework described here, is deciding which criteria to use in making an assessment. The workshop has two principal aims:

1 to make teachers more consciously aware of the different possible criteria they could be using to assess their students' speaking skills;
2 to make teachers more aware of the way their selection and weighting of those criteria depend on the context in which they are to be used.

Achieving these aims is crucial for making valid and reliable tests. Except where tests are being marked holistically (simply in terms of degrees of communicative success), marking involves the use of assessment criteria. Even when the assessment is holistic on the surface, the assessor may be thinking in terms of criteria in judging that overall communicative success (Bachman, 1990: 329). It is doubtful whether the criteria can be considered justified and validated if the assessor is not even explicitly aware of them. The reliability of an assessor on different occasions with different testees can be improved by more explicit criteria, as can the reliability between assessors.

ELT Journal Volume 46/3 July 1992 © Oxford University Press 1992

The workshop

The workshop takes between about 1½ and 2¼ hours and requires two or three short video clips of students talking (see Note 1). Making your own video clips is preferable, as you can make the task and situation reflect the type of test which the teachers you are addressing are most likely to use.

Stage 1: assessment criteria

a. Viewing and reflection (10 mins.)
Teachers are shown a video clip of a student (or students) talking and are asked to reflect on the question 'Which aspects of the students' speaking would affect the grade you would give the students for their speaking skills?'. The presenter needs to say in advance how long the clip will be, and what instructions were given to the students.

b. Discussion (15 mins.)
Teachers can compare their notes in pairs or small groups, and then this discussion can open up into a plenary. The objective at this stage is to get the teachers to be more conscious of what affects their own judgements, and to see how others may view it differently. The presenter's role will include pinning people down on vague terms, such as 'communicative' or 'passive' (I have heard 'communicative' being used for: i) easy to understand, ii) says a lot, iii) makes up for linguistic weakness with gestures, etc., iv) interacts well with the other person in a role-play). Another role is to elicit or suggest concrete examples (from the clip) of features being discussed (e.g. an example of 'inappropriate language'). There is no need at this stage to try to resolve all the differences of opinion, as often those differences stem from different assumptions about the testing context, which will be looked at later.
After this discussion, it is useful to show the clip again, so that people can reconsider the points made by themselves and others.

c. List of assessment criteria (10-15 mins.)
The presenter then hands out copies of the list of assessment criteria (see Figure 1). This list is fairly comprehensive in its broad categories, though within those there could be many more detailed criteria (for example, in an investigation of fluency alone (Lennon, 1990: 404-405), 12 different variables were looked at, ranging from 'words per minute' to 'mean pause time at T-Unit boundaries'). The amount of explanation needed for the terms on the list will of course depend on the teachers.

Figure 1: Assessment criteria

1 GRAMMAR
   a. range
   b. accuracy
2 VOCABULARY
   a. range
   b. accuracy
3 PRONUNCIATION
   a. individual sounds (esp. phonemic distinctions)
   b. stress and rhythm
   c. intonation
   d. linking/elision/assimilation
4 FLUENCY
   a. speed of talking
   b. hesitation while speaking
   c. hesitation before speaking
5 CONVERSATIONAL SKILL
   a. topic development
   b. initiative (in turn-taking, and topic control)
   c. cohesion: i) with own utterances ii) with interlocutor
   d. conversation maintenance (inc. clarification, repair, checking, pause fillers, etc.)
6 SOCIOLINGUISTIC SKILL
   a. distinguishing register and style (e.g. formal or informal, persuasive or conciliatory)
   b. use of cultural references
7 NON-VERBAL
   a. eye-contact and body posture
   b. gestures, facial expressions
8 CONTENT
   a. coherence of arguments
   b. relevance

d. Viewing and comment (10-15 mins.)
The teachers are then shown another clip of students talking, and are asked to think about the usefulness and relevance of the criteria on the list for assessing the students' speaking skills, adding or deleting as they think necessary. The objective of this stage is to consider further the criteria they think are relevant for assessing speaking skills, and also, by getting them to relate their views to the terms on the list, to give them a common vocabulary to aid discussion of differences of opinion. I have found teachers tend to be less talkative here, since it is a point of mental reorganization for them as they try to relate their own feelings and experience with the list.

Stage 2: Assessment and the context

a. Introduction (5 mins.)
By this stage, the question of context should have arisen several times (e.g. in the form of comments beginning 'Well, it depends on why... /what... /where... /how...'). The presenter now recalls various examples of these and notes how they show the importance of the context in deciding the choice of assessment criteria.

b. Examples (10-15 mins.)
Teachers are then given the hand-out with the examples of different selections and weightings of criteria together with descriptions of the relevant contexts (see below). Note that the number under the 'Weighting' column does not represent maximum marks for that criterion, but its value relative to the other criteria. For example, each criterion might be given a mark out of ten, and each score would be multiplied by its weighting number before being totalled up. Teachers can then be encouraged to ask any questions about the criteria, the context, and the relationship between the two. For example: 'Why did you include listening comprehension in the placement test, but not in the end-of-term test?' It would, of course, be wise for you as presenter to use your own examples (criteria you have used yourself) so that you are more likely to be able to answer such questions.

Criteria and context
Look at these two examples of differences in the selection and weighting of assessment criteria. Note how these two sets of assessment criteria vary according to the situation, and try to list the factors in the testing situation which can affect such selection and weighting.

1 Placement test
A placement test for university students (to decide which level they go into for their General English Communication course). This is an interview (basically answering questions) with a teacher who is both interviewer and scorer at the same time. It is taken with a single written gap-fill test, which assesses knowledge of grammatical structures, vocabulary, and functionally appropriate structures. In informal terms, the qualities we felt were important for success in a class were the ability to understand the teacher, knowledge of vocabulary, structures, and functions (largely tested in the written gap-fill test), confidence and a willingness to take chances and try things out, as well as the ability to distinguish (productively) basic phonemes in English. The category of 'range of grammar and vocabulary' aims to capture people with a wide experience of English (who we thought would progress more quickly).

Criterion (numbers on main list)                                      Weighting
1 Range of grammar and vocabulary (1a and 2a)                             3
2 Accuracy of grammar and vocabulary (1b and 2b)                          2
3 Phonemic distinctions (3a)                                              2
4 Hesitation (4b, c)                                                      4
5 Initiative, topic development, and conversational control (5a, b, d)    4
6 Listening comprehension (not listed)                                    5

2 End-of-term ESP test
This test was given at the end of one term of a course for receptionists in international hotels, to see how much they had progressed and what they needed to work on the following term. The speaking test was a role-play with two students, and the teacher only observed and marked the score sheet. There were several other tests: gap-fill and multiple-choice tests of grammar, vocabulary, and functions, and a listening comprehension test.

Criterion (numbers on main list)                                      Weighting
1 Grammar and vocabulary (1 and 2)                                        3
2 Pronunciation (3)                                                       1
   a. individual sounds
   b. stress and rhythm
   c. intonation and linking
3 Fluency (4)                                                             1
   a. hesitation before speaking
   b. hesitation while speaking
4 Conversational skill (5)                                                1
   a. cohesion
   b. conversation maintenance
5 Sociolinguistic skill (6)                                               1
   a. distinguishing register and style
   b. use of cultural references
6 Non-verbal (7)
   a. eye-contact and body posture
   b. gestures and facial expressions
7 Content (relevance) (8)

c. Context variables (15 mins.)
By looking at the examples and thinking of their own experience, the teachers are asked to abstract the variables in the context which may affect the choice and weighting of the criteria. The variables could include the following:

i. The purpose of the test:
   - achievement, proficiency, predictive, or diagnostic?
   - and depending on that: the course objectives; the underlying theory (formal or informal) of what constitutes language proficiency; the future situations the students are being tested for; the types of feedback the students in question would understand and could benefit from.
ii. The circumstances of eliciting the sample of language being assessed:
   - the degree of freedom or control over what the student could say and do;
   - the number of participants and their roles in the situation.
iii. Observation restrictions:
   - the extent to which the assessor participates in the speaking situation (e.g. interviewer or observer);
   - whether recorded or not (on audio or video cassette).
iv. The other tests in the battery (e.g. the selection or weighting of a criterion for grammatical accuracy may depend on how much it has been assessed in accompanying written tests).

d. Using the different criteria sets (15 mins.) (Optional if short of time.)
Teachers watch another video clip and assess the student's oral skills

using first one of the example criteria sets and then the other. This is to demonstrate how different criteria sets (appropriate in different contexts) can produce different assessments of the same performance. Different criteria sets will not always produce differing results, and so care needs to be taken to use a clip which will make the point clearly.

e. Task (20-30 mins.)
Teachers are given details of a testing situation (preferably compatible with their own) and are asked to decide on the criteria they would use and the weighting for those criteria. There follows an example of such a task:

TASK (Selecting assessment criteria)
You should imagine you are responsible for the oral testing of 100 students who have applied to study in U.S. colleges for one year (various subjects). You have to judge whether they have sufficient speaking skills to survive and do well there. You can conduct a 10-minute interview for each (with only one interviewer/assessor for each interview). The interviewers are all experienced and trained EFL teachers. The other tests in the battery will be multiple-choice tests of grammar and vocabulary, of academic reading and lecture listening comprehension, and essay-type questions. Decide on the criteria you would use in assessing their spoken English, and the relative weighting of each.

The purpose of this task is:
- for teachers to think more concretely about the points raised so far, and to let them see how referring to a particular context (described in the task) can reduce the differences in opinion when talking more generally;
- to provide an intermediary stage between the thinking in the earlier part and the need for application of those thoughts to their own situation after the workshop.

f. Conclusion (5 mins.)
The presenter can ask the teachers for their comments on the workshop: how useful it was, how it could have been more useful, whether they think they would change the way they assess their students' speaking skills, and so on.
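The weighting arithmetic described in Stage 2 b. (each criterion marked out of ten, with each score multiplied by its weighting number before being totalled) can be sketched as follows. This is only an illustration: the weightings are those of the placement-test example, with the criterion names abbreviated, and the candidate's raw marks are invented.

```python
# Hypothetical sketch of the weighted marking scheme: each criterion is
# marked out of ten, multiplied by its weighting, and the products summed.
# The weightings below are from the placement-test example; the raw marks
# for the candidate are invented for illustration.

placement_weights = {
    "range of grammar and vocabulary": 3,
    "accuracy of grammar and vocabulary": 2,
    "phonemic distinctions": 2,
    "hesitation": 4,
    "initiative, topic development, control": 4,
    "listening comprehension": 5,
}

# Invented raw marks out of ten for one candidate.
raw_marks = {
    "range of grammar and vocabulary": 6,
    "accuracy of grammar and vocabulary": 5,
    "phonemic distinctions": 7,
    "hesitation": 4,
    "initiative, topic development, control": 6,
    "listening comprehension": 8,
}

def weighted_total(marks, weights):
    """Multiply each raw mark by its criterion weighting and sum the products."""
    return sum(marks[criterion] * weights[criterion] for criterion in weights)

print(weighted_total(raw_marks, placement_weights))
# 6*3 + 5*2 + 7*2 + 4*4 + 6*4 + 8*5 = 122 (out of a possible 200)
```

Because the weighting is relative rather than an absolute maximum, changing a single weight (e.g. listening comprehension from 5 to 0 for the end-of-term test) rescales that criterion's contribution without altering how the others are marked.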
Discussion

1. Objective criteria?
There is still a great deal of subjectivity in a) the selection of criteria, and b) the way each criterion is measured (e.g. how exactly do you decide the grammatical accuracy of a speaker's performance?). The workshop aims only to improve the quality of those subjective decisions about selecting criteria by making it more conscious and explicit, and by giving the teachers a chance to discuss other points of view. It assumes that teachers do not have the resources to carry out their own research. A kind of collective subjectivity can be reached for how each criterion is measured by training or moderating sessions for assessors. But for those who have the time and resources to look closely and objectively at these questions, the following will be of interest: Hinofotis (1983), Hieke (1985), Fulcher (1987), and Lennon (1990).

2. Analytic or holistic assessment?
Several tests, such as the RSA Profile Certificate or the ILR (Interagency Language Roundtable) oral interview, use a different type of criterion to this workshop. The speakers are observed in different speaking tasks and are simply judged for their degree of success in each task. This holistic approach argues that, as we cannot observe directly mental characteristics like grammatical knowledge or the ability to maintain conversations, it will be inaccurate to give (inferred) scores for them. Rather, we should simply assess the learner's (observable) success in performing authentic language tasks. The approach behind this workshop, however, is one which argues that it is those mental abilities (which we must infer from the learner's performance) that we are most interested in, for at least the following reasons. Firstly, we cannot predict, let alone test for, every function and situation which a learner might need English for. Therefore any claim about general proficiency must involve a lot of inferences from the few sample functions/situations we do test. Secondly, a lot of teachers' own tests are partly diagnostic, and teachers need to know more about why a particular learner performed badly in some situations and better in others. This will usually come down to inferred component abilities, such as range of vocabulary or fluency. For a detailed discussion of the two approaches, see Bachman (1990: 41-42, 301-333). Hughes (1989: 110) recommends using both approaches, with one as a check on the other.

3. When to do this workshop?
This workshop on its own may seem peripheral, as teachers often worry more about such problems as making a practical test, setting fair tasks and getting a fair sample of the students' language, and being consistent.
However, it is probably helpful to tackle the problem of assessment criteria before these other questions, since we need to start by deciding what we want to measure, before deciding what is the most reliable and practical means to measure it. The danger, otherwise, is that we choose reliable and practical tests (e.g. multiple-choice tests) which do not give us the information we really want about our students' oral skills, and which can have a negative effect on students' attitudes to developing those skills during the course.

4. Too complicated?
Considering the context in selecting assessment criteria does make the discussion more complicated. So with teachers for whom this topic is completely new, it would probably be better to leave such considerations aside or condense them severely. I have found some untrained teachers saying that they wished they could have come away from the workshop with one fixed set of 'best' criteria. Gebhard (1990: 158, 160) reports that handed-down direction is preferred by beginning teachers (quoting research in Copeland, 1982) and by teachers in certain countries who feel that 'if the teacher is not given direction by the supervisor, then the supervisor is not considered qualified'. However, taking the testing context into account is valuable, despite the added complexity, in dealing with two common problems with teacher-development workshops.

Firstly, it makes it easier for teachers to apply what they learn in the workshop to their own situations, especially when they are working in contexts very different from that of the presenter. This is also helped by the final task (which is an exercise in applying to a particular situation principles learnt during the workshop). Secondly, it helps resolve conflicts of opinion. Many of the disagreements at the beginning of a workshop can be related to different assumptions about the testing context and its effect on the selection of criteria. Thus, it not only improves our understanding, but also improves the conduct of the workshop: it avoids the 'anything goes' approach which creates cynicism, and it reduces the ex cathedra judgements by the presenter which can lead to resentment or passivity.

5. Usable in other circumstances?
The main limitations to this workshop are:
a. There must be a video player, and some video clips of students speaking (this is best done by recording students speaking in the situations most likely to be used in tests by those teachers, i.e. a video camera is helpful).
b. The teachers must have a sufficient level of English (or other target language) to assess the students and to contribute to the discussion.
c. The teachers need to have some experience of assessing students' speaking skills. Pre-service teachers would probably be overwhelmed by the workshop in its present form.

The workshop, however, need not be limited to native speakers, trained teachers, or to EFL teachers, as it has proved as useful to non-native speakers, untrained (but practising) teachers, and teachers of other languages. Each of these three latter groups may have certain characteristics which should be taken into account. Some non-native speakers tend to place far greater emphasis on grammatical accuracy to begin with, though this is usually due to the way they learnt English rather than to any difficulty in perceiving discourse and sociolinguistic skills.
Experienced, untrained teachers often lack the relevant vocabulary to talk clearly about learning and teaching, and sometimes appear to be more dogmatic or to lose perspective (e.g. claiming the single most important criterion in assessing speaking skills is whether the speaker keeps eye-contact with the listener). While it is obvious that the details of grammatical accuracy and pronunciation will be different for other languages, that is probably also true of the other criteria, such as conversation maintenance and non-verbal behaviour. However, provided their level of English is sufficient for the tasks, the workshop's aims (awareness of the range of different criteria possible, and awareness of how their selection depends on the testing context) can be met for teachers of different languages in the same group. It should also be noted that these three groups bring different perspectives

which can only enrich the discussions: for example, suggesting as a criterion for communication between non-native speakers the ability to adapt your level of English to that of your interlocutor, or (from a Japanese teacher) the skill of stating an opinion in such a way that it is easy for the listener to disagree without seeming argumentative.

Conclusions

The workshop works as a way of stimulating teachers to think about and discuss the way they assess their students' speaking skills. It is rare for a participant not to be absorbed by the tasks and the exchange of ideas. A few participants have found it rather frustrating not to have a fixed set of testing criteria at the end of the workshop, but most seemed to find the process of relating criteria to context helpful in clarifying their own positions. A large survey of teachers' testing experience found 'there is evidence that most [teachers] prefer to use informal and flexible approaches which can be adapted to different student populations' (Brindley, 1989: 31). This workshop suits such preferences.

Received September 1991

Note
1 The video clips I have used (about 3-5 mins. long) mainly showed students speaking in pairs, usually in a simple role-play. For example:
Student A: You want to go on a holiday to Hawaii. Try to persuade your partner to come with you, though he or she wants to go somewhere else.
Student B: You want to go on a holiday to Europe. Try to persuade your partner to come with you, though he or she wants to go somewhere else.

References
Bachman, L. 1990. Fundamental Considerations in Language Testing. Oxford: Oxford University Press.
Brindley, G. 1989. Assessing Achievement in the Learner-centred Curriculum. Sydney: National Centre for English Language Teaching and Research.
Copeland, W. 1982. 'Student teachers' preference for supervisory approach.' Journal of Teacher Education XXXIII/2: 32-36.
Fulcher, G. 1987. 'Tests of oral performance: the need for data-based criteria.' ELT Journal XLI/4: 287-291.
Gebhard, J. 1990. 'Models of supervision: choices' in Richards, J. and D. Nunan (eds.) 1990.
Hieke, A. 1985. 'A componential approach to oral fluency evaluation.' The Modern Language Journal LXIX/2: 135-42.
Hinofotis, F. 1983. 'The structure of oral communication in an educational environment: a comparison of factor-analytic rotation procedures' in Oller, J. (ed.) 1983.
Hughes, A. 1989. Testing for Language Teachers. Cambridge: Cambridge University Press.
Lennon, P. 1990. 'Investigating fluency in EFL: a quantitative approach.' Language Learning XL/3: 387-417.
Oller, J. (ed.) 1983. Issues in Language Testing Research. Rowley, Mass.: Newbury House.
Richards, J. and D. Nunan (eds.) 1990. Second Language Teacher Education. Cambridge: Cambridge University Press.

The author
Ben Knight teaches EFL and linguistics at Shinshu University, Japan. He obtained an MSc in Applied Linguistics from the University of Edinburgh in 1987, and has taught EFL/ESL in Britain, Kenya, Italy, India, and Sri Lanka. His current interests include the testing of spoken English, teacher development, and language learning beyond the classroom.