The Challenges of Stakeholder Participation: examples of evaluation projects from the Youth Justice context 1

Heidi Berger-Bartlett and Toni Craig

Heidi Berger-Bartlett
heidi.bergerbartlett@families.qld.gov.au
Review and Evaluation Branch
Department of Families
GPO Box 806, Brisbane 4001

Paper presented at the 2002 Australasian Evaluation Society International Conference, October/November 2002, Wollongong, Australia. www.aes.asn.au

Abstract

Much has been written about the usefulness of including stakeholders in the design and implementation of program evaluations. The benefits of such collaborative and participatory research models have been widely lauded, particularly in the context of developmental or process evaluations with a focus on program improvement. Participation of stakeholders can occur throughout the stages of evaluation, from developing the evaluation framework through to data collection, formulating recommendations and publishing findings.

Stakeholder participation is not, however, problem-free. Difficulties commonly occur as a result of:

- differing philosophical concepts and theoretical approaches
- differing understandings of the purpose of evaluation and accompanying expectations
- cultural differences
- competing organisational goals
- political agendas
- the history of the project and the entry point of evaluators.

When conducting evaluation projects within a government context, these issues need to be addressed with even greater care. The diverse range of stakeholders, public sector accountabilities and the changing political context in which evaluations occur make participative evaluation of public sector programs particularly challenging for evaluators. Pertinent examples from Youth Justice practice will be presented that illustrate these issues for evaluators. The presenters will offer some strategies for managing the delicate process of ensuring meaningful and valued stakeholder participation while maintaining research integrity. Negotiation, consultation and dialogue with those involved in and served by the program are critical to progressing the evaluation. Other experiences and strategies will be discussed.

Key words: participation, evaluation, public sector, youth justice

1 Disclaimer: The views in this paper are not a formal statement of Queensland Government policy.

Introduction and scope

This paper provides the background to an interactive presentation in which strategies for dealing with the challenges of stakeholder involvement in public sector evaluation projects will be shared and discussed. Our motivation for both the paper and the presentation is our experience of stakeholder participation as an integral but challenging part of evaluation. We are eager to facilitate discussion to further explore some of these challenges and to learn from other experiences.

Much has been written about the usefulness of including stakeholders and program participants in the design and implementation of program evaluations. The benefits of such collaborative and participatory models have been widely lauded, particularly in the context of developmental or process evaluations with a focus on program improvement. In this paper we outline some of the reviewed literature related to stakeholder participation and then apply it to three specific evaluation projects. Our experience has revealed that difficulties commonly occur in relation to stakeholder participation as a result of factors such as differing understandings of the purpose of evaluation and accompanying expectations, the developmental stage of the project, and the entry point of evaluators. We will offer some strategies for ensuring quality stakeholder participation while maintaining research integrity. Negotiation and dialogue with those involved in and served by the program are key factors in resolving conflict and progressing the evaluation. Other experiences and strategies will be explored with participants in the interactive presentation. In concluding, we argue that conflict is an essential feature of participatory evaluation and that an organisational commitment to participatory evaluation provides the environment in which this conflict can be successfully negotiated.

Participatory evaluation

It is widely accepted that evaluation is a social process, implying the need for a participatory approach. Similarly, the benefits of involving stakeholders in decision-making processes have been acknowledged. However, the realisation of participatory evaluation in practice varies, its methods lack transparency and its success is debated. The emphasis on partnership, emancipation and empowerment as an outcome is of particular concern in the context of development aid programs. Evaluations in this sector have focused increasingly on organisational learning and capacity building and are directed by typical questions such as "how can evaluation build local capacity and contribute to a learning culture?" and "how can evaluation contribute to the achievement of sustainable human development?" (United Nations Development Programme, 1997, p.1).

In the handbook of the United Nations Office of Evaluation and Strategic Planning (OESP), participatory evaluation is described as people-centred, with project stakeholders and beneficiaries being the key actors of the evaluation. The approaches taken are reflective and action-oriented, aiming to build knowledge and capacity and to provide beneficiaries and stakeholders with tools to transform their environment. Project participants and stakeholders should be involved in understanding the internal dynamics of their project, its success and failure, and in proposing solutions for overcoming obstacles (OESP Handbook, 1997, p.2). The key functions identified are:

1. to build the capacity of stakeholders to reflect, analyse and take action;
2. to contribute to the development of lessons learned that can lead to corrective actions or improvements by project recipients;
3. to provide feedback for lessons learned;
4. to ensure accountability to stakeholders, managers and donors by furnishing information on the degree to which project objectives have been met and how resources have been used (OESP Handbook, 1997, pp.2-3).

It is anticipated that the sense of ownership developed through participation increases the likelihood of recommendations and corrective actions being implemented (OESP Handbook, 1997, p.1).

Benefits of participation

The Participatory Monitoring and Evaluation (PM&E) approach utilised in the context of organisational learning has similarly influenced thinking in this area. There has been growing appreciation of the benefits of participation for internal learning processes rather than for external data-seeking purposes. According to Estrella, participatory monitoring and evaluation strives to be an internal learning process which enables people to reflect on past experience, examine present realities, revisit objectives, and define future strategies by recognising the different needs of stakeholders and negotiating their diverse claims and interests (Estrella, 2000a, p.11). As a tool for self-assessment it also promotes self-reliance in decision-making and problem solving, thereby strengthening people's capacity to take action and promote change. Common features contributing to good PM&E practice are participation, learning, negotiation and flexibility (Estrella, 2000b). Estrella (2000a), however, acknowledges the challenges of translating PM&E into practice, especially when developing indicators and measures, establishing standards of rigour, and mixing approaches and methods to maintain flexibility.

While accepting participation as a basic principle in development evaluations, Rebien (1996) suggests that it is the level of participation that is the most distinguishing characteristic. From meta-analyses of participatory evaluation projects, he identified three criteria an evaluation must meet to warrant the label of participatory evaluation:

1. Stakeholders must have an active role as subjects in the evaluation process (i.e. identifying information needs and design).
2. Participants should include at least representatives of beneficiaries, project field staff, project management and the donor.
3. Stakeholders should participate in at least three stages of the evaluation process (design, data interpretation and using evaluation information) (Rebien, 1996).

Challenges of participation

Gregory (2000, p.181), however, argues that Rebien's promotion of active participation is rather more difficult to enact in practice, where relations of dependency (power, status, expertise) may discourage stakeholders, or where participation is used as a means to achieve predetermined objectives (i.e. defending the process or ensuring acceptance of findings). She also asserts that resorting to representatives is problematic and may result in the exclusion of important stakeholder groups and/or necessary external expertise; unless the representatives are elected, the process is not truly democratic. Gregory concludes that by concentrating stakeholder participation on functional tasks, the anticipated transfer of knowledge (learning about the use of methodology in evaluations) may be more the transfer of the methodology users' knowledge and skills (Gregory, 2000, p.183). This limits stakeholders' capacity to carry out future evaluations themselves.

Oakley (1991), from his development work perspective, categorised three obstacles to participation:

- Structural: related to the political environment and the restriction of policy-making to a few individuals.
- Administrative: related to the barriers of centralised administrative and planning procedures and the reluctance to relinquish control.
- Social: related to the deeply ingrained culture of dependence on experts and leaders.

(Oakley, 1991, quoted in Gregory, 2000, p.184)

In Patton's (1997) view, the dilemmas of participation can be alleviated by focusing on the prime aim of an evaluation, which is to provide decision-makers and intended users with the necessary information (Utilisation-Focused Evaluation). This pragmatic approach involves the selection of participants who are strategically located: people who are enthusiastic, committed, competent and interested (Patton, 1997, p.354). However, this approach appears to underestimate the differing degrees of strategic influence held by participants and their differing levels of ability. In addition, it assumes that every participant has the ability to learn through exposure to the evaluation process. In reality, however, utilisation-focussed evaluations have tended to work with decision-makers (the powerful) rather than community members (the less powerful). While Guba and Lincoln (1989) also acknowledge the existence of political forces that may restrict participation in the evaluation process, they see it as the evaluator's responsibility to redress imbalances in knowledge and power. In her critique, Gregory suggests that while Guba and Lincoln's Fourth Generation Evaluation appears the most explicit concept, it includes some rather naïve assumptions which are unlikely to promote participatory evaluation (Gregory, 2000, p.188). Based on her analysis she concludes that power is really "the great unmentionable" in evaluation theory (p.194), finally determining the level and effectiveness of participation. Huxham (2000, p.350) asserts, however, that genuine problems can also occur when parties are too equal.

Stakeholder participation in the public sector

While evaluation is an inherently political process (Weiss, 1987), evaluations of public sector programs have an additional, overt political context:

It is inherently and intensely political, because it offers judgements regarding the appropriateness of utilising public resources in specific ways, of employing public servants in particular tasks, of allocating funds to programs, of providing clients and groups with public benefits. (O'Faircheallaigh, Wanna and Weller, 1999, p.193)

As a result, evaluations in this context attract a large amount of interest and attention from stakeholder groups. O'Faircheallaigh, Wanna and Weller (1999) assert that it is critical to devise strategies that allow for the involvement of external stakeholders at crucial stages in the evaluation. However, they propose that program managers should have a major say in the design and implementation of evaluations and in the way findings are used. Other suggested strategies are to co-opt external stakeholders onto internal evaluation teams or to submit evaluation documents to independent external reviewers (O'Faircheallaigh and Ryan, 1992). These approaches, however, do not resolve the issue of the power differential between policy makers and external stakeholders, and clearly tip the balance in favour of the former.

The Department of Families approach to evaluation

The approach to evaluation in our Department has evolved over the last few years and is reflected in the structure of the organisation. Until this year, the only program area within the Department with a dedicated evaluation function was the Youth Justice Directorate. There is now, however, a renewed commitment across the Department to evaluation, with a focus on continuous learning within the organisation. To this end, a dedicated Review and Evaluation Unit has been established to oversee and conduct evaluation activities for key projects. We will now briefly address the philosophy and methods advocated and applied in this Department and comment on the recent refinement of focus.

a) Philosophy: Evaluation has been understood and applied as a social process with project stakeholders as key actors (Rebien, 1996; Wadsworth, 1998; OESP Handbook, 1997). The Department's new approach develops the notion of stakeholder participation further, as an essential feature of a learning organisation. It emphasises the generation of knowledge about the complexity of the human, social, political, cultural and contextual elements involved, and utilises the concept of continuous learning as proposed in Action Research models (Grundy, 1982; McKernan, 1991; Hart & Bond, 1995; Wadsworth, 1998) and double-loop learning models (Argyris, 1982; Schön, 1983).
b) Purpose: Evaluations have taken the form of clarifying program objectives, improving programs, determining whether programs should be expanded or discontinued, or a combination of all three (Owens and Rogers, 1999). While these forms remain, the future intent is to enhance organisational learning consistent with the strategic direction of the Department. This means an increased emphasis on clarification and improvement.

c) Methodology: The methodological approach has been utilisation-focussed or pragmatic, which assumes that the impact of evaluation on policy-making is a cumulative and incremental process. While the participation of stakeholders was considered essential, it was incidental to the outcomes and treated as a means (albeit an essential one) to an end. In practice, there has been broad representation of stakeholders from across government and non-government agencies. The involvement of beneficiaries or recipients, however, has been largely limited to the data collection stages of evaluation. While there has been a strong emphasis on negotiation, interaction, learning and capacity building (as in Fourth Generation Evaluation and PM&E approaches), the Department's approach to participation has not extended to the notion of empowerment or self-determination of stakeholders (Fetterman, 2000).

Evaluation exemplars and analysis of participation

The following discussion provides the context in which evaluations have been undertaken in the Youth Justice Directorate, focussing on three particular evaluation projects. The projects/programs are:

1. A non-statutory early intervention service for young people at risk of offending and their families, managed by the Department of Families.
2. A statutory service targeting young people subject to supervised orders of the Court, managed by the Department of Families.
3. A non-statutory service for the prevention of crime, substance abuse and self-harm, targeting Indigenous young people living in remote areas of Queensland. The program was funded by the Department of Families but managed by a non-government organisation.

Each of these programs is analysed in terms of the dimensions set out in Table 1.

Table 1: Dimensions of analysis

Participant dimensions:
- Representation (Gregory, 2000)
- Power (Gregory, 2000; Pawson & Tilley, 1997; Guba & Lincoln, 1989)
- Expertise (Gregory, 2000)
- Conflicting interests

Evaluator dimensions:
- Role of the evaluators
- Entry point of evaluators
- Objectivity: how an evaluation is conducted, who conducts it, and the relationship between the evaluator and the program manager (O'Faircheallaigh, Wanna and Weller, 1999)

Through our involvement in evaluations, we have identified conflicting interests, the role of evaluators and their entry points as additional dimensions worthy of special attention. Although implicitly related to power, we suggest that conflicting interests need to be managed with care, given their capacity to subvert the evaluation process.

Although the programs and the nature of the evaluations differed, a number of issues consistently arose concerning stakeholder participation:

- The difficulty of effectively representing the multiplicity of views of the different stakeholders.
- The lack of beneficiary representation that would make these processes truly participatory.
- The power differentials between community stakeholders and departmental stakeholders.
- Conflicts of interest that impacted directly on the capacity of the programs to deliver outcomes and indirectly on the capacity of evaluators to evaluate the program.