Associations between outcome measurement, accountability and learning for non-profit organisations

IJPSM 12,2 186

Associations between outcome measurement, accountability and learning for non-profit organisations

Natalie Buckmaster
Department of Commerce, The Australian National University, Canberra, Australia

Keywords: Accountability, Non-profit organizations, Outcomes, Organizational learning

Abstract: Outcome measurement procedures have been advocated recently as a means of eliciting better accountability and more effective program evaluation by non-profit organisations. The principal benefits of utilising these techniques have not been appreciated widely, thwarting more widespread application. This paper analyses those benefits with a view to encouraging non-profit organisations to embrace the opportunity to promote organisational learning. A challenge for non-profit organisations is to discover management tools and methods to facilitate and accelerate learning (Pedler and Aspinwall, 1996).

The International Journal of Public Sector Management, Vol. 12 No. 2, 1999, pp. 186-197. © MCB University Press, 0951-3558

Introduction

The major argument developed in this paper is that outcome measurement can be used effectively as a tool for learning, thereby providing feedback to program managers in non-profit organisations (NPOs). The focus is on programs such as those which seek, for example, to educate persons, prevent drug and alcohol abuse, and reduce crime rates. Applied effectively, outcome measurement is argued to facilitate learning and the formulation of new strategies. It depends on the transmission of relevant, meaningful information without distortion, to enable higher levels of understanding and better decision making. The greatest impediment to measuring outcomes is resource availability and a lack of knowledge about its principal benefits. While a lack of resources is a debilitating issue in the NPO sector, this paper identifies the principal benefit of measuring outcomes. A recent focus on outcomes has been accompanied by changes in policy and general public concern for accountability.
Organisations are increasingly being requested to demonstrate that specified goals have been achieved. Pressure for results has intensified. For example, the US Government Performance and Results Act (GPRA, 1993) specified that organisations funded by the federal government must set program outcome goals and publicly report on progress toward achieving those goals. Such legislation is arguably an important indication of likely future developments in the Western world. The US Government Accounting Standards Board (GASB) is also considering whether to require outcome information as part of a financial accounting framework. While much of the pressure to measure outcomes comes from accountability requirements, the need to do so is primarily to learn and to manage programs promptly and properly.

The author is grateful to Russell Craig, Peter Booth, Roger Burritt, Mark Lyons, and Gary Kelly for comments on an earlier draft of this paper. The usual disclaimer applies.

The paper critically overviews the issues that have prompted interest in outcome measurement. While such issues provide some background for recent implementations, they have a propensity to ignore its real significance. The discussion centres on the inadequacies of traditional performance measurements and recent accountability issues. Outcome measurement is defined, and characteristics are identified that may be associated with its effective application. The principal benefit is revealed to be the promotion of higher organisational learning.

Background issues relevant to measuring outcomes

Criticism of traditional performance measurements

Performance measurement systems are mechanisms to guide an organisation toward achieving its purpose (Ziebel and DeCoster, 1991). Traditionally, such systems have been cumbersome, complex, imperfect and often flawed (Ridgway, 1956; Etzioni and Lehmann, 1967; Baumler, 1971; Hopwood, 1974; Scott, 1981). Traditional measurement systems of non-profit organisations have been decidedly uncomplex, focusing mostly on such constructs as inputs, processes, and outputs, with a view to evaluating efficiency and effectiveness (Brinkerhoff, 1979; Hofstede, 1981; Anthony and Herzlinger, 1980; Abernethy and Chua, 1994; Osborne et al., 1995; DiMaggio, 1996). The goals of non-profit organisations are often ambiguous because of conflicts over perceived stakeholder interests and a lack of knowledge about relationships between measures and goals (Warna, 1967; Hofstede, 1981).
Goals are broad and value-laden, representing such performance outcomes as enhanced education, effective prevention of substance abuse and improved quality of life. Such goals are typically altruistic, qualitative, long term, intangible, people-oriented, and non-monetary (Kanter, 1979; Thompson and McEwan, 1958; Milofsky, 1988; DiMaggio, 1988; Drucker, 1990; Salipante, 1995). There is thus no single measure of success (Drucker, 1978; Anthony and Young, 1988; Drucker, 1990; Osborne et al., 1995; Stone and Gershenfield, 1996). Traditional performance measurements are inadequate for monitoring achievement of these goals, prompting the development of superior methods that integrate quantitative and qualitative information (Herman and Heimovics, 1994; Murray and Tassie, 1994; Osborne, 1994; Osborne and Tricker, 1995). Outcome measurement is argued to be such a method.

Pressure for more accountability

There is little agreement about what constitutes accountability in the non-profit sector. Money is entrusted to NPOs that are accountable to various constituencies including governments, donors, clients, regulatory bodies, employees, boards of directors and communities. NPOs are dependent on these constituencies for financial support. External funding bodies typically determine many of the accountability criteria. The political nature of information is obvious. Regardless, funders want confirmation that money expended on programs results in intended outcomes and, moreover, that services affect individuals and communities in both the short term and the long term.

Accountability has become vital in the non-profit sector as governments effect funding stringencies by introducing criteria based on the ability to prove that specified goals have been achieved. For example, GPRA requires organisations funded by the US federal government to focus on program results and to undertake strategic planning and performance measurement. Under the Act, organisations must set program goals, including outcome-related goals, and then publicly report on the achievement of these goals. The Act's purpose is to increase public confidence and to improve program effectiveness by systematically holding organisations accountable for outcomes and results.

Criticisms emerge where outcome data are provided exclusively for accountability. A comprehensive mission statement, outcome-related goals and a description of how these goals will be achieved are the Act's basic requirements. However, other requirements are more demanding, even excessive, such as annual program performance reports which require comparison of outcome indicators against appropriate yardsticks. There is a cost-benefit issue, as precious resources are diverted from value-creating activities into an accountability process. The validity and reliability of data are questionable if the data are used to compete for funds. While the temptation to use data for comparisons is inevitable, it is contentious, and reported results will be viewed with cynicism.
It provokes data manipulation, degenerating into a numbers game and an obsession with counting numbers for often questionable purposes, bordering on what might be characterised as ``macho quantoidism''.

A framework to measure outcomes

Outcomes may be defined as:
- those benefits or changes for individuals or communities after participating in the programs of non-profit organisations (UWA, 1995); or
- an assessment of the results of a program activity compared to its intended purpose (GPRA, 1993).

Outcomes are the intended effects of services on people. Inputs are those resources provided to the program for expenditure, for example, training materials, salaries and volunteer time. Processes require inputs to achieve a mission, for example, providing counselling to sick persons and enhancing the literacy skills of children. Outputs are the direct products of the program activities and are measured in units, for example, the number of meals served to aged persons or the number of alcoholics attending rehabilitation programs.

Outcomes are more comprehensive than traditional constructs. Their measurement requires collecting data from a variety of sources (Figure 1). Much of those data are subjective. Outcomes can be differentiated as follows (UWA, 1995):
- Initial outcomes are those benefits received by program participants almost immediately after participating in programs.
- Intermediate outcomes are those benefits received by program participants within approximately a year.
- Long-term outcomes are those benefits received by program participants only after 12 months and over extended periods of time.

For example, consider a child literacy program. An initial outcome objective is to make the general public, parents, educators, child care workers and medical professionals aware of the problem and to enrol children with inadequate literacy skills in the program. An initial measure is the number and percentage of illiterate children enrolled in the program. An intermediate outcome objective is for illiterate children to acquire reading skills and complete the program. An outcome measure is the number and percentage of illiterate children gaining effective reading skills and successfully completing the program. A long-term outcome objective may be to have these children complete primary, secondary and tertiary education as they would not otherwise have done. An outcome measure is the number and percentage of children who participate in the initial program and later progress to complete primary, secondary and tertiary education.

The technique requires collecting data from a range of internal and external sources. Much data may be utilised for this task.
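The ``number and percentage'' measures above reduce to simple arithmetic once enrolment and completion counts are tracked. A minimal sketch, assuming a hypothetical cohort (the function name and figures are illustrative, not from the paper):

```python
def outcome_measure(achieved: int, cohort: int) -> tuple[int, float]:
    """Return the count and percentage used as an outcome measure."""
    if cohort == 0:
        return 0, 0.0
    return achieved, round(100.0 * achieved / cohort, 1)

# Hypothetical child literacy program: 200 children enrolled.
enrolled = 200
initial = outcome_measure(enrolled, 250)       # enrolled, of 250 children identified
intermediate = outcome_measure(140, enrolled)  # gained reading skills, completed program
long_term = outcome_measure(90, enrolled)      # later completed further education

print(initial, intermediate, long_term)  # (200, 80.0) (140, 70.0) (90, 45.0)
```

The same count-and-percentage pair serves each of the initial, intermediate and long-term measures; only the time span over which the numerator is collected differs.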
Sources include formal program records of the organisation; data collated by other organisations; evaluation of the experiences of consumers during and after program participation; perceptions of the general public; independent observers; peer review; and internal and external benchmarking. It may not always be appropriate to measure outcomes with statistical data. Outcome measurement incorporates different stakeholder interests, collates information about relationships between goals and results, reflects changes in the external environment, and recognises the importance of descriptive explanatory information.

Figure 1. Outcome measurement model (adapted from UWA, 1995, p. 3)
INPUTS (resources dedicated to the program): money; staff; volunteers; facilities; equipment and supplies; regulations; funders' requirements.
PROCESS (provided by the program to fulfil its mission): food and shelter; job training; public education; counselling; mentoring.
OUTPUTS (direct products of program activities): classes taught; counselling sessions; literature distributed; hours of service delivered; participants serviced.
OUTCOMES (benefits for participants of program activities): new knowledge; increased skills; changed attitudes; modified behaviour; improved condition; altered status.

The framework effectively requires that implementation be an all-embracing system. Strategic plans must include a documented mission statement, outcome-related goals and objectives, strategies formulated to achieve those goals, and identification of key factors external to the organisation that affect their achievement. Financial budgets are consistent with such a plan. Program goals are defined initially. Performance indicators are then established for each program to measure and assess results. Strategic plans and budgets are later modified and updated to reflect program performance reports. Such reports contain actual performance indicators and make comparisons with appropriate yardsticks. Explanatory data identify success and failure in achieving results.

Outcome measurement requires that goals and objectives be clarified, measures be linked to such goals, data collection methods be valid and reliable, and the time span for collecting and analysing outcome data be identified. Outcome measures are used in measuring progress, adjusting or redesigning programs, developing new initiatives, evaluating environmental conditions, shaping an organisation's strategic direction and stimulating learning capability.
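The report-and-revise cycle just described, in which actual indicators are compared with agreed yardsticks and shortfalls trigger the gathering of explanatory data, can be sketched as follows. All field names, the 5 per cent tolerance and the figures are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    target: float  # the agreed yardstick
    actual: float  # the measured program result

def performance_report(indicators: list[Indicator], tolerance: float = 0.05) -> dict[str, str]:
    """Flag each indicator so explanatory data can be gathered for shortfalls."""
    report = {}
    for ind in indicators:
        if ind.actual >= ind.target * (1 - tolerance):
            report[ind.name] = "on target"
        else:
            report[ind.name] = "shortfall: gather explanatory data, revisit plan"
    return report

report = performance_report([
    Indicator("program completion rate (%)", target=70.0, actual=68.0),
    Indicator("participants gaining skills (%)", target=60.0, actual=41.0),
])
print(report)
```

The point of the sketch is that the comparison alone only detects error; the ``gather explanatory data, revisit plan'' branch is where the feedback into strategic plans and budgets happens.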
The process is facilitated by integrating team learning, strategic management, organisation-wide planning sessions, and participation of stakeholders (Northern Californian Community Services Council, 1995). It is a collaborative process: a broad cross-section of the community, service providers, funders, policy makers and clients negotiate intended outcomes.

Some caution is necessary. A substantial problem arises in obtaining accurate and reliable data for measurement over longer time periods, and in the costs associated with doing so. For example, consider a pre-school providing education to underprivileged children. The intended long-term outcome is that the children will perform better than they otherwise would in primary and secondary school. This requires a sufficient time span to accumulate outcome data and produce a measure. The dilemmas are that people change radically and other factors affect the decision to complete formal schooling and education; that while funders work on a basis of fiscal years, outcomes have a longer life span; and that, unfortunately, explanatory data are often neglected.

Measuring outcomes and organisational learning

The importance of organisational learning is acknowledged in management literature (Argyris and Schon, 1978; Fiol and Lyles, 1985; Levitt and March, 1988; Senge, 1990; Huber, 1991), yet the non-profit sector is rarely seen as a rich source of learning. The contract culture and competition for funding imply a need to develop strategies to learn quickly (Pedler et al., 1997). Organisations that will truly excel in the future understand the significance of organisational learning in affecting performance (Senge, 1990). Outcome measurement stimulates learning by acquiring meaningful information and applying it to manage programs more effectively.

Definition of organisational learning

Learning is referred to as a process of detecting and correcting error, where error is defined as any feature, knowledge or knowing that inhibits learning (Argyris and Schon, 1974; Argyris, 1977a; 1977b). Several fragmented definitions include: encoding and modifying routines; acquiring knowledge useful to the organisation; increasing the organisational capacity to take productive action; interpretation and sense making; developing knowledge about action-outcome relationships; and detection and correction of error (Edmondson and Moingeon, 1996). Derived from systems theory, learning builds on past knowledge and experience, that is, organisational memory (Argyris and Schon, 1974). This in turn depends on questioning policies, strategies and procedures. Learning occurs when an organisation's unit acquires knowledge that it recognises as potentially useful to the organisation (Huber, 1991). It is related largely to a permanent change in behaviour (Mock et al., 1972; Kolb et al., 1974; Argyris and Schon, 1978; Stata, 1989; Senge, 1990). An organisation must test and improve its mental models and behavioural routines. Such learning occurs by understanding changes in the external environment and then adopting beliefs and behaviour compatible with such changes (Espejo and Belahav, 1996). The rate of learning must be greater than, or equal to, the rate of change in the environment (Dixon, 1994). Learning is derived from a process of experience, reflection, hypothesis building and testing.
Single and double loop learning

The existence of different levels of learning is of central importance. Examples include:
- single versus double loop learning (Argyris and Schon, 1974);
- lower versus higher level learning (Fiol and Lyles, 1985);
- adaptive versus generative learning (Senge, 1990); and
- incremental versus second-order learning (Ciborra and Schneider, 1992).

In each construct the lower level concept makes progress towards stated goals; the higher level concept involves questioning the appropriateness of the goals themselves. Different levels of learning have dissimilar impacts on the strategic management of the firm (Fiol and Lyles, 1985). The literature draws extensively from the earlier work of Argyris and Schon (1974; 1978)[1], who refer to single loop learning and double loop learning.

Single loop learning can be compared to the reaction of a thermostat: it detects deviations from the prescribed temperature and turns the heat on or off. When the thermostat turns the heat on or off, it is keeping with the program of orders given to it. The thermostat does not analyse the reasons for the variance. This is single loop learning, because the underlying program is not questioned. Argyris and Schon (1978) assert that the overwhelming amount of learning is single loop because organisations are designed to identify and correct errors. They also argue that organisations are typically quite good at such learning, which is relatively straightforward because the errors are usually attributable to defective actions or strategies. If the thermostat could question whether it should be set at 68 degrees, it would be capable not only of detecting error but of questioning the underlying policies and goals as well as its own program.

Double loop learning is more comprehensive, challenging current operating assumptions and often entailing changes to existing norms and practices. It involves deeper inquiry and questioning, sometimes implying power and conflict struggles. Error correction may require a learning cycle in which the norms of the organisation are themselves modified. Double loop learning requires a double feedback loop, connecting the detection of error not only to strategies and assumptions but to the very norms which define effective performance. The underlying program is itself questioned. The error is diagnosed as an incompatibility of governing values. Where the response is appropriate, the error is corrected and the learning cycle ends.

Implications for organisational learning

Outcome measurement is a tool for learning. It emphasises that feedback is derived from a systems approach and is argued to be a powerful device to facilitate the learning process.
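The thermostat contrast above can be made concrete in a short simulation: single loop learning corrects deviations from a fixed setpoint, while double loop learning also questions whether the setpoint itself is appropriate. The 68-degree setpoint follows the text; everything else (function names, the comfort signal, the 2-degree revision) is an illustrative assumption:

```python
def single_loop(temp: float, setpoint: float = 68.0) -> str:
    # Single loop: correct the deviation; the setpoint (the governing
    # program) is never questioned.
    if temp < setpoint:
        return "heat on"
    return "heat off"

def double_loop(temp: float, setpoint: float, occupants_comfortable: bool) -> tuple[float, str]:
    # Double loop: first question the norm itself -- is 68 degrees the
    # right goal? -- using feedback from outside the control loop, then
    # run the ordinary single loop against the (possibly revised) norm.
    if not occupants_comfortable:
        setpoint = setpoint + 2.0  # revise the governing value
    return setpoint, single_loop(temp, setpoint)

print(single_loop(65.0))               # -> heat on
print(double_loop(69.0, 68.0, False))  # -> (70.0, 'heat on'): norm revised, then acted on
```

Note how the single loop alone would have switched the heat off at 69 degrees; only the second feedback loop, carrying information about whether the norm serves its purpose, changes the outcome.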
The technique evaluates a program by its ability to acquire inputs, process the inputs, and channel the outputs for maximum external effect. Individuals typically respond to traditional performance measurements by modifying strategy within the current norms of the organisation. For example, suppose one goal of a positive parenting program is to be cost efficient, and the ratio of inputs to outputs suggests resources are not being used efficiently. Management may react by implementing cost control strategies such as activity-based costing. This is illustrative of single loop learning because the underlying program is not questioned. Such learning is derived from internal data; it is based on repetition, short term, relatively straightforward, and typically captures only a single element of what an organisation does.

In contrast, outcome measurement provides knowledge of the effect of programs in the external environment, providing superior information. While planned outcomes may differ according to different constituencies, their participation in choosing intended outcomes and courses of action is crucial. Negotiated outcomes lead to a reduction in dysfunctional group dynamics. Outcome data are quantitative and explanatory in nature. Information is fed back into planning systems, and goals and strategies are changed accordingly to effect learning.

For example, suppose an intended outcome of a drug rehabilitation program is to reduce the abuse of ``hard'' drugs, particularly heroin, in order to achieve reduced drug dependency and fewer drug-related crimes. Outcome measures monitor program retention and completion rates, the number of persons free of heroin use, the number of persons free of all substances, and reductions in drug-related crime. Outcome data also monitor persons in treatment and the estimated total number of people in the community who are drug dependent, revealing a gap between these measures. The underlying program is questioned. Goals and strategies are revised and new services are offered to fill the gap: natural therapy, methadone treatment, quick detoxification, and rehabilitation services that specifically cater for ethnic minorities. Key constituencies such as hospitals, clients (drug addicts), governments, employees, and donors are invited to participate and to negotiate the type of outcome data to be used, for example estimating the daily cost of drug habits to establish the number of crimes, such as robbery, that would be committed to support those habits. Customer surveys are modified to incorporate quantitative and qualitative data about the sex, age, length of addiction, ethnic origin, retention rates, failure rates and arrest rates of clients. Political factors, such as a desire to reduce crime, are factored into the analysis. Explanatory data are analysed to follow up on longer-term outcomes of treatment, to ascertain whether specific crimes have been committed which have not led to arrest, and to monitor the drugs involved in such crime. If crime is associated with a number of different drug habits, the norm and goal are modified so that clients are essentially free of narcotics of any kind, not just heroin.
This is characteristic of higher level learning, as underlying policies and goals have been questioned critically and the response to error takes the form of an inquiry into traditional norms.

An example of how measuring outcomes led to higher learning in a program servicing blind and vision-impaired persons is shown in Figure 2. The intended outcome of the program was to provide educational and mobility training services to blind and vision-impaired people, hence making opportunities available to improve their quality of life. Outcome measures derived from comprehensive customer surveys revealed that consumers were extremely dissatisfied with mobility training. This prompted the organisation providing the service to question the underlying program. Key constituents were asked to participate in this process. It led to an investigation into the current norms and procedures of the organisation. The goal was changed to one of enhancing educational opportunities, as mobility was really outside its core activities. A decision was made that mobility training be outsourced to an external organisation specialising in such training. Measuring outcomes of this program clearly affected learning.

Figure 2. How outcome measures lead to learning: a blind and vision impairment accessibility program

1. Identify intended program outcome objectives. To provide education and mobility training to blind and vision-impaired persons to enhance opportunities to improve quality of life.

2. Select outcome measures/evaluation criteria. All clients were surveyed to ascertain satisfaction with the quality and level of service according to: age; geographical location; blindness or vision impairment; quality of resources provided for education and leisure; satisfaction with mobility training; general comments.

3. Invite stakeholders to participate in the feedback process. Key constituencies such as clients, government, employees, private donors and community representatives were then invited to comment on outcome measures and make suggestions for improvement.

4. Feed outcome data back into planning systems to effect higher learning. Outcome data revealed that levels of overall client satisfaction associated with the quality of materials provided to improve education and leisure time were high. Clients were dissatisfied with mobility training exercises, leading to a questioning of the underlying goal. The organisation questioned whether it should provide mobility training at all. A decision was reached that this was outside its core expertise, and the service was outsourced to the Guide Dogs Association. The program outcome goal was modified accordingly. Strategies were questioned and changed to focus more on providing a wider range of better quality educational tools for blind and vision-impaired persons.

5. Reduction of intergroup conflicts. A reduction of group conflict was experienced. Those employees who provided mobility skills traditionally received unsatisfactory performance appraisals, which had previously caused conflict with other employees whose client feedback was of a high standard.
Conclusions

A renewed focus on results and outcomes has been accompanied by demands for more accountability and criticisms of alternative performance measurements. Caution is required to ensure that these demands do not impede outcome measurement's principal benefit: effecting learning. Quality control issues are at stake, particularly when outcome data are used for comparability. It may be tempting to use the data for external comparisons, but this is dangerous where funding decisions are based on those comparisons. Even where like programs are grouped together for the sake of comparison, the results should be interpreted with scepticism. A possible solution is to outsource customer surveys and have external funding agencies perform periodic audits of the data, but these solutions carry significant cost implications. Preserving data integrity and objectivity is a major issue if outcome data are to effect learning, provide vital information to key constituencies and enable programs to be managed more effectively. Measuring outcomes extends beyond the basic error detection and correction prompted by traditional performance measurement, which enabled only single loop learning. With knowledge of external effects, errors can be diagnosed as incompatibilities in internal governing values and norms; managers can develop new norms, while existing objectives and policies are seriously questioned. It

is the modification of goals, strategies and norms in parallel with such a diagnosis that prompts higher level learning. Effective outcome measurement requires participative styles in which stakeholders negotiate intended outcomes, agree qualitative and quantitative yardsticks, collect valid and reliable data, and feed information back into strategic planning systems. Its contribution to the sector lies in its ability to effect learning. There is pressure to measure outcomes for reasons of accountability, but the primary reason to do so should be to build learning capability. Future work ought to examine policies such as the GPRA, the reliability of existing frameworks, and their implications for learning and ultimately performance. The principal merit of measuring outcomes is its potential to enable programs to be managed more effectively. Research agendas can examine how performance measurement systems can be designed to consistently reflect external information. The profit-oriented sector has recently moved away from reliance on accounting-based performance measures towards a focus on external performance information, effecting higher level learning. This trend is relevant to the non-profit sector and warrants further study.

Note

1. This is consistent with the definition of learning adopted in this paper, which is that of Argyris and Schon (1978), whose earlier work attempted to theorise organisational learning.

References

Abernethy, M. and Chua, W.F. (1994), ``A field study of organisational control and culture change in a non-profit organisation'', Working Paper, University of New South Wales.

Anthony, R. and Herzlinger, R. (1980), Management Control in Nonprofit Organisations, Irwin, Homewood, IL.

Anthony, R.N. and Young, D. (1988), Management Control of Non-profit Organisations, Irwin, Homewood, IL.

Argyris, C. (1977a), ``Double loop learning in organisations'', Harvard Business Review, September/October, pp. 115-25.

Argyris, C.
(1977b), ``Organisational learning and management information systems'', Accounting, Organisations and Society, Vol. 2 No. 2, pp. 113-23.

Argyris, C. and Schon, D. (1974), Theory in Practice: Increasing Professional Effectiveness, Jossey-Bass, San Francisco, CA.

Argyris, C. and Schon, D. (1978), Organisational Learning: A Theory of Action Perspective, Addison-Wesley, Reading, MA.

Baumler, J.V. (1971), ``Defined criteria of performance in organisational control'', Administrative Science Quarterly, Vol. 16 No. 3, pp. 343-50.

Brinkerhoff, D.W. (1979), ``Review of approaches to productivity, performance, and organisational effectiveness in the public sector: applicability to non-profit organisations'', PONPO Working Paper 10, Institution for Social and Policy Studies, Yale University, New Haven, CT.

Ciborra, C.U. and Schneider, L.S. (1992), ``Transforming the routines and contexts of management, work and technology'', in Adler, P.S. (Ed.), Technology and the Future of Work, MIT Press, Cambridge, MA, pp. 269-91.

DiMaggio, P. (1988), ``Non-profit managers in different fields of service: managerial tasks and management training'', in O'Neill, M. and Young, D. (Eds), Educating Managers of Nonprofit Organisations, Praeger, New York, NY, pp. 51-69.

DiMaggio, P. (1996), ``Measuring the impact of the non-profit sector on society is probably impossible but possibly useful: a sociological perspective'', Independent Sector Conference, 5-6 September, Washington, DC.

Dixon, M. (1994), The Organisational Learning Cycle, McGraw-Hill, New York, NY.

Drucker, P.F. (1978), ``Managing the third sector'', The Wall Street Journal, 3 October.

Drucker, P.F. (1990), Managing the Non-profit Organisation, Butterworth Heinemann, London.

Edmondson, A. and Moingeon, B. (1996), ``When to learn how and when to learn why: appropriate organisational learning processes as a source of competitive advantage'', in Edmondson, A. and Moingeon, B. (Eds), Organisational Learning and Competitive Advantage, Sage, London, pp. 1-37.

Espejo, P. and Belahav (1996), Organisation Transformations and Learning – A Cybernetic Approach to Management, John Wiley, New York, NY.

Etzioni, A. and Lehmann, L.W. (1967), ``Some dangers in `valid' social measurement'', Annals of the American Academy of Political and Social Science, Vol. 323, pp. 1-15.

Fiol, C. and Lyles, M. (1985), ``Organisational learning'', Academy of Management Review, Vol. 10 No. 4, pp. 803-13.

Government Performance and Results Act of 1993 (US), Pub. L. No. 103-62, 107 Stat. 285.

Herman, R. and Heimovics, R. (1994), ``Cross-national study of a method for researching non-profit organisational effectiveness'', Voluntas, Vol. 5 No. 1, pp. 59-85.

Hofstede, G. (1981), ``Management control of public and not-for-profit activities'', Accounting, Organisations and Society, Vol. 6 No. 3, pp. 193-211.

Hopwood, A. (1974), Accounting and Human Behaviour, Accountancy Age Books, Haymarket, London.

Huber, G.P.
(1991), ``Organisational learning: the contributing processes and the literatures'', Organisation Science, Vol. 2 No. 1, pp. 88-115.

Kanter, R.M. (1979), The Measurement of Organisational Effectiveness, Productivity, Performance and Success, Program on Nonprofit Organisations, Yale University, New Haven, CT.

Kolb, D., Rubin, I. and McIntyre, J. (1974), Organisational Psychology: An Experimental Approach, Prentice-Hall, Englewood Cliffs, NJ.

Levitt, B. and March, J. (1988), ``Organisational learning'', Annual Review of Sociology, Vol. 14, pp. 319-40.

Milofsky, C. (1988), Community Organisations: Studies in Resource Mobilisation and Exchange, Oxford University Press, New York, NY.

Mock, T., Estrin, T. and Vasarhelyi, M. (1972), ``Learning patterns, decision approach, and value of information'', Journal of Accounting Research, Vol. 10 No. 1, pp. 129-53.

Murray, V. and Tassie, B. (1994), ``Evaluating the effectiveness of non-profit organisations'', in Herman, R.D. (Ed.), The Jossey-Bass Handbook of Non-profit Leadership and Management, Jossey-Bass, San Francisco, CA.

Nielsen, W.A. (1979), The Endangered Sector, Columbia University Press, New York, NY.

Northern California Community Services Council (1995), Understanding Outcomes, Northern California Community Services Council, San Francisco, CA.

Osborne, S.P. (1994), The Role of Voluntary Organisations in Innovations in Social Welfare Services, Joseph Rowntree Foundation Findings No. 46.

Osborne, S. and Tricker, M. (1995), ``Researching non-profit organisational effectiveness: a comment on Herman and Heimovics'', Voluntas, Vol. 6 No. 1, pp. 85-92.

Osborne, S., Bovaird, T., Mahon, S., Tricker, M. and Waterson, P. (1995), ``Performance management and accountancy in complex public programmes'', Financial Accountability and Management, Vol. 11 No. 1, pp. 19-38.

Pedler, M. and Aspinwall, K. (1996), The Purpose and Practice of Organisational Learning, McGraw-Hill, London.

Pedler, M., Burgoyne, J. and Boydell, T. (1997), The Learning Company – A Strategy for Sustainable Development, McGraw-Hill, London.

Ridgway, V.F. (1956), ``Dysfunctional consequences of performance measurements'', Administrative Science Quarterly, Vol. 1, pp. 240-7.

Salipante, P. (1995), ``Managing traditionality and strategic change in non-profit organisations'', Nonprofit Management and Leadership, Vol. 6 No. 1, pp. 3-19.

Scott, W.R. (1981), Organisations: Rational, Natural, and Open Systems, Prentice-Hall, Englewood Cliffs, NJ.

Senge, P.M. (1990), The Fifth Discipline: The Art and Practice of the Learning Organisation, Doubleday, New York, NY.

Stata, R. (1989), ``Organisational learning: the key to management innovation'', Sloan Management Review, Vol. 12 No. 1, pp. 63-74.

Stone, M. and Gershenfield, S. (1996), ``Challenges of measuring performance in non-profit organisations'', Independent Sector Conference, 5-6 September, Washington, DC.

Thompson, J. and McEwan, W. (1958), ``Organisational goals and environment'', American Sociological Review, Vol. 23, pp. 23-31.

United Way of America (1995), Measuring Program Outcomes: A Practical Guide, United Way of America, Washington, DC.

Warner, W. (1967), ``Problems in measuring the goal attainment of voluntary organisations'', Adult Education, Vol. 19 No. 1, pp. 3-15.

Ziebel, M.T. and DeCoster, D. (1991), Management Control Systems in Non-profit Organisations, Harcourt Brace Jovanovich, Orlando, FL.