Assessing and improving partnership relationships and outcomes: a proposed framework

https://doi.org/10.1016/S0149-7189(02)00017-4

Abstract

To date, no evaluation frameworks are specifically targeted at evaluating partnership relationships, as opposed to partnership programmatic outcomes. Following a discussion and definition of partnership, its defining features, and value-added, the paper proposes a framework for assessing partnership relationships in order to: (1) improve partnership practice in progress, (2) refine and test hypotheses regarding partnership's contribution to performance and outcomes, and (3) suggest lessons for future partnership work. The proposed assessment approach is continuous, process-oriented and participatory, and developmental. Targets of assessment include compliance with prerequisites and success factors, degree of partnership practice, the outcomes of the partnership relationships, partners' performance, and efficiency. Indicators and associated methods are proposed for each of these. The framework addresses the evaluation challenges of integrating process and institutional arrangements into performance measurement systems, thus contributing to relationship performance as well as program outcomes. It can also be used to enhance the theory and practice of partnership.
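As a purely illustrative aside (not part of the proposed framework itself), the five targets of assessment named above can be organized schematically, with indicator and method slots left open to be negotiated with partners during the developmental process. The sketch below uses Python and hypothetical placeholder names.

```python
# Illustrative sketch only: organizing the framework's assessment targets so
# that indicators and methods can be filled in iteratively with partners.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AssessmentTarget:
    name: str
    indicators: List[str] = field(default_factory=list)  # agreed with partners
    methods: List[str] = field(default_factory=list)     # e.g. survey, observation

# The five targets of assessment named in the abstract.
targets = [
    AssessmentTarget("Compliance with prerequisites and success factors"),
    AssessmentTarget("Degree of partnership practice"),
    AssessmentTarget("Outcomes of the partnership relationship"),
    AssessmentTarget("Partners' performance"),
    AssessmentTarget("Efficiency"),
]

for target in targets:
    print(target.name, "->", target.indicators or "indicators to be negotiated")
```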

Introduction

Throughout the public, private, and non-profit sectors, there is increasing experimentation with the use of partnerships, alliances, and networks to design and deliver goods and services. Partnership, in particular, is touted as the answer to many public service challenges. However, it remains unclear whether partnership actually enhances performance and, if so, how. The increase in the rhetoric and practice of partnership rests on the assumption that partnership not only enhances outcomes, whether qualitatively or quantitatively, but also yields synergistic rewards, where the outcomes of the partnership as a whole are greater than the sum of what the individual partners contribute. Some research supports partnership's contribution to improved performance. However, most evidence of inter-organizational partnerships' contributions to performance is anecdotal, except in some private sector alliances, where increased efficiencies can be quantified (Shah & Singh, 2001). In short, synergistic results are often sought and referenced, but they are rarely fully articulated and measured (Dobbs, 1999). Furthermore, the process, or 'how to', of creating such synergistic rewards is more hopeful than methodical or well understood.
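Stated in symbols (a gloss added here for clarity, not the author's notation): if O_i denotes the outcomes partner i could achieve with its own contribution, and O(P) the outcomes of the partnership as a whole, the synergy assumption is that

\[ O(P) \;>\; \sum_{i=1}^{n} O_i , \]

with the difference O(P) − Σ O_i representing the synergistic reward that, as noted, is rarely fully articulated or measured.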

Under the new public management, evaluation most often concentrates on results or outcomes. While these are important in ensuring responsiveness, accountability, and quality, they do not tell us much about how to improve public service delivery and enhance efficiency, especially when results are disappointing. Recent innovations in the private sector underscore the shortcomings of over-emphasizing, or looking exclusively at, outcomes (e.g. financial performance) while ignoring process dimensions. One danger is sacrificing long-term value creation for short-term performance (Kaplan & Norton, 2001). From a pragmatic perspective, focusing only on results is simply not an effective management approach. While outcomes may be 'valid as infrequent indicators of the health of entire systems', they are not useful for 'making tactical decisions' or interpreting performance within shorter time frames (Schonberger, 1996, p. 17).

Good evaluation practice suggests that, ideally, evaluation takes into account all key factors that may influence outcomes. This would encompass the institutions and incentives governing the implementation of policies and programs, including informal rules, regulations, controls, and structures (Squire, 1995). These dimensions are crucial components of cause-and-effect linkages that comprise strategy and ultimately lead to performance outcomes (Kaplan & Norton, 2001). Unfortunately, addressing process and institutional arrangements is among the most common difficulties associated with performance indicators and performance monitoring (Funnell, 2000).

The fad of performance management frequently ignores these elements. There are several reasons for this; among them are two evaluation challenges and one reality. First, processes and institutional arrangements are not only difficult to measure, they are sometimes difficult even to identify and articulate. Varying degrees of formalization, organization culture, and the broader social culture, including personal relationships, may yield exceptions to planned procedure. Understanding the gap between implementation plans and actual operations can be difficult. In addition, indicators of such process and institutional features are not easily quantified. These systems are also dynamic, necessitating continuous review with periodic adjustments in targets of analysis and program theory assumptions. The second challenge is one of attribution. How can we know that a particular process or institutional arrangement causes, or is even associated with, a particular outcome? Outcomes are separated causally and temporally from such inputs; attribution requires a sophisticated investigation of cause-and-effect relationships that may entail multiple intermediate stages. Even after such analysis, attribution may remain problematic.

One reality behind why processes and institutional arrangements are relatively ignored in the current emphasis on performance measurement is simply that they are neither immediate nor exciting. They are not immediate, in that it takes time for these arrangements to become institutionalized, and some element of them will always remain dynamic. They are not exciting because, in the eyes of direct program beneficiaries, they are less important than the outcomes themselves. While the general public may persist in this prioritization, it behoves public managers to become more technical and scientific about the way they assess and improve public programs as a means of enhancing the outcomes that are so valued. Do we expect anything less from the private, commercial sector? Consumers may continue to vote with their dollars, but investors want to know that companies are internally efficient and effective, enabling them to sustainably produce valued goods and services, respond to changes in the marketplace, and pursue innovation. Similarly, public managers and policymakers must be accountable not only to the recipients of public goods and services, but also to taxpayers, who want to know that those goods and services have come at an efficient price.

The purpose of this paper is to propose a framework for assessing partnership work in progress, with an eye to improving partnership practice as a means to enhancing outcomes. Such a task necessarily entails different layers of assessment, with slightly varying purposes. First, a developmental evaluation approach, i.e. one that seeks to improve work in progress, will be used to dialectically determine indicators, collect data, and assess partnership practice. This approach aims to ensure good partnership practice, consistent with our general knowledge of what partnership means, in order, second, to support a theory-based evaluation, which seeks to test the theory that partnership contributes to performance. Together, the two approaches help to maximize the effectiveness of the partnership in progress and, in the event the program is not successful, help avoid attributing the overall program's ineffectiveness to theory failure when it may in fact reflect process failure (Birckmayer & Weiss, 2000). In this sense, we need to examine partnership both as a means and as an end in itself. The proposed assessment approach and its application seek to: (1) improve partnership practice in the context of program implementation; (2) refine and test hypotheses regarding partnership's contributions to performance and outcomes; and (3) suggest lessons for future partnership work in order to maximize its potential to enhance outcomes.

The paper begins with a brief description of the nature and definition of partnership, followed by a review of existing conceptual frameworks that may be useful in assessing partnership work. The proposed framework is then presented, including the general approach, a discussion of what to measure, and a proposed methodology.

Section snippets

The nature of partnerships

Partnership is promoted both as a means of reaching efficiency and effectiveness objectives and as the most appropriate relationship as defined by value-laden principles. Based on a review of the literature (Brinkerhoff, 2002b), the ideal type of partnership can be defined as follows:

Partnership is a dynamic relationship among diverse actors, based on mutually agreed objectives, pursued through a shared understanding of the most rational division of labor based on the respective comparative

Existing conceptual frameworks for assessing partnership work

While the evaluation and performance management literature is replete with discussions of measuring outcomes and results, there is very little written about evaluating or assessing partnership relationships themselves. For example, Provan and Milward (2001) propose a framework for evaluating public sector networks at three levels: the community, the network, and the organization/participant. At the network level, they mainly suggest structural targets of analysis (e.g. number of partners, and

The proposed assessment approach

Evaluation theory and practice have evolved substantially. They now encompass process and implementation evaluations, in addition to impact and end-of-project evaluations, and they honor the use of a range of research designs and methods whose choice is driven by evaluation purpose (Dym & Jacobs, 1998). This latter evolution is largely credited to Patton's (1997) promotion of utilization-focused evaluation. Under this approach, evaluation is judged according to its utility and actual use. This

Assessment targets

Before discussing the general parameters of what should be assessed in a partnership relationship, an important caveat must be mentioned. Because many benefits of partnership work derive from the relationship itself, and because all relationships are dynamic, partnership assessment should be seen as an evolving process. While movement is not automatically uni-directional (i.e. always in a positive direction), the potential for partnership's added value tends to develop over time and

Proposed assessment methodology

The proposed assessment methodology conforms to the critical friend and developmental models described above. It is also multi-faceted. There are three primary methods proposed. A summary of the application of each to the assessment targets appears in Table 1. Sequencing of these methodologies is important to their iterative contributions. The partner survey will encourage participants to begin to reflect on the issues. The survey and an initial baseline assessment and process observation will
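As a hypothetical illustration of how iterative survey rounds could feed the baseline and subsequent reassessments described above, the sketch below tracks partner ratings per assessment target across rounds. The 1-5 rating scale and the mean aggregation are assumptions made for illustration only; they are not the instrument summarized in Table 1.

```python
# Hypothetical sketch: tracking partner-survey ratings per assessment target
# across iterative assessment rounds. The 1-5 scale and mean aggregation are
# illustrative assumptions, not the instrument summarized in Table 1.
from collections import defaultdict
from typing import Dict, List

rounds: Dict[str, List[float]] = defaultdict(list)  # target -> per-round scores

def record_round(target: str, ratings: List[int]) -> float:
    """Aggregate one round of partner ratings (1-5) for a target and store it."""
    score = round(sum(ratings) / len(ratings), 2)
    rounds[target].append(score)
    return score

# Example: three partners rate "Degree of partnership practice" in two rounds,
# letting the team see whether practice strengthens as the relationship develops.
record_round("Degree of partnership practice", [3, 4, 2])
record_round("Degree of partnership practice", [4, 4, 3])
print(rounds["Degree of partnership practice"])  # [3.0, 3.67]
```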

Review and next steps

The proposed partnership relationship framework addresses the evaluation challenges of integrating process and institutional arrangements into performance measurement systems, thus contributing to relationship performance as well as program outcomes. It also potentially enhances the theory and practice of partnership.

The developmental model and critical friend approach address the challenge of identifying, articulating, and measuring processes and institutional arrangements by: (1) maintaining

Lessons learned

This framework was originally developed for application to a federally funded consortium of non-profits and private consulting firms. The consortium management committee rejected the framework as proposed, claiming, among other things, that it was insufficiently specific. The framework's introduction and subsequent developments yield three important lessons for evaluating partnership relationships.

First, people continue to be uncomfortable with addressing issues of trust and other relationship

References (52)

  • M. Edwards, Too close for comfort? The impact of official aid on nongovernmental organizations, World Development (1996)
  • S. Albert et al., Organization identity
  • J.D. Birckmayer et al., Theory-based evaluation in practice: What do we learn?, Evaluation Review (2000)
  • J.M. Brinkerhoff, Government-NGO partnership: A defining framework, Public Administration and Development (2002)
  • J.M. Brinkerhoff, Partnerships for international development: Rhetoric or reality (2002)
  • P. Brown, The role of the evaluator in comprehensive community initiatives
  • D.L. Brown et al., Participation, social capital, and intersectoral problem-solving: African and Asian cases, World Development (1996)
  • C. Charles et al., Partnering for results: Assessing the impact of inter-sectoral partnering (1999)
  • CIVICUS (the World Alliance for Citizen Participation) (2001), CIVICUS index on civil society project, ...
  • S. Commins, World Vision International and donors: Too close for comfort?
  • J.P. Connell et al., Applying a theory of change approach to the evaluation of comprehensive community initiatives: Progress, prospects, and problems
  • P. Delp, A. Thesen, J. Motiwalla, & N. Seshadri (1977), System tools for project planning, Bloomington, IN: Program...
  • J.H. Dobbs, Competition's new battleground: The integrated value chain (1999)
  • P. Drucker, Managing the nonprofit organization: Principles and practices (1990)
  • B. Dym et al., Taking charge of evaluation, The Nonprofit Quarterly (1998)
  • A.E. Ellinger et al., Developing interdepartmental integration: An evaluation of three strategic approaches for performance improvement, Performance Improvement Quarterly (2000)
  • A. Fowler, Striking a balance: A guide to enhancing the effectiveness of NGOs in international development (1997)
  • S.C. Funnell, Developing and using a program theory matrix for program evaluation and performance monitoring, New Directions for Evaluation (2000)
  • J.J. Gabarro, The development of working relationships
  • D.A. Gioia et al., Organizational identity, image and adaptive instability, Academy of Management Review (2000)
  • J.C. Greene, Three views on the nature and role of knowledge in social science
  • C. Handy, Understanding voluntary organisations (1988)
  • J.R. Harbison et al., Institutionalizing alliance skills: Secrets of repeatable success, Strategy and Business (1998)
  • T.A. Huebner, Theory-based evaluation: Gaining a shared understanding between school staff and evaluators, New Directions for Evaluation (2000)
  • The grassroots development framework: Project objectives, baseline data, and results report (1999)