MAZI Articles

Working Toward Evidence-Based Process: Evaluation That Matters
by Ailish Byrne

For many years, communication practitioners have struggled to convince others of the value and centrality of communication processes to broader development goals and initiatives. Fundamental to this challenge is the need to move beyond “belief” in participatory approaches and to demonstrate effectively what participatory communication can achieve, i.e., its contribution to broader development processes and outcomes.

Inevitably, this puts the spotlight on evaluation. In this piece I argue that we need to realize, in practice, the potential of evaluation processes to grapple with the deeper, complex questions that underlie any social development process. Explicit attention to fundamental values and their implications for practice is a strength of genuinely participatory approaches to development, including communication for social change, which is always one dimension of larger development processes. Given their shared fundamentals and inevitable links, the evaluation of communication for social change has much to learn from, and contribute to, innovative, creative approaches to the evaluation of participatory development. What follows is a framework for the evaluation of communication for social change, set in the broader context of the evaluation of participatory development.

My focus is not the detail of how to evaluate communication for social change processes[1] but, rather, deeper underlying questions and assumptions that inform how this is approached and practiced—and its ultimate value. Nevertheless, the paper has an applied focus. I do not rehash the well-trodden debate on traditional versus more empowering and participatory forms of evaluation per se, but I show how a growing number of practitioners and theorists across sectors and disciplines are illuminating the way forward.

I consider the wider context of innovative, participatory approaches to the evaluation of social development that informs the epistemological and theoretical bases of the evaluation of communication for social change, and in which it is located. As the evaluation guru M.Q. Patton recently noted, “The people who are operating out of vision, social innovators who are learning to pay attention to their environment and what’s going on around them and acting responsively, they want the rigor of evaluative thinking that developmental evaluation offers but without the baggage of forced, imposed, and premature clear, specific, and measurable objectives” (Patton, 2007:114).

The Consortium is part of this growing movement.

Concepts of participation, empowerment, equity, voice, sustainability, local ownership and partnership are prevalent in development and evaluation today, but what do they mean? What do they imply for organizational learning and development? What are the implications for those who claim to work from a values base, a characteristic that, in theory, distinguishes much of the non-profit sector?

This article was inspired by a rereading of select evaluation literature, fuelled by active reflection on our practice. This includes pondering why we are repeatedly asked to introduce and support participatory monitoring and evaluation (PM&E) at such a late stage, when organisations eventually seek alternatives to traditional evaluations that are not giving them what they want or need. In contrast, we share with the authors cited here a belief in developmental evaluation that is firmly grounded in practice and lived realities. From such a perspective, monitoring and evaluation (M&E) is integral to participatory development processes, including communication for social change. It is not something to be tagged on at the last minute, or a “quick fix”. Dialogue is the essence of participatory evaluation and of communication for social change processes. Therefore I argue that evaluation should be integral to any communication for social change approach and that communication for social change processes are fertile ground to explore evidence-based process.[2]

Selected evaluation texts share with communication for social change a fundamental epistemological basis leading to a dialogical approach. The ideas of Schwandt (2003, 2005) on practice-oriented evaluation and “a return to the rough ground” are particularly relevant and cited below. Given dominant characteristics of today’s development world, this is particularly important: “This way of thinking about the centrality of practice to evaluation is especially necessary at the present moment because it helps restore a sense of social practices as moral-political and not simply scientific undertakings…. Reducing practice to performance—that is, to the efficient and effective accomplishment of service based on scientific evidence of what works—reflects an exceedingly narrow conception of the kinds of evaluation knowledge, learning, and inquiry relevant to enhancing practice” (Schwandt, 2005:104).

At a time of rapid change, growing internationalism, increasing calls for accountability and demonstration of impact, and shifts toward “virtual” working patterns, networks and partnerships, serious questions are raised and major challenges posed for evaluation. As M.Q. Patton states: “International diversity is challenging our thinking about what constitutes good evaluation work and what it means for evaluation to be used in different cultural and political contexts…. That narrow form of defining what “true” evaluation is, it seems to me, is challenged by the different cultural and political ways people think about knowledge, what constitutes knowledge, what constitutes evidence, how evidence impacts a political context, and the dramatically different role of nonprofits and governments in different places” (Patton, 2007:112).

This paper considers the implications for practitioners, development agencies and donors.

  • In Section 1, I consider the wider context in which social change occurs, highlighting the implications for evaluation.
  • Section 2 focuses on significant and welcome shifts in evaluation theory and practice in recent decades, arguing that evaluation should be grounded in lived practice and should be integral to organizational learning and social development processes, including communication for social change.
  • Section 3 considers factors that make organizations and the wider development environment more conducive to evaluation processes that are meaningful and of practical value to key stakeholders.

Section 1: Social Change in Context: How Development Paradigms (Implicitly or Explicitly) Impact on Evaluation

“For many of the social actors involved, interventions have no clear beginning marked by the formal definition of goals and means, nor any final cut-off point or “end date” as identified by the writing of final reports or evaluations” (Long, 2002:4).

“Processes of change… are already there, moving or latent, and must be read and worked with as natural processes inherent to the lives and cultures of people themselves. This kind of orientation, applied respectfully and skilfully, may indeed yield the impact and sustainability that is so desperately sought. Perhaps then our obsession with accountability may be allayed, not because we will have learnt how better to measure impact, but because we will have learnt how to practice better, to read change more accurately and work with it more effectively” (Reeler, 33).

“Development interventions are always part of a chain or flow of events located within the broader framework of the activities of the state, international bodies and the actions of the different interest groups operative in civil society. They are also linked to previous interventions, have consequences for future ones, and more often than not are a focus for intra- and inter-institutional struggles over perceived goals, administrative competences, resource allocation, and institutional boundaries” (Long, 2002:4).

To be realistic about what any development intervention can achieve, and about its evaluation, it is essential to appreciate the wider, longer-term contexts within which social change interventions occur. Thus the Theory of Social Change[3] subscribed to by the Consortium highlights the need to understand and observe change processes already under way, before intervening. Spending time listening, observing and learning about indigenous or existing change processes is therefore crucial, but all too rarely practiced.

The following characteristics of the development sector have major implications for its evaluation.

Development is today characterized by an unprecedented pace and degree of organisational change, which calls for flexible processes that can meet changing stakeholder needs. This is accompanied by a growing “projectization” of development, marked by ever more short-term projects and casual labour, in contrast to more sustainable, indigenous organizations. Inadequate appreciation of the theory of social change underlying particular programmes results in a lack of awareness of “the power that projects bring in forcing a narrow concept of change on situations where they do not apply”, as explored further in Section 2 below (Reeler, 8). Thus while the “speak” is becoming more participatory, bottom-up and horizontal, “there is, paradoxically, a strengthening of pressure for upward, vertical accountability to the North” (Reeler, 4).

The above rest on questionable assumptions, including: (i) that project interventions are vehicles to deliver development, while indigenous social change processes are at best ignored; (ii) that problems are visible to practitioners through cause-and-effect analysis; (iii) that unpredictable factors are “inconveniences to be dealt with along the way”; and (iv) that desired outcomes can be achieved through logical, linear planning (Reeler, 7).

Evaluation has faced particular challenges. Increasing calls to demonstrate “measurable” impact have fuelled a welcome focus on M&E, but increased volumes of aid have resulted in tighter controls and greater upward accountability, at a cost to institutional learning and development and, arguably, to social impact. Pressure for particular types of monitoring and evaluation has fuelled outsourcing, “robbing organizations of rich learning processes” (Reeler, 4). Moreover, judgmental evaluations and a dominant performance-indicator culture have diminished trust between stakeholders and fuelled a culture of “low trust” (Lewis). Despite the rhetoric of “evidence-based policy and practice”, there remain major barriers to achieving it, while evidence-based process has been neglected (Lewis).

Learning is fundamental to evaluation processes. While in theory different forms and sources of experiential learning and knowledge are encouraged, in practice learning is simultaneously oriented to performativity and uniformity—fuelled by so-called “accountability for efficiency and effectiveness” (Schwandt, 2003:358). Thus while lifelong learning for flexibility and change is emphasized, most educational institutions and organizations are “becoming more performance-oriented, consumer-oriented and corporatist” (ibid, 358).

Frustrations with the above scenario have fuelled significant attention to alternative evaluation paradigms and practice across sectors, particularly in the past two decades. These share the following fundamentals, with clear methodological implications:

Recognizing that development inevitably entails unpredictable, non-linear, messy processes: This highlights the need to deconstruct the typical assumption of intervention as a linear-rational process and, rather, “recognize it for what it fundamentally is, namely, an ongoing, socially-constructed and negotiated process, not simply the execution of an already-specified plan or framework for action with expected outcomes” (Long, 2002:6). Recognising social change processes as complex, unpredictable and involving different actors and factors calls for commitment to inclusive reflection and learning from experience, to understand what is working, why, and what changes are necessary. Thus project plans must always be seen as drafts and works in progress (Reeler, 26).

Appreciating multiple realities and perspectives on reality, that is, the practical relevance of epistemology: “Knowledge emerges out of a complex interplay of social, cognitive, cultural, institutional and situational elements. It is, therefore, always essentially provisional, partial and contextual in nature, and people work with a multiplicity of understandings, beliefs and commitments” (Long, 2002:2).

“A fundamental principle of actor-oriented research is that it must be based on actor-defined issues or problematic situations, whether defined by policy makers, researchers, intervening private or public agents or local actors, whatever the spatial, cultural, institutional and power arenas involved. Such issues or situations are often, of course, perceived, and their implications interpreted, very differently by the various parties or actors involved. Hence, from the outset, one faces the dilemma of how to represent problematic situations when confronted with multiple voices and contested ‘realities’” (Long, 2002:10).

Epistemology (i.e. theories of knowledge) raises fundamental questions about what constitutes “knowledge”, what is legitimised as “knowledge” and by whom, whose interests it serves, and so on. It therefore has a very practical relevance, as P. Reason notes: “the most important task for our age is to learn to think in new ways, and some time spent on practical epistemology is worthwhile” (Reason, 1988:38).

Appreciating this means seeing knowledge-making as a complex, ongoing, evolving process that inevitably reflects power differentials and encompasses struggles over contested meanings: “knowledge emerges as a product of interaction, dialogue, reflexivity, and contests of meaning, and involves aspects of control, authority and power” (Long, 2002:8).

Within a participatory paradigm, as in communication for social change approaches and others advocated here, emphasis is consciously on the voices, lived experience, perceptions and knowledge of intended beneficiaries and less powerful stakeholders (the “voiceless”). This fits closely with N. Long’s “actor-oriented perspective”, stemming from the belief that “social action and interpretation is context specific and contextually generated… [and] …Meanings, values and interpretations are culturally constructed but they are differentially applied and reinterpreted in accordance with existing behavioural possibilities or changed circumstances, thereby generating ‘new’ cultural standards” (Long, 2002:3).

Focusing on dialogue as the essence of development: In common with an actor-oriented perspective, communication for social change appreciates the importance and fundamentality of “the particulars of people’s lived-in worlds” (Long, 4). Thus the means and arenas for engaging diverse stakeholders in processes of meaning- and knowledge-making, which arguably constitute the essence of development and developmental practice, are highlighted, and the value of dialogue comes to the fore.

Similarly in communication contexts, growing evidence demonstrates the importance of strategies that give voice to affected groups and prioritise communication environments of interpersonal communication, dialogue and debate, rather than “education” through messages. Thus “only when people become truly engaged in discussions and talking about HIV [for example], does real individual and social change come about” (Panos, 46).

Schwandt usefully describes dialogue and conversation across differences as a “play of ideas [rather] than as a device of procedural rationality by means of which different interests are adjudicated… because the participants’ dialogues are not simply about the subject matter in question… but about the very identity of the participants themselves. It is through dialogue (reflection and conversation) of this kind that the participants may reach mutual understanding and realize their ‘interests’” (2001:232).

Having an explicit values basis: Conscious attention to addressing power imbalances and privileging the voices of people who are traditionally marginalized and “voiceless” implies challenging hierarchies at all levels and revisiting what constitutes “knowledge”. From such a perspective, planning, implementation and evaluation are focused on the priorities and needs of intended beneficiaries, with “success” determined by how far these are being met.

This has particular methodological implications, as discussed in Section 2. Thus, responsive evaluation “requires open and equal relationships and a certain power balance to give all stakeholders a fair chance in the process and to facilitate meaningful and genuine engagement” (Abma, 32).

Focusing on the “primacy of practice”: So what is development? From a dialogical perspective, where stakeholders engage with and influence each other, a programme is not a means to an end but, rather, “a practice that has different, sometimes conflicting, meanings for various participants” (Abma, 33).

Likewise, Schwandt (2001) notes, “what is a ‘real’ interest of each stakeholder is only realized as it is enacted in the situation at hand. Thus, there must be some way in which these different self-understandings and interests are actually engaged by and with one another and thereby held up to scrutiny and the possibility of transformation” (ibid, 231). In such open-ended processes, participants “recognize that the interpretations they reach are always situated, corrigible and subject to re-interpretation” (ibid, 231).

Starting thus from messy realities, or “rough ground”, has significant implications for evaluation: “The practical refers to the real, embodied, linguistic, material world of practitioners… This is ‘rough ground’… because it is always a contentious and contingent matter to deal with the concrete case in all its completeness and with all its differences from other concrete cases” (Schwandt, 2003:355). The “practical” is changeable and “we are self-interpreting, meaning-making beings, and the task of interpreting the value of our activities and actions is always contingent, complex, contested and never finished” (ibid, 355).

Schwandt thus advocates for “a return to the primacy of practice with its concern for recovering a kind of moral-practical knowledge by means of which we make our way through life as human beings” (Schwandt, 2003:361).

Implications of the above for evaluation and for individuals and organisations working in the field are considered in Sections 2 and 3. Methodologically, they call for detailed ethnographic understanding of everyday life and “the processes by which images, identities and social practices are shared, contested, negotiated, and sometimes rejected by the various actors involved” (Long, 2002:2). Therefore, good practitioners spend time with people observing and learning, asking questions, building relationships and trust and facilitating mutual learning from experience. That is, practicing action learning. This requires particular attitudes, competencies and abilities and highlights the importance of developing capacity at different levels.

All development practice ultimately seeks to improve the lives of intended beneficiaries, but it does so in myriad ways. Starting from a values basis, we are challenged to face the deeper social, political, ethical and moral questions about our practice, as many authors valuably highlight. Thus, as agents and doers, “the particular decisions and actions of interest are those concerned with value-rational questions such as: How should I be in this situation? What should be done? Is this desirable? The ways in which we answer such questions are in part constitutive of the kinds of human beings we choose to be” (Schwandt, 2003:354). The implications for evaluation and for evaluators are discussed in Section 2.

Section 2: Evaluation

“A core idea of the very idea of being a professional practitioner is precisely to wrestle with the ends or goods that a practice is intended to serve. To recover that idea, to provide an antidote to a narrow conception of evidence-based thinking, we need to restore the centrality of practice to evaluation” (Schwandt, 2005:105).

“The power of process use and the power of the notion that “what gets measured gets done” challenge us to think about our field well beyond its technical and methodological elements, important as those are. There are deep-seated moral and political dimensions to our actions that have consequences for people and programmes. Both the positives and negatives of process use invite us to look carefully and thoughtfully at the impacts of engaging in this activity called “evaluation”, quite apart from what findings themselves may yield. What we do in the process of generating those findings has its own impacts” (Patton, 1998:233).

“Perhaps the time has come to start looking at how the people on the front line can be helped to make better-informed and more effective judgments, rather than creating ever more precise performance targets” (Lewis, 2001:389).

Historical context: Evaluation involves active reflection and learning from experience, with the aim of improving practice and, ultimately, the outcomes of development. Perhaps more than any other arena, evaluation is a site for “engaging with differences of perspective, experience, value, ideology, power, privilege and possibility. At issue is the character or nature of that engagement” (Schwandt, 2003:356). As considered in Section 1, trends in evaluation mirror those in development more widely. Thus we have seen a shift from a perspective of evaluation as a set of skills or tools to be applied to a situation, to one that recognizes it as “a practical, material and political undertaking concerned with examining and enhancing the ways we make interpretative judgments of the value of human actions that unfold in specific social, historical and cultural contexts” (ibid, 357).

Traditionally, evaluation paradigms and practice have been imposed by donors for (upward) accountability purposes and long seen by recipients as a judgmental, policing mechanism. Growing frustration with such approaches, combined with increased recognition of the importance of, and advocacy for, community and practitioner voices being central to this process and agenda, has fuelled the development of more inclusive approaches, as has awareness of the need to ensure that evaluation findings are both useful and used. Increased attention to “learning organizations” has simultaneously raised serious questions about why the learning potential of evaluation is so often not realized. A large body of evidence now testifies to the benefits of meaningful participation in evaluation processes themselves.

Widely felt frustrations with the above scenario, for practitioners and organisations, have fuelled interest in, and commitment to, alternative developmental approaches, as seen in the proliferation of participatory monitoring and evaluation, in theory if not in practice. Many remain frustrated by trying simultaneously to meet (and appease) donor demands and to ensure effective organizational learning and improved practice. Thus, the field has seen significant shifts from narrow, expert-driven, upward-accountability evaluations to more inclusive processes that consider broader impacts and change, that widen accountability (downwards, to beneficiaries), that appreciate local context and characteristics, and that prioritise internal learning and capacity development.

The implications are not to be underestimated, and Carden thus speaks of giving evaluation away, which “calls for new approaches to evaluation, which both recognize the need for accountability and quality control and build the internal capacity of organizations for using evaluation for their own organizational planning and management purposes” (Carden, 75). This means engaging with deeper issues that lie at the heart of evaluation practice, going well beyond arguments about methodology to explore “processes and relationships that will enable a more creative, meaningful and appropriate M&E practice at all levels within the development sector” (Dlamini, 4).

In a developmental or learning paradigm, calls for a pedagogical, “practice-oriented approach” to evaluation, based on commitment to “a process of teaching and learning about the deliberation of value; one that is encouraging and facilitative of critical reflection and self-transformation in conversation with others” (Schwandt, 2005), foreground relationships and dialogue:

“Learning about the deliberation of value is a social, shared undertaking, not a private matter for each individual… we come to reasonable and just answers to questions of appropriate means and ends through dialogue and conversation with others. Consequently, this kind of evaluation is committed to the goals of participation, collaboration, and cooperation in the exploration of the evaluative imperative at the centre of practice” (Schwandt 2005:103).

These changes have been fuelled by wider appreciation of the limitations of traditional evaluations and recognition of the need to develop methodologies that effectively capture the complexity and richness of our work. Emphasis is on local ownership and remaining open to unfolding process and risk-taking. Thus M.Q. Patton speaks of “bringing complexity science to evaluation and social innovation… using evaluative thinking to support people who are on a journey without a clear, pre-determined destination. It’s an important journey driven by vision and values, but they don’t have performance objectives or measurable outcomes. Indeed, performance objectives would get in the way, would actually undermine openness and emergence” (Patton, 2007:114).

Alternative approaches, grounded in lived realities and practice, seek “to illuminate and open to critical reflection the kind of knowledge that resides not in scientific statements of program outcomes and effects but in practice. Thus, the kinds of knowledge it is concerned with are located in lived action… in the body… in the world… and in relations” (Schwandt, 2005:102). Therefore, evaluators are interested in experiences and perceptions, in particular as constructed in stories, which “reveal the meaning and ambiguity of everyday situations and experiences, and as such, stories illuminate what really matters to stakeholders” (Abma, 33).

The fundamentals of evaluation within a participatory paradigm, including communication for social change, include:

1. An explicit values basis

With equity and social justice as core values, the emphasis is on “giving voice” to the less powerful. Intended beneficiaries and participants have a key voice in determining “success”, influencing strategies and assessing performance. If monitoring and evaluation are to be meaningfully grounded in fundamental values, they need re-envisioning:

“…participation not simply as a tool for manipulation or a fashionable methodology, but as a process that allows for the “voices” of all concerned to be heard. It is these voices that are the source of data… Participation is not just rhetorical or symbolic. Rather, the knowledge, skills, abilities, experiences and capabilities that each individual brings to the organization [or project] are recognized” (Dlamini, 7).

The implications are significant and include addressing inequities of power and voice, challenging traditional hierarchies and engaging stakeholders in empowering educational processes:

“PM&E is more than just a method or set of techniques. Like other processes of knowledge creation, it is also a deeply embedded social and political process, involving questions of voice and power. In aiming to privilege the voice of weaker or more marginalized groups, PM&E often raises sensitive, or threatening, questions about responsibility, accountability and performance” (Gaventa and Blauert, 229).

2. Appreciation of the implications of multiple agendas and perspectives

As Gaventa and Blauert highlight, “validating multiple perspectives—an essential characteristic of PM&E—is, therefore, crucial in making people feel more secure about expressing their analysis and concerns” (239). Importantly, it can strengthen trust when actors with more structural, institutionalised power evaluate themselves and become more transparent about their own successes and shortcomings (ibid, 239).

Stemming from constructivist (rather than positivist) perspectives on knowledge, meaning is seen to be shaped through interactions. Thus, in development processes every stage of planning, implementation and evaluation entails “complex sets of evolving social practices, negotiations, and political and epistemological struggles that involve a multiplicity of actors with divergent and sometimes contradictory agendas” (Long, 2002:5). In such a paradigm monitoring has therefore progressed beyond merely managing impacts or outcomes and “must play a major role in creating a framework for negotiating common meanings and resolving differences and validation of approaches… the role of process monitors is then more of advocacy, facilitation or nurturing than analysis” (Gaventa and Blauert, 238). Implications for evaluators and facilitators are considered in depth below.

3. Explicit focus on enhancing understanding, on learning and capacity development, that is, on developmental evaluation

Process use “refers to using evaluation logic and processes to help people in programs and organisations to learn to think evaluatively… Learning how to think evaluatively is learning how to learn… It is a kind of process impact that organisations are coming to value because the capacity to engage in this kind of thinking has more enduring value than a delimited set of findings, especially for organizations interested in becoming what is now popularly called ‘learning organizations’… Specific findings typically have a small window of relevance. In contrast, learning how to think and act evaluatively can have ongoing impact” (Patton, 1998:227).

Responsive approaches aim to enhance understanding by engaging stakeholders in dialogues that strengthen mutual understanding and practice. Thus “stakeholders must engage in particular forms of dialogue for an evaluation to attain its aims. Stakeholders must be active participants for responsive evaluation to succeed to cover and include various interests and values. Dialogue is central to its success, because stakeholders learn about the experiences and frustrations of others only through conversations” (Abma, 34).

Likewise, Patton’s “process use” refers to “individual changes in thinking and behaving that occur among those involved in evaluation as a result of the learning that occurs during the evaluation process” (Patton, 1998:225).

Similarly, Schwandt speaks of “evaluation as pedagogy”, a teaching and learning activity “more about learning than judging; more about participants becoming critically aware of their own positions on issues and developing an understanding and appreciation of new and different perspectives and values… this learning process is made possible by dialogues of several kinds in which the evaluator played a variety of roles as a teacher and facilitator” (2001:232).

From such a perspective, reflection and learning from experience are central: “As a pedagogical engagement with practice, evaluation fully embraces the fact that we live in a contested socio-political landscape in which we constantly struggle to ‘go on’ with one another… this landscape is ‘rough ground’ because it presents us with paradoxes of the practical as we aim to make our way through the everyday activities of defining our responsibilities, designing policies, implementing programmes, delivering services and evaluating the merit and worth of our actions” (Schwandt, 2003:357).

Across sectors, the large body of educational perspectives shares a focus on facilitating learning and making evaluative judgments in specific situations, rather than on technical know-how and narrow instrumental knowledge of effectiveness and goal attainment (Schwandt, 2001:233).

4. Evaluation solidly grounded in practice and messy realities

“Evaluation in action treats real, situated practices: real decision makers, real participants, real actions – things different and richer than their theoretical representations in terms of programmes, programme theories, underlying mechanisms and outcomes produced in variable contexts. If we accept the premise that evaluation is a kind of inquiry that requires a pedagogical encounter with practice, then we must focus on that practice” (Schwandt, 2003:359).

In contrast to the “performativity” of much evaluation, “the ‘practical turn’ in evaluation always questions a central assumption at the heart of social practices in late modernity, namely, that we are fully capable of self-determination through reason, logic and evidence… it advocates a return to the primacy of practice with its concern for recovering a kind of moral-practical knowledge by means of which we make our way through life as human beings” (Schwandt, 2003:361).

Schwandt’s plea for evaluation grounded in practice and in “practical knowledge traditions” is very relevant to communication for social change, with its focus on relationships and networks of people, their obligations and responsibilities, their memories, language, and interactions (2005:102). Warning against the dangers of becoming disenchanted and cynical about the value of asking questions about the nature and meaning of organized social practices, he highlights the urgency of this today:

“This way of thinking about the centrality of practice to evaluation is especially necessary at the present moment because it helps restore a sense of social practices as moral-political and not simply scientific undertakings… Reducing practice to performance—that is, to the efficient and effective accomplishment of service based on scientific evidence of what works—reflects an exceedingly narrow conception of the kinds of evaluation knowledge, learning, and inquiry relevant to enhancing practice” (ibid, 104).

What does this mean for evaluators?

“Informed by an ethic of responsibility, the evaluator as teacher works as a knowledgeable guide helping practitioners enhance their exercise of the practical arts. How do and should practitioners stand in relation to the situation at hand? How do and should they determine what to value and what to attend to in this situation? How should they grasp who they are in relation to others involved in the situation? How do they and should they use evidence and concepts to help better understand the evaluative decisions they face? …how do practitioners engage in evaluative judgment, how do they deliberate, and how might they deliberate better” (Schwandt, 2003:362).

The above perspectives have significant implications for evaluators and those facilitating such processes. They call for demystifying the field, letting go of traditional “expert” roles and, rather, encouraging and supporting people to ask their own questions, share experience and learn from each other on a more equitable basis. Shedding any pretense at value-free research or practice, evaluators, like action researchers, advocate and promote evaluation as “a deliberative conversation about value and facilitate/orchestrate the examination of value-rational questions in a given practice” (Schwandt, 2005:103).

Methodologically, plurality implies that evaluation design emerges through conversation with key stakeholders. There is no blueprint, and appreciating the particularities of each context is crucial. Thus the evaluator carefully determines features of the situation at hand and “…brings his understandings of principles, values and so on embedded in evaluation tradition together with the relevant aspects of the lived realities of the case at hand to make reasoned judgments about how he should best proceed with this particular evaluation” (Schwandt, 2001:230).

Evaluators need to engage with a range of interests, broker exchanges of information between different groups and foster understanding and mutual learning. Thus as “critical interlocutors and interpreters… evaluators must play a role in determining, describing and explaining existing local interpretational practices” (Schwandt, 2001:234). As interpreter, the evaluator brokers different “cultural” understandings among groups (ibid, 234). The evaluator’s role is thus one of interpreter, educator and facilitator, being an enabling partner and collaborator rather than an expert (Abma, 35).

Therefore “learning in and from evaluation becomes more complex than models of knowledge transfer now make it out to be. The evaluator becomes a student, teacher and facilitator of deliberative evaluative practice, not a producer and dispenser of scientific wisdom… the evaluator as expert helps practitioners understand the demands of the practical” (Schwandt, 2003:362).

Readiness and ability to deal with unpredictable, complex processes and differences are crucial. As evaluations evolve, the inclusion of different stakeholders in PM&E brings new realities, priorities and interests, which often fuel conflict and demand careful management.

What challenges do these issues present?

Good evaluation practice has never been quick or easy and is inevitably fraught with politics at different levels. Also, all evaluation processes are significantly impacted by the wider organisational contexts and environments of which they are a part. Thus, organisational strengths will likely facilitate effective evaluation practice, while weaknesses will inhibit it. This is particularly true of meaningful participatory approaches that, beyond rhetoric and by definition, tend to highlight and raise complex, often uncomfortable issues about organizational cultures, systems and practice.

Evaluations demanded by higher or distant authorities continue to challenge and frustrate many practitioners. While instrumentalist, judgmental approaches to monitoring and evaluation claim to strive for efficiency, in reality they often stifle learning from experience. Also, many professional environments are not conducive to or supportive of innovation and risk-taking, even when long-established procedures are clearly not working. We are therefore challenged to “explore approaches to M&E that would enable us to let go of control and open us to the risk of making meaning out of our work, allowing new forms to take shape, enabling us to see these and learn from what is emerging” (Dlamini, 2).

For facilitators of PM&E and organisational development processes, creating and sustaining conducive environments remains a major challenge, as does effectively sharing, learning from and documenting practical experience, in ways appropriate for diverse audiences. Given inequalities at all levels, it is a constant battle to keep evaluation practice “a practical activity requiring the exercise of moral-political judgment”, rather than “a technical activity governed by methodological rules and narrow instrumental concerns” (Schwandt, 2001:229).

At deeper levels there remains a need to strengthen the conceptual and methodological bases of PM&E, to strengthen human and institutional PM&E capacity and institutional learning, and to apply PM&E more widely to new areas including issues of governance (Gaventa and Blauert, 243)[4]. We cannot ignore these challenges: “While the challenges are great, so are the stakes. Ultimately, asking questions about success, about impacts and about change is critical to social change itself. Learning from change is not an end in itself, but a process of reflection that affects how we think and act to change the future” (ibid, 243). Finally, as Schwandt valuably outlines, a practice-oriented approach to evaluation demands being at once philosophical, contextual, pragmatic and transformative (2005:103).

Section 3: Implications for the future of PM&E

“A vision-led approach, which may have an element of discovering the way forward, will need to have more flexibility of methods and time-lines and a greater need for learning from ongoing experience and adjusting plans and even the vision itself, as the realities of putting a Project into practice are brought to bear” (Reeler, 30).

This section considers the implications for those working in social development within the context of communication for social change, if evaluation is to be strengthened and of value. It covers factors internal to organizations and factors in the wider development sector, both immediate and longer-term. The aim is to highlight what will make environments more conducive to meaningful participatory evaluation processes.

In a rapidly changing and unpredictable field, with new stakeholders and their interests constantly coming on board, ensuring meaningful evaluation will remain challenging. As Gaventa & Blauert note, PM&E “highlights the complexity of social and power relationships amongst multiple-stakeholders” (229). However, much can be learned from experience to date. Overarching changes widely called for include establishing more open, flexible and responsive organizational processes. For evaluation this means fewer imposed accountability demands and methods and, instead, support for sustained processes of dialogue, collaborative reflection, learning from experience and risk-taking.

The following factors are widely recognized as facilitating PM&E:

  • Open and safe space for people to participate and voice their own views and concerns;
  • Enabling policies and the necessary financial support, in particular to legitimate the involvement of less powerful stakeholders;
  • Capacity, including for creativity and flexibility;
  • Incentives to reward staff for innovation, learning and adaptation;
  • Support and facilitation from able and creative intermediary institutions;
  • Adequate time;
  • Openness to and reward of risk-taking;
  • Institutional openness and willingness to participate fully;
  • Leadership and champions, for PM&E to be both effective and sustainable;
  • Senior-level staff involvement and commitment, including informed initial buy-in, so resources and support for evolving process and findings are sustained. This is particularly important as PM&E often unleashes uncomfortable, deeper questions about organizational practice, processes and priorities;
  • Strong and sustained management support, including dedicated human and financial resources;
  • Clarity about roles and responsibilities from the start;
  • Supportive networks for mutual exchange of experience and learning;
  • The support of relevant interest groups, internally and externally; and
  • A firm grounding in local context and culture.

Underlying the above are inter-linked, deeper-level implications, in particular the need:

To strengthen and equalise relationships and foster trust

“Our partners, who have more often than not been the subject of evaluation, bring strong direct experience to those issues that could strengthen our own use of evaluation as well as their control of the evaluation process in their own settings” (Carden, 191).

“The real work lies in emergent processes of building identity, relationships, leadership etc. that no project can predetermine or guarantee” (Reeler, 28).

The quality of relationships and dimensions of trust are highly significant. If facilitated well, PM&E can itself be a means of redressing power imbalances. In each instance we need to understand the existing interests, relationships and processes that support or inhibit change and decide on preferred relationships and strategies on this basis. This highlights the importance of supportive networks and of deeper consideration of what “partnership” and “collaboration” mean, while involving partners in meaningful ways. Experience demonstrates that a sense of ownership of the issue and of change processes, whereby affected groups are involved in setting the agenda, helps translate policies into effect (Lewis, 2001).

It also implies investing time in developing trust and safety, to ensure quality dialogues and relationships (Abma, 32). As noted, “building trust is a more complex relationship process that requires time and commitment… it requires that we move away from cumbersome reporting processes that focus on information instead of engagements that build and deepen understanding and connection” (Dlamini, 7). Trust requires more than enabling particular voices to be heard; it means that more powerful stakeholders need to evaluate themselves in honest and transparent ways (Gaventa & Blauert, 239).

There are implications for more powerful players letting go, supporting and trusting the experience and judgement of those on the front line (Lewis, 389)[5].

The above demands shedding expert roles and spending time with people, listening and learning rather than imposing. More ethnographic understanding of everyday life, social practices and local meanings is called for (Long, 2). Questions are raised about (mis)use of the terms “partners” or “partnership” if these merely reflect traditional power differentials. Experience suggests that failure to acknowledge power differentials, and hiding behind “partner” rhetoric, undermines honest dialogue and relationships (Reeler, 32).

To develop capacity for integral, expanded learning within institutions

“Essential to organisational learning is understanding how knowledge is acquired, how the resulting information is shared and interpreted, and how effective organizational memory is. Thus, organizational learning at its most basic is both the detection and correction of errors, and the application… of the lessons learned. Such learning is not always conscious or intentional. PM&E aims to make it more so” (Gaventa & Blauert, 235).

“From a learning perspective we see M&E as one of the pillars that give shape to the development sector and to the relationships that give it form” (Dlamini, 6).

“The concept of a learning approach to evaluation has major implications within the organization in terms of human resources and time investment in evaluation” (Carden, 89).

Organisations that are more flexible, open to unfolding process and findings, and more inclusive and transparent tend to foster meaningful participation and collective learning. Environments need to be considered safe for questioning, open sharing, critical reflection, learning and risk-taking.

Deeper learning, as in evaluation processes, involves unlearning, which is central to transformative change processes: “The practice here is of surfacing the hidden roots, revealing the repeated patterns of behaviour, culture, habits and relationships that unconsciously govern the responses to the experience of crisis that people have. Further work requires bringing to light the deeply hidden and no longer appropriate values, beliefs or principles governing people’s behaviours and habits – those that are real rather than the stated values and beliefs” (Reeler, 23). This is as true of organisations as of communities. Such processes entail engaging with resistance to change and fear of what might be lost, as is common in communication for social change processes.

Monitoring and evaluation processes need to be embedded in processes of wider organisational learning, such that they help to develop stronger, more creative and responsive organizations. PM&E can fuel institutional learning by developing systematic and adaptive ways of understanding achievements. For ‘ownership’ to progress beyond rhetoric, “learning needs to recognize the role and responsibility of each individual, and the personal or collective benefits or problems to be expected. In contrast to conventional M&E, PM&E has the potential to enhance this sense of ownership amongst stakeholders both within the institution and outside” (Gaventa & Blauert, 237). However, learning from M&E is not automatic; it demands the investment of time and space in strengthening relationships and deepening understanding of each other’s lived realities. Thus relationships, rather than systems and mechanisms of information flow, are central (Dlamini, 13).

To achieve its potential, it is crucial that M&E is fundamentally linked to and part of broader processes of organisational learning and development. The growth of attention to strengthening learning and establishing “learning organizations” in recent years is to be welcomed and has highlighted how challenging this can be in practice, as discussed below.

Participatory evaluation has much potential in this context.

To make evaluation and learning integral to wider organisational development processes

PM&E practitioners and writers all emphasise the importance of evaluation and learning processes being integral to wider organizational processes and practice. Although this seems obvious, factors that militate against it include professional silos, the artificial division of related responsibilities, weak capacity and understanding, little attention to reflection and learning from experience (i.e. to experiential knowledge), dominant professional and knowledge hierarchies, and inadequate higher-level support.

Ideally, M&E lies at the heart of development work and practice: “it is a process that is deeply ingrained into the way the organization works; it lives at the core of its identity, practice and dominant orientation” (Dlamini, 5). This implies going well beyond the realm of typical cause-effect evaluations and textual evaluation reports. Rather, M&E “has to live in the culture and orientation of the organisation as a whole and the individuals in it… have to contribute towards increased understanding, thinking and practice” (ibid, 11). Learning from M&E can fuel clarity about social purpose and potential impact, through processes of honest reflection and questioning.

Thus M&E processes should create space for people to express themselves and shape their experience into stories to be shared. They should engage stakeholders in questioning assumptions (often unconscious, inherited or learned ways of knowing and doing things) and in critical reflection to enhance understanding and mutual learning. They should give ‘voice’ to and affirm the contributions of key stakeholders. When linked to organisational learning, M&E informs an organisation or project’s direction and purpose.

These are dynamic processes. As participation goes to scale, evaluation and learning processes become increasingly complex as new stakeholders come on board. Thus there is a shift from focusing on primary stakeholders as the critical factor, to wider appreciation of the need for broader institutional change and the need to link actors at different levels of the system (Gaventa and Blauert, 230).

To broaden accountability

What does accountability mean? Who is accountable to whom?

“Accountability is increasingly recognized as relating not only to financial transparency, but also to learning about the social and economic impact of the organisation’s activities. This involves changing (and reversing) relationships amongst and between stakeholders – accountable through dialogue and disclosure already implies a certain openness to learning. For institutions to change, actors internal to the organization also need to be willing to probe their own organization, recognize and discuss different ‘hierarchies’, be open about mistakes as well as successes, and, above all, know that the opinions expressed by them can lead to internal as well as external change” (Gaventa & Blauert, 238).

Many have called for expanded, wider understanding of “accountability”, in particular for development practitioners to be more accountable to their primary beneficiaries, taking understanding well beyond meeting donor reporting requirements. PM&E can encourage greater responsiveness and ‘downward’ accountability by public, private and non-profit institutions (Gaventa & Blauert, 233). As Reeler highlights: “There must be financial accountability, up and down, but accountability for impact, for the work itself, is a much bigger question that can only be satisfied through restructuring our relationships as practitioners around collaborative processes of honest learning from experience” (Reeler, 33).

Schwandt usefully takes this further, calling for responsibility (with moral undertones) rather than accountability: “Accountability encourages us to regard learning in evaluation as a matter of acquiring knowledge of what works as a commodity that is bought, sold and applied. Responsibility reinforces the idea that learning unfolds in a reciprocal engagement that presumes a set of motivations and dispositions to do what is right; it regards learning and education as communal processes of becoming persons of a particular kind” (Schwandt, 2003:362). Others similarly place broader notions of accountability at the heart of development processes themselves, seeing our key challenge as one of “nurturing and building a culture of self-reflection and self-evaluation which will enforce new kinds of accountability” (Dlamini, 13).

The challenge and tensions of responding to demands for wider impact (strategic) accountability, rather than merely resource accounting (functional accountability), are not to be underestimated, as experience testifies (Gaventa and Blauert, 238).

To foster appropriate donor support, both conceptual and financial

High-level support is essential for all of the above and is fundamental in making particular environments more or less conducive to open, honest reflection and learning from practice in participatory processes. This does call for donors to let go, impose less and be more trusting and open to complex, unfolding processes, particularly in relation to evaluation. Thus donors “need an approach that provides resources for intuitively developed plans with broad outcomes, that trusts that something positive may emerge and is willing to invest in that possibility” (Reeler, 28). It implies questioning and revisiting dominant forms of evaluation, reporting and accountability, as discussed above. It means a move towards supporting action-learning processes (ibid, 29).

In the development communication field a recent report highlights the challenges this poses: “Progress will not always be simple to assess, with indicators buried within complex social patterns and emergent trends. The kinds of variable indicators used to mark programming success needs to be reappraised. More high-level support is needed for the development of M&E tools for working on enabling communication environments. This would improve both their effectiveness, and their legitimacy within the broader development sector… In order to promote and legitimise more empowering, more effective communication, it is important that senior figures within the multilaterals coordinate at least a series of common assumptions and principles for implementing and monitoring communication activities” (Panos, 37).

Many in the development sector have long called for more flexibility and openness in donor funding such that it recognizes the realities of evolving process and unanticipated consequences or impacts, while strengthening honest reflection and organisational learning. This highlights the importance of core funding within closer, more accountable learning relationships, to enable “flexibility and initiative according to changing conditions on the ground” (Reeler, 32).

Much of the above sounds far from new: emphasis on strengthening relationships and trust, on acknowledging and redressing power differentials, on capacity development, on dimensions of flexibility, openness and letting go. The importance of adequate time and supporting resources, in particular to help ensure the participation of more marginalised stakeholders, cannot be overstated. What these factors highlight is the power of forces that perpetuate dominant practice and the need for us all to help move the field forward.

To ensure more meaningful, inclusive and accountable evaluation processes, the onus is therefore on all of us to strengthen related capacity and to honestly share experience and learning. We need to advance the evidence base and share lessons learned about both process and outcomes.

Conclusion

Informed by practical experience and recent evaluation literature, this paper has highlighted the challenges and potential of more inclusive, educational and empowering approaches to the evaluation of social development and communication for social change.

I have questioned assumptions about development practice and its evaluation, in particular calling for explicit recognition that development agencies are only one player in a far broader field, a recognition that should be reflected in the demands and expectations placed on evaluation. This calls for greater realism about potential outcomes and impact, and for lessening the pressure on organizations to evaluate their work in questionable ways from which they themselves gain little. As a valuable contribution on HIV/AIDS communication strategies recently noted, “the most effective responses to HIV/AIDS are those which emerge from within societies; and they tend to be long-term, complex and difficult to evaluate. These are precisely the strategies which donors, despite their best intentions, find most difficult to support” (Panos, 4). As seen again and again, when institutions are too vested in project approaches, “it is difficult to muster the courage, let alone to find the time, to ask the difficult questions” (Reeler, 33).

Instead, it is important to grant space, time and support for more inclusive processes of sharing, reflection and learning, or developmental evaluation. Much evidence bears witness to the benefits of participation in evaluation processes themselves, as discussed. Recognising that different stakeholders “find themselves already in a process of valuing, of deciding whether they are doing the right thing and doing it well” (Schwandt, 2001:232), and are merely ‘put in the way’ of evaluation experiences, has methodological implications for development practitioners and evaluators.

An actor-oriented perspective grounded in practice and lived realities emphasizes the importance of ethnographic approaches, participant observation and learning. Practitioners and evaluators are challenged to shed their (often assumed or prescribed) “expert” status and to become learners themselves, as well as facilitators of broader learning processes. Thus, the particularities of “people’s lived-in worlds” are central (Long, 2002:4), and evaluation processes should offer opportunities for meaningful engagement through dialogue, to enhance understanding and strengthen practice. Evaluators are interested in experiences and perceptions, particularly as constructed in stories that “reveal the meaning and ambiguity of everyday situations and experiences, and… illuminate what really matters to stakeholders” (Abma, 33). Appreciation of local context and particularities means that evaluations may themselves be narrative and qualitative, telling before, during and after stories: “there is usually a rich story of change to hear from the people themselves, where impact can be very clearly felt and witnessed” (Reeler, 30). Thus, biography and story offer alternatives to simplistic cause-and-effect analysis, as “stories help people to reveal their knowledge, to acknowledge their experience and wisdom, to see the resources and resourcefulness they have but may have been blind to” (Reeler, 19). Growing recognition of the significance of stories, and of their potential for meaningful evaluations, has fuelled interest and notable developments in this area, including the Most Significant Change approach, which we will explore in follow-up articles.

I highlight the importance of devoting space, time and resources to reflecting on and learning from experience, in recognition that an imperative to evaluate, i.e., “a deliberative conversation about value, about the appropriateness and aptness of goals and means” (Schwandt, 2005:103), lies at the heart of professional practice. Thus dialogue, raising questions, identifying and clarifying values, beliefs, assumptions and forms of knowledge are critical to development processes themselves and central to evaluation.

Within a communication context, evaluation has a major role to play in fuelling positive change. As the Panos report argues, “we need greater efforts to prove the impact and legitimacy of more empowering communication methodologies. Donors need to invest in this process… local ownership, sustained community mobilization, political engagement and other characteristics of past success stories, are all best fostered through supporting an enabling communication environment” (Panos, 58).

We remain challenged to evaluate our actions in ways “at once more sensitive to the contingent and specific, and more open to the unthinkable and undisciplined possibilities of ambivalence” (see Schwandt, 2003:361). We need to learn from and be part of the creative and innovative mavericks of the sector (Reeler, 33) and are challenged to capture and share learning effectively from experience, such that evaluation is of value in today’s complex, dynamic environments.

I believe strongly that, like all participatory practice, “one is never ‘finished’ with becoming a ‘good’ practitioner of evaluation; it is an activity that is always unfolding” (Schwandt, 2001:229). Welcome and growing attention to deeper organizational learning and development processes foregrounds the role and potential of evaluation processes, which should be integral, and highlights the challenge of developing and strengthening evaluation capacity at all levels. This includes contributing to evidence-based process, to which communication for social change approaches lend themselves.

Finally, how can we ensure that evaluation matters? How do we know if our evaluation practice is itself succeeding and of value? What and whose criteria should determine this? There are significant implications for considerations of the validity and rigor of evaluative practice. These and other issues concerning shared learnings will be the focus of future articles.


References

Abma, Tineke A (2006). “The Practice and Politics of Responsive Evaluation”. American Journal of Evaluation Vol. 27(1): 31-43

Carden, Fred (2000). “Giving evaluation away: Challenges in a learning-based approach to institutional assessment”. In Estrella, Marisol et al, Learning from Change. Issues and experiences in participatory monitoring and evaluation. London: Intermediate Technology Pubs: 175-191

Dlamini, Nomvula (2006). Transparency of Process. Monitoring and evaluation in learning organizations. South Africa: CDRA. See www.cdra.org

Gaventa, John & Blauert, Jutta (2000). “Learning to change by learning from change: Going to scale with participatory monitoring and evaluation”. In Estrella, Marisol et al, Learning from Change. Issues and experiences in participatory monitoring and evaluation. London: Intermediate Technology Pubs: 229-243

Lewis, Janet (2001). “Reflections on Evaluation in Practice”. Speech presented at the UK Evaluation Society Conference, 8 December 2000. Evaluation Vol. 7(3): 387-394

Long, N., and Long, A. (eds). (1992). Battlefields of Knowledge: The Interlocking of Theory and Practice in Social Research and Development. London: Routledge

Long, Norman (2002). “An Actor-oriented Approach to Development Interventions”. Background paper prepared for APO Meeting, Tokyo, 22-26 April 2002.

Oral History Project Team/Patton, MQ (2007). “The Oral History of Evaluation, Part 5: An Interview with Michael Quinn Patton”. American Journal of Evaluation Vol. 28(1): 102-114

Panos (2003). “Missing the Message? 20 years of learning from HIV/AIDS”. London: Panos

Parks, W, Gray-Felder, D, Hunt, J & Byrne, A (2005). “Who Measures Change? An Introduction to Participatory Monitoring and Evaluation of Communication for Social Change”. New Jersey: Communication for Social Change Consortium

Patton, Michael Quinn (1998). “Discovering Process Use”. Evaluation Vol. 4(2): 225-233

Preskill, H & Torres, R (1999). “Building Capacity for Organizational Learning Through Evaluative Inquiry”. Evaluation Vol. 5(1): 42-60

Reason, Peter (1988). Human Inquiry in Action. Developments in New Paradigm Research. London: Sage Pubs.

Reeler, Doug (2007). A Theory of Social Change and Implications for Practice, Planning, Monitoring and Evaluation. South Africa: CDRA. See www.cdra.org

Schwandt, Thomas A (2001). “Understanding Dialogue as Practice”. Evaluation Vol. 7(2): 228-237

Schwandt, Thomas A (2003). “‘Back to the Rough Ground!’ Beyond Theory to Practice in Evaluation”. Evaluation Vol. 9(3): 353-364

Schwandt, Thomas A (2005). “The Centrality of Practice to Evaluation”. American Journal of Evaluation Vol. 26(1): 95-105


[1] In 2005 the Consortium published a key document and practical guides on the participatory monitoring and evaluation of communication for social change approaches. See: http://www.communicationforsocialchange.org/pdf/who_measures_change.pdf

[2] In her speech to the UK Evaluation Society Conference in December 2000, Janet Lewis of the Joseph Rowntree Foundation called for more evidence-based process. See Lewis (2001).

[3] Reeler usefully describes a Theory of Social Change as “an observational map to help practitioners, whether field practitioners or donors, including the people they are attempting to assist, to read and thus navigate processes of social change” (Reeler, 2).

[4] The Communication for Social Change Consortium is working in this area and in 2005 published a key document and supporting practical guides on the participatory monitoring and evaluation of communication for social change approaches. See http://www.communicationforsocialchange.org/pdf/who_measures_change.pdf

[5] As Janet Lewis of the Joseph Rowntree Foundation advocated, in a speech at the UK Evaluation Society Conference in December 2000.