MAZI Articles

Pushing the Boundaries: New Thinking on How We Evaluate
by Ailish Byrne

Evaluation experts today are challenging conventional thinking and coming up with innovative approaches. The Consortium recently has been working with UNAIDS, Panos-London and other organisations to strengthen how we evaluate social change communication programmes and processes. Last May, Ailish Byrne, the Consortium’s senior associate for research and evaluation, facilitated a meeting of experts in the field. Here Byrne highlights emerging trends, challenges and concepts that an upcoming publication will detail.

We have known for a long time that how we evaluate communication for social change (CFSC) and social change communication needs strengthening. We want to use emerging findings to improve programmes, improve how we show the impact of CFSC, learn from experience, share learning and manage multiple accountabilities (social as well as financial).

While there is wide agreement that evaluation needs strengthening, debates rage on about how best to do this. We consider the issues with particular reference to HIV/AIDS prevention programmes, but our research has far wider relevance.

The wider context is significant. Calls for better demonstration of impact have grown across the development sector. This trend has emerged in a context of widespread frustration: programme staff and fieldworkers often feel that evaluation systems and demands are imposed on them, some of which make little sense to them and fail to capture the essence of their programmes and achievements.

Rather than fostering learning, evaluation often stifles it: pressure to “show results”, i.e., particular results in particular ways, leads to findings and conclusions of dubious quality and relevance. Donors often rely on external evaluators, feeling that evaluation capacity is weak, that impact is not being adequately demonstrated and/or that programmes are not adequately using evaluation findings to strengthen their work. Other stakeholders feel marginalised, or left out of evaluation processes and related decisions altogether.

So evaluations too often end up as little-used reports on a shelf, viewed as having only limited value and use to key stakeholders. The rich learning potential of evaluation is often lost, and commitment to findings is minimal.

There are many reasons for these frustrations and the resistance to change that appear deeply ingrained at all levels. However, it is indisputable that some of them stem from a failure to question seriously the dominant assumptions, biases and habits that underlie the “norm” in evaluation, combined with limited room for manoeuvre, a lack of vision and a lack of resources to research and further develop alternatives.

Professional and organisational cultures tend to perpetuate the status quo. Given this context, the apparent widening of interest (especially at higher levels) in non-mainstream, “alternative” approaches is welcome. This trend parallels innovations across disciplines and sectors in an increasingly connected and networked world.

Our current work takes place in this environment, posing major challenges as well as exciting opportunities.

Looking to move beyond the polarised debates about evaluation methodologies and methods that have raged for decades, and that are themselves of limited value, we are focusing on what lesser-known and “alternative” approaches offer. This is in the spirit of complementing (rather than replacing) current practice. Nevertheless, it does raise significant questions about the factors that perpetuate more of the same and, in effect, serve to stifle alternatives. Leading practitioners and theorists repeatedly emphasise the need to deal with the bigger evaluation issues first, before becoming bogged down in detail about the merits of particular methods.

With this in mind, here are some key questions we must ask of evaluation practice. I will also introduce emerging concepts that are critical to the evaluation of CFSC and social change communication.

Dominant practice raises fundamental questions about evaluation:

  • What factors have informed the evaluation design?
  • What values, priorities and assumptions underlie the methodology?
  • How far do these reflect underlying programme values and principles?
  • Who will be involved and how?
  • Which voices will be heard and which are missing?
  • Who decides what is of value and on what basis?
  • Who decides what constitutes “success” or otherwise?
  • How are the findings likely/intended to be used, and by whom?
  • What could help to ensure appropriate use of findings?
  • To what extent is learning prioritised? How? Learning for whom?
  • If indicators are used, how are they selected?
  • How will findings be shared?
  • How can we ensure honesty and quality through the process?
  • What could be done to make evaluation more useful to key actors?
  • What could be done to make the work/organisational environment more conducive to the above?

Considering gaps and biases in dominant practice has highlighted the value of systemic thinking, complexity thinking (including complex adaptive systems); associated methodologies, such as large-system action research; and more participatory approaches, among others.

A useful starting point in any context is to distinguish between simple problems, such as following a recipe; complicated problems, such as going to the moon; and complex problems, such as raising a child. We must appreciate what each category means for evaluation.[1] From a social change perspective, we quickly realise that related initiatives, including HIV/AIDS prevention programmes, typically fall into the complex category. The major implications for evaluation are evident in the concepts outlined below. These stem from reflection on years of experience in the field, combined with recent focused research into the issues.

Below are key concepts, many of which are interlinked.

  • Holism: The whole is greater than the sum of the parts. The dynamic relationships between different parts of the system are critical, and fundamental enabling and disabling patterns and assumptions will have an impact across the system (Burns, 2007). Staying focused on the whole draws attention to the deeper, underlying dynamics of social change.
  • Complexity: Human systems are inherently complex and cannot be meaningfully reduced to their individual parts.
  • Interconnectedness: Change in any part of the system can have (unexpected) consequences for the whole. The relationships between parts are often more critical than the parts themselves.
  • Dynamic, in flux and evolving: The evaluation design itself should mirror this, and be regularly revised to capture broader systemic changes.
  • Non-linear: Outputs of one process feed into the next, in non-concentric, inter-locking spheres of influence (Eoyang & Berkas, 1998: 5).
  • Emergent and unpredictable: Systems transform themselves in unpredictable and unknown ways and have a life of their own. We must therefore track evolving patterns and shifts over time.
  • Unexpected: Seek the unexpected, which can emerge to be most significant. Capture “differences that make a difference” and learn from the “noise” in a system (Eoyang & Berkas, 1998).
  • Participatory: Contexts of multiple actors and multiple, diverse perspectives and types of knowledge call for participatory approaches. The participation of multiple and diverse stakeholders inevitably involves diverse, at times contradictory, perspectives (Midgley, 2007: 21).
  • Go beyond boundaries: Question and expand boundaries rather than assuming they are given. This includes questioning who can be a “knower” and what is legitimised as valid “knowledge.” More inclusive and more ethical positions are encouraged through critical boundary analysis (ibid.: 21). This challenges the notion of evaluation “experts.”
  • Foster learning and change by facilitating questioning: Self-reflect rigorously; question deep assumptions (ibid.: 22). This demands greater realism about what evaluations can achieve, to foster honest, critical reflection rather than the fabricated “findings” that unfeasible demands often fuel.
  • Recognise relationships are critical: Outcomes often have more to do with the interrelationships between different actors in, and elements of, a system than with particular actions (Burns, 2007).
  • Keep it simple: Less is more. Provide minimum guidelines or rules, and allow maximum flexibility for actors to get there in their own ways.

The above calls for approaches that are:

  • Contextually grounded, appropriate to and drawing on the assets of the particular context.
  • Integral to the social change process itself. Evaluation processes create feedback loops that make the evaluation an integral part of the wider initiative, calling into question notions of “objective,” outsider evaluation.
  • A “transforming feedback loop,” as Eoyang & Berkas put it: assessment activities should enrich and enhance the intervention activities (1998: 10).
  • Reaffirming rather than judging. Use evaluations to celebrate success and to amplify energy and commitment in the system, particularly in the early stages (Eoyang & Berkas, 1998: 10).
  • Capturing processes of change, including what may appear to be “small” changes.
  • Values-explicit, in line with underlying programme values and fundamentals.
  • Appropriate to the degree of complexity of the issues being addressed (see Rogers, 2008).
  • Open to what unfolds (including the unexpected), flexible and responsive.
  • Creative and “open”: rewarding innovation, reasoned risk-taking and learning through trial and error.
  • Adopting an ethos of complementarity and triangulation, recognising that different approaches are suitable for different issues and that a mix of methods will shed varied light on them.

Overall, they call for approaches that fit and reflect the underlying values and fundamentals of processes of social change and communication for social change.

HIV/AIDS contexts and prevention programmes pose particular challenges to evaluation because the field is so complex, diverse and multi-faceted, with critical socio-cultural, political, economic, environmental, gender and other dimensions.

Combine these dimensions with such sensitive issues as individual choice, behaviour and options (or the lack of them), given the messy realities of people's lives, particularly those who are most vulnerable. We are then challenged to consider evaluation approaches that can engage with the social contexts of HIV prevention and social change (issues of gender, inequality, discrimination, stigma, vulnerability etc.), rather than just the immediate causes of individual behaviour that are typically the focus.

The SCC Working Group of UNAIDS[2] has long been engaged with the challenges posed for communication and is shortly to produce a technical update on social change communication. Developed collaboratively by a diverse range of leaders in the SCC field, the update has highlighted yet again the urgent need for guidance and vision in the area of monitoring and evaluating SCC.

It is this urgent need that fuels interest in our work.

Questions include:

  • If lasting social change takes place over years, if not decades or generations, how can you assess steps in the right direction?
  • How do you measure what did not happen (prevention)? How do you track change over the long term and in such diverse contexts?
  • What proxy measures can you develop to capture “progress towards” broader social outcomes?
  • Can you compare “successful” experiences from such diverse socio-cultural, economic and institutional contexts? How?
  • What are the implications for notions (generally assumed) of scale-up or replicability?
  • How can you ensure wider learning, critical to lasting social change, through the evaluation process?

Here are some major challenges:

  • Fostering a shift towards social rather than technical perspectives and approaches;
  • Keeping social, rather than technical, issues to the fore; ensuring that messy, “difficult,” deeply-rooted issues are not neglected in favour of tracking changes that are easier to measure; widening what counts as “evidence”;
  • Building the evidence base and demonstrating impact in appropriate ways; securing legitimacy for a wider range of approaches, including many that are relatively new, especially at higher levels;
  • Ensuring that evaluations foster learning and critical reflection. Programme improvement remains a challenge but is firmly linked to oft-stated priorities of capacity development, sustainability, local ownership and empowerment; and
  • Proposing useful suggestions in constructive and appealing ways. Working from a basis of complementing (rather than replacing) current practice, we must not lose the essence of fundamental changes that the evaluation of social change processes calls for.

Engaging with these challenges does not make for smooth or easy evaluations, but it can make them more meaningful and more useful to those who should benefit from them. It is important to be more realistic and honest, at all levels, about what any social change process, and its evaluation, can achieve. In highly complex and sensitive fields like HIV/AIDS, this is even more critical, and there are growing calls for a shift from the notion of proof to that of reasonableness.

We must recognise that our ability to understand and predict the behaviour of highly complex social and human systems will always be limited (Midgley, 2007: 18). In addition, the impact of the wider environment in which any evaluation takes place, including organisational, socio-cultural, political and other environments, needs to be better appreciated. There remains a real need for sustained, longer-term commitment and support for evaluation in the development sector as a whole.

At the same time, everyone has a role to play. Exploring diverse case studies of successful social change, particularly from a whole-systems perspective, highlights the importance of multiple small, diverse efforts that together help to reach a critical “tipping point.” Importantly, these are not planned or linear processes; they are largely unpredictable and include “small” actions that can have great and unimagined impact. The important thing is to get started and do what you can in your own context. In relation to the issues discussed here, this might include experimenting with innovative approaches, negotiating room for manoeuvre in evaluations and evaluation contracts, and suggesting alternatives that can complement mainstream practice. Argue what the key principles of equity, participation, empowerment and local ownership mean for evaluation in your context. Argue to redress related imbalances in voice and in the forms of “knowledge” deemed legitimate. Document your experience and share reflection and learning more widely. Thankfully, there are numerous opportunities for this in today’s widely networked world.

Evaluation practice will not fundamentally change unless more development actors, at all levels, help build the momentum for positive change. We all share this responsibility. We urgently need more honest and reflective stories of innovation in practice, in social change, communication for social change and their evaluation, from across the world and across the development sector.

Puntos de Encuentro, an internationally renowned and respected multimedia communication for social change initiative in Nicaragua, featured in MAZI 16 (Lacayo, 2008), provides a living example. As Puntos vividly illustrates, such shifts imply a degree of risk-taking, open-mindedness and vision, combined with a willingness to let go on the part of senior stakeholders. Thankfully, there are promising moves in this direction being fuelled by forces both within and outside the development sector.

Thanks to Danny Burns, Rick Davies, Virginia Lacayo, Ricardo Ramirez and Robin Vincent, for inspirational reflections and discussion.

A report on the meeting in Sussex in May 2009 will soon be available on the CFSC Consortium website.

The UNAIDS report mentioned will be available on the Consortium website as soon as it is in the public domain. We will keep readers informed as the research process develops.

Select References
Burns, D (2007), Systemic Action Research. A strategy for whole system change. Bristol: Policy Press

Byrne, A. (2008), Evaluating Social Change and Communication for Social Change: New Perspectives. CFSC Consortium. MAZI 17 (November 2008)

Eoyang, G. & T. Berkas (1998), ‘Evaluation in a Complex Adaptive System’. In M. Lissack & H. Gunz (eds), Managing Complexity in Organisations. Westport, CT: Quorum Books.

Lacayo, V. (2007), 'What Complexity Science teaches us about Social Change.' MAZI 10 (February 2007).

Lacayo, V. (2008), 'When It Comes to Social Change, The Machine Metaphor Has Limits.' CFSC Consortium. MAZI 16 (August 2008)

Midgley, G. (2007), Systems thinking for evaluation. In B. Williams & I. Imam (eds), Systems Concepts in Evaluation. An expert anthology. American Evaluation Association.

Patton, M. Q. (1999), Utilization-Focused Evaluation in Africa. Evaluation training lectures delivered at the inaugural conference of the African Evaluation Association, Nairobi, September 1999.

Rogers, P. (2008), ‘Using Programme Theory to Evaluate Complicated and Complex Aspects of Interventions’. Evaluation, Vol. 14 (1): 29-48.

[1] Glouberman and Zimmerman’s valuable and widely used table sums up these differences. Cited in Rogers, 2008, p. 31.

[2] Ailish Byrne and Denise Gray-Felder of the CFSC Consortium are members of the SCC Working Group.
