
When It Comes to Social Change, The Machine Metaphor Has Limits by Virginia Lacayo

In Getting to Maybe, Westley, Zimmerman and Patton say that, when it comes to social change issues, especially those that imply long-term, complex processes, the best we can aspire to is “getting to maybe.”

Alan Fowler agrees: “The longer something takes, the less predictable the outcome. Longer timeframes—from political and social changes, through shifts in social institutions, to intergenerational changes—require us to move from relying on certainties in defining desired change to estimating probabilities.”

Most of us think this way. Over the years, most development professionals have come to agree that social change is a nonlinear, long-term and often unpredictable process that requires efforts at multiple levels.

However, most organisations continue to frame their strategies in measurable, cause-effect terms, as if their programmes could be evaluated in isolation from other efforts and could demonstrate short-term effectiveness.

This paradox characterises the communication for social change field. Because communication involves people, and people are unpredictable, attempts to assess programme effects miss their mark when social change is viewed as a predictable, linear process.

We keep planning our communication for social change initiatives in a linear way, and we design the evaluation of those initiatives with the same parameters. The result: we assume that complex problems and issues will act exactly as we predict and that our inputs will produce the outcomes we planned in the expected time frame. In addition, our evaluations risk leading us to a narrow (and even inaccurate) understanding of the whole system. And we miss positive benefits that lie outside the scope of the evaluation.

But there are limitations to framing our world as if it were a machine in which the whole is the sum of the parts and things happen in a linear and predictable way. According to this mechanistic perspective, the only thing we need to do is plan more carefully and in more detail. This essay tries to explain some of the implications the machine metaphor has for the planning and evaluation of social change interventions. Hopefully, it will spark Mazi readers’ curiosity and inspire them to find ways to approach social change planning and evaluation more systemically and holistically.

Simple, Complicated and Complex Problems

Zimmerman helps us understand the differences among the kinds of problems we face in promoting social change. She says there are three kinds of problems: simple, complicated and complex.

A simple problem is like following a recipe; a complicated one is like flying to the moon. In both cases we are dealing with knowable processes, even if they are unknown at the moment: we should be able to figure them out a priori.

To illustrate a complex problem, Zimmerman invites us to consider the process of raising a child: While child-rearing manuals abound, parents still struggle to get it “right.” For no sooner do they think that they have found the formula than the child changes, and they have to find a new approach. Add more children, each changing and affecting siblings, and all affect the parents, and you have the perfect picture of a continuously shifting landscape.

Raising a child cannot be scripted ahead of time. The example is a great way to help people understand that you can still move things forward and act as a parent even when the future is inherently unknowable. The situation changes constantly, and the relationship between you and your child is more important than any specific parenting intervention or plan.

Adam Kahane defines the different kinds of complexity a complex problem can have: “They are dynamically complex, which means that cause and effect are far apart in space and time, and so are hard to grasp from firsthand experience. They are generatively complex, which means that they are unfolding in unfamiliar and unpredictable ways. And they are socially complex, which means that the people involved see things very differently, and so the problems become polarized and stuck.”

Being able to identify what kind of problem/situation we are addressing is the first step towards finding the right approach to it. Simple problems, with low complexity, can be solved perfectly well—efficiently and effectively—using processes that are piecemeal, backward looking and authoritarian.

In contrast, highly complex problems can be solved only by processes that are systemic, emergent and participatory.

If we reflect on previous experience, successful social innovations combine all three kinds of problems: simple, complicated and complex. Yet each problem has been addressed according to its level of complexity. In this sense, as Yaneer Bar-Yam says, the most basic issue for organisational success is correctly matching a system’s complexity to its environment.

The Machine Metaphor and the Living Organism Metaphor

A metaphor is language that directly compares seemingly unrelated subjects, describing a first subject as being, or being equal to, a second subject in some way. Thus, the first subject can be described effectively because implicit and explicit attributes of the second subject are used to enhance the description of the first.

The worldviews of innovators and evaluators, their profound beliefs about the nature of the world, are reflected in the approaches they choose to employ in practice. Metaphors influence the questions we ask and the answers we find.
Brenda Zimmerman says current management and evaluation thinking—the way we understand organisations—largely assumes that a well-functioning organisation is like a well-oiled machine. This leads to the notion that performance is optimised when work is specified in detail and delegated to distinct operational units. “Organisation as machine” is the implicit metaphor that shapes how we describe organisations and how we work.

Many theories of management and change assume that viewing parts in isolation, specifying changes in detail, battling resistance to change and reducing variation will lead to better performance. And because we believe organisations are machine-like, we look for machine-like attributes and have come to accept the metaphor as true.

However, to see life as a whole—to observe what all life has in common—requires us to shift the way we normally look at things. We need to think as much about the process as we do about structure. Complexity, like all science, is a metaphor. It shapes our logic and perspective.

Complexity science is not a single theory. It is the study of complex adaptive systems, the patterns of relationships within them, how they are sustained, how they self-organise and how outcomes emerge.

Zimmerman says, in contrast to the machine metaphor, complexity science is built on a living organism metaphor: It studies complex adaptive systems with all their inherent messiness, unpredictability and emergence. Complexity suggests that relationships between the parts are more important than the parts themselves.

Glenda Eoyang also gives us hope when she explains that complexity science and other nonlinear approaches are used to understand the dynamics of a variety of physical and mathematical systems. Patterned behaviour in those contexts is analogous to, but not identical to, emergent behaviour in human systems. People are conscious of their own behaviour and that of others; they learn from past experience; and they have hopes and desires for the future that affect their behaviour and their expectations of others. Finally, and probably most importantly, people take intentional action to influence patterns as they emerge.

All these characteristics complicate the complex adaptive nature of humans and the systems they create. For this reason, a literal translation of complex adaptive systems (CAS) concepts to human systems is insufficient to help us see and influence the patterns that emerge from social interactions.

Not Getting Us There Faster—If at All

When a problem arises in our organisation, there is a strong tendency to try to figure out who is responsible for it. Someone should be fired; someone should pay; someone should be punished. Today, in an important step forward, there is an increasing tendency to use a systems perspective: to recognise that many factors may be responsible, too many to be identified individually.

Westley, Zimmerman and Patton tell us the story of Brazilians’ “highly efficient and effective” approach to the HIV/AIDS crisis and how a complex solution to a complex problem may be the best, if not the only, way to proceed.

“Brazil refused to sacrifice its currently infected generation, and chose instead to challenge the World Bank’s assumptions. … Rejecting the World Bank’s advice, Brazilians looked at the key relationships, the social capital, that existed in the country.

The question became how to provide drugs to all who need them and how to support existing relationships to enhance treatment compliance. The strategy included more than 600 NGOs and churches, as well as government, hospitals and generic drug companies. Each played a role. Each was leader and follower at turns. This was not a top-down strategy, nor was it entirely bottom-up. Once the treatment question was framed in terms of how to deliver help instead of to whom help would be offered, a flood of activity (and creativity) was released across Brazil (changing the framing of a question opens up possibilities, from problem-driven to asset-driven).”

According to Westley, Zimmerman and Patton, Brazil’s approach to HIV/AIDS was so effective because, by expanding “the definition of ‘resources’, social innovators were able to draw on abundance invisible to others.”

Complicated solutions are finite. They are clear and precise and lead to specific follow-up actions. Complex solutions lead to more questions; they continue to open up the space for inquiry and for solutions to emerge from interaction within the system.

All the relationships and interactions that make up a functional complex system cannot be known in advance. So development cannot simply involve drawing up a blueprint and then implementing it.

For instance, logical frameworks, a very well-known tool in the development field, have underlying assumptions that limit our capacity to see and effectively address complex social problems. According to Ben Ramalingam, logical frameworks assume:

  1. The future is knowable, given enough data points;
  2. Phenomena can be reduced to simple cause-and-effect relationships;
  3. Dissecting discrete parts reveals how the whole system works, and science is the search for the basic building blocks;
  4. The world is linear: changes in output are proportional to changes in input. For example, if a little foreign aid slightly increases economic growth, then more aid should produce more growth;
  5. The output of two or more different inputs is equal to the sum of the outputs of the individual inputs (see the sketch after this list);
  6. The role of scientists, technologists and leaders is to predict and control. Increasing levels of control—over nature, people and things—will improve processes, organisations, quality of life and even entire human societies.
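
To make the linearity and superposition assumptions (items 4 and 5) concrete, here is a minimal sketch in symbols of our own choosing, not Ramalingam’s notation. Write f(x) for the outcome produced by an input x (say, the growth produced by an amount of aid). The logical-framework worldview assumes:

  f(k·x) = k·f(x)         (proportionality: scaling the input scales the output)
  f(a + b) = f(a) + f(b)  (superposition: combined inputs simply add their separate effects)

Complex systems routinely violate both equations: a small input can trigger a disproportionate cascade, and two interventions can interact to produce far more, or far less, than the sum of their separate effects.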

However, we have all experienced the failure of this approach. According to Yaneer Bar-Yam, the challenge of solving complex problems thus requires us to understand how to organise people for collective and complex behaviour. We must give up the idea of centralising, controlling, coordinating and planning in the conventional way. We must be able to characterise the problem in order to identify the structure of the organisation that can solve it, and then allow the processes of that organisation to act.

Evaluating With the Same Blueprint We Used to Plan the Change

The machine metaphor also has serious implications for our monitoring and evaluation efforts.

Derek Cabrera, an expert in systems thinking, analysed some of the most popular evaluation models and stated, “There is a large degree of agreement between these models. It is clear that ‘causality’ is a central idea—that some intervention, action, object or activity (X) leads to some outcome (Y). It is also clear that there are often interim steps (i.e., ‘short or midterm outcomes,’ ‘goals,’ or ‘sub-goals’) that occur between X and Y. Also clear among these models is the desire to create ‘causal chains’ or ‘through lines’ that are rational and logical.”

These models work well in systems characterised by incremental change, predictable outcomes, proportional and rational cause-and-effect relationships, detailed intervention designs and command-and-control management. However, behaviour in a complex system does not conform to the assumptions on which those traditional evaluation processes are founded. Many traditional evaluation methods were not designed to capture data about the complex and unpredictable performance of societies.

In the context of evaluation, it is widely recognised that setting narrowly defined goals for a service or organisation, and measuring the achievement of these alone, may result in the evaluator missing positive benefits that lie outside the scope of the evaluation. A systems perspective, by contrast, offers a methodology and model for evaluating organisational failures without simply attributing blame to a single individual.

New methods of both formative and summative evaluation must be identified to assess performance in a complex system effectively. For evaluations to work this way, they should not be limited to observing intended effects and routes. Instead, they must look at the entire range of effects triggered by the programme, whether or not these are in line with the original intentions. Exceptions, discontinuities, unexpected results and side effects are valuable sources of information on the programme being evaluated and can help to improve implementation.

A More Systemic Approach to Evaluation

Complexity science is not about bulldozing old concepts and theory. Rather, it helps illuminate what has worked in the past and why. It reframes our view of many systems that are only partially understood by traditional scientific methods.

In systems work, richness implies that the whole can only be understood as a product of its parts plus the dynamic relationship between those parts. The richness of a systems inquiry is not about detail but about value. And the value is contained in the relevance of the inquiry to those affected by it. Participation of all possible stakeholders, not only those apparently directly affected but also those on the margins, is essential.

But participation alone is not enough: We need approaches that deliberately expose our assumptions, as evaluators and stakeholders, about what is valid knowledge and that embrace multiple perspectives.

No Need to Be an Expert

It is not always easy to gain acceptance for this kind of evaluation in a culture characterised by a worldview where linear causality predominates and in which individual free choice is accepted as the logical starting point of a causal chain.

However, this ought not to be an overwhelming task. Williams and Iman suggest: “rather than think that you need to know the full range of systems methods before beginning to practice, start from where you are now. Begin with the systemic insights and methodological resources you already have and then move outwards from there. Ask yourself if there are just one or two ideas or methods that might enhance your existing systemic awareness, and try synthesizing these with what you already do.”

According to some of the systems-thinking evaluators cited in this essay, some basic guidelines could be:

  1. Replace the “search for best practices” with “facilitating good principles.” To support this goal, the evaluation design should be as simple and self-documenting as possible. It should include simple, iterative activities, and it should be fully understood by as many stakeholders as possible.
  2. Adjust monitoring and evaluation approaches to allow for learning from unexpected outcomes, e.g. outcome mapping, rather than retrospectively rationalising that they were intended all along. Incorporate multiple strategies, methodologies, cycle times, dimensions and informants and triangulate the information obtained from these sources often. Because a complex system has a structure that is nonlinear, open and multi-dimensional (on micro and macro levels), an evaluation design cannot predetermine all factors that will be of interest. Triangulation of informants, strategies and timeframes will help the evaluation program represent the complex dynamics of the system better. By including a wide range of approaches, CAS evaluation methods integrate the best of many disciplines and methods that were previously irreconcilable.
  3. Make information about the evaluation process open and accessible to all stakeholders, beginning with the design phase. By being explicit about decisions and processes, evaluation becomes an effective transforming (reinforcing) feedback loop, and the evaluation becomes part of the intervention rather than an irrelevant activity.
  4. Given the dynamics of a CAS, it is possible to develop a short list of simple rules that could generate a complex and effective evaluation programme across many different parts of a complex human system. Simple rules will help each individual and group in the system design and implement their own evaluation plans. The system-wide evaluation plan would not be predictable because it would evolve as the system evolved. The following rules might be sufficient to establish such a reflective evaluation process:
    • Evaluate to inform action.
    • Communicate findings to others in terms they care about and understand.
    • Focus on "differences that make a difference."
    If all stakeholders of a programme followed these three rules, they would generate a cluster of evaluation activities that would look quite different from many traditional evaluation plans.
  5. Evaluate and revise the evaluation design often. Because the CAS baseline is constantly shifting, the evaluation plan should include options for frequent and iterative reconsideration and redesign.
  6. Make learning the primary outcome. Effective adaptation is the best indicator of success in a complex system. Match the type of evaluation to the maturity level of the system: do not impose a summative evaluation on a system that would benefit more from a formative one.

Conclusions

Change unfolds continually in a CAS. Individuals and their organisations express anxiety during times of change and uncertainty. Evaluators have an opportunity to mediate this anxiety in three ways:

  1. They can help the system understand and make sense of the CAS dynamics it observes. By explaining the basics of CAS, the evaluator can help the organisation be reflective about its experiences and its fears;
  2. Evaluators can also help articulate the CAS dynamics within a given, local context. By stating, and encouraging others to state, the dynamic patterns in the environment, the group can begin to build coping mechanisms for the future; and
  3. Evaluators can lower the cost of failure. By framing an evaluation method as experimentation and learning, the evaluator can encourage individuals and groups to value their mistakes and to learn from them.

Finally, applying a more systemic approach to evaluations could help us to better understand not only what happened but how and why it happened, and to learn more about how the system works and how it can be influenced. That should be an appealing enough reason to try since, after all, it is what we all want.

Highly recommended books and articles used for this essay:

Bar-Yam, Y. (2004). Making things work: Solving complex problems in a complex world. Knowledge Press.

Cabrera, D. (Ed.). (2006). Systems evaluation and evaluation systems. White paper series. Ithaca, NY: Cornell University DSpace Open Access Repository.

Eoyang, G. (2007). Human systems dynamics: Complexity-based approach to a complex evaluation. In Williams, B., & Iman, I. (Eds.), Systems concepts in evaluation: An expert anthology. Point Reyes, CA: EdgePress of Inverness.

Fowler, A. (2008). Connecting the dots: Complexity thinking and social development. The Broker, 7, April 2008.

Kahane, A. (2004). Solving tough problems: An open way of talking, listening, and creating new realities. San Francisco, CA: Berrett-Koehler Publishers.

Ramalingam, B., & Jones, H. (2008). Exploring the science of complexity: Ideas and implications for development and humanitarian efforts. Working Paper 285. London: Overseas Development Institute.

Tamarack (2005). Complexity: A conversation with Brenda Zimmerman. Interview by the Tamarack Learning Centre.

Westley, F., Zimmerman, B., & Patton, M.Q. (2007). Getting to Maybe: How the world is changed. Toronto: Vintage Canada.

Williams, B., & Iman, I. (Eds.). (2007). Systems concepts in evaluation: An expert anthology. Point Reyes, CA: EdgePress of Inverness.

Zimmerman, B., Lindberg, C., & Plsek, P. (2001). Edgeware: Insights from complexity science for health care leaders. Irving, TX: VHA Inc.
