Finding Simplicity on the Other Side of Complexity

As I have argued in an earlier post:

  • Logic models are an important tool for policy and program developers, managers and evaluators.
  • One serious problem with logic models, however, is that they usually leave out many relationships and paths of influence, even when these are likely to be important, because including them would make the model “too complex”.
  • A systems approach to logic models is needed where complexity is present and relevant.

I recently found a TED Talk from 2010 by an ecologist named Eric Berlow, who studies networks in ecological systems and their links to the larger environmental and human systems in which they exist.

In his first example, he discusses how a complex food web can be represented as a network mapping, thus allowing a more analytic approach to identifying the links closest to a particular species; in this case, a top predator.

Note that he never uses the term “network mapping”, nor does he talk about network analysis theory and technology; he sticks to the point of simplifying complexity. But that is what he is using to produce these images and analyses.

Berlow notes that they have gained “insights from studying nature that maybe are applicable to other problems”, and cites the “simple power of good visualization tools to help untangle complexity and encourage you to ask questions that you didn’t think of to ask before”.

“We are discovering in nature that simplicity often lies on the other side of complexity – so for any problem, the more you can zoom out and embrace complexity, the better chance you have of zooming in on the simple details that matter most.” Eric Berlow, 2010

To illustrate this, he uses as an example the much-derided spaghetti diagram on U.S. counter-insurgency strategy in Afghanistan that pretty much everyone has seen. Brilliantly, he put the connections into network analysis software and produced a version of the same set of factors and connections that was suddenly usable from an analytical and data visualization perspective.

Berlow’s presentation showed how you could cut down the effective complexity of the model, allowing you to isolate the set of influences and players most directly connected to a given outcome, activity or other factor of interest, that is, identify its sphere of influence. From there you could filter out the influences over which you have no control, or that involve military force, etc., leaving a relatively restricted number of influences on which you might wish to exert some control in order to achieve your goal; in this case, increased support for the Afghan government.

He notes that in their work on ecological systems they found that “…only focusing on one link in a system makes it less predictable than if you step back and consider the entire system, and from that place, hone in on the sphere of influence that matters most”, and that the key influences are usually just one or two degrees (links) from the node that you care about… “so the more you step back (and) embrace complexity, the better chance you have of finding simple answers, and it’s often different from the simple answer that you started with”.
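That “sphere of influence” step is straightforward to reproduce with off-the-shelf network tools. Here is a minimal sketch in Python using the networkx library; the factor names and links are an invented miniature for illustration, not taken from the actual Afghanistan diagram:

```python
import networkx as nx

# Hypothetical fragment of a factor map: each edge points from an
# influence to the thing it affects (all names are invented).
G = nx.DiGraph()
G.add_edges_from([
    ("security training", "police capacity"),
    ("police capacity", "popular support"),
    ("basic services", "popular support"),
    ("corruption", "popular support"),
    ("foreign aid", "basic services"),
    ("insurgent attacks", "security training"),
])

# Sphere of influence: every factor within two links upstream of the
# outcome we care about. Reversing the graph first means the ego graph
# collects influences ON the outcome, not things it influences.
outcome = "popular support"
sphere = nx.ego_graph(G.reverse(copy=True), outcome, radius=2)
print(sorted(sphere.nodes()))
```

With this toy data, a factor three links away (here, “insurgent attacks”) drops out of the sphere, which is exactly the kind of filtering Berlow demonstrates.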

My work along similar lines has shown that it is also helpful to be able to trace direct influences farther back through a chain of other influences and actions, using a network mapping, as in this example of influences on early high-school departure.
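Tracing direct influences farther back through a chain can be sketched in the same setting; again the factor names and links below are invented placeholders, not the actual high-school-departure model:

```python
import networkx as nx

# Invented fragment of an influence map for early high-school
# departure; edges run from cause to effect.
G = nx.DiGraph([
    ("family income", "part-time work"),
    ("part-time work", "attendance"),
    ("attendance", "early departure"),
    ("peer disengagement", "attendance"),
    ("learning difficulties", "early departure"),
])

# All factors that can reach the outcome through any chain of links,
# however long: the full upstream trace.
upstream = nx.ancestors(G, "early departure")
print(sorted(upstream))
```

Unlike the two-link sphere of influence, this pulls in every chain of any length that ends at the outcome.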

So a key advantage of using network analysis tools to work with complex systems is the ability to pull from a complex system the factors and relationships that matter most to your results of interest, and abstract away the rest. In principle this would be possible by studying the horribly complicated paper version long enough, I suppose, but only in principle. The network analysis mapping makes the task tractable.

A second major advantage of this approach is that the model utilizes network metrics generated from the data, so the expected relationships between the program and the factors that influence it can be analyzed from various angles. For instance, factors thought to play an important role in propagating effects across the system (and therefore assigned values connecting them strongly to other factors by an expert panel or recent research) would show high betweenness or eigenvector centralities (metrics that indicate, respectively, the extent to which a system factor or actor connects many others, or is connected to others that are themselves highly connected). Undertaking this analysis, and, for visualization purposes, displaying the nodes (the mapping representation of the system factors or actors) so that they reflect these metrics visually, could reveal critical information.
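As a sketch of what those metrics look like in practice, again using Python's networkx on an invented toy factor map (names and links are made up for illustration):

```python
import networkx as nx

# Invented toy factor map; edges run from cause to effect.
G = nx.DiGraph([
    ("foreign aid", "basic services"),
    ("ngo programs", "basic services"),
    ("corruption", "basic services"),
    ("corruption", "popular support"),
    ("basic services", "popular support"),
    ("police capacity", "popular support"),
    ("security training", "police capacity"),
])

# Betweenness: how often a factor sits on shortest paths between
# other factors, i.e. its role in propagating effects.
bet = nx.betweenness_centrality(G)

# Eigenvector centrality on the undirected view: connection to
# well-connected factors. (The undirected view is used because the
# directed version may fail to converge on small acyclic maps.)
eig = nx.eigenvector_centrality(G.to_undirected(), max_iter=1000)

# A visualization step might then scale node size by the metric:
sizes = {n: 300 + 2000 * bet[n] for n in G}
print(max(bet, key=bet.get))
```

In this toy map, “basic services” relays more upstream factors to the outcome than any other node, so it gets the highest betweenness and would be drawn largest.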

Another major advantage of this approach is that it can create a model using multiple high-credibility sources, such as subject-matter research, stakeholder experiences and expert knowledge, rather than simply an evaluator’s or manager’s understanding of a program.

It’s been argued to me that this approach to modelling program environments and logic models is too labour- and time-intensive for use in evaluation practice. I don’t accept that as a valid argument, at least as a blanket statement. In the case of simple programs in simple environments, yes, it may be overkill. But if a program is operating in a complex environment, with many contributory factors involved in determining results, then there is a strong case for putting more effort in up front, in understanding what needs to be measured and accounted for, to avoid problems of credibility and attribution later.

So it’s not “keep it simple, stupid”, it’s “find the simple, stupid”!