Logic models are an important tool for policy and program developers, managers and evaluators.
One serious problem with logic models, however, is that they usually leave out external influences and feedback effects, even when these are likely to be important, on the grounds that they would make the model “too complex”. Simplifying is good, but ignoring important influences on program success when planning a program or an evaluation is poor strategy. Failing to consider potentially important external influences and other complexities essentially pins hopes for success on a best-case scenario. It may also lead evaluators to overlook important data and to misinterpret program results.
Trying to embrace complexity by simply drawing a web of boxes and arrows like this is not helpful: the result is too complex to use and explain, it will drive your audience away, and it will probably come only from the mind of the evaluator or program manager, thereby easily missing important external influences and other complexities.1
A while ago, I stumbled onto an alternative approach while mapping cause-and-effect factors related to a complex social policy problem for the Province of Alberta. Data were obtained from an expert panel in a facilitated session: we developed a matrix linking a set of factors with the panel’s estimates of the strength and direction of the relationships between them, drawing on the members’ knowledge of and experience with the issue at hand. We realized that mapping this matrix with network analysis software would help the panel visualize what they had created; this proved very successful and prompted further development of their thinking. From this insight and experience, it followed that the same form of data could generate outcomes chains and logic models that handle complexity in a usable way.
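To make the matrix-to-network step concrete, here is a minimal sketch using the networkx library. The factor names and influence values are invented for illustration; in practice they would come from the expert panel’s facilitated session.

```python
# Hypothetical sketch: turn an expert panel's influence matrix into a
# directed, weighted graph. Factor names and values are invented.
import networkx as nx

factors = ["grants", "ability_to_train", "training_quality", "employment"]

# influence[i][j]: the panel's estimate of how strongly factor i drives
# factor j (positive = reinforcing, negative = dampening, 0 = no link).
influence = [
    [0, 3, 0, 0],  # grants
    [0, 0, 0, 2],  # ability_to_train
    [0, 0, 0, 3],  # training_quality
    [0, 0, 0, 0],  # employment
]

G = nx.DiGraph()
G.add_nodes_from(factors)
for i, src in enumerate(factors):
    for j, dst in enumerate(factors):
        if influence[i][j] != 0:
            G.add_edge(src, dst, weight=influence[i][j])

print(sorted(G.edges(data="weight")))
```

Once the panel’s estimates are in this form, the same data structure can be handed to layout and metric routines rather than redrawn by hand each time the model changes.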
Here’s a simple example, using hypothetical data in the same format we used in Alberta. A program supports trades training by providing grants to students and by developing state-of-the-art teaching materials in collaboration with trade schools. These activities drive the immediate outcomes of 1) students gaining the ability to take training and 2) improvements in the currency and quality of the training, in order to achieve 3) the ultimate outcome of increased employment.
Exogenous effects influencing these results can be seen here, including the cost of living and its drivers; factors driving the demand for skills; and technical changes affecting the training’s currency and relevance. The size of the nodes (the points) in this particular view indicates “betweenness centrality”, identifying those factors that connect many influences and are thus capable of propagating effects across the system. The width of the edges (lines) indicates the hypothesized strength of influence of one factor on another. Possible unintended effects and a feedback loop are also shown.
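The visual encoding described above can be produced directly from the network data. The following is a hedged sketch, again with invented factors and weights, of sizing nodes by betweenness centrality and edges by influence strength:

```python
# Hypothetical sketch: visualize a small invented cause-effect network,
# sizing nodes by betweenness centrality and edges by influence strength.
import matplotlib
matplotlib.use("Agg")  # render to file, no display needed
import matplotlib.pyplot as plt
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([
    ("grants", "ability_to_train", 3),
    ("ability_to_train", "employment", 2),
    ("materials", "training_quality", 3),
    ("training_quality", "employment", 3),
    ("tech_change", "training_quality", -2),  # external influence
])

bc = nx.betweenness_centrality(G)
node_sizes = [300 + 3000 * bc[n] for n in G.nodes]          # size ~ betweenness
edge_widths = [abs(G[u][v]["weight"]) for u, v in G.edges]  # width ~ strength

pos = nx.spring_layout(G, seed=42)
nx.draw_networkx(G, pos, node_size=node_sizes, width=edge_widths)
plt.savefig("logic_model_network.png")
```

Because size and width are computed from the data rather than drawn by hand, the view updates automatically when the panel revises its estimates.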
A key advantage of this approach is that it creates a logic model using expert knowledge, rather than simply an evaluator’s or manager’s understanding of a program. The data could also include information from other sources, such as findings from the relevant literature and program stakeholders’ experiences of the relevant factors and how they are connected. Importantly, you could (and should) do this without imposing any prior idea of the logic model on those providing the cause-effect data, other than including the program/outputs/activities and specifying the immediate/intermediate and ultimate intended outcomes. This helps avoid imposing the program manager’s or evaluator’s view of the program’s logic and external context on the data.
A second major advantage of this approach is that the logic model is built on network metrics generated from the data, so the expected relationships among the program and its influences can be analyzed in various ways. For instance, factors thought to have an important role in propagating effects across the system (and therefore assigned values connecting them strongly to other factors by the expert panel) would show high betweenness or eigenvector centralities, so coding the nodes to reflect these metrics visually could reveal critical factors in the system.
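As a rough illustration of this kind of analysis, the sketch below (invented factors and weights again) ranks factors by the two centrality metrics mentioned above to flag likely “connectors”:

```python
# Hypothetical sketch: use centrality metrics to flag factors likely to
# propagate effects across the system. Names and weights are invented.
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([
    ("grants", "ability_to_train", 3),
    ("ability_to_train", "employment", 2),
    ("materials", "training_quality", 3),
    ("training_quality", "employment", 3),
    ("tech_change", "training_quality", -2),
    ("cost_of_living", "ability_to_train", -2),
])

# Betweenness: how often a factor lies on paths between other factors,
# i.e. its capacity to transmit effects through the system.
bc = nx.betweenness_centrality(G)

# Eigenvector centrality, computed on the undirected skeleton because the
# directed version may fail to converge on an acyclic graph like this one.
ec = nx.eigenvector_centrality(G.to_undirected(), max_iter=500)

for f in sorted(bc, key=bc.get, reverse=True):
    print(f"{f:16s} betweenness={bc[f]:.3f} eigenvector={ec[f]:.3f}")
```

In this toy network, the intermediate outcomes score highest on betweenness, matching the intuition that they are the channels through which grants, materials, and external pressures reach employment.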
I think that this approach shows a great deal of promise as a way to:
• incorporate key elements of complexity and program context in a logic model for program planning and evaluation;
• engage experts and stakeholders in a meaningful way; and
• clearly communicate plans and results.
1 Richardson, G.P., “How to Anticipate Change in Tobacco Control Systems”, 2007