I've really enjoyed writing the Cause-alities blog but the time has come to move on. I hope to see you over at the DecisioTech Blog!
“I think storytelling is becoming one of the new frontiers,” said Luke Lonergan, co-founder of Greenplum, now part of EMC Corp. But beyond that, “it really matters a lot to bring the brain to the problem in a way that you can untangle the complexities.” ("Social Media, Genomics Driving Data Tsunami," Wall Street Journal, 18 Feb 2011, http://on.wsj.com/g9Lt5A)
There are a lot of reasons to create system simulation models. Many efforts start with a simple desire to understand what is causing some situation to develop, or just to understand how things work. In these cases a simulation model becomes a rich and transparent cause-and-effect hypothesis. Now, let me observe that having a solid understanding of how your business (or whatever you are exploring) works, its driving structure, and the baseline values of its parameters is a basic and broadly useful result in and of itself -- and one that is surprisingly rare.
However, in this "what have you done for me lately" world, the "so what?" question inevitably comes up. As in, "So you have a simulation model . . . so what?" Because as soon as a basis for system understanding has been established, we want to improve, control, and change the system. We want to make insightful resource allocation decisions. So, as I've discussed many times in this blog, it's usually not enough to build a system model simply to know how things work -- we need to think about how to harness it to do useful work.
This is trickier than it might first appear. Commonly, the initial approach runs along the classical scientific reductionist line: "Now that we have a model that predicts the future, we simply act in accordance with that insight." This is so common it has a name: the predict-and-act decision framework.
In a very real sense, however, system models don't predict the future. They describe the cause-and-effect physics that connect our actions with assumptions about the future that lie outside our control. They describe the rules that allow us to "shape" but not dictate the future.
As the saying goes, this is not a bug, it's a feature. Because, shocking as it might appear, good decision making does not require that we predict the future. Good decision making requires that we understand the implications of our actions. System modeling is a practical way to differentiate the implications of our decisions from uncertain factors that are out of the sphere of our influence. And by doing so we gain deep insight into both.
Working with my clients I've created a visualization that helps them put their system model to use in a decision making environment. I call it an "Outcome Map". I've drawn heavily on work from "Real Options" and "Robust Decision Making" and married it to system simulation. Take a look at this Prezi to learn more.
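The Outcome Map mechanics live in the Prezi, but the underlying move -- separating what we control (decisions) from what we don't (uncertain scenarios), simulating every combination, and then looking for a robust choice -- can be sketched in a few lines. The toy model, decision options, scenarios, and numbers below are invented placeholders for illustration; they are not from any client engagement.

```python
# Rough sketch of the decision-vs-uncertainty separation behind an
# "Outcome Map" style analysis. All names and numbers are illustrative.

def profit(capacity, demand):
    """Toy system model: revenue from met demand minus capacity cost."""
    return 5.0 * min(capacity, demand) - 2.0 * capacity

decisions = {"small plant": 50, "large plant": 100}    # what we control
scenarios = {"weak demand": 40, "strong demand": 120}  # what we don't

# Simulate every (decision, scenario) combination.
outcomes = {
    d: {s: profit(cap, dem) for s, dem in scenarios.items()}
    for d, cap in decisions.items()
}

# A robust choice minimizes regret across scenarios, rather than
# betting everything on a single predicted future.
def max_regret(d):
    return max(
        max(outcomes[alt][s] for alt in decisions) - outcomes[d][s]
        for s in scenarios
    )

robust = min(decisions, key=max_regret)
print(outcomes)
print("robust choice:", robust)
```

Notice that no scenario is ever declared "the forecast." The system model is only asked what each action implies under each assumption about the future -- which is exactly the prediction-versus-implication distinction above.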
Some Prezi hints: After you fire up the Prezi, use the "more" menu to switch to full-screen mode. Advance the presentation using the "next" arrow at the bottom. After the presentation, explore the canvas by panning (left-click and drag) and zooming (scroll wheel).
As one would expect I spend a lot of time describing how system modeling works as a problem solving approach. My usual description -- in fact the one that I wrote again this morning -- goes something like this:
"System Modeling works by explicitly mapping the causal drivers that link today's resource allocations (your management decisions) to future outcomes. The technique provides a fact-based, quantitative, and transparent basis for management policy development."
Since we're describing simulation models that run on a computer, it's easy to assume that all of the "facts" in the simulation are quantitative, as they appear to be in a spreadsheet. But in a systems model that's not really true. The "non-quantitative facts" identify people and things in the system and, crucially, the logical relationships between those things. They describe the "physics" of the system. Things like "We have to provide a quote to the prospect before they can buy". Or "I have to build a widget before I can put it in inventory". Or "I have to ship it from inventory to get it to the customer". Sometimes the physics are about human behavior: "If supply is constrained I need to accelerate ordering" is an all-time favorite of mine.
Maybe this seems simple and obvious. But the sum of all of these relationships is often complex and (this is important) can feed back onto itself, forming feedback loops. Also, these facts may be well known to the players in the system, but they are seldom written down anywhere and are therefore "implicit" knowledge.
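To make the feedback idea concrete, here is a minimal stock-and-flow sketch of that all-time favorite: "If supply is constrained I need to accelerate ordering." The stocks, rates, and parameter values are invented for illustration only, not taken from any real client model.

```python
# Minimal stock-and-flow sketch of one behavioral feedback loop:
# inventory falls below target -> ordering accelerates -> pipeline
# fills -> inventory recovers (and can overshoot).
# All names and numbers are illustrative.

def simulate(weeks=12, demand=10.0, target=30.0):
    inventory = 30.0   # stock: widgets on hand
    on_order = 0.0     # stock: widgets in the supply pipeline
    history = []
    for _ in range(weeks):
        shipped = min(demand, inventory)   # physics: can't ship what you don't have
        arrivals = on_order * 0.5          # half the pipeline arrives each week
        # Behavior: order more aggressively the further inventory
        # falls below target -- the feedback loop.
        shortfall = max(0.0, target - inventory)
        orders = demand + 0.8 * shortfall
        inventory += arrivals - shipped
        on_order += orders - arrivals
        history.append(round(inventory, 1))
    return history

print(simulate())
```

Even this toy version shows the characteristic system behavior: inventory dips, the ordering feedback kicks in, and the recovery overshoots the target -- a dynamic none of the individual "facts" predicts on its own.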
Finally, I don't know of any other modeling or problem-solving approach that captures these causal relationships and marries them to the quantitative data (e.g., how long does that take? how many of those things are there?) so that potential management policies can be evaluated in light of "all of the facts".
I recently ran across a short paper (speaker's notes, really) by Joshua M. Epstein titled "Why Model?" I spend a lot of my life answering that question, and I am excited by Epstein's concise, reasoned explanation. He boils it right down to the basics: sixteen reasons to build models beyond prediction.
In some sense Epstein's position on modeling is a presentation of the scientific worldview and its moral advantages. So, mixed in with some very concrete reasons (e.g., #2, guide data collection) are some seemingly more esoteric objectives (e.g., #6, promote a scientific habit of mind).
Alas, business, financial, and other organizational leaders are mostly not swayed by a "scientific approach". I find that business and organizational clients generally need an additional level of motivation to justify an investment in explicit modeling. Usually, for the business person it is not enough to believe that an investment in explicit modeling will accomplish any of Epstein's 16 reasons. The business person wants to know "what then?" -- often phrased as a somewhat derisive "So what?"
For most business leaders, explicit modeling has to be linked to some decision-making or problem-solving process. And, unfortunately, this often boils down to a focus on prediction at the expense of the 16 other reasons, which are also part of excellent decision making and problem solving.
If you like this paper by Epstein try his book Generative Social Science: Studies in Agent-Based Computational Modeling. It's one of my favorites.
I was having coffee with a colleague recently and he challenged me to be much more concise and concrete about what makes Decisio's approach to modeling, simulation, and analysis novel and valuable. In response I've boiled Decisio's proposition down to four dimensions. In this post I am going to try to summarize these as concisely as possible. In future posts I'll elaborate each one and present some concrete examples.
Through the years I've had the good fortune to work on several complex projects for Vince Barabba (more about Vince here and here). Vince is a true leader -- visionary, creative, and effective. One of the management approaches he often applied was the use of modeling to help him and his team understand a complex situation and make good decisions. In fact, I consider Vince the ultimate "model consumer." That is, he did not write complex systems models -- he used them and guided others in their use and interpretation. This paper provides one detailed example of how he worked.
Vince had a guiding principle in the application of complex models that became known as "Barabba's Law". Here it is:
Never Say "The Model Says"
-- Vince Barabba
I think I've sat in a hundred meetings where we were using a systems simulation model to understand some complex, uncertain situation, and at some point someone would say -- but the model says . . . If you were working on a project for Vince (or even if you had EVER worked on a project for Vince), then you knew it was time to pause the action and reflect on what was happening. Because as soon as those words are uttered, somebody is about to depend on the model as a literal prediction of the future instead of as a tool to "make sense" of the situation in support of their decision making.
I started writing about sense-making in my last post but here it is again: Making Sense is the development of situational awareness including an understanding of the future trajectory of the system.
At the time, though, I didn't spend much energy thinking about the underlying philosophy of Barabba's Law. What I observed is that forcing a different choice of language inherently guided stakeholders towards a different and more effective application of the modeling. The nature of the team discussion changed from predictive thinking towards evaluating the correctness and completeness of the underlying causal hypothesis that the model represented.
Barabba's Law closes the often disastrous thinking shortcut that allows leaders to abdicate responsibility for understanding the relevant system and its behavior. ("Gee, we thought we were doing the right thing because the model said we were . . .")
I think that one of the reasons Vince was so effective in using sophisticated models is that he instinctively understood the difference between prediction and sense making (although I never heard him use exactly those words). Through his extensive experience he understood how leaders actually make decisions and he knew how to integrate sophisticated modeling into that process. And he distilled some of that into his law.
When I named Decisio (almost 10 years ago now!) I was casting about for a tag line that extended the "decision motif" to capture the essence of what we do. I settled on "Making Sense of the Future." My idea was (and is) that if clients are going to make good decisions in complicated situations, they first have to understand that situation -- they have to "make sense" of what is happening. Then they can use that understanding to make good decisions. The invocation of the "future" was intended, first, to suggest that comprehending the role of time is important to understanding problems, and second, that we make decisions today in order to reap rewards in the future.
This idea of using systems modeling to "make sense" and support decision making was not, and still isn't, very common. There seem to be two prevailing ideas about the role of models and modeling. One common view is that they are sophisticated black-box tools that consume data and produce predictions of the future. My observation is that while good models have predictive qualities, the future is slippery. All models are wrong (but some are useful). Decision making based on a "forecast" mentality will not turn out well. An alternative perspective is that, since forecasting is difficult or impossible, modeling should be used for individual and organizational learning. Well, that's fine, but sooner or later somebody has to make decisions!
I've recently become aware of the science and some of the research around the formal idea of "sensemaking." Gary Klein, well known in the field, describes sensemaking as "a motivated, continuous effort to understand connections (which can be among people, places, and events) in order to anticipate their trajectories and act effectively". Well, that's exactly what I help clients accomplish using systems models. In my projects the modeling activity guides an effective sensemaking process that results in high quality decisions.
Recently, I think I've been guilty of describing my work from the perspective of systems science and modeling to the detriment of the "making sense of the future" perspective. In fact, successful projects always integrate modeling with the sensemaking perspective.
I think that the intersection of systems modeling and sensemaking is not as well explored as it needs to be, so I'll be blogging more about it. To read more about sensemaking in general, try this Wikipedia article and publications by Gary Klein and K. E. Weick.
It is very common for people to use the idea of a "system" pretty freely when discussing their ideas, projects, and problems. Alas, they often have a pretty fuzzy idea about what a system is and how that perspective can be put to work. Systems science offers a concise definition of a system that is easy to contrast with traditional analytics. In this post I'd like to start at the beginning and try to create a clear mental image of what systems science is and how it provides useful insights.
I found this New York Times story about a couple of high school students' foray into genetic fingerprinting fascinating on so many levels. Here it is in a nutshell:
. . . In a tale of teenagers, sushi and science, Kate Stoeckle and Louisa Strauss, who graduated this year from the Trinity School in Manhattan, took on a freelance science project in which they checked 60 samples of seafood using a simplified genetic fingerprinting technique to see whether the fish New Yorkers buy is what they think they are getting.
They found that one-fourth of the fish samples with identifiable DNA were mislabeled . . .
As the father of a teenaged woman I know how clever and motivated these young folk can be. There is nothing they cannot do if they set their minds to it. I certainly related to one girl's father who noted this about their field technique: “It involved shopping and eating, in which they were already fluent.”
At a different level, as a consumer of a fair bit of sushi, I'm totally appalled. If you can't trust your sushi-master, who CAN you trust!?!
Finally, the usefulness of the DNA Barcoding Technique, despite its apparent limitations, is pretty impressive. I think that supermarkets should go way beyond just labeling fresh food with the origin. I want a BAR CODE that I can read with a pocket scanner to determine EXACTLY what I'm getting. Those green beans, for instance, what variety are they really?
I'm going to setup a DNA Barcoding system in my garage . . . .