Archive for the ‘General Management’ Category
Cause and effect in management
Introduction
Management schools and gurus tell us that specific managerial actions will lead to desirable consequences – witness the prescriptions for success in books such as Good to Great or In Search of Excellence. But can one really attribute success (or failure) to specific actions? A cause-effect relationship is often assumed, but in reality the causal connection between strategic management actions and organisational outcomes is tenuous. This post, based on a paper by Glenn Shafer entitled Causality and Responsibility, is an exploration of the causal connection between managerial actions and their (assumed) consequences.
Note that the discussion below applies to strategic – or “big picture” – management decisions, not operational ones. In the latter, cause and effect is generally quite clear cut. For example, the decision to initiate a project sets in motion several processes that have fairly predictable outcomes. However, taking a big picture view, initiating a project (or even the successful completion of one) does not imply that the strategic aims of the project will be met. It is the latter point that is of interest here – the causal connection between a strategic decision and its assumed outcome.
Shafer’s paper deals with causality and responsibility in legal deliberations: specifically, the process by which judges and juries reach their verdict as to whether the accused (person or entity) is actually responsible (in a causal sense) for the outcome they are charged with. In short, did the actions of the accused cause the outcome? The arguments Shafer makes are quite general, and have applicability to any discipline. In the following paragraphs I’ll look at a couple of the key points he makes and outline their implications for cause and effect in management actions.
Deterministic cause-effect relationships
The first point that Shafer makes is that we should infer that a particular action causes a particular outcome only if it is improbable that the outcome could have happened without the action preceding it. In Shafer’s words:
…we are on safe ground in attributing responsibility if we do so based on our knowledge of impossibilities. It is not surprising, therefore, that the classical legal concept of cause – necessary and sufficient cause – is defined in terms of impossibility. According to this concept, an action causes an event if the event must happen (it is impossible for it not to happen) when the action is taken and cannot happen (it is impossible for it to happen) if the action is not taken.
This is, in fact, what legal arguments attempt to do: they attempt to prove, beyond reasonable doubt, that the crime occurred because of the defendant’s actions.
The reason that impossibilities are a better way of “proving” causal relationships is that such relationships cannot be invalidated as our knowledge of the situation increases, provided the knowledge we already have is valid. In other words, once something is deemed impossible (using valid knowledge), it remains so even as we learn more about the situation. In contrast, if something is deemed possible in the light of existing knowledge, it can be rendered false by a single contradictory fact.
The implication of the above for cause and effect in management is clear: a manager can (should!) claim responsibility for a particular outcome only if:
- The outcome must (almost always) happen if the managerial action occurs.
- It is highly unlikely that the outcome could have occurred without the action occurring prior to it.
Seen in this light, many of the prescriptions laid out in management bestsellers are little better than herpetological oleum.
Probabilistic cause-effect relationships
Of course, deterministic cause-effect relationships aren’t the norm in management – only the supremely confident (foolhardy?) would claim that a specific managerial action will always lead to a specific organisational outcome. This raises the question: what about probabilistic relationships? That is, what can we say about claims that a particular action results in a particular effect, but only in a fraction of the instances in which the action occurs?
To address this question, Shafer makes the important point that probabilities not close to zero or one have no meaning in isolation. They have meaning only within a system, and their meaning derives from the impossibility of a successful gambling strategy – that is, from the probability, close to one, that no one can make a substantial amount of money betting at the odds given by the probabilities. The last part of the previous statement is a consequence of how probabilities are validated empirically. In Shafer’s words:
We validate a system of probabilities empirically by performing statistical tests. Each such test checks whether observations have some overall property that the system says they are practically certain to have. It checks, in other words, on whether observations diverge from the probabilistic model in a way that the model says is practically (approximately) impossible. In Probability and Finance: It’s Only a Game, Vovk and I argue that both the applications of probability and the classical limit theorems (the law of large numbers, the central limit theorem, etc.) can be most clearly understood and most elegantly explained if we treat these asserted practical impossibilities as the basic meaning of a probabilistic or statistical model, from which all other mathematical and practical conclusions are to be derived. I cannot go further into the argument of the book here, but I do want to emphasize one of its consequences: because the empirical validity of a system of probabilities involves only the approximate impossibilities it implies, it is only these approximate impossibilities that we should expect to see preserved in a deeper causal structure. Other probabilities, those not close to zero or one, may not be preserved and hence cannot claim the causal status.
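Shafer’s validation-by-statistical-test idea can be sketched in code. The following Python snippet is a toy illustration of my own (not from Shafer’s paper; all numbers are assumptions): it tests a “fair coin” model by checking that the observed frequency of heads does not diverge from 0.5 by more than a few standard deviations, a divergence the model says is practically impossible.

```python
import math
import random

random.seed(1)

# A toy illustration (my own, not from Shafer's paper): the model "fair coin"
# says it is practically impossible for the observed frequency of heads in a
# long sequence of flips to stray more than a few standard deviations from 0.5.
n = 10_000
p = 0.5
heads = sum(random.random() < p for _ in range(n))
freq = heads / n

sigma = math.sqrt(p * (1 - p) / n)   # standard deviation of the observed frequency
divergence = abs(freq - p) / sigma   # how many sigmas the observation is from the model

print(f"observed frequency = {freq:.4f} ({divergence:.2f} sigma from the model)")

# Reject the model only if the "practically impossible" happens, e.g. a
# divergence of more than 4 sigma (probability of order 1e-4 under the model).
model_survives = divergence <= 4
print("model survives the test:", model_survives)
```

Note that the test checks only an approximate impossibility implied by the model; it says nothing about probabilities in the middle of the range, which is exactly Shafer’s point.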
An implication of the above is that probabilities not close to zero or one are not fundamental properties of the system/situation; they are subject to change as our knowledge of the situation/system improves. A simple example may serve to explain this point. Consider the following hypothetical claim from a software vendor:
“80% of our customers experience an increase in sales after implementing our software system.”
Presumably, the marketing department responsible for this statement has the data to back it up. Despite that, the increase in sales for a particular customer cannot (should not!) be attributed to the software. Why? Well, for the following reasons:
- The particular customer may differ in important ways from those used in estimating the probability. This is a manifestation of the reference class problem.
- Most statistical studies of the kind used in marketing or management are enumerative, not analytical – i.e. they can be used to classify data, but not to establish cause-effect relationships. See my post entitled Enumeration or Analysis for more on the differences between enumerative and analytical studies.
There is an underlying reason for the above which I’ll discuss next.
The root of the problem – too many variables
The points made above – that outcomes cannot be attributed to actions unless the probabilities involved are close to zero or one – are a consequence of the fact that most organisational outcomes are the result of several factors. It is therefore incorrect to attribute an outcome to a single factor (such as farsighted managerial action). Nancy Cartwright makes this point in her paper entitled Causal Laws and Effective Strategies, where she states that a cause ought to increase the frequency of its purported effect, but that this increase can be masked by other causal factors that have not been taken into account. She uses a somewhat dated (and, as it turns out, incorrect) example of the relationship between smoking and heart disease. However, it serves to illustrate the point, so I’ll quote it below:
…a cause ought to increase the frequency of its effect. But this fact may not show up in the probabilities if other causes are at work. Background correlations between the purported cause and other causal factors may conceal the increase in probability which would otherwise appear. A simple example will illustrate. It is generally supposed that smoking causes heart disease. Thus, we may expect that the probability of heart disease on smoking is greater than otherwise (K’s note: i.e. the conditional probability of heart disease given that the person is a smoker, P(H/S), is greater than the probability of heart disease in the general population, P(H)). This expectation is mistaken, however. Even if it is true that smoking causes heart disease, the expected increase in probability will not appear if smoking is correlated with a sufficiently strong preventative, say exercising. To see why this is so, imagine that exercising is more effective at preventing heart disease than smoking at causing it. Then in any population where smoking and exercising are highly enough correlated, it can be true that P(H/S) = P(H), or even P(H/S) < P(H). For the population of smokers also contains a good many exercisers, and when the two are in combination, the exercising tends to dominate….
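Cartwright’s point can be demonstrated with a small simulation. The Python sketch below uses entirely made-up numbers (the risk figures and the correlation are illustrative assumptions, not real epidemiology): smoking raises the risk of heart disease, exercise lowers it more strongly, and smokers are assumed to be far more likely to exercise. Despite smoking being a genuine cause in the model, P(H|S) comes out below P(H).

```python
import random

random.seed(42)

# Hypothetical rates chosen purely for illustration; none come from real data.
# Exercise is strongly protective, smoking mildly harmful, and the two are
# positively correlated in this imagined population.
N = 100_000
population = []
for _ in range(N):
    smokes = random.random() < 0.5
    # The confounding correlation: smokers are assumed MORE likely to exercise.
    exercises = random.random() < (0.9 if smokes else 0.3)
    risk = 0.20 + (0.05 if smokes else 0.0) - (0.15 if exercises else 0.0)
    disease = random.random() < risk
    population.append((smokes, disease))

p_h = sum(d for _, d in population) / N
smokers = [d for s, d in population if s]
p_h_given_s = sum(smokers) / len(smokers)

print(f"P(H)   = {p_h:.3f}")
print(f"P(H|S) = {p_h_given_s:.3f}")  # lower than P(H), despite smoking being a cause
```

The conditional probability of heart disease given smoking is lower than the population rate, even though smoking raises every individual’s risk in the model: the correlated preventative masks the cause.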
In the case of strategic outcomes, it is impossible to know all the factors involved. Moreover, the factors are often interdependent and subject to positive feedback (see my previous post for more on this). So the problem is even worse than implied by Cartwright’s example.
Conclusions
The implications of the above can be summarised as follows: the efficacy of most strategic managerial actions is questionable because the probabilities involved in such claims are rarely close to zero or one. This shouldn’t be a surprise: most organisational outcomes are consequences of several factors acting in concert, many of which combine in unpredictable ways. Given this, it is unreasonable to expect that managerial actions will result in predictable organisational outcomes. That said, it is only natural to claim responsibility for desirable outcomes and shift the blame for undesirable ones, just as it is natural to seek simplistic solutions to difficult organisational problems. Hence the insatiable market for management snake oil.
On the interpretation of probabilities in project management
Introduction
Managers have to make decisions based on an imperfect and incomplete knowledge of future events. One approach to improving managerial decision-making is to quantify uncertainties using probability. But what does it mean to assign a numerical probability to an event? For example, what do we mean when we say that the probability of finishing a particular task in 5 days is 0.75? How is this number to be interpreted? As it turns out there are several ways of interpreting probabilities. In this post I’ll look at three of these via an example drawn from project estimation.
Although the question raised above may seem somewhat philosophical, it is actually of great practical importance because of the increasing use of probabilistic techniques (such as Monte Carlo methods) in decision making. Those who advocate the use of these methods generally assume that probabilities are magically “given” and that their interpretation is unambiguous. Of course, neither is true – and hence the importance of clarifying what a numerical probability really means.
The example
Assume there’s a task that needs doing – this may be a project task or some other job that a manager is overseeing. Let’s further assume that we know the task can take anywhere between 2 and 8 days to finish, and that we (magically!) have numerical probabilities associated with completion on each of the days (as shown in the table below). I’ll say a teeny bit more about how these probabilities might be estimated shortly.
| Task finishes on | Probability |
| --- | --- |
| Day 2 | 0.05 |
| Day 3 | 0.15 |
| Day 4 | 0.30 |
| Day 5 | 0.25 |
| Day 6 | 0.15 |
| Day 7 | 0.075 |
| Day 8 | 0.025 |
This table is a simple example of what’s technically called a probability distribution. Distributions express probabilities as a function of some variable. In our case the variable is time.
How are these probabilities obtained? There is no set method to do this but commonly used techniques are:
- By using historical data for similar tasks.
- By asking experts in the field.
Estimating probabilities is a hard problem. However, my aim in this article is to discuss what probabilities mean, not how they are obtained. So I’ll take the probabilities mentioned above as given and move on.
The rules of probability
Before we discuss the possible interpretations of probability, it is necessary to mention some of the mathematical properties we expect probabilities to possess. Rather than present these in a formal way, I’ll discuss them in the context of our example.
Here they are:
- All probabilities listed are numbers that lie between 0 (impossible) and 1 (absolute certainty).
- It is absolutely certain that the task will finish on one of the listed days. That is, the sum of all probabilities equals 1.
- It is impossible for the task not to finish on one of the listed days. In other words, the probability of the task finishing on a day not listed in the table is 0.
- The probability of finishing on any one of several days is given by the sum of the probabilities for those days. For example, the probability of finishing on day 2 or day 3 is 0.20 (i.e. 0.05 + 0.15). This holds because the two events are mutually exclusive – that is, the occurrence of one event precludes the occurrence of the other. Specifically, if we finish on day 2 we cannot finish on day 3 (or any other day) and vice versa.
These statements illustrate the mathematical assumptions (or axioms) of probability. I won’t write them out in their full mathematical splendour; those interested should head off to the Wikipedia article on the axioms of probability.
Another useful concept is that of cumulative probability which, in our example, is the probability that the task will be completed by a particular day. For example, the probability that the task will be completed by day 5 is 0.75 (the sum of probabilities for days 2 through 5). In general, the cumulative probability of finishing by a particular day is the sum of probabilities of completion for all days up to and including that day.
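In code, the distribution in the table and the cumulative probability calculation look like this (a minimal Python sketch of the numbers given above):

```python
# The distribution from the table, as a simple mapping from day to probability.
dist = {2: 0.05, 3: 0.15, 4: 0.30, 5: 0.25, 6: 0.15, 7: 0.075, 8: 0.025}

# Axiom check: the probabilities sum to 1 (the task must finish on some day).
assert abs(sum(dist.values()) - 1.0) < 1e-9

def cumulative(day):
    """Probability of finishing on or before the given day."""
    return sum(p for d, p in dist.items() if d <= day)

print(cumulative(5))  # 0.05 + 0.15 + 0.30 + 0.25 = 0.75
```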
Interpretations of probability
With that background out of the way, we can get to the main point of this article which is:
What do these probabilities mean?
We’ll explore this question using the cumulative probability example mentioned above, and by drawing on a paper by Glenn Shafer entitled, What is Probability?
OK, so what is meant by the statement, “There is a 75% chance that the task will finish in 5 days”?
It could mean that:
- If this task is done many times over, it will be completed within 5 days in 75% of the cases. Following Shafer, we’ll call this the frequency interpretation.
- It is believed that there is a 75% chance of finishing this task in 5 days. Note that belief can be tested by seeing if the person who holds the belief is willing to place a bet on task completion with odds that are equivalent to the believed probability. Shafer calls this the belief interpretation.
- Based on a comparison to similar tasks this particular task has a 75% chance of finishing in 5 days. Shafer refers to this as the support interpretation.
(Aside: The belief and support interpretations involve subjective and objective states of knowledge about the events of interest respectively. These are often referred to as subjective and objective Bayesian interpretations because knowledge about these events can be refined using Bayes Theorem, providing one has relevant data regarding the occurrence of events.)
The interesting thing is that all the above interpretations can be shown to satisfy the axioms of probability discussed earlier (see Shafer’s paper for details). However, it is also clear that each of these interpretations has a very different meaning. We’ll take a closer look at this next.
More about the interpretations and their limitations
The frequency interpretation appears to be the most rational one because it interprets probabilities in terms of the results of experiments – i.e. it treats probabilities as experimental facts, not beliefs. In Shafer’s words:
According to the frequency interpretation, the probability of an event is the long-run frequency with which the event occurs in a certain experimental setup or in a certain population. This frequency is a fact about the experimental setup or the population, a fact independent of any person’s beliefs.
However, there is a big problem here: it assumes that such an experiment can actually be carried out. This definitely isn’t possible in our example: tasks cannot be repeated in exactly the same way – there will always be differences, however small.
There are other problems with the frequency interpretation. Some of these include:
- There are questions about whether a sequence of trials will converge to a well-defined probability.
- What if the event cannot be repeated?
- How does one decide on what makes up the population of all events? This is sometimes called the reference class problem.
See Shafer’s article for more on these.
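Although the experiment cannot be performed on a real task, the frequency interpretation can at least be illustrated by simulating the hypothetical experiment. The Python sketch below draws many samples from the distribution in the table and compares observed frequencies with the model’s probabilities (the sample size is an arbitrary choice):

```python
import random
from collections import Counter

random.seed(7)

# The table's distribution. The "experiment" of doing the task many times
# over is only ever a thought experiment, so we simulate it.
days = [2, 3, 4, 5, 6, 7, 8]
probs = [0.05, 0.15, 0.30, 0.25, 0.15, 0.075, 0.025]

n = 100_000
outcomes = random.choices(days, weights=probs, k=n)
freq = Counter(outcomes)

for day, p in zip(days, probs):
    print(f"day {day}: model {p:.3f}, observed {freq[day] / n:.3f}")
```

With a large enough number of repetitions the observed frequencies settle close to the model’s probabilities; the catch, as noted above, is that real tasks offer no such repetitions.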
The belief interpretation treats probabilities as betting odds. In this interpretation a 75% probability of finishing in 5 days means that we’re willing to put up 75 cents to win a dollar if the task finishes in 5 days (or equivalently 25 cents to win a dollar if it doesn’t). Note that this says nothing about how the bettor arrives at his or her odds. These are subjective (personal) beliefs. However, they are experimentally determinable – one can determine people’s subjective odds by observing how they actually place bets.
There is a good deal of debate about whether the belief interpretation is normative or descriptive: that is, do the rules of probability tell us what people’s beliefs should be, or do they describe what those beliefs actually are? Most people trained in statistics would claim the former – that the rules impose conditions that beliefs should satisfy. In contrast, in management and behavioural science, probabilities based on subjective beliefs are often assumed to describe how the world actually is. However, the wealth of literature on cognitive biases suggests that people’s actual beliefs, as reflected in their decisions, do not conform to the rules of probability. The latter observation seems to favour the normative option, but arguments can be made in support (or refutation) of either position.
The problem mentioned in the previous paragraph is a perfect segue into the support interpretation, according to which the probability of an event is the degree to which we should believe that it will occur, based on the available evidence. This seems fine until we realise that evidence can come in many “shapes and sizes.” For example, compare the statements “the last time we did something similar we finished in 5 days, based on which we reckon there’s a 70–80% chance we’ll finish in 5 days” and “based on historical data gathered for 50 projects, we believe we have a 75% chance of finishing in 5 days.” The two pieces of evidence offer very different levels of support. Therefore, although the support interpretation appears to be more objective than the belief interpretation, it isn’t actually so, because it is difficult to determine which evidence one should use. So, unlike the case of subjective beliefs (where one only has to ask people about their personal odds), it is not straightforward to determine these probabilities empirically.
So we’re left with a situation in which we have three interpretations, each of which addresses specific aspects of probability but also has major shortcomings.
Is there any way to break the impasse?
A resolution?
Shafer suggests that the three interpretations of probability are best viewed as highlighting different aspects of a single situation: that of an idealized case where we have a sequence of experiments with known probabilities. Let’s see how this statement (which is essentially the frequency interpretation) can be related to the other two interpretations.
Consider my belief that the task has a 75% chance of finishing in 5 days. This is analogous to saying that if the task were done several times over, I believe it would finish in 5 days in 75% of the cases. My belief can be objectively confirmed by testing my willingness to put up 75 cents to win a dollar if the task finishes in five days. Now, when I place this bet I have my (personal) reasons for doing so. However, these reasons ought to relate to knowledge of the fair odds involved in the said bet. Such fair odds can only be derived from knowledge of what would happen in a (possibly hypothetical) sequence of experiments.
The key assumption in the above argument is that my personal odds aren’t arbitrary – I should be able to justify them to another (rational) person.
Let’s look at the support interpretation. In this case I have hard evidence for stating that there’s a 75% chance of finishing in 5 days. I can take this hard evidence as the basis of my personal degree of belief (remember, as stated in the previous paragraph, any personal degree of belief should have some such rationale behind it). Moreover, since it is based on hard evidence, it should be rationally justifiable and hence can be associated with a sequence of experiments.
So what?
The main point from the above is the following: probabilities may be interpreted in different ways, but they have an underlying unity. That is, when we state that there is a 75% probability of finishing a task in 5 days, we are implying all the following statements (with no preference for any particular one):
- If we were to do the task several times over, it would finish within five days in three-fourths of the cases. Of course, this holds only if the task is done a sufficiently large number of times (which may not be practical in most cases).
- We are willing to place a bet given 3:1 odds of completion within five days.
- We have some hard evidence to back up statement (1) and our betting belief (2).
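The arithmetic linking statement (2) to the 75% probability is easy to verify: a probability p corresponds to odds of p/(1 - p), and a bet at those odds is fair (zero expected profit) if p is indeed the right probability. A quick Python check:

```python
# Converting the probability 0.75 into betting odds, and checking that a bet
# at those odds is "fair" (zero expected profit) under the stated probability.
p = 0.75
odds_for = p / (1 - p)  # 3.0, i.e. odds of 3:1 on completion within five days

# Stake 75 cents for a profit of 25 cents (a dollar back in total) on a win.
stake, profit_if_win = 0.75, 0.25
expected_profit = p * profit_if_win - (1 - p) * stake
print(odds_for, expected_profit)  # 3.0 0.0
```

If someone's stated probability and the odds they will accept don't line up this way, at least one of the two doesn't reflect what they actually believe.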
In reality, however, we tend to latch on to one particular interpretation depending on the situation. One is unlikely to think in terms of hard evidence when buying a lottery ticket, but hard evidence is a must when estimating a project. When tossing a coin one might instinctively use the frequency interpretation, but when estimating a task that hasn’t been done before one might use personal belief. Nevertheless, it is worth remembering that regardless of the interpretation we choose, all three are implied. So the next time someone gives you a probabilistic estimate, by all means ask them if they have the evidence to back it up, but don’t forget to ask if they’d be willing to accept a bet based on their own stated odds. 🙂
Beyond Best Practices: a paper review and the genesis of a collaboration
Introduction
The fundamental premise behind best practices is that it is possible to reproduce the successes of those who excel by imitating them. At first sight this assumption seems obvious and uncontroversial. However, most people who have lived through an implementation of a best practice know that following such prescriptions does not guarantee success. Actually, anecdotal evidence suggests the contrary: that most attempts at implementing best practices fail. This paradox remains unnoticed by managers and executives who continue to commit their organisations to implementing best practices that are, at best, of dubious value.
Why do best practices fail? There has been a fair bit of research on the shortcomings of best practices, and the one thing it tells us is that there is no simple answer to this question. In this post I’ll discuss this issue, drawing upon an old (but still very relevant) paper by Jonathan Wareham and Han Gerrits entitled, De-Contextualising Competence: Can Best Practice be Bundled and Sold. Note that I will not cover the paper in its entirety; my discussion will focus only on those aspects that relate to the question raised above.
I may as well say it here: I have a secondary aim (or more accurately, a vested interest) in discussing this paper. Over the last few months Paul Culmsee and I have been working on a book that discusses reasons why best practices fail and proposes some practical techniques to address their shortcomings. I’ll end this post with a brief discussion of the background and content of the book (see this post for Paul’s take on the book). But let’s look at the paper first…
Background
On the first page of the paper the authors state:
Although the concept of ‘imitating excellent performers’ may seem quite banal at first glance, the issue, as we will argue, is not altogether that simple after deeper consideration. Accordingly, the purpose of the paper is to explore many of the fundamental, often unquestioned, assumptions which underlie the philosophy and application of Business Best Practice transfer. In illuminating the central empirical and theoretical problems of this emerging discipline, we hope to refine our expectations of what the technique can yield, as well as contribute to theory and the improvement of practice.
One of the most valuable aspects of the paper is that it lists some of the implicit assumptions that are often glossed over by consultants and others who sell and implement best practice methodologies. It turns out that these assumptions are not valid in most practical situations, which renders the practices themselves worthless.
The implicit assumptions
According to Wareham and Gerrits, the unstated premises behind best practices include:
- Homogeneity of organisations: Most textbooks and courses on best practices present the practices as though they have an existence that is independent of organizational context. Put another way: they assume that all organisations are essentially the same. Clearly, this isn’t the case – organisations are defined by their differences.
- Universal yardstick: Best practices assume that there is a universal definition of what’s best, that what’s best for one is best for all others. This assumption is clearly false as organisations have different (dare I say, unique) environments, objectives and strategies. How can a universal definition of “best” fit all?
- Transferability: Another tacit assumption in the best practice business is that practices can be transplanted onto recipient organisations wholesale. Sure, in recent years it has been recognized that such transplants succeed only if a) the recipient organisation undertakes the changes necessary for the transplant to work and b) the practice itself is adapted to the recipient organisation. The point, though, is that in most successful cases the change or adaptation is so great that the result no longer resembles the original best practice. This is an important point – to have a hope in hell of working, best practices have to be adapted extensively. It is also worth mentioning that such adaptations will succeed only if they are made in consultation with those who will be affected by the practices. I’ll say more about this later in this post.
- Alienability and stickiness: These are concepts that relate to the possibility of extracting relevant knowledge pertaining to a best practice from a source and transferring it without change to a recipient. Alienability refers to the possibility of extracting relevant knowledge from the source. Alienability is difficult because best practice knowledge is often tacit, and is therefore difficult to codify. Stickiness refers to the willingness of the recipient to learn this knowledge, and his or her ability to absorb it. Stickiness highlights the importance of obtaining employee buy-in before implementing best practices. Unfortunately most best practice implementations gloss over the issues of alienability and stickiness.
- Validation: Wareham and Gerrits contend that best practices are rarely validated. More often than not, recipient organisations simply believe that they will work, based on their consultants’ marketing spiel. See this short piece by Paul Strassmann for more on the dangers of doing so.
What does “best” mean anyway?
After listing the implicit assumptions, Wareham and Gerrits argue that the conceptual basis for defining a particular practice as “best” is weak. Their argument hinges on the observation that it is impossible to attribute the superior performance of a firm to specific managerial practices. Why? Because one cannot perform a controlled experiment to see what would happen if those practices weren’t used.
Related to the above is the somewhat subtle point that it is impossible to say, with certainty, whether practices, as they exist within model organisations, are consequences of well-thought out managerial action or whether they are merely adaptations to changing environments. If the latter were true, then there is no best practice, because the practices as they exist in model organisations are essentially random responses to organizational stimuli.
Wareham and Gerrits also present an economic perspective on best practice acquisition and transfer, but I’ll omit this as it isn’t of direct relevance to the question of why best practices fail.
Implications
The authors draw the following conclusions from their analysis:
- The very definition of best practices is fraught with pitfalls.
- Environmental factors have a significant effect on the evolution and transfer(ability) of “best” practices. Consequently, what works in one organisation may not work in another.
So, can anything be salvaged? Wareham and Gerrits think so. They suggest an expanded view of best practices which includes things such as:
- Using best practices as guides for learning new technologies or new ways of working.
- Using best practices to generate creative insight into how business processes work in practice.
- Using best practices as a guide for change – that is, following the high-level steps, but not necessarily the detailed prescriptions.
These are indeed sensible and reasonable statements. However, they are much weaker than the usual hyperbole-laden claims that accompany best practices.
Discussion
Wareham and Gerrits focus on the practices themselves, not the problems they are used to solve. In my opinion, another key reason why best practices fail is that they are applied without a comprehensive understanding of the problem they are intended to address.
I’ll clarify this using an example: in a quest to improve efficiency an organisation might go through a major restructure. All too often, such organisations will not think through all the consequences of the restructuring (what are the long-term consequences of outsourcing certain functions, for instance). The important point to realize is that a comprehensive understanding of the consequences is possible only if all stakeholders – management and employees – are involved in planning the restructure. Unfortunately, such a bottom-up approach is rarely taken because of the effort involved, and the wrong-headed perception that chaos may ensue from management actually talking to people on the metaphorical shop floor. So most organizations take a top-down approach, dictating what will be done, with little or no employee involvement.
Organisations focus on how to achieve a particular end. The end itself, the reasons for wanting to achieve it and the consequences of doing so remain unexplored; it is assumed that these are obvious to all stakeholders. To put it aphoristically: organisations focus on the “how”, not on the “what” or the “why”.
The heart of the matter
The key to understanding why best practices do not work is to realise that many organizational problems are wicked problems – that is, problems that are hard to define, let alone solve (see this paper for a comprehensive discussion of wicked problems). Let’s look at organizational efficiency, for example. What does it really mean to improve organizational efficiency? More to the point, how can one arrive at a generally agreed way to improve organizational efficiency? By generally agreed, I mean a measure that all stakeholders understand and agree on. Note that “efficiency” is just an example here – the same holds for most other matters of strategic importance to organizations: organisational strategy is a wicked problem.
Since wicked problems are hard to pin down (because they mean different things to different people), the first step to solving them is to ensure that all stakeholders have a common (or shared) understanding of what the problem is. The next step is to achieve a shared commitment to solving that problem. Any technique that could help achieve a shared understanding of wicked problems and commitment to solving them would truly deserve to be called the one best practice to rule them all.
The genesis of a collaboration
About a year ago, in a series of landmark posts entitled The One Best Practice to Rule Them All, Paul Culmsee wrote about his search for a practical method to manage wicked problems. In the articles he made a convincing case that dialogue mapping can help a diverse group of stakeholders achieve a shared understanding of such problems. Paul’s writings inspired me to learn dialogue mapping and use it at work. I was impressed – here, finally, was a technique that didn’t claim to be a best practice, but had the potential to address some of the really complex problems that organisations face.
Since then, Paul and I have had several conversations about the failure of best practices in tackling issues ranging from organisational change to project management. Paul is one of those rare practitioners with an excellent grounding in both theory and practice, and I learnt a lot from him in those conversations. Among other things, he told me about his experiences in using dialogue mapping to tackle apparently intractable problems (see this case study from Paul’s company, for example).
Late last year, we thought of writing up some of the things we’d been talking about in a series of joint blog posts. Soon we realised that we had much more to say than would fit into a series of posts – we probably had enough for a book. We’re a few months into writing that book, and are quite pleased with the way it’s turning out.
Here’s a very brief summary of the book. The first part analyses why best practices fail. Our analysis touches upon diverse areas like organizational rhetoric, cognitive bias, memetics and scientific management (topics that both Paul and I have written about on our blogs). The second part of the book presents a series of case studies that illustrate some techniques that address complex problems that organizations face. The case studies are based on our experiences in using dialogue mapping and other techniques to tackle wicked problems relating to organizational strategy and project management. The techniques we discuss go beyond the rhetoric of best practices – they work because they use a bottom-up approach that takes into account the context and environment in which the problems live.
Now, Paul writes way better than I do. For one, his writing is laugh-out-loud funny; mine isn’t. Those who have read his work and mine may be wondering how our very different styles will combine. I’m delighted to report that the book is far more conversational and entertaining than my blog posts. However, I should also emphasise that we are trying to be as rigorous as we can by backing up our claims with references to research papers and/or case studies.
We’re learning a lot in the process of writing, and are enthused and excited about the book. Please stay tuned – we’ll post occasional updates on how it is progressing.
Update (16 June 2010):
An excerpt from the book has been published here.
Update (27 Nov 2011):
The book, which has a new title, is currently in the final round of proofs. Hopefully it will be available for pre-order in a month or two.
Update (05 Dec 2011):
It’s out!
Get your copy via Amazon or Book Depository.
The e-book can be obtained from iUniverse (PDF or Epub formats) or Amazon (Kindle).

