Why best practices are hard to practice (and what can be done about it)
Introduction
In a recent post entitled, Why Best Practices Are Hard to Practice, Ron Ashkenas mentions two common pitfalls that organisations encounter when implementing best practices. These are:
- Lack of adaptation: this refers to a situation in which best practices are applied without customising them to an organisation’s specific needs.
- Lack of adoption: this refers to the tendency of best practice initiatives to fizzle out because they are never taken up in the day-to-day work of an organisation.
Neither point is new: several practitioners and academics have commented on the importance of adaptation and adoption in best practice implementations (see this article from 1997, for example). Despite this, organisations continue to struggle when implementing best practices, which suggests a deeper problem. In this post, I explore the possibility that problems of adaptation and adoption arise because much of the knowledge relevant to best practices is tacit – it cannot be codified or captured via symbolic systems (such as writing) or speech. This “missing” tacit knowledge makes it difficult to adapt and adopt practices in a meaningful way. All is not lost, though: best practices can be useful as long as they are viewed as templates or starting points for discussion, rather than detailed prescriptions to be imitated uncritically.
The importance of tacit knowledge
Michael Polanyi’s aphorism – “We can know more than we can tell” – summarises the difference between explicit and tacit knowledge: the former refers to what we can “tell” (write down, or capture in some symbolic form) whereas the latter refers to the things we know but cannot explain to others via writing or speech alone.
The key point is: tacit knowledge is more relevant to best practices than its explicit counterpart.
“Why?” I hear you ask.
Short answer: explicit knowledge is a commodity that can be bought and sold; tacit knowledge isn’t. Hence it is the latter that gives organisations their unique characteristics and competencies.
For a longer answer, I’ll quote from a highly-cited paper by Maskell and Malmberg entitled, Localised Learning and Industrial Competitiveness:
It is a logical and interesting – though sometimes overlooked – consequence of the present development towards a knowledge-based economy, that the easier codified (tradeable) knowledge is accessed, the more significant becomes tacit knowledge for sustaining the heterogeneity of the firm’s resources. If all factors of production, all organisational blue-prints, all market-information and all production technologies were readily available in all parts of the world at (more or less) the same price, economic progress would dwindle. Resource heterogeneity is the very foundation for building firm specific competencies and thus for variations between firms in their competitiveness. Resource heterogeneity fuels the market process of selection between competing firms
Tacit knowledge thus confers a critical advantage on firms. It is precisely this knowledge that distinguishes firms from each other and sets the “best” (however one might choose to define that) apart from the rest. It is the knowledge that best practices purport to capture, but can’t.
Transferring tacit knowledge
The transfer of tacit knowledge is an iterative and incremental process: apprentices learn by practice, refining their skills over time. Such learning requires close interaction between the teacher and the taught. Communication technology can obviate the need for some face-to-face interaction, but the fact remains that proximity is important for the effective transfer of tacit knowledge. In the words of Maskell and Malmberg:
The interactive character of learning processes will in itself introduce geographical space as a necessary dimension to take into account. Modern communications technology will admittedly allow more of long distance interaction than was previously possible. Still, certain types of information and knowledge exchange continue to require regular and direct face-to-face contact. Put simply, the more tacit the knowledge involved, the more important is spatial proximity between the actors taking part in the exchange. The proximity argument is twofold. First, it is related to the time geography of individuals. Everything else being equal, interactive collaboration will be less costly and more smooth, the shorter the distance between the participants. The second dimension is related to proximity in a social and cultural sense. To communicate tacit knowledge will normally require a high degree of mutual trust and understanding, which in turn is related not only to language but also to shared values and ‘culture’.
The main point to take away from their argument is that proximity is important for effective transfer of tacit knowledge. The individuals involved need to be near each other geographically (shared space, face-to-face) and culturally (shared values and norms). By implication, this is also the only way to transfer best practice knowledge.
Discussion
Best practices, by definition, aim to capture the knowledge that enables successful organisations to be what they are. As we have seen above, much of this knowledge is tacit: it is context and history dependent, and requires physical/cultural proximity for effective transfer. Further, it is hard to extract, codify and transfer such knowledge in a way that makes sense outside its original setting. In light of this, it is easy to understand why adapting and adopting best practices is hard: best practices are incomplete – they omit important elements (the tacit bits that can’t be written down). Organisations have to (re)discover these in their own way. The explicit and (re-discovered) tacit elements then need to be integrated into new workplace practices that are necessarily different from the standardised best practices. This makes the new practices unique to the implementing organisation.
The above suggests that best practices should be seen as starting points – or “bare bones” templates – for transforming an organisation’s work practices. I have made this point in an earlier post in which I reviewed this paper by Jonathan Wareham and Hans Cerrits. Quoting from that post:
[Wareham and Cerrits] suggest an expanded view of best practices which includes things such as:
- Using best practices as guides for learning new technologies or new ways of working.
- Using best practices to generate creative insight into how business processes work in practice.
- Using best practices as a guide for change – that is, following the high-level steps, but not necessarily the detailed prescriptions.
These are indeed sensible and reasonable statements. However, they are much weaker than the usual hyperbole-laden claims that accompany best practices.
The other important implication of the above is that successful adoption of organisational practices is possible only with the active involvement of front-line employees. “Active” is the operative word here, signifying involvement and participation. One of the best ways to get involvement is to seek and act on employee opinions about their day-to-day work practices. Best practices can serve as templates for these discussions. Participation can be facilitated through the use of collective deliberation techniques such as dialogue mapping.
Wrap-up
Best practices have long been plagued by problems of adaptation and adoption. The basic reason for this is that much of the knowledge pertaining to practices is tacit and cannot be transferred easily. Successful implementation requires that organisations use best practices as templates to build on rather than prescriptions to be followed to the letter. A good way to start this process is through participatory design discussions aimed at filling in the (tacit) gaps. These discussions should be conducted in a way that invites involvement of all relevant stakeholders, especially those who will work with and be responsible for the practices. Such an inclusive approach ensures that the practices will be adapted to suit the organisation’s needs. Further, it improves the odds of adoption because it incorporates the viewpoints of the most important stakeholders at the outset.
Paul Culmsee and I are currently working on a book describing such an approach – one that goes “beyond best practices”. See this post for an excerpt from the book (and this one for a rather nice mock-up cover!).
The four destructive enthusiasms of IT
Introduction
Several surveys have indicated that IT projects – especially large ones – fail at an alarming rate. In a paper entitled, Pessimism, Computer Failure, and Information Systems Development in the Public Sector, Shaun Goldfinch mentions that 20–30% of projects costing more than $10 million are abandoned altogether. Further, over half run over time and/or budget, and do not deliver to expectations. Although Goldfinch’s paper focuses on IT investments in the public sector, the situation in the private sector isn’t much better.
Goldfinch makes the observation that,
Enthusiasm for large and complex investments in IS continues unabated despite decades of failure. Indeed, the largest-ever public sector project was initiated in 2002 by the United Kingdom’s National Health Service at an estimated cost of US$11 billion…
He proposes a model of four pathological enthusiasms that cause key stakeholders to talk up benefits and downplay difficulties when advocating such projects. In this post, I take a brief look at the model and its utility in evaluating project proposals.
The four enthusiasms model
Many projects begin as ideas which originate from a small number of enthusiastic advocates. Often a single enthusiast with sufficient influence can push an ill-conceived project through the approval stages to the point where it is given the go-ahead. According to Goldfinch, such misplaced enthusiasm generally falls into one of the following categories:
- Idolisation (Technological Infatuation): This is a situation where a key business stakeholder believes that business problems can always be solved by technology. Projects driven by such people place technology at the heart of the solution. Such efforts often fail because not enough attention is paid to other factors (people, processes etc.).
- Technophilia: This refers to the IT profession’s belief that all problems have technical solutions. As Goldfinch puts it, it is the myth of the technological fix, in which “the entire IS profession perpetuates the myth that better technology, and more of it, are the remedies for practical problems.” Efforts driven by technophilia fail because those involved get too caught up in learning and mastering the technology rather than solving the problem.
- Lomanism: This term, derived from the protagonist in the play Death of a Salesman, refers to the (real or feigned) over-enthusiasm that IT sales and marketing professionals have for their companies’ products. Unfortunately, such folks often have the ear of IT decision-makers who are susceptible to sales pitches that promise untold (but unrealistic) benefits. On the other hand, it should also be mentioned that Lomanism is often a response to unrealistic customer expectations coupled with the pressure to meet sales targets. The only clear beneficiaries of Lomanism-driven efforts are technology vendors.
- Managerial faddism: This refers to the tendency of managers and senior executives to fall under the spell of the latest management fads. Many of these fads recommend a wholesale overhaul of organizational structures and processes, and are often accompanied by technical tools. IT service management methodologies are good examples of such fads.
Goldfinch states that:
Together, these four enthusiasms feed on and mutually reinforce one another in a vicious cycle, creating a strongly held belief that newer and larger IS projects are a good idea. Doubters and skeptics may be portrayed as “negative,” “not team players,” “not helpful”… Together, these pathologies make up the four enthusiasms of IT failure. When a project does encounter difficulties, these four enthusiasms can undermine attempts to curtail or abandon the project — a project can always be fixed with better management, a redesigned monitoring system or contract, more technology or hardware, better programming, or just a reassuring “it’ll be right on the night.”
Conclusion
Goldfinch suggests that large IT projects are often driven by one of four types of enthusiasm, which can result in projects founded on nothing more than wishful thinking and undue optimism. To counter this, he recommends that decision-makers take a pessimistic view when evaluating proposals for IT projects. Among other things, this means questioning assumptions, particularly those relating to the technology that will be employed. Independent opinions are a good way to do this, but truly unbiased ones can be hard to come by (vested interests aren’t always obvious). In the end, the solution may be as simple as relying on one’s own common sense and judgement. That’s where the model can help: viewing a business case or project proposal through the lens of the model can show up over-optimistic claims and projections.
Cause and effect in management
Introduction
Management schools and gurus tell us that specific managerial actions will lead to desirable consequences – witness the prescriptions for success in books such as Good to Great or In Search of Excellence. But can one really attribute success (or failure) to specific actions? A cause-effect relationship is often assumed, but in reality the causal connection between strategic management actions and organisational outcomes is tenuous. This post, based on a paper by Glenn Shafer entitled Causality and Responsibility, is an exploration of the causal connection between managerial actions and their (assumed) consequences.
Note that the discussion below applies to strategic – or “big picture” – management decisions, not operational ones. In the latter, cause and effect is generally quite clear-cut. For example, the decision to initiate a project sets in motion several processes that have fairly predictable outcomes. However, taking a big picture view, initiating a project (or even the successful completion of one) does not imply that the strategic aims of the project will be met. It is the latter point that is of interest here – the causal connection between a strategic decision and its assumed outcome.
Shafer’s paper deals with causality and responsibility in legal deliberations: specifically, the process by which judges and juries reach their verdict as to whether the accused (person or entity) is actually responsible (in a causal sense) for the outcome they are charged with. In short, did the actions of the accused cause the outcome? The arguments Shafer makes are quite general, and have applicability to any discipline. In the following paragraphs I’ll look at a couple of the key points he makes and outline their implications for cause and effect in management actions.
Deterministic cause-effect relationships
The first point that Shafer makes is that we should infer that a particular action causes a particular outcome only if it is improbable that the outcome could have happened without the action preceding it. In Shafer’s words:
…we are on safe ground in attributing responsibility if we do so based on our knowledge of impossibilities. It is not surprising, therefore, that the classical legal concept of cause – necessary and sufficient cause – is defined in terms of impossibility. According to this concept, an action causes an event if the event must happen (it is impossible for it not to happen) when the action is taken and cannot happen (it is impossible for it to happen) if the action is not taken.
This is, in fact, what legal arguments attempt to do: they attempt to prove, beyond reasonable doubt, that the crime occurred because of the defendant’s actions.
The reason that impossibilities are a better way of “proving” causal relationships is that such relationships cannot be invalidated as our knowledge of the situation increases, provided the knowledge we already have is valid. In other words, once something is deemed impossible (using valid knowledge), it remains so even if we learn more about the situation. In contrast, a claim that something is possible in the light of existing knowledge can be rendered false by a single contradictory fact.
The implication of the above for cause and effect in management is clear: a manager can (should!) claim responsibility for a particular outcome only if:
- The outcome must (almost always) happen if the managerial action occurs.
- It is highly unlikely that the outcome could have occurred without the action occurring prior to it.
Seen in this light, many of the prescriptions laid out in management bestsellers are little better than herpetological oleum.
Probabilistic cause-effect relationships
Of course, deterministic cause-effect relationships aren’t the norm in management – only the supremely confident (foolhardy?) would claim that a specific managerial action will always lead to a specific organisational outcome. This raises the question: what about probabilistic relationships? That is, what can we say about claims that a particular action results in a particular effect, but only in a fraction of the instances in which the action occurs?
To address this question, Shafer makes the important point that probabilities not close to zero or one have no meaning in isolation. They have meaning only within a system, and their meaning derives from an impossibility: the near-certainty that no one can make a substantial amount of money betting at the odds given by the probabilities. This, in turn, is a consequence of how probabilities are validated empirically. In Shafer’s words:
We validate a system of probabilities empirically by performing statistical tests. Each such test checks whether observations have some overall property that the system says they are practically certain to have. It checks, in other words, on whether observations diverge from the probabilistic model in a way that the model says is practically (approximately) impossible. In Probability and Finance: It’s Only a Game, Vovk and I argue that both the applications of probability and the classical limit theorems (the law of large numbers, the central limit theorem, etc.) can be most clearly understood and most elegantly explained if we treat these asserted practical impossibilities as the basic meaning of a probabilistic or statistical model, from which all other mathematical and practical conclusions are to be derived. I cannot go further into the argument of the book here, but I do want to emphasize one of its consequences: because the empirical validity of a system of probabilities involves only the approximate impossibilities it implies, it is only these approximate impossibilities that we should expect to see preserved in a deeper causal structure. Other probabilities, those not close to zero or one, may not be preserved and hence cannot claim the causal status.
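Shafer’s notion of validation can be illustrated with a toy simulation (my own sketch, not an example from the paper): a fair-coin model says the observed frequency of heads over many flips is practically certain to be close to 0.5, so a large divergence – something the model declares practically impossible – would discredit the model.

```python
import random

# Toy illustration of validating a probabilistic model via a statistical test:
# the fair-coin model asserts that, over n flips, the observed frequency of
# heads diverging from 0.5 by more than 0.01 is practically impossible
# (for n = 100,000 its probability is below one in a billion).
random.seed(42)

n = 100_000
heads = sum(random.random() < 0.5 for _ in range(n))
freq = heads / n

# Observing the "practically impossible" divergence would count as
# empirical evidence against the fair-coin model; here it does not occur.
assert abs(freq - 0.5) < 0.01
print(f"observed frequency of heads: {freq:.4f}")
```

The point of the sketch is that only the near-zero/near-one probabilities (here, the near-certainty that the frequency stays close to 0.5) are ever put to an empirical test; the individual flip probabilities are not tested in isolation.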
An implication of the above is that probabilities not close to zero or one are not fundamental properties of the system/situation; they are subject to change as our knowledge of the situation/system improves. A simple example may serve to explain this point. Consider the following hypothetical claim from a software vendor:
“80% of our customers experience an increase in sales after implementing our software system.”
Presumably, the marketing department responsible for this statement has the data to back it up. Despite that, the increase in sales for a particular customer cannot (should not!) be attributed to the software. Why? Well, for the following reasons:
- The particular customer may differ in important ways from those used in estimating the probability. This is a manifestation of the reference class problem.
- Most statistical studies of the kind used in marketing or management are enumerative, not analytical – i.e. they can be used to classify data, but not to establish cause-effect relationships. See my post entitled Enumeration or Analysis for more on the differences between enumerative and analytical studies.
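The reference class problem is easy to demonstrate with invented numbers (the figures below are mine, purely for illustration): an aggregate “80% of customers” can be dominated by customers quite unlike the prospective buyer.

```python
# Hypothetical customer base behind the vendor's "80%" claim. The aggregate
# figure is dominated by large enterprises; a small business evaluating the
# product belongs to a reference class with a very different success rate.
segments = {
    # segment: (number of customers, number reporting a sales increase)
    "large_enterprise": (900, 790),
    "small_business":   (100, 10),
}

total, improved = (sum(x) for x in zip(*segments.values()))
print(improved / total)  # aggregate "success rate" -> 0.8
print(segments["small_business"][1] / segments["small_business"][0])  # -> 0.1
```

The vendor’s statement is true of the aggregate, yet a small business inferring an 80% chance of a sales increase for itself would be off by a factor of eight – the probability depends on which reference class the customer is assigned to.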
There is an underlying reason for the above which I’ll discuss next.
The root of the problem – too many variables
The point made above – that outcomes cannot be attributed to actions unless the probabilities involved are close to zero or one – is a consequence of the fact that most organisational outcomes are the result of several factors. It is therefore incorrect to attribute an outcome to a single factor (such as farsighted managerial action). Nancy Cartwright makes this point in her paper entitled Causal Laws and Effective Strategies, where she states that a cause ought to increase the frequency of its purported effect, but that this increase can be masked by other causal factors that have not been taken into account. She uses the somewhat dated (and therefore incorrect) example of the relationship between smoking and heart disease. However, it serves to illustrate the point, so I’ll quote it below:
…a cause ought to increase the frequency of its effect. But this fact may not show up in the probabilities if other causes are at work. Background correlations between the purported cause and other causal factors may conceal the increase in probability which would otherwise appear. A simple example will illustrate. It is generally supposed that smoking causes heart disease. Thus, we may expect that the probability of heart disease on smoking is greater than otherwise (K’s note: i.e. the conditional probability of heart disease given that the person is a smoker, P(H/S), is greater than the probability of heart disease in the general population, P(H)). This expectation is mistaken, however. Even if it is true that smoking causes heart disease, the expected increase in probability will not appear if smoking is correlated with a sufficiently strong preventative, say exercising. To see why this is so, imagine that exercising is more effective at preventing heart disease than smoking at causing it. Then in any population where smoking and exercising are highly enough correlated, it can be true that P(H/S) = P(H), or even P(H/S) < P(H). For the population of smokers also contains a good many exercisers, and when the two are in combination, the exercising tends to dominate….
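Cartwright’s masking effect is just conditional-probability arithmetic, and can be checked with a few invented numbers (mine, not hers): smoking raises risk within each exercise stratum, yet smokers as a group look no worse – indeed better – than the population as a whole.

```python
# Invented numbers chosen so that smoking raises heart-disease risk within
# each exercise stratum, yet smoking is so strongly correlated with
# exercising (a powerful preventative) that the increase disappears from
# the unconditional probabilities.

# (smokes, exercises) -> (population share, P(heart disease | group))
groups = {
    (True,  True):  (0.40, 0.05),  # smokers who exercise
    (True,  False): (0.10, 0.40),  # smokers who don't
    (False, True):  (0.10, 0.02),  # non-smokers who exercise
    (False, False): (0.40, 0.25),  # non-smokers who don't
}

def p_disease(cond):
    """P(heart disease | cond), computed over the groups satisfying cond."""
    mass = sum(share for (s, e), (share, _) in groups.items() if cond(s, e))
    hits = sum(share * p for (s, e), (share, p) in groups.items() if cond(s, e))
    return hits / mass

p_h        = p_disease(lambda s, e: True)  # P(H)   ~ 0.162
p_h_smoker = p_disease(lambda s, e: s)     # P(H|S) = 0.12

# Within each stratum smoking increases risk, yet P(H|S) < P(H):
print(round(p_h, 3), round(p_h_smoker, 3))
```

Here the within-stratum risk increase (0.05 vs 0.02 among exercisers, 0.40 vs 0.25 among non-exercisers) is real, but because 80% of the smokers in this invented population exercise, P(H|S) comes out below P(H) – exactly the situation Cartwright describes.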
In the case of strategic outcomes, it is impossible to know all the factors involved. Moreover, the factors are often interdependent and subject to positive feedback (see my previous post for more on this). So the problem is even worse than implied by Cartwright’s example.
Conclusions
The implications of the above can be summarised as follows: the efficacy of most strategic managerial actions is questionable because the probabilities involved in such claims are rarely close to zero or one. This shouldn’t be a surprise: most organisational outcomes are consequences of several factors acting in concert, many of which combine in unpredictable ways. Given this, it is unreasonable to expect that managerial actions will result in predictable organisational outcomes. That said, it is only natural to claim responsibility for desirable outcomes and shift the blame for undesirable ones, just as it is natural to seek simplistic solutions to difficult organisational problems. Hence the insatiable market for management snake oil.

