The failure of risk management: a book review
Introduction
Any future-directed activity has a degree of uncertainty, and uncertainty implies risk. Bad stuff happens – anticipated events don’t unfold as planned and unanticipated events occur. The main function of risk management is to deal with this negative aspect of uncertainty. The events of the last few years suggest that risk management as practised in many organisations isn’t working. Douglas Hubbard’s book, The Failure of Risk Management – Why it’s Broken and How to Fix It, discusses why many commonly used risk management practices are flawed and what needs to be done to fix them. This post is a summary and review of the book.
Interestingly, Hubbard began writing the book well before the financial crisis of 2008 began to unfold. So although he discusses matters pertaining to risk management in finance, the book has a much broader scope. For instance, it will be of interest to project and program/portfolio management professionals because many of the flawed risk management practices that Hubbard mentions are often used in project risk management.
The book is divided into three parts: the first part introduces the crisis in risk management; the second deals with why some popular risk management practices are flawed; the third discusses what needs to be done to fix these. My review covers the main points of each section in roughly the same order as they appear in the book.
The crisis in risk management
There are several risk management methodologies and techniques in use; a quick search will reveal some of them. Hubbard begins his book by asking the following simple questions about these:
- Do these risk management methods work?
- Would any organisation that uses these techniques know if they didn’t work?
- What would be the consequences if they didn’t work?
His contention is that for most organisations the answers to the first two questions are negative. To answer the third question, he gives the example of the crash of United Flight 232 in 1989. The crash was attributed to the simultaneous failure of three independent (and redundant) hydraulic systems. This happened because the systems were located at the rear of the plane and debris from a damaged turbine cut lines to all of them. This is an example of common mode failure – a single event causing multiple systems to fail. The probability of such an event occurring was estimated to be less than one in a billion. However, the reason the turbine broke up was that it hadn’t been inspected properly (i.e. human error). The probability estimate hadn’t considered human error, which is far more likely than one in a billion. Hubbard uses this example to make the point that a weak risk management methodology can have huge consequences.
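A back-of-the-envelope calculation shows how badly an independence assumption can mislead. Here is a minimal sketch in Python; the probabilities are invented to illustrate the arithmetic and are not taken from the Flight 232 analysis.

```python
# Illustrative probabilities only -- not the actual Flight 232 figures.
p_single = 1e-3   # assumed failure probability of one hydraulic system
p_common = 1e-4   # assumed probability of a common-cause event (e.g. debris strike)

# If the three redundant systems really were independent:
p_independent = p_single ** 3   # 1e-9 -- the "one in a billion" estimate

# Allowing for a single event that disables all three systems at once:
p_total = p_common + (1 - p_common) * p_independent

print(f"independent-failure estimate:    {p_independent:.1e}")
print(f"estimate including common cause: {p_total:.1e}")   # dominated by p_common
```

Once a common cause is admitted, the overall failure probability is essentially that of the common cause itself – five orders of magnitude larger than the independence-based estimate.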
Following a very brief history of risk management from ancient times to the present, Hubbard presents a list of common methods of risk management. These are:
- Expert intuition – essentially based on “gut feeling”
- Expert audit – based on expert intuition of independent consultants. Typically involves the development of checklists and also uses stratification methods (see next point)
- Simple stratification methods – risk matrices are the canonical example.
- Weighted scores – assigned scores for different criteria (scores usually assigned by expert intuition), followed by weighting based on perceived importance of each criterion.
- Non-probabilistic financial analysis – techniques such as computing the financial consequences of best and worst case scenarios.
- Calculus of preferences – structured decision analysis techniques such as multi-attribute utility theory and analytic hierarchy process. These techniques are based on expert judgements. However, in cases where multiple judgements are involved these techniques ensure that the judgements are logically consistent (i.e. do not contradict the principles of logic).
- Probabilistic models – involves building probabilistic models of risk events. Probabilities can be based on historical data, empirical observation or even intuition. The book essentially builds a case for evaluating risks using probabilistic models, and provides advice on how these should be built.
The book also discusses the state of risk management practice (at the end of 2008) as assessed by surveys carried out by The Economist, Protiviti and Aon Corporation. Hubbard notes that the surveys are based largely on self-assessments of risk management effectiveness. One cannot place much confidence in these because self-assessments of risk are subject to well known psychological effects such as cognitive biases (tendencies to base judgements on flawed perceptions) and the Dunning-Kruger effect (overconfidence in one’s abilities). The acid test for any assessment is whether or not it uses sound quantitative measures. Many of the firms surveyed fail on this count: they do not quantify risks as well as they claim they do. Assigning weighted scores to qualitative judgements does not count as a sound quantitative technique – more on this later.
So, what are some good ways of measuring the effectiveness of risk management? Hubbard lists the following:
- Statistics based on large samples – their use depends on the availability of historical or other data similar to the situation at hand.
- Direct evidence – this is where the risk management technique actually finds some problem that would not have been found otherwise. For example, an audit that unearths dubious financial practices.
- Component testing – even if one isn’t able to test the method end-to-end, it may be possible to test specific components that make up the method. For example, if the method uses computer simulations, it may be possible to validate the simulations by applying them to known situations.
- Check of completeness – organisations need to ensure that their risk management methods cover the entire spectrum of risks, else there’s a danger that mitigating one risk may increase the probability of another. Further, as Hubbard states, “A risk that’s not even on the radar cannot be managed at all.” As far as completeness is concerned, there are four perspectives that need to be taken into account. These are:
- Internal completeness – covering all parts of the organisation
- External completeness – covering all external entities that the organisation interacts with.
- Historical completeness – this involves covering worst case scenarios and historical data.
- Combinatorial completeness – this involves considering combinations of events that may occur together, including those that could lead to the common-mode failures discussed earlier.
Finally, Hubbard closes the first section with the observation that it is better not to use any formal methodology than to use one that is flawed. Why? Because a flawed methodology can lead to an incorrect decision being made with high confidence.
Why it’s broken
Hubbard begins this section by identifying the four major players in the risk management game. These are:
- Actuaries: These are perhaps the first modern professional risk managers. They use quantitative methods to manage risks in the insurance and pension industry. Although the methods actuaries use are generally sound, the profession is slow to pick up new techniques. Further, many investment decisions made by insurance companies do not come under the purview of actuaries. So, actuaries typically do not cover the entire spectrum of organizational risks.
- Physicists and mathematicians: Many rigorous risk management techniques came out of statistical research done during the second world war. Hubbard therefore calls this group War Quants. One of the notable techniques to come out of this effort is the Monte Carlo Method – originally proposed by Nick Metropolis, John von Neumann and Stanislaw Ulam as a technique to calculate the averaged trajectories of neutrons in fissile material (see this article by Nick Metropolis for a first-person account of how the method was developed). Hubbard believes that Monte Carlo simulations offer a sound, general technique for quantitative risk analysis. Consequently he spends a fair few pages discussing these methods, albeit at a very basic level (a minimal example follows this list). More about this later.
- Economists: Risk analysts in investment firms often use quantitative techniques from economics. Popular techniques include modern portfolio theory and models from options theory (such as the Black-Scholes model). The problem is that these models are often based on questionable assumptions. For example, the Black-Scholes model assumes that the rate of return on a stock is normally distributed (i.e. its value is lognormally distributed) – an assumption that’s demonstrably incorrect, as witnessed by the events of the last few years. Another way in which economics plays a role in risk management is through behavioural studies, in particular the recognition that decisions regarding future events (be they risks or stock prices) are subject to cognitive biases. Hubbard suggests that the role of cognitive biases in risk management has been consistently overlooked. See my post entitled Cognitive biases as meta-risks and its follow-up for more on this point.
- Management consultants: In Hubbard’s view, management consultants and standards institutes are largely responsible for many of the ad-hoc approaches to risk management. A particular favourite of these folks is ad-hoc scoring methods that involve ordering of risks based on subjective criteria. The scores assigned to risks are thus subject to cognitive bias. Even worse, some of the tools used in scoring can end up ordering risks incorrectly. Bottom line: many of the risk analysis techniques used by consultants and standards bodies have no justification.
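To make the Monte Carlo idea concrete, here is the minimal example promised above, written in Python. The project, its cost items and the budget figure are all invented for illustration; this is the shape of the technique, not an example from Hubbard’s book.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000   # number of simulation trials

# Hypothetical project with three uncertain cost items. The normal
# distributions and their parameters are assumptions made for this example.
total_cost = (
    rng.normal(100, 10, n)     # item A: mean 100, standard deviation 10
    + rng.normal(250, 40, n)   # item B
    + rng.normal(150, 30, n)   # item C
)

budget = 550
print(f"P(total cost > {budget}) = {np.mean(total_cost > budget):.1%}")
print(f"90th percentile of total cost = {np.percentile(total_cost, 90):.0f}")
```

The output is a distribution of outcomes rather than a single-point estimate, which is what makes the technique useful for risk analysis.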
Following the discussion of the main players in the risk arena, Hubbard discusses the confusion associated with the definition of risk. There are a plethora of definitions of risk, most of which originated in academia. Hubbard shows how some of these contradict each other while others are downright non-intuitive and incorrect. In doing so, he clarifies some of the academic and professional terminology around risk. As an example, he takes exception to the notion of risk as a “good thing” – as in the PMI definition, which views risk as “an uncertain event or condition that, if it occurs, has a positive or negative effect on a project objective.” This definition contradicts common (dictionary) usage of the term risk (which generally includes only bad stuff). Hubbard’s opinion on this may raise a few eyebrows (and hackles!) in project management circles, but I reckon he has a point.
In my opinion, the most important sections of the book are chapters 6 and 7, where Hubbard discusses why “expert knowledge and opinions” (favoured by standards and methodologies) are flawed and why a very popular scoring method (risk matrices) is “worse than useless.” See my posts on the limitations of scoring techniques and Cox’s risk matrix theorem for detailed discussions of these points.
A major problem with expert estimates is overconfidence. To overcome this, Hubbard advocates using calibrated probability assessments to quantify analysts’ abilities to make estimates. Calibration assessments involve getting analysts to answer trivia questions and eliciting confidence intervals for each answer. The confidence intervals are then checked against the proportion of correct answers. Essentially, this assesses experts’ ability to estimate by tracking how often they are right. It has been found that people can improve their ability to make subjective estimates through calibration training – i.e. repeated calibration testing followed by feedback. See this site for more on probability calibration.
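Checking calibration is simple in principle: collect an expert’s stated confidence intervals, compare them with the true answers, and see whether the hit rate matches the stated confidence. A minimal Python sketch, with invented data, follows.

```python
import numpy as np

# Hypothetical calibration exercise: each row holds the (lower, upper) bounds
# of an expert's stated 90% confidence interval for a trivia question; truths
# holds the corresponding correct answers. All numbers are invented.
intervals = np.array([[10, 50], [2, 8], [100, 400], [1, 3], [20, 60]])
truths = np.array([55, 5, 250, 2, 45])

hits = (truths >= intervals[:, 0]) & (truths <= intervals[:, 1])

# A well-calibrated expert's 90% intervals should contain the truth roughly
# 90% of the time; a markedly lower hit rate signals overconfidence.
print(f"hit rate: {hits.mean():.0%} (target: 90%)")
```

In practice one needs far more than five questions for the hit rate to mean anything, but the principle is the same.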
Next Hubbard tackles several “red herring” arguments that are commonly offered as reasons not to manage risks using rigorous quantitative methods. Among these are arguments that quantitative risk analysis is impossible because:
- Unexpected events cannot be predicted.
- Risks cannot be measured accurately.
Hubbard states that the first objection is invalid: although some events (such as spectacular stockmarket crashes) may have been overlooked by models, this doesn’t prove that quantitative risk analysis as a whole is flawed. As he discusses later in the book, many models go wrong by assuming Gaussian probability distributions where fat-tailed ones would be more appropriate. Of course, given limited data it is difficult to figure out which distribution is the right one. So, although Hubbard’s argument is correct, it offers little comfort to the analyst who has to model events before they occur.
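The practical difference between a Gaussian and a fat-tailed distribution is easy to demonstrate. The sketch below (Python, using SciPy) compares the probability of a five-standard-unit drop under a normal distribution and under a Student-t with three degrees of freedom; the threshold and the choice of distributions are illustrative only.

```python
from scipy import stats

# Probability of an observation more than 5 standard units below the mean.
p_gauss = stats.norm.cdf(-5)     # Gaussian
p_fat = stats.t.cdf(-5, df=3)    # fat-tailed Student-t with 3 degrees of freedom

print(f"Gaussian:  {p_gauss:.1e}")   # ~2.9e-07
print(f"Student-t: {p_fat:.1e}")     # ~7.7e-03, over 10,000 times more likely
```

A model that assumes the Gaussian treats extreme events as practically impossible – exactly the failure mode described above.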
As far as the second is concerned, Hubbard has written another book on how just about any business variable (even intangible ones) can be measured. The book makes a persuasive case that most quantities of interest can be measured, but there are difficulties. First, figuring out the factors that affect a variable is not a straightforward task. It depends, among other things, on the availability of reliable data, the analyst’s experience etc. Second, much depends on the judgement of the analyst, and such judgements are subject to bias. Although calibration may help reduce certain biases such as overconfidence, it is by no means a panacea for all biases. Third, risk-related measurements generally involve events that are yet to occur. Consequently, such measurements are based on incomplete information. To make progress one often has to make additional assumptions which may not be justifiable a priori.
Hubbard is a strong advocate for quantitative techniques such as Monte Carlo simulations in managing risks. However, he believes that they are often used incorrectly. Specifically:
- They are often used without empirical data or validation – i.e. their inputs and results are not tested through observation.
- They are generally used piecemeal – i.e. used in some parts of an organisation only, and often to manage low-level, operational risks.
- They frequently focus on variables that are not important (because these are easier to measure) rather than those that are important. Hubbard calls this perverse occurrence measurement inversion. He contends that analysts often exclude the most important variables because these are considered to be “too uncertain.”
- They use inappropriate probability distributions. The Normal distribution (or bell curve) is not always appropriate. For example, see my posts on the inherent uncertainty of project task estimates for an intuitive discussion of the form of the probability distribution for project task durations.
- They do not account for correlations between variables. Hubbard contends that many analysts simply ignore correlations between risk variables (i.e. they treat variables as independent when they actually aren’t). This almost always leads to an underestimation of risk because correlations can cause feedback effects and common mode failures (a sketch illustrating this effect follows below).
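To see why ignoring correlations understates risk, here is a toy Python simulation of two cost risks, first treated as independent and then as positively correlated. All the distribution parameters are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Two lognormal cost risks; parameters are invented for illustration.
mu, sigma, rho = 0.0, 0.5, 0.8

# Independent case: separate draws.
total_indep = rng.lognormal(mu, sigma, n) + rng.lognormal(mu, sigma, n)

# Correlated case: draw correlated normals, then exponentiate.
cov = [[sigma**2, rho * sigma**2], [rho * sigma**2, sigma**2]]
z = rng.multivariate_normal([mu, mu], cov, size=n)
total_corr = np.exp(z).sum(axis=1)

# The means barely differ; the upper tails -- where risk lives -- do.
for q in (95, 99):
    print(f"{q}th percentile: independent = {np.percentile(total_indep, q):.2f}, "
          f"correlated = {np.percentile(total_corr, q):.2f}")
```

The correlated case produces noticeably fatter upper percentiles; an independence assumption hides exactly this part of the risk.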
Hubbard dismisses the argument that rigorous quantitative methods such as Monte Carlo are “too hard.” I agree: the principles behind Monte Carlo techniques aren’t hard to follow – and I take the opportunity to plug my article entitled An introduction to Monte Carlo simulations of project tasks 🙂 . As far as practice is concerned, there are several commercially available tools that automate much of the mathematical heavy lifting. I won’t recommend any, but a search using the key phrase monte carlo simulation tool will reveal many.
How to fix it
The last part of the book outlines Hubbard’s recommendations for improving the practice of risk management. Most of the material presented here draws on the previous section of the book. His main suggestions are to:
- Adopt the language, tools and philosophy of uncertain systems. To do this he recommends:
- Using calibrated probabilities to express uncertainties. Hubbard believes that any person who makes estimates that will be used in models should be calibrated. He offers some suggestions on how people can improve their ability to estimate through calibration – discussed earlier and on this web site.
- Employing quantitative modelling techniques to model risks. In particular, he advocates the use of Monte Carlo methods to model risks. He also provides a list of commercially available PC-based Monte Carlo tools. Hubbard makes the point that modelling forces analysts to decompose the systems of interest and understand the relationships between their components (see point 2 below).
- Developing an understanding of the basic rules of probability, including independent events, conditional probabilities and Bayes’ Theorem. He gives examples of situations in which these rules can help analysts extrapolate from available data (a small worked example follows this list).
To this, I would also add that it is important to understand the idea that an estimate isn’t a number, but a probability distribution – i.e. a range of numbers, each with a probability attached to it.
- Build, validate and test models using reality as the ultimate arbiter. Models should be built iteratively, testing each assumption against observation. Further, models need to incorporate mechanisms (i.e. how and why the observations are what they are), not just raw observations. This is often hard to do, but at the very least models should incorporate correlations between variables. Note that correlations are often (but not always!) indicative of an underlying mechanism. See this post for an introductory example of Monte Carlo simulation involving correlated variables.
- Lobbying for risk management to be given appropriate visibility in organisations.
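As a small worked example of the kind of probabilistic reasoning Hubbard recommends, here is Bayes’ Theorem applied to a made-up project audit scenario; all the probabilities are invented for illustration.

```python
# Prior and conditional probabilities -- all assumed, for illustration only.
p_trouble = 0.10              # prior: 10% of projects are in serious trouble
p_flag_given_trouble = 0.80   # audits flag 80% of troubled projects
p_flag_given_ok = 0.15        # audits falsely flag 15% of healthy projects

# Total probability of an audit flag (law of total probability).
p_flag = (p_flag_given_trouble * p_trouble
          + p_flag_given_ok * (1 - p_trouble))

# Bayes' Theorem: P(trouble | flag) = P(flag | trouble) * P(trouble) / P(flag)
p_trouble_given_flag = p_flag_given_trouble * p_trouble / p_flag
print(f"P(trouble | audit flag) = {p_trouble_given_flag:.2f}")   # about 0.37
```

Even a reasonably reliable warning sign raises the probability of trouble to only about 37% – far less than the 80% figure that intuition tends to latch onto.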
In the penultimate chapter of the book, Hubbard fleshes out the characteristics of good risk analysts. As he mentions several times in the book, risk analysis is an empirical science – it arises from experience. So, although the analytical and mathematical (modelling) aspects of risk are important, a good analyst must, above all, be an empiricist – i.e. believe that knowledge about risks can only come from observation of reality. In particular, testing models by seeing how well they match historical data, and tracking model predictions, are absolutely critical aspects of a risk analyst’s job. Unfortunately, many analysts do not measure the performance of their risk models. Hubbard offers some excellent suggestions on how analysts can refine and improve their models via observation.
Finally, Hubbard emphasises the importance of creating an organisation-wide approach to managing risks. This ensures that organisations tackle the most important risks first, and that their risk management budgets are spent in the most effective way. Many of the tools and approaches that he suggests in the book are most effective if they are used in a consistent way across the entire organisation. In reality, though, risk management languishes way down in the priorities of senior executives. Even those who profess to understand the importance of managing risks in a rigorous way rarely offer risk managers the organisational visibility and support they need to do their jobs.
Conclusion
Whew, that was quite a bit to go through, but for me it was worth it. Hubbard’s views impelled me to take a closer look at the foundations of project risk management and I learnt a great deal from doing so. Regular readers of this blog will have noticed that I have referenced the book (and some of the references therein) in a few of my articles on risk analysis.
I should add that I’ve never felt entirely comfortable with the risk management approaches advocated by project management methodologies. Hubbard’s book articulates these shortcomings and offers solutions to fix them. Moreover, he does so in a way that is entertaining and accessible. If there is a gap, it is that he does not delve into the details of model building, but then his other book deals with this in some detail.
To summarise: the book is a must read for anyone interested in risk management. It is especially recommended for project professionals who manage risks using methods that are advocated by project management standards and methodologies.
On the limitations of scoring methods for risk analysis
Introduction
A couple of months ago I wrote an article highlighting some of the pitfalls of using risk matrices. Risk matrices are an example of scoring methods – techniques which use ordinal scales to assess risks. In these methods, risks are ranked by some predefined criteria such as impact or expected loss, and the ranking is then used as the basis for decisions on how the risks should be addressed. Scoring methods are popular because they are easy to use. However, as Douglas Hubbard points out in his critique of current risk management practices, many commonly used scoring techniques are flawed. This post – based on Hubbard’s critique and research papers quoted therein – is a brief look at some of the flaws of risk scoring techniques.
Commonly used risk scoring techniques and problems associated with them
Scoring techniques fall under two major categories:
- Weighted scores: These use several ordered scales which are weighted according to perceived importance. For example: one might be asked to rate financial risk, technical risk and organisational risk on a scale of 1 to 5 for each, and then weight them by factors of 0.6, 0.3 and 0.1 respectively (possibly because the CFO – who happens to be the project sponsor – is more concerned about financial risk than any other risks). The point is, the scores and weights assigned can be highly subjective – more on that below.
- Risk matrices: These rank risks along two dimensions – probability and impact – and assign them a qualitative ranking of high, medium or low depending on where they fall. Cox’s theorem shows such categorisations are internally inconsistent because the category boundaries are arbitrarily chosen.
Hubbard makes the point that, although both the above methods are endorsed by many standards and methodologies (including those used in project management), they should be used with caution because they are flawed. To quote from his book:
Together these ordinal/scoring methods are the benchmark for the analysis of risks and/or decisions in at least some component of most large organizations. Thousands of people have been certified in methods based in part on computing risk scores like this. The major management consulting firms have influenced virtually all of these standards. Since what these standards all have in common is the use of various scoring schemes instead of actual quantitative risk analysis methods, I will call them collectively the “scoring methods.” And all of them, without exception, are borderline or worthless. In practice, they may make many decisions far worse than they would have been using merely unaided judgements.
What is the basis for this claim? Hubbard points to the following:
- Scoring methods do not make any allowance for flawed perceptions of analysts who assign scores – i.e. they do not consider the effect of cognitive bias. I won’t dwell on this as I have previously written about the effect of cognitive biases in project risk management -see this post and this one, for example.
- Qualitative descriptions assigned to each score are understood differently by different people. Further, there is rarely any objective guidance as to how an analyst is to distinguish between a high or medium risk. Such advice may not even help: research by Budescu, Broomell and Por shows that there can be huge variances in understanding of qualitative descriptions, even when people are given specific guidelines about what the descriptions or terms mean.
- Scoring methods add their own errors. Below are brief descriptions of some of these:
- In his paper on risk matrices, Cox notes that “Typical risk matrices can correctly and unambiguously compare only a small fraction (e.g., less than 10%) of randomly selected pairs of hazards. They can assign identical ratings to quantitatively very different risks.” He calls this behaviour “range compression” – and it applies to any scoring technique that uses ranges (a toy demonstration follows this list).
- Assigned scores tend to cluster in the middle-to-high range. Analysis by Hubbard shows that, on a 5 point scale, 75% of all responses are 3 or 4. This implies that changing a score from 3 to 4 or vice-versa can have a disproportionate effect on the classification of risks.
- Scores implicitly assume that the magnitude of the quantity being assessed is directly proportional to the scale. For example, a score of 2 implies that the criterion being measured is twice as large as it would be for a score of 1. However, in reality, criteria rarely scale linearly in this way.
- Scoring techniques often presume that the factors being scored are independent of each other – i.e. there are no correlations between factors. This assumption is rarely tested or justified in any way.
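Cox’s range compression point, flagged in the list above, is easy to demonstrate with a toy 3×3 matrix in Python. The category boundaries below are arbitrary – as they are in practice – and the two risks are invented.

```python
# A toy 3x3 risk matrix; category boundaries are arbitrary, as in practice.
def category(value, low, high):
    return "low" if value < low else ("medium" if value < high else "high")

def matrix_cell(prob, impact):
    return category(prob, 0.05, 0.25), category(impact, 1e5, 1e7)

risks = {
    "Risk A": (0.06, 2e5),   # expected loss:    12,000
    "Risk B": (0.24, 9e6),   # expected loss: 2,160,000 -- 180 times larger
}
for name, (p, loss) in risks.items():
    print(name, matrix_cell(p, loss), f"expected loss = {p * loss:,.0f}")
# Both risks land in the same ('medium', 'medium') cell despite a 180-fold
# difference in expected loss.
```

The matrix assigns identical ratings to the two risks, so any prioritisation based on it is blind to a difference of two orders of magnitude.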
Many project management standards advocate the use of scoring techniques. To be fair, in many situations they are adequate, as long as they are used with an understanding of their limitations. Seen in this light, Hubbard’s book is an admonition to standards and textbook writers to be more critical of the methods they advocate, and a warning to practitioners that an uncritical adherence to standards and best practices is not the best way to manage project risks.
Scoring done right
Just to be clear, Hubbard’s criticism is directed against scoring methods that use arbitrary, qualitative scales which are not justified by independent analysis. There are other techniques which, though superficially similar to these flawed scoring methods, are actually quite robust because they:
- Are based on observations.
- Use real measures (as opposed to arbitrary ones, such as “alignment with business objectives” on a scale of 1 to 5, without defining what “alignment” means).
- Are validated after the fact (and hence refined with use).
As an example of a sound scoring technique, Hubbard quotes this paper by Dawes, which presents evidence that linear scoring models are superior to intuition in clinical judgements. Strangely, although the weights themselves can be obtained through intuition, the scoring model outperforms clinical intuition. This happens because human intuition is good at identifying important factors, but not so hot at evaluating the net effect of several, possibly competing factors. Hence simple linear scoring models can outperform intuition. The key here is that the models are validated by checking the predictions against reality.
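A linear scoring model of the kind Dawes describes is almost trivially simple; the value lies in the validation step. The Python sketch below, with invented data and weights, shows the shape of the exercise.

```python
import numpy as np

rng = np.random.default_rng(1)

# 50 hypothetical cases, each scored 1-5 on three criteria (invented data).
X = rng.integers(1, 6, size=(50, 3)).astype(float)

# Weights set by judgement. Dawes' point is that even "improper" weights can
# beat unaided intuition -- provided the model is checked against reality.
weights = np.array([0.6, 0.3, 0.1])
model_scores = X @ weights

# Stand-in for observed outcomes: driven by the same criteria plus noise.
outcomes = X @ np.array([0.5, 0.4, 0.1]) + rng.normal(0, 0.8, size=50)

# The validation step: do the model's scores actually track outcomes?
r = np.corrcoef(model_scores, outcomes)[0, 1]
print(f"score-outcome correlation: {r:.2f}")
```

What separates this from the flawed scoring methods discussed earlier is not the arithmetic, which is identical, but the feedback loop: the model’s predictions are compared with observed outcomes, and the model is kept only if it earns its keep.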
Another class of techniques use axioms based on logic to reduce inconsistencies in decisions. An example of such a technique is multi-attribute utility theory. Since they are based on logic, these methods can also be considered to have a solid foundation unlike those discussed in the previous section.
Conclusions
Many commonly used scoring methods in risk analysis are based on flaky theoretical foundations – or worse, none at all. To compound the problem, they are often used without any validation. A particularly ubiquitous example is the well-known and loved risk matrix. In his paper on risk matrices, Tony Cox shows how risk matrices can sometimes lead to decisions that are worse than those made on the basis of a coin toss. The fact that this is a possibility – even if only a small one – should worry anyone who uses risk matrices (or other flawed scoring techniques) without an understanding of their limitations.
Building project knowledge – a social constructivist view
Introduction
Conventional approaches to knowledge management on projects focus on the cognitive (or thought-related) and mechanical aspects of knowledge creation and capture. There is an alternate view, one which considers knowledge as being created through interactions between people who develop mutually acceptable interpretations of theories and facts in ways that suit their particular needs. That is, project knowledge is socially constructed. If this is true, then project managers need to pay attention to the environmental and social factors that influence knowledge construction. This is the position taken by Paul Jackson and Jane Klobas in their paper entitled Building knowledge in projects: A practical application of social constructivism to information systems development, which presents a knowledge creation/sharing process model based on social constructivist theory. This article is a summary and review of the paper.
A social constructivist view of knowledge
Jackson and Klobas begin with the observation that engineering disciplines are founded on the belief that knowledge can be expressed in propositions that correspond to a reality which is independent of human perception. However, there is an alternate view that knowledge is not absolute, but relative – i.e. it depends on the mental models and beliefs used to interpret facts, objects and events. A relevant example is how a software product is viewed by business users and software developers. The former group may see an application in terms of its utility whereas the latter may see it as an instance of a particular technology. Such perception gaps can also occur within seemingly homogenous groups – such as teams comprised of software developers, for example. This can happen for a variety of reasons such as the differences in the experience and cultural backgrounds of those who make up the group. Social constructivism looks at how such gaps can be bridged.
The authors’ discussion relies on the work of Berger and Luckmann, who described how the gap between perceptions of different individuals can be overcome to create a socially constructed, shared reality. The phrase “socially constructed” implies that reality (as it pertains to a project, for example) is created via a common understanding of issues, followed by mutual agreement between all the players as to what comprises that reality. For me this view strikes a particular chord because it is akin to the stated aims of dialogue mapping, a technique that I have described in several earlier posts (see this article for an example relevant to projects).
Knowledge in information systems development as a social construct
First up, the authors make the point that information systems development (ISD) projects are:
…intensive exercises in constructing social reality through process and data modeling. These models are informed with the particular world-view of systems designers and their use of particular formal representations. In ISD projects, this operational reality is new and explicitly constructed and becomes understood and accepted through negotiated agreement between participants from the two cultures of business and IT
Essentially, knowledge emerges through interaction and discussion as the project proceeds. However, the methodologies used in design are typically founded on an engineering approach, which takes a positivist view rather than a social one. As the authors suggest,
Perhaps the social constructivist paradigm offers an insight into continuing failure, namely that what is happening in an ISD project is far more complex than the simple translation of a description of an external reality into instructions for a computer. It is the emergence and articulation of multiple, indeterminate, sometimes unconscious, sometimes ineffable realities and the negotiated achievement of a consensus of a new, agreed reality in an explicit form, such as a business or data model, which is amenable to computerization.
With this in mind, the authors aim to develop a model that addresses the shortcomings of the traditional, positivist view of knowledge in ISD projects. They do this by representing Berger and Luckmann’s theory of social constructivism in terms of a knowledge process model. They then identify management principles that map on to these processes. These principles form the basis of a survey which is used as an operational version of the process model. The operational model is then assessed by experts and tested by a project manager in a real-life project.
The knowledge creation/sharing process model
The process model that Jackson and Klobas describe is based on Berger and Luckmann’s work.
The model describes how personal knowledge is created – personal knowledge being what an individual knows. Personal knowledge is built up using mental models of the world – these models are frameworks that individuals use to make sense of the world.
According to the Jackson-Klobas process model, personal knowledge is built up through a number of processes, including:
Internalisation: The absorption of knowledge by an individual
Knowledge creation: The construction of new knowledge through repetitive performance of tasks (learning skills) or becoming aware of new ideas, ways of thinking or frameworks. The latter corresponds to learning concepts and theories, or even new ways of perceiving the world. These correspond to a change in subjective reality for the individual.
Externalisation: The representation and description of knowledge using speech or symbols so that it can be perceived and internalized by others. Think of this as explaining ideas or procedures to other individuals.
Objectivation: The creation of shared constructs that represent a group’s understanding of the world. At this point, knowledge is objectified – it is perceived as having an existence independent of individuals.
Legitimation: The authorization of objectified knowledge as being “correct” or “standard.”
Reification: The process by which objective knowledge assumes a status that makes it difficult to change or challenge. A familiar example of reified knowledge is any procedure or process that is “hardened” into a system – “That’s just the way things are done around here,” is a common response when such processes are challenged.
The links depicted in the figure show the relationships between these processes.
Jackson and Klobas suggest that knowledge creation in ISD projects is a social process, which occurs through continual communication between the business and IT. Sure, there are other elements of knowledge creation – design, prototyping, development, learning new skills etc. – but these amount to nought unless they are discussed, argued, agreed on and communicated through social interactions. These interactions occur in the wider context of the organization, so it is reasonable to claim that the resulting knowledge takes on a form that mirrors the social environment of the organization.
Clearly, this model of knowledge creation is very different from the usual interpretation of knowledge having an independent reality, regardless of whether it is known to the group or not.
An operational model
The above is good theory, which makes for interesting, but academic, discussions. What about practice? Can the model be operationalised? Jackson and Klobas describe an approach to creating and testing an operational version of the model, with the aim of assessing its utility (rather than its validity). I discuss this in the following sections.
Knowledge sharing heuristics
To begin with, they surveyed the literature on knowledge management to identify knowledge sharing heuristics (i.e. experience-based techniques to enable knowledge sharing). As an example, some of the heuristics associated with the externalization process were:
- We have standard documentation and modelling tools which make business requirements easy to understand
- Stakeholders and IS staff communicate regularly through direct face-to-face contact
- We use prototypes
The authors identified more than 130 heuristics. Each of these was matched with a process in the model. According to the authors, this matching process was simple: in most cases there was no doubt as to which process a heuristic should be attached to. This suggests that the model provides a natural way to organize the voluminous and complex body of research in knowledge creation and sharing. Why is this important? Well, because it suggests that the conceptual model (as illustrated in Fig. 1) can form the basis of a simple means for project managers to assess knowledge creation/sharing capabilities in their work environments, with the assurance that all relevant variables are covered.
Validating the mapping
The validity of the matching was checked using twenty historical case studies of ISD projects. This worked as follows: explanations for what worked well and what didn’t were mapped against the model process areas (using the heuristics identified in the prior step). The aim was to answer the question: “is there a relationship between project failure and problems in the respective knowledge processes or, conversely, between project success and the presence of positive indicators?”
One of the case studies the authors use is the well-known (and possibly over-analysed) failure of the automated dispatch system for the London Ambulance Service. The paper has a succinct summary of the case study, which I reproduce below:
The London Ambulance Service (LAS) is the largest ambulance service in the world and provides accident and emergency and patient transport services to a resident population of nearly seven million people. Their ISD project was intended to produce an automated system for the dispatch of ambulances to emergencies. The existing manual system was poor, cumbersome, inefficient and relatively unreliable. The goal of the new system was to provide an efficient command and control process to overcome these deficiencies. Furthermore, the system was seen by management as an opportunity to resolve perceived issues in poor industrial relations, outmoded work practices and low resource utilization. A tender was let for development of system components including computer aided dispatch, automatic vehicle location, radio interfacing and mobile data terminals to update the status of any call-out. The tender was let to a company inexperienced in large systems delivery. Whilst the project had profound implications for work practices, personnel were hardly involved in the design of the system. Upon implementation, there were many errors in the software and infrastructure, which led to critical operational shortcomings such as the failure of calls to reach ambulances. The system lasted only a week before it was necessary to revert to the manual system.
Jackson and Klobas show how their conceptual model maps to knowledge-related factors that may have played a role in the project’s failure. For example, under the heading of personal knowledge, one can identify at least two potential factors: lack of involvement of end-users in design and selection of an inexperienced vendor. Further, the disconnect between management and employees suggests a couple of factors relating to reification: mutual negative perceptions and outmoded (but unchallenged) work practices.
From their validation, the authors suggest that the model provides a comprehensive framework that explains why these projects failed. That may be overstating the case – what’s cause and what’s effect is hard to tell, especially after the fact. Nonetheless, the model does seem to be able to capture many, if not all, knowledge-related gaps that could have played a role in these failures. Further, by looking at the heuristics mapped to each process, one might be able to suggest ways in which these deficiencies could have been addressed. For example, if externalization is a problem area one might suggest the use of prototypes or encourage face to face communication between IS and business personnel.
Survey-based tool
Encouraged by the above, the authors created a survey tool intended to evaluate knowledge creation/sharing effectiveness in project environments. In the tool, academic terms used in the model were translated into everyday language (for example, the term externalization was translated to knowledge sharing – see Fig 1 for translated terms). The tool asked project managers to evaluate their project environments against each knowledge creation process (or capability) on a scale of 1 to 10. Based on the inputs, it could recommend specific improvement strategies for capabilities that were scored low. The tool was evaluated by four project managers, who used it in their work environments over a period of 4-6 weeks. At the end of the period, they were interviewed and their responses were analysed using content analysis to match their experiences and requirements against the designed intent of the tool. Unfortunately, the paper does not provide any details about the tool, so it’s difficult to do much more than paraphrase the authors’ comments.
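Since the paper gives no implementation details, the following Python sketch is purely a guess at the tool’s general shape: capabilities scored from 1 to 10, with improvement strategies (drawn from heuristics like those listed earlier) suggested for anything scoring below a threshold. Every capability name, strategy and threshold here is hypothetical.

```python
# Hypothetical reconstruction -- the paper does not describe the actual tool.
STRATEGIES = {
    "knowledge sharing": ["use prototypes",
                          "regular face-to-face contact between IS and business"],
    "standardisation": ["agree on common documentation and modelling tools"],
    "personal knowledge": ["training", "involve end users in design"],
}

def recommend(scores, threshold=5):
    """Suggest improvement strategies for capabilities scored below threshold."""
    return {capability: STRATEGIES.get(capability, [])
            for capability, score in scores.items() if score < threshold}

print(recommend({"knowledge sharing": 3,
                 "standardisation": 7,
                 "personal knowledge": 4}))
```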
Based on their evaluation, the authors conclude that the tool provides:
- A common framework for project managers to discuss issues pertaining to knowledge creation and sharing.
- A means to identify potential problems and what might be done to address them.
Field testing
One of the evaluators of the model tested the tool in the field. The tester was a project manager who wanted to identify knowledge creation/sharing deficiencies in his work environment, and ways in which these could be addressed. He answered questions based on his own evaluation of knowledge sharing capabilities in his environment and then developed an improvement plan based on strategies suggested by the tool along with some of his own ideas. The completed survey and plan were returned to the researchers.
Use of the tool revealed the following knowledge creation/sharing deficiencies in the project manager’s environment:
- Inadequate personal knowledge.
- Ineffective externalization.
- Inadequate standardization (objectivation).
Strategies suggested by the tool include:
- An internet portal to promote knowledge capture and sharing. This included discussion forums, areas to capture and discuss best practices etc.
- Role playing workshops to reveal how processes worked in practice (i.e. surface tacit knowledge).
Based on the above, the authors suggest that:
- Technology can be used to support knowledge sharing and standardization, not just storage.
- Interventions that make tacit knowledge explicit can be helpful.
- As a side benefit, they note that the survey has raised consciousness about knowledge creation/sharing within the team.
Reflections and Conclusions
In my opinion, the value of the paper lies not in the model or the survey tool, but in the conceptual framework that underpins them – namely, the idea that knowledge depends on, and is shaped by, the social environment in which it evolves. Perhaps an example might help clarify what this means. Consider an organisation that decides to implement project management “best practices” as described by <fill in any of the popular methodologies here>. The wrong way to do this would be to implement the practices wholesale, without regard to organizational culture, norms and pre-existing practices. Such an approach is unlikely to lead to the imposed practices taking root in the organisation. On the other hand, an approach that picks the practices that are useful and tailors them to organizational needs, constraints and culture is likely to meet with more success. The second approach works because it attempts to bridge the gap between the “ideal best practice” and social reality in the organisation. It encourages employees to adapt practices in ways that make sense in the context of the organization. This invariably involves modifying practices, sometimes substantially, creating new (socially constructed!) knowledge in the bargain.
Another interesting point the authors make is that all of the knowledge sharing heuristics (130, I think the number was) could be classified unambiguously under one of the processes in the model. This suggests that the model is a reasonable view of the knowledge creation/sharing process. If one accepts this conclusion, then the model does indeed provide a common framework for discussing issues relating to knowledge creation in project environments. Further, the associated heuristics can help identify processes that don’t work well.
I’m unable to judge the usefulness of the survey-based tool developed by the authors because they do not provide much detail about it in the paper. However, that isn’t really an issue; the field of project management has too many “tools and techniques” anyway. The key message of the paper, in my opinion, is that every project has a unique context, and that the techniques used by others have to be interpreted and applied in ways that are meaningful in the context of the particular project. The paper is an excellent counterpoint to the methodology-oriented practice of knowledge management in projects; it should be required reading for methodologists and project managers who believe that things need to be done by The Book, regardless of social or organizational context.


