Archive for the ‘Corporate IT’ Category
Reasons and rationales for not managing risks on IT projects – a paper review
Introduction
Anticipating and dealing with risks is an important part of managing projects. So much so that most frameworks and methodologies devote a fair bit of attention to risk management: for example, the PMI framework considers risk management to be one of the nine “knowledge areas” of project management. Now, frameworks and methodologies are normative – that is, they tell us how risks should be managed – but they say nothing about how risks are actually handled on projects. It is perhaps too much to expect that all projects are run with the full machinery of formal risk management, but it is reasonable to expect that most project managers deal with risks in some more or less systematic way. However, project management lore is rife with stories of projects on which risks were managed inadequately, or not managed at all (see this post for some pertinent case studies). This begs the question: are there rational reasons for not managing risks on projects? A paper by Elmar Kutsch and Mark Hall entitled, The Rational Choice of Not Applying Project Risk Management in Information Technology Projects, addresses this question. This post is a summary and review of the paper.
Background
The paper begins with a brief overview of risk management as prescribed by various standards. Risk management is about making decisions in the face of uncertainty. To make the right decisions, project managers need to figure out which risks are the most significant. Consequently, most methodologies offer techniques to rank risks based on various criteria. These techniques are based on many (rather strong) assumptions, which the authors summarise as follows:
- An unambiguous identification of the problem (or risk) including its cause
- Perfect information about all relevant variables that affect the risk.
- A model of the risk that incorporates the aforementioned variables.
- A complete list of possible approaches to tackle the risks.
- An unambiguous, quantitative and internally consistent measure for the outcomes of each approach.
- Perfect knowledge of the consequences of each approach.
- Availability of resources for the successful implementation of the chosen solution.
- The presence of rational decision-makers (i.e. people free from cognitive biases).
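As a sketch of the ranking step these methodologies prescribe, consider ranking risks by expected loss (probability times impact). The risks, probabilities and impacts below are invented for illustration and are not drawn from any standard:

```python
# Illustrative sketch of the risk-ranking step common to formal methodologies.
# All probabilities and impacts are invented for illustration.
risks = [
    {"name": "key developer leaves", "probability": 0.2, "impact": 50_000},
    {"name": "vendor delivers late", "probability": 0.5, "impact": 20_000},
    {"name": "scope creep",          "probability": 0.7, "impact": 10_000},
]

# Rank by expected loss (probability x impact), highest first.
for r in risks:
    r["expected_loss"] = r["probability"] * r["impact"]

ranked = sorted(risks, key=lambda r: r["expected_loss"], reverse=True)
for r in ranked:
    print(f'{r["name"]}: expected loss = {r["expected_loss"]:.0f}')
```

Note that even this toy example quietly relies on several of the assumptions listed above: perfect information about probabilities and impacts, and a single unambiguous measure of outcome.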
Most formal methodologies assume the above to be “self-evidently correct” (note that some of them aren’t correct, see my posts on cognitive biases as project meta-risks and the limitations of scoring methods in risk analysis for more). Anyway, regardless of the validity of the assumptions, it is clear that achieving all the above would require a great deal of commitment, effort and money. This, according to the authors, provides a hint as to why many projects are run without formal risk management. In their words:
…despite the existence of a self-evidently correct process to manage project risk, some evidence suggests that project managers feel restricted in applying such an “optimal” process to manage risks. For example, Lyons and Skitmore (2004) investigated factors limiting the implementation of risk management in Australian construction projects. Similar findings about the barriers of using risk management in three Hong Kong industries were found in a further prominent study by Tummala, Leung, Burchett, and Leung (1997). The most dominant factors for constraining the use of project risk management are the lack of time, the problem of justifying the effort into project risk management, and the lack of information required to quantify/qualify risk estimates.
The authors review the research literature to find other factors that could reduce the likelihood of risk management being applied in projects. Based on their findings, they suggest the following as reasons that project managers often offer as justifications (or rationales) for not managing risks:
- The problem of hindsight: Most risk management methodologies rely on historical data to calculate probabilities of risk eventuation. However, many managers feel they cannot rely on such data for their specific (unique) project.
- The problem of ownership: Risks are often thought of as “someone else’s problem”. There is often a reluctance to take ownership of a risk because of the fear of blame in case the risk response fails to address the risk.
- The problem of cost justification: From the premises listed above it is clear that proper risk management is a time-consuming, effort-laden and expensive process. Many IT projects are run on tight budgets, and risk management is an area that’s perceived as being an unnecessary expense.
- Lack of expertise: Project managers might be unaware of risk management techniques. I find this hard to believe, given that practically all textbooks and methodologies yammer on, at great length, about the importance of managing risks. Besides, it is a pretty weak justification!
- The problem of anxiety: By definition, risk management implies that one is considering things that can go wrong. Sometimes, when informed about risks, stakeholders may decide not to go ahead with a project. Consequently, project managers may limit their risk identification efforts in an attempt to avoid making stakeholders nervous.
When justifying the decision not to manage risks, the above factors are often presented as barriers or problems which prevent the project manager from using risk management. As an illustration of the last of these (the problem of anxiety), a project manager might say, “I can’t talk about risks on my project because the sponsor will freak out and throw me out of his office.”
Research Method
The authors started with an exploratory study aimed at developing an understanding of the problem from the perspective of IT project managers – i.e. how project managers actually experience the application of risk management on their projects. This study was done through face-to-face interviews. Based on patterns that emerged from this study, the authors developed a web-based survey that was administered to a wider group of project managers. The exploratory phase involved eighteen project managers, whereas the in-depth survey was completed by just over a hundred project managers, all of whom were members of the PMI Risk Management Special Interest Group. Although the paper doesn’t say so, I assume that project managers were asked questions in reference to a specific project they were involved in (perhaps the most recent one?).
I won’t dwell any more on the research methodology; the paper has all the details.
Results and interpretation
Four of the eighteen project managers interviewed in the exploratory study did not apply risk management processes on their projects. The reasons given were interpreted by the authors as cost justification, hindsight and anxiety. I’ve italicized the word “interpreted” in the previous sentence because I believe the responses given by the project managers could just as easily be interpreted another way. I’ve presented their arguments below so that readers can judge for themselves.
One interviewee mentioned that, “At the beginning, we had so much to do that no one gave a thought to tackling risks. It simply did not happen.” The authors conclude that the rationale for not managing risks in this case is one of cost justification, the chain of logic being that due to the lack of time, investment of resources in managing risks was not justified. To me this seems to read too much into the response. From the response it appears to me that the real reason is exactly what the interviewee states – “no one thought of managing risks” – i.e. risks were overlooked.
Another interviewee stated, “It would have been nice to do it differently, but because we were quite vulnerable in terms of software development, and because most of that was driven by the States, we were never in a position to be proactive. The Americans would say “We got an update to that system and we just released it to you,” rather than telling us a week in advance that something was happening. We were never ahead enough to be able to plan.” The authors interpret the lack of risk management in this case as being due to the problem of hindsight – i.e. because the risk that an update poses to other parts of the system could not have been anticipated, no risk management was possible. To me this interpretation seems a little thin – surely, most project managers understand the risks that arbitrary updates pose. From the response it appears that the real reason was that the project manager was not able to plan ahead because he/she had no advance warning of updates. This seems more a problem of a broken project management process than anything to do with risk management or hindsight. My point: the uncertainty here was known (high probability of regular updates), so something could (and should) have been done about it whilst planning the project.
I’ve dwelt on these examples because it appears that the authors may have occasionally fallen into the trap of pigeon-holing interviewee responses into their predefined rationales (the ones discussed in the previous section) instead of listening to what was actually being said. Of course, my impression is based on a reading of the paper and the data presented therein. The authors may well have other (unpublished) information to support their classification of interviewee responses. However, if that is the case, they should have presented the data in the paper, because the reliability of the second survey depends on the set of predefined rationales being comprehensive and correct.
The authors present a short discussion of the second phase of their study. They find that no formal risk management processes were used in about one third of the 102 cases studied. As the authors point out, that in itself is an interesting statistic, especially considering the money at stake in typical IT projects. In cases where no risk management was applied, respondents were asked to provide reasons why this was so. The reasons given were extremely varied but, once again, the authors pigeon-holed these into their predefined categories. I present some of the original responses and interpretations below so that readers can judge for themselves.
Consider the following reasons that were offered (by respondents) for not applying risk management:
1. “We haven’t got time left.”
2. “No executive call for risk measurements.”
3. “Company doesn’t see the value in adding the additional cycles to a project.” (?)
4. “Upper management did not think it required it.”
5. “Ignorance that such a thing was necessary.”
6. “An initial risk analysis was done, but the PM did not bother to follow up.”
7. “A single risk identification workshop was held early in the project before my arrival. Reason for not following the process was most probably the attitude of the members of the team.”
Interestingly, the authors interpret all the above responses (and a few more) as being attributable to the cost justification rationale. However, it seems to me that there could be several other (more likely) interpretations. For example: 2, 3, 4 and 5 could be attributed to a lack of knowledge about the value of managing risks, whereas 1, 6 and 7 sound more like simple (and unfortunately, rather common!) buck-passing.
Conclusion
Towards the end of the paper the authors make an excellent point about the rationality of a decision not to apply risk management. From the perspective of formal methodologies such a decision is irrational. However, rationality (or the lack of it) isn’t so cut and dried. Here’s what the authors say:
…a decision by an IT project manager not to apply project risk management may be described as irrational, at least if one accepts the premise that the project manager chose not to apply a “self-evidently” correct process to optimally reduce the impact of risk on the project outcome. On the other hand, … a person who focuses only on the statistical probability of threats and their impacts and ignores any other information would be truly irrational. Hence, a project manager would act sensibly by, for example, not applying project risk management because he or she rates the utility of not using project risk management as higher than the utility of confronting stakeholders with discomforting information….”
…or spending money to address issues that may not eventuate, for that matter. The point being that people don’t make decisions based on prescribed processes and procedures alone; there are other considerations.
The authors then go on to say,
PMI and APM claim that through the systematic identification, analysis, and response to risk, project managers can achieve the planned project outcome. However, the findings show that in more than one-third of all projects, the effectiveness of project risk management is virtually nonexistent because no formal project risk management process was applied due to the problem of cost justification.
Now, although it is undeniable that many projects are run with no risk management whatsoever, I’m not sure I agree with the last statement in the quote. From the data presented in the paper, it seems more likely that a lack of knowledge and “buck-passing” are the prime reasons for risk management being given short shrift on the projects surveyed. Even if cost justification was offered as a rationale by some interviewees, their quotes suggest that the real reasons were quite different. This isn’t surprising: it is but natural to attribute to unacceptable costs that which should be attributed to oversight or failure. I think this may be the case in a large number of projects on which risks aren’t managed. However, as the authors mention, it is impossible to make any generalisations based on small samples. So, although it is incontrovertible that there are a significant number of projects on which risks aren’t managed, why this is so remains an open question.
To outsource or not to outsource – a transaction cost view
One of the questions that organisations grapple with is whether or not to outsource software development work to external providers. The work of Oliver Williamson – one of the 2009 Nobel Laureates for Economics – provides some insight into this issue. This post is a brief look at how Williamson’s work on transaction cost economics can be applied to the question of outsourcing.
A firm has two choices for any economic activity: performing the activity in-house or going to market. In either case, the cost of the activity can be decomposed into production costs, which are direct and indirect costs of producing the good or service, and transaction costs, which are other (indirect) costs incurred in performing the economic activity.
In the case of in-house application development, production costs include developer time, software tools, etc., whereas transaction costs include costs relating to building an internal team (with the right skills, attitude and knowledge) and managing uncertainty. On the other hand, in outsourced application development, production costs include all costs that the vendor incurs in producing the application, whereas transaction costs (typically incurred by the client) include the following:
- Search costs: cost of searching for providers of the product / service.
- Selection costs: cost of selecting a specific vendor.
- Bargaining costs: costs incurred in agreeing on an acceptable price.
- Enforcement costs: costs of measuring compliance, costs of enforcing the contract etc.
- Costs of coordinating work: this includes costs of managing the vendor.
From the above list it is clear that it can be hard to figure out transaction costs for outsourcing.
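A back-of-the-envelope comparison makes the role of transaction costs concrete. All figures below are hypothetical:

```python
# Hypothetical cost comparison for an application development project.
# All figures are invented for illustration only.
def total_cost(production, transaction):
    """Total cost of an economic activity = production costs + transaction costs."""
    return production + sum(transaction.values())

in_house = total_cost(
    production=500_000,  # developer time, software tools, etc.
    transaction={"team_building": 60_000, "managing_uncertainty": 40_000},
)

outsourced = total_cost(
    production=400_000,  # the vendor's price
    transaction={
        "search": 10_000,
        "selection": 15_000,
        "bargaining": 20_000,
        "enforcement": 100_000,
        "coordination": 120_000,
    },
)

# The vendor wins on production cost but loses overall once
# transaction costs are counted.
print(in_house, outsourced)  # prints: 600000 665000
```

Of course, the hard part in practice is not the arithmetic but estimating the transaction costs in the first place, which is precisely the point made above.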
Now, according to Williamson, the decision as to whether or not an economic activity should be outsourced depends critically on transaction costs. To quote from an article in the Economist which describes his work:
…All economic transactions are costly – even in competitive markets, there are costs associated with figuring out the right price. The most efficient institutional arrangement for carrying out a particular economic activity would be the one that minimized transaction costs.
The most efficient institutional arrangement is often the market (i.e. outsourcing, in the context of this post), but firms (i.e. in-house IT arrangements) are sometimes better.
So, when are firms better?
Williamson’s work provides an answer to this question. He argues that the cost of completing an economic transaction in an open market:
- Increases with the complexity of the transaction (implementing an ERP system is more complex than implementing a new email system).
- Increases if it involves assets that are worth more within a relationship between two parties than outside of it: for example, custom IT services, tailored to the requirements of a specific company, have more value to the two parties – provider and client – than to anyone else. This is called asset specificity in economic theory.
These features make it difficult if not impossible to write and enforce contracts that take every eventuality into account. To quote from Williamson (2002):
…. all complex contracts are unavoidably incomplete, on which account the parties will be confronted with the need to adapt to unanticipated disturbances that arise by reason of gaps, errors, and omissions in the original contract….
Why are complex contracts necessarily incomplete?
Well, there are at least a couple of reasons:
- Bounds on human rationality: basically, no one can foresee everything, so contracts inevitably omit important eventualities.
- Strategic behaviour: This refers to opportunistic behaviour to gain advantage over the other party. This might be manifested as a refusal to cooperate or a request to renegotiate the contract.
Contracts will therefore work only if interpreted in a farsighted manner, with disputes being settled directly between the vendor and client. As Williamson states in this paper:
…important to the transaction-cost economics enterprise is the assumption that contracts, albeit incomplete, are interpreted in a farsighted manner, according to which economic actors look ahead, perceive potential hazards and embed transactions in governance structures that have hazard-mitigating purpose and effect. Also, most of the governance action works through private ordering with courts being reserved for purposes of ultimate appeal.
At some point this becomes too hard to do. In such situations it makes sense to carry out the transaction within a single legal entity (i.e. within a firm) rather than on the open market. This shouldn’t be surprising: it is obvious that complex transactions will be simplified if they take place within a single governance structure.
The above has implications for both clients and providers in outsourcing arrangements. From the client perspective, when contracts for IT services are hard to draw up and enforce, it may be better to have those services provided by in-house departments rather than external vendors. On the other hand, vendors need to focus on keeping contracts as unambiguous and transparent as possible. Finally, both clients and vendors should expect ambiguities and omissions in contracts, and be flexible whenever there are disagreements over the interpretation of contract terms.
The key takeaway is easy to summarise: be sure to consider transaction costs when you are making a decision on whether or not to outsource development work.
On the limitations of scoring methods for risk analysis
Introduction
A couple of months ago I wrote an article highlighting some of the pitfalls of using risk matrices. Risk matrices are an example of scoring methods – techniques that use ordinal scales to assess risks. In these methods, risks are ranked by some predefined criteria such as impact or expected loss, and the ranking is then used as the basis for decisions on how the risks should be addressed. Scoring methods are popular because they are easy to use. However, as Douglas Hubbard points out in his critique of current risk management practices, many commonly used scoring techniques are flawed. This post – based on Hubbard’s critique and research papers quoted therein – is a brief look at some of the flaws of risk scoring techniques.
Commonly used risk scoring techniques and problems associated with them
Scoring techniques fall under two major categories:
- Weighted scores: These use several ordered scales which are weighted according to perceived importance. For example: one might be asked to rate financial risk, technical risk and organisational risk on a scale of 1 to 5 for each, and then weight them by factors of 0.6, 0.3 and 0.1 respectively (possibly because the CFO – who happens to be the project sponsor – is more concerned about financial risk than any other risks). The point is, the scores and weights assigned can be highly subjective – more on that below.
- Risk matrices: These rank risks along two dimensions – probability and impact – and assign them a qualitative rating of high, medium or low depending on where they fall. Cox’s work shows that such categorisations can be internally inconsistent because the category boundaries are arbitrarily chosen.
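As a minimal sketch of the two techniques (the scales, weights and thresholds below are all invented, not drawn from any standard):

```python
# Sketches of the two scoring techniques discussed above.
# Scales, weights and category thresholds are invented for illustration.

def weighted_score(financial, technical, organisational):
    """Each risk rated on a 1-5 ordinal scale; weights reflect perceived importance."""
    return 0.6 * financial + 0.3 * technical + 0.1 * organisational

def matrix_rating(probability, impact):
    """Both inputs on a 1-5 ordinal scale; the category boundaries are arbitrary."""
    score = probability * impact  # ranges from 1 to 25
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

print(weighted_score(4, 2, 3))  # a single number, masking three subjective ratings
print(matrix_rating(3, 5))      # prints: high
```

Both functions look reassuringly quantitative, but every number in them – the ratings, the weights and the thresholds – is a subjective choice, which is exactly the concern raised below.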
Hubbard makes the point that, although both the above methods are endorsed by many standards and methodologies (including those used in project management), they should be used with caution because they are flawed. To quote from his book:
Together these ordinal/scoring methods are the benchmark for the analysis of risks and/or decisions in at least some component of most large organizations. Thousands of people have been certified in methods based in part on computing risk scores like this. The major management consulting firms have influenced virtually all of these standards. Since what these standards all have in common is the use of various scoring schemes instead of actual quantitative risk analysis methods, I will call them collectively the “scoring methods.” And all of them, without exception, are borderline or worthless. In practice, they may make many decisions far worse than they would have been using merely unaided judgements.
What is the basis for this claim? Hubbard points to the following:
- Scoring methods do not make any allowance for flawed perceptions of analysts who assign scores – i.e. they do not consider the effect of cognitive bias. I won’t dwell on this as I have previously written about the effect of cognitive biases in project risk management – see this post and this one, for example.
- Qualitative descriptions assigned to each score are understood differently by different people. Further, there is rarely any objective guidance as to how an analyst is to distinguish between a high and a medium risk. Such advice may not even help: research by Budescu, Broomell and Po shows that there can be huge variances in understanding of qualitative descriptions, even when people are given specific guidelines as to what the descriptions or terms mean.
- Scoring methods add their own errors. Below are brief descriptions of some of these:
- In his paper on the risk matrix theorem, Cox mentions that “Typical risk matrices can correctly and unambiguously compare only a small fraction (e.g., less than 10%) of randomly selected pairs of hazards. They can assign identical ratings to quantitatively very different risks.” He calls this behaviour “range compression” – and it applies to any scoring technique that uses ranges.
- Assigned scores tend to cluster around the middle of the range. Analysis by Hubbard shows that, on a 5 point scale, 75% of all responses are 3 or 4. This implies that changing a score from 3 to 4 or vice-versa can have a disproportionate effect on the classification of risks.
- Scores implicitly assume that the magnitude of the quantity being measured is directly proportional to the score. For example, a score of 2 implies that the criterion being measured is twice as large as it would be for a score of 1. However, in reality, criteria are rarely linear as implied by such a scale.
- Scoring techniques often presume that the factors being scored are independent of each other – i.e. there are no correlations between factors. This assumption is rarely tested or justified in any way.
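The range compression problem mentioned above is easy to demonstrate: once continuous quantities are binned into ordinal scores, two quantitatively very different risks can land in the same matrix cell. The bin edges and figures below are invented, as they are (arbitrarily) in most real matrices:

```python
# Range compression: quantitatively very different risks receive identical
# ratings once continuous quantities are binned into an ordinal scale.
# The bin edges are arbitrary, which is typical of real risk matrices.

def to_ordinal(value, edges):
    """Map a continuous value to an ordinal score from 1 to len(edges) + 1."""
    score = 1
    for edge in edges:
        if value >= edge:
            score += 1
    return score

prob_edges = [0.05, 0.2, 0.5, 0.8]  # probability bins
loss_edges = [1e4, 1e5, 1e6, 1e7]   # monetary-loss bins

def cell(probability, loss):
    """The (probability score, impact score) cell a risk falls into."""
    return (to_ordinal(probability, prob_edges), to_ordinal(loss, loss_edges))

# Expected losses differ by a factor of roughly 20, yet the matrix
# assigns both risks to exactly the same cell.
risk_a = cell(0.21, 110_000)   # expected loss ~ 23,100
risk_b = cell(0.49, 950_000)   # expected loss ~ 465,500
print(risk_a, risk_b, risk_a == risk_b)  # prints: (3, 3) (3, 3) True
```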
Many project management standards advocate the use of scoring techniques. To be fair, in many situations they are adequate, as long as they are used with an understanding of their limitations. Seen in this light, Hubbard’s book is an admonition to standards and textbook writers to be more critical of the methods they advocate, and a warning to practitioners that an uncritical adherence to standards and best practices is not the best way to manage project risks.
Scoring done right
Just to be clear, Hubbard’s criticism is directed against scoring methods that use arbitrary, qualitative scales which are not justified by independent analysis. There are other techniques which, though superficially similar to these flawed scoring methods, are actually quite robust because they:
- Are based on observations.
- Use real measures (as opposed to arbitrary ones – such as “alignment with business objectives” on a scale of 1 to 5, without defining what “alignment” means).
- Are validated after the fact (and hence refined with use).
As an example of a sound scoring technique, Hubbard quotes this paper by Dawes, which presents evidence that linear scoring models are superior to intuition in clinical judgements. Strangely, although the weights themselves can be obtained through intuition, the scoring model outperforms clinical intuition. This happens because human intuition is good at identifying important factors, but not so hot at evaluating the net effect of several, possibly competing factors. Hence simple linear scoring models can outperform intuition. The key here is that the models are validated by checking the predictions against reality.
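A minimal sketch of a Dawes-style linear model may help here. The risk factors, weights and historical data below are invented; the weights may well come from intuition, but the crucial step – the one that separates this from the flawed methods above – is validating predictions against actual outcomes:

```python
# Sketch of a Dawes-style linear scoring model. Weights may be obtained
# through intuition, but the model is validated against observed outcomes.
# All factors, weights and historical data are invented for illustration.

weights = {"schedule_slippage": 0.5, "team_turnover": 0.3, "scope_changes": 0.2}

def predict(features):
    """Linear score: a weighted sum of the observed risk factors (each in 0-1)."""
    return sum(weights[k] * v for k, v in features.items())

# Past projects: observed factor values and actual outcome
# (1 = the project was troubled, 0 = it was not).
history = [
    ({"schedule_slippage": 0.9, "team_turnover": 0.7, "scope_changes": 0.8}, 1),
    ({"schedule_slippage": 0.2, "team_turnover": 0.1, "scope_changes": 0.3}, 0),
    ({"schedule_slippage": 0.8, "team_turnover": 0.6, "scope_changes": 0.2}, 1),
    ({"schedule_slippage": 0.3, "team_turnover": 0.2, "scope_changes": 0.1}, 0),
]

# The validation step: does the model separate troubled projects from the rest?
threshold = 0.5
hits = sum((predict(f) >= threshold) == bool(outcome) for f, outcome in history)
accuracy = hits / len(history)
print(f"accuracy on past projects: {accuracy:.0%}")  # prints: accuracy on past projects: 100%
```

The model is deliberately crude; the point is that its predictions are checked against reality and the weights can be refined when they miss, which is what the arbitrary scales criticised above never do.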
Another class of techniques uses axioms based on logic to reduce inconsistencies in decisions. An example of such a technique is multi-attribute utility theory. Since they are based on logic, these methods can also be considered to have a solid foundation, unlike those discussed in the previous section.
Conclusions
Many commonly used scoring methods in risk analysis are based on flaky theoretical foundations – or worse, none at all. To compound the problem, they are often used without any validation. A particularly ubiquitous example is the well-known and loved risk matrix. In his paper on risk matrices, Tony Cox shows how risk matrices can sometimes lead to decisions that are worse than those made on the basis of a coin toss. The fact that this is a possibility – even if only a small one – should worry anyone who uses risk matrices (or other flawed scoring techniques) without an understanding of their limitations.

