Eight to Late

Sensemaking and Analytics for Organizations

Archive for the ‘Risk analysis’ Category

Reasons and rationales for not managing risks on IT projects – a paper review


Introduction

Anticipating and dealing with risks is an important part of managing projects. So much so that most frameworks and methodologies devote a fair bit of attention to risk management: for example, the PMI framework considers risk management to be one of the nine “knowledge areas” of project management. Now, frameworks and methodologies are normative – that is, they tell us how risks should be managed – but they say nothing about how risks are actually handled on projects. It is perhaps too much to expect that all projects are run with the full machinery of formal risk management, but it is reasonable to expect that most project managers deal with risks in some more or less systematic way. However, project management lore is rife with stories of projects on which risks were managed inadequately, or not managed at all (see this post for some pertinent case studies). This raises the question: are there rational reasons for not managing risks on projects? A paper by Elmar Kutsch and Mark Hall entitled The Rational Choice of Not Applying Project Risk Management in Information Technology Projects addresses this question. This post is a summary and review of the paper.

Background

The paper begins with a brief overview of risk management as prescribed by various standards. Risk management is about making decisions in the face of uncertainty. To make the right decisions, project managers need to figure out which risks are the most significant. Consequently, most methodologies offer techniques to rank risks based on various criteria.  These techniques are based on many (rather strong) assumptions, which the authors summarise as follows:

  1. An unambiguous identification of the problem (or risk), including its cause.
  2. Perfect information about all relevant variables that affect the risk.
  3. A model of the risk that incorporates the aforementioned variables.
  4. A complete list of possible approaches to tackle the risks.
  5. An unambiguous, quantitative and internally consistent measure for the outcomes of each approach.
  6. Perfect knowledge of the consequences of each approach.
  7. Availability of resources for the successful implementation of the chosen solution.
  8. The presence of rational decision-makers (i.e. decision-makers free from cognitive biases).

Most formal methodologies assume the above to be “self-evidently correct” (note that some of them aren’t correct; see my posts on cognitive biases as project meta-risks and the limitations of scoring methods in risk analysis for more). Anyway, regardless of the validity of the assumptions, it is clear that satisfying all of them would require a great deal of commitment, effort and money. This, according to the authors, provides a hint as to why many projects are run without formal risk management. In their words:

…despite the existence of a self-evidently correct process to manage  project risk, some evidence suggests that project managers feel restricted in applying such an “optimal” process to manage risks. For example, Lyons and Skitmore (2004) investigated factors limiting the implementation of risk management in Australian construction projects. Similar findings about the barriers of using risk management in three Hong Kong industries were found in a further prominent study by Tummala, Leung, Burchett, and Leung (1997). The most dominant factors for constraining the use of project risk management are the lack of time, the problem of justifying the effort into project risk management, and the lack of information required to quantify/qualify risk estimates.

The authors review the research literature to find other factors that could reduce the likelihood of risk management being applied in projects. Based on their findings, they suggest the following as the justifications (or rationales) that project managers often offer for not managing risks:

  1. The problem of hindsight: Most risk management methodologies rely on historical data to calculate probabilities of risk eventuation. However, many managers feel they cannot rely on such data for their specific (unique) project.
  2. The problem of ownership: Risks are often thought of as “someone else’s problem”. There is often a reluctance to take ownership of a risk because of the fear of blame in case the risk response fails to address the risk.
  3. The problem of cost justification: From the premises listed above it is clear that proper risk management is a time-consuming, effort-laden and expensive process. Many IT projects are run on tight budgets, and risk management is an area that’s perceived as being an unnecessary expense.
  4. Lack of expertise: Project managers might be unaware of risk management techniques. I find this hard to believe, given that practically all textbooks and methodologies yammer on, at great length, about the importance of managing risks. Besides, it is a pretty weak justification!
  5. The problem of anxiety:  By definition, risk management implies that one is considering things that can go wrong.  Sometimes, when informed about risks, stakeholders may decide not to go ahead with a project. Consequently, project managers may limit their risk identification efforts in an attempt to avoid making stakeholders nervous.

When justifying the decision not to manage risks, the above factors are often presented as barriers or problems which prevent the project manager from using risk management. As an illustration of (5) above, a project manager might say, “I can’t talk about risks on my project because the sponsor will freak out and throw me out of his office.”

Research Method

The authors started with an exploratory study aimed at developing an understanding of the problem from the perspective of IT project managers – i.e. how project managers actually experience the application of risk management on their projects. This study was done through face-to-face interviews. Based on patterns that emerged from this study, the authors developed a web-based survey that was administered to a wider group of project managers. The exploratory phase involved eighteen project managers, whereas the in-depth survey was completed by just over a hundred project managers, all of whom were members of the PMI Risk Management Special Interest Group. Although the paper doesn’t say so, I assume that project managers were asked questions in reference to a specific project they were involved in (perhaps the most recent one?).

I won’t dwell any more on the research methodology;  the paper has all the  details.

Results and interpretation

Four of the eighteen project managers interviewed in the exploratory study did not apply risk management processes on their projects. The reasons given were “interpreted” by the authors as cost justification, hindsight and anxiety. I’ve put the word “interpreted” in quotes because I believe the responses given by the project managers could just as easily be interpreted another way. I’ve presented their arguments below so that readers can judge for themselves.

One interviewee mentioned that, “At the beginning, we had so much to do that no one gave a thought to tackling risks. It  simply did not happen.” The authors conclude that the rationale for not managing risks in this case is one of cost justification, the chain of logic being that due to the lack of time, investment of resources in managing risks was not justified. To me this seems to read too much into the response. From the response it appears to me that the real reason is exactly what the interviewee states –  “no one thought of managing risks” – i.e. risks were  overlooked.

Another interviewee stated, “It would have been nice to do it differently, but because we were quite vulnerable in terms of software development, and because most of that was driven by the States, we were never in a position to be proactive. The Americans would say “We got an update to that system and we just released it to you,” rather than telling us a week in advance that something was happening. We were never ahead enough to be able to plan.” The authors interpret the lack of risk management in this case as being due to the problem of hindsight – i.e. because the risk that an update poses to other parts of the system could not have been anticipated, no risk management was possible. To me this interpretation seems a little thin – surely, most project managers understand the risks that arbitrary updates pose. From the response it appears that the real reason was that the project manager was not able to plan ahead because he/she had no advance warning of updates. This seems more a problem of a broken project management process than anything to do with risk management or hindsight. My point: the uncertainty here was known (high probability of regular updates), so something could (and should) have been done about it whilst planning the project.

I’ve dwelt on these examples because it appears that the authors may have occasionally fallen into the trap of pigeonholing interviewee responses into their predefined rationales (the ones discussed in the previous section) instead of listening to what was actually being said. Of course, my impression is based on a reading of the paper and the data presented therein. The authors may well have other (unpublished) information to support their classification of interviewee responses. However, if that is the case, they should have presented the data in the paper, because the reliability of the second survey depends on the set of predefined rationales being comprehensive and correct.

The authors present a short discussion of the second phase of their study. They find that no formal risk management processes were used in about one third of the 102 cases studied. As the authors point out, that in itself is an interesting statistic, especially considering the money at stake in typical IT projects. In cases where no risk management was applied, respondents were asked to provide reasons why this was so. The reasons given were extremely varied but, once again, the authors pigeon-holed these into their predefined categories. I present some of the original responses and interpretations below so that readers can judge for themselves.

Consider the following reasons that were offered (by respondents) for not applying risk management:

  1. “We haven’t got time left.”
  2. “No executive call for risk measurements.”
  3. “Company doesn’t see the value in adding the additional cycles to a project.” (?)
  4. “Upper management did not think it required it.”
  5. “Ignorance that such a thing was necessary.”
  6. “An initial risk analysis was done, but the PM did not bother to follow up.”
  7. “A single risk identification workshop was held early in the project before my arrival. Reason for not following the process was most probably the attitude of the members of the team.”

Interestingly, the authors interpret all the above responses (and a few more) as being attributable to the cost justification rationale. However, it seems to me that there could be several other (more likely) interpretations. For example: 2, 3, 4 and 5 could be attributed to a lack of knowledge about the value of managing risks, whereas 1, 6 and 7 sound more like simple (and unfortunately, rather common!) buck-passing.

Conclusion

Towards the end of the paper the authors make an excellent point about the rationality of a decision not to apply risk management. From the perspective of formal methodologies such a decision is irrational. However, rationality (or the lack of it) isn’t so cut and dried. Here’s what the authors say:

…a decision by an IT project manager not to apply project risk management may be described as irrational, at least if one accepts the premise that the project manager chose not to apply a “self-evidently” correct process to optimally reduce the impact of risk on the project outcome. On the other hand, … a person who focuses only on the statistical probability of threats and their impacts and ignores any other information would be truly irrational. Hence, a project manager would act sensibly by, for example, not applying project risk management because he or she rates the utility of not using project risk management as higher than the utility of confronting stakeholders with discomforting information….”

…or spending money to address issues that may not eventuate, for that matter. The point being  that people don’t make decisions based on prescribed processes and procedures alone; there are other considerations.

The authors then go on to say,

PMI and APM claim that through the systematic identification, analysis, and response to risk, project managers can achieve the planned project outcome. However, the findings show that in more than one-third of all projects, the effectiveness of project risk management is virtually nonexistent because no formal project risk management process was applied due to the problem of cost justification.

Now, although it is undeniable that many projects are run with no risk management whatsoever, I’m not sure I agree with the last statement in the quote. From the data presented in the paper, it seems more likely that a lack of knowledge and “buck-passing” are the prime reasons for risk management being given short shrift on the projects surveyed. Even if cost justification was offered as a rationale by some interviewees, their quotes suggest that the real reasons were quite different. This isn’t surprising: it is but natural to attribute to unacceptable costs that which should be attributed to oversight or failure. I think this may be the case in a large number of projects on which risks aren’t managed. However, as the authors mention, it is impossible to make any generalisations based on small samples. So, although it is incontrovertible that there are a significant number of projects on which risks aren’t managed, why this is so remains an open question.

Written by K

November 25, 2009 at 11:07 pm

On the limitations of scoring methods for risk analysis


Introduction

A couple of months ago I wrote an article highlighting some of the pitfalls of using risk matrices. Risk matrices are an example of scoring methods, techniques which use ordinal scales to assess risks. In these methods, risks are ranked by some predefined criteria such as impact or expected loss, and the ranking is then used as the basis for decisions on how the risks should be addressed. Scoring methods are popular because they are easy to use. However, as Douglas Hubbard points out in his critique of current risk management practices, many commonly used scoring techniques are flawed. This post – based on Hubbard’s critique and research papers quoted therein – is a brief look at some of the flaws of risk scoring techniques.

Commonly used risk scoring techniques and problems associated with them

Scoring techniques fall under two major categories:

  1. Weighted scores: These use several ordered scales which are weighted according to perceived importance. For example: one might be asked to rate financial risk, technical risk and organisational risk on a scale of 1 to 5 for each, and then weight them by factors of 0.6, 0.3 and 0.1 respectively (possibly because the CFO – who happens to be the project sponsor – is more concerned about financial risk than any other risks). The point is, the scores and weights assigned can be highly subjective – more on that below. A minimal sketch of such a calculation appears just after this list.
  2. Risk matrices: These rank risks along two dimensions – probability and impact – and assign them a qualitative ranking of high, medium or low depending on where they fall.  Cox’s theorem shows such categorisations are internally inconsistent because the category boundaries are arbitrarily chosen.
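To make the mechanics of the weighted-score approach concrete, here is a minimal sketch in Python. The risks, criteria, scores and weights are entirely hypothetical (the 0.6/0.3/0.1 split is the one mentioned in the example above); the point is simply to show how a single ranking number is manufactured from subjectively chosen inputs.

```python
# Illustrative weighted-score calculation. All scores and weights below are
# hypothetical - and note that the calculation treats an ordinal 1-5 scale as
# if it were a ratio scale, one of the flaws discussed later in this post.

criteria_weights = {
    "financial": 0.6,       # weighted highest because the sponsor is the CFO
    "technical": 0.3,
    "organisational": 0.1,
}

# Ordinal scores (1 = negligible, 5 = severe) assigned by an analyst.
risk_scores = {
    "Risk A - vendor insolvency": {"financial": 4, "technical": 2, "organisational": 3},
    "Risk B - key developer leaves": {"financial": 2, "technical": 5, "organisational": 4},
}

def weighted_score(scores, weights):
    """Return the weighted sum of the ordinal scores for a single risk."""
    return sum(weights[criterion] * score for criterion, score in scores.items())

# Rank the risks by their weighted scores, highest first.
for risk, scores in sorted(risk_scores.items(),
                           key=lambda item: weighted_score(item[1], criteria_weights),
                           reverse=True):
    print(f"{risk}: {weighted_score(scores, criteria_weights):.1f}")
```

Note how changing a single subjective score by one point (say, Risk A’s financial score from 4 to 3) is enough to reorder the ranking – which is why the subjectivity of the inputs matters so much.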

Hubbard makes the point that, although both the above methods are endorsed by many standards and methodologies (including those used in project management), they should be used with caution because they are flawed. To quote from his book:

Together these ordinal/scoring methods are the benchmark for the analysis of risks and/or decisions in at least some component of most large organizations. Thousands of people have been certified in methods based in part on computing risk scores like this. The major management consulting firms have influenced virtually all of these standards. Since what these standards all have in common is the use of various scoring schemes instead of actual quantitative risk analysis methods, I will call them collectively the “scoring methods.” And all of them, without exception, are borderline or worthless. In practice, they may make many decisions far worse than they would have been using merely unaided judgements.

What is the basis for this claim? Hubbard points to the following:

  1. Scoring methods do not make any allowance for the flawed perceptions of the analysts who assign scores – i.e. they do not consider the effect of cognitive bias. I won’t dwell on this as I have previously written about the effect of cognitive biases in project risk management – see this post and this one, for example.
  2. Qualitative descriptions assigned to each score are understood differently by different people. Further, there is rarely any objective guidance as to how an analyst is to distinguish between a high and a medium risk. Such advice may not even help: research by Budescu, Broomell and Por shows that there can be huge variances in how qualitative descriptions are understood, even when people are given specific guidelines about what the descriptions or terms mean.
  3. Scoring methods add their own errors.  Below are brief descriptions of some of these:
    1. In his paper on the risk matrix theorem, Cox mentions that “Typical risk matrices can correctly and unambiguously compare only a small fraction (e.g., less than 10%) of randomly selected pairs of hazards. They can assign identical ratings to quantitatively very different risks.” He calls this behaviour “range compression” – and it applies to any scoring technique that uses ranges. A toy numerical illustration appears just after this list.
    2. Assigned scores tend to cluster in the middle-to-high range. Analysis by Hubbard shows that, on a 5 point scale, 75% of all responses are 3 or 4. This implies that changing a score from 3 to 4 or vice-versa can have a disproportionate effect on the classification of risks.
    3. Scores implicitly assume that the magnitude of the quantity being assessed is directly proportional to the scale. For example, a score of 2 implies that the criterion being measured is twice as large as it would be for a score of 1. However, in reality, criteria rarely vary linearly as implied by such a scale.
    4. Scoring techniques often presume that the factors being scored are independent of each other – i.e. there are no correlations between factors. This assumption  is rarely tested or justified in any way.
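To illustrate the range compression problem mentioned in the first point above, here is a toy sketch in Python. The probability and impact bucket boundaries, and the two risks themselves, are invented for illustration – they are not taken from Cox’s paper – but they show how risks whose expected losses differ by a factor of almost thirty can receive identical ratings.

```python
# Toy illustration of "range compression": two quantitatively very different
# risks receive the same qualitative rating. The bucket boundaries below are
# invented for illustration.

def bucket(value, boundaries, labels):
    """Map a numeric value to an ordinal label using the given upper bounds."""
    for upper_bound, label in zip(boundaries, labels):
        if value <= upper_bound:
            return label
    return labels[-1]

def matrix_rating(probability, impact_dollars):
    """Return the (probability, impact) cell a risk falls into."""
    prob_label = bucket(probability, [0.1, 0.4], ["low", "medium", "high"])
    impact_label = bucket(impact_dollars, [50_000, 500_000], ["low", "medium", "high"])
    return prob_label, impact_label

# Two hypothetical risks: expected losses of ~$6,600 and ~$191,100 respectively,
# yet both land in the same (medium, medium) cell of the matrix.
risks = {
    "Risk A": (0.11, 60_000),
    "Risk B": (0.39, 490_000),
}

for name, (probability, impact) in risks.items():
    print(name, matrix_rating(probability, impact),
          f"expected loss: ${probability * impact:,.0f}")
```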

Many project management standards advocate the use of scoring techniques. To be fair, in many situations they are adequate as long as they are used with an understanding of their limitations. Seen in this light, Hubbard’s book is an admonition to standards and textbook writers to be more critical of the methods they advocate, and a warning to practitioners that an uncritical adherence to standards and best practices is not the best way to manage project risks.

Scoring done right

Just to be clear, Hubbard’s criticism is directed against scoring methods that use arbitrary, qualitative scales which are not justified by independent analysis. There are other techniques which, though superficially similar to these flawed scoring methods, are actually quite robust because they:

  1. Are based on observations.
  2. Use real measures (as opposed to arbitrary ones, such as “alignment with business objectives” rated on a scale of 1 to 5 without defining what “alignment” means).
  3. Are validated after the fact (and hence refined with use).

As an example  of a sound scoring technique, Hubbard quotes this paper by Dawes, which presents evidence that linear scoring models are superior to intuition in clinical judgements. Strangely, although the weights themselves can be obtained through intuition, the scoring model outperforms clinical intuition. This happens because human intuition is good at identifying important factors, but not so hot at evaluating the net effect of several, possibly competing factors. Hence simple linear scoring models can outperform intuition. The key here is that the models are validated by checking the predictions against reality.
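Here is a bare-bones sketch, in Python with synthetic data, of what a Dawes-style linear scoring model looks like. Everything in it is made up for illustration; the feature worth noting is the final step, where the simple unit-weighted score is validated against observed outcomes rather than taken on faith.

```python
# A minimal sketch of an "improper" linear scoring model in the spirit of
# Dawes: standardise the predictors, give them equal weights, and then
# validate the resulting score against observed outcomes. Data is synthetic.
import numpy as np

rng = np.random.default_rng(42)
n_cases = 200

# Three hypothetical predictors an assessor might rate, plus an outcome
# that depends on them only noisily.
predictors = rng.normal(size=(n_cases, 3))
outcome = predictors @ np.array([0.5, 0.3, 0.2]) + rng.normal(scale=0.8, size=n_cases)

# Unit-weighted model: standardise each predictor and simply add them up.
z_scores = (predictors - predictors.mean(axis=0)) / predictors.std(axis=0)
unit_weighted_score = z_scores.sum(axis=1)

# Validation step: how well does the simple score track the outcome?
correlation = np.corrcoef(unit_weighted_score, outcome)[0, 1]
print(f"correlation between unit-weighted score and outcome: {correlation:.2f}")
```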

Another class of techniques uses axioms based on logic to reduce inconsistencies in decisions. An example of such a technique is multi-attribute utility theory. Since they are based on logic, these methods can also be considered to have a solid foundation, unlike those discussed in the previous section.
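For contrast with the arbitrary scales criticised earlier, below is a bare-bones sketch of an additive multi-attribute utility model. The attributes, utility functions and weights are hypothetical, and the additive form itself rests on independence assumptions that a real application would need to check.

```python
# A minimal additive multi-attribute utility model. Unlike ad hoc weighted
# scores, each attribute is a real measurement mapped onto a 0-1 utility
# scale by an explicit (here, linear) utility function. All numbers are
# hypothetical.

def utility_of_cost(cost_dollars):
    """Lower cost is better: map $0 to utility 1 and $1M (or more) to 0."""
    return max(0.0, 1.0 - cost_dollars / 1_000_000)

def utility_of_delay(delay_weeks):
    """Shorter delay is better: map 0 weeks to utility 1 and 26 weeks to 0."""
    return max(0.0, 1.0 - delay_weeks / 26)

weights = {"cost": 0.7, "delay": 0.3}   # elicited trade-off weights, sum to 1

def overall_utility(cost_dollars, delay_weeks):
    """Additive utility - valid only if the attributes are preferentially independent."""
    return (weights["cost"] * utility_of_cost(cost_dollars)
            + weights["delay"] * utility_of_delay(delay_weeks))

print(f"Option A (cheaper but slower): {overall_utility(200_000, 10):.2f}")
print(f"Option B (quicker but dearer): {overall_utility(600_000, 1):.2f}")
```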

Conclusions

Many commonly used scoring methods in risk analysis are based on flaky theoretical foundations – or worse, none at all. To compound the problem, they are often used without any validation.  A particularly ubiquitous example is the well-known and loved risk matrix.  In his paper on risk matrices,  Tony Cox  shows how risk matrices can sometimes lead to decisions that are worse than those made on the basis of a coin toss.   The fact that this is a possibility – even if only a  small one – should worry anyone who uses risk matrices  (or other flawed scoring techniques) without an understanding of their limitations.

Written by K

October 6, 2009 at 8:27 pm

Cognitive biases as project meta-risks – part 2


Introduction

Risk management is fundamentally about making decisions in the face of uncertainty. These decisions  are based on perceptions of future events,  supplemented by analyses of data relating to those events.  As such, these decisions  are subject to cognitive biases –  human tendencies to base judgements on flawed perceptions of events and/or data. In an earlier post,  I argued that cognitive biases are meta-risks,  i.e.  risks of  risk analysis.   An awareness of how these biases operate can pave the way towards reducing their effects on risk-related decisions. In this post I therefore look into the nature of cognitive biases. In particular:

  1. The role of intuition and rational thought in the expression of cognitive biases.
  2. The psychological process of attribute substitution, which underlies judgement-related cognitive biases.

I  then take a brief look at ways in which the effect of  bias in decision-making can be reduced.

The role of intuition and rational thought in the expression of cognitive biases

Research in psychology has established that human cognition works through two distinct processes: System 1, which corresponds to intuitive thought, and System 2, which corresponds to rational thought. In his Nobel Prize lecture, Daniel Kahneman had this to say about the two systems:

The operations of System 1 are fast, automatic, effortless, associative, and often emotionally charged; they are also governed by habit, and are therefore difficult to control or modify. The operations of System 2 are slower, serial, effortful, and deliberately controlled; they are also relatively flexible and potentially rule-governed.

The surprise is that judgements always involve System 2 processes. In Kahneman’s words:

 …the perceptual system and the intuitive operations of System 1 generate impressions of the attributes of objects of perception and thought. These impressions are not voluntary and need not be verbally explicit. In contrast, judgments are always explicit and intentional, whether or not they are overtly expressed. Thus, System 2 is involved in all judgments, whether they originate in impressions or in deliberate reasoning.

So, all judgements, whether intuitive or rational, are monitored by System 2. Kahneman suggests that this monitoring can be very cursory, thus allowing System 1 impressions to be expressed directly, whether they are right or not. Seen in this light, cognitive biases are unedited (or at best lightly edited) expressions of often incorrect impressions.

Attribute substitution: a common mechanism for judgement-related biases

In a paper entitled Representativeness Revisited, Kahneman and Frederick suggest that the psychological process of attribute substitution is the mechanism that underlies many cognitive biases. Attribute substitution is the tendency of people to answer a difficult decision-making question by interpreting it as a simpler (but related) one. In their paper, Kahneman and Frederick describe attribute substitution as occurring when:

 …an individual assesses a specified target attribute of a judgment object by substituting a related heuristic attribute that comes more readily to mind…

An example might help decode this somewhat academic description.  I pick one from Kahneman’s Edge master class where he related the following:

 When I was living in Canada, we asked people how much money they would be willing to pay to clean lakes from acid rain in the Halliburton region of Ontario, which is a small region of Ontario. We asked other people how much they would be willing to pay to clean lakes in all of Ontario.

People are willing to pay the same amount for the two quantities because they are paying to participate in the activity of cleaning a lake, or of cleaning lakes. How many lakes there are to clean is not their problem. This is a mechanism I think people should be familiar with. The idea that when you’re asked a question, you don’t answer that question, you answer another question that comes more readily to mind. That question is typically simpler; it’s associated, it’s not random; and then you map the answer to that other question onto whatever scale there is—it could be a scale of centimeters, or it could be a scale of pain, or it could be a scale of dollars, but you can recognize what is going on by looking at the variation in these variables. I could give you a lot of examples because one of the major tricks of the trade is understanding this attribute substitution business. How people answer questions.

Attribute substitution boils down to making judgements based on specific, known instances of events or issues under consideration. For example,  people often overrate their own abilities because they base their self-assessments on specific instances where they did well, ignoring situations in which their performance was below par.  Taking another example from the Edge class,

 COMMENT: So for example in the Save the Children—types of programs, they focus you on the individual.

KAHNEMAN: Absolutely. There is even research showing that when you show pictures of ten children, it is less effective than when you show the picture of a single child. When you describe their stories, the single instance is more emotional than the several instances and it translates into the size of contributions.  People are almost completely insensitive to amount in system one. Once you involve system two and systematic thinking, then they’ll act differently. But emotionally we are geared to respond to images and to instances…

Kahneman sums it up in a line in his Nobel lecture: The essence of attribute substitution is that respondents offer a reasonable answer to a question that they have not been asked.

Several decision-making biases in risk analysis operate via attribute substitution – some of these include availability, representativeness, overconfidence and selective perception (see this post for specific examples drawn from high-profile failed projects). Armed with this understanding of how these meta-risks operate, let’s look at how their effect can be minimised.

System two to the rescue, but…

The discussion of the previous section suggests that people often base judgements on specific instances that come to mind, ignoring the range of all possible instances. They do this because specific instances – usually concrete instances that have been experienced – come to mind more easily than the abstract “universe of possibilities.”

Those who make erroneous judgements will correct them only if they become aware of factors that they did not take into account when making the judgement, or when they realise that their conclusions are not logical. This can only happen through deliberation:   rational analysis,  which is possible only through a deliberate invocation of System 2 thinking.

Some of the ways in which System 2 can be helped along are:

  1. By reframing the question or issue in terms that force analysts to consider the range of possible instances rather than specific instances. A common manifestation of the latter is when risk managers base their plans on the assumption that average conditions will occur – an assumption that Professor Sam Savage calls the flaw of averages (see Dr. Savage’s very entertaining and informative book for more on the flaw of averages and related statistical fallacies). A small simulation illustrating the flaw of averages appears below.
  2. By requiring analysts to come up with pros and cons for any decision they make. This forces them to consider possibilities they may not have taken into account when making the original decision.
  3. By basing decisions on relevant empirical or historical data instead of relying on intuitive impressions.
  4. By making analysts aware of their propensity to be overconfident (or under-confident) by evaluating their probability calibration. One way to do this is by asking them to answer a series of trivia questions with confidence estimates for each of their answers (i.e. their self-estimated probability of being right). Their confidence estimates are then compared to the fraction of questions correctly answered. A well-calibrated individual’s confidence estimates should be close to the percentage of correct answers; a minimal sketch of such a check appears below. There is some evidence to suggest that analysts can be trained to improve their calibration through cycles of testing and feedback. Calibration training is discussed in Douglas Hubbard’s book, The Failure of Risk Management. However, as discussed here, improved calibration through feedback and repeated tests may not carry over to judgements in real-life situations.
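As a small illustration of the flaw of averages mentioned in item 1, the Python sketch below compares a plan based on average task durations with a simulation over the full range of durations. The three-task project and the uniform duration distribution are invented purely for illustration.

```python
# Toy illustration of the "flaw of averages": when tasks run in parallel,
# planning on average durations understates the expected finish time.
# The project structure and distributions are invented for illustration.
import random

random.seed(1)
N_TRIALS = 100_000
AVERAGE_DURATION = 10.0   # each of three parallel tasks averages 10 days

def simulated_finish_time():
    """The project finishes when the slowest of three parallel tasks finishes."""
    durations = [random.uniform(5, 15) for _ in range(3)]   # mean of 10 days each
    return max(durations)

plan_on_averages = AVERAGE_DURATION   # naive plan: max(10, 10, 10) = 10 days
expected_finish = sum(simulated_finish_time() for _ in range(N_TRIALS)) / N_TRIALS

print(f"finish time assuming average durations: {plan_on_averages:.1f} days")
print(f"expected finish time from simulation:   {expected_finish:.1f} days")  # ~12.5 days
```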
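And as a sketch of the calibration check described in item 4, the snippet below compares stated confidence with the fraction of answers actually correct, bucketed by confidence level. The responses are made up; in practice they would come from a trivia-style calibration test.

```python
# Minimal calibration check: group answers by stated confidence and compare
# each group's hit rate with that confidence. The responses are made up.
from collections import defaultdict

# Each tuple: (stated probability of being right, whether the answer was right)
responses = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, False), (0.9, True),
    (0.7, True), (0.7, False), (0.7, False), (0.7, True), (0.7, False),
    (0.5, True), (0.5, False), (0.5, False), (0.5, True), (0.5, False),
]

answers_by_confidence = defaultdict(list)
for confidence, correct in responses:
    answers_by_confidence[confidence].append(correct)

# A well-calibrated analyst's hit rate should roughly match each stated confidence.
for confidence in sorted(answers_by_confidence, reverse=True):
    outcomes = answers_by_confidence[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated confidence {confidence:.0%}: correct {hit_rate:.0%} "
          f"on {len(outcomes)} questions")
```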

Each of the above options forces analysts to consider instances other than the ones that readily come to mind. That said, they aren’t a sure cure for the problem: System 2 thinking does not guarantee correctness. Kahneman discusses several reasons why this is so. First, it has been found that education and training in decision-related disciplines (like statistics) do not eliminate incorrect intuitions; they only reduce them in favourable circumstances (such as when the question is reframed to make statistical cues obvious). Second, he notes that System 2 thinking is easily derailed: research has shown that the efficiency of System 2 is impaired by time pressure and multi-tasking. (Managers who put their teams under time and multi-tasking pressures should take note!) Third, highly accessible values, which form the basis for initial intuitive judgements, serve as anchors for subsequent System 2-based corrections. These corrections are generally insufficient – i.e. too small. And finally, System 2 thinking is of no use if it is based on incorrect assumptions: as a colleague once said, “Logic doesn’t get you anywhere if your premise is wrong.”

Conclusion

Cognitive biases are meta-risks that are responsible for many incorrect judgements in project (or any other) risk analysis. An apposite example is the financial crisis of 2008, which can be traced back to several biases such as groupthink, selective perception and over-optimism (among many others). An understanding of how these meta-risks operate suggests ways in which their effects can be reduced, though not eliminated altogether. In the end, the message is simple and obvious: for judgements that matter, there’s no substitute for due diligence – careful observation and thought, seasoned with an awareness of one’s own fallibility.

Written by K

September 3, 2009 at 11:10 pm