Eight to Late

Sensemaking and Analytics for Organizations

Archive for the ‘Risk analysis’ Category

Cognitive biases as project meta-risks


Introduction and background

A comment by John Rusk on this post got me thinking about the effects of cognitive biases on the perception and analysis of project risks. A cognitive bias is a human tendency to base a judgement or decision on a flawed perception or understanding of data or events. A recent paper suggests that cognitive biases may have played a role in some high-profile project failures. The author of the paper, Barry Shore, contends that the failures were caused by poor decisions which could be traced back to specific biases. A direct implication is that cognitive biases can have a significant negative effect on how project risks are perceived and acted upon. If true, this has consequences for the practice of risk management in projects (and other areas, for that matter). This essay discusses the role of cognitive biases in risk analysis, with a focus on project environments.

Following the pioneering work of Daniel Kahneman and Amos Tversky, there has been a lot of applied research on the role of cognitive biases in various areas of social sciences (see Kahneman’s Nobel Prize lecture for a very readable account of his work on cognitive biases).  A lot of this research highlights the fallibility of intuitive decision making.  But even judgements ostensibly based on data are subject to cognitive biases.  An example of this is when data is misinterpreted to suit the decision-maker’s preconceptions (the so-called confirmation bias). Project risk management is largely about making decisions regarding uncertain events that might impact a project. It involves, among other things, estimating the likelihood of these events occurring and the resulting impact on the project. These estimates and the decisions based on them can be erroneous for a host of reasons.  Cognitive biases are an often overlooked, yet universal,  cause of error.

Cognitive biases as project meta-risks

So, what role do cognitive biases play in project risk analysis? Many researchers have considered specific cognitive biases as project risks: for example, in this paper, Flyvbjerg describes how the risks posed by optimism bias can be addressed using reference class forecasting (see my post on improving project forecasts for more on this). However, as suggested in the introduction, one can go further. The first point to note is that biases are part and parcel of the mental makeup of humans, so any aspect of risk management that involves human judgement is subject to bias. Cognitive biases may thus be thought of as meta-risks: risks that affect risk analyses. Second, because they are a part of the mental baggage of all humans, overcoming them involves an understanding of the thought processes that govern decision-making, rather than externally-directed analyses (as in the case of risks). The analyst has to understand how his or her perception of risks may be affected by these meta-risks.

The publicly available research and professional literature on meta-risks in business and organisational contexts is sparse. One relevant reference is a paper by Jack Gray on meta-risks in financial portfolio management.  The first few lines of the paper state,

“Meta-risks are qualitative, implicit risks that pass beyond the scope of explicit risks. Most are born out of the complex interaction between the behaviour patterns of individuals and those of organizational structures” (italics mine).

Although he doesn’t use the phrase, Gray seems to be referring to cognitive biases – at least in part. This is confirmed by a reading of the paper. It describes, among other things, hubris (which roughly corresponds to the illusion of control) and discounting evidence that conflicts with one’s views (which corresponds to confirmation bias) as meta-risks. From this (admittedly small) sampling of the literature, it seems that the notion of cognitive biases as meta-risks has some precedent.

Next, let’s look at how biases can manifest themselves as meta-risks in a project environment. To keep the discussion manageable, I’ll focus on a small set of biases:

Anchoring: This refers to the tendency of humans to rely on a single piece of information when making a decision. I have seen this manifest itself in task duration estimation – where “estimates plucked out of thin air” by management serve as an anchor for subsequent estimation by the project team. See this post for more on anchoring in project situations. Anchoring is a meta-risk because the over-reliance on a single piece of information about a risk can have an adverse effect on decisions relating to that risk.

Availability: This refers to the tendency of people to base decisions on information that can be easily recalled, neglecting potentially more important information. As an example, a project manager might give undue weight to his or her most recent professional experiences when analysing project risks. Here availability is a meta-risk because it is a barrier to an objective consideration of risks that are not immediately apparent to the analyst.

Representativeness: This refers to the tendency to make judgements based on seemingly representative, known samples. For example, a project team member might base a task estimate on another (seemingly) similar task, ignoring important differences between the two. Another manifestation of representativeness is when probabilities of events are estimated based on those of comparable, known events. An example of this is the gambler’s fallacy. This is clearly a meta-risk, especially where “expert judgement” is used as a technique to assess risk (Why? Because such judgements are invariably based on comparable tasks that the expert has encountered before.)

Selective perception: This refers to the tendency of individuals to give undue importance to data that supports their own views. Selective perception is a bias that we’re all subject to; we hear what we want to hear, see what we choose to see, and remain deaf  and blind to the rest. This is a meta-risk because it results in a skewed (or incomplete) perception of risks.

Loss Aversion: This refers to the tendency of people to give preference to avoiding losses (even small losses) over making gains. In risk analysis this might manifest itself as overcautiousness. Loss aversion is a meta-risk because it might, for instance, result in the assignment of an unreasonably large probability of occurrence to a risk.

A particularly common manifestation of loss aversion in project environments is the sunk cost bias. In situations where significant investments have been made in projects, risk analysts might be biased towards downplaying risks.

Information bias: This is the tendency of some analysts to seek as much data as they can lay their hands on prior to making a decision. The danger here is of being swamped by too much irrelevant information. Data by itself does not improve the quality of decisions (see this post by Tim van Gelder for more on the dangers of data-centrism). Over-reliance on data – especially when there is no way to determine the quality and relevance of the data, as is often the case – can hinder risk analyses. Information bias is a meta-risk for two reasons already alluded to above: first, the data may not capture important qualitative factors; and second, the data may not be relevant to the actual risk.

I could work my way through a few more of the biases listed here, but I think I’ve already made my point: projects encompass a spectrum of organisational and technical situations, so just about any cognitive bias is a potential meta-risk.

Conclusion

Cognitive biases are meta-risks because they can affect decisions pertaining to risks – i.e. they are risks of risk analysis. Shore’s research suggests that the risks posed by these meta-risks are very real; they can cause project failure. So, at a practical level, project managers need to understand how cognitive biases could affect their own risk-related judgements (or any other judgements, for that matter). The previous section provides illustrations of how selected cognitive biases can affect risk analyses; there are, of course, many more. Listing examples is illustrative, and helps make the point that cognitive biases are meta-risks. However, it is more useful and interesting to understand how biases operate and what we can do to overcome them. As I have mentioned above, overcoming biases requires an understanding of the thought processes through which humans make decisions in the face of uncertainty. Of particular interest is the role of intuition and rational thought in forming judgements, and the common mechanisms that underlie judgement-related cognitive biases. A knowledge and awareness of these mechanisms might help project managers in consciously countering the operation of cognitive biases in their own decision making. I’m currently making some notes on these topics, with the intent of publishing them in a forthcoming essay – please stay tuned.

Note

Part II of this post is published here.

Written by K

August 9, 2009 at 9:59 pm

Cox’s risk matrix theorem and its implications for project risk management


Introduction

One of the standard ways of characterising risk on projects is to use matrices which categorise risks by impact and probability of occurrence. These matrices provide a qualitative risk ranking in categories such as high, medium and low (or colours: red, yellow and green). Such rankings are often used to prioritise and allocate resources to manage risks. There is a widespread belief that the qualitative ranking provided by matrices reflects an underlying quantitative ranking. In a paper entitled “What’s wrong with risk matrices?”, Tony Cox shows that the qualitative risk ranking provided by a risk matrix will agree with the quantitative risk ranking only if the matrix is constructed according to certain general principles. This post is devoted to an exposition of these principles and their consequences.

Since the content of this post may seem overly academic to some of my readers, I think it is worth clarifying why I believe an understanding of Cox’s principles is important for project managers. First, 3×3 and 4×4 risk matrices are widely used in managing project risk.  Typically these matrices are constructed in an intuitive (but arbitrary) manner. Cox shows – using very general assumptions – that there is only one sensible colouring scheme (or form) of these matrices. This conclusion was surprising to me, and I think that many readers may also find it so. Second, and possibly more important, is that the arguments presented in the paper show that it is impossible to maintain perfect congruence between qualitative (matrix) and quantitative rankings. As I discuss later, this is essentially due to the impossibility of representing quantitative rankings accurately on a rectangular grid. Developing an understanding of these points will enable project managers to use risk matrices in a more logically sound manner.

Background and preliminaries

Let’s begin with some terminology that’s well known to most project managers:

Probability: This is the likelihood that a risk will occur. It is quantified as a number between 0 (will definitely not occur) and 1 (will definitely occur).

Impact (termed “consequence” in the paper): This is the severity of the risk should it occur. It can also be quantified as a number between 0 (lowest severity) and 1 (highest severity).

Note that the above scales for probability and impact are arbitrary – other common choices are percentages or a scale of 0 to 10.

Risk:  In many project risk management frameworks, risk is characterised by the formula: Risk = probability x impact.  This formula looks reasonable, but is typically specified a priori, without any justification.

A risk can be plotted on a two dimensional graph depicting impact (on the x-axis) and probability (on the y-axis). This is typically where the problems start: for most risks, neither the probability nor the impact can be accurately quantified. The standard solution is to use a qualitative scale, where instead of numbers one uses descriptive text – for example, the probability, impact and risk can take on one of three values: high, medium and low (as shown in Figure 1 below).  In doing this,  analysts make the implicit assumption that the categorisation provided by the qualitative assessment ranks the risks in correct quantitative order. Problem is, this isn’t true.

Figure 1: A 3x3 Risk Matrix

Let’s look at the simple case of two risks A and B ranked on a 2×2 risk matrix shown in Figure 2 below.  Let’s assume that the probability and impact of each of the two risks are independent and uniformly distributed between 0 and 1. Clearly, if the two risks have the same qualitative ranking (high, say), there is no way to rank them correctly unless one has quantitative knowledge of probability and impact – which is usually not the case. In the absence of this information, there’s a 50% chance (all other factors being equal) of ranking them correctly – i.e.  one is effectively “flipping a coin” to choose which one has the higher (or lower) rank. This situation highlights a shortcoming of risk matrices: poor resolution. It is not possible to rank risks that have the same qualitative ranking.

Figure 2: A 2x2 Risk Matrix
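Incidentally, the “coin flip” claim is easy to check numerically. The following is a minimal Monte Carlo sketch – my own construction, not from Cox’s paper – which assumes, as above, that the probability and impact of both risks are independent and uniformly distributed over the “high” cell:

```python
import random

# Two risks land in the same cell of a 2x2 matrix (the "high" cell:
# probability and impact both in [0.5, 1]). With no quantitative information,
# we pick one at random - and are right only about half the time.
random.seed(42)
trials, correct = 100_000, 0
for _ in range(trials):
    a = random.uniform(0.5, 1) * random.uniform(0.5, 1)  # quantitative risk of A
    b = random.uniform(0.5, 1) * random.uniform(0.5, 1)  # quantitative risk of B
    guess_a_higher = random.random() < 0.5               # the "coin flip"
    if guess_a_higher == (a > b):
        correct += 1
print(f"fraction ranked correctly: {correct / trials:.3f}")  # ~0.500
```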

“That’s obvious,” I hear you say – and you’re right. But there’s more: if one of the ratings is medium and the other one is not (i.e. the other one is high or low), then there is a non-zero chance of making an incorrect ranking, because some points in the cell with the higher qualitative rating have a lower quantitative value of risk than some points in the cell with the lower qualitative rating. Look at that statement again: it implies that risk matrices can incorrectly assign higher qualitative rankings to quantitatively smaller risks – i.e. there is the possibility of making ranking errors. This point is seriously counter-intuitive (to me anyway) and merits a proof, which Cox provides and I discuss below. Before doing so, I should also point out that the discussion in this paragraph assumes that the probabilities and impacts of the two risks are independent and uniformly distributed. Cox also points out that the chance of making the wrong ranking can be even higher if probability and impact are correlated. In particular, if the correlation is negative (i.e. probability decreases as impact increases), a random ranking is actually better than that provided by the risk matrix. In this situation the information provided by risk matrices is “worse than useless” (a random choice is better!). Negative correlations between probability and impact are actually quite common – many situations involve a mix of high probability-low impact and low probability-high impact risks. See the paper for more on this.
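Both the cross-cell ranking error and the effect of correlation can be explored by simulation. The sketch below is again my own construction: it assumes a 2×2 matrix cut at 0.5 on both axes, rates the lower-left cell low, the top-right cell high and the off-diagonal cells medium, and measures how often the qualitative ordering of two differently rated risks agrees with their quantitative ordering. The negatively correlated sampler is an illustrative choice, not one from the paper:

```python
import random

random.seed(1)

def rating(p, i):
    """Qualitative rating on a 2x2 matrix cut at 0.5: 0 = low, 1 = medium, 2 = high."""
    if p >= 0.5 and i >= 0.5:
        return 2
    if p < 0.5 and i < 0.5:
        return 0
    return 1

def agreement(sampler, pairs=50_000):
    """Among pairs of risks with different qualitative ratings, how often does
    the qualitative order agree with the quantitative order (risk = p * i)?"""
    agree = total = 0
    while total < pairs:
        (p1, i1), (p2, i2) = sampler(), sampler()
        r1, r2 = rating(p1, i1), rating(p2, i2)
        if r1 == r2:
            continue  # same rating: the matrix expresses no preference
        total += 1
        agree += (r1 > r2) == (p1 * i1 > p2 * i2)
    return agree / total

def independent():
    return random.random(), random.random()

def negatively_correlated():
    i = random.random()
    p = min(1.0, max(0.0, 1.0 - i + random.gauss(0, 0.05)))  # p falls as i rises
    return p, i

print(f"independent:            {agreement(independent):.3f}")            # well above 0.5
print(f"negatively correlated:  {agreement(negatively_correlated):.3f}")  # roughly 0.5
```

Under independence the matrix does add information (agreement well above chance), but under strong negative correlation agreement collapses to roughly coin-flip level – consistent with Cox’s observation that, for some joint distributions, the matrix can be worse than useless.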

Weak consistency and its implications

With the issues of poor resolution and ranking errors established, Cox asks the question: What can be salvaged?  The underlying problem is that the joint distribution of probability and impact is unknown. The standard approach to improving the utility of risk matrices is to attempt to characterise this distribution. This can be done using artificial intelligence tools – and Cox provides references to papers that use some of these techniques to characterise distributions. These techniques typically need plentiful data as they attempt to infer characteristics of the joint distribution from data points. Cox, instead, proposes an approach that is based on general properties of risk matrices – i.e. an approach that prescribes a set of rules that ensure consistency. This has the advantage of being general,  and not depending on the availability of data points to characterise the probability distribution.

So what might a consistency criterion look like? Cox suggests that, at the very least, a risk matrix should be able to distinguish reliably between very high and very low risks. He formalises this requirement in his definition of weak consistency, which I quote from the paper:

A risk matrix with more than one “colour” (level of risk priority) for its cells satisfies weak consistency with a quantitative risk interpretation if points in its top risk category (red) represent higher quantitative risks than points in its bottom category (green)

The notion of weak consistency formalises the intuitive expectation that a risk matrix must, at the very least, distinguish  between the lowest and highest (quantitative) risks.  If it can’t, it is indeed “worse than useless”.  Note that weak consistency doesn’t say anything about distinguishing between medium and lowest/highest risks – merely between the lowest and highest.

Having defined weak consistency, Cox derives some of its surprising consequences, which I describe next.

Cox’s First Lemma:  If a risk matrix satisfies weak consistency, then no red cell (highest risk category) can share an edge with a green cell (lowest risk category).

Proof: To see why this is plausible, consider the different ways in which a red cell can adjoin a green one. Basically there are only two ways in which this can happen, which I’ve illustrated in Figure 3. Now assume that the quantitative risk of the midpoint of the common edge is a number n (n between 0 and 1). Then, if x and y are the impact and probability, we have

xy = n, or equivalently, y = n/x

So, the locus of all points having the same risk as the midpoint (often called the iso-risk contour) is a rectangular hyperbola with negative slope (i.e. y decreases as x increases). The negative slope (see Figure 3) implies that the points above the iso-risk contour in the green cell have a higher quantitative risk than points below the contour in the red cell. This contradicts weak consistency. Hence – by reductio ad absurdum – it isn’t possible to have a green cell and a red cell with a common edge.

Figure 3: Figure for Lemma 1
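If the algebra seems abstract, a pair of concrete points makes the contradiction visible. The numbers below are my own, chosen purely for illustration:

```python
# Suppose a green cell [0, 0.5] x [0.5, 1] shares its right edge with a red
# cell [0.5, 1] x [0.5, 1] (x = impact, y = probability). The midpoint of the
# common edge is (0.5, 0.75), so the iso-risk contour through it is xy = 0.375.
risk = lambda x, y: x * y

green_point = (0.45, 0.90)  # in the green cell, above the contour
red_point = (0.55, 0.60)    # in the red cell, below the contour

print(risk(*green_point))  # 0.405 - the *green* point carries the higher risk
print(risk(*red_point))    # 0.330
assert risk(*green_point) > risk(*red_point)  # weak consistency is violated
```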

Cox’s Second Lemma: If a risk matrix satisfies weak consistency and has at least two colours (green in the lower left and red in the upper right, if the axes are oriented to depict increasing probability and impact), then no red cell can occur in the bottom row or left column of the matrix.

Proof: Assume it is possible to have a red cell in the bottom row or left column. Now consider an iso-risk contour for a sufficiently small risk (i.e. a contour that passes through the lower left-most green cell). By the properties of rectangular hyperbolas, this contour must pass through all cells in the bottom row and the left-most column, as shown in Figure 4. Thus, by an argument similar to that of the previous lemma, all points below the iso-risk contour in either of the red cells have a smaller quantitative risk than points above it in the green cell. This violates weak consistency, and hence the assumption is incorrect.

Figure 4: Figure for Lemma 2
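Again, this is easy to check numerically. The sketch below assumes a 3×3 grid with cell boundaries at thirds (my choice, for illustration) and uses the fact that the contour xy = n crosses a cell’s interior exactly when n lies between the risks at the cell’s lower-left and upper-right corners:

```python
cuts = [0, 1/3, 2/3, 1]  # cell boundaries on both axes

def crosses(n, row, col):
    """Does the contour xy = n pass through the interior of cell (row, col)?"""
    lo = cuts[col] * cuts[row]          # risk at the lower-left corner
    hi = cuts[col + 1] * cuts[row + 1]  # risk at the upper-right corner
    return lo < n < hi

n = 0.05  # small enough to pass through the lower left-most cell (max risk 1/9)
print([crosses(n, 0, col) for col in range(3)])  # bottom row: [True, True, True]
print([crosses(n, row, 0) for row in range(3)])  # left column: [True, True, True]
```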

An implication that follows directly from the above lemmas is that any risk matrix that satisfies weak consistency must have at least three colours!

Surprised? I certainly was when I first read this.

Between-ness and its implications

If a risk matrix provides a qualitative representation of the actual quantitative risks, then small changes in the probability or impact should not cause discontinuous jumps in risk categorisation from the lowest to the highest category without going through the intermediate category. (Recall, from the previous section, that a weakly consistent matrix must have at least three colours.)

This expectation is formalised in the axiom of between-ness:

A risk matrix satisfies the axiom of between-ness if every positively sloped line segment that lies in a green cell at its lower end and a red cell at its upper end must pass through at least one intermediate cell (i.e. one that is neither red nor green).

By definition, no 2×2 matrix can satisfy between-ness. Further, amongst 3×3 matrices, only one colour scheme satisfies both weak consistency and between-ness. This is the matrix shown in Figure 1: green in the leftmost column and bottom row, red in the upper right-most cell and yellow in all other cells. This, to me, is a truly amazing consequence of a couple of simple, intuitive axioms.
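It is straightforward to confirm that this scheme is weakly consistent. The check below assumes an evenly divided grid (boundaries at thirds, my assumption): the supremum of risk over the green cells (1/3) is strictly less than the infimum over the red cell (4/9):

```python
cuts = [0, 1/3, 2/3, 1]
green = [(0, c) for c in range(3)] + [(r, 0) for r in (1, 2)]  # bottom row + left column
red = [(2, 2)]                                                 # top right cell

sup_green = max(cuts[c + 1] * cuts[r + 1] for r, c in green)  # largest green corner risk
inf_red = min(cuts[c] * cuts[r] for r, c in red)              # smallest red corner risk
assert sup_green < inf_red  # every red point outranks every green one
print(sup_green, inf_red)   # 0.333... 0.444...
```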

Consistent colouring and its implications

The basic idea behind consistent colouring is that risks that have identical quantitative values should have the same qualitative ratings. This is impossible to achieve in a discrete risk matrix because iso-risk contours cannot coincide with cell boundaries (Why? Because iso-risk contours have negative slopes whereas cell boundaries have zero or infinite slope – i.e. they are horizontal or vertical lines). So, Cox suggests the following: enforce consistent colouring for the extreme categories only – red and green – allowing violations for intermediate categories. What this means is that cells that contain iso-risk contours which pass through other red cells (“red contours”) must be red, and cells that contain iso-risk contours which pass through other green cells (“green contours”) must be green. Hence the following definition of consistent colouring:

  1. A cell is red if it contains points with quantitative risks at least as high as those in other red cells, and does not contain points with quantitative risks as small as those on any green cell.
  2. A cell is green if it contains points with risks at least as small as those in other green cells, and does not contain points with quantitative risks as high as those in any red cell.
  3. A cell has an intermediate colour only if it a) lies between a red cell and a green cell or b) it contains points with quantitative risks higher than those in some red cells and also points with quantitative risks lower than those in some green cells.

An iso-risk contour is green if it passes through one or more green cells but no red cells, and a red contour is one which passes through one or more red cells but no green cells. Consistent colouring then implies that cells with red contours and no green contours are red, and cells with green contours and no red contours are green (and, obviously, cells with contours of both colours are intermediate).

Implications of the three axioms – Cox’s Risk Matrix Theorem

So, after a longish journey, we have three axioms: weak consistency, between-ness and consistent colouring. With that done, Cox rolls out his theorem – which I dub Cox’s Risk Matrix Theorem (not to be confused with Cox’s Theorem from statistics!). It can be stated as follows:

In a risk matrix satisfying weak consistency, between-ness and consistent colouring:

a) All cells in the leftmost column and in the bottom row are green.

b) All cells in the second column from the left and the second row from the bottom are non-red.

The proof is a bit long, so I’ll omit it, making a couple of plausibility arguments instead:

  1. The lower leftmost cell is green (by definition), and consistent colouring implies that all contours that lie below the one passing through the upper right corner of this cell must also be green because a) they pass through the lower leftmost cell which is green and b) none of the other cells they pass through are red (by Cox’s second lemma). The other cells on the lowest or leftmost edge of the matrix can only be intermediate or green. That they cannot be intermediate is a consequence of  between-ness.
  2. That the second row and second column must be non-red is also easy to see: assume any of these cells to be red. We then have a red cell adjoining a green cell, which violates between-ness.

I’ll leave it at that, referring the interested reader to the paper for a complete proof.

Cox’s theorem has an immediate corollary which is particularly interesting for project managers who use 3×3 and 4×4 risk matrices:

A tricoloured 3×3 or 4×4 matrix that satisfies weak consistency, between-ness and consistent colouring can have only the following (single!) colour scheme:

a) Leftmost column and bottom row coloured green.

b) Top right cell (for 3×3) or four top right cells (for 4×4) coloured red.

c) All other cells coloured yellow.

Proof: Cox’s theorem implies that the leftmost column and bottom row are green. The top right cell must be red (since it is a tricoloured matrix). Consistent colouring implies that the two cells adjoining this cell (in a 4×4 matrix) and the one diagonally adjacent must also be red (this cannot be so for a 3×3 matrix because these cells would adjoin a green cell, which violates Cox’s first lemma). All other cells must be yellow by between-ness.

This result is quite amazing. From three very intuitive axioms Cox derives essentially the only possible colouring scheme for 3×3 and 4×4 risk matrices.
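For readers who like to kick the tyres, here is a quick Monte Carlo spot check – mine, not Cox’s – that the corollary’s 4×4 scheme is weakly consistent, assuming an evenly divided grid with boundaries at quarters:

```python
import random

random.seed(7)
cuts = [0, 0.25, 0.5, 0.75, 1]
green = [(0, c) for c in range(4)] + [(r, 0) for r in (1, 2, 3)]  # bottom row + left column
red = [(2, 2), (2, 3), (3, 2), (3, 3)]                            # four top right cells

def sample_risk(cells):
    """Risk of a point drawn uniformly from a randomly chosen cell."""
    row, col = random.choice(cells)
    return random.uniform(cuts[col], cuts[col + 1]) * random.uniform(cuts[row], cuts[row + 1])

green_risks = [sample_risk(green) for _ in range(100_000)]
red_risks = [sample_risk(red) for _ in range(100_000)]
print(max(green_risks), min(red_risks))  # both cluster around 0.25...
assert max(green_risks) < min(red_risks)  # ...but never cross (weak consistency)
```

Here the green supremum and the red infimum meet at 0.25, but only on cell boundaries; interior points never violate the ordering.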

Conclusion

This brings me to the end of this post on Cox’s axiomatic approach to building logically consistent risk matrices. I highly recommend reading the original paper for more. Although it presents some fairly involved arguments, it is very well written. The arguments are presented with clarity and logical surefootedness, and the assumptions underlying each argument are clearly laid out. The three principles (or axioms) proposed are intuitively appealing – even obvious – but their consequences are quite unexpected (witness the unique colouring scheme for 3×3 and 4×4 matrices). Further, the arguments leading up to the lemmas and theorems bring up points that are worth bearing in mind when using risk matrices in practical situations.

In closing I should mention that the paper also discusses some other limitations of risk matrices that flow from these principles: in particular, spurious risk resolution and inappropriate resource allocation based on qualitative risk categorisation.   For reasons of space, and the very high likelihood that I’ve already tested my readers’ patience to near (if not beyond) breaking point,  I’ll defer a discussion of these to a future post.

Note added on 20 December, 2009:

See this post for a visual representation of the above discussion of Cox’s risk matrix theorem and the comments that follow.

Written by K

July 1, 2009 at 10:05 pm

A new perspective on risk analysis in projects


Introduction

Projects are, by definition, unique endeavours. Hence it is important that project risks be analysed and managed in a systematic manner. Traditionally, risk analysis in projects – or any other area – focuses on external events. In a recent paper entitled The Pathogen Construct in Risk Analysis, published in the September 2008 issue of the Project Management Journal, Jerry Busby and Hongliang Zhang articulate a fresh perspective on risk analysis in projects. They argue that the analysis of external threats should be complemented by an understanding of how internal decisions and organisational structures affect risks. What’s really novel, though, is their use of metaphor: they characterise these internal sources of risk as pathogens. Below I explore their arguments via an annotated summary of their paper.

What’s a risk pathogen?

“Risk,” the authors state, “is a statistical concept of events that happen to someone or something.” Traditional risk analysis concerns itself with identifying risks, determining the probability of their occurrence, and finding ways of dealing with them. Risks are typically considered to be events that are external to an organisation. This approach has its limitations because it does not explicitly take into account the deficiencies and strengths of the organisation. For example, a project may be subject to risk due to the use of an unproven technology. When the risk becomes obvious, one has to ask why that particular technology was chosen. There could be several reasons for this, each obviously flawed only in hindsight. Some reasons may be: a faulty technology selection process, over-optimism, decision makers’ fascination with new technology or some other internal predisposition. Whatever the case, the conditions that led to the choice of technology existed prior to the event that triggered the failure. The authors label such preexisting conditions pathogens. In the authors’ words, “At certain times, external circumstances combine with ‘resident pathogens’ to overcome a system’s defences and bring about its breakdown. The defining aspect of these metaphorical pathogens is that they predate the conditions that trigger the breakdown, and are generally more stable and observable.”

It should be noted that the pathogen tag is subjective – that is, one party might view a certain organisational predisposition as pathogenic whereas another might view it as protective. To illustrate using the above example: management might view a technology as unproven, whereas developers might view it as offering the company a head start in a new area. Perceptions determine how a “risk” is viewed: different groups will select particular risks for attention, depending on their cultural affiliations, background, experience and training. Seen in this light, the subjectivity of the pathogen label is reasonable, if not obvious. In the paper, the authors examine risk pathogens in projectised organisations, with particular focus on the subjectivity of the label (i.e. different perceptions of what is pathogenic). Why is this important? The authors note that in their studies, “the most insidious kind of risk to a project – the least well understood and potentially the most difficult to manage if materialised – was the kind that involved contradictory interpretations.” These contradictory interpretations must be recognised and addressed by risk analysis; else they will get in the way of dealing with risks that become reality.

The authors use a case study based approach, using a mix of projects drawn from the UK and China. In order to accentuate the differences between pathogenic and protective perspectives of “pathogens”, the selected projects had both public and private sector involvement. In each of the projects, the following criteria were used to identify pathogens. A pathogen:

  • Is the cause of an identifiable adverse organisational effect.
  • Is created by social actors – it should not be an intrinsic vulnerability such as a contract or practice.
  • Exists prior to the problem – i.e. it predates the triggering event.
  • Becomes a problem (or is identified as a problem) only after the triggering event.

The authors claim that in all cases studied, the pathogen was easily identifiable. Further, it was also easy to identify contradictory interpretations (protective behaviour) made by other parties. As an example, in a government benefits card project, the formulation of requirements was done only at a high level (pathogen). The project could not be planned properly as a consequence (triggering event). This led to poor developer performance and time/cost overruns (effect). The ostensible reason for doing requirements only at a high level was to save time and cost in the bidding process (protective interpretation). Another protective interpretation was that detailed requirements would strait-jacket the development team and preclude innovation. Note that the adaptive (or protective) interpretation refers to a risk other than the one that actually occurred. This is true of all the examples listed by the authors – in all cases the alternate interpretation refers to a risk other than the one that occurred, implying that the risk that actually occurred was somehow overlooked or ignored in the original risk analysis. It is interesting to explore why this happens, so I’ll jump straight to the analysis and discussion, referring the reader to the paper for further details on the case studies.

Analysis and Discussion

From an analysis of their data, the authors suggest three reasons why a practice that is seen as adaptive might actually end up being pathogenic:

  • Risks change with time, and managing risk at one time cannot be separated from managing it at another. For example, a limited-scale pilot project may be done on a shoestring budget (to save cost). A successful pilot may be seen as protective in the sense that it increases confidence that the project is feasible. However, because of the limited scope of the pilot, it may overlook certain risks that are triggered much later in the project.
  • Risks are often interdependent – i.e. how one risk is addressed may affect another risk in an adverse manner (e.g. increase the probability of its occurrence).
  • The stakeholders in a project do not have unrestricted choices on how they can address risks. There are always constraints (procedural or financial, for example) which restrict options on how risks can be handled. These constraints may lead to decisions that affect other risks negatively.

I would add another point to this list:

  • Stakeholders do not always have all the information they need to make informed decisions on risks. As a consequence, they may not foresee the pathogenic effect of their decisions. The authors allude to this in the paper, but do not state it as an explicit point. In their words, “Being engaged in a particular stage of a project selects certain risks for a project manager’s attention, and the priority becomes dealing with these risks rather than worrying about how widely the way of dealing with them will ramify into other stages of the project.”

The authors then discuss the origins of subjectivity on whether something is pathogenic or adaptive. Their data suggests the following factors play an important role in how a stakeholder might view a particular construct:

  • Identity: This refers to the roles people play on projects. For example, a sponsor might view a quick requirements gathering phase as protective, in that it saves time and money; whereas a project manager or developer may view it as pathogenic, as it could lead to problems later.
  • Expectations of blame: It seems reasonable that stakeholders would view factors that cause outcomes that they may be blamed for as pathogenic. As the authors state, “Blameworthy events become highly specific risks to an individual and the origin of these events – whether practices, artefacts or decisions – become relevant pathogens.” The authors also point out that the expectation of blame plays a larger role in projectised organisations – where project managers are given considerable autonomy – compared to functional organisations where blame may be harder to apportion.

Traditional risk analysis, according to the authors, focuses on face-value risks – i.e. on external threats – rather than the subjective interpretations of these risks by different stakeholders. To quote, “…problematic events become especially intractable because actors’ interpretations of risk are contradictory.” These contradictory interpretations are easy to understand in the light of the discussion above. This then begs the question: how does one deal with this subjectivity of risk perception? The authors offer the following advice, combining elements of traditional risk analysis with some novel suggestions:

  • Get the main actors (or stakeholders) to identify the risks (as they perceive them), analyse them and come up with mitigation strategies.
  • Get the stakeholders to analyse each other’s analyses, looking for contradictory interpretations of factors.
  • Get the stakeholders together, to explore the differences in interpretations particularly from the perspective of whether:
    • These differences will interfere with management of risks as they arise.
    • There are ways of managing risks that avoid creating problems for other risks.

They suggest that it is important to avoid seeking consensus, because consensus invariably results in compromises that are sub-optimal from the point of view of managing multiple risks.

I end this section with a particularly apposite quote from the paper, “At some point the actors need to agree on how to get on with the concrete business of the project, but they should be clear not only about the risks this will create for them, but also the risks it creates for others – and the risks that will come from others trying to manage their risks.” That, in a nutshell, is the message of the paper.

Conclusion

The authors use the metaphor of a pathogen to describe inherent organisational characteristics or factors that become “harmful” or “pathogenic” when certain risks are triggered. The interpretations of these factors are subjective, in that one person’s “pathogen” may be another person’s “protection”. Further, a factor that offers protection at one stage of a project may in fact become pathogenic at a later stage. Such contradictory views must be discussed in an open manner in order to manage risks effectively.

Although the work is based on relatively few data points,  it offers a novel perspective on the perception of risks in projects.  In my opinion the paper is well written, interesting and well worth a read for academics, consultants and project managers.

References:

Busby, J. & Zhang, H. (2008). The Pathogen Construct in Risk Analysis. Project Management Journal, 39(3), 86-96.

Written by K

November 10, 2008 at 9:27 pm