Eight to Late

Sensemaking and Analytics for Organizations


The Labyrinths of Information – a book review


 Introduction

Once implemented, IT systems can evolve in ways quite different from their original intent and design. One of the reasons for this is that enterprise systems are based on simplistic models that do not capture the complexities of real organisations. The gap between systems and reality is the subject of a fascinating book by Claudio Ciborra entitled, The Labyrinths of Information. Among other things, the book presents an alternative viewpoint on systems development, one that focuses on the reasons for divergence between design and reality. It also discusses other aspects of system development that tend to be obscured by mainstream development methodologies and processes. This post is a summary and review of the book.

 Background

The standard treatment of systems development in corporate environments is based on the principles of scientific management. Yet, as Ciborra tells us,

…science-based, method-driven approaches can be misleading.  Contrary to their promise, they are deceivingly abstract and removed from practice. Everyone can experience this when he or she moves from the models to the implementation phase. The words of caution and pleas for ‘change management’ interventions that usually accompany the sophisticated methods and polished models keep reminding us of such an implementation gap. However, they offer no valid clue on how to overcome it…

Just to be clear, Ciborra offers no definitive solutions either. However, he offers  “clues on how to bridge the gap” by  looking into some of the informal techniques and approaches that people “on the ground” – users, designers, developers or managers –  use to work and cope with technology. He is not concerned with techniques or methodologies per se, but rather with how people deal with the messy day-to-day business of working with technology in organisations.

The book is organised as a collection of essays based on Ciborra’s research papers spanning a couple of decades – from the mid 1980s until a few years prior to his death in 2005. I discuss each of the chapters in order below, providing links to the original papers where I could find them.

 The divergence between models and reality

Most of the tools and techniques used in systems evaluation, design and development are based on simplified models of organisational reality. However, organisations do not function according to organograms, data flow diagrams or entity-relationship models. Models used by systems professionals abstract away much of the messiness of real life. The methods that come out of such simplifications cannot deal with the complexities of a real organisation. As Ciborra states, “…concern with method is one of the key aspects of our discipline and possibly the true origin of its crisis…”

Indeed, as any systems professional will attest, the unforeseen occurrences and situations inevitably encountered in real life are what cause the biggest headaches in the implementation and acceptance of systems. Those on the ground deal with such exceptions using creative but essentially ad hoc approaches. Much of the book is a case-study based discussion of such improvised approaches to systems development.

 Making (do) with what is at hand

Ciborra argues that successful systems are invariably imitated by competitors, so any competitive advantage offered by such systems is, at best, limited. A similar argument holds for standards and best practices – they  promote uniformity rather than distinction. Given this, organisations should strive towards practices that cannot be copied. They should work towards inimitability.

In art, bricolage refers to a process of creating a work from whatever is at hand. Among other things it involves tinkering, improvising and generally making do with what is available. Ciborra argues that many textbook cases of strategic systems in fact evolved through bricolage, tinkering and serendipity rather than planning. Some of the cases he discusses include the Sabre reservation system developed by American Airlines, and the development of email (as part of the ARPANET project). Moreover, although the Sabre system afforded American Airlines a competitive advantage for a while, it soon became a part of the travel reservation infrastructure, thereby becoming an operational necessity rather than an advantage. This is much the same point that Nicholas Carr made in his article, IT Doesn’t Matter.

The question that you may be asking at this point is: “All this is well and good, but does Ciborra have any solutions to offer?” Well, that’s the problem: Ciborra tells us that bricolage and improvisation ought to be encouraged, but offers little advice on how this can be done. For example, he tells  us to “Value bricolage strategically”, “Design tinkering” and “Establish systematic serendipity”  – sounds great in theory, but what does it really mean? It  is platitudinous advice that is hard to action.

Nevertheless his main point is a good one: that managers should encourage informal, creative practices instead of clamping down on them. This advice has not generally been heeded. Indeed, corporate IS practices have gone the other way, down the road of standardisation and best practices. Ciborra tells us in no uncertain terms that this is not a good thing.

 The enframing effect of technology

This is, in my opinion, the most difficult chapter in the book. It is based on a paper by Ciborra and Hanseth entitled, From tool to Gestell: Agendas for managing the information infrastructure. In German the term Gestell means shelf or rack. The philosopher Martin Heidegger used the term to describe the way in which technology frames the way we view (or “organise”) the world. Ciborra highlights the way in which existing infrastructure affects the success of business processes and practices, and emphasises that technology-based enterprise initiatives are doomed to fail unless they pay due attention to:

  1. Existing or installed infrastructure.
  2. Local needs and concerns.

Instead of attempting to oust old technology, system designers and implementers need to co-opt or cultivate the installed base (and the user community) if they are to succeed at all.  In this sense installed infrastructure is an actor (like an individual) with its own interest and agenda. It provides a context for the way people think and also influences future development.

The notion of Gestell thus reminds us of how existing technology influences and limits the way we think. To get around this, Ciborra suggests that we should:

  1. Be aware of technology and standards, but not be captive to them.
  2. Think imaginatively, but pay attention to the installed base (existing platforms and users).
  3. Remember that top down technology initiatives rarely succeed.

The drifting of information infrastructure

Ciborra uses Donald Schoen’s metaphor of the high ground and the swamp to highlight the gap between the theory and practice of information systems (see this paper by Schoen for a discussion of the metaphor). The high ground is the executive management view, where methodologies and management theories hold sway, while the swamp is the coalface where the messy, day-to-day reality of organisational work unfolds. In the swamp of day-to-day work, people tend to use available technology in any way possible to solve real (and messy) problems. So, although a particular technology may have an espoused or intended aim, it may well be used in ways that are completely unforeseen by its designers.

The central point of this essay is that the full implications of a technology are often realised only after it has been implemented and used for a while. In Ciborra’s words, technology drifts – that is, it is put to uses that cannot be foreseen. Moreover, it may never be used in ways that were intended by its designers. Although Ciborra lists several cases that demonstrate this point, in my opinion his blanket claim that technology drifts is a bit over the top. Sure, in some cases technologies may be used in unforeseen ways, but by and large they are used in ways that are intended and planned.

 The organisation as a host

Reactions to a new technology in an organisation are generally mixed – some people may view the technology with some trepidation (because of the changes to their work routines, for instance) while others may welcome it (because of promised efficiencies, say). In metaphorical terms, the new technology is a “guest,” whose “desires” and “intentions” aren’t fully known. Seen in the light of this metaphor, the notion of hospitality makes sense: as Ciborra puts it, the organisation hosts the technology.

To be sure, the idea of hospitality applying to objects such as information systems will probably cause a few raised eyebrows. However it isn’t as “out there” as it sounds. Consider, for example, the following implications of the metaphor:

  • Interaction between the host and guest can change both parties.
  • If the technology is perceived as unfriendly, it will be rejected (or even ejected!).
  • System development and operations methodologies are akin to cultural rituals (it is how we “deal with” the guest).
  • Technologies, like guests, stay for a while but not forever.

Ciborra’s intent  in this and most of the other essays is to make us ponder over the way we design, develop and run systems,  and possibly view what we do in a different light.

 The organisation as a platform

In this essay Ciborra looks at the way in which successful technology organisations adapt and adjust to rapidly changing environments. It is based on his paper entitled, The Platform Organization: Recombining Strategies, Structures and Surprises. Using a case study, he makes the point that the only way organisations can respond to rapidly evolving technology markets is to be open to recombining available resources in flexible ways: it is impossible to start from scratch; one has to work with what is at hand, using it in creative ways.

Another point he makes is that the organisation of an organisation (its hierarchy and structure) at any particular time is less important than how it got there, where it is headed and what obstacles lie in the way. To quote from the book:

 …analysing and evaluating the platform organisation at a fixed point in time is of little use: it may look like a matrix, or a functional hierarchy, and one may wonder how well its particular form fits the market for that period and what its level of efficiency really is. What should be appreciated, instead, is the whole sequence of forms adopted over time, and the speed and friction in shifting from one to the other.

However, the identification of such a trajectory can be misleading – despite after-the-fact rationalisations, management in such situations is often based on improvised actions rather than carefully laid plans.  Although this may not always be so, I suspect it is more common than managers would care to admit.

 Improvisation and mood

By now the reader would have noted that Ciborra’s focus is squarely on the unexpected occurrences in day-to-day organisational work. So it will come as no surprise that the last essay in the book deals with improvisation.

Ciborra argues that most studies on improvisation have a cognitive focus – that is, they deal with how people respond to emerging situations by “quick thinking.” In his opinion, such studies ignore the human aspect of improvised actions, the emotions and moods evoked by situations that call for improvisation. These, he suggests, can be the difference between improvised actions and panic.

As he puts it, people are not cognitive robots – their moods will determine whether they respond to a situation with indifference or with interest and engagement. This human dimension of improvisation, though elusive, is the key to understanding improvisation (and indeed, any creative or innovative action).

He also discusses the relationship between improvisation and time – something I have discussed at length in an earlier post, so I’ll say no more about it here.

 A methodological postscript

In a postscript to the book, Ciborra discusses his research philosophy – the thread that links the essays in the book. His basic contention is that methodologies and organisational models are based on after-the-fact rationalisations of real phenomena. More often than not such methods and models are idealisations that omit the messiness of real life organisations. They are abstractions, not reality. As such they can guide us, but we should be ever open to the surprises that real life may afford us.

Summarising

The essential message that Ciborra conveys is a straightforward one – that the real world is a messy place and that the simplistic models on which systems are based cannot deal with this messiness in full. Despite our best efforts there will always be stuff that “leaks out” of our plans and models. Ciborra’s book celebrates this messiness and reminds us that people matter more than systems or processes.

Written by K

September 8, 2011 at 10:41 pm

The Flaw of Averages – a book review


Introduction

I’ll begin with an example. Assume you’re having a dishwasher installed in your kitchen. This (simple?) task requires the services of a plumber and an electrician, and both of them need to be present to complete the job. You’ve asked them to come in at 7:30 am. Going by previous experience, these guys are punctual 50% of the time. What’s the probability that work will begin at 7:30 am?

At first sight, it seems there’s a 50% chance of starting on time. However, this is incorrect – the chance of starting on time is actually 25%, the product of the individual probabilities for each of the tradesmen. This simple example illustrates the central theme of a book by Sam Savage entitled, The Flaw of Averages: Why We Underestimate Risk in the Face of Uncertainty. This post is a detailed review of the book.
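
To see why, here’s a minimal simulation sketch (mine, not the book’s; it simply assumes the two tradesmen’s punctuality is independent, each at 50%):

```python
import random

# A quick simulation of the dishwasher example, assuming the plumber's and the
# electrician's punctuality are independent, each with a 50% chance of being on time.
trials = 100_000
both_on_time = sum(
    1 for _ in range(trials)
    if random.random() < 0.5 and random.random() < 0.5  # both must show up at 7:30
)
print(both_on_time / trials)  # ~0.25, i.e. the product 0.5 * 0.5, not 0.5
```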

The key message that Savage conveys is that uncertain quantities cannot be represented by single numbers; rather, they are a range of numbers, each with a different probability of occurrence. Hence such quantities cannot be manipulated using standard arithmetic operations. The example in the previous paragraph illustrates this point. This is well known to those who work with uncertain numbers (actuaries, for instance), but is not so well understood by business managers and decision makers. Hence the executive who asks his long-suffering subordinate to give him a projected sales figure for next month, with the quoted number then being taken as the 100% certain figure. Sadly such stories are more the norm than the exception, so it is clear that there is a need for a better understanding of how uncertain quantities should be interpreted. The main aim of the book is to help those with little or no statistical training achieve that understanding.

Developing an intuition for uncertainty

Early in the book, Savage presents five tools that can be used to develop a feel for uncertainty. He refers to these tools as mindles – or mind handles.  His five mindles for uncertainty are:

  1. Risk is in the eye of the beholder, uncertainty isn’t. Basically this implies that uncertainty does not equate to risk. An uncertain event is a risk only if there is a potential loss or gain involved. See my review of Douglas Hubbard’s book on the failure of risk management for more on risk vs. uncertainty.
  2. An uncertain quantity is a shape (or a distribution of numbers) rather than a single number. The broadness of the shape is a measure of the degree of uncertainty. See my post on the inherent uncertainty of project task estimates for an intuitive discussion of how a task estimate is a shape rather than a number.
  3. A combination of several uncertain numbers is also a shape, but the combined shape is very different from those of the individual uncertainties.  Specifically, if the uncertain quantities are independent, the combined  shape can be narrower (i.e. less uncertain) than that of the individual shapes.  This provides the justification for portfolio diversification, which tells us not to put all our money on one horse, or eggs in one basket etc. See my introductory post on Monte Carlo simulations to see an example of how multiple uncertain quantities can combine in different ways.
  4. If the individual uncertain quantities (discussed in the previous point) aren’t independent, the overall uncertainty can increase or decrease depending on whether the quantities are positively or negatively related. The nature of the relationship (positive or negative) can be determined from a scatter plot of the quantities. See my post on simulation of correlated project tasks for examples of scatter plots. The post also discusses how positive relationships (or correlations) can increase uncertainty.
  5. Plans based on average numbers are incorrect on average. Using average numbers in plans usually entails manipulating them algebraically and/or plugging them into functions. Savage explains how the form of the function can lead to an overestimation or underestimation of the planned value. Although this sounds somewhat abstruse, the basic idea is simple: manipulating an average number using mathematical operations will amplify the error caused by the flaw of averages.

Savage explains the above concepts using simple arithmetic supplemented with examples drawn from a range of real-life business problems.
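
As a concrete illustration of mindles 2 and 3, here’s a small Python sketch (my own, not from the book; the lognormal “bet” and the ten-bet portfolio are purely illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
trials = 100_000

# A single uncertain quantity is a "shape": here, a skewed (lognormal) return on one bet.
single_bet = rng.lognormal(mean=0.0, sigma=0.5, size=trials)

# Ten independent bets of the same kind, averaged (a toy "portfolio").
portfolio = rng.lognormal(mean=0.0, sigma=0.5, size=(trials, 10)).mean(axis=1)

for name, x in [("single bet", single_bet), ("10-bet portfolio", portfolio)]:
    print(f"{name}: mean = {x.mean():.3f}, spread (std) = {x.std():.3f}")
# The averages are (nearly) identical, but the portfolio's spread is much narrower:
# combining independent uncertainties changes the shape, not just the average.
```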

The two forms of the flaw of averages

The book makes a distinction between two forms of the flaw of averages. In its  first avatar, the flaw states that  the combined average of two uncertain quantities equals the sum of their individual averages, but the shape of the combined uncertainty can be very different from the sum of the individual shapes (Recall that an uncertain number is a shape, but its average is a number).  Savage calls this the weak form of the flaw of averages. The weak form applies when one deals with uncertain quantities directly.  An example of this is when one adds up probabilistic estimates for two independent project tasks with no lead or lag between them. In this case the average completion time is the sum of the average completion times for the individual tasks, but the shape of the distribution of the combined tasks does not resemble the shape of the individual distributions. The fact that the shape is different is a consequence of the fact that probabilities cannot be “added up” like simple numbers. See the first example in my post on Monte Carlo simulation of project tasks for an illustration of this point.

In contrast, when one deals with functions of uncertain quantities, the combined average of the functions does not equal the sum of the individual averages. This happens because functions “weight” random variables in a non-uniform manner, thereby amplifying certain values of the variable. An example of this is where we have two sequential tasks with an earliest possible start time for the second. The earliest possible start time for the second task introduces a nonlinearity in cases where the first task finishes early (essentially because there is a lag between the finish of the first task and the start of the second in this situation). The constraint causes the average of the combined tasks to be greater than the sum of the individual averages. Savage calls this the strong form of the flaw of averages. It applies whenever one deals with nonlinear functions of uncertain variables. See the second example in my post on Monte Carlo simulation of multiple project tasks for an illustration of this point.
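
Here’s a minimal Python sketch of the strong form using the two-task example above (again my own illustration, not Savage’s; the uniform duration, the day-10 earliest start and the fixed 5-day second task are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 100_000

# Task 1 duration: uncertain, uniformly between 5 and 15 days (illustrative assumption).
finish_1 = rng.uniform(5, 15, trials)
earliest_start_2 = 10   # task 2 cannot start before day 10
duration_2 = 5          # task 2 itself takes a fixed 5 days

# Plan based on averages: task 1 finishes on day 10 "on average", so the plan says day 15.
plan = max(finish_1.mean(), earliest_start_2) + duration_2

# Simulated reality: when task 1 finishes early the project still waits until day 10,
# but late finishes are passed through in full. This nonlinearity biases the average.
actual = (np.maximum(finish_1, earliest_start_2) + duration_2).mean()

print(f"plan based on averages: {plan:.1f} days")           # 15.0
print(f"average of simulated outcomes: {actual:.1f} days")  # ~16.2, later than the plan
```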

Much of the book presents real-life illustrations of the two forms of the flaw in risk assessment, drawn from finance to the film industry and from petroleum to pharmaceutical supply chains. Savage also covers the average-based abuse of statistics in discussions of topical “hot-button” issues such as climate change and health care.

De-jargonising statistics

A layperson-friendly feature of the book is that it explains statistical terms in plain English. As an example, Savage spends an entire chapter demystifying the term correlation using scatter plots. Another term that he explains is the Central Limit Theorem (CLT), which states that the sum of a large number of independent random variables resembles the Normal (or bell-shaped) distribution. A consequence of the CLT is that one can reduce investment risk by diversifying one’s investments – i.e. making several (small) independent investments rather than a single (large) one – this is essentially mindle #3 discussed earlier.

Decisions, decisions

Towards the middle of the book, Savage makes a foray into decision theory, focusing on the concept of the value of information. Since decisions are (or should be) made on the basis of information, one needs to gather pertinent information prior to making a decision. Now, information gathering costs money (and time, which translates to money). This brings up the question of how much one should spend in collecting information relevant to a particular decision. It turns out that in many cases one can use decision theory to put a dollar value on a particular piece of information. Surprisingly, it also turns out that organisations often over-spend in gathering irrelevant information. Savage spends a few chapters discussing how one can compute the value of information using simple techniques of decision theory. As interesting as this section is, however, I think it is somewhat disconnected from the rest of the book.

Curing the flaw: SIPs, SLURPS and Probability Management

The last part of the book is dedicated to outlining a solution (or as Savage calls it, a cure) to average-based – or flawed – statistical  thinking. The central idea is to use pre-generated libraries of simulation trials for variables of interest. Savage calls such a packaged set of simulation trials a Stochastic Information Packet (SIP). Here’s an example of how it might work in practice:

Most business organisations worry about next year’s sales. Different divisions in the organisation might forecast sales using different techniques. Further, they may use these forecasts as the basis for other calculations (such as profit and expenses, for example). The forecasted numbers cannot be compared with each other because each calculation is based on different simulations or, worse, different probability distributions. The upshot of this is that forecasted sales results can’t be combined or even compared. The problem can be avoided if everyone in the organisation uses the same SIP for forecasted sales. The results of calculations can then be compared, and even combined, because they are based on the same simulation.

Calculations that are based on the same SIP (or set of SIPs) form a set of simulations that can be combined and manipulated using arithmetic operations. Savage calls such sets of simulations Scenario Library Units with Relationships Preserved (or SLURPS). The name reflects the fact that each of the calculations is based on the same set of sales scenarios (or results of simulation trials). Regarding the terminology: I’m not a fan of laboured acronyms, but concede that they can serve as good mnemonics.
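
To make the idea concrete, here’s a toy Python sketch of a SIP and a SLURP (my own illustration, not from the book; the sales distribution and the revenue and cost formulas are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)

# A hypothetical SIP for next year's sales: a single shared array of simulation trials,
# generated once and distributed to everyone in the organisation (units: $m, illustrative).
sales_sip = rng.normal(loc=100.0, scale=20.0, size=10_000)

# Two divisions do different calculations, but both apply them trial-by-trial to the SAME SIP.
revenue = 1.1 * sales_sip           # e.g. sales plus a hypothetical service uplift
costs = 60.0 + 0.3 * sales_sip      # e.g. fixed costs plus a variable component

# Because the results stay aligned trial-by-trial (a SLURP), they can be combined directly
# and the relationship between sales, revenue and costs is preserved in the combination.
profit = revenue - costs
print(f"mean profit: {profit.mean():.1f}, 5th percentile: {np.percentile(profit, 5):.1f}")
```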

The proposed approach ensures that the results of the combined calculations will avoid the flaw of averages, and exhibit the correct statistical behaviour. However, it assumes that there is an organisation-wide authority responsible for generating and maintaining appropriate SIPs. This authority – the probability manager – will be responsible for a “database” of SIPs that covers all uncertain quantities of interest to the business, and for making these available to everyone in the organisation who needs to use them. To quote from the book, probability management involves:

…a data management system in which the entities being managed are not numbers, but uncertainties, that is, probability distributions. The central database is a Scenario Library containing thousands of potential future values of uncertain business parameters. The library exchanges information with desktop distribution processors that do for probability distributions what word processors did for words and what spreadsheets did for numbers.

Savage sees probability management as a key step towards managing uncertainty and risk in a coherent manner across organisations. He mentions that some organizations (Shell and Merck, for instance) have already started down this route. The book can thus also be seen as a manifesto for the new discipline of probability management.

Conclusion

I have come across the flaw of averages in various walks of organizational life ranging from project scheduling to operational risk analysis. Most often, the folks responsible for analysing uncertainty are aware of the flaw, and have the requisite knowledge of statistics to deal with it. However, such analyses can be hard to explain to those who lack this knowledge.  Hence managers who demand a single number. Yes, such attitudes betray a lack of understanding of what uncertain numbers are and how they can be combined, but that’s the way it is in most organizations. The book is directed largely to that audience.

To sum up:  the book is an entertaining and informative read on some common misunderstandings of statistics. Along the way  the author translates many statistical principles and terms from “jargonese” to plain English. The book deserves to  be read widely, especially by those who need it the most: managers and other decision-makers who need to understand the arithmetic of uncertainty.

Written by K

May 4, 2010 at 11:06 pm

The failure of risk management: a book review


Introduction

Any future-directed activity has a degree of uncertainty, and uncertainty implies risk. Bad stuff happens – anticipated events don’t unfold as planned and unanticipated events occur.  The main function of risk management is to deal with this negative aspect of uncertainty.  The events of the last few years suggest that risk management as practiced in many organisations isn’t working.  A book by Douglas Hubbard entitled, The Failure of Risk Management – Why it’s Broken and How to Fix It, discusses why many commonly used risk management practices are flawed and what needs to be done to fix them. This post is a summary and review of the book.

Interestingly, Hubbard began writing the book well before the financial crisis of 2008 began to unfold.  So although he discusses matters pertaining to risk management in finance, the book has a much broader scope. For instance, it will be of interest to project and  program/portfolio management professionals because many of the flawed risk management practices that Hubbard mentions are often used in project risk management.

The book is divided into three parts: the first part introduces the crisis in risk management; the second deals with why some popular risk management practices are flawed; the third discusses what needs to be done to fix these.  My review covers the main points of each section in roughly the same order as they appear in the book.

The crisis in risk management

There are several risk management methodologies and techniques in use; a quick search will reveal some of them. Hubbard begins his book by asking the following simple questions about these:

  1. Do these risk management methods work?
  2. Would any organisation that uses these techniques know if they didn’t work?
  3. What would be the consequences if they didn’t work?

His contention is that for most organisations the answers to the first two questions are negative. To answer the third question, he gives the example of the crash of United Flight 232 in 1989. The crash was attributed to the simultaneous failure of three independent (and redundant) hydraulic systems. This happened because the systems were located at the rear of the plane and debris from a damaged turbine cut the lines to all of them. This is an example of common mode failure – a single event causing multiple systems to fail. The probability of such an event occurring was estimated to be less than one in a billion. However, the reason the turbine broke up was that it hadn’t been inspected properly (i.e. human error). The probability estimate hadn’t considered human oversight, which is far more likely than one in a billion. Hubbard uses this example to make the point that a weak risk management methodology can have huge consequences.
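
To see why the independence assumption matters so much here, consider the following back-of-the-envelope arithmetic (my own illustration; the probabilities are made-up round numbers, not the actual figures from the investigation):

```python
# Illustrative arithmetic only: the numbers below are made-up round figures, not the
# actual probabilities from the accident investigation.
p_single = 1e-3          # chance that any one hydraulic system fails on a given flight
p_common_cause = 1e-5    # chance of a single event (e.g. debris) taking out all three

# If the three systems really were independent, simultaneous failure is vanishingly rare:
p_independent = p_single ** 3            # 1e-9, a "one in a billion" style estimate

# With a common cause in play, the combined probability is dominated by that single event:
p_combined = p_independent + p_common_cause   # approximation, ignoring overlap
print(p_independent, p_combined)              # 1e-09 vs ~1e-05
```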

Following a very brief history of risk management from historical times to the present, Hubbard presents a list of common methods of risk management. These are:

  1. Expert intuition – essentially based on “gut feeling”
  2. Expert audit – based on expert intuition of independent consultants.  Typically involves the development  of checklists and also uses stratification methods (see next point)
  3. Simple stratification methods – risk matrices are the canonical example of stratification methods.
  4. Weighted scores – assigned scores for different criteria (scores usually assigned by expert intuition), followed by weighting based on perceived importance of each criterion.
  5. Non-probabilistic financial analysis – techniques such as computing the financial consequences of best and worst case scenarios.
  6. Calculus of preferences – structured decision analysis techniques such as multi-attribute utility theory and analytic hierarchy process. These techniques are based on expert judgements. However, in cases where multiple judgements are involved these techniques ensure that the judgements are logically consistent  (i.e. do not contradict the principles of logic).
  7. Probabilistic models – involves building probabilistic models of risk events.  Probabilities can be based on historical data, empirical observation or even intuition.  The book essentially builds a case for evaluating risks using probabilistic models, and provides advice on how these should be built

The book also discusses the state of risk management practice (at the end of 2008) as assessed by surveys carried out by The Economist, Protiviti and Aon Corporation. Hubbard notes that the surveys are based largely on self-assessments of risk management effectiveness. One cannot place much confidence in these because self-assessments of risk are subject to well known psychological effects such as cognitive biases (tendencies to base judgements on flawed perceptions) and the Dunning-Kruger effect (overconfidence in one’s abilities). The acid test for any assessment is whether or not it uses sound quantitative measures. Many of the firms surveyed fail on this count: they do not quantify risks as well as they claim they do. Assigning weighted scores to qualitative judgements does not count as a sound quantitative technique – more on this later.

So, what are some good ways of measuring the effectiveness of risk management? Hubbard lists the following:

  1. Statistics based on large samples – the use of this depends on the availability of historical or other data that is similar to the situation at hand.
  2. Direct evidence – this is where the risk management technique actually finds some problem that would not have been found otherwise. For example, an audit that unearths dubious financial practices
  3. Component testing – even if one isn’t able to test the method end-to-end, it may be possible to test specific components that make up the method. For example, if the method uses computer simulations, it may be possible to validate the simulations by applying them to known situations.
  4. Check of completeness – organisations need to ensure that their risk management methods cover the entire spectrum of risks, else there’s a danger that mitigating one risk may increase the probability of another.  Further, as Hubbard states, “A risk that’s not even on the radar cannot be managed at all.” As far as completeness is concerned, there are four perspectives that need to be taken into account. These are:
    1. Internal completeness – covering all parts of the organisation
    2. External completeness – covering all external entities that the organisation interacts with.
    3. Historical completeness – this involves covering worst case scenarios and historical data.
    4. Combinatorial completeness – this involves considering combinations of events that may occur together, including those that may lead to the common-mode failures discussed earlier.

Finally, Hubbard closes the first section with the observation that it is better not to use any formal methodology than to use one that is flawed. Why? Because a flawed methodology can lead to an incorrect decision being made  with high confidence.

Why it’s broken

Hubbard begins this section by identifying the four major players in the risk management game. These are:

  1. Actuaries:  These are perhaps the first modern professional risk managers.  They use quantitative methods to manage risks in the insurance and pension industry.  Although the methods actuaries use are generally sound, the profession is slow to pick up new techniques. Further, many investment decisions that insurance companies make do not come under the purview of actuaries. So, actuaries typically do not cover the entire spectrum of organizational risks.
  2. Physicists and mathematicians: Many rigorous risk management techniques came out of statistical research done during the second world war. Hubbard therefore calls this group War Quants. One of the notable techniques to come out of this effort is the Monte Carlo Method – originally proposed by Nick Metropolis, John von Neumann and Stanislaw Ulam as a technique to calculate the averaged trajectories of neutrons in fissile material (see this article by Nick Metropolis for a first-person account of how the method was developed). Hubbard believes that Monte Carlo simulations offer a sound, general technique for quantitative risk analysis. Consequently he spends a fair few pages discussing these methods, albeit at a very basic level. More about this later.
  3. Economists:  Risk analysts in investment firms often use quantitative techniques from economics.  Popular techniques include modern portfolio theory and models from options theory (such as the Black-Scholes model). The problem is that these models are often based on questionable assumptions. For example, the Black-Scholes model assumes that the rate of return on a stock is normally distributed (i.e. its value is lognormally distributed) – an assumption that’s demonstrably incorrect, as witnessed by the events of the last few years. Another way in which economics plays a role in risk management is through behavioural studies, in particular the recognition that decisions regarding future events (be they risks or stock prices) are subject to cognitive biases. Hubbard suggests that the role of cognitive biases in risk management has been consistently overlooked. See my post entitled Cognitive biases as meta-risks and its follow-up for more on this point.
  4. Management consultants: In Hubbard’s view, management consultants and standards institutes are largely responsible for many of the ad-hoc approaches to risk management. A particular favourite of these folks is ad-hoc scoring methods that involve ordering risks based on subjective criteria. The scores assigned to risks are thus subject to cognitive bias. Even worse, some of the tools used in scoring can end up ordering risks incorrectly.  Bottom line: many of the risk analysis techniques used by consultants and standards have no justification.

Following the discussion of the main players in the risk arena, Hubbard discusses the confusion associated with the definition of risk. There are a plethora of definitions of risk, most of which originated in academia. Hubbard shows how some of these contradict each other while others are downright non-intuitive and incorrect. In doing so, he clarifies some of the academic and professional terminology around risk. As an example, he takes exception to the notion of risk as a “good thing” – as in the PMI definition, which views risk as  “an uncertain event or condition that, if it occurs, has a positive or negative effect on a project objective.”  This definition contradicts common (dictionary) usage of the term risk (which generally includes only bad stuff).  Hubbard’s opinion on this may raise a few eyebrows (and hackles!) in project management circles, but I reckon he has a point.

In my opinion, the most important sections of the book are chapters 6 and 7, where Hubbard discusses why “expert knowledge and opinions” (favoured by standards and methodologies) are flawed and why a very popular scoring method (risk matrices) is “worse than useless.”  See my posts on the limitations of scoring techniques and Cox’s risk matrix theorem for detailed discussions of these points.

A major problem with expert estimates is overconfidence. To overcome this, Hubbard advocates using calibrated probability assessments to quantify analysts’ abilities to make estimates. Calibration assessments involve getting analysts to answer trivia questions and eliciting confidence intervals for each answer. The confidence intervals are then checked against the proportion of correct answers. Essentially, this assesses experts’ ability to estimate by tracking how often they are right. It has been found that people can improve their ability to make subjective estimates through calibration training – i.e. repeated calibration testing followed by feedback. See this site for more on probability calibration.
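
To make the mechanics concrete, here’s a minimal sketch of how such a calibration test might be scored (my own illustration; the questions, intervals and 90% target are hypothetical):

```python
# Scoring a calibration test: an analyst gives a 90% confidence interval (low, high)
# for each trivia question; we check how many true answers actually fall inside.
# A well-calibrated analyst should capture roughly 90% of them. The data is hypothetical.
answers_and_intervals = [
    # (true value, analyst's lower bound, analyst's upper bound)
    (1969, 1960, 1975),   # year of the first moon landing
    (8848, 7000, 8500),   # height of Mt Everest in metres (missed: true value lies outside)
    (36,   20,   50),     # approximate number of plays attributed to Shakespeare
]

hits = sum(1 for true, low, high in answers_and_intervals if low <= true <= high)
share = hits / len(answers_and_intervals)
print(f"captured {hits} of {len(answers_and_intervals)} ({share:.0%}) in stated 90% intervals")
```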

Next Hubbard tackles several “red herring” arguments that are commonly offered as reasons not to manage risks using rigorous quantitative methods.  Among these are arguments that quantitative risk analysis is impossible because:

  1. Unexpected events cannot be predicted.
  2. Risks cannot be measured accurately.

Hubbard states that the first objection is invalid because although some events (such as spectacular stockmarket crashes) may have been overlooked by models, this doesn’t prove that quantitative risk analysis as a whole is flawed. As he discusses later in the book, many models go wrong by assuming Gaussian probability distributions where fat-tailed ones would be more appropriate. Of course, given limited data it is difficult to figure out which distribution is the right one. So, although Hubbard’s argument is correct, it offers little comfort to the analyst who has to model events before they occur.

As far as the second is concerned, Hubbard has written another book on how just about any business variable (even intangible ones) can be measured. The book makes a persuasive case that most quantities of interest can be measured, but there are difficulties. First, figuring out the factors that affect a variable is not a straightforward task. It depends, among other things, on the availability of reliable data, the analyst’s experience etc. Second, much depends on the judgement of the analyst, and such judgements are subject to bias. Although calibration may help reduce certain biases such as overconfidence, it is by no means a panacea for all biases. Third, risk-related measurements generally involve events that are yet to occur. Consequently, such measurements are based on incomplete information. To make progress one often has to make additional assumptions which may not be justifiable a priori.

Hubbard is a strong advocate for quantitative techniques such as Monte Carlo simulations in managing risks. However,  he believes that they are often used incorrectly.  Specifically:

  1. They are often used without empirical data or validation – i.e. their inputs and results are not tested through observation.
  2. They are generally used piecemeal – i.e. used in some parts of an organisation only, and often to manage low-level, operational risks.
  3. They frequently focus on variables that are not important (because these are easier to measure) rather than those that are important. Hubbard calls this perverse occurrence “measurement inversion”. He contends that analysts often exclude the most important variables because these are considered to be “too uncertain.”
  4. They use inappropriate probability distributions. The Normal distribution (or bell curve) is not always appropriate. For example, see my posts on the inherent uncertainty of project task estimates for an intuitive discussion of the form of the probability distribution for project task durations.
  5. They do not account for correlations between variables. Hubbard contends that many analysts simply ignore correlations between risk variables (i.e. they treat variables as independent when they actually aren’t). This almost always leads to an underestimation of risk because correlations can cause feedback effects and common mode failures.

Hubbard dismisses the argument that rigorous quantitative methods such as Monte Carlo are “too hard.” I agree: the principles behind Monte Carlo techniques aren’t hard to follow – and I take the opportunity to plug my article entitled An introduction to Monte Carlo simulations of project tasks 🙂. As far as practice is concerned, there are several commercially available tools that automate much of the mathematical heavy-lifting. I won’t recommend any, but a search using the key phrase monte carlo simulation tool will reveal many.
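
As a small illustration of point 5 above, the following sketch (my own; the cost distributions and the 0.8 correlation are purely illustrative assumptions) compares the tail risk of two positively correlated cost items with the same items modelled as independent:

```python
import numpy as np

rng = np.random.default_rng(3)
trials = 200_000

# Two uncertain cost items (in $k), positively correlated (rho = 0.8), e.g. both driven
# by the same labour rates. All numbers here are illustrative assumptions.
means, sd, rho = np.array([100.0, 100.0]), 20.0, 0.8
cov = sd**2 * np.array([[1.0, rho], [rho, 1.0]])
correlated = rng.multivariate_normal(means, cov, size=trials).sum(axis=1)

# The same two items modelled (incorrectly) as independent.
independent = rng.normal(100.0, 20.0, size=(trials, 2)).sum(axis=1)

threshold = 250.0  # a large combined overrun
for name, total in [("independent model", independent), ("correlated model", correlated)]:
    print(f"{name}: P(total > {threshold:.0f}) = {np.mean(total > threshold):.2%}")
# Ignoring the positive correlation makes the tail risk look far smaller than it really is.
```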

How to Fix it

The last part of the book outlines Hubbard’s recommendations for improving the practice of risk management. Most of the material presented here draws on the previous section of the book. His main suggestions are to:

  1. Adopt the language, tools and philosophy of uncertain systems. To do this he recommends:
    • Using calibrated probabilities to express uncertainties. Hubbard believes that any person who makes estimates that will be used in models should be calibrated. He offers some suggestions on how people can improve their ability to estimate through calibration – discussed earlier and on this web site.
    • Employing quantitative modelling techniques to model risks. In particular, he advocates the use of Monte Carlo methods to model risks. He also provides a list of commercially available PC-based Monte Carlo tools. Hubbard makes the point that modelling forces analysts to decompose the systems  of interest and understand the relationships between their components (see point 2 below).
    • Developing an understanding of the basic rules of probability, including independent events, conditional probabilities and Bayes’ Theorem. He gives examples of situations in which these rules can help analysts extrapolate from the information they have (see the sketch after this list).

    To this, I would also add that it is important to understand the idea that an estimate isn’t a number, but a  probability distribution – i.e. a range of numbers, each with a probability attached to it.

  2. Build, validate and test models using reality as the ultimate arbiter. Models should be built iteratively, testing each assumption against observation. Further, models need to incorporate mechanisms (i.e. how and why the observations are what they are), not just raw observations. This is often hard to do, but at the very least models should incorporate correlations between variables.  Note that correlations are often (but not always!) indicative of an underlying mechanism. See this post for an introductory example of Monte Carlo simulation involving correlated variables.
  3. Lobby for risk management to be given appropriate visibility in organisations.
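
As a minimal worked example of the kind of Bayesian reasoning alluded to in point 1 (my own illustration; all numbers are hypothetical), consider updating the probability that a project is in trouble after it misses a milestone:

```python
# A worked example of Bayes' Theorem with hypothetical numbers: updating the chance
# that a project is in serious trouble after it misses a milestone.
p_trouble = 0.10                 # prior: 10% of projects are in serious trouble
p_miss_given_trouble = 0.80      # troubled projects usually miss milestones
p_miss_given_ok = 0.20           # healthy projects sometimes miss them too

# Total probability of observing a missed milestone.
p_miss = p_miss_given_trouble * p_trouble + p_miss_given_ok * (1 - p_trouble)

# Bayes' Theorem: P(trouble | miss) = P(miss | trouble) * P(trouble) / P(miss)
p_trouble_given_miss = p_miss_given_trouble * p_trouble / p_miss
print(f"P(trouble | missed milestone) = {p_trouble_given_miss:.2f}")  # ~0.31
```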

In the penultimate chapter of the book, Hubbard fleshes out the characteristics or traits of good risk analysts. As he mentions several times in the book, risk analysis is an empirical science – it arises from experience. So, although the analytical and mathematical (modelling) aspects of risk are important, a good analyst must, above all, be an empiricist – i.e. believe that knowledge about risks can only come from observation of reality. In particular, testing models by seeing how well they match historical data and tracking model predictions are absolutely critical aspects of a risk analyst’s job. Unfortunately, many analysts do not measure the performance of their risk models. Hubbard offers some excellent suggestions on how analysts can refine and improve their models via observation.

Finally, Hubbard emphasises the importance of creating an organisation-wide approach to managing risks. This ensures that organisations will tackle the most important risks first, and that their risk management budgets will be spent in the most effective way. Many of the tools and approaches that he suggests in the book are most effective if they are used in a consistent way across the entire organisation. In reality, though, risk management languishes way down in the priorities of senior executives. Even those who profess to understand the importance of managing risks in a rigorous way rarely offer risk managers the organisational visibility and support they need to do their jobs.

Conclusion

Whew, that was quite a bit to go through, but for me it was worth it. Hubbard’s views impelled me to take a closer look at the foundations of project risk management and I learnt a great deal from doing so. Regular readers of this blog will have noticed that I have referenced the book (and some of the references therein) in a few of my articles on risk analysis.

I should add that I’ve never felt entirely comfortable with the risk management approaches advocated by project management methodologies. Hubbard’s book articulates these shortcomings and offers solutions to fix them. Moreover, he does so in a way that is entertaining and accessible. If there is a gap, it is that he does not delve into the details of model building, but then his other book deals with this in some detail.

To summarise:  the book is a must read for anyone interested in risk management. It is  especially recommended for project professionals who manage risks using methods that  are advocated by project management standards and methodologies.

Written by K

February 11, 2010 at 10:11 pm