Eight to Late

Sensemaking and Analytics for Organizations

The what and whence of issue-based information systems

Over the last few months I’ve written a number of posts on IBIS (short for Issue Based Information System), an argument visualisation technique invented in the early 1970s by Horst Rittel and Werner Kunz. IBIS is best known for its use in dialogue mapping – a collaborative approach to tackling wicked problems – but it has a range of other applications as well (capturing project knowledge is a good example). All my prior posts on IBIS focused on its use in specific applications. Hence the present piece, in which I discuss the “what” and “whence” of IBIS: its practical aspects – notation, grammar etc. – along with its origins, advantages and limitations.

I’ll begin with a brief introduction to the technique (in its present form) and then move on to its origins and other aspects.

A brief introduction to IBIS

IBIS  consists of three main elements:

  1. Issues (or questions): these are the questions or problems that need to be addressed.
  2. Positions (or ideas): these are responses to questions. Typically the set of ideas that respond to an issue represents the spectrum of perspectives on the issue.
  3. Arguments: these can be pros (arguments for) or cons (arguments against) an idea. The complete set of arguments that respond to an idea represents the multiplicity of viewpoints on it.

The best IBIS mapping tool is Compendium – it can be downloaded here.  In Compendium, the IBIS elements described above are represented as nodes as shown in Figure 1: issues are represented by green question nodes; positions by yellow light bulbs; pros by green + signs and cons by red – signs.  Compendium supports a few other node types,  but these are not part of the core IBIS notation. Nodes can be linked only in ways specified by the IBIS grammar as I discuss next.

Figure 1: IBIS Elements

The IBIS grammar can be summarized in a few simple rules:

  1. Issues can be raised anew or can arise from other issues, positions or arguments. In other words, any IBIS element can be questioned.  In Compendium notation:  a question node can connect to any other IBIS node.
  2. Ideas can only respond to questions – i.e.  in Compendium “light bulb” nodes  can only link to question nodes. The arrow pointing from the idea to the question depicts the “responds to” relationship.
  3. Arguments can only be associated with ideas – i.e. in Compendium + and – nodes can only link to “light bulb” nodes (with arrows pointing to the latter).

The legal links are summarized in Figure 2 below.

Figure 2: Legal Links in IBIS
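The grammar is simple enough to express as a lookup table. The following sketch (in Python; the node-type names are my own labels, not Compendium identifiers) encodes the legal links described above:

```python
# IBIS link legality as a lookup table. An entry maps a source node type
# to the set of node types its arrow may point to. Type names here are
# illustrative labels, not Compendium's internal API.

LEGAL_LINKS = {
    "issue":    {"issue", "position", "pro", "con"},  # a question can question any element
    "position": {"issue"},                            # an idea responds only to a question
    "pro":      {"position"},                         # arguments attach only to ideas
    "con":      {"position"},
}

def is_legal_link(source: str, target: str) -> bool:
    """Return True if an arrow from a source node to a target node is allowed."""
    return target in LEGAL_LINKS.get(source, set())
```

For instance, `is_legal_link("pro", "issue")` returns False: an argument cannot be attached directly to a question; it must respond to an idea.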

The rules are best illustrated by example – follow the links below to see some illustrations of IBIS in action:

  1. See this post for a simple example of dialogue mapping.
  2. See this post or this one for examples of argument visualisation .
  3. See this post for the use of IBIS in capturing project knowledge.

Now that we know how IBIS works and have seen a few examples of it in action, it’s time to trace its history from its origins to the present day.

Wicked origins

A good place to start is where it all started. IBIS was first described in a paper entitled Issues as Elements of Information Systems, written by Horst Rittel (who coined the term “wicked problem”) and Werner Kunz in July 1970. They state the intent behind IBIS in the very first line of the abstract of their paper:

Issue-Based Information Systems (IBIS) are meant to support coordination and planning of political decision processes. IBIS guides the identification, structuring, and settling of issues raised by problem-solving groups, and provides information pertinent to the discourse.

Rittel’s preoccupation was the area of public policy and planning – which is also the context in which he originally defined wicked problems. He defined the term in his landmark 1973 paper entitled Dilemmas in a General Theory of Planning. A footnote to that paper states that it is based on an article he presented at an AAAS meeting in 1969. So it is clear that he had already formulated his ideas on wickedness when he wrote his paper on IBIS in 1970.

Given the above background it is no surprise that Rittel and Kunz foresaw IBIS to be the:

…type of information system meant to support the work of cooperatives like governmental or administrative agencies or committees, planning groups, etc., that are confronted with a problem complex in order to arrive at a plan for decision…

The problems tackled by such  cooperatives are paradigm-defining examples of wicked problems. From the start, then, IBIS was intended as a tool to facilitate a collaborative approach to solving such problems.

Operation of early systems

When Rittel and Kunz wrote their paper, there were three IBIS-type systems in operation: two in governmental agencies (in the US, one presumes) and one in a university environment (possibly Berkeley, where Rittel worked). Although it seems quaint and old-fashioned now, it is no surprise that they were all manual, paper-based systems – the effort and expense involved in computerizing such systems in the early 70s would have been prohibitive, and the pay-off questionable.

The paper also offers a short description of how these early IBIS systems operated:

An initially unstructured problem area or topic denotes the task named by a “trigger phrase” (“Urban Renewal in Baltimore,” “The War,” “Tax Reform”). About this topic and its subtopics a discourse develops. Issues are brought up and disputed because different positions (Rittel’s word for ideas or responses) are assumed. Arguments are constructed in defense of or against the different positions until the issue is settled by convincing the opponents or decided by a formal decision procedure. Frequently questions of fact are directed to experts or fed into a documentation system. Answers obtained can be questioned and turned into issues. Through this counterplay of questioning and arguing, the participants form and exert their judgments incessantly, developing more structured pictures of the problem and its solutions. It is not possible to separate “understanding the problem” as a phase from “information” or “solution” since every formulation of the problem is also a statement about a potential solution.

Even today, forty years later, this is an excellent description of how IBIS is used to facilitate a common understanding of complex (or wicked) problems. The paper contains an overview of the structure and operation of manual IBIS-type systems. However, I’ll omit these because they are of little relevance in the present-day world.

As an aside, there’s a term that’s conspicuous by its absence in the Rittel-Kunz paper: design rationale. Rittel must have been aware of the utility of IBIS in capturing design rationale: he was a professor of design science at Berkeley and design reasoning was one of his main interests. So it is somewhat odd that he does not mention this term even once in his IBIS paper.

Fast forward a couple of decades (and more!)

In a paper published in 1988 entitled, gIBIS: A hypertext tool for exploratory policy discussion, Conklin and Begeman describe a prototype of a graphical, hypertext-based  IBIS-type system (called gIBIS) and its use in capturing design rationale (yes, despite the title of the paper, it is more about capturing design rationale than policy discussions). The development of  gIBIS represents a key step between the original Rittel-Kunz version of IBIS and its  present-day version as implemented  in Compendium.  Amongst other things, IBIS was finally off paper and on to disk, opening up a new world of possibilities.

gIBIS aimed to offer users:

  1. The ability to capture design rationale – the options discussed (including the ones rejected) and the discussion around the pros and cons of each.
  2. A platform for promoting computer-mediated collaborative design work  – ideally in situations where participants were located at sites remote from each other.
  3. The ability to store a large amount of information and to be able to navigate through it in an intuitive way.

Before moving on, one point needs to be emphasized: gIBIS was intended to be used in collaborative settings; to help groups achieve a shared understanding of central issues, by mapping out dialogues in real time. In present-day terms – one could say that it was intended as a tool for sense making.

The gIBIS prototype proved successful enough to catalyse the development of Questmap, a commercially available software tool that supported IBIS. However, although there were some notable early successes in the real-time use of IBIS in industry environments (see this paper, for example), these were not accompanied by widespread adoption of the technique. Other graphical, IBIS-like methods to capture design rationale were proposed (an example is Questions, Options and Criteria (QOC), proposed by MacLean et al. in 1991), but these too met with a general reluctance in adoption.

Making sense through IBIS

The reasons for the lack of traction of IBIS-type techniques in industry are discussed in an excellent paper by Shum et al. entitled Hypermedia Support for Argumentation-Based Rationale: 15 Years on from gIBIS and QOC. The reasons they give are:

  1. For acceptance, any system must offer immediate value to the person who is using it. Quoting from the paper, “No designer can be expected to altruistically enter quality design rationale solely for the possible benefit of a possibly unknown person at an unknown point in the future for an unknown task. There must be immediate value.” Such immediate value is not obvious to novice users of IBIS-type systems.
  2. There is some effort involved in gaining fluency in the use of IBIS-based software tools. It is only after this that users can gain an appreciation of the value of such tools in overcoming the limitations of mapping design arguments on paper, whiteboards etc.

The intellectual effort – or cognitive overhead, as it is called in academese – in using IBIS in real time involves:

  1. Teasing out issues, ideas and arguments from the dialogue.
  2. Classifying points raised into issues, ideas and arguments.
  3. Naming (or describing) the point succinctly.
  4. Relating (or linking) the point to an existing node.

This is a fair bit of work, so it is no surprise that beginners might find it hard to use IBIS to map dialogues. However, once learnt, a skilled practitioner can add value to design (and more generally, sense making) discussions in several ways including:

  1. Keeping the map (and discussion) coherent and focused on pertinent issues.
  2. Ensuring that all participants are engaged in contributing to the map (and hence the discussion).
  3. Facilitating useful maps (and dialogues) – usefulness being measured by the extent to which the objectives of the session are achieved.

See this paper by Selvin and Shum for more on these criteria. Incidentally, these criteria are a qualitative measure of how well a group achieves a shared understanding of the problem under discussion.  Clearly, there is a good deal of effort involved in learning and becoming proficient at using IBIS-type systems, but the payoff is an ability to facilitate  a shared understanding of wicked problems – whether in public planning or in technical design.

Why IBIS is better than conventional modes of documentation

IBIS has several advantages over conventional documentation systems. Rittel and Kunz’s 1970  paper contains a nice summary of the advantages, which I paraphrase below:

  1. IBIS can bridge the gap between discussions and records of discussions (minutes, audio/video transcriptions etc.). IBIS sits between the two, acting as a short term memory. The paper thus foreshadows the use of issue-based systems as an aid to organizational or project memory.
  2. Many elements (issues, ideas or arguments) that come up in a discussion have contextual meanings that differ from any pre-existing definitions. In discussions, contextual meaning counts for more than formal meaning, and IBIS captures the former very clearly – for example, a response to the question “What do we mean by X?” elicits the meaning of X in the context of the discussion, which is then captured as an idea (position).
  3. Related to the above, the commonality of an issue with other, similar issues might be more important than its precise meaning. To quote from the paper, “…the description of the subject matter in terms of librarians or documentalists (sic) may be less significant than the similarity of an issue with issues dealt with previously and the information used in their treatment…”  With search technologies available, this is less of an issue now. However, search technologies are still limited in terms of finding matches between “similar” items (How is “similar” defined? Ans: it depends on context). A properly structured, context-searchable IBIS-based project archive may still be more useful than a conventional document archive based on a document management system.
  4. The reasoning used in discussions is made transparent, as is the supporting (or opposing) evidence. (see my post on visualizing argumentation for example)
  5. The state of the argument (discussion) at any time can be inferred at a glance (unlike the case in written records). See this post for more on the advantages of visual documentation over prose.

Issues with issue-based information systems

Lest I leave readers with the impression that IBIS is a panacea, I should emphasise that it isn’t. According to Conklin, IBIS maps have the following limitations:

  1. They atomize streams of thought into unnaturally small chunks of information thereby breaking up any smooth rhetorical flow that creates larger, more meaningful chunks of narrative.
  2. They disperse rhetorically connected chunks throughout a large structure.
  3. They are not chronological in structure (the chronological sequence is normally factored out).
  4. Contributions are not attributed (who said what is normally factored out).
  5. They do not convey the maturity of the map – one cannot distinguish, from the map alone, whether one map is more “sound” than another.
  6. They do not offer a systematic way to decide if two questions are the same, or how the maps of two related questions relate.

Some of these issues (points 3, 4) can be addressed by annotating nodes;  others are not so easy to solve.

Concluding remarks

My aim in this post has been to introduce readers to the IBIS notation, and also discuss its origins, development and limitations.  On one hand, a knowledge of the origins and development  is valuable because it  gives  insight into the rationale behind the technique, which leads to a better understanding of the different ways in which it can be used. On the other, it is also important to know a technique’s limitations,  if for no other reason than to be aware of these so that one can work around them.

Before signing off, I’d like to mention an observation from my experience with IBIS. The real surprise for me has been that the technique can capture most written arguments and discussions,  despite having only three distinct elements and a very simple grammar. Yes, it does require some thought to do this, particularly when mapping discussions in real time. However,  this cognitive “overhead”  is good because  it forces the mapper to think  about what’s being said  instead of just writing it down blind. Thoughtful transcription is the aim of the game. When done right, this results in a map that truly reflects a  shared understanding of the complex  (and possibly wicked) problem under discussion.

There’s no better coda to this post on IBIS than the following quote from  this paper by Conklin:

…Despite concerns over the years that IBIS is too simple and limited on the one hand or too hard to use on the other, there is a growing international community who are fluent enough in IBIS to facilitate and capture highly contentious debates using dialogue mapping, primarily in corporate and educational environments…

For me that’s reason enough to improve my understanding of IBIS and its applications,  and to look for opportunities to use it in ever more challenging situations.

Cox’s risk matrix theorem and its implications for project risk management

Introduction

One of the standard ways of characterising risk on projects is to use matrices which categorise risks by impact and probability of occurrence.  These matrices provide a qualitative risk ranking in categories such as high, medium and low (or colour: red, yellow and green). Such rankings are often used to prioritise and allocate resources to manage risks. There is a widespread belief that the qualitative ranking provided by matrices reflects an underlying quantitative ranking.  In a paper entitled, What’s wrong with risk matrices?, Tony Cox shows that the qualitative risk ranking provided by a risk matrix will agree with the quantitative risk ranking only if the matrix is constructed according to certain general principles. This post is devoted to an exposition of these principles and their consequences.

Since the content of this post may seem overly academic to some of my readers, I think it is worth clarifying why I believe an understanding of Cox’s principles is important for project managers. First, 3×3 and 4×4 risk matrices are widely used in managing project risk.  Typically these matrices are constructed in an intuitive (but arbitrary) manner. Cox shows – using very general assumptions – that there is only one sensible colouring scheme (or form) of these matrices. This conclusion was surprising to me, and I think that many readers may also find it so. Second, and possibly more important, is that the arguments presented in the paper show that it is impossible to maintain perfect congruence between qualitative (matrix) and quantitative rankings. As I discuss later, this is essentially due to the impossibility of representing quantitative rankings accurately on a rectangular grid. Developing an understanding of these points will enable project managers to use risk matrices in a more logically sound manner.

Background and preliminaries

Let’s begin with some terminology that’s well known to most project managers:

Probability: This is the likelihood that a risk will occur. It is quantified as a number between 0 (will definitely not occur) and 1 (will definitely occur).

Impact (termed “consequence” in the paper): This is the severity of the risk should it occur. It can also be quantified as a number between 0 (lowest severity) and 1 (highest severity).

Note that the above scales for probability and impact are arbitrary – other common choices are percentages or a scale of 0 to 10.

Risk:  In many project risk management frameworks, risk is characterised by the formula: Risk = probability x impact.  This formula looks reasonable, but is typically specified a priori, without any justification.

A risk can be plotted on a two dimensional graph depicting impact (on the x-axis) and probability (on the y-axis). This is typically where the problems start: for most risks, neither the probability nor the impact can be accurately quantified. The standard solution is to use a qualitative scale, where instead of numbers one uses descriptive text – for example, the probability, impact and risk can take on one of three values: high, medium and low (as shown in Figure 1 below).  In doing this,  analysts make the implicit assumption that the categorisation provided by the qualitative assessment ranks the risks in correct quantitative order. Problem is, this isn’t true.

Figure 1: A 3x3 Risk Matrix
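In code, this qualitative binning looks something like the sketch below (the equal-width band boundaries are my assumption; real matrices often use calibrated, unequal bands):

```python
# Sketch: assigning qualitative ratings on a 3x3 grid. The equal-width
# bands below are an assumed example, not a prescription from the paper.

def band(value: float) -> str:
    """Map a [0, 1] value to a qualitative category."""
    if value < 1 / 3:
        return "low"
    elif value < 2 / 3:
        return "medium"
    return "high"

def cell(probability: float, impact: float) -> tuple[str, str]:
    """Return the (probability band, impact band) cell a risk falls in."""
    return band(probability), band(impact)
```

For example, a near-certain but minor risk lands in the (“high”, “low”) cell; the quantitative values are then discarded, which is exactly where the trouble described next begins.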

Let’s look at the simple case of two risks A and B ranked on a 2×2 risk matrix shown in Figure 2 below.  Let’s assume that the probability and impact of each of the two risks are independent and uniformly distributed between 0 and 1. Clearly, if the two risks have the same qualitative ranking (high, say), there is no way to rank them correctly unless one has quantitative knowledge of probability and impact – which is usually not the case. In the absence of this information, there’s a 50% chance (all other factors being equal) of ranking them correctly – i.e.  one is effectively “flipping a coin” to choose which one has the higher (or lower) rank. This situation highlights a shortcoming of risk matrices: poor resolution. It is not possible to rank risks that have the same qualitative ranking.

Figure 2: A 2x2 Risk Matrix
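The “coin flip” claim is easy to check by simulation. The sketch below (the [0.5, 1] bounds for the “high” band are an assumed example) draws two risks from the same cell and counts how often a fixed choice of “A is the bigger risk” is correct:

```python
import random

# Monte Carlo check: for two risks whose probabilities and impacts are
# independent and uniform within the same cell, the matrix gives no
# ordering, so always nominating risk A as the larger one is right only
# about half the time.

def fraction_correctly_ranked(trials: int = 100_000, seed: int = 42) -> float:
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        # Both risks fall in the same ("high", "high") cell, say [0.5, 1]^2.
        risk_a = rng.uniform(0.5, 1.0) * rng.uniform(0.5, 1.0)
        risk_b = rng.uniform(0.5, 1.0) * rng.uniform(0.5, 1.0)
        if risk_a > risk_b:  # we always pick A as the larger risk
            correct += 1
    return correct / trials
```

By symmetry the fraction hovers around 0.5, which is the poor-resolution point made above.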

“That’s obvious,” I hear you say – and you’re right. But there’s more: if one of the ratings is medium and the other one is not (i.e. the other one is high or low), then there is a non-zero chance of making an incorrect ranking, because some points in the cell with the higher qualitative rating have a lower quantitative value of risk than some points in the cell with the lower qualitative rating. Look at that statement again: it implies that risk matrices can incorrectly assign higher qualitative rankings to quantitatively smaller risks – i.e. there is the possibility of making ranking errors. This point is seriously counter-intuitive (to me anyway) and merits a proof, which Cox provides and which I discuss below. Before doing so, I should also point out that the discussion in this paragraph assumes that the probabilities and impacts of the two risks are independent and uniformly distributed. Cox points out that the chance of making the wrong ranking can be even higher if probability and impact are correlated. In particular, if the correlation is negative (i.e. probability decreases as impact increases), a random ranking is actually better than that provided by the risk matrix. In this situation the information provided by risk matrices is “worse than useless” (a random choice is better!). Negative correlations between probability and impact are actually quite common – many situations involve a mix of high probability-low impact and low probability-high impact risks. See the paper for more on this.

Weak consistency and its implications

With the issues of poor resolution and ranking errors established, Cox asks the question: What can be salvaged?  The underlying problem is that the joint distribution of probability and impact is unknown. The standard approach to improving the utility of risk matrices is to attempt to characterise this distribution. This can be done using artificial intelligence tools – and Cox provides references to papers that use some of these techniques to characterise distributions. These techniques typically need plentiful data as they attempt to infer characteristics of the joint distribution from data points. Cox, instead, proposes an approach that is based on general properties of risk matrices – i.e. an approach that prescribes a set of rules that ensure consistency. This has the advantage of being general,  and not depending on the availability of data points to characterise the probability distribution.

So what might a consistency criterion look like? Cox suggests that, at the very least, a risk matrix should be able to distinguish reliably between very high and very low risks. He formalises this requirement in his definition of weak consistency, which I quote from the paper:

A risk matrix with more than one “colour” (level of risk priority) for its cells satisfies weak consistency with a quantitative risk interpretation if points in its top risk category (red) represent higher quantitative risks than points in its bottom category (green)

The notion of weak consistency formalises the intuitive expectation that a risk matrix must, at the very least, distinguish  between the lowest and highest (quantitative) risks.  If it can’t, it is indeed “worse than useless”.  Note that weak consistency doesn’t say anything about distinguishing between medium and lowest/highest risks – merely between the lowest and highest.
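Weak consistency can be checked numerically. In the sketch below (equal-width bands on both axes are an assumption), a red cell’s smallest risk sits at its lower-left corner and a green cell’s largest risk at its upper-right corner, since risk = x·y increases in both coordinates; the matrix is weakly consistent when the former exceeds the latter:

```python
# Sketch: numerical weak-consistency check for an n x n risk matrix with
# equal-width bands. grid[row][col]: row = probability band (bottom row
# first), col = impact band (left column first).

def weakly_consistent(grid: list[list[str]]) -> bool:
    """True if every point in a red cell outranks every point in a green cell."""
    n = len(grid)
    # Smallest risk over all red cells: lower-left corner of each red cell.
    min_red = min(((c / n) * (r / n)
                   for r in range(n) for c in range(n)
                   if grid[r][c] == "red"), default=float("inf"))
    # Largest risk over all green cells: upper-right corner of each green cell.
    max_green = max((((c + 1) / n) * ((r + 1) / n)
                     for r in range(n) for c in range(n)
                     if grid[r][c] == "green"), default=float("-inf"))
    return min_red > max_green
```

Running this against candidate colourings makes the lemmas below easy to rediscover: any scheme that puts red next to green, or red in the bottom row or left column, fails the check.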

Having defined weak consistency, Cox derives some of its surprising consequences, which I describe next.

Cox’s First Lemma:  If a risk matrix satisfies weak consistency, then no red cell (highest risk category) can share an edge with a green cell (lowest risk category).

Proof: To see how this is plausible, consider the different ways in which a red cell can adjoin a green one. Basically there are only two ways in which this can happen, as illustrated in Figure 3. Now assume that the quantitative risk at the midpoint of the common edge is a number n (n between 0 and 1). Then if x and y are the impact and probability, we have

xy=n or y=n/x

So, the locus of all points having the same risk (often called the iso-risk contour) as the midpoint is a rectangular hyperbola with negative slope (i.e.  y decreases as x increases). The negative slope (see Figure 3) implies that the points above the iso-risk contour in the green cell have a higher quantitative risk than points below the contour in the red cell. This contradicts weak consistency. Hence – by reductio ad absurdum –  it isn’t possible to have a green cell and a red cell with a common edge.

Figure 3: Figure for Lemma 1
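The contradiction can be made concrete with numbers. The cell geometry below is an assumed example: a green and a red cell sharing a vertical edge at x = 0.5, with the contour drawn through the midpoint of that edge:

```python
# Concrete numbers for Lemma 1 (the cell geometry is an assumed example).
# Green cell: impact x in [0, 0.5], probability y in [0.5, 1.0].
# Red cell:   impact x in [0.5, 1.0], probability y in [0.5, 1.0].
# The shared edge's midpoint is (0.5, 0.75), so n = 0.5 * 0.75 = 0.375.

def risk(impact: float, probability: float) -> float:
    return impact * probability

green_point = risk(0.45, 0.90)  # in the green cell, above the contour y = n/x
red_point = risk(0.60, 0.55)    # in the red cell, below the contour

# green_point (0.405) exceeds red_point (0.33): a point rated green
# outranks a point rated red, contradicting weak consistency.
```

Any adjoining red/green pair yields such a pair of points, because the negatively sloped contour always cuts both cells.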

Cox’s Second Lemma: If a risk matrix satisfies weak consistency and has at least two colours (green in the lower left and red in the upper right, if the axes are oriented to depict increasing probability and impact), then no red cell can occur in the bottom row or left column of the matrix.

Proof: Assume it is possible to have a red cell in the bottom row or left column. Now consider an iso-risk contour for a sufficiently small risk (i.e. a contour that passes through the lower left-most green cell). By the properties of rectangular hyperbolas, this contour must pass through all cells in the bottom row and the left-most column, as shown in Figure 4. Thus, by an argument similar to that of the previous lemma, all points below the iso-risk contour in either of the red cells have a smaller quantitative risk than points above it in the green cell. This violates weak consistency, and hence the assumption is incorrect.

Figure 4: Figure for Lemma 2

An implication that follows directly from the above lemmas is that any risk matrix that satisfies weak consistency must have at least three colours!

Surprised? I certainly was when I first read this.

Between-ness and its implications

If a risk matrix provides a qualitative representation of the actual quantitative risks, then small changes in the probability or impact should not cause discontinuous jumps in risk categorisation from the lowest to the highest category without going through the intermediate category. (Recall, from the previous section, that a weakly consistent matrix must have at least three colours.)

This expectation is formalised in the axiom of between-ness:

A risk matrix satisfies the axiom of between-ness if every positively sloped line segment that lies in a green cell at its lower end and a red cell at its upper end must pass through at least one intermediate cell (i.e. one that is neither red nor green).

By definition, no 2×2 matrix can satisfy between-ness. Further, amongst 3×3 matrices, only one colour scheme satisfies both weak consistency and between-ness. This is the matrix shown in Figure 1: green in the leftmost column and bottom row, red in the upper right-most cell and yellow in all other cells. This, to me, is a truly amazing consequence of a couple of simple, intuitive axioms.

Consistent colouring and its implications

The basic idea behind consistent colouring is that risks that have the identical quantitative values should have the same qualitative ratings. This is impossible to achieve in a discrete risk matrix because iso-risk contours cannot coincide with cell boundaries (Why? Because  iso-risk contours have negative slopes whereas cell boundaries have zero or infinite slope  – i.e. they are horizontal or vertical lines).  So, Cox suggests the following: enforce consistent colouring for extreme categories only – red and green – allowing violations for intermediate categories.  What this means is that cells that contain iso-risk contours which pass through other red cells (“red contours”) must be red and cells that contain iso-risk contours which pass through other green cells (“green contours”) must be green. Hence the following definition of consistent colouring:

  1. A cell is red if it contains points with quantitative risks at least as high as those in other red cells, and does not contain points with quantitative risks as small as those on any green cell.
  2. A cell is green if it contains points with risks at least as small as those in other green cells, and does not contain points with quantitative risks as high as those in any red cell.
  3. A cell has an intermediate colour only if it a) lies between a red cell and a green cell or b) it contains points with quantitative risks higher than those in some red cells and also points with quantitative risks lower than those in some green cells.

An iso-risk contour is green if it passes through one or more green cells but no red cells and a red contour is one which passes through one or more red cells but no green cells. Consistent colouring then implies that cells with red contours and no green contours are red; and cells with green contours and no red contours are green (and, obviously, cells with contours of both colours are intermediate)

Implications of the three axioms – Cox’s Risk Matrix Theorem

So, after a longish journey, we have three axioms: weak consistency, between-ness and consistent colouring. With that done, Cox rolls out his theorem – which I dub Cox’s Risk Matrix Theorem (not to be confused with Cox’s Theorem from statistics!). It can be stated as follows:

In a risk matrix satisfying weak consistency, between-ness and consistent colouring:

a)      All cells in the leftmost column and in the bottom row are green.

b)      All cells in the second column from the left and the second row from the bottom are non-red.

The proof is a bit long, so I’ll omit it, making a couple of plausibility arguments instead:

  1. The lower leftmost cell is green (by definition), and consistent colouring implies that all contours that lie below the one passing through the upper right corner of this cell must also be green because a) they pass through the lower leftmost cell which is green and b) none of the other cells they pass through are red (by Cox’s second lemma). The other cells on the lowest or leftmost edge of the matrix can only be intermediate or green. That they cannot be intermediate is a consequence of  between-ness.
  2. That the second row and second column must be non-red is also easy to see: assume any of these cells to be red. We then have a red cell adjoining a green cell, which violates Cox’s first lemma.

I’ll leave it at that, referring the interested reader to the paper for a complete proof.

Cox’s theorem has an immediate corollary which is particularly interesting for project managers who use 3×3 and 4×4 risk matrices:

A tricoloured 3×3 or 4×4 matrix that satisfies weak consistency, between-ness and consistent colouring can have only the following (single!) colour scheme:

a)      Leftmost column and bottom row coloured green.

b)      Top right cell (for 3×3) or four top right cells (for 4×4) coloured red.

c)      All other cells coloured yellow.

Proof: Cox’s theorem implies that the leftmost column and bottom row are green. The top right cell must be red (since it is a tricoloured matrix). Consistent colouring implies that the two cells adjoining this cell (in a 4×4 matrix) and the one diagonally adjacent must also be red (this cannot be so for a 3×3 matrix because these cells would adjoin a green cell, which violates Cox’s first lemma). All other cells must be yellow by between-ness.

This result is quite amazing. From three very intuitive axioms Cox derives essentially the only possible colouring scheme for 3×3 and 4×4 risk matrices.
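The corollary lends itself to a quick sanity check. The sketch below is my own minimal illustration (not from Cox’s paper): it encodes the unique colourings for the 3×3 and 4×4 cases and verifies the necessary conditions discussed above – green leftmost column and bottom row, non-red second column and second row, and no red cell adjoining a green one.

```python
# Sketch: the unique colourings implied by Cox's corollary for 3x3 and
# 4x4 risk matrices. Row 0 is the bottom row; column 0 is the leftmost.
G, Y, R = "green", "yellow", "red"

def unique_colouring(n):
    """Return the (unique) n x n colouring from Cox's corollary (n = 3 or 4)."""
    m = [[Y] * n for _ in range(n)]
    for i in range(n):
        m[0][i] = G          # bottom row green
        m[i][0] = G          # leftmost column green
    m[n - 1][n - 1] = R      # top right cell red
    if n == 4:               # in a 4x4 matrix the three cells around it are also red
        m[3][2] = m[2][3] = m[2][2] = R
    return m

def check_conditions(m):
    """Check the necessary conditions discussed in the post."""
    n = len(m)
    # (a) leftmost column and bottom row are green
    assert all(m[0][j] == G for j in range(n))
    assert all(m[i][0] == G for i in range(n))
    # (b) second column and second row (from the edges) are non-red
    assert all(m[1][j] != R for j in range(n))
    assert all(m[i][1] != R for i in range(n))
    # between-ness plausibility: no red cell edge-adjacent to a green cell
    for i in range(n):
        for j in range(n):
            if m[i][j] == R:
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < n and 0 <= nj < n:
                        assert m[ni][nj] != G
    return True

for n in (3, 4):
    assert check_conditions(unique_colouring(n))
```

Running the checks confirms that the single colouring scheme stated in the corollary is consistent with the conditions derived from Cox’s axioms.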

Conclusion

This brings me to the end of this post on Cox’s axiomatic approach to building logically consistent risk matrices. I highly recommend reading the original paper for more. Although it presents some fairly involved arguments, it is very well written: the arguments are presented with clarity and logical surefootedness, and the assumptions underlying each one are clearly laid out. The three principles (or axioms) proposed are intuitively appealing – even obvious – but their consequences are quite unexpected (witness the unique colouring scheme for 3×3 and 4×4 matrices). Further, the arguments leading up to the lemmas and theorems bring up points that are worth bearing in mind when using risk matrices in practical situations.

In closing I should mention that the paper also discusses some other limitations of risk matrices that flow from these principles: in particular, spurious risk resolution and inappropriate resource allocation based on qualitative risk categorisation.   For reasons of space, and the very high likelihood that I’ve already tested my readers’ patience to near (if not beyond) breaking point,  I’ll defer a discussion of these to a future post.

Note added on 20 December, 2009:

See this post for a visual representation of the above discussion of Cox’s risk matrix theorem and the comments that follow.

Written by K

July 1, 2009 at 10:05 pm

Visualising arguments using issue maps – an example and some general comments

with 16 comments

The aim of an opinion piece writer is to convince his or her readers that a particular idea or point of view is reasonable or right. Typically, such pieces weave facts, interpretations and reasoning into prose, from which it can be hard to pick out the essential thread of argumentation. In an earlier post I showed how an issue map can help in clarifying the central arguments in a “difficult” piece of writing by mapping out Fred Brooks’ classic article No Silver Bullet. Note that I use the word “difficult” only because the article has, at times, been misunderstood and misquoted; not because it is particularly hard to follow. Still, Brooks’ article borders on the academic; the arguments presented therein are of interest to a relatively small group of people within the software development community. Most developers and architects aren’t terribly interested in the essential difficulties of the profession – they just want to get on with their jobs. In the present post, I develop an issue map of a piece that is of potentially wider interest to the IT community – Nicholas Carr’s 2003 article, IT Doesn’t Matter.

The main point of Carr’s article is that IT is becoming a utility,  much like electricity, water or rail. As this trend towards commoditisation gains momentum, the strategic advantage offered by in-house IT will diminish, and organisations will be better served by buying IT services from “computing utility” providers than by maintaining their own IT shops.  Although Carr makes a persuasive case, he glosses over a key difference between IT and other utilities (see this post for more). Despite this, many business and IT leaders have taken his words as the way things will be. It is therefore important for all IT professionals to understand Carr’s arguments. The consequences are likely to affect them some time soon, if they haven’t already.

Some preliminaries before proceeding with the map. First, the complete article is available here – you may want to have a read of it before proceeding (but this isn’t essential). Second, the discussion assumes a basic knowledge of  IBIS (Issue-Based Information System) –  see  this post for a quick tutorial on IBIS.  Third, the map is constructed using the open-source tool Compendium which can be downloaded here.

With the preliminaries out of the way, let’s get on with issue mapping Carr’s article.

So, what’s the root  (i.e. central) question that Carr poses in the article?  The title of the piece is  “IT Doesn’t Matter” – so one possible root question is, “Why doesn’t IT matter?” But there are other candidates:   “On what basis is IT an infrastructural technology?” or  “Why is the strategic value of IT diminishing?” for example. From this it should be clear that there’s a fair degree of subjectivity at every step of constructing an issue map. The visual representation that I construct here is but one interpretation of Carr’s argument.

Out of the above (and many other possibilities), I choose “Why doesn’t IT matter?” as the root question. Why? Well, in my view the whole point of the piece is to convince the reader that IT doesn’t matter because it is an infrastructural technology and consequently has no strategic significance. This point should become clearer as our development of the issue map progresses.

The ideas that respond to this question aren’t immediately obvious. This isn’t unusual:  as I’ve mentioned elsewhere, points can only be made sequentially – one after the other – when expressed in prose.  In some cases one may have to read a piece in its entirety to figure out the elements that respond to a root (or any other) question.

In the case at hand, the response to the root question stands out clearly after a quick browse through the article. It is:  IT is an infrastructural technology.

The map with the root question and the response is shown in Figure 1.

Figure 1: Issue Map Stage 1

Figure 1: Issue Map Stage 1

Moving on, what arguments does Carr offer for (pros) and against (cons) this idea? A reading of the article reveals one con and four pros. Let’s look at the con first:

  1. IT (which I take to mean software) is complex and malleable, unlike other infrastructural technologies. This point is mentioned, in passing, on the third page of the paper: “Although more complex and malleable than its predecessors, IT has all the hallmarks of an infrastructural technology…”

The arguments supporting the idea that IT is an infrastructural technology are:

  1. The evolution of IT closely mirrors that of other infrastructural technologies such as electricity and rail. Although this point encompasses the other points made below, I think it merits a separate mention because the analogies are quite striking. Carr makes a very persuasive, well-researched case supporting this point.
  2. IT is highly replicable. This point needs no further elaboration, I think.
  3. IT is a transport mechanism for digital information. This is true, at least as far as network and messaging infrastructure is concerned.
  4. Cost effectiveness increases as IT services are shared. This is true too, provided it is understood that flexibility is lost when services are shared.

The map, incorporating the pros and cons is shown in Figure 2.

Figure 2: Issue Map Stage 2

Figure 2: Issue Map Stage 2

Now that the arguments for and against the notion that IT is an infrastructural technology are laid out, let’s look at the article again, this time with an eye out for any other issues (questions) raised.

The first question is an obvious one: What are the consequences of IT being an infrastructural technology?   

Another point to be considered is the role of proprietary technologies, which – by definition – aren’t infrastructural. The same holds true for custom built applications. This raises the question: if IT is an infrastructural technology, how do proprietary and custom built applications fit in?

The map, with these questions  added in is shown in Figure 3.

Figure 3: Issue Map Stage 3

Figure 3: Issue Map Stage 3

Let’s now look at the ideas that respond to these two questions.

A point that Carr makes early in the article is that the strategic value of IT is diminishing. This is essentially a consequence of the notion that IT is an infrastructural technology. This idea is supported by the following arguments:

  1. IT is ubiquitous – it is everywhere, at least in the business world.
  2. Everyone uses it in the same way. This implies that no one gets a strategic advantage from using it.

What about proprietary technologies and custom apps? Carr reckons these are:

  1. Doomed to economic obsolescence. This idea is supported by the argument that these apps are too expensive and are hard to maintain.
  2. Related to the above, these will be replaced by generic apps that incorporate best practices. This trend is already evident in the increasing number of enterprise-type applications that are offered as services. The advantages of these are that they a) cost little, b) can be offered over the web and c) spare the client all those painful maintenance headaches.

The map incorporating these ideas and their supporting arguments is shown in Figure 4.

Figure 4: Issue Map Stage 4

Figure 4: Issue Map Stage 4

Finally, after painting this somewhat gloomy picture (to a corporate IT minion, such as me) Carr asks and answers the question: How should organisations deal with the changing role of IT (from strategic to operational)? His answers are:

  1. Reduce IT spend.
  2. Buy only proven technology – follow don’t lead.
  3. Focus on (operational) vulnerabilities rather than (strategic) opportunities.

The map incorporating this question and the ideas that respond to it is shown in Figure 5, which is also the final map (click on the graphic to view  a full-sized image).
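For readers without Compendium to hand, the structure of the final map can also be rendered as a nested outline. The sketch below is one possible textual serialisation of the map built in this post; the prefixes ?, *, + and - are my own shorthand for the IBIS node types (question, idea, pro, con), not Compendium’s own format.

```python
# A minimal sketch of the final issue map as nested Python dicts.
# Each key is a node label (prefixed with its IBIS type); each value
# holds the nodes that respond to it.
issue_map = {
    "? Why doesn't IT matter?": {
        "* IT is an infrastructural technology": {
            "+ Its evolution mirrors that of electricity and rail": {},
            "+ IT is highly replicable": {},
            "+ IT is a transport mechanism for digital information": {},
            "+ Cost effectiveness increases as IT services are shared": {},
            "- IT is complex and malleable, unlike other infrastructural technologies": {},
            "? What are the consequences of IT being an infrastructural technology?": {
                "* The strategic value of IT is diminishing": {
                    "+ IT is ubiquitous": {},
                    "+ Everyone uses it in the same way": {},
                },
            },
            "? How do proprietary and custom built applications fit in?": {
                "* Doomed to economic obsolescence": {
                    "+ Too expensive and hard to maintain": {},
                },
                "* Will be replaced by generic apps that incorporate best practices": {},
            },
            "? How should organisations deal with the changing role of IT?": {
                "* Reduce IT spend": {},
                "* Buy only proven technology - follow, don't lead": {},
                "* Focus on operational vulnerabilities rather than strategic opportunities": {},
            },
        },
    },
}

def print_map(node, depth=0):
    """Print the map as an indented outline, depth-first."""
    for label, children in node.items():
        print("  " * depth + label)
        print_map(children, depth + 1)

print_map(issue_map)
```

Note how the IBIS grammar shows through the nesting: ideas respond to questions, arguments respond to ideas, and new questions can hang off any node.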

Figure 5: Final Issue Map

Figure 5: Final Issue Map

Map completed, I’m essentially done with this post. Before closing, however, I’d like to mention a couple of general points that arise from issue mapping of prose pieces.

Figure 5 is my interpretation of the article. I should emphasise that my interpretation may not coincide with what Carr intended to convey (in fact, it probably doesn’t). This highlights an important, if obvious, point: what a writer intends to convey in his or her writing may not coincide with how readers interpret it. Even worse, different readers may interpret a piece differently. Writers need to write with an awareness of the potential for being misunderstood.  So, my  first point is that issue maps can help writers clarify and improve the quality of their reasoning  before they cast it in prose.

Issue maps sketch out the logical skeleton or framework of argumentative prose. As such, they  can help highlight weak points of arguments. For example, in the above article Carr glosses over the complexity and malleability of software. This is a weak point of the argument, because it is a key difference between IT and traditional infrastructural technologies. Thus my second point is that issue maps can help readers visualise weak links in arguments which might have been obscured by rhetoric and persuasive writing.

To conclude, issue maps are valuable to writers and readers alike: writers can use issue maps to improve the quality of their arguments before committing them to writing, and readers can use such maps to understand arguments that have been thus committed.