Eight to Late

Sensemaking and Analytics for Organizations

The what and whence of issue-based information systems

Over the last few months I’ve written a number of posts on IBIS (short for Issue-Based Information System), an argument visualisation technique invented in the early 1970s by Horst Rittel and Werner Kunz. IBIS is best known for its use in dialogue mapping – a collaborative approach to tackling wicked problems – but it has a range of other applications as well (capturing project knowledge is a good example). All my prior posts on IBIS focused on its use in specific applications. Hence the present piece, in which I discuss the “what” and “whence” of IBIS: its practical aspects – notation, grammar etc. – along with its origins, advantages and limitations.

I’ll begin with a brief introduction to the technique (in its present form) and then move on to its origins and other aspects.

A brief introduction to IBIS

IBIS  consists of three main elements:

  1. Issues (or questions): these are questions or matters that need to be addressed.
  2. Positions (or ideas): these are responses to issues. Typically the set of ideas that respond to an issue represents the spectrum of perspectives on the issue.
  3. Arguments: these can be Pros (arguments for) or Cons (arguments against) an idea. The complete set of arguments that respond to an idea represents the multiplicity of viewpoints on it.

The best IBIS mapping tool is Compendium – it can be downloaded here.  In Compendium, the IBIS elements described above are represented as nodes as shown in Figure 1: issues are represented by green question nodes; positions by yellow light bulbs; pros by green + signs and cons by red – signs.  Compendium supports a few other node types,  but these are not part of the core IBIS notation. Nodes can be linked only in ways specified by the IBIS grammar as I discuss next.

Figure 1: IBIS Elements

The IBIS grammar can be summarized in a few simple rules:

  1. Issues can be raised anew or can arise from other issues, positions or arguments. In other words, any IBIS element can be questioned.  In Compendium notation:  a question node can connect to any other IBIS node.
  2. Ideas can only respond to questions – i.e.  in Compendium “light bulb” nodes  can only link to question nodes. The arrow pointing from the idea to the question depicts the “responds to” relationship.
  3. Arguments can only be associated with ideas – i.e. in Compendium + and – nodes can only link to “light bulb” nodes (with arrows pointing to the latter).

The legal links are summarized in Figure 2 below.

Figure 2: Legal Links in IBIS
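The three grammar rules can be captured in a few lines of code. The sketch below is a hypothetical link-validation check (the node type and function names are my own, not Compendium’s internal API); links run from source to target, following the arrow directions described above:

```python
# Hypothetical sketch of the IBIS grammar: which node types may link to which.
# Names are illustrative only, not Compendium's implementation.

LEGAL_LINKS = {
    # source type -> set of target types it may point to
    "issue": {"issue", "idea", "pro", "con"},  # an issue can question any element
    "idea":  {"issue"},                        # ideas respond only to issues
    "pro":   {"idea"},                         # arguments attach only to ideas
    "con":   {"idea"},
}

def is_legal_link(source_type: str, target_type: str) -> bool:
    """Return True if the IBIS grammar permits a link source -> target."""
    return target_type in LEGAL_LINKS.get(source_type, set())

print(is_legal_link("idea", "issue"))  # True: an idea responds to an issue
print(is_legal_link("pro", "issue"))   # False: arguments may only attach to ideas
```

A mapping tool need only apply a check like this at link-creation time to keep maps grammatical.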

The rules are best illustrated by example – follow the links below to see some illustrations of IBIS in action:

  1. See this post for a simple example of dialogue mapping.
  2. See this post or this one for examples of argument visualisation.
  3. See this post for the use of IBIS in capturing project knowledge.

Now that we know how IBIS works and have seen a few examples of it in action, it’s time to trace its history from its origins to the present day.

Wicked origins

A good place to start is where it all started. IBIS was first described in a paper entitled Issues as elements of Information Systems, written by Horst Rittel (who coined the term “wicked problem”) and Werner Kunz in July 1970. They state the intent behind IBIS in the very first line of the paper’s abstract:

Issue-Based Information Systems (IBIS) are meant to support coordination and planning of political decision processes. IBIS guides the identification, structuring, and settling of issues raised by problem-solving groups, and provides information pertinent to the discourse.

Rittel’s preoccupation was public policy and planning – which is also the context in which he originally defined wicked problems. He defined the term in his landmark 1973 paper entitled Dilemmas in a General Theory of Planning. A footnote to that paper states that it is based on an article he presented at an AAAS meeting in 1969. So it is clear that he had already formulated his ideas on wickedness when he wrote his paper on IBIS in 1970.

Given the above background it is no surprise that Rittel and Kunz foresaw IBIS to be the:

…type of information system meant to support the work of cooperatives like governmental or administrative agencies or committees, planning groups, etc., that are confronted with a problem complex in order to arrive at a plan for decision…

The problems tackled by such  cooperatives are paradigm-defining examples of wicked problems. From the start, then, IBIS was intended as a tool to facilitate a collaborative approach to solving such problems.

Operation of early systems

When Rittel and Kunz wrote their paper, there were three IBIS-type systems in operation: two in governmental agencies (in the US, one presumes) and one in a university environment (possibly Berkeley, where Rittel worked). Although it seems quaint and old-fashioned now, it is no surprise that they were all manual, paper-based systems – the effort and expense involved in computerizing such systems in the early 70s would have been prohibitive, and the pay-off questionable.

The paper also offers a short description of how these early IBIS systems operated:

An initially unstructured problem area or topic denotes the task named by a “trigger phrase” (“Urban Renewal in Baltimore,” “The War,” “Tax Reform”). About this topic and its subtopics a discourse develops. Issues are brought up and disputed because different positions (Rittel’s word for ideas or responses) are assumed. Arguments are constructed in defense of or against the different positions until the issue is settled by convincing the opponents or decided by a formal decision procedure. Frequently questions of fact are directed to experts or fed into a documentation system. Answers obtained can be questioned and turned into issues. Through this counterplay of questioning and arguing, the participants form and exert their judgments incessantly, developing more structured pictures of the problem and its solutions. It is not possible to separate “understanding the problem” as a phase from “information” or “solution” since every formulation of the problem is also a statement about a potential solution.

Even today, forty years later, this is an excellent description of how IBIS is used to facilitate a common understanding of complex (or wicked) problems. The paper contains an overview of the structure and operation of manual IBIS-type systems. However, I’ll omit these because they are of little relevance in the present-day world.

As an aside, there’s a term that’s conspicuous by its absence in the Rittel-Kunz paper: design rationale. Rittel must have been aware of the utility of IBIS in capturing design rationale: he was a professor of design science at Berkeley and design reasoning was one of his main interests. So it is somewhat odd that he does not mention the term even once in his IBIS paper.

Fast forward a couple decades (and more!)

In a paper published in 1988 entitled, gIBIS: A hypertext tool for exploratory policy discussion, Conklin and Begeman describe a prototype of a graphical, hypertext-based  IBIS-type system (called gIBIS) and its use in capturing design rationale (yes, despite the title of the paper, it is more about capturing design rationale than policy discussions). The development of  gIBIS represents a key step between the original Rittel-Kunz version of IBIS and its  present-day version as implemented  in Compendium.  Amongst other things, IBIS was finally off paper and on to disk, opening up a new world of possibilities.

gIBIS aimed to offer users:

  1. The ability to capture design rationale – the options discussed (including the ones rejected) and the discussion around the pros and cons of each.
  2. A platform for promoting computer-mediated collaborative design work  – ideally in situations where participants were located at sites remote from each other.
  3. The ability to store a large amount of information and to be able to navigate through it in an intuitive way.

Before moving on, one point needs to be emphasized: gIBIS was intended to be used in collaborative settings; to help groups achieve a shared understanding of central issues, by mapping out dialogues in real time. In present-day terms – one could say that it was intended as a tool for sense making.

The gIBIS prototype proved successful enough to catalyse the development of Questmap, a commercially available software tool that supported IBIS. However, although there were some notable early successes in the real-time use of IBIS in industry environments (see this paper, for example), these were not accompanied by widespread adoption of the technique. Other graphical, IBIS-like methods for capturing design rationale were proposed – an example is Questions, Options and Criteria (QOC), proposed by MacLean et al. in 1991 – but these too failed to gain widespread acceptance.

Making sense through IBIS

The reasons for the lack of traction of IBIS-type techniques in industry are discussed in an excellent paper by Shum et al. entitled Hypermedia Support for Argumentation-Based Rationale: 15 Years on from gIBIS and QOC. The reasons they give are:

  1. For acceptance, any system must offer immediate value to the person who is using it. Quoting from the paper, “No designer can be expected to altruistically enter quality design rationale solely for the possible benefit of a possibly unknown person at an unknown point in the future for an unknown task. There must be immediate value.” Such immediate value is not obvious to novice users of IBIS-type systems.
  2. There is some effort involved in gaining fluency in the use of IBIS-based software tools. It is only after this that users can gain an appreciation of the value of such tools in overcoming the limitations of mapping design arguments on paper, whiteboards etc.

The intellectual effort – or cognitive overhead, as it is called in academese – in using IBIS in real time involves:

  1. Teasing out issues, ideas and arguments from the dialogue.
  2. Classifying points raised into issues, ideas and arguments.
  3. Naming (or describing) the point succinctly.
  4. Relating (or linking) the point to an existing node.

This is a fair bit of work, so it is no surprise that beginners might find it hard to use IBIS to map dialogues. However, once learnt, a skilled practitioner can add value to design (and more generally, sense making) discussions in several ways including:

  1. Keeping the map (and discussion) coherent and focused on pertinent issues.
  2. Ensuring that all participants are engaged in contributing to the map (and hence the discussion).
  3. Facilitating useful maps (and dialogues) – usefulness being measured by the extent to which the objectives of the session are achieved.

See this paper by Selvin and Shum for more on these criteria. Incidentally, these criteria are a qualitative measure of how well a group achieves a shared understanding of the problem under discussion.  Clearly, there is a good deal of effort involved in learning and becoming proficient at using IBIS-type systems, but the payoff is an ability to facilitate  a shared understanding of wicked problems – whether in public planning or in technical design.

Why IBIS is better than conventional modes of documentation

IBIS has several advantages over conventional documentation systems. Rittel and Kunz’s 1970  paper contains a nice summary of the advantages, which I paraphrase below:

  1. IBIS can bridge the gap between discussions and records of discussions (minutes, audio/video transcriptions etc.). IBIS sits between the two, acting as a short-term memory. The paper thus foreshadows the use of issue-based systems as an aid to organizational or project memory.
  2. Many elements (issues, ideas or arguments) that come up in a discussion have contextual meanings that differ from any pre-existing definitions. In discussions, contextual meaning matters more than formal meaning, and IBIS captures the former very clearly: for example, a response to the question “What do we mean by X?” elicits the meaning of X in the context of the discussion, which is then captured as an idea (position).
  3. Related to the above, the commonality of an issue with other, similar issues might be more important than its precise meaning. To quote from the paper, “…the description of the subject matter in terms of librarians or documentalists (sic) may be less significant than the similarity of an issue with issues dealt with previously and the information used in their treatment…” With modern search technologies this is less of an issue now. However, search technologies are still limited when it comes to finding matches between “similar” items (how is “similar” defined? It depends on context). A properly structured, context-searchable IBIS-based project archive may still be more useful than a conventional document archive based on a document management system.
  4. The reasoning used in discussions is made transparent, as is the supporting (or opposing) evidence (see my post on visualizing argumentation for an example).
  5. The state of the argument (discussion) at any time can be inferred at a glance (unlike the case with written records). See this post for more on the advantages of visual documentation over prose.

Issues with issue-based information systems

Lest I leave readers with the impression that IBIS is a panacea, I should emphasise that it isn’t. According to Conklin, IBIS maps have the following limitations:

  1. They atomize streams of thought into unnaturally small chunks of information thereby breaking up any smooth rhetorical flow that creates larger, more meaningful chunks of narrative.
  2. They disperse rhetorically connected chunks throughout a large structure.
  3. They are not chronological in structure (the chronological sequence is normally factored out).
  4. Contributions are not attributed (who said what is normally factored out).
  5. They do not convey the maturity of the map – one cannot distinguish, from the map alone, whether one map is more “sound” than another.
  6. They do not offer a systematic way to decide if two questions are the same, or how the maps of two related questions relate.

Some of these issues (points 3 and 4) can be addressed by annotating nodes; others are not so easy to solve.

Concluding remarks

My aim in this post has been to introduce readers to the IBIS notation, and also discuss its origins, development and limitations.  On one hand, a knowledge of the origins and development  is valuable because it  gives  insight into the rationale behind the technique, which leads to a better understanding of the different ways in which it can be used. On the other, it is also important to know a technique’s limitations,  if for no other reason than to be aware of these so that one can work around them.

Before signing off, I’d like to mention an observation from my experience with IBIS. The real surprise for me has been that the technique can capture most written arguments and discussions,  despite having only three distinct elements and a very simple grammar. Yes, it does require some thought to do this, particularly when mapping discussions in real time. However,  this cognitive “overhead”  is good because  it forces the mapper to think  about what’s being said  instead of just writing it down blind. Thoughtful transcription is the aim of the game. When done right, this results in a map that truly reflects a  shared understanding of the complex  (and possibly wicked) problem under discussion.

There’s no better coda to this post on IBIS than the following quote from  this paper by Conklin:

…Despite concerns over the years that IBIS is too simple and limited on the one hand or too hard to use on the other, there is a growing international community who are fluent enough in IBIS to facilitate and capture highly contentious debates using dialogue mapping, primarily in corporate and educational environments…

For me that’s reason enough to improve my understanding of IBIS and its applications,  and to look for opportunities to use it in ever more challenging situations.

Cox’s risk matrix theorem and its implications for project risk management

Introduction

One of the standard ways of characterising risk on projects is to use matrices which categorise risks by impact and probability of occurrence.  These matrices provide a qualitative risk ranking in categories such as high, medium and low (or colour: red, yellow and green). Such rankings are often used to prioritise and allocate resources to manage risks. There is a widespread belief that the qualitative ranking provided by matrices reflects an underlying quantitative ranking.  In a paper entitled, What’s wrong with risk matrices?, Tony Cox shows that the qualitative risk ranking provided by a risk matrix will agree with the quantitative risk ranking only if the matrix is constructed according to certain general principles. This post is devoted to an exposition of these principles and their consequences.

Since the content of this post may seem overly academic to some of my readers, I think it is worth clarifying why I believe an understanding of Cox’s principles is important for project managers. First, 3×3 and 4×4 risk matrices are widely used in managing project risk.  Typically these matrices are constructed in an intuitive (but arbitrary) manner. Cox shows – using very general assumptions – that there is only one sensible colouring scheme (or form) of these matrices. This conclusion was surprising to me, and I think that many readers may also find it so. Second, and possibly more important, is that the arguments presented in the paper show that it is impossible to maintain perfect congruence between qualitative (matrix) and quantitative rankings. As I discuss later, this is essentially due to the impossibility of representing quantitative rankings accurately on a rectangular grid. Developing an understanding of these points will enable project managers to use risk matrices in a more logically sound manner.

Background and preliminaries

Let’s begin with some terminology that’s well known to most project managers:

Probability: This is the likelihood that a risk will occur. It is quantified as a number between 0 (will definitely not occur) and 1 (will definitely occur).

Impact (termed “consequence” in the paper): This is the severity of the risk should it occur. It can also be quantified as a number between 0 (lowest severity) and 1 (highest severity).

Note that the above scales for probability and impact are arbitrary – other common choices are percentages or a scale of 0 to 10.

Risk:  In many project risk management frameworks, risk is characterised by the formula: Risk = probability x impact.  This formula looks reasonable, but is typically specified a priori, without any justification.

A risk can be plotted on a two dimensional graph depicting impact (on the x-axis) and probability (on the y-axis). This is typically where the problems start: for most risks, neither the probability nor the impact can be accurately quantified. The standard solution is to use a qualitative scale, where instead of numbers one uses descriptive text – for example, the probability, impact and risk can take on one of three values: high, medium and low (as shown in Figure 1 below).  In doing this,  analysts make the implicit assumption that the categorisation provided by the qualitative assessment ranks the risks in correct quantitative order. Problem is, this isn’t true.

Figure 1: A 3×3 Risk Matrix

Let’s look at the simple case of two risks A and B ranked on a 2×2 risk matrix shown in Figure 2 below.  Let’s assume that the probability and impact of each of the two risks are independent and uniformly distributed between 0 and 1. Clearly, if the two risks have the same qualitative ranking (high, say), there is no way to rank them correctly unless one has quantitative knowledge of probability and impact – which is usually not the case. In the absence of this information, there’s a 50% chance (all other factors being equal) of ranking them correctly – i.e.  one is effectively “flipping a coin” to choose which one has the higher (or lower) rank. This situation highlights a shortcoming of risk matrices: poor resolution. It is not possible to rank risks that have the same qualitative ranking.

Figure 2: A 2×2 Risk Matrix

“That’s obvious,” I hear you say – and you’re right. But there’s more: if one of the ratings is medium and the other one is not (i.e. the other one is high or low), then there is a non-zero chance of making an incorrect ranking, because some points in the cell with the higher qualitative rating have a lower quantitative value of risk than some points in the cell with the lower qualitative rating. Look at that statement again: it implies that risk matrices can incorrectly assign higher qualitative rankings to quantitatively smaller risks – i.e. there is the possibility of making ranking errors. This point is seriously counter-intuitive (to me anyway) and merits a proof, which Cox provides and which I discuss below.

Before doing so, I should point out that the discussion of the previous paragraph assumes that the probabilities and impacts of the two risks are independent and uniformly distributed. Cox notes that the chance of making the wrong ranking can be even higher if the two are correlated. In particular, if the correlation is negative (i.e. probability decreases as impact increases), a random ranking is actually better than that provided by the risk matrix. In this situation the information provided by risk matrices is “worse than useless” (a random choice is better!). Negative correlations between probability and impact are actually quite common – many situations involve a mix of high probability-low impact and low probability-high impact risks. See the paper for more on this.
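The possibility of ranking errors is easy to demonstrate by simulation. The sketch below (my own illustration, not code from Cox’s paper) draws pairs of risks with independent, uniformly distributed probabilities and impacts, keeps the pairs where one lands in the “high” cell of a 2×2 matrix and the other in a “medium” cell, and counts how often the quantitative ordering contradicts the qualitative one:

```python
import random

random.seed(42)

def cell_rating(p: float, i: float) -> str:
    """Qualitative rating on a 2x2 matrix with cells split at 0.5 (illustrative)."""
    hi_p, hi_i = p >= 0.5, i >= 0.5
    if hi_p and hi_i:
        return "high"
    if not hi_p and not hi_i:
        return "low"
    return "medium"

def ranking_error_rate(trials: int = 100_000) -> float:
    """Fraction of (high, medium) pairs in which the 'medium' risk is
    quantitatively larger (p * i) than the 'high' risk."""
    errors = hits = 0
    while hits < trials:
        pa, ia = random.random(), random.random()
        pb, ib = random.random(), random.random()
        if cell_rating(pa, ia) == "high" and cell_rating(pb, ib) == "medium":
            hits += 1
            if pb * ib > pa * ia:  # quantitative order contradicts the matrix
                errors += 1
    return errors / trials

# A non-zero fraction of pairs are mis-ranked by the matrix.
print(f"ranking error rate: {ranking_error_rate():.3f}")
```

With the cells split at 0.5, a noticeable fraction of pairs is mis-ranked even though the matrix rates one risk strictly higher than the other.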

Weak consistency and its implications

With the issues of poor resolution and ranking errors established, Cox asks the question: What can be salvaged?  The underlying problem is that the joint distribution of probability and impact is unknown. The standard approach to improving the utility of risk matrices is to attempt to characterise this distribution. This can be done using artificial intelligence tools – and Cox provides references to papers that use some of these techniques to characterise distributions. These techniques typically need plentiful data as they attempt to infer characteristics of the joint distribution from data points. Cox, instead, proposes an approach that is based on general properties of risk matrices – i.e. an approach that prescribes a set of rules that ensure consistency. This has the advantage of being general,  and not depending on the availability of data points to characterise the probability distribution.

So what might a consistency criterion look like? Cox suggests that, at the very least, a risk matrix should be able to distinguish reliably between very high and very low risks. He formalises this requirement in his definition of weak consistency, which I quote from the paper:

A risk matrix with more than one “colour” (level of risk priority) for its cells satisfies weak consistency with a quantitative risk interpretation if points in its top risk category (red) represent higher quantitative risks than points in its bottom category (green)

The notion of weak consistency formalises the intuitive expectation that a risk matrix must, at the very least, distinguish  between the lowest and highest (quantitative) risks.  If it can’t, it is indeed “worse than useless”.  Note that weak consistency doesn’t say anything about distinguishing between medium and lowest/highest risks – merely between the lowest and highest.

Having defined weak consistency, Cox derives some of its surprising consequences, which I describe next.

Cox’s First Lemma:  If a risk matrix satisfies weak consistency, then no red cell (highest risk category) can share an edge with a green cell (lowest risk category).

Proof: To see why this is so, consider the different ways in which a red cell can adjoin a green one. Basically there are only two ways in which this can happen, as illustrated in Figure 3. Now assume that the quantitative risk of the midpoint of the common edge is a number n (with n between 0 and 1). Then, if x and y are the impact and probability respectively, we have

xy = n, or equivalently, y = n/x

So, the locus of all points having the same risk (often called the iso-risk contour) as the midpoint is a rectangular hyperbola with negative slope (i.e.  y decreases as x increases). The negative slope (see Figure 3) implies that the points above the iso-risk contour in the green cell have a higher quantitative risk than points below the contour in the red cell. This contradicts weak consistency. Hence – by reductio ad absurdum –  it isn’t possible to have a green cell and a red cell with a common edge.

Figure 3: Figure for Lemma 1
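The contradiction at the heart of the proof can be checked with a couple of numbers. In the sketch below (my own illustrative layout, not Cox’s notation), a green cell sits immediately to the left of a red cell; a point in the green cell above the contour y = n/x then carries a higher quantitative risk than a point in the red cell below it:

```python
# Numerical check of the first lemma's argument: a hypothetical layout in which
# a green cell and a red cell share a vertical edge (the arrangement to refute).
green = dict(x=(0.0, 0.5), y=(0.5, 1.0))  # green cell
red = dict(x=(0.5, 1.0), y=(0.5, 1.0))    # red cell immediately to its right

# Midpoint of the shared edge, and its quantitative risk n = x * y.
mx, my = 0.5, 0.75
n = mx * my  # 0.375

# A point in the GREEN cell lying ABOVE the iso-risk contour y = n / x ...
gx, gy = 0.45, 0.9
# ... and a point in the RED cell lying BELOW it.
rx, ry = 0.6, 0.55
assert gy > n / gx and ry < n / rx  # positions relative to the contour

# The green point's risk (~0.405) exceeds the red point's (~0.33):
# weak consistency is violated, so the layout is inadmissible.
print(gx * gy > rx * ry)  # True
```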

Cox’s Second Lemma: If a risk matrix satisfies weak consistency and has at least two colours (green in the lower left and red in the upper right, if the axes are oriented to depict increasing probability and impact), then no red cell can occur in the bottom row or the left column of the matrix.

Proof: Assume it is possible to have a red cell in the bottom row or left column. Now consider an iso-risk contour for a sufficiently small risk (i.e. a contour that passes through the lower left-most green cell). By the properties of rectangular hyperbolas, this contour must pass through all cells in the bottom row and the left-most column, as shown in Figure 4. Thus, by an argument similar to that of the previous lemma, all points below the iso-risk contour in either of the red cells have a smaller quantitative risk than points above it in the green cell. This violates weak consistency, and hence the assumption is incorrect.

Figure 4: Figure for Lemma 2

An implication that follows directly from the above lemmas is that any risk matrix that satisfies weak consistency must have at least three colours!

Surprised? I certainly was when I first read this.

Between-ness and its implications

If a risk matrix is to provide a qualitative representation of the actual quantitative risks, then small changes in probability or impact should not cause discontinuous jumps in risk categorisation from the lowest to the highest category without passing through the intermediate category. (Recall, from the previous section, that a weakly consistent matrix must have at least three colours.)

This expectation is formalised in the axiom of between-ness:

A risk matrix satisfies the axiom of between-ness if every positively sloped line segment that lies in a green cell at its lower end and a red cell at its upper end must pass through at least one intermediate cell (i.e. one that is neither red nor green).

By definition, no 2×2 matrix can satisfy between-ness. Further, amongst 3×3 matrices, only one colour scheme satisfies both weak consistency and between-ness. This is the matrix shown in Figure 1: green in the left column and bottom row, red in the upper right-most cell and yellow in all other cells. This, to me, is a truly amazing consequence of a couple of simple, intuitive axioms.

Consistent colouring and its implications

The basic idea behind consistent colouring is that risks with identical quantitative values should have the same qualitative rating. This is impossible to achieve exactly in a discrete risk matrix, because iso-risk contours cannot coincide with cell boundaries (why? Because iso-risk contours have negative slopes, whereas cell boundaries have zero or infinite slope – i.e. they are horizontal or vertical lines). So Cox suggests the following: enforce consistent colouring for the extreme categories only – red and green – allowing violations for intermediate categories. What this means is that cells containing iso-risk contours which pass through other red cells (“red contours”) must be red, and cells containing iso-risk contours which pass through other green cells (“green contours”) must be green. Hence the following definition of consistent colouring:

  1. A cell is red if it contains points with quantitative risks at least as high as those in other red cells, and does not contain points with quantitative risks as small as those on any green cell.
  2. A cell is green if it contains points with risks at least as small as those in other green cells, and does not contain points with quantitative risks as high as those in any red cell.
  3. A cell has an intermediate colour only if it a) lies between a red cell and a green cell or b) it contains points with quantitative risks higher than those in some red cells and also points with quantitative risks lower than those in some green cells.

An iso-risk contour is green if it passes through one or more green cells but no red cells; a red contour is one which passes through one or more red cells but no green cells. Consistent colouring then implies that cells with red contours and no green contours are red, and cells with green contours and no red contours are green (and, obviously, cells with contours of both colours are intermediate).

Implications of the three axioms – Cox’s Risk Matrix Theorem

So, after a longish journey, we have three axioms: weak consistency, between-ness and consistent colouring. With that done, Cox rolls out his theorem – which I dub Cox’s Risk Matrix Theorem (not to be confused with Cox’s Theorem from statistics!). It can be stated as follows:

In a risk matrix satisfying weak consistency, between-ness and consistent colouring:

a) All cells in the leftmost column and in the bottom row are green.

b) All cells in the second column from the left and the second row from the bottom are non-red.

The proof is a bit long, so I’ll omit it, making a couple of plausibility arguments instead:

  1. The lower leftmost cell is green (by definition), and consistent colouring implies that all contours that lie below the one passing through the upper right corner of this cell must also be green because a) they pass through the lower leftmost cell which is green and b) none of the other cells they pass through are red (by Cox’s second lemma). The other cells on the lowest or leftmost edge of the matrix can only be intermediate or green. That they cannot be intermediate is a consequence of  between-ness.
  2. That the second row and second column must be non-red is also easy to see: assume any of these cells to be red. We then have a red cell adjoining a green cell, which violates Cox’s first lemma.

I’ll leave it at that, referring the interested reader to the paper for a complete proof.

Cox’s theorem has an immediate corollary which is particularly interesting for project managers who use 3×3 and 4×4 risk matrices:

A tricoloured 3×3 or 4×4 matrix that satisfies weak consistency, between-ness and consistent colouring can have only the following (single!) colour scheme:

a) Leftmost column and bottom row coloured green.

b) Top right cell (for 3×3) or four top right cells (for 4×4) coloured red.

c) All other cells coloured yellow.

Proof: Cox’s theorem implies that the leftmost column and bottom row are green. The top right cell must be red (since it is a tricoloured matrix). Consistent colouring implies that the two cells adjoining this cell (in a 4×4 matrix) and the one diagonally adjacent must also be red (this cannot be so for a 3×3 matrix, because these cells would adjoin a green cell, which violates Cox’s first lemma). All other cells must be yellow by between-ness.

This result is quite amazing. From three very intuitive axioms Cox derives essentially the only possible colouring scheme for 3×3 and 4×4 risk matrices.
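The corollary’s 3×3 colour scheme can be checked mechanically. Below is a toy sketch, not a proof: the full axioms involve iso-risk contours, which are simplified away here, and the adjacency reading of between-ness is my own shorthand for “a path from green to red must pass through yellow”.

```python
# Toy check of the corollary's unique 3x3 colouring.
# Rows are indexed bottom-to-top, columns left-to-right, so GRID[0][0]
# is the lower-left (lowest-risk) cell and GRID[2][2] the upper-right.
GRID = [
    ["green", "green", "green"],   # bottom row
    ["green", "yellow", "yellow"],
    ["green", "yellow", "red"],    # top row
]

def neighbours(r, c, n=3):
    """Yield the orthogonally adjacent cells within an n x n grid."""
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < n and 0 <= c + dc < n:
            yield r + dr, c + dc

# Cox's theorem: leftmost column and bottom row are green.
assert all(GRID[r][0] == "green" for r in range(3))
assert all(GRID[0][c] == "green" for c in range(3))

# Tricoloured: the top right cell is red, and all three colours appear.
assert GRID[2][2] == "red"
assert {"green", "yellow", "red"} == {c for row in GRID for c in row}

# Between-ness (adjacency form): no green cell touches a red cell,
# so any stepwise path from green to red passes through yellow.
for r in range(3):
    for c in range(3):
        if GRID[r][c] == "green":
            assert all(GRID[nr][nc] != "red" for nr, nc in neighbours(r, c))

print("3x3 corollary colouring passes the toy checks")
```

Running the script completes without any assertion failing, which is consistent with (though of course far weaker than) Cox’s uniqueness result.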

Conclusion

This brings me to the end of this post on Cox’s axiomatic approach to building logically consistent risk matrices. I highly recommend reading the original paper. Although it presents some fairly involved arguments, it is very well written: the arguments are laid out with clarity and logical surefootedness, and the assumptions underlying each are clearly stated. The three principles (or axioms) proposed are intuitively appealing – even obvious – but their consequences are quite unexpected (witness the unique colouring scheme for 3×3 and 4×4 matrices). Further, the arguments leading up to the lemmas and theorems raise points worth bearing in mind when using risk matrices in practical situations.

In closing I should mention that the paper also discusses some other limitations of risk matrices that flow from these principles: in particular, spurious risk resolution and inappropriate resource allocation based on qualitative risk categorisation.   For reasons of space, and the very high likelihood that I’ve already tested my readers’ patience to near (if not beyond) breaking point,  I’ll defer a discussion of these to a future post.

Note added on 20 December, 2009:

See this post for a visual representation of the above discussion of Cox’s risk matrix theorem and the comments that follow.

Written by K

July 1, 2009 at 10:05 pm

Managing participant motivation in knowledge management projects

with 8 comments

Introduction

One consequence of the increasing awareness of knowledge as an organisational asset is that many organisations have launched projects aimed at managing knowledge.  Unfortunately, a large number of these efforts focus entirely on technical solutions, neglecting the need for employee participation. The latter is important; as stated in this paper, published a decade ago, “Knowledge transfer is about connection not collection, and that connection ultimately depends on choice made by individuals…”  This suggests that participant motivation is a key success factor for knowledge management initiatives. A recent paper entitled, Considering Participant Motivation in Knowledge Management Projects, by Allen Whittom and Marie-Christine Roy looks at theories of motivation in the context of knowledge management projects. This post is a summary and annotated review of the paper.

Many researchers claim that the failure rate of knowledge management projects is high, but there seems to be some confusion as to just how high the figure is (see this paper, for example). In the introduction to their paper, Whittom and Roy claim that the failure rate may be higher than 80%, but they offer no supporting evidence. Still, with many independent researchers quoting figures ranging from 50 to 80%, it is safe to say the matter merits investigation. Accordingly, many researchers have looked at causes of failure of knowledge management projects (see this paper or this one).  Some specifically identify lack of participant motivation as a cause of failure (see this paper). Whittom and Roy claim that, despite the work done thus far, knowledge management research offers little guidance on how motivation is to be managed in such projects. Their aim, therefore, is to:

  1. Present concepts from theories of  motivation that are relevant to knowledge management projects.
  2. Propose ways in which project managers can foster participant motivation in a way that is consistent with business objectives.

These points are covered in the next two sections. The final section presents some concluding remarks.

 Theoretical Overview

 Motivation and Knowledge Transfer

The authors define motivation as the underlying reason(s) for a person’s actions. Motivation is usually classified as extrinsic or intrinsic, depending on whether its source is external or internal to the individual. Extrinsically motivated people are driven by rewards such as bonuses or promotions. Intrinsically motivated individuals, on the other hand, are self-driven and need little supervision. Their enthusiasm, however, depends on whether their personal goals are congruent with the task at hand. This is important: their aims and objectives may not always be aligned with business goals. Further, intrinsically motivated individuals perform creative or complex tasks better than others, but this type of motivation varies greatly from one person to another and cannot be controlled by management. See my post on motivation in project management for a comprehensive discussion of extrinsic and intrinsic motivation.

The authors then discuss the link between motivation and the willingness to share knowledge. Knowledge falls into two categories: tacit and explicit. Tacit knowledge is hard to codify and communicate (e.g. a skill, such as riding a bicycle) whereas explicit knowledge can be formalised and transmitted (e.g. how to open a bank account). Tacit knowledge resides in “people’s heads” and is consequently harder to capture; more often than not, though, it turns out to be more valuable than explicit knowledge. In their paper entitled, Motivation, Knowledge Transfer and Organisational Forms, Osterloh and Frey state that, “…Intrinsic motivation is crucial when tacit knowledge in and between teams must be transferred…” Following this work, Gartner researchers Morello and Caldwell proposed a model in which intrinsic motivation drives the creation and sharing of tacit knowledge, which in turn drives its dissemination and use in the organisation (I couldn’t find a publicly available copy of their work, but there is an illustration of the model in Figure 1 of Whittom and Roy’s paper).

The message from motivation research is clear: intrinsic motivation is critical to the success of knowledge management projects.

Rewards and Recognition

Rewards and recognition are “levers of motivation”: they can be used to enhance and direct employee motivation towards achieving organisational goals. Reward systems are aimed at aligning individual efforts with organisational objectives. Recognition systems, on the other hand, are designed to express public appreciation for high standards of achievement or competence. The latter may be based on criteria that diverge from preset objectives (for example, a public thanks for a job well done can be given irrespective of whether the job is in line with company objectives).

Rewards can be extrinsic (not related to the task) or intrinsic (related to the task), and material or non-material.  Extrinsic rewards are typically material – i.e. they involve giving the recipient something tangible. Financial incentives are the most common form of extrinsic reward because they are easily administered through the pay system.  Extrinsic rewards can also be non-financial (gift certificates or a meal at a nice restaurant, for example). For the same investment, non-financial rewards are found to have a more lasting effect than financial ones. This makes sense: people are more likely to remember a memorable meal than a few-hundred-dollar raise; the latter is often forgotten as soon as it comes into effect. A further downside of financial rewards is that they may actually decrease intrinsic motivation (see this paper by David Beswick). Another is that they may encourage sub-standard work, particularly where benchmarks are based on volume rather than quality of output.

Extrinsic rewards can also be non-material – promotions and training opportunities, for example (see this paper by Wolfgang Semar for more on non-material, extrinsic rewards).

Intrinsic rewards generally pertain to the satisfaction derived from performing a task. The moral satisfaction arising from a job done well is also a form of intrinsic reward. It should be clear that these rewards work only for intrinsically motivated individuals. Intrinsic rewards are invariably non-material and they cannot be controlled by management.  However, awareness of factors influencing intrinsic motivation  can help managers create the right environment for intrinsically motivated individuals.  Kenneth Thomas, in his book entitled, Intrinsic Motivation at Work – Building Energy and Commitment, identifies four psychological factors that can influence intrinsic motivation. They are:

  1. Feelings of accomplishment: These can be enhanced by devising interesting work tasks and aligning them with employee interests.
  2. Feelings of autonomy: These can be enhanced by empowering employees with responsibility and authority to do their work.
  3. Feelings of competence: These can be enhanced by offering employees opportunities to demonstrate and enhance their expertise.
  4. Feelings of progress: These can be enhanced by fostering a collaborative atmosphere in which project successes are celebrated.

These factors are (to an extent) under management control. If nothing else, it is worth being aware of them so that one can avoid doing things that might reduce intrinsic motivation.

Motivation crowding and psychological contracts

The authors then examine the effects of rewards on intrinsic motivation in the context of knowledge management projects (recall that intrinsic motivation was seen to be a key success factor in such projects).  They use motivation crowding theory to frame their discussion. Crowding theory suggests that intrinsic motivation can be enhanced (“crowded-in”) or undermined (“crowded-out”) by external rewards.

To understand motivation crowding, one has to look at how extrinsic (or external) rewards work. Basically there are two ways in which an extrinsic reward can be perceived. To quote from the paper,

External interventions, such as rewards, may influence this perception either through information or control. If people see a reward as being related to their competence (information), intrinsic motivation for the task will be encouraged or maintained. On the other hand, if they see a reward as a way to control their performance or autonomy, intrinsic motivation would be decreased.

Extrinsic rewards thus work through both information and control, and each aspect can have a positive or negative effect. This is best understood through an example: consider a company that announces cash incentives for the top three contributors to a knowledge database. This reward has a positive control aspect (it encourages participation) but a negative information aspect (it says nothing about the quality of contributions). Consequently, the reward encourages a high volume of contributions with no regard to quality. This situation typically undermines or “crowds-out” intrinsic motivation. Note that motivation “crowding out” is sometimes referred to as motivation eviction in the literature.

Crowding-out is also seen in recurring tasks. For example, if a monetary incentive is offered for a task, there will be an expectation that the incentive be offered the next time around. On the other hand, non-monetary interventions such as increased employee involvement and autonomy in project decision making can “crowd-in” or enhance intrinsic motivation.

These effects are intuitively quite obvious, but it’s interesting to see them from a social science / economics point of view. If you’d like to find out more, I highly recommend the paper, Motivation crowding theory: A survey of empirical evidence, by Bruno Frey and Reto Jegen.

The take home lesson from the above is that intrinsic motivation can sometimes be negatively affected by external rewards. Manager, beware.

Whittom and Roy also discuss the notion of psychological contracts between employer and employee. These contracts, distinct from formal employment contracts, refer to the unstated (but implied) informal, mutual obligations pertaining to respect, autonomy, work ethic, fairness etc. An employee’s intrinsic motivation can be greatly reduced if he or she perceives that the contract has been breached. For example, if an employee’s suggestions regarding improvements to a knowledge database are ignored, she might feel undervalued. In her eyes, management (and hence the organisation) has lost credibility, and the psychological contract has been violated. In psychological contract theory, personal relationships are seen to be an important driver of intrinsic motivation: people are more likely to enjoy working in teams in which they have good relations with other team members.

Discussion

Practices to foster intrinsic motivation

One conclusion from the aforementioned theories is that intrinsic motivation is essential for the transfer of tacit knowledge. Accordingly, the authors suggest the following practices to maintain and enhance intrinsic motivation of employees involved in knowledge management projects:

  1. Avoid the use of monetary rewards, which may encourage the transfer of unimportant knowledge. Instead, use non-monetary rewards that recognize competence.
  2. Involve employees in the formulation of project objectives.
  3. Encourage team work and team bonding. A good team dynamic encourages the sharing of tacit knowledge. The technique of dialogue mapping facilitates the sharing and capture of knowledge in a team environment.
  4. Emphasise how the employee might benefit from the project – this is the old WIIFM factor. This needs to be done in a way that shows how the benefit is integrated into the organisation’s culture – i.e. the benefit must be a realistic and believable one, else the employee will see right through it.
  5. Maintain good communication between management and employees. This one is a “usual suspect” that comes up in virtually all best practice recommendations. Unfortunately, it is seldom done right.

Contextual recommendations based on knowledge and motivation types

Theories of motivation indicate that, as far as motivation for knowledge sharing is concerned, one size does not fit all. The appropriate strategy depends on the nature of the knowledge being captured (tacit or explicit), participants’ motivational drivers (intrinsic or extrinsic) and organizational resources. Based on this, the authors discuss the following contexts:

  1.  Tacit knowledge management / intrinsic motivation: This is an ideal situation. Here the manager’s role is to support participants in achieving project objectives rather than to influence their behaviour through rewards. Extrinsic rewards should be avoided because participants are intrinsically motivated.
  2. Tacit knowledge management / extrinsic motivation: From the preceding discussion of motivation theories, it is clear that this is not a good situation. However, all is not lost. A manager can develop knowledge management strategies based on structured training, discussion groups etc. to help codify and transfer tacit knowledge. These strategies should highlight the project benefits (for the employee and the organisation). Further, extrinsic rewards can be offered, but their “crowding-out” effect over time should be kept in mind.
  3. Explicit knowledge / intrinsic motivation:  Here the knowledge management aspect is easier because the knowledge is explicit. Typically, once the objectives are identified, it is clear how knowledge should be captured and organized. Structured training and tools such as wikis and databases can help facilitate knowledge transfer. Further, these will be more effective than in case (2) above, because the participants are intrinsically motivated. Recommendations regarding rewards are the same as in the first case.
  4. Explicit knowledge / extrinsic motivation: For knowledge management the same considerations apply as in case (3). However, these strategies will be less effective because employees are extrinsically motivated. For rewards management, the considerations of case (2) apply.
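The four contexts above amount to a simple decision table keyed on knowledge type and motivation type. A minimal sketch follows; the strings are my paraphrases of the authors’ recommendations, and the `recommend` helper is hypothetical, not something from the paper:

```python
# Hypothetical decision table summarising the four contexts.
# Keys: (knowledge type, dominant motivation of participants).
STRATEGY = {
    ("tacit", "intrinsic"): (
        "Support participants in achieving objectives; avoid extrinsic rewards."
    ),
    ("tacit", "extrinsic"): (
        "Codify via structured training and discussion groups; highlight "
        "project benefits; extrinsic rewards allowed, but beware crowding-out."
    ),
    ("explicit", "intrinsic"): (
        "Capture knowledge with structured training, wikis and databases; "
        "avoid extrinsic rewards."
    ),
    ("explicit", "extrinsic"): (
        "Same capture tools as the explicit/intrinsic case; manage rewards "
        "as in the tacit/extrinsic case."
    ),
}

def recommend(knowledge: str, motivation: str) -> str:
    """Look up the suggested strategy for a given context."""
    return STRATEGY[(knowledge, motivation)]

print(recommend("tacit", "intrinsic"))
```

The point of writing it this way is simply that the manager’s choice is driven by two independent dimensions, so there are exactly four cases to consider.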

As discussed above, the motivation strategy should be determined by whether team members are intrinsically or extrinsically motivated. Unfortunately, though, the strategy is often dictated by the culture of the organization, and the manager may have little say in determining it. The authors do not discuss what a manager might do in such a situation.

Conclusion

The paper presents no new data or analysis of existing data. As such, it must be evaluated on the basis of the concepts and theoretical constructs it presents, and from this perspective there’s little that’s new in the paper. That said, project managers leading knowledge management projects might find it a worthwhile read because of its coverage of motivation theories (crowding theory and psychological contracts, in particular).

Let me end with an extrapolation of the above discussion to software projects. The holy grail of knowledge management initiatives is to capture tacit knowledge which, by definition, is difficult to codify. One sees something similar in requirements gathering for application software: the analyst needs to capture all the explicit and tacit process knowledge that’s in users’ heads. The former is easy to capture; the latter isn’t. As a result, requirements usually fail to capture tacit process knowledge. This is one aspect of what Brooks referred to as the essential problem of software design – figuring out what the software really needs to do (see this post for more on Brooks’ argument). Well designed software embodies both kinds of knowledge, so software projects are, in a sense, knowledge management projects. As far as motivation is concerned, therefore, the theories and conclusions sketched above should apply to software projects too. An intrinsically motivated development team greatly improves the chances of success; a trite statement perhaps, but one that may resonate with those who have had the privilege of working with such teams.

Written by K

June 18, 2009 at 10:20 pm