Eight to Late

Sensemaking and Analytics for Organizations


Uncertainty about uncertainty


Introduction

More often than not, managerial decisions are made on the basis of uncertain information. To lend some rigour to the process of decision making, it is sometimes assumed that uncertainties of interest can be quantified accurately using probabilities. As it turns out, this assumption can be incorrect in many situations because the probabilities themselves can be uncertain.   In this post I discuss a couple of ways in which such uncertainty about uncertainty can manifest itself.

The problem of vagueness

In a paper entitled, “Is Probability the Only Coherent Approach to Uncertainty?”,  Mark Colyvan made a distinction between two types of uncertainty:

  1. Uncertainty about some underlying fact. For example, we might be uncertain about the cost of a project – that there will be a cost is a fact, but we are uncertain about what exactly it will be.
  2. Uncertainty about situations where there is no underlying fact.  For example, we might be uncertain about whether customers will be satisfied with the outcome of a project. The problem here is the definition of customer satisfaction. How do we measure it? What about customers who are neither satisfied nor dissatisfied?  There is no clear-cut definition of what customer satisfaction actually is.

The first type of uncertainty refers to a lack of knowledge about something that we know exists. This is sometimes referred to as epistemic uncertainty – i.e. uncertainty pertaining to knowledge. Such uncertainty arises from imprecise measurements, changes in the object of interest and so on. The key point is that we know for certain that the item of interest has well-defined properties; we just don’t know what they are – hence the uncertainty. Uncertainty of this kind can be quantified accurately using probability.

Vagueness, on the other hand, arises from an imprecise use of language.  Specifically, the term refers to the use of criteria that cannot distinguish between borderline cases.  Let’s clarify this using the example discussed earlier.  A popular way to measure customer satisfaction is through surveys. Such surveys may be able to tell us that customer A is more satisfied than customer B. However, they cannot distinguish between borderline cases because any boundary between satisfied and not satisfied customers is arbitrary.  This problem becomes apparent when considering an indifferent customer. How should such a customer be classified – satisfied or not satisfied? Further, what about customers who choose not to respond? It is therefore clear that any numerical probability computed from such data cannot be considered accurate.  In other words, the probability itself is uncertain.
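To make the point concrete, here is a minimal sketch using invented survey scores on a 1–7 scale. The computed “probability of a satisfied customer” depends entirely on where the arbitrary cutoff between satisfied and not satisfied is drawn:

```python
# Invented survey scores on a 1-7 scale; 4 = indifferent.
scores = [2, 3, 4, 4, 4, 5, 5, 6, 6, 7]

def p_satisfied(cutoff):
    """Fraction of respondents at or above an (arbitrary) satisfaction cutoff."""
    return sum(1 for s in scores if s >= cutoff) / len(scores)

print(p_satisfied(4))  # 0.8 -- indifferent customers counted as satisfied
print(p_satisfied(5))  # 0.5 -- indifferent customers counted as not satisfied
```

The data cannot tell us which cutoff is the right one, so neither probability can be considered accurate.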

Ambiguity in classification

Although the distinction made by Colyvan is important, there is a deeper issue that can afflict uncertainties that appear to be quantifiable at first sight. To understand how this happens, we’ll first need to take a brief look at how probabilities are usually computed.

An operational definition of probability is that it is the ratio of the number of times the event of interest occurs to the total number of events observed. For example, if my manager notes my arrival times at work over 100 days and finds that I arrive before 8:00 am on 62 of them, he could infer that the probability of my arriving before 8:00 am is 0.62. Since the probability is assumed to equal the frequency of occurrence of the event of interest, this is sometimes called the frequentist interpretation of probability.
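The frequentist calculation can be written out in a couple of lines; the observations below simply mirror the 62-out-of-100 example from the text:

```python
# Frequentist estimate: probability = relative frequency of the event.
# True = arrived before 8:00 am; mirrors the 62-out-of-100 example above.
observations = [True] * 62 + [False] * 38

p_early = sum(observations) / len(observations)
print(p_early)  # 0.62
```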

The above seems straightforward enough, so you might be asking: where’s the problem?

The problem is that events can generally be classified in several different ways and the computed probability of an event occurring can depend on the way that it is classified. This is called the reference class problem.   In a paper entitled, “The Reference Class Problem is Your Problem Too”, Alan Hajek described the reference class problem as follows:

“The reference class problem arises when we want to assign a probability to a proposition (or sentence, or event) X, which may be classified in various ways, yet its probability can change depending on how it is classified.”

Consider the situation I mentioned earlier. My manager’s approach seems reasonable, but there is a problem with it: not all days are the same as far as my arrival times are concerned. For example, it is quite possible that my arrival time is affected by the weather: I may arrive later on rainy days than on sunny ones. So, to get a better estimate, my manager should also factor in the weather. He would then end up with two probabilities, one for fine weather and the other for foul. However, that is not all: there are a number of other criteria that could affect my arrival times – my state of health (I may call in sick and not come in to work at all), whether I worked late the previous day and so on.
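The reference class problem can be sketched in code. The attendance records below are invented; the point is that the estimated probability shifts depending on which class of days we condition on:

```python
# Invented records: (weather, arrived_early). The frequentist estimate
# depends on which reference class of days we condition on.
records = [
    ("sunny", True), ("sunny", True), ("sunny", True), ("sunny", False),
    ("rainy", True), ("rainy", False), ("rainy", False), ("rainy", False),
]

def p_early(reference_class=None):
    """Relative frequency of early arrival within a reference class of days."""
    days = [early for weather, early in records
            if reference_class is None or weather == reference_class]
    return sum(days) / len(days)

print(p_early())         # 0.5  -- all days lumped together
print(p_early("sunny"))  # 0.75 -- sunny days only
print(p_early("rainy"))  # 0.25 -- rainy days only
```

Each estimate is a perfectly good relative frequency; the data alone cannot tell us which reference class is the right one.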

What seemed like a straightforward problem is no longer so because of the uncertainty regarding which reference class is the right one to use.

Before closing this section, I should mention that the reference class problem has implications for many professional disciplines. I have discussed its relevance to project management in my post entitled, “The reference class problem and its implications for project management”.

To conclude

In this post we have looked at a couple of forms of uncertainty about uncertainty that have practical implications for decision makers. In particular, we have seen that probabilities used in managerial decision making can be uncertain because of vague definitions of events and/or ambiguities in their classification. The bottom line for those who use probabilities to support decision making is to ensure that the criteria used to determine events of interest refer to unambiguous facts that are appropriate to the situation at hand. To sum up: decisions made on the basis of probabilities are only as good as the assumptions that go into them, and those assumptions may themselves be prone to uncertainties such as the ones described in this article.

Written by K

September 29, 2011 at 10:34 pm

Mapping project dialogues using IBIS – a paper preview


Work commitments have conspired to keep this post short. Well, short compared to my usual long-winded essays at any rate. Among other things, I’m currently helping  get a biggish project started while also trying to finish my current writing commitments in whatever little free time I have.  Fortunately, I have a ready-made topic to write about this week:  my recently published paper on the use of dialogue mapping in project management.  Instead of summarizing the paper, as I usually do in my paper reviews, I’ll simply present some background to the paper and describe, in brief, my rationale for writing it.

As regular readers of this blog will know, I am a fan of dialogue mapping, a conversation mapping technique pioneered by Jeff Conklin. Those unfamiliar with the technique will find a super-quick introduction here. Dialogue mapping uses a visual notation called issue-based information system (IBIS), which I have described in detail in this post. IBIS was invented by Horst Rittel as a means to capture and clarify facets of wicked problems – problems that are hard to define, let alone solve. However, as I discuss in the paper, the technique also has utility in the much more mundane day-to-day business of managing projects.

In essence, IBIS provides a means to capture questions,  responses to questions and arguments for and against those responses. This, coupled with the fact that it is easy to use, makes it eminently suited to capturing conversations in which issues are debated and resolved. Dialogue mapping is therefore a great way to surface options, debate them and reach a “best for group” decision in real-time. The technique thus has many applications in organizational settings. I have used it regularly in project meetings, particularly those in which critical decisions regarding design or approach are being discussed.
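To illustrate the structure IBIS captures, here is a minimal sketch of an IBIS map as a data structure – a question with candidate responses, each carrying arguments for and against. The class names and example content are invented for illustration, not part of any IBIS tool:

```python
# A question (issue) has candidate ideas (responses), each with
# arguments for (pros) and against (cons) -- the three IBIS node types.
from dataclasses import dataclass, field

@dataclass
class Idea:
    text: str
    pros: list = field(default_factory=list)  # arguments supporting the idea
    cons: list = field(default_factory=list)  # arguments against the idea

@dataclass
class Question:
    text: str
    ideas: list = field(default_factory=list)

q = Question("What approach should we take to load the data warehouse?")
q.ideas.append(Idea("Nightly batch loads",
                    pros=["Simple to schedule"],
                    cons=["Data is up to a day stale"]))
q.ideas.append(Idea("Near-real-time streaming",
                    pros=["Fresh data"],
                    cons=["More complex infrastructure"]))

for idea in q.ideas:
    print(f"{q.text} -> {idea.text}: +{len(idea.pros)} / -{len(idea.cons)}")
```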

Early last year I used the technique to kick-start a data warehousing initiative within the organisation I work for. In the paper I use this experience as a case-study to illustrate some key aspects and features of dialogue mapping that make it useful in project discussions.  For completeness I also discuss why other visual notations for decision and design rationale don’t work as well as IBIS for capturing conversations in real-time. However, the main rationale for the paper is to provide a short,  self-contained introduction to the technique via a realistic case-study.

Most project managers would have had to confront questions such as “what approach should we take to solve this problem?” in situations where there is not enough information to make a sound decision. In such situations, the only recourse one has is to dialogue – to talk it over with the team, and thereby reach a shared understanding of the options available. More often than not, a  consensus decision emerges from such dialogue.  Such a decision would be based on the collective knowledge of the team, not just that of an individual.  Dialogue mapping provides a means to get to such a collective decision.

Why deliberation trumps standard decision-making methods


Wikipedia defines decision analysis as the discipline comprising the philosophy, theory, methodology, and professional practice necessary to address important decisions in a formal manner.  Standard decision-making techniques generally involve the following steps:

  1. Identify available options.
  2. Develop criteria for rating options.
  3. Rate options according to criteria developed.
  4. Select the top-ranked option.
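The four steps above can be sketched as a simple weighted-scoring matrix; the options, criteria, weights and ratings below are all invented for illustration:

```python
# Steps 1-4 of standard decision analysis as a weighted-scoring matrix.
options = ["Option A", "Option B", "Option C"]        # step 1: options
criteria = {"cost": 0.5, "risk": 0.3, "benefit": 0.2}  # step 2: criteria/weights

# Step 3: ratings[option][criterion] on a 1-10 scale (higher is better).
ratings = {
    "Option A": {"cost": 7, "risk": 5, "benefit": 8},
    "Option B": {"cost": 4, "risk": 9, "benefit": 6},
    "Option C": {"cost": 6, "risk": 6, "benefit": 4},
}

def score(option):
    return sum(weight * ratings[option][c] for c, weight in criteria.items())

# Step 4: select the top-ranked option.
best = max(options, key=score)
print(best, round(score(best), 2))
```

As the article goes on to argue, the weakest links in this procedure are precisely the numbers fed into it.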

This sounds great in theory but, as Tim van Gelder points out in an article entitled The Wise Delinquency of Decision Makers, formal methods of decision analysis are not used as often as textbooks and decision theorists would have us believe. This, he argues, isn’t due to ignorance: even those trained in such methods often do not use them for decisions that really matter. Instead they resort to deliberation – weighing up options in light of the arguments and evidence for and against them. He discusses why this is so, and also points out some problems with deliberative methods and what can be done to fix them. This post is a summary of the main points he makes in the article.

To begin with, formal methods aren’t suited to many decision-making problems encountered in the real world. For instance:

  1. Real-world options often cannot be quantified or rated in a meaningful way. Many of life’s dilemmas fall into this category. For example, a decision to accept or decline a job offer is rarely made on the basis of material gain alone.
  2. Even where ratings are possible, they can be highly subjective. For example, when considering a job offer, one candidate may give more importance to financial matters whereas another might consider lifestyle-related matters (flexi-hours, commuting distance etc.) to be paramount. Another complication here is that there may not be enough information to settle the matter conclusively. As an example, investment decisions are often made on the basis of quantitative information that is based on questionable assumptions.
  3. Finally, the problem may be wicked – i.e. complex, multi-faceted and difficult to analyse using formal decision making methods. Classic examples of wicked problems are climate change (so much so, that some say it is not even a problem) and city / town planning. Such problems cannot be forced into formal decision analysis frameworks in any meaningful way.

Rather than rating options and assigning scores, deliberation involves making arguments for and against each option and weighing them up in some consistent (but qualitative) way. In contrast to textbook methods of decision analysis, this is essentially an informal process; there is no prescribed method that one must follow. One could work through the arguments oneself or in conversation with others. Because of the points listed above, deliberation is often better suited to many of the decisions we are confronted with in our work and personal lives (see this post for a real-life example of deliberative decision making).

However, as Van Gelder points out,

The trouble is that deliberative decision making is still a very problematic business. Decisions go wrong all the time. Textbook decision methods were developed, in part, because it was widely recognized that our default or habitual decision making methods are very unreliable.

He  lists four problems with deliberative methods:

  1. Biases – Many poor decisions can be traced back to cognitive biases – errors of judgement based on misperceptions of situations, data or evidence. A common example of such a bias is overconfidence in one’s own judgement. See this post for a discussion of how failures of high-profile projects may have been due to cognitive biases.
  2. Emotions – It is difficult, if not impossible, to be completely rational when making a decision – even a simple one.  However, emotions can cloud judgement and lead to decisions being made on the basis of pride, anger or envy rather than a clear-headed consideration of known options and their pros and cons.
  3. Tyranny of the group – Important decisions are often made by committees. Such decisions are subjected to collective biases such as groupthink – the tendency of group members to think alike and ignore external inputs so as to avoid internal conflicts. See this post for a discussion of groupthink in project environments. 
  4. Lack of training – People end up making poor decisions because they lack knowledge of informal logic and argumentation, skills that can be taught and then honed through practice.

Improvements in our ability to deliberate can be brought about by addressing the above. Clearly, it is difficult to be completely objective when confronted with tough decisions, just as it is impossible to rid ourselves of our (individual and collective) biases. That said, any technique that lays out all the options and the arguments for and against them in an easy-to-understand way may help in making our biases and emotions (and those of others) obvious. Visual notations such as IBIS (Issue-Based Information Systems) and Argument Mapping do just that. See this post for more on why it is better to represent reasoning visually than in prose.

The use of techniques such as the ones listed in the previous paragraph can lead to immediate improvements in corporate decision making. Firstly, because gaps in logic and weaknesses in supporting evidence are made obvious, those responsible for formulating, say, a business case can focus on improving their arguments prior to presenting them to senior managers. Secondly, decision makers can see the logic, supporting materials and the connections between them at a glance. In short: those formulating an argument and those making decisions based on it can focus on the essential points of the matter without having to wade through reams of documentation or tedious presentations.

To summarise: formal decision-making techniques are unsuited to complex problems  or those that have  options that cannot be quantified in a meaningful way. For such issues, deliberation –  supplemented by visual notations such as IBIS or Argument Mapping – offers a better alternative.

Written by K

May 13, 2011 at 5:32 am