Archive for the ‘Argumentation’ Category
Why deliberation trumps standard decision-making methods
Wikipedia defines decision analysis as the discipline comprising the philosophy, theory, methodology, and professional practice necessary to address important decisions in a formal manner. Standard decision-making techniques generally involve the following steps:
- Identify available options.
- Develop criteria for rating options.
- Rate options according to criteria developed.
- Select the top-ranked option.
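As a concrete (and deliberately simplified) illustration, the four steps above can be sketched as a weighted-scoring routine. The options, criteria, weights and ratings below are hypothetical:

```python
# Hypothetical sketch of textbook decision analysis: score each
# option against weighted criteria and pick the top-ranked one.

def rank_options(options, weights):
    """options: {name: {criterion: rating}}; weights: {criterion: weight}."""
    scores = {
        name: sum(weights[c] * rating for c, rating in ratings.items())
        for name, ratings in options.items()
    }
    # Highest total score first
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

weights = {"salary": 0.5, "commute": 0.2, "flexibility": 0.3}
options = {
    "Job A": {"salary": 8, "commute": 3, "flexibility": 5},
    "Job B": {"salary": 6, "commute": 9, "flexibility": 7},
}
ranked = rank_options(options, weights)
print(ranked[0][0])  # → Job B
```

As the rest of the post argues, the hard part is not this arithmetic but the assumption that meaningful ratings and weights exist at all.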
This sounds great in theory, but as Tim van Gelder points out in an article entitled The Wise Delinquency of Decision Makers, formal methods of decision analysis are not used as often as textbooks and decision theorists would have us believe. This, he argues, isn’t due to ignorance: even those trained in such methods often do not use them for decisions that really matter. Instead they resort to deliberation – weighing up options in light of the arguments and evidence for and against them. He discusses why this is so, points out some problems with deliberative methods, and suggests what can be done to fix them. This post is a summary of the main points he makes in the article.
To begin with, formal methods aren’t suited to many decision-making problems encountered in the real world. For instance:
- Real-world options often cannot be quantified or rated in a meaningful way. Many of life’s dilemmas fall into this category. For example, a decision to accept or decline a job offer is rarely made on the basis of material gain alone.
- Even where ratings are possible, they can be highly subjective. For example, when considering a job offer, one candidate may give more importance to financial matters whereas another might consider lifestyle-related matters (flexi-hours, commuting distance etc.) to be paramount. Another complication here is that there may not be enough information to settle the matter conclusively. As an example, investment decisions are often made on the basis of quantitative information that is based on questionable assumptions.
- Finally, the problem may be wicked – i.e. complex, multi-faceted and difficult to analyse using formal decision-making methods. Classic examples of wicked problems are climate change (so much so that some say it is not even a problem) and city/town planning. Such problems cannot be forced into formal decision analysis frameworks in any meaningful way.
Rather than rating options and assigning scores, deliberation involves making arguments for and against each option and weighing them up in some consistent (but qualitative) way. In contrast to textbook methods of decision analysis, this is essentially an informal process; there is no prescribed method that one must follow. One could work through the arguments oneself or in conversation with others. Because of the points listed above, deliberation is often better suited to many of the decisions we are confronted with in our work and personal lives (see this post for a real-life example of deliberative decision making).
However, as Van Gelder points out,
The trouble is that deliberative decision making is still a very problematic business. Decisions go wrong all the time. Textbook decision methods were developed, in part, because it was widely recognized that our default or habitual decision making methods are very unreliable.
He lists four problems with deliberative methods:
- Biases – Many poor decisions can be traced back to cognitive biases – errors of judgement based on misperceptions of situations, data or evidence. A common example of such a bias is overconfidence in one’s own judgement. See this post for a discussion of how failures of high-profile projects may have been due to cognitive biases.
- Emotions – It is difficult, if not impossible, to be completely rational when making a decision – even a simple one. However, emotions can cloud judgement and lead to decisions being made on the basis of pride, anger or envy rather than a clear-headed consideration of known options and their pros and cons.
- Tyranny of the group – Important decisions are often made by committees. Such decisions are subjected to collective biases such as groupthink – the tendency of group members to think alike and ignore external inputs so as to avoid internal conflicts. See this post for a discussion of groupthink in project environments.
- Lack of training – People end up making poor decisions because they lack knowledge of informal logic and argumentation, skills that can be taught and then honed through practice.
Improvements in our ability to deliberate can be brought about by addressing the above. Clearly, it is difficult to be completely objective when confronted with tough decisions, just as it is impossible to rid ourselves of our (individual and collective) biases. That said, any technique that lays out all the options and the arguments for and against them in an easy-to-understand way can help make our biases and emotions (and those of others) obvious. Visual notations such as IBIS (Issue-Based Information Systems) and Argument Mapping do just that. See this post for more on why it is better to represent reasoning visually than in prose.
The use of techniques such as IBIS and argument mapping can lead to immediate improvements in corporate decision making. Firstly, because gaps in logic and weaknesses in supporting evidence are made obvious, those responsible for formulating, say, a business case can focus on improving their arguments prior to presenting them to senior managers. Secondly, decision makers can see the logic, supporting materials and the connections between them at a glance. In short: those formulating an argument and those making decisions based on it can focus on the essential points of the matter without having to wade through reams of documentation or tedious presentations.
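To make this concrete, here is a minimal sketch of how an IBIS-style map might be represented programmatically. IBIS is a visual notation rather than a library, so the Node class, node kinds and helper function below are entirely hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical IBIS-style tree: an issue is answered by positions,
# which are supported ("pro") or attacked ("con") by arguments.

@dataclass
class Node:
    kind: str              # "issue", "position", "pro" or "con"
    text: str
    children: list = field(default_factory=list)

def add(parent, kind, text):
    child = Node(kind, text)
    parent.children.append(child)
    return child

issue = Node("issue", "Should we accept the job offer?")
accept = add(issue, "position", "Yes, accept it")
add(accept, "pro", "Higher salary")
add(accept, "con", "Much longer commute")

def count(node, kind):
    """Count nodes of a given kind in the (sub)tree."""
    return (node.kind == kind) + sum(count(c, kind) for c in node.children)
```

Laying arguments out as an explicit tree like this is what makes gaps – for instance, a position with no supporting arguments at all – easy to spot.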
To summarise: formal decision-making techniques are unsuited to complex problems or those that have options that cannot be quantified in a meaningful way. For such issues, deliberation – supplemented by visual notations such as IBIS or Argument Mapping – offers a better alternative.
Value judgements in system design
Introduction
How do we choose between competing design proposals for information systems? In principle this should be done using an evaluation process based on objective criteria. In practice, though, people generally make choices based on their interests and preferences. These can differ widely, so decisions are often made on the basis of criteria that do not satisfy the interests of all stakeholders. Consequently, once a system becomes operational, complaints from stakeholder groups whose interests were overlooked are almost inevitable (Just think back to any system implementation that you were involved in…).
The point is: choosing between designs is not a purely technical issue; it also involves value judgements – what’s right and what’s wrong – or even, what’s good and what’s not. Problem is, this is a deeply personal matter – different folks have different values and, consequently, differing ideas of which design ideal is best (Note: the term ideal refers to the value judgements associated with a design). Ten years ago, Heinz Klein and Rudy Hirschheim addressed this issue in a landmark paper entitled, Choosing Between Competing Design Ideals in Information Systems Development. This post is a summary of the main ideas presented in their paper.
Design ideals and deliberation
A design ideal is not an abstract, philosophical concept. The notion of good and bad or right and wrong can be applied to the technical, economic and social aspects of a system. For example, a choice between building and buying a system has different economic and social consequences for stakeholder groups within and outside the organisation. What’s more, the competing ideals may be in conflict – developers employed in the organisation would obviously prefer to build rather than buy because their employment depends on it; management, however, may have a very different take on the issue.
The essential point that Hirschheim and Klein make is that differences in values can be reconciled only through honest discussion. They propose a deliberative approach wherein all stakeholders discuss issues in order to come to an agreement. To this end, they draw on the theories of argumentation and communicative rationality to come up with a rational framework for comparing design ideals. Since these terms may be unfamiliar, I’ll spend a couple of paragraphs describing them briefly.
Argumentation is essentially reasoned debate – i.e. the process of reaching conclusions via arguments that use informal logic – which, according to the definition in the foregoing link, is the attempt to develop a logic to assess, analyse and improve ordinary language (or “everyday”) reasoning. Hirschheim and Klein use the argumentation framework proposed by Stephen Toulmin to illustrate their approach.
The basic premise of communicative rationality is that rationality (or reason) is tied to social interactions and dialogue. In other words, the exercise of reason can occur only through dialogue. Such communication, or mutual deliberation, ought to result in a general agreement about the issues under discussion. Only once such agreement is achieved can there be a consensus on actions that need to be taken. See my article on rational dialogue in project environments for more on communicative rationality.
Obstacles to rational dialogue and how to overcome them
The key point about communicative rationality is that it assumes the following conditions hold:
- Inclusion: the discussion includes all stakeholders.
- Autonomy: all participants should be able to present and criticise views without fear of reprisals.
- Empathy: participants must be willing to listen to and understand claims made by others.
- Power neutrality: power differences (levels of authority) between participants should not affect the discussion.
- Transparency: participants must not indulge in strategic actions (i.e. lying!).
Clearly these are idealistic conditions, difficult to achieve in any real organisation. Klein and Hirschheim acknowledge this point, and note the following barriers to rationality in organisational decision making:
- Social barriers: These include inequalities (between individuals) in power, status, education and resources.
- Motivational barriers: This refers to the psychological cost of prolonged debate. After a period of sustained debate, people will often cave in just to stop arguing even though they may have the better argument.
- Economic barriers: Time is money: most organisations cannot afford a prolonged debate on contentious issues.
- Personality differences: How often is it that the most charismatic or articulate person gets their way, and the quiet guy in the corner (with a good idea or two) is completely overlooked?
- Linguistic barriers: This refers to the difficulty of formulating arguments in a way that makes sense to the listener. This involves, among other things, the ability to present ideas succinctly without glossing over the important issues – a skill not possessed by many.
These barriers will come as no surprise to most readers. It will be just as unsurprising that it is difficult to overcome them. Klein and Hirschheim offer the usual solutions including:
- Encourage open debate – They suggest the use of technologies that support collaboration. They can be forgiven for their optimism given that the paper was written a decade ago, but the fact of the matter is that all the technologies that have sprouted since have done little to encourage open debate.
- Implement group decision techniques – these include arrangements such as quality circles, nominal groups and constructive controversy. However, these too will not work unless people feel safe enough to articulate their opinions freely.
Even though the barriers to open dialogue are daunting, it behooves system designers to strive towards reducing or removing them. There are effective ways to do this, but that’s a topic I won’t go into here as it has been dealt with at length elsewhere.
Principles for arguments about value judgements
So, assuming the environment is right, how should we debate value judgements? Klein and Hirschheim recommend using informal logic supplemented with ethical principles. Let’s look at these two elements briefly.
Informal logic is a means to reason about human concerns. Typically, in these issues there is no clear cut notion of truth and falsity. Toulmin’s argumentation framework (mentioned earlier in this post) tells us how arguments about such issues should be constructed. It consists of the following elements:
- Claim: A statement that one party asks another to accept as true. An example would be my claim that I did not show up to work yesterday because I was not well.
- Data (Evidence): The basis on which one party expects the other to accept a claim as true. To back the claim made in the previous line, I might draw attention to my runny nose and hoarse voice.
- Warrant: The bridge between the data and the claim. Using the same example, the warrant would be the commonly accepted rule that a runny nose and a hoarse voice are symptoms of illness.
- Backing: Further evidence, offered if the warrant should prove insufficient. If my boss is unconvinced by my appearance, he may insist on a doctor’s certificate.
- Qualifier: Words that express the degree of certainty attached to the claim. For instance, I might concede to my boss that I was almost certainly (rather than definitely) too sick to come in.
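The five elements can be captured in a simple record. This is only an illustrative sketch using the sick-day example above; the class and field names are my own, not part of Toulmin’s framework:

```python
from dataclasses import dataclass

# Hypothetical record of a Toulmin-style argument; each field
# corresponds to one element in the list above.

@dataclass
class ToulminArgument:
    claim: str      # what I want accepted as true
    data: str       # evidence offered for the claim
    warrant: str    # rule linking the data to the claim
    backing: str    # fallback evidence if the warrant is challenged
    qualifier: str  # degree of certainty attached to the claim

sick_day = ToulminArgument(
    claim="I did not come to work yesterday because I was unwell",
    data="I have a runny nose and a hoarse voice",
    warrant="A runny nose and hoarse voice are typical symptoms of illness",
    backing="A doctor's certificate confirming the illness",
    qualifier="almost certainly",
)
```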
This is all quite theoretical: when we debate issues we do not stop to think whether a statement is a warrant or a backing or something else; we just get on with the argument. Nevertheless, knowledge of informal logic can help us construct better arguments for our positions. Further, at the practical level there are computer supported deliberative techniques such as argument mapping and dialogue mapping which can assist in structuring and capturing such arguments.
The other element is ethics: Klein and Hirschheim contend that moral and ethical principles ought to be considered when value judgements are being evaluated. These principles include:
- Ought implies can – which essentially means that one morally ought to do something only if one can do it (see this paper for an interesting counterview of this principle). Taking the negation of this statement, one gets “Cannot implies ought not” which means that a design can be criticised if it involves doing something that is (demonstrably) impossible – or makes impossible demands on certain parties.
- Conflicting principles – This is best explained via an example. Consider a system that saves an organisation money but involves putting a large number of people out of work. Here an economic principle comes up against a social one. In such cases, a design ideal based on an economic imperative can legitimately be criticised on social grounds.
- The principle of reconsideration – This implies reconsidering decisions if relevant new information becomes available. For example, if it is found that a particular design overlooked a certain group of users, then the design should be reconsidered in the light of their needs.
They also mention that new ethical and moral theories may trigger the principle of reconsideration. In my opinion, however, this is a relatively rare occurrence: how often are new ethical or moral theories proposed?
Summarising
The main point made by the authors is that system design involves value judgements. Since these are largely subjective, open debate using the principles of informal logic is the best way to deal with conflicting values. Moreover, since such issues are not entirely technical, one has to use ethical principles to guide debate. These principles – not asking people to do the impossible, taking everyone’s interests into account, and reconsidering decisions in the light of new information – are reasonable, if not self-evident. However, as obvious as they are, they are often ignored in design deliberations. Hirschheim and Klein do us a great service by reminding us of their importance.

