Archive for the ‘Wicked Problems’ Category
3 or 7, truth or trust
“It is clear that ethics cannot be articulated.” – Ludwig Wittgenstein
Over the last few years I’ve been teaching and refining a series of lecture-workshops on Decision Making Under Uncertainty. Audiences include data scientists and mid-level managers working in corporates and public service agencies. The course is based on the distinction between uncertainties in which the variables are known and can be quantified versus those in which the variables are not known upfront and/or are hard to quantify.
Before going any further, it is worth explaining the distinction via a couple of examples:
An example of the first type of uncertainty is project estimation. A project has an associated time and cost, and although we don’t know what their values are upfront, we can estimate them if we have the right data. The point to note is this: because such problems can be quantified, the human brain tends to deal with them in a logical manner.
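To make this concrete, here is a minimal sketch of how such an estimation problem might be tackled by Monte Carlo simulation. The task names and duration ranges are invented for illustration, and the triangular distribution is just one common modelling assumption:

```python
import random

# Hypothetical tasks with (optimistic, most likely, pessimistic) durations in days.
# The numbers are purely illustrative.
tasks = {
    "design": (5, 10, 20),
    "build":  (15, 25, 50),
    "test":   (5, 10, 25),
}

def simulate_project(n_trials=10_000):
    """Simulate total project duration by sampling each task from a triangular distribution."""
    totals = []
    for _ in range(n_trials):
        total = sum(random.triangular(low, high, mode)
                    for (low, mode, high) in tasks.values())
        totals.append(total)
    return sorted(totals)

durations = simulate_project()
p50 = durations[len(durations) // 2]
p90 = durations[int(len(durations) * 0.9)]
print(f"Median estimate: {p50:.0f} days; 90th percentile: {p90:.0f} days")
```

The details do not matter; the point is that once the variables are known, the uncertainty can be quantified and reasoned about logically. No comparable calculation is available for the second kind of uncertainty, described next.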
In contrast, business strategy is an example of the second kind of uncertainty. Here we do not know what the key variables are upfront. Indeed we cannot, because different stakeholders will perceive different aspects of a strategy to be paramount depending on their interests – consider, for example, the perspective of a CFO versus that of a CMO. Because of these differences, one cannot make progress on such problems until agreement has been reached on what is important to the group as a whole. The point to note here is that since such problems involve contentious issues, our reactions to them tend to be emotional rather than logical.
The difference between the two types of uncertainty is best conveyed experientially, so I have a few in-class activities aimed at doing just that. One of them is an exercise I call “3 or 7”, in which I give students a sheet with the following printed on it:
Circle either the number 3 or 7 below depending on whether you want 3 marks or 7 marks added to your Assignment 2 final mark. Yes, this offer is for real, but there is a catch: if more than 10% of the class select 7, no one gets anything.
Write your student ID on the paper so that Kailash can award you the marks. Needless to say, your choice will remain confidential: no one (but Kailash) will know what you have selected.
3 7
Prior to handing out the sheet, I tell them that they:
- should sit far enough apart so that they can’t see what their neighbours choose,
- are not allowed to communicate their choices to others until the entire class has turned in their sheets.
Before reading any further you may want to think about what typically happens.
–x–
Many readers will have recognized this exercise as a version of the Prisoner’s Dilemma and, indeed, many students in my classes recognize it too. Even so, there are always enough “win at the cost of others” types in the room to ensure that I never have to award any extra marks. I’ve run the exercise about 10 times, often with groups composed of highly collaborative individuals who work well together. Despite that, 15–20% of the class ends up opting for 7.
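For the analytically inclined, a short simulation shows why the collective outcome is so fragile. The sketch below is mine, not part of the exercise; it assumes a hypothetical class of 30 students, each of whom independently picks 7 with some fixed probability, and estimates how often the 10% threshold is breached:

```python
import random

def prob_threshold_breached(class_size=30, p_pick_7=0.15,
                            threshold=0.10, n_trials=100_000):
    """Estimate the probability that more than `threshold` of the class picks 7."""
    breaches = 0
    for _ in range(n_trials):
        picks_7 = sum(random.random() < p_pick_7 for _ in range(class_size))
        if picks_7 > threshold * class_size:
            breaches += 1
    return breaches / n_trials

for p in (0.05, 0.10, 0.15, 0.20):
    print(f"P(no one gets extra marks) when each student picks 7 "
          f"with probability {p:.0%}: {prob_threshold_breached(p_pick_7=p):.2f}")
```

The exact figures depend on the class size and the assumed propensity to defect, but the broad message is clear: it takes only a small minority of defectors to sink the collective outcome.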
It never fails to surprise me that, even in relatively close-knit groups, there are invariably a few individuals who, given a chance to gain at the expense of their colleagues, will not hesitate to do so provided their anonymity is ensured.
–x–
Conventional management thinking deems that any organisational activity involving several people has to be closely supervised. Underlying this view is the assumption that individuals involved in the activity will, if left unsupervised, make decisions based on self-interest rather than the common good (as happens in the prisoner’s dilemma game). This assumption finds justification in rational choice theory, which predicts that individuals will act in ways that maximise their personal benefit without any regard to the common good. This view is exemplified in 3 or 7 and, at a societal level, in the so-called Tragedy of the Commons, where individuals who have access to a common resource over-exploit it, thus depleting the resource entirely.
Fortunately, such a scenario need not come to pass: the work of Elinor Ostrom, a co-recipient of the 2009 Nobel Memorial Prize in Economic Sciences, shows that, given the right conditions, groups can work towards the common good even if it means forgoing personal gains.
Classical economics assumes that individuals’ actions are driven by rational self-interest – i.e. the well-known “what’s in it for me” factor. Yet a group that shares a common resource will clearly achieve much better results as a whole if it exploits the resource cooperatively. There are several real-world examples in which such cooperative behaviour has achieved outcomes for the common good (this paper touches on some). According to classical economic theory, however, such cooperative behaviour is simply not possible.
So the question is: what’s wrong with rational choice theory? A couple of things, at least:
Firstly, implicit in rational choice theory is the assumption that individuals can figure out the best choice in any given situation. This is obviously incorrect. As Ostrom has stated in one of her papers:
Because individuals are boundedly rational, they do not calculate a complete set of strategies for every situation they face. Few situations in life generate information about all potential actions that one can take, all outcomes that can be obtained, and all strategies that others can take.
Instead, they use heuristics (experience-based methods), norms (value-based techniques) and rules (mutually agreed regulations) to arrive at “good enough” decisions. Note that Ostrom makes a distinction between norms and rules: norms are implicit (unstated) rules shaped by cultural attitudes and values, whereas rules are explicitly stated and mutually agreed.
Secondly, rational choice theory assumes that humans behave as self-centred, short-term maximisers. Such theories work in competitive situations such as the stock market, but not in situations in which collective action is called for, such as the prisoner’s dilemma.
Ostrom’s work essentially addresses the limitations of rational choice theory by outlining how individuals can work together to overcome self-interest.
–x–
In a 1998 paper entitled A Behavioral Approach to the Rational Choice Theory of Collective Action, Ostrom states that:
…much of our current public policy analysis is based on an assumption that rational individuals are helplessly trapped in social dilemmas from which they cannot extract themselves without inducement or sanctions applied from the outside. Many policies based on this assumption have been subject to major failure and have exacerbated the very problems they were intended to ameliorate. Policies based on the assumptions that individuals can learn how to devise well-tailored rules and cooperate conditionally when they participate in the design of institutions affecting them are more successful in the field…[Note: see this book by Baland and Platteau, for example]
Since rational choice theory assumes that individuals act to maximise personal gain, it does not work in situations that demand collective action – and Ostrom presents some very general evidence to back this claim. More interesting than the refutation of rational choice theory, though, is Ostrom’s discussion of the ways in which individuals “trapped” in social dilemmas end up making the right choices. In particular she singles out two empirically grounded ways in which individuals work towards outcomes that are much better than those predicted by rational choice theory. These are:
Communication: In the rational view, communication makes no difference to the outcome. That is, even if individuals make promises and commitments to each other (through communication), they will invariably break these for the sake of personal gain …or so the theory goes. In real life, however, it has been found that opportunities for communication significantly raise the cooperation rate in collective efforts (see this paper abstract or this one, for example). Moreover, research shows that face-to-face communication is far superior to any other form, and that its main benefit lies in the exchange of mutual commitments (“I promise to do this if you’ll promise to do that”) and the building of trust between individuals. It is interesting that the main role of communication is to enhance or reinforce the relationship between individuals rather than to transfer information. This is in line with the interactional theory of communication.
Innovative Governance: Communication by itself may not be enough; there must be consequences for those who break promises and commitments. Accordingly, cooperation can be encouraged by implementing mutually accepted rules for individual conduct, and imposing sanctions on those who violate them. This effectively amounts to designing and implementing novel governance structures for the activity. Note that this must be done by the group; rules thrust upon the group by an external authority are unlikely to work.
Of course, these factors do not come into play in artificially constrained and time-bound scenarios like 3 or 7: there is neither the opportunity nor the time to communicate or to set up governance structures. What is clear, even from this simple exercise, is that such mechanisms are needed even in groups that appear to be close-knit.
Ostrom also identifies three core relationships that promote cooperation. These are:
Reciprocity: this refers to a family of strategies that are based on the expectation that people will respond to each other in kind – i.e. that they will do unto others as others do unto them. In group situations, reciprocity can be a very effective means to promote and sustain cooperative behaviour.
Reputation: This refers to the general view that others hold of a person and is thus part of that person’s identity. In situations demanding collective action, people make judgements about a person’s reliability and trustworthiness based on his or her reputation.
Trust: Trust refers to expectations regarding others’ responses in situations where one has to act before others. And if you think about it, everything else in Ostrom’s framework is ultimately aimed at engendering or – if that doesn’t work – enforcing trust.
–x–
In an article on ethics and second-order cybernetics, Heinz von Foerster tells the following story:
I have a dear friend who grew up in Marrakech. The house of his family stood on the street that divides the Jewish and the Arabic quarter. As a boy he played with all the others, listened to what they thought and said, and learned of their fundamentally different views. When I asked him once, “Who was right?” he said, “They are both right.”
“But this cannot be,” I argued from an Aristotelian platform, “Only one of them can have the truth!”
“The problem is not truth,” he answered, “The problem is trust.”
For me, that last line summarises the lesson implicit in the admittedly artificial scenario of 3 or 7. In our search for facts and decision-making frameworks we forget the simple truth that in many real-life dilemmas these matter less than we think. Facts and frameworks cannot help us decide on ambiguous matters in which the outcome depends on what other people do. In such cases the problem is not truth; the problem is trust. From your own experience it should be evident that it is impossible to convince others of your trustworthiness by assertion; the only way to do so is by behaving in a trustworthy way. That is, by behaving ethically rather than talking about it, a point that is squarely missed by so-called business ethics classes.
Yes, it is clear that ethics cannot be articulated.
Notes:
- Portions of this article are lightly edited sections from a 2009 article that I wrote on Ostrom’s work and its relevance to project management.
- Finally, an unrelated but important matter for which I seek your support for a common good: I’m taking on the 7 Bridges Walk to help those affected by cancer. Please donate via my 7 Bridges fundraising page if you can. Every dollar counts; all funds raised will help Cancer Council work towards the vision of a cancer-free future.
Learning, evolution and the future of work
The Janus-headed rise of AI has prompted many discussions about the future of work. Most, if not all, are about AI-driven automation and its consequences for various professions. We are warned to prepare for this change by developing skills that cannot easily be “learnt” by machines. This sounds reasonable at first, but less so on reflection: if tasks that were thought to require uniquely human skills less than a decade ago can now be performed, at least partially, by machines, there is no guarantee that any specific skill one chooses to develop will remain automation-proof in the medium term.
This raises the question of what we can do, as individuals, to prepare for a machine-centric workplace. In this post I offer a perspective on this question based on Gregory Bateson’s writings as well as my work and teaching experiences.
Levels of learning
Given that humans are notoriously poor at predicting the future, it should be clear that hitching one’s professional wagon to a specific set of skills is not a good strategy. Learning a particular set of skills may pay off in the short term, but it is unlikely to work in the long run.
So what can one do to prepare for an ambiguous and essentially unpredictable future?
To answer this question, we need to delve into an important, yet oft-overlooked aspect of learning.
A key characteristic of learning is that it is driven by trial and error. To be sure, intelligence may help winnow out poor choices at some stages of the process, but one cannot eliminate error entirely. Indeed, it is not desirable to do so because error is essential for that “aha” instant that precedes insight. Learning therefore has a stochastic element: the specific sequence of trial and error followed by an individual is unpredictable and likely to be unique. This is why everyone learns differently: the mental model I build of a concept is likely to be different from yours.
In a paper entitled The Logical Categories of Learning and Communication, Bateson pointed out that the stochastic nature of learning has an interesting consequence. As he puts it:
If we accept the overall notion that all learning is in some degree stochastic (i.e., contains components of “trial and error”), it follows that an ordering of the processes of learning can be built upon a hierarchic classification of the types of error which are to be corrected in the various learning processes.
Let’s unpack this claim by looking at his proposed classification:
Zero order learning – Zero order learning refers to situations in which a given stimulus (or question) results in the same response (or answer) every time. Any instinctive behaviour – such as a reflex response on touching a hot kettle – is an example of zero order learning. Such learning is hard-wired in the learner, who responds with the “correct” option to a fixed stimulus every single time. Since the response does not change with time, the process is not subject to trial and error.
First order learning (Learning I) – Learning I is where an individual learns to select a correct option from a set of similar elements. It involves a specific kind of trial and error that is best explained through a couple of examples. The canonical example of Learning I is memorization: Johnny recognises the letter “A” because he has learnt to distinguish it from the 25 other similar possibilities. Another example is Pavlovian conditioning wherein the subject’s response is altered by training: a dog that initially salivates only when it smells food is trained, by repetition, to salivate when it hears the bell.
A key characteristic of Learning I is that the individual learns to select the correct response from a set of comparable possibilities – comparable because the possibilities are of the same type (e.g. picking a letter from the alphabet). Consequently, first order learning cannot lead to a qualitative change in the learner’s response. Much of traditional school and university teaching is geared toward first order learning: students are taught to develop the “correct” understanding of concepts and techniques via a repetition-based process of trial and error.
As an aside, note that much of what goes under the banner of machine learning and AI can also be classed as first order learning.
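To make the parallel concrete, here is a toy illustration (mine, not Bateson’s) of first order learning: a learner that, through repeated trial, error and correction, comes to select the right response from a fixed set of alternatives. The stimuli, responses and reinforcement scheme are all invented for the purpose of the example:

```python
import random

RESPONSES = ["A", "B", "C", "D"]                        # the fixed set of alternatives
TRUE_MAPPING = {"stimulus_1": "B", "stimulus_2": "D"}   # the "correct" answers (made up)

# Weights encode the learner's propensity to pick each response for each stimulus.
weights = {s: {r: 1.0 for r in RESPONSES} for s in TRUE_MAPPING}

def choose(stimulus):
    """Pick a response with probability proportional to its current weight."""
    candidates = list(weights[stimulus])
    w = [weights[stimulus][r] for r in candidates]
    return random.choices(candidates, weights=w, k=1)[0]

def train(n_trials=5_000):
    """Trial and error: reinforce correct choices, weaken incorrect ones."""
    for _ in range(n_trials):
        stimulus = random.choice(list(TRUE_MAPPING))
        response = choose(stimulus)
        if response == TRUE_MAPPING[stimulus]:
            weights[stimulus][response] *= 1.05
        else:
            weights[stimulus][response] *= 0.95

train()
for s in TRUE_MAPPING:
    favoured = max(weights[s], key=weights[s].get)
    print(f"{s}: learner now favours '{favoured}' (correct answer: '{TRUE_MAPPING[s]}')")
```

The crucial limitation is that the learner only ever gets better at choosing within the given set of alternatives; it never questions or reframes the choice itself. That reframing is what second order learning is about.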
Second order learning (Learning II) – Second order learning involves a qualitative change in the learner’s response to a given situation. Typically, this occurs when a learner sees a familiar problem situation in a completely new light, thus opening up new possibilities for solutions. Learning II therefore necessitates a higher order of trial and error, one that is beyond the ken of machines, at least at this point in time.
Complex organisational problems, such as determining a business strategy, require a second order approach because they cannot be precisely defined and therefore lack an objectively correct solution. As Horst Rittel observed, solutions to such problems are not true or false, but better or worse.
Much of the teaching that goes on in schools and universities hinders second order learning because it implicitly conditions learners to frame problems in ways that make them amenable to familiar techniques. However, as Russell Ackoff noted, “outside of school, problems are seldom given; they have to be taken, extracted from complex situations…” Two aspects of this perceptive statement bear further consideration. Firstly, to extract a problem from a situation one has to appreciate or make sense of the situation. Secondly, once the problem is framed, one may find that solving it requires skills that one does not possess. I expand on the implications of these points in the following two sections.
Sensemaking and second order learning
In an earlier piece, I described sensemaking as the art of collaborative problem formulation. There is a huge variety of sensemaking approaches; the Gamestorming site describes many of them in detail. Most of these are aimed at exploring a problem space by harnessing the collective knowledge of a group of people who have diverse, even conflicting, perspectives on the issue at hand. The greater the diversity, the more complete the exploration of the problem space.
Sensemaking techniques help in elucidating the context in which a problem lives. This refers to the problem’s environment, and in particular the constraints that the environment imposes on potential solutions. As Bateson puts it, context is “a collective term for all those events which tell an organism among what set of alternatives [it] must make [its] next choice.” But this raises the question of how these alternatives are to be determined. The question cannot be answered directly because it depends on the specifics of the environment in which the problem lives. Surfacing these specifics by asking the right questions is the task of sensemaking.
As a simple example, if I request you to help me formulate a business strategy, you are likely to begin by asking me a number of questions such as:
- What kind of business are you in?
- Who are your customers?
- What’s the competitive landscape?
- …and so on
Answers to these questions fill out the context in which the business operates, thus making it possible to formulate a meaningful strategy.
It is important to note that context rarely remains static; it evolves in time. Indeed, many companies have faded away because they failed to appreciate changes in their business context: Kodak is a well-known example, and there are many more. So organisations must evolve too. However, it is a mistake to think of an organisation and its environment as evolving independently: the two always evolve together. Such co-evolution is as true of natural systems as it is of social ones. As Bateson tells us:
…the evolution of the horse from Eohippus was not a one-sided adjustment to life on grassy plains. Surely the grassy plains themselves evolved [on the same footing] with the evolution of the teeth and hooves of the horses and other ungulates. Turf was the evolving response of the vegetation to the evolution of the horse. It is the context which evolves.
Indeed, one can think of evolution by natural selection as a process by which nature learns (in a second-order sense).
The foregoing discussion points to another problem with traditional approaches to education: we are implicitly taught that problems, once solved, stay solved. It is seldom so in real life because, as we have noted, the environment evolves even if the organisation remains static. In the worst-case scenario (which happens often enough) the organisation will die if it does not adapt appropriately to changes in its environment. If this is true, then second-order learning is important not just for individuals but for organisations as a whole. This harks back to the notion of the learning organisation, developed and evangelized by Peter Senge in the early 90s. A learning organisation is one that continually adapts itself to a changing environment. As one might imagine, it is an ideal that is difficult to achieve in practice. Indeed, attempts to create learning organisations have often ended up with paradoxical outcomes. In view of this, it seems more practical for organisations to focus on developing what one might call learning individuals: people who are capable of adapting to changes in their environment through continual learning.
Learning to learn
Cliches aside, the modern workplace is characterised by rapid, technology-driven change. It is difficult for an individual to keep up because one has to:
- Figure out which changes are significant and therefore worth responding to.
- Be capable of responding to them meaningfully.
Media hype about “the sexiest job of the 21st century” and the like further fuels the fear of obsolescence, and one feels an overwhelming pressure to do something. The old adage about combating fear with action holds true, but the question then is: what meaningful action can one take?
The fact that this question arises at all points to a failure of traditional university education. With its undue focus on teaching specific techniques, the more important second-order skill of learning to learn has fallen by the wayside. In reality, though, it is now easier than ever to learn new skills on one’s own. When I was hired as a database architect in 2004, there were few quality resources available for free. Ten years later, I was able to start teaching myself machine learning using top-notch software, backed by countless quality tutorials in blog and video formats. However, I wasted a lot of time getting started because it took me a while to get over my reluctance to explore without a guide. Cultivating the habit of learning on my own earlier would have made it a lot easier.
Back to the future of work
When industry complains about new graduates being ill-prepared for the workplace, educational institutions respond by updating curricula with more (New!! Advanced!!!) techniques. However, the complaints continue and Bateson’s notion of second order learning tells us why:
- Firstly, problem formulation is distinct from problem solving; the difference between the two is akin to that between human and machine intelligence.
- Secondly, one does not know what skills one may need in the future, so instead of learning specific skills one has to learn how to learn.
In my experience, it is possible to teach these higher order skills to students in a classroom environment. However, it has to be done in a way that starts from where students are, in terms of skills and dispositions, and moves them gradually to less familiar situations. The approach is based on David Cavallo’s work on emergent design, which I have often used in my consulting work. Two examples may help illustrate how this works in the classroom:
- Many analytically-inclined people think sensemaking is a waste of time because they see it as “just talk”. So, when teaching sensemaking, I begin with quantitative techniques to deal with uncertainty, such as Monte Carlo simulation, and then gradually introduce examples of uncertainties that are hard if not impossible to quantify. This progression naturally leads on to problem situations in which they see the value of sensemaking.
- When teaching data science, it is difficult to cover even the basic machine learning algorithms comprehensively in a single semester. However, students are often reluctant to explore on their own because they tend to be daunted by the mathematical terminology and notation. To encourage exploration (i.e. learning to learn) we use a two-step approach: a) classes focus on intuitive explanations of algorithms and on the commonalities between concepts used in different algorithms (a sketch of the kind of commonality we emphasise follows this list); these classes are not lectures but interactive sessions involving lots of exercises and Q&A; b) the assignments go beyond what is covered in the classroom (but stay well within reach of most students), which forces them to learn on their own. The approach works: just the other day, my wonderful co-teachers, Alex and Chris, commented on the amazing learning journey of some of the students – so tentative and hesitant at first, but well on their way to becoming confident data professionals.
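As an example of the kind of commonality referred to above, consider the shared fit/predict/score interface in scikit-learn: quite different algorithms are driven by exactly the same few lines of code, so a student who has grasped the pattern for one algorithm can explore others unaided. The snippet below is a generic illustration rather than actual course material:

```python
# A minimal sketch showing how different scikit-learn algorithms
# share a common fit/score interface.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "k nearest neighbours": KNeighborsClassifier(),
    "decision tree": DecisionTreeClassifier(random_state=42),
}

# The loop is identical regardless of the algorithm: learn the pattern once,
# then explore unfamiliar algorithms on your own.
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.2f}")
```

In my experience, seeing this pattern once goes a long way towards emboldening students to try algorithms that were never covered in class.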
In the end, though, whether or not an individual learner learns depends on the individual. As Bateson once noted:
Perhaps the best documented generalization in the field of psychology is that, at any given moment, the behavioral characteristics of a mammal, and especially of [a human], depend upon the previous experience and behavior of that individual.
The choices we make when faced with change depend on our individual natures and experiences. Educators can’t do much about the former but they can facilitate more meaningful instances of the latter, even within the confines of the classroom.

Risk management and organizational anxiety
In practice, risk management is a rational, means-end process: risks are identified, analysed and then “solved” (or mitigated). Although these steps appear objective, each of them involves human perceptions, biases and interests. Where Jill sees an opportunity, Jack may see only risks.
Indeed, the problem of differences in stakeholder perceptions is broader than risk analysis. The recognition that such differences in world-views may be irreconcilable is what led Horst Rittel to coin the now well-known term, wicked problem. These problems tend to be made up of complex, interconnected and interdependent issues, which makes them difficult to tackle using standard rational-analytical methods of problem solving.
Most high-stakes risks that organisations face have elements of wickedness – indeed any significant organisational change is fraught with risk. Murphy rules; things can go wrong, and they often do. The current paradigm of risk management, which focuses on analyzing and quantifying risks using rational methods, is not broad enough to account for the wicked aspects of risk.
I had been thinking about this for a while when I stumbled on a fascinating paper by Robin Holt entitled, Risk Management: The Talking Cure, which outlines a possible approach to analysing interconnected risks. In brief, Holt draws a parallel between psychoanalysis (as a means to tackle individual anxiety) and risk management (as a means to tackle organizational anxiety). In this post, I present an extensive discussion and interpretation of Holt’s paper. Although more about the philosophy of risk management than its practice, I found the paper interesting, relevant and thought provoking. My hope is that some readers might find it so too.
Background
Holt begins by noting that modern life is characterized by uncertainty. Paradoxically, technological progress, which should have increased our sense of control over our surroundings and lives, has actually heightened our personal feelings of uncertainty. Moreover, this sense of uncertainty is not allayed by rational analysis; on the contrary, such analysis may even have increased it by, for example, drawing our attention to risks that we would otherwise have remained unaware of. Risk thus becomes a lens through which we perceive the world. The danger is that this can paralyze. As Holt puts it,
…risk becomes the only backdrop to perceiving the world and perception collapses into self-inhibition, thereby compounding uncertainty through inertia.
Most individuals know this through experience: most of us have at one time or another been frozen into inaction because of perceived risks. We also “know” at a deep personal level that the standard responses to risk are inadequate because many of our worries tend to be inchoate and therefore can neither be coherently articulated nor analysed. In Holt’s words:
…People do not recognize [risk] from the perspective of a breakdown in their rational calculations alone, but because of threats to their forms of life – to the non-calculative way they see themselves and the world. [Mainstream risk analysis] remains caught in the thrall of its own ‘expert’ presumptions, denigrating the very lay knowledge and perceptions on the grounds that they cannot be codified and institutionally expressed.
Holt suggests that risk management should account for the “codified, uncodified and uncodifiable aspects of uncertainty from an organizational perspective.” This entails a mode of analysis that takes into account different, even conflicting, perspectives in a non-judgemental way. In essence, he suggests “talking it over” as a means to increase awareness of the contingent nature of risks rather than a means of definitively resolving them.
Shortcomings of risk analysis
The basic aim of risk analysis (as it is practiced) is to contain uncertainty within set bounds that are determined by an organisation’s risk appetite. As mentioned earlier, this process begins by identifying and classifying risks. Once this is done, one determines the probability and impact of each risk. Then, based on priorities and resources available (again determined by the organisation’s risk appetite) one develops strategies to mitigate the risks that are significant from the organisation’s perspective.
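In code, this textbook version of the process is almost trivially simple, which is part of the point Holt goes on to make. Here is a minimal sketch; the risks, probabilities, impacts and appetite threshold are all invented for illustration:

```python
# Textbook-style risk prioritisation: probability x impact, ranked against an appetite.
# All figures below are invented for illustration.
risks = [
    {"name": "key supplier fails", "probability": 0.10, "impact": 500_000},
    {"name": "scope creep",        "probability": 0.60, "impact": 80_000},
    {"name": "regulatory change",  "probability": 0.25, "impact": 200_000},
]

RISK_APPETITE = 40_000  # expected-loss threshold above which a risk must be mitigated

for risk in risks:
    risk["expected_loss"] = risk["probability"] * risk["impact"]

# Rank risks by expected loss and flag those that exceed the appetite.
for risk in sorted(risks, key=lambda r: r["expected_loss"], reverse=True):
    action = "mitigate" if risk["expected_loss"] > RISK_APPETITE else "accept / monitor"
    print(f"{risk['name']:<20} expected loss = {risk['expected_loss']:>9,.0f} -> {action}")
```

Everything contentious (whose probability and impact estimates count, which risks make it onto the list at all, and what the appetite should be) happens outside this calculation, which is precisely where the messiness discussed below comes in.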
However, the messiness of organizational life makes it difficult to see risk in such a clear-cut way. We may pretend to be rational about it, but in reality we perceive it through the lens of our backgrounds, interests and experiences. Based on these perceptions we rationalize our action (or inaction!) and simply get on with life. As Holt writes:
The concept [of risk] refers to…the mélange of experience, where managers accept contingencies without being overwhelmed to a point of complete passivity or confusion. Managers learn to recognize the differences between things, to acknowledge their and our limits. Only in this way can managers be said to make judgements, to be seen as being involved in something called the future.
Then, in a memorable line, he goes on to say:
The future, however, lasts a long time, so much so as to make its containment and prediction an often futile exercise.
Although one may well argue that this is not the case for many organizational risks, it is undeniable that certain mitigation strategies (for example, accepting risks that turn out to be significant later) may have serious consequences in the not-so-near future.
Advice from a politician-scholar
So how can one address the slippery aspects of risk – the things people sense intuitively, but find difficult to articulate?
Taking inspiration from Machiavelli, Holt suggests reframing risk management as a means to determine wise actions in the face of the contradictory forces of fortune and necessity. As Holt puts it:
Necessity describes forces that are unbreachable but manageable by acceptance and containment—acts of God, tendencies of the species, and so on. In recognizing inevitability, [one can retain one’s] position, enhancing it only to the extent that others fail to recognize necessity. Far more influential, and often confused with necessity, is fortune. Fortune is elusive but approachable. Fortune is never to be relied upon: ‘The greatest good fortune is always least to be trusted’; the good is often kept underfoot and the ridiculous elevated, but it provides [one] with opportunity.
Wise actions involve resolve and cunning (which I interpret as political nous). This entails understanding that we do not have complete (or even partial) control over events that may occur in the future. The future is largely unknowable as are people’s true drives and motivations. Yet, despite this, managers must act. This requires personal determination together with a deep understanding of the social and political aspects of one’s environment.
And a little later,
…risk management is not the clear conception of a problem coupled to modes of rankable resolutions, or a limited process, but a judgemental analysis limited by the vicissitudes of budgets, programmes, personalities and contested priorities.
In short: risk management in practice tends to be a long way from how it is portrayed in textbooks and the professional literature.
The wickedness of risk management
Most managers, and those who work under their supervision, have been schooled in the rational-scientific approach to problem solving. It is no surprise, therefore, that they use it to manage risks: they gather and analyse information about potential risks, formulate potential solutions (or mitigation strategies) and then implement the best one (according to predetermined criteria). However, this method works only for problems that are straightforward or tame, rather than wicked.
Many of the issues that risk managers are confronted with are wicked, messy or both. Often, though, such problems are treated as if they were tame. Reducing a wicked or messy problem to one amenable to rational analysis invariably entails overlooking the views of certain stakeholder groups or, worse, ignoring key aspects of the problem. This may work in the short term, but it will only exacerbate the problem in the longer run. Holt illustrates this point as follows:
A primary danger in mistaking a mess for a tame problem is that it becomes even more difficult to deal with the mess. Blaming ‘operator error’ for a mishap on the production line and introducing added surveillance is an illustration of a mess being mistaken for a tame problem. An operator is easily isolated and identifiable, whereas a technological system or process is embedded, unwieldy and, initially, far more costly to alter. Blaming operators is politically expedient. It might also be because managers and administrators do not know how to think in terms of messes; they have not learned how to sort through complex socio-technical systems.
It is important to note that although many risk management practitioners recognize the essential wickedness of the issues they deal with, the practice of risk management is not quite up to the task of dealing with such matters. One step towards doing so is to develop a shared (enterprise-wide) understanding of risks by soliciting input from diverse stakeholder groups, some of whom may hold opposing views.
The skills required to do this are very different from the analytical techniques that are the focus of the problem-solving and decision-making methods taught in colleges and business schools. Analysis is replaced by sensemaking – a collaborative process that harnesses the wisdom of a group to arrive at a collective understanding of a problem and thence a common commitment to a course of action. This necessarily involves skills that do not appear in the lexicon of rational problem solving: negotiation, facilitation, rhetoric and others of the same ilk that are dismissed as irrelevant by the scientifically oriented analyst.
In the end, though, even this may not be enough: different stakeholders may perceive a given “risk” in wildly different ways, so much so that no consensus can be reached. The problem is that the current framework of risk management requires the analyst to perform an objective analysis of the situation or problem, even where this is not possible.
To get around this Holt suggests that it may be more useful to see risk management as a way to encounter problems rather than analyse or solve them.
What does this mean?
He sees risk management, reframed in this way, as providing a forum in which people can talk about risks openly:
To enable organizational members to encounter problems, risk management’s repertoire of activity needs to engage their all too human components: belief, perception, enthusiasm and fear.
This gets to the root of the problem: risk matters because it increases anxiety and generally affects people’s sense of wellbeing. Given this, it is no surprise that Holt’s proposed solution draws on psychoanalysis.
The analogy between psychoanalysis and risk management
Any discussion of psychoanalysis – especially one intended for an audience largely schooled in rational/scientific methods of analysis – must begin with the acknowledgement that the claims of psychoanalysis cannot be tested. That is, since psychoanalysis speaks of unobservable “objects” such as the ego and the unconscious, any claims it makes about these concepts can be neither proven nor falsified.
However as Holt suggests, this is exactly what makes it a good fit for encountering (as opposed to analyzing) risks. In his words:
It is precisely because psychoanalysis avoids an overarching claim to produce testable, watertight, universal theories that it is of relevance for risk management. By so avoiding universal theories and formulas, risk management can afford to deviate from pronouncements using mathematical formulas to cover the ‘immanent indeterminables’ manifest in human perception and awareness and systems integration.
His point is that there is a clear parallel between psychoanalysis and the individual on the one hand, and risk management and the organisation on the other:
We understand ourselves not according to a template but according to our own peculiar, beguiling histories. Metaphorically, risk management can make explicit a similar realization within and between organizations. The revealing of an unconscious world and its being in a constant state of tension between excess and stricture, between knowledge and ignorance, is emblematic of how organizational members encountering messes, wicked problems and wicked messes can be forced to think.
In brief, Holt suggests that what psychoanalysis does for the individual, risk management ought to do for the organisation.
Talking it over – the importance of conversations
A key element of psychoanalysis is the conversation between the analyst and patient. Through this process, the analyst attempts to get the patient to become aware of hidden fears and motivations. As Holt puts it,
Psychoanalysis occupies the point of rupture between conscious intention and unconscious desire — revealing repressed or overdetermined aspects of self-organization manifest in various expressions of anxiety, humour, and so on
And then, a little later, he makes the connection to organisations:
The fact that organizations emerge from contingent, complex interdependencies between specific narrative histories suggests that risk management would be able to use similar conversations to psychoanalysis to investigate hidden motives, to examine…the possible reception of initiatives or strategies from the perspective of inherently divergent stakeholders, or to analyse the motives for and expectations of risk management itself. This fundamentally reorients the perspective of risk management from facing apparent uncertainties using technical assessment tools, to using conversations devoid of fixed formulas to encounter questioned identities, indeterminate destinies, multiple and conflicting aims and myriad anxieties.
Through conversations involving groups of stakeholders who have different risk perceptions, one might gain a better understanding of a particular risk and hence, perhaps, design a more effective mitigation strategy. More importantly, one may even realise that certain risks are not risks at all, or that others which seem straightforward have implications that would have remained hidden were it not for the conversation.
These collective conversations would take place in workshops…
…that tackle problems as wicked messes, avoid lowest-denominator consensus in favour of continued discovery of alternatives through conversation, and are instructed by metaphor rather than technical taxonomy, risk management is better able to appreciate the everyday ambivalence that fundamentally influences late-modern organizational activity. As such, risk management would be not merely a rationalization of uncertain experience but a structured and contested activity involving multiple stakeholders engaged in perpetual translation from within environments of operation and complexes of aims.
As a facilitator of such workshops, the risk analyst provokes stakeholders to think about their feelings and motivations that may be “out of bounds” in a standard risk analysis workshop. Such a paradigm goes well beyond mainstream risk management because it addresses the risk-related anxieties and fears of individuals who are affected by it.
Conclusion
This brings me to the end of my not-so-short summary of Holt’s paper. Given the length of this post, I reckon I should keep my closing remarks short. So I’ll leave it here paraphrasing the last line of the paper, which summarises its main message: risk management ought to be about developing an organizational capacity for overcoming risks, freed from the presumption of absolute control.

