Eight to Late

Sensemaking and Analytics for Organizations

Archive for May 2009

A quick test of organisational culture


Organisational culture is defined by the values and norms that are shared by people and groups in an organisation. These values and norms, in turn, influence how people interact with each other and with outsiders. That’s well and good, but how does one determine an organisation’s culture? In my opinion, this is best evaluated by looking at how people react in typical work situations. What follows is a quick quiz to test an organisation’s culture based on this principle.

Note that the test can also be applied to projects – as projects are temporary organisations. Typically project and team cultures simply reflect those of the organisations in which they exist. However, there can be differences: a good project or team leader can (to an extent) shield his or her team from the effects of a toxic organisational culture. But that’s fodder for another post.  For now, let’s get on with the quiz.

A tip before starting: don’t over-think your answers; your initial response is probably the most accurate one.

Ready? Right, here we go…the sixty-second quiz on your workplace culture:

a)  You make a mistake that no one notices. What do you do:

  1. Keep quiet about it and hope it remains unnoticed.
  2. Own up because it is OK to make mistakes around here.
  3. Dream up a scheme to pin it on someone else, preferably a rival for a promotion.

b)  You have an idea that might lead to a new product. You

  1. Use your workmates and manager as a sounding board to see whether it is any good.
  2. Start to work it through yourself to see if it is any good.
  3. Forget about it.

c) You have an idea which involves collaborating with someone from another department. You

  1. Approach the person directly.
  2. Go through the proper channels – approach your manager who approaches their manager and so on.
  3. Forget about it: inter-departmental politics would get in the way.

d) People at an organisation-wide event (company day or a project team day out, for example):

  1. Stick with folks from their departments.
  2. Mingle, and look like they’re enjoying it.
  3. Look like they want to be elsewhere. In fact many of them are – they’ve called in sick.

e) A project has gone horribly wrong. Do people

  1. Look for a scapegoat.
  2.  Say, “I had nothing to do with it.”
  3. Work together to fix it.

f)  Someone from another department approaches you for assistance relating to your area of expertise. Do you

  1. Help them right away, or as soon as you can.
  2. Ask them to speak to your manager first.
  3. Fob them off – you’re way too overworked and don’t really feel like doing a whit of work more than you absolutely have to.

g)  What do people in your organisation do when they are annoyed by some aspect of their job? (Note: see this post for more on this question)

  1. They complain about it.
  2. They ignore it.
  3. They fix it.

h) The atmosphere in cross-departmental meetings in your organisation is generally:

  1. Cordial.
  2. Tense.
  3. Neutral.

i) An impossible deadline looms. In order to meet it you

  1. Work overtime because you have to.
  2. Work overtime because you want to.
  3. This question is inapplicable – you never have impossible deadlines.

j)  You’ve done something brilliant that saves the organisation a packet. Your manager:

  1. Acknowledges your efforts publicly.
  2. Acknowledges your efforts privately.
  3. Grabs the glory.

k) You’ve worked overtime on a project and it’s all come good. You get

  1. A pat on the back.
  2. A pat on the back and something tangible (a bonus, a meal or at least a movie voucher)
  3.  Nothing (We pay you a salary, don’t we?)

l)  You’re feeling under the weather, but are not really sick (Put it this way: no doctor would give you a certificate). However, you honestly don’t think you can make it through the work day.  What do you do?

  1. Thank God and take the day off.
  2. Go to work because you want to.
  3. Go to work because you have to.

Score:

The score for each response is the number in brackets against the choice you made.

a. 1 (5)   2 (10)   3 (0)
b. 1 (10)  2 (5)    3 (0)
c. 1 (10)  2 (5)    3 (0)
d. 1 (5)   2 (10)   3 (0)
e. 1 (0)   2 (5)    3 (10)
f. 1 (10)  2 (5)    3 (0)
g. 1 (0)   2 (5)    3 (10)
h. 1 (10)  2 (0)    3 (5)
i. 1 (0)   2 (5)    3 (10)
j. 1 (10)  2 (5)    3 (0)
k. 1 (5)   2 (10)   3 (0)
l. 1 (0)   2 (10)   3 (5)
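
If you’d rather let a computer do the adding up, here is a minimal scoring sketch in Python (purely illustrative; the weights below simply transcribe the score table above, and the function name is my own):

```python
# Illustrative scorer for the workplace culture quiz above.
# WEIGHTS transcribes the score table; an answer is 1, 2 or 3 per question.
WEIGHTS = {
    "a": {1: 5, 2: 10, 3: 0},
    "b": {1: 10, 2: 5, 3: 0},
    "c": {1: 10, 2: 5, 3: 0},
    "d": {1: 5, 2: 10, 3: 0},
    "e": {1: 0, 2: 5, 3: 10},
    "f": {1: 10, 2: 5, 3: 0},
    "g": {1: 0, 2: 5, 3: 10},
    "h": {1: 10, 2: 0, 3: 5},
    "i": {1: 0, 2: 5, 3: 10},
    "j": {1: 10, 2: 5, 3: 0},
    "k": {1: 5, 2: 10, 3: 0},
    "l": {1: 0, 2: 10, 3: 5},
}

def quiz_score(answers: dict) -> int:
    """Total score for a full set of answers, e.g. {"a": 2, "b": 1, ...}."""
    return sum(WEIGHTS[question][choice] for question, choice in answers.items())

# Example: the answers 2, 1, 1, 2, 3, 1, 3, 1, 3, 1, 2, 2 (for a through l)
# score the maximum of 120.
```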

What does your score mean?

> 100: Does your organisation have any vacancies for a PM/dev manager?

80-95: I bet you enjoy working here.

60-75: Still on the right side of the divide, but things do get unpleasant occasionally.

40-55: Things could be a lot worse – but they could also be better.

20-35: Things are a lot worse.

< 20: Workplace hell?

A good organisational culture is one which encourages and enables people to do the right thing without coercion or fear of consequences. What’s right? Most people just know what is right and what’s not, without having to be told. I can think of no better way to end this post than by quoting from the start of Robert Pirsig’s classic, Zen and the Art of Motorcycle Maintenance:

And what is good, Phædrus,
And what is not good…
Need we ask anyone to tell us these things?

Written by K

May 15, 2009 at 6:53 am

The role of cognitive biases in project failure


Introduction

There are two distinct views of project management practice: the rational view, which focuses on management tools and techniques such as those espoused by frameworks and methodologies, and the social/behavioural view, which looks at the social aspect of projects – i.e. how people behave and interact in the context of a project and the wider organisation. The difference between the two is significant: one looks at how projects should be managed, prescribing tools, techniques and practices; the other looks at what actually happens on projects, at how people interact and how managers make decisions. The gap between the two can sometimes spell the difference between project success and failure. In many failed projects, the failure can be traced back to poor decisions, and the decisions themselves to cognitive biases, i.e. errors in judgement based on perceptions. A paper entitled Systematic Biases and Culture in Project Failure, by Barry Shore, looks at the role played by selected cognitive biases in the failure of some high profile projects. The paper also draws some general conclusions on the relationship between organisational culture and cognitive bias. This post presents a summary and review of the paper.

The paper begins with a brief discussion of the difference between the rational and social/behavioural views of project management. The rational view is prescriptive – it describes management procedures and techniques which claim to increase the chances of success if followed. Further, it emphasises causal effects (if you follow procedure X then Y happens). The social/behavioural view is less well developed because it looks at human behaviour, which is hard to study in controlled conditions, let alone in projects. Yet, developments in behavioural economics – mostly based on the pioneering work of Kahneman and Tversky – can be directly applied to project management (see my post on biases in project estimation, for instance). In the paper, Shore looks at eight case studies of failed projects and attempts to attribute their failure to selected cognitive biases. He also looks into the relationship between (project and organisational) culture and the prevalence of the selected biases. Following Hofstede, he defines organisational culture as shared perceptions of organisational work practices and, analogously, project culture as shared perceptions of project work practices. Since projects take place within organisations, project culture is obviously influenced by the organisational culture.

Scope and Methodology

In this section I present a brief discussion of the biases that the paper focuses on and the study methodology.

There are a large number of cognitive biases in the literature. The author selects the following for his study:

Available data:  Restricting oneself to using data that is readily or conveniently available. Note that “Available data” is a non-standard term: it is normally referred to as a sampling bias, which in turn is a type of selection bias.

Conservatism (Semmelweis reflex): Failing to consider new information or negative feedback.

Escalation of commitment:  Allocating additional resources to a project that is unlikely to succeed.

Groupthink: Members of a project group under pressure to think alike, ignoring evidence that may threaten their views.

Illusion of control: Management believing they have more control over a situation than an objective evaluation would suggest.

Overconfidence:  Having a level of confidence that is unsupported by evidence or performance.

Recency (serial position effect): Undue emphasis being placed on the most recent data (ignoring older data).

Selective perception: Viewing a situation subjectively; perceiving only certain (convenient) aspects of a situation.

Sunk cost: Not accepting  that costs already incurred cannot be recovered and should not be considered as criteria for future decisions. This bias is closely related to loss aversion.

The author acknowledges that there is a significant overlap between some of these effects: for example, illusion of control has much in common with overconfidence. This implies a certain degree of subjectivity in assigning these as causes for project failures.

The failed projects studied in the paper are high profile efforts that failed in one or more ways. The author obtained data for the projects from public and government sources. He then presented the data and case studies to five independent groups of business professionals (constituted from a class he was teaching) and asked them to reach a consensus on which biases could have played a role in causing the failures. The groups presented their results to the entire class and then, through discussion, reached agreement on which of the biases may have led to the failures.

The case studies

This section describes the failed projects studied and the biases that the group identified as being relevant.

Airbus 380: Airbus was founded as a consortium of independent aerospace companies. The A380 project, started in 2000, was aimed at creating the A380 superjumbo jet with a capacity of 800 passengers. The project involved coordination between many sites. Six years into the project, when the aircraft was being assembled in Toulouse, it was found that a wiring harness produced in Hamburg failed to fit the airframe.

The group identified the following biases as being relevant to the failure of the Airbus project:

Selective perception: Managers acted to guard their own interests and constituencies.

Groupthink:  Each participating organisation  worked in isolation from the others, creating an environment in which groupthink would thrive.

Illusion of control:  Corporate management assumed they had control over participating organisations.

Availability bias: Management in each of the facilities did not have access to data in other facilities, and thus made decisions based on limited data.

Coast Guard Maritime Domain Awareness Project: This project, initiated in 2001, was aimed at creating the maritime equivalent of an air traffic control system. It was to use a range of technologies, and involved coordination between many US government agencies. The goal of the first phase of the project was to create a surveillance system that would be able to track boats as small as jet skis. The surveillance data was to be run through a software system that would flag potential threats. In 2006 – during the testing phase – the surveillance system failed to meet quality criteria. Further, the analysis software was not ready for testing.

The group identified the following biases as being relevant to the failure of the  Maritime Awareness project:

Illusion of control: Coordinating several federal agencies is a complex task. This suggests that project managers may have thought they had more control than they actually did.

Selective perception: Separate agencies worked only on their portions of the project,  failing to see the larger picture. This suggests that project groups may have unwittingly been victims of selective perception.

Columbia Shuttle: The Columbia Shuttle disaster was caused by a piece of foam insulation breaking off the propellant tank and damaging the wing. The problem with the foam sections was known, but management had assumed that it posed no risk.

In their analysis, the group found the following biases to be relevant to the failure of this project:

Conservatism: Management failed to take into account negative data.

Overconfidence:  Management was confident there were no safety issues.

Recency: Foam insulation had broken off on previous flights without causing problems, so the most recent (incident-free) experience was given undue weight.

Denver Airport Baggage Handling System: The Denver airport project, which was scheduled for completion in 1993, was to feature a completely automated baggage handling system. The technical challenges were enormous because the proposed system was an order of magnitude more complex than those that existed at the time. The system was completed in 1995, but was riddled with problems. After almost a decade of struggling to fix the problems, not to mention being billions over-budget, the project was abandoned in 2005.

The group identified the following biases as playing a role in the failure of this project:

Overconfidence: Although the project was technically very ambitious, the contractor (BAE Systems) assumed that all technical obstacles could be overcome within the project timeframes.

Sunk cost: The customer (United Airlines) did not pull out of the project even when other customers did, suggesting a reluctance to write off costs already incurred.

Illusion of control: Despite evidence to the contrary, management assumed that problems could be solved and that the project remained  under control.

Mars Climate Orbiter and Mars Polar Lander: Telemetry signals from the Mars Climate Orbiter ceased when the spacecraft approached its destination. The root cause of the problem was found to be a failure to convert between metric and British units: apparently the contractor, Lockheed, had used British units in the engine design, but NASA scientists who were responsible for operations and flight assumed the data was in metric units. A few months after the Climate Orbiter disaster, another spacecraft, the Mars Polar Lander, fell silent just short of landing on the surface of Mars. The failure was attributed to a software problem that caused the engines to shut down prematurely, thereby causing the spacecraft to crash.

The group attributed the above project failures to the following biases:

Conservatism: Project engineers failed to take action when they noticed that the spacecraft was off-trajectory early in the flight.

Sunk cost: Managers were under pressure to launch the spacecraft on time – waiting until the next launch window would have entailed a wait of many months thus “wasting” the effort up to that point. (Note: In my opinion this is an incorrect interpretation of sunk cost)

Selective perception: The spacecraft modules  were constructed by several different teams. It is very likely that teams worked with a very limited view of the project (one which was relevant to their module).

Merck Vioxx: Vioxx was a very successful anti-inflammatory medication developed and marketed by Merck. An article published in 2000 suggested that Merck misrepresented clinical trial data, and another paper published in 2001 suggested that those who took Vioxx were subject to a significantly increased risk of assorted cardiac events. Under pressure, Merck put a warning label on the product in 2002. Finally, the drug was withdrawn from the market in 2004 after over 80 million people had taken it.

The group found the following biases to be relevant to the failure of this project:

Conservatism:  The company ignored early warning signs about the toxicity of the drug.

Sunk cost: By the time concerns were raised, the company had already spent a large amount of money in developing the drug. It is therefore likely that there was a reluctance to write off the costs incurred to that point.

Microsoft Xbox 360: The Microsoft Xbox 360 console was released to market in 2005, a year before comparable offerings from its competitors. The product was plagued with problems from the start, including internet connectivity issues, damage caused to game discs, faulty power cords and assorted operational issues. The volume of problems and complaints prompted Microsoft to extend the product warranty from one to three years, at an expected cost of $1 billion.

The group thought that the following biases were significant in this case:

Conservatism: Despite the early negative feedback (complaints and product returns), the development group seemed reluctant to acknowledge that there were problems with the product.

Groupthink:  It is possible that the project team ignored data that threatened their views on the product. The group reached this conclusion because Microsoft seemed reluctant to comment publicly on the causes of problems.

Sunk cost: By the time problems were identified, Microsoft had invested a considerable sum of money on product development. This suggests that the sunk cost trap may have played a role in this project failure.

NYC Police Communications System: (Note: I couldn’t find any pertinent links to this project). In brief: the project was aimed at developing a communications system that would enable officers working in the subway system to communicate with those on the streets. The project was initiated in 1999 and scheduled for completion in 2004 with a budgeted cost of $115 million. A potential interference problem was identified in 2001 but the contractors ignored it. The project was completed in 2007, but during trials it became apparent that interference was indeed a problem. Fixing the issue was expected to increase the cost by $95 million.

The group thought that the following biases may have contributed to the failure of this project:

Conservatism: Project managers failed to take early data on interference into account.

Illusion of control: The project team believed – until very late in the project – that the interference issue could be fixed.

Overconfidence:  Project managers believed that the design was sound, despite evidence to the contrary.

Analysis and discussion

The following four biases appeared more often than others: conservatism, illusion of control, selective perception and sunk cost.

The following biases appeared less often: groupthink and overconfidence.

Recency and availability were mentioned only once.

On the basis of this small sample and a somewhat informal analysis, the author concludes that the first four biases may be dominant in project management. In my opinion this conclusion is shaky because the study has a few shortcomings, which I list below:

  • The sample size is small.
  • The sample covers a range of domains.
  • No checks were done to verify the group members’ understanding of all the biases.
  • The data on which the conclusions are based is incomplete – only publicly available information was used (perhaps this is an example of the available data bias at work?).
  • A limited set of biases is used – there could be other biases at work.
  • The conclusions themselves are subject to group-level biases such as groupthink. This is a particular concern because the group was specifically instructed to look at the case studies through the lens of the selected cognitive biases.
  • The analysis is far from exhaustive or objective; it was done as part of a classroom exercise.

For the above reasons, the analysis is at best suggestive:  it indicates that biases may play a role in the decisions  that lead to project failures.

The author also draws a link between organisational culture and environments in which biases might thrive. To do this, he maps the biases on to the competing values framework of organisational culture, which views organisations along two dimensions:

  • The focus of the organisation – internal or external.
  • The level of management control in the organisation  – controlling (stable) or discretionary (flexible).

According to the author, all nine biases are more likely to occur in a stability (or control) focused environment than in a flexible one, and all barring sunk cost are more likely to thrive in an internally focused organisation than in an externally focused one. This conclusion makes sense: project teams are more likely to avoid biases when empowered to make decisions, free from management and organisational pressures. Furthermore, biases are also less likely to play a role when external input – such as customer feedback – is taken seriously.

That said, the negative effects of internally focused, high control organisations can be countered. The author quotes two examples:

  1. When designing the 777 aircraft, Boeing introduced a new approach to project management wherein teams were required to include representatives from all groups of stakeholders. The team was encouraged to air differences in opinion and to deal with these in an open manner. This approach has been partly credited for the success of the 777 project.
  2. Since the Vioxx debacle, Merck rewards research scientists who terminate projects that do not look promising.

Conclusions

Despite my misgivings about the research sample and methodology, the study does suggest that standard project management practices could benefit from incorporating insights from behavioural studies. Further, the analysis indicates that cognitive biases may indeed have played a role in the failure of some high profile projects. My biggest concern here, as stated earlier, is that the groups were required to associate the decisions with specific biases – i.e. there was an assumption that one or more of the biases from the (arbitrarily chosen) list was responsible for the failure. In reality, however, there may have been other, more important factors at work.

The connections with organisational culture are interesting too, but hardly surprising: people are more likely to do the right thing when management  empowers them with responsibility and authority.

In closing: I found the paper interesting because it deals with an area that isn’t very well represented in the project management literature. Further, I  believe these biases play a significant role in project decision making, especially in internally focussed / controlled organisations (project managers are human, and hence not immune…).  However, although the paper supports this view, it doesn’t make a wholly convincing case for it.

Further Reading

For more on cognitive biases in organisations, see Chapter 2 of my book, The Heretic’s Guide to Best Practices.

Written by K

May 8, 2009 at 5:47 am

Beyond words: visualising arguments using issue maps


Anyone who has struggled to follow a complex argument in a book or article knows from experience that reasoning in written form can be hard to understand. Perhaps this is why many people prefer to learn by attending a class or viewing a lecture rather than by reading. The cliché about a picture being worth more than a large number of words has a good deal of truth to it: visual representations can be helpful in clarifying complex arguments. In a recent post, I presented a quick introduction to a visual issue mapping technique called IBIS (Issue Based Information System), discussing how it could be used on complex projects. Now I follow up by demonstrating its utility in visualising complex arguments such as those presented in research papers. I do so by example: I map out a well known opinion piece written over two decades ago – Fred Brooks’ classic article, No Silver Bullet (abbreviated as NSB in the remainder of this article).

[Note: those not familiar with IBIS may want to read one of the  introductions listed here before proceeding]

Why use NSB as an example for argument mapping? Well, for a couple of reasons:

  1. It deals with issues that most software developers have grappled with at one time or another.
  2. The piece has been widely misunderstood (by Brooks’ own admission – see his essay entitled No Silver Bullet Refired, published in the anniversary edition of The Mythical Man Month).

First, very briefly, for those who haven’t read the article: NSB presents reasons why software development is intrinsically hard and consequently conjectures that “silver bullet” solutions are impossible, even in principle. Brooks defines a silver bullet solution for software engineering as any tool or technology that facilitates a tenfold improvement in productivity in software development.

To set the context for the discussion and to see the angle from which Brooks viewed the notion of a silver bullet for software engineering, I can do no better than quote the first two paragraphs of NSB:

Of all the monsters that fill the nightmares of our folklore, none terrify more than werewolves, because they transform unexpectedly from the familiar into horrors. For these, one seeks bullets of silver that can magically lay them to rest.

The familiar software project, at least as seen by the non-technical manager, has something of this character; it is usually innocent and straightforward, but is capable of becoming a monster of missed schedules, blown budgets, and flawed products. So we hear desperate cries for a silver bullet—something to make software costs drop as rapidly as computer hardware costs do.

The first step in mapping out an argument is to find the basic issue that it addresses. That’s easy for NSB; the issue, or question, is: why is there no silver bullet for software development?

Brooks attempts to answer the question via two strands:

  1. By examining the nature of the essential (intrinsic or inherent) difficulties in developing software.
  2. By examining the silver bullet solutions proposed so far.

That gives us enough to begin our IBIS map…

Issue Map – Stage 1

The root node of the map – as in all IBIS maps – is a question node. Responding to the question, we have an idea node (Essential difficulties) and another question node (What about silver bullet solutions proposed to date?). We also have a note node which clarifies what is meant by a silver bullet solution.
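
For readers who like to see structure as data, the stage-1 map can be thought of as a small tree of typed nodes. Here is a minimal sketch in Python (the Node class and its field names are my own illustration, not part of any particular IBIS tool):

```python
from __future__ import annotations
from dataclasses import dataclass, field

# A minimal model of an IBIS node: each node has a kind (question, idea,
# pro, con or note), a label, and child nodes that respond to it.
@dataclass
class Node:
    kind: str          # "question" | "idea" | "pro" | "con" | "note"
    label: str
    children: list[Node] = field(default_factory=list)

# The stage-1 map: a root question, an idea and a follow-on question that
# respond to it, and a note clarifying what a silver bullet is.
stage1 = Node("question", "Why is there no silver bullet for software development?", [
    Node("idea", "Essential difficulties"),
    Node("question", "What about silver bullet solutions proposed to date?"),
    Node("note", "Silver bullet = any tool or technology giving a tenfold improvement in productivity"),
])

def show(node: Node, depth: int = 0) -> None:
    """Print the map as an indented outline, one node per line."""
    print("  " * depth + f"[{node.kind}] {node.label}")
    for child in node.children:
        show(child, depth + 1)

show(stage1)
```

Later stages of the map simply add children: the four essential difficulties under the "Essential difficulties" idea, and the proposed solutions, with their pros and cons, under the second question.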

The point regarding essential difficulties needs elaboration, so we ask the question: what are essential difficulties?

According to Brooks, essential difficulties are those that relate to conceptualisation – i.e. design. In contrast, accidental (or non-essential) difficulties are those pertaining to implementation. Brooks examines the nature of essential difficulties – i.e. the things that make software design hard. He argues that the following four properties of software systems are the root of the problem:

Complexity: Beyond the basic syntax of a language, no two parts of a software system are alike – this contrasts with other products (such as cars or buildings) where repeated elements are common. Furthermore, software has a large number of states, multiplied many-fold by interactions with other systems. No person can fully comprehend all the consequences of this complexity. Furthermore, no silver bullet solution can conquer this problem because each program is complex in unique ways.

Conformity: Software is required to conform to arbitrary business rules. Unlike in the natural sciences, these rules may not (often do not!) have any logic to them. Further, being the newest kid on the block, software often has to interface with disparate legacy systems as well. Conformity-related issues are external to the software and hence cannot be addressed by silver bullet solutions.

Changeability: Software is subject to more frequent change than any other part of a system or even most other manufactured products. Brooks speculates that this is because most software embodies system functionality (i.e. the way people use the system), and functionality is subject to frequent change. Another reason is that software is intangible (made of “thought stuff”) and perceived as being easy to change.

Invisibility: Notwithstanding simple tools such as flowcharts and modelling languages, Brooks argues that software is inherently unvisualisable. The basic reason for this is that software – unlike most products (cars, buildings, silicon chips, computers) – has no spatial form.

These essential properties are easily captured in summary form in our evolving argument map:

Issue Map – Stage 2

Brooks’ contention is that software design is hard because every software project has to deal with unique manifestations of these properties.

Brooks then looks at silver bullet solutions proposed up to 1987  (when the article was written)  and those on the horizon at the time. He finds most of these address accidental (or non-intrinsic) issues – those that relate to implementation rather than design. They enhance programmer productivity – but not by the ten-fold magnitude required for them to be deemed silver bullets. Brooks reckons this is no surprise: the intrinsic difficulties associated with design are by far the biggest obstacles in any software development effort.

In the map I group all these proposed solutions under “silver bullet solutions proposed to date.”

Incorporating the above, the map now looks like:

Issue Map – Stage 3

[For completeness here’s a glossary of abbreviations: OOP – Object-oriented programming; IDE – Integrated development environment; AI – Artificial intelligence]

The proposed silver bullets lead to incremental improvements in productivity, but they do not address the essential problem of design. Further, some of the solutions have restricted applicability. These points are captured as pros and cons in the map (click on the map to view a larger image):

Issue Map – Stage 4

It is interesting to note that in his 1997 article, No Silver Bullet Refired , which revisited the questions raised in NSB, Brooks found that the same conclusions held true. Furthermore, at a twentieth year retrospective panel discussion that took place during the 22nd International Conference on Object-Oriented Programming, Systems, Languages, and Applications, panellists again concluded that there’s no silver bullet – and none likely.

Having made his case that no silver bullet exists, and that none are likely, Brooks finishes up by outlining a few promising approaches to tackling the design problem. The first one, Buy don’t build, is particularly prescient in view of the growth of the shrink-wrapped software market in the two decades since the first publication of NSB. The second one – rapid prototyping and iterative/incremental development – is vindicated by the widespread adoption and mainstreaming of agile methodologies. The last one, nurture talent, perhaps remains relatively ignored. It should be noted that Brooks considers these approaches promising, but not silver bullets;  he maintains that none of these by themselves can lead to a tenfold increase in productivity.

So we come to the end of NSB and our map, which now looks like (click on the map to view a larger image):

Final Map

The map captures the essence of the argument in NSB – a reader can see, at a glance, the chain of reasoning and the main points made in the article.  One could  embellish the map and improve readability by:

  • Adding in details via note nodes, as I have done in my note explaining what is meant by a silver bullet.
  • Breaking up the argument into sub-maps – the areas highlighted in yellow in each of the figures could be hived off into their own maps.

But these are details;  the  essence of the argument in NSB is  captured adequately in the final map above.

In this post I have attempted to illustrate, via example, the utility of IBIS in constructing maps of complicated arguments. I hope I’ve convinced you that issue maps offer a simple way to capture the essence of a written argument in an easy-to-understand way.

Perhaps the cliché should be revised: a picture may be worth a thousand words, but an issue map is worth considerably more.

IBIS References on the Web

For a quick introduction, I recommend Jeff Conklin’s introduction to IBIS on the Cognexus site (and the links therein) or  my piece on the use of IBIS in projects. If you have  some  time,  I  highly recommend Paul Culmsee’s excellent series of posts: the one best practice to rule them all.