Archive for the ‘Paper Review’ Category
The legacy of legacy software
Introduction
On a recent ramble through Google Scholar, I stumbled on a fascinating paper by Michael Mahoney entitled, What Makes the History of Software Hard. History can offer interesting perspectives on the practice of a profession. So it is with this paper. In this post I review the paper, with an emphasis on the insights it provides into the practice of software development.
Mahoney’s thesis is that,
The history of software is the history of how various communities of practitioners have put their portion of the world into the computer. That has meant translating their experience and understanding of the world into computational models, which in turn has meant creating new ways of thinking about the world computationally and devising new tools for expressing that thinking in the form of working programs….
In other words, software – particularly application software – embodies real-world practices. As a consequence,
…the models and tools that constitute software reflect the histories of the communities that created them and cannot be understood without knowledge of those histories, which extend beyond computers and computing to encompass the full range of human activities…
This, according to Mahoney, is what makes the history of software hard.
The standard history of computing
The standard (textbook) history of computing is hardware-focused: a history of computers rather than computing. The textbook version follows a familiar tune starting with the abacus and working its way up via analog computers, ENIAC, mainframes, micros, PCs and so forth. Further, the standard narrative suggests that each of these were invented in order to satisfy a pre-existing demand, which makes their appearance almost inevitable. In Mahoney’s words,
…Just as it places all earlier calculating devices on one or more lines leading toward the electronic digital computer, as if they were somehow all headed in its direction, so too it pulls together the various contexts in which the devices were built, as if they constituted a growing demand for the invention of the computer and as if its appearance was a response to that demand.
Mahoney says that this is misleading because,
…If people have been waiting for the computer to appear as the desired solution to their problems, it is not surprising that they then make use of it when it appears, or indeed that they know how to use it…
Further, it
…sets up a narrative of revolutionary impact, in which the computer is brought to bear on one area after another, in each case with radically transformative effect…
The second point – revolutionary impact – is interesting because we still suffer its fallout: just about every issue of any trade journal has an article hyping the Next Big Computing Revolution. It seems that their writers are simply taking their cues from history. Mahoney puts it very well,
One can hardly pick up a journal in computing today without encountering some sort of revolution in the making, usually proclaimed by someone with something to sell. Critical readers recognise most of it as hype based more on future promise than present performance…
The problem with revolutions, as Mahoney notes, is that they attempt to erase (or rewrite) history, ignoring the real continuities and connections between the present and the past,
Nothing is in fact unprecedented, if only because we use precedents to recognise, accommodate and shape the new…
CIOs and other decision makers, take note!
But what about software?
The standard history of computing doesn’t say much about software,
To the extent that the standard narrative covers software, the story follows the generations of machines, with an emphasis on systems software, beginning with programming languages and touching—in most cases, just touching—on operating systems, at least up to the appearance of time-sharing. With a nod toward Unix in the 1970s, the story moves quickly to personal computing software and the story of Microsoft, seldom probing deep enough to reveal the roots of that software in the earlier period.
As far as applications software is concerned – whether in construction, airline ticketing or retail – the only accounts that exist are those of pioneering systems such as the Sabre reservation system. Typically these efforts focus on the system being built, excluding any context and connection to the past. There are some good “pioneer style” histories: an example is Scott Rosenberg’s book Dreaming in Code – an account of the Chandler software project. But these are exceptions rather than the rule.
In the revolutionary model, people react to computers. In reality, though, it’s the opposite: people figure out ways to use computers in their areas of expertise. They design and implement programs to make computers do useful things. In doing so, they make choices:
Hence, the history of computing, especially of software, should strive to preserve human agency by structuring its narratives around people facing choices and making decisions instead of around impersonal forces pushing people in a predetermined direction. Both the choices and the decisions are constrained by the limits and possibilities of the state of the art at the time, and the state of the art embodies its history to that point.
The early machines of the 1940s and 50s were almost solely dedicated to numerical computations in the mathematical and physical sciences. Thereafter, as computing became more “mainstream” other communities of practitioners started to look at how they might use computers:
These different groups saw different possibilities in the computer, and they had different experiences as they sought to realize those possibilities, often translating those experiences into demands on the computing community, which itself was only taking shape at the time.
But these different communities have their own histories and ways of doing things – i.e. their own, unique worlds. To create software that models these worlds, the worlds have to be translated into terms the computer can “understand” and work with. This translation is the process of software design. The software models thus created embody practices that have evolved over time. Hence, the models also reflect the histories of the communities that create them.
Models are imperfect
There is a gap between models and reality, though. As Mahoney states,
…Programming is where enthusiasm meets reality. The enduring experience of the communities of computing has been the huge gap between what we can imagine computers doing and what we can actually make them do.
This led to the notion of a “software crisis” and calls to reform the process of software development, which in turn gave rise to the discipline of software engineering. Many improvements resulted: better tools, more effective project management, high-level languages etc. But all these, as Brooks pointed out in his classic paper, addressed issues of implementation (writing code), not those of design (translating reality into computable representations). As Mahoney states,
…putting a portion of the world into the computer means designing an operative representation of that portion of the world that captures what we take to be its essential features. This has proved, as I say, no easy task; on the contrary it has proved difficult, frustrating and in some cases disastrous.
The problem facing the software historian is that he or she has to uncover the problem context and reality as perceived by the software designer, and thus reach an understanding of the design choices made. This is hard to do because the design rationale is implicit in the software artefact that the historian studies. Documentation is rarely any help here because,
…what programs do and what the documentation says they do are not always the same thing. Here, in a very real sense, the historian inherits the problems of software maintenance: the farther the program lies from its creators, the more difficult it is to discern its architecture and the design decisions that inform it.
There are two problems here:
- That software embodies a model of some aspect of reality.
- The only explanation of the model is the software itself.
As Mahoney puts it,
Legacy code is not just old code, but rather a continuing enactment, an operative representation, of the domain knowledge embodied in it. That may explain the difficulties software engineers have experienced in upgrading and replacing older systems.
Most software professionals will recognise the truth of this statement.
The legacy of legacy code
The problem is that new systems promise much, but are expensive and pose too many risks. As always, continuity must be maintained, but this is nigh impossible because no one quite understands the legacy bequeathed by legacy code: what it does, how it does it and why it was designed so. So customers play it safe and legacy code lives on. Despite all the advances in software engineering, software migrations and upgrades remain fraught with problems.
Mahoney concludes with the following play on the word “legacy”,
This situation (the gap between the old and the new) should be of common interest to computer people and to historians. Historians will want to know how it developed over several decades and why software systems have not kept pace with advances in hardware. That is, historians are interested in the legacy. Even as computer scientists wrestle with a solution to the problem the legacy poses, they must learn to live with it. It is part of their history, and the better they understand it, the better they will be able to move on from it.
This last point should be of interest to those running software development projects in corporate IT environments (and to a lesser extent those developing commercial software). An often unstated (but implicit) requirement is that the delivered software must maintain continuity between the past and present. This is true even for systems that claim to represent a clean break from the past; one never has the luxury of a completely blank slate, because there are always arbitrary constraints imposed by legacy systems. As Fred Brooks mentions in his classic article No Silver Bullet,
…In many cases, the software must conform because it is the most recent arrival on the scene. In others, it must conform because it is perceived as the most conformable. But in all cases, much complexity comes from conformation to other interfaces…
So, the legacy of legacy software is to add complexity to projects intended to replace it. Mahoney’s concluding line is therefore just as valid for project managers and software designers as it is for historians and computer scientists: project managers and software designers must learn to live with and understand this complexity before they can move on from it.
On the emotions evoked by project management artefacts
Introduction
The day-to-day practice of project management involves the use of several artefacts: from the ubiquitous Gantt chart to the less commonly used trend chart. It is of interest to understand the practical utility of these artefacts; consequently there is a fair bit of published work devoted to answering questions such as “What percentage of project managers use this artefact?”, “How and why do they use it?” etc. (see this paper review, for example). Such questions address the cognitive aspects of these artefacts – the logic, reasoning and thought processes behind their use. There is a less well understood side to the use of the artefacts: the affective or emotional one; the yin to the yang of the cognitive or logical side. A paper by Jon Whitty entitled, Project management artefacts and the affective emotions they evoke, (to appear in the International Journal of Managing Projects in Business in 2010) looks into the emotional affects (on practitioners) caused by (the use of) project management artefacts (see the note in the following paragraph for more on the term affect). This post presents an annotated summary of the paper.
[Note on the difference between affect and emotion. As I understand it, the term affect refers to automatic emotional responses which may amount to no more than a quick feeling of something being good or bad. This is in contrast to a full-blown emotion in which feelings are more intense. Unlike emotions, affective responses occur within a fraction of a second and may dissipate just as quickly. Furthermore, affect lacks the range and variety of conscious emotions.]
Like some of Whitty’s previous work, the paper presents an unusual – dare I say, challenging – perspective on the reasons why project managers use artefacts. I use the word “challenging” here in the sense of “questioning the rationale behind their use”, not in the sense of “difficulty.” To put the work in a wider context, it builds on the evolutionary view of project management advanced by Whitty and Schulz in an earlier paper. The evolutionary view holds that project management practices and principles give organisations (and hence individual project managers) certain survival advantages. The current paper studies how project management artefacts – through the emotions they evoke in project managers – “create” behaviours that “cause” project managers to sustain and propagate the practice of project management within their organisations.
Study Objectives
Whitty begins by framing two hypotheses which serve to outline the objectives of his study. They are:
- Project managers obtain an emotional affect from aspects of the project management experience.
- Project managers use the emotional affects of project management artefacts to increase their competitive advantage.
The first examines whether project managers’ behaviours are driven by the experience of managing projects; the second examines whether project managers – through their use of artefacts – manipulate their environment to their advantage. The paper “tests” these hypotheses empirically (the reason for the enclosing quotes will become clearer later), and also examines some implications of the results.
Background
The paper contains an extensive review of the literature on the evolutionary view of project management and emotions / affect. I found the review very useful; not only did it help me appreciate the context of the research, it also gave me some new insights into professional practice. I summarise the review below, so you can judge for yourself.
In their paper entitled, The PM_BOK Code, Whitty and Schulz argue that, in order to survive in an organisational environment, project managers are driven to put on a performance – much like stage actors – of managing projects. They recite lines (use project management terminology, deliver status reports) and use props (project management artefacts) before an audience of stakeholders ranging from senior sponsors to team members.
Subscribing to, and practising the ideals of, project management enables practitioners to gain a competitive advantage in the organisational jungle. One aim of the paper is to clarify the role of artefacts in the evolutionary framework: specifically, how the use of artefacts confers survival benefits, and what affects evoked in practitioners (who are using artefacts) cause the artefacts themselves to be passed on (i.e. to survive).
As far as emotion or affect is concerned, Whitty mentions that much of the work done to date focuses on the management of positive and negative emotions (felt by both the project manager and the team) so as to achieve a successful project outcome. And although there is a significant body of work on the effectiveness of project management artefacts, there is virtually nothing on the emotional affect of artefacts as they are being used. Nevertheless, research in other areas suggests a strong connection between the creation/use of artefacts, the emotions evoked and the consequences thereof. An example is the affective response evoked by building architecture in a person and the consequent effect on the person’s mood. Changes in mood in turn might predispose the person to certain ideals and values. Although the emotional response caused by artefacts has been studied in other organisational contexts, it has not been done heretofore in project management. For this reason alone, this paper merits attention from project management practitioners.
Methodology
Unlike research into the utility of artefacts – where an objective definition of utility is possible – any questions relating to the emotions evoked by artefacts can only be answered subjectively: I can tell you how I feel when I do something, you may even be able to tell how I feel by observing me, but you can never feel what I feel. Hence the only possible approach to answering such questions is a phenomenological one – i.e. attempting to understand reality as seen by others through their perceptions and subjective experience. Whitty uses an approach based on empirical phenomenology – a research methodology that aims to produce accurate descriptions of human experience through observation of behaviour.
[Note on phenomenological approaches to management research: There are two phenomenological research methods in management: hermeneutic and empirical. The empirical approach follows fairly strict data collection and analytical methods whereas the hermeneutic approach is less prescriptive about the techniques used. Another difference between the two is that the hermeneutic approach uses a range of sources including literary texts (since these are considered to reflect human experience) whereas empirical phenomenology is based on the analysis of factual data only. In essence the latter is closer to being a scientific/analytical approach to studying human experience than the former. See the very interesting paper – Revisiting Phenomenology: its potential for management research – by Lisa Ehrich for more.]
For his study, Whitty selected a group of about 50 project managers drawn from the ranks of professional bodies. The participants were asked to answer questions regarding what project management tools they enjoyed using, the emotions elicited by these tools and how they would feel if they weren’t allowed to use them. Additionally, they were also asked to imagine their ideal project management tool / process. Based on the answers provided, Whitty selected a small number of subjects for detailed, face-to-face interviews. The interviews probed for details on the responses provided in the survey. Audio and videotapes of the interviews were analysed to understand what the use of each artefact meant to the user, what emotions were evoked during its use and common gestures used while working with these tools – with the aim of understanding the essence of the experience of using a particular tool or process.
Whitty acknowledges some limitations of his approach, most of which are common to organisational studies. These include problems with self-reported data and a limited (and potentially non-representative) sample size. I have discussed some of these limitations in my posts on the role of social desirability bias and the abuse of statistics in project management research.
Results
From his analysis, Whitty found that eleven artefacts came up more often than others. These could be divided into conceptual and tangible artefacts. The former includes the following:
- Project
- Deadline
- Team
- Professional persona of a project manager
The latter includes:
- Gantt Chart
- WBS
- Iron Triangle
- S-Curve
- Project management post-nominals (certifications, degrees, titles)
- PMBOK Guide & Project management methodologies
- Professional bodies
The paper contains detailed descriptions of the results, including interesting comments by the participants. I can do no better than refer the reader to the original paper for these. Here, in the interests of space, I’ll present only a selection of the artefacts analysed, relying heavily on quotations from the paper. My choice is based entirely on the items and interpretations that I found particularly striking.
Project
From the responses received, Whitty concludes that:
Projects appear to be emotionally perceived as though they are composed of two opposing forces or elements which were not as dichotomistic as good and bad. Rather, these forces are more complementary or completing aspects of the one phenomenon such as in the concept of Yin – Yang, though this term was only mentioned by one participant. All of the participants described the most difficult parts of their roles as “challenges”, and felt they gained a sense of achievement and learning from their projects.
Participants described the experience of managing a project in terms of a duality between thrill and excitement, even fear and personal satisfaction…Furthermore, many believed in some sort of karmic effect where the benefits of a good work ethic today would be paid back in future project success.
Team
When asked about the concept of a team, most of the respondents felt that there was a sense of mutual commitment between the project manager and team, but not necessarily one of mutual responsibility. One project manager said, “If they (the team) shine I shine, but if it all goes wrong I take the heat.”
On the other hand, most respondents seemed to be appreciative of their teams. Many used the gesture of a circle (tracing out a circle with a finger, for example) when talking about their teams. Whitty writes,
…As an expression of emotion the circle gesture has a limitless or boundless aspect with no beginning, no end, and no division. It symbolises wholeness and completeness, and it is possibly used by project managers to express their feelings of mutual commitment and fidelity to the team and the project.
Despite the general feeling that the project manager takes the blame if things go wrong, most respondents thought that there was a strong mutual commitment between the team and project manager.
Project Manager Persona
This one is very interesting. When asked to describe what a successful project manager would look like, some responses were, “mid 20s to mid 40s”, “businesslike”, “must wear a business suit”, “confident and assertive”. Some commented on how they personally and actively used the persona. On the other hand, Whitty states,
There appears to be a tension or anxiety when creating and maintaining the façade of control…
He also suggests a metaphor for the persona:
…that beneath the external impression of the graceful swan are furiously paddling legs…
I think that’s an absolutely marvellous characterisation of a project manager under stress.
Gantt Chart
The Gantt Chart is perhaps the most well-known (and over-exposed) tangible project management artefact. For this reason alone, it is interesting to look into the emotional responses evoked by it. To quote from the paper,
It seems that project managers cannot talk about PM without mentioning the Gantt chart. Project managers appear to be compelled to make them to create and maintain their professional persona.
On the other hand,
Though the Gantt chart is closely associated with PM, many participants regarded this association as a burden…Even though project managers feel frustration that they are expected, even forced to use Gantt charts, they also manipulate this situation to their advantage and use Gantt charts to placate senior management and clients.
Stress and hopefulness appear to be two emotions linked with Gantt Charts (duality again!). One participant said “the Gantt charts you’re showing me don’t mean anything to me I feel pretty neutral about them. But my Gantt charts can really stress me out.” And another, “When I look at it (the Gantt chart) all finished, (heavy sigh) I suppose I’m hoping that’s how it will all turn out…”
As I see it, the Gantt chart – much like the PERT chart – is used more to manage management than to manage projects, and hence the mixed emotions evoked by its use.
Work Breakdown Structure
Whitty mentions that over two-thirds of the participants said they used WBS in one form or another. He states,
All participants view work in packets or as bounded objects. As one put it, “I like to break the work down into nice crisp chunks, and then connect them all up together again.” This behaviour supports Gestalt theories that in order to interpret what we receive through our senses we attempt to organize information into certain groups which include: similarity, proximity, continuity, and closure….
Through his reference to Gestalt theories, Whitty suggests that breaking the project up into chunks of work and then putting it back together again helps the project manager grok the project – i.e. understand the interconnections between project elements and the totality of the project in a deep way. A little later he states,
Many experience satisfaction, contentment, even a sense of control from the WBS process.
I can’t help but wonder – does the popularity of the tool stem, at least in part, from its ability to evoke positive affect?
PMBOK Guide and Methodologies
Based on the responses received, Whitty mentions,
It is apparent that some PM methodologies are PM artefacts in themselves and are used as currency to gain a competitive advantage.
Yet the profession appears to be divided about the utility of methodologies,
All the participants were aware of the PMBOK® Guide, and all of them utilised a PM methodology of some sort, whether it were an off-the-shelf brand or a company-grown product. Participants appeared to be either for or against PM methodologies, some even crossed over the dividing line mid-sentence (!)
Another theme that arose is that methodologies are “something to hide behind” should things go wrong: “Don’t blame me, I did things by the Book.” Methodologies thus offer two side benefits (apart from the main one of improving chances of project success!): they help “certify” a project manager’s competence and act as a buffer if things go wrong.
Discussion
Whitty concludes that the data supports his hypothesis that project managers obtain an emotional affect from aspects of project management experience. In his words:
This study has shown that project managers are drawn to project work. The participants in this study forage for projects because they can obtain or experience an emotional affect or more informally stated a favourable emotional fix from the challenge they present…. they are stimulated by the challenges the construct of a project has to offer. Furthermore, they appear to be fairly sure they can handle these challenges with their existing skill and abilities…
The data also suggests that despite the dominant deterministic approach to project management, project managers also,
…operate under the cognitive logic of yin-yang. They conceptualise the emotional experience of managing a project in terms of two possible states or statuses of events that ebb and flow; one state gradually transforming into the other state along a time dimension. What is also interesting is that these project managers find it necessary to conceal this behaviour for survival reasons.
The data also supports the second hypothesis: that project managers use the emotional affects of the project management experience to increase their competitive advantage. This is clear, for example, from the discussions of the Project Manager Persona, Gantt Chart and methodologies.
Concluding remarks
This is an important study because it has implications for how project management is taught, practised and researched. For example, most project management courses teach tools and techniques – such as Gantt Charts – with the implicit assumption that using them will improve chances of project success. However, this research calls that assumption into question. To quote,
…some practitioners create Gantt charts because they enjoy the Gantt charting process, and some create them to placate others and/or to be viewed favourably by others. It is simply not clear how Gantt charts or the scheduling process in general contributes to the overall performance of a project…
Using this as an example, Whitty makes a plea for an objective justification of project management practices. It’s just not good enough to say we must use something because so-and-so methodology says so (see my piece entitled, A PERT myth, for another example of a tool that, though well entrenched, has questionable utility). The research also indicates that a project manager’s behaviours are influenced by the physical and cultural environment in which he or she operates: some practices are followed because they give the project manager a sense of control; others because they help gain a competitive advantage. Whitty suggests that senior managers would get more out of their project managers if they understood how project managers are affected by their environment. Further, he recommends that project managers should be encouraged to adopt only those techniques, practices and norms that are demonstrably useful. Those that aren’t should be abandoned.
So what are the implications for the profession? In a nutshell: it is to think critically about the way we manage projects. Practices recommended by a particular methodology or authority are sometimes followed without critical analysis or introspection. So the next time you invoke a tool, technique or practice – stop for a minute and reflect on what you’re doing and why. An honest answer may hold some surprises.
The role of cognitive biases in project failure
Introduction
There are two distinct views of project management practice: the rational view which focuses on management tools and techniques such as those espoused by frameworks and methodologies, and the social/behavioural view which looks at the social aspect of projects – i.e. how people behave and interact in the context of a project and the wider organisation. The difference between the two is significant: one looks at how projects should be managed, it prescribes tools, techniques and practices; the other at what actually happens on projects, how people interact and how managers make decisions. The gap between the two can sometimes spell the difference between project success and failure. In many failed projects, the failure can be traced back to poor decisions, and the decisions themselves to cognitive biases: i.e. errors in judgement based on perceptions. A paper entitled, Systematic Biases and Culture in Project Failure, by Barry Shore looks at the role played by selected cognitive biases in the failure of some high profile projects. The paper also draws some general conclusions on the relationship between organisational culture and cognitive bias. This post presents a summary and review of the paper.
The paper begins with a brief discussion of the difference between the rational and social/behavioural views of project management. The rational view is prescriptive – it describes management procedures and techniques which claim to increase the chances of success if followed. Further, it emphasises causal effects (if you follow X procedure then Y happens). The social/behavioural view is less well developed because it looks at human behaviour, which is hard to study in controlled conditions, let alone in projects. Yet, developments in behavioural economics – mostly based on the pioneering work of Kahneman and Tversky – can be directly applied to project management (see my post on biases in project estimation, for instance). In the paper, Shore looks at eight case studies of failed projects and attempts to attribute their failure to selected cognitive biases. He also looks into the relationship between (project and organisational) culture and the prevalence of the selected biases. Following Hofstede, he defines organisational culture as shared perceptions of organisational work practices and, analogously, project culture as shared perceptions of project work practices. Since projects take place within organisations, project culture is obviously influenced by the organisational culture.
Scope and Methodology
In this section I present a brief discussion of the biases that the paper focuses on and the study methodology.
The literature describes a large number of cognitive biases. The author selects the following nine for his study:
Available data: Restricting oneself to using data that is readily or conveniently available. Note that “Available data” is a non-standard term: it is normally referred to as a sampling bias, which in turn is a type of selection bias.
Conservatism (Semmelweis reflex): Failing to consider new information or negative feedback.
Escalation of commitment: Allocating additional resources to a project that is unlikely to succeed.
Groupthink: Members of a project group under pressure to think alike, ignoring evidence that may threaten their views.
Illusion of control: Management believing they have more control over a situation than an objective evaluation would suggest.
Overconfidence: Having a level of confidence that is unsupported by evidence or performance.
Recency (serial position effect): Undue emphasis being placed on the most recent data, ignoring older data.
Selective perception: Viewing a situation subjectively; perceiving only certain (convenient) aspects of a situation.
Sunk cost: Not accepting that costs already incurred cannot be recovered and should not be considered as criteria for future decisions. This bias is closely related to loss aversion.
The author acknowledges that there is a significant overlap between some of these effects: for example, illusion of control has much in common with overconfidence. This implies a certain degree of subjectivity in assigning these as causes for project failures.
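The sunk cost bias in the list above can be made concrete: a rational go/no-go decision compares only the expected future benefit against the remaining cost; money already spent drops out of the equation. Here is a minimal sketch (the function names and dollar figures are my own, purely illustrative):

```python
def rational_continue(expected_future_benefit: float,
                      remaining_cost: float,
                      sunk_cost: float = 0.0) -> bool:
    """Rational rule: continue only if the future benefit exceeds the
    remaining cost. sunk_cost is accepted but deliberately ignored --
    it cannot be recovered, so it should not influence the decision."""
    return expected_future_benefit > remaining_cost


def biased_continue(expected_future_benefit: float,
                    remaining_cost: float,
                    sunk_cost: float) -> bool:
    """The sunk cost trap: folding money already spent into the perceived
    value of finishing ("we can't let that investment go to waste")."""
    return expected_future_benefit + sunk_cost > remaining_cost


# A project has burned $8M; finishing costs $3M more but is worth only $2M.
print(rational_continue(2.0, 3.0, sunk_cost=8.0))  # False -- stop the project
print(biased_continue(2.0, 3.0, sunk_cost=8.0))    # True  -- the trap says "keep going"
```

The point of the sketch is that both rules see identical inputs; only the treatment of the unrecoverable $8M differs, and that alone flips the decision.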
The failed projects studied in the paper are high profile efforts that failed in one or more ways. The author obtained data for the projects from public and government sources. He then presented the data and case studies to five independent groups of business professionals (constituted from a class he was teaching) and asked them to reach a consensus on which biases could have played a role in causing the failures. The groups presented their results to the entire class, then, through discussion, reached agreement on which of the biases may have led to the failures.
The case studies
This section describes the failed projects studied and the biases that the groups identified as being relevant.
Airbus 380: Airbus was founded as a consortium of independent aerospace companies. The A380 project – which was started in 2000 – was aimed at creating the A380 superjumbo jet, with a capacity of 800 passengers. The project involved coordination between many sites. Six years into the project, when the aircraft was being assembled in Toulouse, it was found that a wiring harness produced in Hamburg failed to fit the airframe.
The group identified the following biases as being relevant to the failure of the Airbus project:
Selective perception: Managers acted to guard their own interests and constituencies.
Groupthink: Each participating organisation worked in isolation from the others, creating an environment in which groupthink would thrive.
Illusion of control: Corporate management assumed they had control over participating organisations.
Availability bias: Management in each of the facilities did not have access to data in other facilities, and thus made decisions based on limited data.
Coast Guard Maritime Domain Awareness Project: This project, initiated in 2001, was aimed at creating the maritime equivalent of an air traffic control system. It was to use a range of technologies, and involved coordination between many US government agencies. The goal of the first phase of the project was to create a surveillance system that would be able to track boats as small as jet skis. The surveillance data was to be run through a software system that would flag potential threats. In 2006 – during the testing phase – the surveillance system failed to meet quality criteria. Further, the analysis software was not ready for testing.
The group identified the following biases as being relevant to the failure of the Maritime Awareness project:
Illusion of control: Coordinating several federal agencies is a complex task. This suggests that project managers may have thought they had more control than they actually did.
Selective perception: Separate agencies worked only on their portions of the project, failing to see the larger picture. This suggests that project groups may have unwittingly been victims of selective perception.
Columbia Shuttle: The Columbia Shuttle disaster was caused by a piece of foam insulation breaking off the external fuel tank and damaging the leading edge of the wing. The problem with the foam sections was known, but management had assumed that it posed no risk.
In their analysis, the group found the following biases to be relevant to the failure of this project:
Conservatism: Management failed to take into account negative data.
Overconfidence: Management was confident there were no safety issues.
Recency: Foam insulation had broken off on previous flights without causing problems, so undue weight was placed on those recent, benign outcomes.
Denver Airport Baggage Handling System: The Denver airport project, which was scheduled for completion in 1993, was to feature a completely automated baggage handling system. The technical challenges were enormous because the proposed system was an order of magnitude more complex than those that existed at the time. The system was completed in 1995, but was riddled with problems. After almost a decade of struggling to fix the problems, and with the project massively over budget, the system was abandoned in 2005.
The group identified the following biases as playing a role in the failure of this project:
Overconfidence: Although the project was technically very ambitious, the contractor (BAE systems) assumed that all technical obstacles could be overcome within the project timeframes.
Sunk cost: The customer (United Airlines) did not pull out of the project even when other customers pulled out, suggesting that they were reluctant to write off already incurred costs.
Illusion of control: Despite evidence to the contrary, management assumed that problems could be solved and that the project remained under control.
Mars Climate Orbiter and Mars Polar Lander: Telemetry signals from the Mars climate orbiter ceased when the spacecraft approached its destination. The root cause of the problem was found to be a failure to convert between metric and imperial units: the contractor, Lockheed, had used imperial (pound-force) units in the engine design, but NASA scientists who were responsible for operations and flight assumed the data was in metric units. A few months after the climate orbiter disaster, another spacecraft, the Mars polar lander, fell silent just short of landing on the surface of Mars. The failure was attributed to a software problem that caused the engines to shut down prematurely, thereby causing the spacecraft to crash.
The group attributed the above project failures to the following biases:
Conservatism: Project engineers failed to take action when they noticed that the spacecraft was off-trajectory early in the flight.
Sunk cost: Managers were under pressure to launch the spacecraft on time – waiting until the next launch window would have entailed a wait of many months thus “wasting” the effort up to that point. (Note: In my opinion this is an incorrect interpretation of sunk cost)
Selective perception: The spacecraft modules were constructed by several different teams. It is very likely that teams worked with a very limited view of the project (one which was relevant to their module).
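The unit mismatch behind the orbiter failure is easy to reproduce in miniature. In the sketch below (all function names and numbers are mine, purely illustrative – only the pound-force to newton conversion factor is real), one component reports thruster impulse in pound-force seconds while another silently assumes newton-seconds:

```python
# Illustrative sketch of a metric/imperial unit mismatch.
# The conversion factor is real; everything else is made up.

LBF_TO_N = 4.44822  # newtons per pound-force

def thruster_impulse_lbf_s(burn_seconds: float, thrust_lbf: float) -> float:
    """Ground software (imperial): returns impulse in pound-force seconds."""
    return thrust_lbf * burn_seconds

def delta_v(impulse_newton_s: float, mass_kg: float = 600.0) -> float:
    """Flight software: expects newton-seconds, returns delta-v in m/s."""
    return impulse_newton_s / mass_kg

impulse = thruster_impulse_lbf_s(burn_seconds=10.0, thrust_lbf=5.0)  # 50 lbf*s

wrong = delta_v(impulse)             # units silently assumed to be N*s
right = delta_v(impulse * LBF_TO_N)  # explicit conversion applied

print(wrong, right)  # the unconverted value is ~4.4x too small
```

Nothing crashes and no error is raised – the computation simply produces a trajectory correction that is wrong by a factor of about 4.4, which is precisely why such defects can survive until a spacecraft goes silent.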
Merck Vioxx: Vioxx was a very successful anti-inflammatory medication developed and marketed by Merck. An article published in 2000 suggested that Merck misrepresented clinical trial data, and another paper published in 2001 suggested that those who took Vioxx were subject to a significantly increased risk of assorted cardiac events. Under pressure, Merck put a warning label on the product in 2002. Finally, the drug was withdrawn from the market in 2004 after over 80 million people had taken it.
The group found the following biases to be relevant to the failure of this project:
Conservatism: The company ignored early warning signs about the toxicity of the drug.
Sunk cost: By the time concerns were raised, the company had already spent a large amount of money in developing the drug. It is therefore likely that there was a reluctance to write off the costs incurred to that point.
Microsoft Xbox 360: The Microsoft Xbox console was released to market in 2005, a year before comparable offerings from its competitors. The product was plagued with problems from the start; some of them include: internet connectivity issues, damage caused to game disks, faulty power cords and assorted operational issues. The volume of problems and complaints prompted Microsoft to extend the product warranty from one to three years at an expected cost of $1 billion.
The group thought that the following biases were significant in this case:
Conservatism: Despite the early negative feedback (complaints and product returns), the development group seemed reluctant to acknowledge that there were problems with the product.
Groupthink: It is possible that the project team ignored data that threatened their views on the product. The group reached this conclusion because Microsoft seemed reluctant to comment publicly on the causes of problems.
Sunk cost: By the time problems were identified, Microsoft had invested a considerable sum of money on product development. This suggests that the sunk cost trap may have played a role in this project failure.
NYC Police Communications System: (Note: I couldn’t find any pertinent links to this project). In brief: the project was aimed at developing a communications system that would enable officers working in the subway system to communicate with those on the streets. The project was initiated in 1999 and scheduled for completion in 2004 with a budgeted cost of $115 million. A potential interference problem was identified in 2001 but the contractors ignored it. The project was completed in 2007, but during trials it became apparent that interference was indeed a problem. Fixing the issue was expected to increase the cost by $95 million.
The group thought that the following biases may have contributed to the failure of this project:
Conservatism: Project managers failed to take early data on interference into account.
Illusion of control: The project team believed – until very late in the project – that the interference issue could be fixed.
Overconfidence: Project managers believed that the design was sound, despite evidence to the contrary.
Analysis and discussion
The following four biases appeared more often than others: conservatism, illusion of control, selective perception and sunk cost.
The following biases appeared less often: groupthink and overconfidence.
Recency and availability were mentioned only once.
Based on the small data sample and the somewhat informal means of analysis, the author concludes that the first four biases may be dominant in project management. In my opinion this conclusion is shaky because the study has a few shortcomings, which I list below:
- The sample size is small
- The sample covers a wide range of domains, which makes it difficult to generalise the findings.
- No checks were done to verify the group members’ understanding of all the biases.
- The data on which the conclusions are based is incomplete – only publicly available data was used. (Perhaps this is itself an example of the available data bias at work?)
- A limited set of biases is used – there could be other biases at work.
- The conclusions themselves are subject to group-level biases such as groupthink. This is a particular concern because the group was specifically instructed to look at the case studies through the lens of the selected cognitive biases.
- The analysis is far from exhaustive or objective; it was done as a part of classroom exercise.
For the above reasons, the analysis is at best suggestive: it indicates that biases may play a role in the decisions that lead to project failures.
The author also draws a link between organisational culture and environments in which biases might thrive. To do this, he maps the biases on to the competing values framework of organisational culture, which views organisations along two dimensions:
- The focus of the organisation – internal or external.
- The level of management control in the organisation – controlling (stable) or discretionary (flexible).
According to the author, all nine biases are more likely in a stability (or control) focused environment than in a flexible one, and all barring sunk cost are more likely to thrive in an internally focused organisation than an externally focused one. This conclusion makes sense: project teams are more likely to avoid biases when empowered to make decisions, free from management and organisational pressures. Furthermore, biases are also less likely to play a role when external input – such as customer feedback – is taken seriously.
That said, the negative effects of internally focused, high control organisations can be countered. The author quotes two examples:
- When designing the 777 aircraft, Boeing introduced a new approach to project management wherein teams were required to include representatives from all groups of stakeholders. The team was encouraged to air differences in opinion and to deal with these in an open manner. This approach has been partly credited for the success of the 777 project.
- Since the Vioxx debacle, Merck rewards research scientists who terminate projects that do not look promising.
Conclusions
Despite my misgivings about the research sample and methodology, the study does suggest that standard project management practices could benefit by incorporating insights from behavioural studies. Further, the analysis indicates that cognitive biases may have indeed played a role in the failure of some high profile projects. My biggest concern here, as stated earlier, is that the groups were required to associate the decisions with specific biases – i.e. there was an assumption that one or more of the biases from the (arbitrarily chosen) list was responsible for the failure. In reality, however, there may have been other more important factors at work.
The connections with organisational culture are interesting too, but hardly surprising: people are more likely to do the right thing when management empowers them with responsibility and authority.
In closing: I found the paper interesting because it deals with an area that isn’t very well represented in the project management literature. Further, I believe these biases play a significant role in project decision making, especially in internally focussed / controlled organisations (project managers are human, and hence not immune…). However, although the paper supports this view, it doesn’t make a wholly convincing case for it.
Further Reading
For more on cognitive biases in organisations, see Chapter 2 of my book, The Heretic’s Guide to Best Practices.

