Archive for the ‘Bias’ Category
A note on bias in project management research
Project management research relies heavily on empirical studies – that is, studies that are based on observation of reality. This is necessary because projects are coordinated activities involving real-world entities: people, teams and organisations. A project management researcher can theorise all he or she likes, but the ultimate test of any theory is, “do the hypotheses agree with the data?” In this, project management is no different from physics: to be accepted as valid, any theory must agree with reality. In physics (or any of the natural sciences), however, experiments can be carried out in controlled conditions that ensure objectivity and the elimination of any extraneous effects or biases. This isn’t the case in project management (or for that matter any of the social sciences). Since people are the primary subjects of study in the latter, subjectivity and bias are inevitable. This post delves into the latter point with an emphasis on project management research.
From my reading of several project management research papers, most empirical studies in project management proceed roughly as follows:
- Formulate hypotheses based on observation and / or existing research.
- Design a survey based on the hypotheses.
- Gather survey data.
- Accept or reject the hypotheses based on statistical analysis of the data (a minimal sketch of this step follows the list).
- Discuss and generalise.
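To make the analysis step concrete, here is a minimal sketch of how survey responses might be used to accept or reject a hypothesis. The hypothesis, the data and the significance threshold are all invented for illustration; real studies use far larger samples and more sophisticated statistical techniques.

```python
# Hypothetical sketch: testing a survey-based hypothesis such as
# "projects with dedicated risk managers report smaller schedule overruns".
# All figures below are invented purely for illustration.
import numpy as np
from scipy import stats

# Simulated survey responses: % schedule overrun reported by each respondent
overrun_with_risk_mgr = np.array([5, 12, 0, 8, 15, 3, 10, 7])
overrun_without_risk_mgr = np.array([20, 35, 10, 25, 18, 40, 12, 30])

# Two-sample t-test (Welch's variant, since the group variances may differ)
t_stat, p_value = stats.ttest_ind(overrun_with_risk_mgr,
                                  overrun_without_risk_mgr,
                                  equal_var=False)

alpha = 0.05
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject the null hypothesis: the difference is statistically significant.")
else:
    print("Fail to reject the null hypothesis.")
```

The point of the sketch is simply that the final accept/reject decision rests entirely on the survey data, which is why the quality of that data matters so much.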
Survey data plays a crucial role in empirical project management studies. This raises the question: do researchers account for bias in survey responses? Before proceeding, I’d like to clarify the question with an example. Assume I’m a project manager who receives a research survey asking questions about my experience and the kinds of projects I have managed. What’s to stop me from inflating my experience and exaggerating the projects I have run? Answer: nothing! Now, assuming that a small (or, possibly, not so small) percentage of project managers targeted by research surveys stretch the truth for whatever reason, the researcher is going to end up with data that is at least partly garbage. Hence the question I posed at the start of this paragraph.
The tendency of people to describe themselves in a positive light is referred to as social desirability bias. It is difficult to guard against completely, even if the researcher assures respondents of confidentiality and anonymity in analysis and reporting. Clearly, it is more of a problem when surveys are administered within an organisation: respondents may fear reprisals for being truthful. In this connection William Whyte made the following comment in his book The Organization Man: “When an individual is commanded by an organisation to reveal his innermost feelings, he has a duty to himself to give answers that serve his self-interest rather than that of The Organization.” Notwithstanding this, problems remain even with external surveys. Anonymity lessens the bias, but does not make it disappear: it seems logical that people will be more relaxed with external surveys (in which they have no direct stake), more so if responses are anonymous, but one cannot be completely certain that their answers are bias-free.
Of course, researchers are aware of this problem, and have devised techniques to deal with it. The following methods are commonly used to reduce social desirability bias:
- The use of scales, such as the Marlowe-Crowne social desirability scale, to determine the susceptibility of respondents to social desirability bias. These scales are based on responses to statements describing behaviours that are socially desirable but, at the same time, very unlikely to be universally true. It’s a bit hard to explain; the best way to understand the concept is to try this quiz. A recognised limitation of such scales is that they do not distinguish between genuine individual differences and bias. Many researchers have questioned the utility of such scales on other grounds as well – see this paper, for example. (A toy scoring sketch follows this list.)
- The use of forced choice responses – where respondents are required to choose between different scenarios rather than assigning a numerical (or qualitative) rating to a specific statement. In this case, survey design is very important as the choices presented need to be well-balanced and appropriately worded. However, even with due attention to design, there are well-known problems with forced choice response surveys (see this paper abstract, for example).
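To illustrate the first approach, here is a toy sketch of how scale-based screening might work. The items, keying and cut-off below are invented for illustration; they are not the actual Marlowe-Crowne items, and a real scale has many more items and a validated scoring procedure.

```python
# Hypothetical sketch of scale-based screening for social desirability bias.
# The items, keying and cut-off are invented; they are NOT the Marlowe-Crowne items.

# Each item describes behaviour that is socially desirable but unlikely to be
# universally true. keyed_true=True means endorsing the item adds to the score.
ITEMS = [
    ("I have never resented being asked to return a favour.", True),
    ("I sometimes feel irritated when I don't get my way.", False),  # reverse-keyed
    ("I am always courteous, even to disagreeable people.", True),
]

def social_desirability_score(responses):
    """responses: list of booleans (True = respondent endorsed the item)."""
    score = 0
    for (_, keyed_true), endorsed in zip(ITEMS, responses):
        # A point is scored whenever the answer matches the socially desirable keying.
        if endorsed == keyed_true:
            score += 1
    return score

def flag_respondent(responses, cutoff=2):
    """Flag respondents whose score suggests a strong social desirability response set."""
    return social_desirability_score(responses) >= cutoff

# Example: a respondent who always answers in the socially desirable direction is flagged,
# and their other survey answers would be treated with caution.
print(flag_respondent([True, False, True]))  # True
```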
It appears that social desirability bias is hard to eliminate, though with due care it can be reduced. As far as I can tell (from my limited reading of project management research), most researchers rely on guaranteed anonymity of survey responses as being enough to control this bias. Is this good enough? Maybe it is, maybe not: academics and others are invited to comment.
Improving project forecasts
Many projects are plagued by cost overruns and benefit shortfalls. So much so that a quick search on Google News almost invariably returns a recent news item reporting a high-profile cost overrun. In a 2006 paper entitled, From Nobel Prize to Project Management: Getting Risks Right, Bent Flyvbjerg discusses the use of reference class forecasting to reduce inaccuracies in project forecasting. This technique, which is based on theories of decision-making in uncertain (or risky) environments,1 forecasts the outcome of a planned action based on actual outcomes in a collection of actions similar to the one being forecast. In this post I present a brief overview of reference class forecasting and its application to estimating projects. The discussion is based on Flyvbjerg’s paper.
According to Flyvbjerg, the reasons for inaccuracies in project forecasts fall into one or more of the following categories:
- Technical – These are reasons pertaining to unreliable data or the use of inappropriate forecasting models.
- Psychological – This pertains to the inability of most people to judge future events in an objective way. Typically it manifests itself as undue optimism, unsubstantiated by facts; behaviour that is sometimes referred to as optimism bias. This is the reason for statements like, “No problem, we’ll get this to you in a day.” – when the actual time is more like a week.
- Political – This refers to the tendency of people to misrepresent things for their own gain – e.g. one might understate costs and / or overstate benefits in order to get a project funded. Such behaviour is sometimes called strategic misrepresentation (commonly known as lying!).
Technical explanations are often used to explain inaccurate forecasts. However, Flyvbjerg rules these out as valid explanations for the following reasons. Firstly, inaccuracies attributable to data errors (technical errors) should be normally distributed with a mean of zero, but actual inaccuracies have been shown to be non-normal in a variety of cases. Secondly, if inaccuracies in data and models were the problem, one would expect forecasts to improve as models and data collection techniques improve. However, this clearly isn’t the case: projects continue to suffer from huge forecasting errors.
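Flyvbjerg’s first argument is easy to check against any set of forecast errors. Here is a minimal sketch, using invented cost-overrun figures: if the errors were purely technical noise they should pass a normality test and have a mean close to zero.

```python
# Hypothetical sketch of checking the "technical noise" explanation on forecast errors.
# The cost-overrun percentages below are invented for illustration.
import numpy as np
from scipy import stats

# Forecast error for each project: (actual cost - forecast cost) / forecast cost, in %
overrun_pct = np.array([45, 80, 20, 150, 10, 60, 5, 200, 30, 90, 15, 70])

# If errors were purely technical noise, we would expect them to be roughly
# normal with a mean of zero. Two quick checks:
shapiro_stat, shapiro_p = stats.shapiro(overrun_pct)   # normality test
t_stat, t_p = stats.ttest_1samp(overrun_pct, 0.0)      # is the mean zero?

print(f"Shapiro-Wilk p = {shapiro_p:.3f}  (small p suggests the errors are not normal)")
print(f"One-sample t-test p = {t_p:.3f}  (small p suggests the mean differs from zero)")
```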
Based on the above Flyvbjerg concludes that technical explanations do not account for forecast inaccuracies as comprehensively as psychological and political explanations do. Both the latter involve human bias. Such bias is inevitable when one takes an inside view, which focuses on the internals of a project – i.e. the means (or processes) through which a project will be implemented. Instead, Flyvbjerg suggests taking an outside view – one which focuses on outcomes of similar (already completed) projects rather than on the current project. This is precisely what reference class forecasting does, as I explain below.
Reference class forecasting is a systematic way of taking an outside view of planned activities, thereby eliminating human bias. In the context of projects this amounts to creating a probability distribution of estimates based on data for completed projects that are similar to the one of interest, and then comparing the said project with the distribution in order to get a most likely outcome. Basically, reference class forecasting consists of the following steps:
- Collecting data for a number of similar past projects – these projects form the reference class. The reference class must encompass a sufficient number of projects to produce a meaningful statistical distribution, but individual projects must be similar to the project of interest.
- Establishing a probability distribution based on (reliable!) data for the reference class. The challenge here is to get good data for a sufficient number of reference class projects.
- Predicting the most likely outcomes for the project of interest based on comparisons with the reference class distribution (a simple sketch of this step follows the list).
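Here is a minimal sketch of the last step, using an invented reference class of cost-overrun ratios. The “acceptable risk of overrun” parameter and the percentile-based uplift reflect the general idea of the outside view; they are my own simplification, not the detailed procedures in Flyvbjerg’s guidance document.

```python
# Hypothetical sketch of reference class forecasting for project cost.
# The reference-class overrun ratios below are invented for illustration.
import numpy as np

# Ratio of actual cost to forecast cost for completed, similar projects
# (the reference class). 1.0 means the original forecast was exactly right.
reference_class = np.array([1.10, 1.45, 0.95, 1.80, 1.20, 1.35, 1.05,
                            2.10, 1.25, 1.60, 1.15, 1.40])

def uplifted_estimate(base_estimate, acceptable_overrun_risk=0.2):
    """Adjust a raw (inside-view) estimate so that, based on the reference class,
    the chance of exceeding it is roughly `acceptable_overrun_risk`."""
    # The uplift is the overrun ratio at the corresponding percentile of the
    # reference-class distribution -- the "outside view" of the project.
    percentile = 100 * (1 - acceptable_overrun_risk)
    uplift = np.percentile(reference_class, percentile)
    return base_estimate * uplift

# Example: a project with an inside-view estimate of $10m.
print(f"Adjusted estimate: ${uplifted_estimate(10_000_000):,.0f}")
```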
In the paper, Flyvbjerg describes an application of reference class forecasting to large scale transport infrastructure projects. The processes and procedures used are published in a guidance document entitled Procedures for Dealing with Optimism Bias in Transport Planning, so I won’t go into details here. The trick, of course, is to get reliable data for similar projects. Not an easy task.
To conclude, project forecasts are often off the mark by a wide margin. Reference class forecasting is an objective technique that eliminates human bias from the estimating process. However, because of the cost and effort involved in building the reference distribution, it may only be practical to use it on megaprojects.
1Daniel Kahneman received the Nobel Prize in Economics in 2002 for his work on how people make decisions in uncertain situations. His work, known as prospect theory, forms the basis of reference class forecasting.
Do project managers learn from experience?
Do project managers learn from their experiences? One might think the answer is a pretty obvious, “Yes.” However, in a Harvard Business Review article entitled The Experience Trap, Kishore Sengupta, Tarek Abdel-Hamid and Luk Van Wassenhove suggest the answer may well be no, especially on complex projects. I found this claim surprising, as I’m sure many project managers would. It is therefore worth reviewing the article and the arguments made therein.
The article is based on a study in which several hundred experienced project managers were asked to manage a simulated software project with specified goals and constraints. Most participants failed miserably: their deliverables were late, over budget and defect-ridden. This despite the fact that most of them acknowledged that the problems encountered in the simulations were reflective of those they had previously seen on real projects. The authors suggest this indicates problems with the way project managers learn from experience. Specifically:
- When making decisions, project managers do not take into account consequences of prior actions.
- Project managers don’t change their approach, even when it is evident that it doesn’t work.
The authors identify three causes for this breakdown in learning:
- Time lags between cause and effect: In complex projects, the link between causes and effects is not immediately apparent. The main reason for this, the authors contend, is that there can be significant delays between the two – e.g. something done today might affect the project only after a month. The authors studied this effect through another simulated project in which requirements increased during implementation. The participants were asked to make hiring decisions at specified intervals in the project, based on their anticipated staffing requirements. The results showed that the ability of the participants to make good hiring decisions deteriorated as the arrival lag (time between hiring and arrival) or assimilation lag (time between arrival and assimilation) increased. This, the authors claim, shows that project managers find it hard to make causal connections when delays between causes and effects are large. (A toy illustration of this delay effect follows the list.)
- Incorrect estimates: It is well established that software projects are notoriously hard to estimate (see my article on complexity of IT projects for more on why this is so). The authors studied how project managers handle incorrect estimates. This, again, was done through a simulation. What they found was participants tended to be overly conservative when providing estimates even when things were actually going quite well. The authors suggest this is an indication that project managers attempt to game the system to get more resources (or time), regardless of what the project data tells them.
- Initial goal bias: Through yet another simulation, the authors studied what happens as project goals change with time. The simulation started out with a well-defined initial scope, which was then changed some time after the project started. Participants were not required to re-evaluate goals as the scope changed, but that avenue was open to them. The researchers found that none of the participants readjusted their goals in response to the change, indicating that unless explicitly required to re-evaluate objectives, project managers tend to stick to their original targets.
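To see why large delays make causal connections hard to spot, here is a toy sketch of the staffing dynamics described in the first point above. The naive hiring rule and the numbers are my own assumptions, not the authors’ simulation model; the point is simply that a decision rule which ignores the hiring pipeline overshoots badly once a lag is introduced.

```python
# Hypothetical sketch of how a delay between hiring and arrival distorts staffing
# decisions. The hiring rule and numbers are invented; this is not the authors' model.

def simulate_staffing(target, arrival_lag, periods=12):
    """Naive rule: each period, hire enough to close the visible gap,
    ignoring people already hired but not yet arrived."""
    staff = 0
    pipeline = [0] * arrival_lag  # hires in transit, one slot per period of lag
    history = []
    for _ in range(periods):
        # New arrivals this period come off the front of the pipeline (if any lag)
        arrivals = pipeline.pop(0) if arrival_lag else 0
        staff += arrivals
        # Decision is based only on visible staff, not on hires still in transit
        new_hires = max(target - staff, 0)
        if arrival_lag:
            pipeline.append(new_hires)
        else:
            staff += new_hires
        history.append(staff)
    return history

# With no lag, staffing converges immediately; with a lag, the same rule
# keeps re-hiring for the same gap and badly overshoots the target.
print("lag=0:", simulate_staffing(target=10, arrival_lag=0))
print("lag=3:", simulate_staffing(target=10, arrival_lag=3))
```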
After discussing the above barriers to learning, the authors offer the following suggestions to reduce them:
- Provide cognitive feedback: A good way to understand causal relationships in complex processes is to provide cognitive feedback – i.e. feedback that clarifies the connections between important variables. In the simulation exercise involving arrival / assimilation delays, participants who were provided with such feedback (basically, graphical displays of the number of defects detected vs time) were able to make better (i.e. more timely) staffing decisions.
- Use calibrated model-based tools and guidelines: The authors suggest using decision support and forecasting tools to guide project decision-making. They warn that these tools should be calibrated to the specific industry and environment.
- Set goals based on behaviours rather than performance: Most project managers are assessed on their performance – i.e. the success of their projects. Instead, the authors suggest setting goals that promote specific behaviours. An example of such a goal might be the reduction of team attrition. Such a goal would ensure that project managers focus on things such as promoting learning within the team, protecting their team from schedule pressure etc. This, the logic goes, will lead to better team cohesion and morale, ultimately resulting in better project outcomes.
- Use project simulators: Project simulations provide a safe environment for project managers to hone their skills and learn new ones. The authors cite a case where the introduction of project simulation games significantly improved the performance of managers on projects, and also led to a better understanding of dynamic relationships in complex environments.
Although many of the problems (e.g. inaccurate estimates) and solutions (e.g. the use of simulation and decision support software) discussed in the article aren’t new, the authors present an interesting and thought-provoking study of the apparently widespread failure of project managers to learn from experience. However, for reasons I now outline, I believe their case may be somewhat overstated.
Regarding the research methodology, I believe their reliance on simulations limits the strength, if not the validity, of their claims. More on this below:
- Having participated in project simulations before, I can say that simulators cannot simulate (!) important people-related factors which are always present in a real project environment. These include factors such as personal relationships and ill-defined but important notions such as organisational culture. In my experience, project managers always have to take these into account when making project decisions.
- Typically many of the important factors on real projects are “fuzzy” and have complex dependencies that are hard to disentangle. Simulations are only as good as the models they use, and these factors are hard to model.
On the solutions recommended by the authors:
- I’m somewhat sceptical about the use of software tools to support decision making. In my experience, decision support tools require a fair bit of calibration, practice and (good) data to be of any real use. By the time one gets them working, one usually has a good handle on the problem anyway. They’re also singularly useless when extrapolated to new situations – and projects (almost by definition) often involve new situations.
- Setting behavioural goals is nice in theory, but I’m not sure how it would work in practice. Essentially I have a problem with how a project manager’s performance will be measured against such goals. The causal connection between a behavioural goal such as “reduce team attrition” and improved project performance is indirect at best.
- Regarding simulators as training tools, I have used them and have been less than impressed. It is very easy to make a “wrong” decision on a simulator because information has been hidden from you. In a real-life situation, a canny project manager would find ways to gather the information he or she needs to make an informed decision, even if this is hard to do. Typically, this involves using informal communication modes and generally keeping an ear to the ground. The best project managers excel at this.
So, in closing, I think the authors have a point about the disconnect between project management practice and learning at the level of an individual project manager. However, I believe their thesis is somewhat diluted because it is based on the results of simulated project games which are limited in their ability to replicate complex, people-related issues that are encountered on real projects.

