Enumeration or analysis? A note on the use and abuse of statistics in project management research
In a detailed and insightful response to my post on bias in project management research, Alex Budzier wrote, “Good quantitative research relies on Theories and has a sound logical explanation before testing something. Bad research gets some data throws it to the wall (aka correlation analysis) and reports whatever sticks.” I believe this is a very important point: a lot of current research in project management uses statistics in an inappropriate manner, using the “throwing data on a wall” approach that Alex refers to in his comment. Often researchers construct models and theories based on data that isn’t sufficiently representative to support their generalisations.
This point is the subject of a paper entitled, On Probability as a Basis for Action, published by W. Edwards Deming in 1975. In the paper, Deming makes the important distinction between enumerative and analytic studies. The basic difference between the two is that analytic studies are aimed at establishing cause and effect based on data (i.e. building theories that explain why the data is what it is), whereas enumerative studies are concerned with classification (i.e. categorising data). In this post I delve into the use (or abuse) of statistics in project management research, with particular reference to enumerative and analytic studies. The discussion presented below is based on Deming’s paper and a very readable note by David and Sarah Kerridge.
Some terminology before diving into the discussion: Deming uses the notion of a frame, which he defines as an aggregate of identifiable physical units of some kind, any or all of which may be selected and investigated. In short: the aggregate of potential samples.
So what’s an enumerative study? In his paper, Deming defines it as one in which, “…action will be taken on the material in the frame studied…The aim of a study in an enumerative problem is descriptive. How many farms or people belong to this or that category? What is the expected out-turn of wheat for this region? How many units in the lot are defective? The aim (in the last example) is not to find out why there are so many or so few units in this or that category: merely how many.”
In contrast, an analytic study is one “in which action will be taken on the process or cause-system that produced the frame studied, the aim being to improve practice in the future…Examples include comparison of two industrial processes A and B. (Possible) actions: adopt method B over method A, or hold on to A, or continue the experiment (gather more data).”
Deming also provides a criterion by which to distinguish between enumerative and analytic studies. To quote from the paper, “A 100 percent sample of the frame provides the complete answer to the question posed for the enumerative problem, subject to the limitations of the method of investigation. In contrast a 100 percent sample of the frame is inconclusive in an analytic problem.”
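Deming’s criterion is easy to demonstrate with a small simulation. The sketch below (in Python, with entirely made-up data) asks both kinds of question of the same frame: the enumerative count is settled completely by a 100 percent census, whereas the analytic comparison of two delivery methods remains an inference about the process that produced the frame, which no amount of sampling from the frame can settle on its own.

```python
import random
from statistics import mean

random.seed(42)

# The frame: every project in a (fictitious) organisation, each tagged
# with the delivery method used and its schedule overrun in weeks.
frame = [{"method": random.choice("AB"),
          "overrun": random.gauss(4.0, 2.0)} for _ in range(500)]

# Enumerative question: how many projects used method B?
# A 100 percent census of the frame answers this completely.
count_b = sum(1 for p in frame if p["method"] == "B")
print(f"Projects using method B: {count_b}")

# Analytic question: does method B lead to smaller overruns?
# Even with the full frame in hand, the difference below merely
# describes these projects; extrapolating it to future projects or
# other environments needs subject-matter knowledge, not more of
# the same data.
mean_a = mean(p["overrun"] for p in frame if p["method"] == "A")
mean_b = mean(p["overrun"] for p in frame if p["method"] == "B")
print(f"Observed difference (A - B): {mean_a - mean_b:.2f} weeks")
```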
It may be helpful to illustrate the above via project management examples. A census of tools used by project managers is an enumerative problem: sampling the entire population of project managers provides a complete answer. In contrast, building (or validating) a model of project manager performance is an analytic study: it is not possible, even in principle, to verify the model under all circumstances. To paraphrase Deming: there is no statistical method by which to extrapolate the validity of the model to other project managers or environments. This is the key point. Statistical methods have to be complemented by knowledge of the subject matter – in the case of project manager performance this may include organisational factors, environmental effects, work history and experience of project managers etc. Such knowledge helps the investigator design studies that cover a wide range of circumstances, paving the way for generalisations necessary for theory building. Basically, the sample data must cover the entire range over which generalisations are made. What this means is that the choice of samples depends on the aim of the study. The Kerridges offer some examples in their note, which I reproduce below:
Aim: Discover problems and possibilities, to form a new theory.
Method: Look for interesting groups, where new ideas will be obvious. These may be focus groups, rather than random samples. Accuracy and rigour aren’t required. But this assumes that the possibilities discovered will be tested by other means, before making any prediction.

Aim: Predict the future, to test a general theory.
Method: Study extreme and atypical samples, with great rigour and accuracy.

Aim: Predict the future, to help management.
Method: Get samples as close as possible to the foreseeable range of circumstances in which the prediction will be used in practice.

Aim: Change the future, to make it more predictable.
Method: Use statistical process control to remove special causes, and experiment using the PDSA cycle to reduce common cause variation.
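The last method in the list can be made concrete. Below is a minimal Python sketch of statistical process control using the textbook individuals (XmR) chart: the centre line is the mean of the series, and the natural process limits are the mean plus or minus 2.66 times the average moving range. Points outside the limits signal special causes. The data and function names are illustrative, not drawn from Deming’s paper or the Kerridges’ note.

```python
from statistics import mean

def xmr_limits(series):
    """Individuals (XmR) control chart limits.

    Centre line is the mean; natural process limits are the mean
    +/- 2.66 times the average moving range (the standard XmR
    constant, equivalent to 3 sigma for an individuals chart).
    """
    centre = mean(series)
    moving_ranges = [abs(a - b) for a, b in zip(series, series[1:])]
    spread = 2.66 * mean(moving_ranges)
    return centre - spread, centre, centre + spread

def special_causes(series):
    """Indices of points outside the natural process limits."""
    lo, _, hi = xmr_limits(series)
    return [i for i, x in enumerate(series) if x < lo or x > hi]

# Weekly defect counts for a hypothetical development process.
defects = [4, 5, 3, 6, 4, 5, 4, 14, 5, 4, 3, 5]
print(xmr_limits(defects))       # limits bracketing routine variation
print(special_causes(defects))   # the spike in week 7 is flagged
```

Only the flagged points warrant a hunt for a special cause; the remaining variation is common-cause, and reducing it is what the PDSA experimentation cycle is for.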
Unfortunately, many project management studies that purport to build theories do not exercise appropriate care in study design. The typical offence is that the samples used in the studies do not support the generalisations made. The resulting theories are thus built on flimsy empirical foundations. To be sure, most offenders label their studies as preliminary (other favoured adjectives include exploratory, tentative, initial, etc.), thereby absolving themselves of responsibility for their irresponsible speculations. That would be OK if such work were followed up by a thorough empirical study, but it often isn’t. I’m loath to point fingers at specific offenders, but readers will find an example or two amongst papers reviewed on this blog. Lest I be accused of making gross and unfair generalisations, I should hasten to add that the reviews also include papers in which statistical analysis is done right (I’ll leave it to the reader to figure out which ones these are…).
To sum up: in this post I’ve discussed the difference between enumerative and analytic studies and its implications for the validity of some published project management research. Enumerative statistics deals with counting and categorisation, whereas analytic studies are concerned with clarifying cause-effect relationships. In analytic work, it is critical that samples are chosen to reflect the stated intent of the work, be it general theory-building or prediction in specific circumstances. Although this distinction should be well understood (having been articulated clearly over a quarter of a century ago!), it appears that it isn’t always given due consideration in project management research.
A roadmap to agility
Many corporate IT shops use big design up-front methodologies to guide their internal software development projects. Generally, IT decision makers seem reluctant to trial iterative/incremental approaches, which have proven their worth in diverse development environments. The best known amongst these techniques are the ones based on agile development principles. “Agile principles are OK for software development houses,” say these managers, “but they’ll never work in the corporate world.” I don’t quite agree with this because I’ve had some minor successes in using agile principles (continual customer collaboration, for instance) within corporate IT environments. However – and I freely admit it – my efforts have been piecemeal and somewhat ad-hoc. Now, finally, help is at hand for those who have wondered how they might “add agility” to their development processes: A book entitled Becoming Agile…in an imperfect world, by Greg Smith and Ahmed Sidky, shows how non-agile development environments can be transformed through a gradual adoption of agile techniques. This post is an extensive review of the book.
I should add a caveat before proceeding any further: this review is written from the perspective of a development manager / team lead working in corporate IT – for no better reason than it’s what I do at present. That said, I hope there’s enough detail and commentary for it to be of interest to those working in other environments too.
The book begins with a story about a mining rescue, which provides an excellent illustration of agile principles in practice. The analogy is apt because, to be successful, any rescue effort must be collaborative (must involve many people with diverse skills), adaptive (must be responsive to changes in conditions) and, above all, must produce results (those trapped must be rescued unharmed). Traditional project management, with its insistence on complete, up-front requirements analysis and inflexibility to change would be hopelessly inappropriate for any rescue effort. Why? Because one cannot know a priori what might lead to a successful rescue – it is a complex process that unfolds and evolves with time. Similarly, as Frederick Brooks emphasised more than 20 years ago, software development is intrinsically complex. What makes it so is the in-principle impossibility of obtaining and assimilating user requirements upfront. This is the essential difference between – say – a construction project and a software development effort. Recent research on project complexity suggests that agile techniques offer the best hope of dealing with this complexity. The essential advantage conferred by agile processes is the built-in adaptability to change via iterative development and continual customer involvement. In the end, this is what enables development teams to build applications that customers really want. An obvious corollary – if it needs to be stated at all – is that the adoption of agile techniques provides demonstrable business value. This is important if one wants to get management buy-in for a move to agility.
The book provides a roadmap for software development teams that want to improve their agility. Although the authors claim they do not favour a specific methodology, much of their discussion is based on Scrum. There’s nothing wrong with this per se, but I believe it is more important to focus on principles (or intent) behind the practices rather than the practices themselves. Folks working in corporate IT environments would have a better chance of introducing agility into their processes by adopting principles (or ways of working) gradually, rather than attempting to introduce a specific methodology wholesale – the latter approach being much too radical for the corporate world. The book also lists some common “roadblocks to agility” and a brief discussion of how these can be addressed. The authors emphasise that the aim should be to create a customised agile development process that is tailored to the needs of the organisation. Furthermore, instead of aiming for “agile perfection”, one should aim at reaching the right level of agility for one’s organisation. Excellent advice!
The path to agility, as laid out in the book, is as follows:
- Assessment: evaluating current processes and developing a path to agility. Following Boehm and Turner, the authors suggest that upfront analysis be done to identify mismatches between organisational culture / practices and the agile techniques the organisation wishes to adopt. A proper assessment will help identify mismatches (or risks) associated with the transition. The book also provides a link to an online readiness assessment (registration required!). The assessments are to be provided in an appendix to the book. However, the review draft I received did not have this appendix, so I can’t comment on the utility of the tool.
- Getting buy-in: Introducing an agile methodology is impossible without management support. One needs to make a case for this upfront. The authors note that the move to agility should be undertaken only if there are demonstrable benefits for the company. When canvassing support, the costs, benefits (for the company and management) and risks must be clearly articulated in a business case for the migration to agile practices. The book provides some examples of each.
- Understanding current processes and modifying them appropriately: The authors emphasise that one needs to understand one’s existing processes thoroughly before attempting to change them. Only when this is done can one determine which processes would benefit the most from change. The basic idea here is to make one’s processes as agile as possible, within organisational and other constraints. Transplanting another organisation’s processes into one’s environment is unlikely to work. The book outlines how organisations can develop customised processes suited to their specific environments. I found the book’s case-study-based approach very helpful, as it provided a grounded example of how a company might approach the transition. In cases where companies have no pre-existing processes (or completely dysfunctional processes), the authors suggest starting with a packaged agile methodology such as Scrum.
- Piloting the new process: The new processes have to be tested on a real project. The authors recommend doing a pilot project using the new methodology. Much of the book is dedicated to discussing a case study of a pilot project in a fictitious organisation. The discussion is useful because it highlights common issues that any organisation might face in using agile processes for the first time. The pilot project is a useful vehicle to illustrate how feasibility studies, estimation and planning, iterative development, release and delivery work in an agile environment. I really liked this approach as it provided a grounded context to the principles.
- Retrospective: A retrospective or post-mortem offers the opportunity to improve the development process. Unfortunately, post-mortems are rarely done right. The book offers excellent advice on planning retrospectives. The basic idea: improve the process, don’t dissect the specific project.
Of course, achieving agility is more than modifying or adopting processes – it involves changing organisational culture as well. One of the main cultural obstacles is the command and control management style that is so prevalent in the corporate world. Another cultural issue is the lack of communication across organisational functions. The book provides advice on how to engender an agile culture within an organisation. Essentially, executives must endorse agile principles, line managers need to become coaches rather than supervisors, and teams need to adapt and adopt agile practices. Another characteristic of an agile culture is that teams are empowered to make their own decisions. This can be a challenge for managers and teams attuned to working in corporate IT environments that subscribe to the command and control approach.
The authors recommend engaging consultants to help with the transition to agility, but I think organisations may be better served by honest self-evaluation first, followed by the development of an action plan. The action plan (in true agile fashion!) must be developed collaboratively, by involving all stakeholders who will be affected by the transformation. Books (such as the one being reviewed) and training courses can help one along the way, but there’s really no substitute for introspection and change from within. On a related note, the book mentions that agile teams should be composed of generalists – people with a broad range of technical skills. Corporate IT teams, on the other hand, tend to be made up of specialists. The authors point out that this can be a barrier to agility, but not one that is insurmountable.
Finally, the authors use the Technology Adoption Cycle to illustrate the difficulties of moving to an enterprise wide adoption of agile techniques. Given the huge culture change involved, they recommend an evolutionary transition to agile processes. In this connection, the authors identify five levels of agility: Collaborative, Evolutionary, Integrated, Adaptive and Encompassing, and recommend that enterprises progress through each of these steps on their way to agility nirvana. The book presents a chart outlining what each level of agility entails (see this article for more). This approach enables the organisation (and people involved) to “digest and assimilate” the changes in bite-sized pieces. The really good news is that the lower levels of agility are eminently achievable, as they emphasise agile principles such as customer collaboration and evolutionary (iterative) development, whilst placing no great demands on technical skills. This puts agility within reach of most organisations. So if you work in a non-agile environment, you may want to consider getting yourself a copy of the book as a first step towards becoming agile.
References:
Greg Smith and Ahmed Sidky, Becoming Agile…in an imperfect world, Manning Publications, Manning Early Access release, Sep 2007; softbound print release, Feb 2009 (est).
A new perspective on risk analysis in projects
Introduction
Projects are, by definition, unique endeavours. Hence it is important that project risks be analysed and managed in a systematic manner. Traditionally, risk analysis in projects – or any other area – focuses on external events. In a recent paper entitled, The Pathogen Construct in Risk Analysis, published in the September 2008 issue of the Project Management Journal, Jerry Busby and Hongliang Zhang articulate a fresh perspective on risk analysis in projects. They argue that the analysis of external threats should be complemented by an understanding of how internal decisions and organisational structures affect risks. What’s really novel, though, is their use of metaphor: they characterise these internal sources of risk as pathogens. Below I explore their arguments via an annotated summary of their paper.
What’s a risk pathogen?
“Risk,” the authors state, “is a statistical concept of events that happen to someone or something.” Traditional risk analysis concerns itself with identifying risks, determining the probability of their occurrence, and finding ways of dealing with them. Risks are typically considered to be events that are external to an organisation. This approach has its limitations because it does not explicitly take into account the deficiencies and strengths of the organisation. For example, a project may be subject to risk due to the use of an unproven technology. When the risk becomes obvious, one has to ask why that particular technology was chosen. There could be several reasons for this, each of which looks flawed only in hindsight. Some reasons may be: a faulty technology selection process, over-optimism, decision makers’ fascination with new technology or some other internal predisposition. Whatever the case, the conditions that led to the choice of technology existed prior to the event that triggered the failure. The authors label such preexisting conditions pathogens. In the authors’ words, “At certain times, external circumstances combine with ‘resident pathogens’ to overcome a system’s defences and bring about its breakdown. The defining aspect of these metaphorical pathogens is that they predate the conditions that trigger the breakdown, and are generally more stable and observable.”
It should be noted that the pathogen tag is subjective – that is, one party might view a certain organisational predisposition as pathogenic whereas another might view it as protective. To illustrate using the above example – management might view a technology as unproven, whereas developers might view it as offering the company a head start in a new area. Perceptions determine how a “risk” is viewed: different groups will select particular risks for attention, depending on their cultural affiliations, background, experience and training. Seen in this light, the subjectivity of the pathogen label is reasonable, if not obvious. In the paper, the authors examine risk pathogens in projectised organisations, with particular focus on the subjectivity of the label (i.e. different perceptions of what is pathogenic). Why is this important? The authors note that in their studies, “the most insidious kind of risk to a project – the least well understood and potentially the most difficult to manage if materialised – was the kind that involved contradictory interpretations.” These contradictory interpretations must be recognised and addressed by risk analysis; else they will come in the way of dealing with risks that become reality.
The authors use a case-study-based approach, drawing on a mix of projects from the UK and China. In order to accentuate the differences between pathogenic and protective perspectives of “pathogens”, the selected projects had both public and private sector involvement. In each of the projects, the following criteria were used to identify pathogens. A pathogen
- Is the cause of an identifiable adverse organisational effect.
- Is created by social actors – such as a practice or a contract – rather than being an intrinsic vulnerability.
- Exists prior to the problem – i.e. it predates the triggering event.
- Becomes a problem (or is identified as a problem) only after the triggering event.
The authors claim that in all cases studied, the pathogen was easily identifiable. Further, it was also easy to identify contradictory interpretations (protective behaviour) made by other parties. As an example, in a government benefits card project, the formulation of requirements was done only at a high level (pathogen). The project could not be planned properly as a consequence (triggering event). This led to poor developer performance and time/cost overruns (effect). The ostensible reason for doing requirements only at a high level was to save time and cost in the bidding process (protective interpretation). Another protective interpretation was that detailed requirements would strait-jacket the development team and preclude innovation. Note that the adaptive (or protective) interpretation refers to a risk other than the one that actually occurred. This is true of all the examples listed by the authors – in all cases the alternate interpretation refers to a risk other than the one that occurred, implying that the risk that actually occurred was somehow overlooked or ignored in the original risk analysis. It is interesting to explore why this happens, so I’ll jump straight to the analysis and discussion, referring the reader to the paper for further details on the case studies.
Analysis and Discussion
From an analysis of their data, the authors suggest three reasons why a practice that is seen as adaptive, might actually end up being pathogenic:
- Risks change with time, and managing risk at one time cannot be separated from managing it at another. For example, a limited-scale pilot project may be done on a shoestring budget (to save cost). A successful pilot may be seen as protective in the sense that it increases confidence that the project is feasible. However, because of the limited scope of the pilot, it may overlook certain risks that are triggered much later in the project.
- Risks are often interdependent – i.e. how one risk is addressed may affect another risk in an adverse manner (e.g. increase the probability of its occurrence)
- The stakeholders in a project do not have unrestricted choices on how they can address risks. There are always constraints (procedural or financial, for example) which restrict options on how risks can be handled. These constraints may lead to decisions that affect other risks negatively.
I would add another point to this list:
- Stakeholders do not always have all the information they need to make informed decisions on risks. As a consequence, they may not foresee the pathogenic effect of their decisions. The authors allude to this in the paper, but do not state it as an explicit point. In their words, “Being engaged in a particular stage of a project selects certain risks for a project manager’s attention, and the priority becomes dealing with these risks rather than worrying about how widely the way of dealing with them will ramify into other stages of the project.”
The authors then discuss the origins of subjectivity on whether something is pathogenic or adaptive. Their data suggests the following factors play an important role in how a stakeholder might view a particular construct:
- Identity: This refers to the roles people play on projects. For example, a sponsor might view a quick requirements gathering phase as protective, in that it saves time and money; whereas a project manager or developer may view it as pathogenic, as it could lead to problems later.
- Expectations of blame: It seems reasonable that stakeholders would view as pathogenic those factors that cause outcomes they may be blamed for. As the authors state, “Blameworthy events become highly specific risks to an individual and the origin of these events – whether practices, artefacts or decisions – become relevant pathogens.” The authors also point out that the expectation of blame plays a larger role in projectised organisations – where project managers are given considerable autonomy – compared to functional organisations where blame may be harder to apportion.
Traditional risk analysis, according to the authors, focuses on face-value risks – i.e. on external threats – rather than the subjective interpretations of these risks by different stakeholders. To quote, “…problematic events become especially intractable because actors’ interpretations of risk are contradictory.” These contradictory interpretations are easy to understand in the light of the discussion above. This raises the question: how does one deal with the subjectivity of risk perception? The authors offer the following advice, combining elements of traditional risk analysis with some novel suggestions:
- Get the main actors (or stakeholders) to identify the risks (as they perceive them), analyse them and come up with mitigation strategies.
- Get the stakeholders to analyse each other’s analyses, looking for contradictory interpretations of factors.
- Get the stakeholders together, to explore the differences in interpretations particularly from the perspective of whether:
- These differences will interfere with management of risks as they arise.
- There are ways of managing risks that avoid creating problems for other risks.
They suggest that it is important to avoid seeking consensus, because consensus invariably results in compromises that are sub-optimal from the point of view of managing multiple risks.
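The cross-analysis step above can be sketched as a simple data exercise. The stakeholders, factors and labels in the Python snippet below are entirely hypothetical; the function merely surfaces factors that at least one party sees as pathogenic and another as protective – exactly the contradictory interpretations the authors say must be put on the table.

```python
# Each stakeholder labels each risk factor as "pathogenic" or "protective".
# (Hypothetical stakeholders and factors, for illustration only.)
interpretations = {
    "sponsor":   {"high-level requirements": "protective",
                  "fixed-price contract": "protective"},
    "developer": {"high-level requirements": "pathogenic",
                  "fixed-price contract": "pathogenic"},
    "pm":        {"high-level requirements": "pathogenic",
                  "fixed-price contract": "protective"},
}

def contradictory_factors(interps):
    """Return factors labelled both pathogenic and protective."""
    labels = {}
    for stakeholder, factors in interps.items():
        for factor, label in factors.items():
            labels.setdefault(factor, set()).add(label)
    return sorted(f for f, ls in labels.items() if len(ls) > 1)

print(contradictory_factors(interpretations))
# Both factors carry contradictory interpretations in this example.
```

The point of the exercise is not the code, of course, but the conversation it forces: each flagged factor is a difference in interpretation that needs to be explored before it interferes with managing the risk.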
I end this section with a particularly apposite quote from the paper, “At some point the actors need to agree on how to get on with the concrete business of the project, but they should be clear not only about the risks this will create for them, but also the risks it creates for others – and the risks that will come from others trying to manage their risks.” That, in a nutshell, is the message of the paper.
Conclusion
The authors use the metaphor of a pathogen to describe inherent organisational characteristics or factors that become “harmful” or “pathogenic” when certain risks are triggered. The interpretation of these factors is subjective, in that one person’s “pathogen” may be another person’s “protection”. Further, a factor that offers protection at one stage of a project may in fact become pathogenic at a later stage. Such contradictory views must be discussed in an open manner in order to manage risks effectively.
Although the work is based on relatively few data points, it offers a novel perspective on the perception of risks in projects. In my opinion the paper is well written, interesting and well worth a read for academics, consultants and project managers.
References:
Busby, Jerry & Zhang, Hongliang, The Pathogen Construct in Risk Analysis, Project Management Journal, 39 (3), 86-96, 2008.

