Eight to Late

Sensemaking and Analytics for Organizations

Archive for the ‘Project Management’ Category

A new perspective on risk analysis in projects

with 2 comments

Introduction

Projects are, by definition, unique endeavours. Hence it is important that project risks be analysed and managed in a systematic manner. Traditionally, risk analysis in projects – or any other area – focuses on external events. In a recent paper entitled The Pathogen Construct in Risk Analysis, published in the September 2008 issue of the Project Management Journal, Jerry Busby and Hongliang Zhang articulate a fresh perspective on risk analysis in projects. They argue that the analysis of external threats should be complemented by an understanding of how internal decisions and organisational structures affect risks. What’s really novel, though, is their use of metaphor: they characterise these internal sources of risk as pathogens. Below I explore their arguments via an annotated summary of their paper.

What’s a risk pathogen?

“Risk,” the authors state, “is a statistical concept of events that happen to someone or something.” Traditional risk analysis concerns itself with identifying risks, determining the probability of their occurrence, and finding ways of dealing with them. Risks are typically considered to be events that are external to an organisation. This approach has its limitations because it does not explicitly take into account the deficiencies and strengths of the organisation. For example, a project may be subject to risk due to the use of an unproven technology. When the risk materialises, one has to ask why that particular technology was chosen. There could be several reasons for this, each obviously flawed only in hindsight. Some reasons may be: a faulty technology selection process, over-optimism, decision makers’ fascination with new technology or some other internal predisposition. Whatever the case, the conditions that led to the choice of technology existed prior to the event that triggered the failure. The authors label such preexisting conditions pathogens. In the authors’ words, “At certain times, external circumstances combine with ‘resident pathogens’ to overcome a system’s defences and bring about its breakdown. The defining aspect of these metaphorical pathogens is that they predate the conditions that trigger the breakdown, and are generally more stable and observable.”

It should be noted that the pathogen tag is subjective – that is, one party might view a certain organisational predisposition as pathogenic whereas another might view it as protective. To illustrate using the above example – management might view a technology as unproven, whereas developers might view it as offering the company a head start in a new area. Perceptions determine how a “risk” is viewed: different groups will select particular risks for attention, depending on their cultural affiliations, background, experience and training. Seen in this light, the subjectivity of the pathogen label is reasonable, if not obvious. In the paper, the authors examine risk pathogens in projectised organisations, with particular focus on the subjectivity of the label (i.e. different perceptions of what is pathogenic). Why is this important? The authors note that in their studies, “the most insidious kind of risk to a project – the least well understood and potentially the most difficult to manage if materialised – was the kind that involved contradictory interpretations.” These contradictory interpretations must be recognised and addressed by risk analysis; else they will get in the way of dealing with risks that become reality.

The authors use a case-study-based approach, examining a mix of projects drawn from the UK and China. In order to accentuate the differences between pathogenic and protective perspectives of “pathogens”, the selected projects had both public and private sector involvement. In each of the projects, the following criteria were used to identify pathogens. A pathogen

  • Is the cause of an identifiable adverse organisational effect.
  • Is created by social actors – a practice, decision or contract, rather than an intrinsic vulnerability.
  • Exists prior to the problem – i.e. it predates the triggering event.
  • Becomes a problem (or is identified as a problem) only after the triggering event.

The authors claim that in all the cases studied, the pathogen was easily identifiable. Further, it was also easy to identify contradictory interpretations (protective behaviour) made by other parties. As an example, in a government benefits card project, requirements were formulated only at a high level (pathogen). As a consequence, the project could not be planned properly (triggering event). This led to poor developer performance and time/cost overruns (effect). The ostensible reason for doing requirements only at a high level was to save time and cost in the bidding process (protective interpretation). Another protective interpretation was that detailed requirements would straitjacket the development team and preclude innovation. Note that the adaptive (or protective) interpretation refers to a risk other than the one that actually occurred. This is true of all the examples listed by the authors – in all cases the alternate interpretation refers to a risk other than the one that occurred, implying that the risk that actually occurred was somehow overlooked or ignored in the original risk analysis. It is interesting to explore why this happens, so I’ll jump straight to the analysis and discussion, referring the reader to the paper for further details on the case studies.

Analysis and Discussion

From an analysis of their data, the authors suggest three reasons why a practice that is seen as adaptive, might actually end up being pathogenic:

  • Risks change with time, and managing risk at one time cannot be separated from managing it at another. For example, a limited-scale pilot project may be done on a shoestring budget (to save cost). A successful pilot may be seen as protective in the sense that it increases confidence that the project is feasible. However, because of the limited scope of the pilot, it may overlook certain risks that are triggered much later in the project.
  • Risks are often interdependent – i.e. how one risk is addressed may affect another risk in an adverse manner (e.g. increase the probability of its occurrence)
  • The stakeholders in a project do not have unrestricted choices on how they can address risks. There are always constraints (procedural or financial, for example) which restrict options on how risks can be handled. These constraints may lead to decisions that affect other risks negatively.

I would add another point to this list:

  • Stakeholders do not always have all the information they need to make informed decisions on risks. As a consequence, they may not foresee the pathogenic effect of their decisions. The authors allude to this in the paper, but do not state it as an explicit point. In their words, “Being engaged in a particular stage of a project selects certain risks for a project manager’s attention, and the priority becomes dealing with these risks rather than worrying about how widely the way of dealing with them will ramify into other stages of the project.”

The authors then discuss the origins of subjectivity on whether something is pathogenic or adaptive. Their data suggests the following factors play an important role in how a stakeholder might view a particular construct:

  • Identity: This refers to the roles people play on projects. For example, a sponsor might view a quick requirements gathering phase as protective, in that it saves time and money; whereas a project manager or developer may view it as pathogenic, as it could lead to problems later.
  • Expectations of blame: It seems reasonable that stakeholders would view factors that cause outcomes that they may be blamed for as pathogenic. As the authors state, “Blameworthy events become highly specific risks to an individual and the origin of these events – whether practices, artefacts or decisions – become relevant pathogens.” The authors also point out that the expectation of blame plays a larger role in projectised organisations – where project managers are given considerable autonomy – compared to functional organisations where blame may be harder to apportion.

Traditional risk analysis, according to the authors, focuses on face-value risks – i.e. on external threats – rather than the subjective interpretations of these risks by different stakeholders. To quote, “…problematic events become especially intractable because actors’ interpretations of risk are contradictory.” These contradictory interpretations are easy to understand in the light of the discussion above. This then raises the question: how does one deal with this subjectivity of risk perception? The authors offer the following advice, combining elements of traditional risk analysis with some novel suggestions:

  • Get the main actors (or stakeholders) to identify the risks (as they perceive them), analyse them and come up with mitigation strategies.
  • Get the stakeholders to analyse each other’s analyses, looking for contradictory interpretations of factors.
  • Get the stakeholders together to explore the differences in interpretation, particularly from the perspective of whether:
    • These differences will interfere with management of risks as they arise.
    • There are ways of managing risks that avoid creating problems for other risks.

They suggest that it is important to avoid seeking consensus, because consensus invariably results in compromises that are sub-optimal from the point of view of managing multiple risks.

I end this section with a particularly apposite quote from the paper, “At some point the actors need to agree on how to get on with the concrete business of the project, but they should be clear not only about the risks this will create for them, but also the risks it creates for others – and the risks that will come from others trying to manage their risks.” That, in a nutshell, is the message of the paper.

Conclusion

The authors use the metaphor of a pathogen to describe inherent organisational characteristics or factors that become “harmful” or “pathogenic” when certain risks are triggered. The interpretations of these factors are subjective, in that one person’s “pathogen” may be another person’s “protection”. Further, a factor that offers protection at one stage of a project may in fact become pathogenic at a later stage. Such contradictory views must be discussed in an open manner in order to manage risks effectively.

Although the work is based on relatively few data points,  it offers a novel perspective on the perception of risks in projects.  In my opinion the paper is well written, interesting and well worth a read for academics, consultants and project managers.

References:

Busby, J. & Zhang, H. (2008). The Pathogen Construct in Risk Analysis. Project Management Journal, 39(3), 86–96.

Written by K

November 10, 2008 at 9:27 pm

Management games

with one comment

It is an unfortunate fact of corporate life that management is sometimes practiced as a series of games between the manager and the managed (with the odds stacked against the latter, of course).  In this post I list some of the more common games I have witnessed over time. As with all games, it is useful to know the ground rules before proceeding. In this case it’s simple because there’s only one: the manager always wins. Now that the ground rule is set, let the games begin…

Two cents up: Some managers feel obliged to contribute to any and every discussion – even those involving  topics they know nothing about. These gents (and ladies) are professional players of the game of Two Cents Up. The game is played as follows: contribute your two cents (or equivalent in any other currency) to all discussions. There is no limit on the number of turns, and at the end  of the discussion you  simply tot up  your  contributions to get your net score.  In case it isn’t clear, only managers get a turn. Expert players of this game routinely end up with several dollars worth of pointless contributions.

Now I delegate; now I don’t: This is essentially a game of delegation peekaboo. The manager delegates responsibility to an employee then, a little while later, takes it back. Then, later still, delegates again, and so on. The game can be played through several such delegation-undelegation cycles, driving the subordinate to responsibility uncertainty: a state where the subordinate knows not what he or she is (or isn’t) responsible for. The best exponents of this game can ensure that nothing ever gets done, because no one on the team (the manager included) knows who is responsible for making decisions.

The second guess: This game is the favourite of managers who find it hard to delegate real responsibility to their subordinates. They delegate only when forced to (by their managers), but then constantly second guess decisions made by the delegatee. As per the Merriam-Webster definition of second-guess, the game can be played at two levels: a) criticise decisions when they are made and then b) criticise them again after the result of the decision is known. Two bites of the cherry! What more could a second-guesser want? No, no… don’t bother answering that.

My way: This is the management version of the well-known children’s adage: he who owns the ball makes the rules. In the grown-ups’ game the manager insists on doing things his or her way, riding roughshod over his team’s opinions or advice. The best way to sum up this game is through the (edited) lyrics of the eponymous song:

I’ll plan each charted course;
Each careful step along the byway,
But more, much more than this,
We’ll do it my way.

A more cut-throat version of the game is called my way or the highway – a cliche that nicely sums up what happens to those who choose not to follow the leader.

Bolt from the blue: This game is invoked by some managers when their  opinion is challenged by an employee with a well thought out, irrefutable case.  Just when the employee reckons the manager is about to concede, the manager invokes a bolt from the blue: a statement that has no relevance to the discussion, but serves as an effective distractor to confuse his opponent (sorry, I mean, employee). Here’s an almost true example from real life:

Ben – “So, from the evaluation, I think we can safely conclude that Oracle is a better option than SQL Server for this project.”

Manager – “Maybe so, but have you considered using SOA…”

This non sequitur usually results in game, set and match to the manager.

Leap of logic: This game is an insidious variant of the previous one. Like the bolt from the blue, the leap of logic is aimed at distracting the employee. However, it is harder to tackle a leap of logic because the argument isn’t as obviously unrelated to the discussion as the bolt from the blue. Illustrating the leap of logic using the previous example, the manager’s response to Ben might be:

Manager – “Ah, but what about non-relational databases…”

Brilliant! Although the manager is ostensibly talking about databases, he is really spouting nonsense. Ben’s  gobsmacked, and doesn’t know where to begin refuting the point.

Picking nits: This game is played when the manager wants to find fault with work done. It’s an axiom that nothing’s perfect, so one can always find things that haven’t been done right. Some managers are specialist nitpickers – expressing great creativity in finding so-called errors or problems with the work done. Like the first game described in this post, this one can be scored, too. The scoring works as follows: a point per nit picked. At the risk of stating the obvious: only the manager can score.

Although management games are common in corporate settings, they aren’t particular to the business world. Games such as these are played out every day in organisations ranging from government bureaucracies to universities. I should caution my readers that the foregoing listing is far from comprehensive – it is but a small list of the more common games that one might encounter. No doubt, other games (and variants of the ones I’ve described) exist, and still more are being invented by creative managers. Please feel free to add in management games that you have come across – if they’re good you might even score a point or two.

Written by K

November 3, 2008 at 11:14 pm

A note on bias in project management research

with 8 comments

Project management research relies heavily on empirical studies – that is, studies that are based on observation of reality. This is necessary because projects are coordinated activities involving real-world entities:  people, teams and organisations.  A project management researcher can theorise all he or she likes, but the ultimate test of any theory is, “do the hypotheses agree with the data?”  In this, project management is no different from physics: to be accepted as valid, any theory must agree with reality. In physics (or any of the natural sciences), however, experiments can be carried out in controlled conditions that ensure objectivity and the elimination of any extraneous effects or biases. This isn’t the case in project management (or for that matter any of the social sciences). Since people are the primary subjects of study in the latter, subjectivity and bias are inevitable. This post delves into the latter point with an emphasis on project management research.

From my reading of several project management research papers, most empirical studies in project management proceed roughly as follows:

  1. Formulate hypotheses based on observation and/or existing research.
  2. Design a survey based on the hypotheses.
  3. Gather survey data.
  4. Accept or reject the hypotheses based on statistical analysis of the data.
  5. Discuss and generalise.
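Step 4 is where the statistics come in. As a hedged illustration of what “accept or reject the hypotheses” might look like in practice, here is a two-proportion z-test on made-up survey counts – the scenario, numbers and significance threshold are all my own inventions, not drawn from any study:

```python
from statistics import NormalDist

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two survey proportions.
    Returns the z statistic and the p-value."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)             # pooled proportion under H0
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5  # standard error of difference
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))               # two-sided p-value
    return z, p_value

# Hypothetical survey: 48 of 80 projects of one type succeeded vs 30 of 75 of another
z, p = two_proportion_z_test(48, 80, 30, 75)
print(f"z = {z:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Reject H0: the difference in success rates is statistically significant")
```

Of course, the test is only as good as the data fed into it – which is precisely the problem discussed next.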

Survey data plays a crucial role in empirical project management studies. This raises the question: Do researchers account for bias in survey responses? Before proceeding, I’d like to clarify the question with an example. Assume I’m a project manager who receives a research survey asking questions about my experience and the kinds of projects I have managed. What’s to stop me from inflating my experience and exaggerating the projects I have run? Answer: Nothing! Now, assuming that a small (or, possibly, not so small) percentage of project managers targeted by research surveys stretch the truth for whatever reason, the researcher is going to end up with data that is at least partly garbage. Hence the italicised question that I posed at the start of this paragraph.

The tendency of people to describe themselves in a positive light is referred to as social desirability bias. It is difficult to guard against, even if the researcher assures respondents of confidentiality and anonymity in analysis and reporting. Clearly this is more of a problem when surveys are used for testing within an organisation: respondents may fear reprisals for being truthful. In this connection William Whyte made the following comment in his book The Organization Man, “When an individual is commanded by an organisation to reveal his innermost feelings, he has a duty to himself to give answers that serve his self-interest rather than that of The Organization.” Notwithstanding this, problems remain even with external surveys. The bias is lessened by anonymity, but doesn’t completely disappear. It seems logical that people will be more relaxed with external surveys (in which they have no direct stake), more so if they are anonymous. However, one cannot be completely certain that responses are bias-free.

Of course, researchers are aware of this problem, and have devised techniques to deal with it. The following methods are commonly used to reduce social desirability bias:

  1. The use of scales, such as the Marlowe-Crowne social desirability scale, to determine the susceptibility of respondents to social desirability bias. These scales are based on responses to questions that represent behaviours which are socially deemed as desirable, but at the same time very unlikely. It’s a bit hard to explain; the best way to understand the concept is to try this quiz. A recognised limitation of such scales is that they do not distinguish between genuine differences and bias. Many researchers have questioned the utility of such scales on other grounds as well – see this paper, for example.
  2. The use of forced choice responses – where respondents are required to choose between different scenarios rather than assigning a numerical (or qualitative) rating to a specific statement. In this case, survey design is very important as the choices presented need to be well-balanced and appropriately worded. However, even with due attention to design, there are well-known problems with forced choice response surveys (see this paper abstract, for example).
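To make the first technique concrete, here is a toy sketch of how such a scale is scored. The items and keyed directions below are invented for illustration – they are not the actual Marlowe-Crowne items – but the scoring principle (count answers in the socially desirable but unlikely direction) is the same:

```python
# Toy social-desirability screen, in the spirit of Marlowe-Crowne-style scales.
# Items pair a statement with the response in the socially desirable direction.
ITEMS = [
    ("I have never been irritated when people expressed very different ideas.", True),
    ("I am always courteous, even to people who are disagreeable.", True),
    ("I sometimes feel resentful when I don't get my way.", False),
    ("There have been occasions when I took advantage of someone.", False),
]

def desirability_score(responses):
    """Count answers that match the socially desirable (but unlikely) direction.
    A high score flags a respondent whose other survey answers may be biased."""
    return sum(1 for (_, keyed), answer in zip(ITEMS, responses) if answer == keyed)

# Hypothetical respondent claiming the flattering answer on every item
score = desirability_score([True, True, False, False])
print(score)  # → 4 (the maximum: suggests susceptibility to the bias)
```

The limitation noted in point 1 is visible even in this toy version: a genuinely saintly respondent and a biased one produce the same maximum score.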

It appears that social desirability bias is hard to eliminate, though with due care it can be reduced. As far as I can tell (from my limited reading of project management research), most researchers count on guaranteed anonymity of survey responses as being enough to control this bias. Is this good enough? Maybe it is, maybe not: academics and others are invited to comment.

Written by K

October 22, 2008 at 9:16 pm

Posted in Bias, Project Management
