Eight to Late

Sensemaking and Analytics for Organizations

Project execution: efficiency versus learning

Most projects are subject to tight constraints. As a consequence, project teams are conditioned to focus on efficiency of project execution – i.e. to get things done within the least possible time, effort and expense.  In this post I explore another approach; one that emphasises learning over efficiency. Now I know this sounds somewhat paradoxical: we all know learning takes time – and time’s the one commodity that’s universally in short supply on projects. However, please read on – I hope to convince you that an emphasis on learning may actually improve efficiency. My discussion is based on a recent Harvard Business Review article entitled, The Competitive Imperative of Learning, in which the author, Professor Amy Edmondson, presents two perspectives on organisational execution, which she defines as “the efficient, timely, consistent production and delivery of goods or services.” The two perspectives are:  “execution as efficiency” and “execution as learning.”  The former emphasises getting the job done, whereas the latter underscores the importance of finding better ways to get the job done. Projects are organisations too – albeit temporary ones – so the two views of execution discussed in the article may be relevant to project environments.  This post discusses execution as efficiency vs. execution as learning in the context of project management.

Professor Edmondson compares and contrasts the two views of execution as follows:

Execution as efficiency                                    | Execution as learning
Leaders provide answers.                                   | Leaders set direction and articulate mission.
Employees follow directions.                               | Employees discover the way.
Optimal work processes are designed and set up in advance. | Tentative work processes are set up as a starting point.
New work processes are developed infrequently.             | Work processes are ever evolving and improving.
Feedback is one way (manager to team).                     | Two-way feedback is common.
Employees rarely exercise judgement and make decisions.    | Employees continually make important judgement-based decisions.

The reader will notice that the efficiency approach is rigid and very “top-down”, whereas the learning approach is flexible and, if not quite “bottom-up”, at least open to change. The remainder of this post discusses how the latter might work in a project environment.

In projects the focus is on getting the job done on time and on budget. This sometimes (often?) leads to micro-management of project execution, to the extent that team members are given detailed directions on how they should do their tasks. This corresponds to the efficiency approach. In contrast, the execution-as-learning approach recommends that project managers set the overall objectives and leave team members to find their own way to achieve them (within the parameters of scope, time and budget).

On a similar note, as I have written in a post on motivation, the best way to ensure that people remain engaged in their work is to give them autonomy – i.e. empower them to make decisions pertaining to their work. This is true both in (permanent) organisations and projects.  Many project managers are reluctant to  delegate responsibility to team members – and here I mean proper delegation, where team members are given responsibility and decision rights over all issues that come up in their work.  Granted, on some projects it may not be possible to delegate these rights entirely. Nevertheless, even in such cases it should still be possible to make decisions in a collaborative manner, with input from all affected parties.

In another post I pointed out that project management methodologies are sometimes implemented wholesale, without any regard to their appropriateness for a particular project. This corresponds to an execution-as-efficiency approach, where directions are followed without question. In contrast, an execution-as-learning approach is one in which processes are adopted and adapted as required. This is better because it uses only those processes that contribute to achieving a project’s objectives; anything more is recognised as bureaucratic overhead – good only for generating pointless documentation and wasting time. This applies not only to project management processes, but also to processes used in the creation of deliverables. This bit of common sense can be codified into a “principle of minimal process”, which may be stated as follows: one should not increase, beyond what is necessary, the number of processes used to achieve a particular end. This principle is akin to the principle of parsimony, or Occam’s Razor, in the sciences. Furthermore, in an execution-as-efficiency approach, processes, once established, rarely change. However, a project’s environment is always subject to change. In response, execution as learning recommends that processes be continually reviewed and tweaked, or even overhauled, as needed. What works well today may not work so well tomorrow. Bottom line: process is good as long as it contributes to getting the project done; anything that doesn’t should be discarded or fixed (i.e. improved).

An execution-as-efficiency approach downplays the need for communication because it is assumed that all processes are already running as efficiently as they possibly can. Communication in these environments tends to be one-way: from top to bottom. In contrast, in learning-oriented environments communication is a two-way process: those doing the work suggest process improvements to management and management, in turn, provides feedback. Two-way communication is therefore an important element of execution as learning in organisations. I’d argue this is especially the case for projects because – as all project managers know – change (in scope, timeline, budget or whatever) is inevitable.

To conclude: projects are organisations, albeit temporary ones. Therefore, principles and learnings from research on permanent organisations should be checked for potential applicability to project environments. With this in mind, it may be more productive to approach project execution with a learning mindset rather than a focus on efficiency. Of course, this is not new – proponents of agile techniques have long advocated such an approach; learning is at the heart of the agile manifesto. That said, I’d love to hear what you think; I look forward to your comments.

Written by K

December 6, 2008 at 11:08 am

Enumeration or analysis? A note on the use and abuse of statistics in project management research

In a detailed and insightful response to my post on bias in project management research, Alex Budzier wrote, “Good quantitative research relies on Theories and has a sound logical explanation before testing something. Bad research gets some data throws it to the wall (aka correlation analysis) and reports whatever sticks.” I believe this is a very important point: a lot of current research in project management uses statistics in an inappropriate manner, following the “throwing data at the wall” approach that Alex refers to in his comment. Often, researchers construct models and theories based on data that isn’t sufficiently representative to support their generalisations.
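Alex’s “whatever sticks” point can be made concrete with a small simulation. This is a hypothetical sketch (none of the data comes from a real study): generate a purely random project “outcome”, then test a large number of equally random candidate “success factors” against it and report the best correlation found. With a small sample and enough candidates, something always sticks.

```python
import random

# Hypothetical sketch: all data below is pure noise, invented for
# illustration. With a small sample and many candidate variables,
# some variable will correlate with the outcome purely by chance.
random.seed(42)  # make the sketch reproducible

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

n_projects = 20  # a typically small sample
outcome = [random.gauss(0, 1) for _ in range(n_projects)]  # pure noise

# 200 candidate "success factors", all unrelated to the outcome
correlations = [
    abs(pearson([random.gauss(0, 1) for _ in range(n_projects)], outcome))
    for _ in range(200)
]
best = max(correlations)
print(f"best |r| among 200 noise variables: {best:.2f}")
# The best chance correlation will usually look "significant",
# even though every variable here is unrelated to the outcome.
```

The remedy, as Alex suggests, is to state a theory first and design the sample to test it, rather than reporting the strongest correlation found after the fact.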

This point is the subject of a paper entitled On Probability as a Basis for Action, published by W. Edwards Deming in 1975. In the paper, Deming makes an important distinction between enumerative and analytic studies. The basic difference between the two is that analytic studies are aimed at establishing cause and effect based on data (i.e. building theories that explain why the data is what it is), whereas enumerative studies are concerned with classification (i.e. categorising data). In this post I delve into the use (or abuse) of statistics in project management research, with particular reference to enumerative and analytic studies. The discussion presented below is based on Deming’s paper and a very readable note by David and Sarah Kerridge.

Some terminology before diving into the discussion: Deming uses the notion of a frame, which he defines as an aggregate of identifiable physical units of some kind, any or all of which may be selected and investigated. In short: the aggregate of potential samples.

So what’s an enumerative study? In his paper, Deming defines it as one in which, “…action will be taken on the material in the frame studied…The aim of a study in an enumerative problem is descriptive. How many farms or people belong to this or that category? What is the expected out-turn of wheat for this region? How many units in the lot are defective? The aim (in the last example) is not to find out why there are so many or so few units in this or that category: merely how many.”

In contrast, an analytic study is one “in which action will be taken on the process or cause-system that produced the frame studied, the aim being to improve practice in the future…Examples include, comparison of two industrial processes A and B. (Possible) actions: adopt method B over method A, or hold on to A, or continue the experiment (gather more data).”

Deming also provides a criterion by which to distinguish between enumerative and analytic studies. To quote from the paper, “A 100 percent sample of the frame provides the complete answer to the question posed for the enumerative problem, subject to the limitations of the method of investigation. In contrast, a 100 percent sample of the frame is inconclusive in an analytic problem.”

It may be helpful to illustrate the above via project management examples. A census of tools used by project managers is an enumerative problem: sampling the entire population of project managers provides a complete answer. In contrast, building (or validating) a model of project manager performance is an analytic study: it is not possible, even in principle, to verify the model under all circumstances. To paraphrase Deming: there is no statistical method by which to extrapolate the validity of the model to other project managers or environments. This is the key point. Statistical methods have to be complemented by knowledge of the subject matter – in the case of project manager performance this may include organisational factors, environmental effects, work history and experience of project managers etc. Such knowledge helps the investigator design studies that cover a wide range of circumstances, paving the way for generalisations necessary for theory building. Basically, the sample data must cover the entire range over which generalisations are made. What this means is that the choice of samples depends on the aim of the study. The Kerridges offer some examples in their note, which I reproduce below:
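The point that samples must cover the range over which generalisations are made can be illustrated with a toy calculation. Everything below is invented for illustration (the “true” performance curve, the data, the scenario): a straight line fitted to a complete frame of junior project managers fits that frame well, yet extrapolates badly outside the sampled range, which is precisely why a 100 percent sample is inconclusive for an analytic problem.

```python
# Hypothetical illustration: a model fitted on a narrow frame can look
# fine in-sample yet fail outside the sampled range.

def true_performance(experience_years):
    """Invented 'true' relationship (unknown to the analyst):
    performance shows diminishing returns with experience."""
    return 10 * (1 - 2 ** (-experience_years / 3))

# Frame studied: every project manager has 0-4 years' experience,
# so even a 100 percent sample covers only this narrow range.
frame_x = [0, 1, 2, 3, 4]
frame_y = [true_performance(x) for x in frame_x]

# Least-squares straight line fitted to the entire frame
n = len(frame_x)
mx, my = sum(frame_x) / n, sum(frame_y) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(frame_x, frame_y))
         / sum((x - mx) ** 2 for x in frame_x))
intercept = my - slope * mx

# In-sample, the linear model fits closely...
in_sample_err = max(abs(intercept + slope * x - y)
                    for x, y in zip(frame_x, frame_y))

# ...but extrapolating to a 20-year veteran badly overshoots
predicted = intercept + slope * 20
actual = true_performance(20)
print(f"in-sample max error: {in_sample_err:.2f}")
print(f"at 20 years: predicted {predicted:.1f}, actual {actual:.1f}")
```

No amount of additional data from the 0–4 year frame can fix the extrapolation; only subject-matter knowledge can tell the investigator what range of circumstances the samples must cover.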

Aim: Discover problems and possibilities, to form a new theory.
Method: Look for interesting groups, where new ideas will be obvious. These may be focus groups, rather than random samples. Accuracy and rigour aren’t required. But this assumes that the possibilities discovered will be tested by other means, before making any prediction.

Aim: Predict the future, to test a general theory.
Method: Study extreme and atypical samples, with great rigour and accuracy.

Aim: Predict the future, to help management.
Method: Get samples as close as possible to the foreseeable range of circumstances in which the prediction will be used in practice.

Aim: Change the future, to make it more predictable.
Method: Use statistical process control to remove special causes, and experiment using the PDSA cycle to reduce common cause variation.
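The last aim above can be sketched in code. The following is a minimal, hypothetical example of an individuals control chart of the kind used in statistical process control: it estimates 3-sigma limits from moving ranges and flags points outside the limits as candidate special causes. The data and scenario are invented.

```python
# Hypothetical sketch of statistical process control: estimate
# 3-sigma control limits from moving ranges and flag points outside
# them as candidate special causes.

def control_limits(values):
    """Centre line and 3-sigma limits for an individuals chart,
    with sigma estimated from the average moving range."""
    n = len(values)
    centre = sum(values) / n
    moving_ranges = [abs(values[i] - values[i - 1]) for i in range(1, n)]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    sigma = mr_bar / 1.128  # d2 constant for subgroups of size 2
    return centre, centre - 3 * sigma, centre + 3 * sigma

# Invented data: weekly task-completion times (days);
# week 7 had an unusual delay
times = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 9.5, 5.1, 4.7, 5.0]
centre, lcl, ucl = control_limits(times)
special = [t for t in times if t < lcl or t > ucl]
print(f"limits: [{lcl:.1f}, {ucl:.1f}], special causes: {special}")
```

Points beyond the limits (the week-7 delay here) warrant investigation; once their causes are found and removed, what remains is common-cause variation, which PDSA-style experiments then work to reduce.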

Unfortunately, many project management studies that purport to build theories do not exercise appropriate care in study design. The typical offence is that the samples used do not support the generalisations made. The resulting theories are thus built on flimsy empirical foundations. To be sure, most offenders label their studies as preliminary (other favoured adjectives include exploratory, tentative, initial, etc.), thereby absolving themselves of responsibility for their irresponsible speculations. That would be OK if such work were followed up by a thorough empirical study, but it often isn’t. I’m loath to point fingers at specific offenders, but readers will find an example or two amongst papers reviewed on this blog. Lest I be accused of making gross and unfair generalisations, I should hasten to add that the reviews also include papers in which statistical analysis is done right (I’ll leave it to the reader to figure out which ones these are…).

To sum up: in this post I’ve discussed the difference between enumerative and analytic studies and its implications for the validity of some published project management research. Enumerative statistics deals with counting and categorisation, whereas analytic studies are concerned with clarifying cause-effect relationships. In analytic work, it is critical that samples are chosen to reflect the stated intent of the work, be it general theory-building or prediction in specific circumstances. Although this distinction should be well understood (having been articulated clearly over a quarter of a century ago!), it appears that it isn’t always given due consideration in project management research.

Written by K

December 2, 2008 at 9:37 pm

Posted in Project Management, Statistics

A roadmap to agility

Many corporate IT shops use big design up-front methodologies to guide their internal software development projects.  Generally, IT decision makers seem reluctant to trial  iterative/incremental approaches, which have proven their worth in diverse development environments. The best known amongst these techniques are the ones based on agile development principles. “Agile principles are OK for software development houses,” say these  managers,  “but they’ll never work in the corporate world.”  I don’t quite agree with this because I’ve had some minor successes in using agile principles (continual customer collaboration, for instance) within  corporate IT environments. However – and I freely admit it – my efforts have been piecemeal and somewhat ad-hoc. Now, finally, help is at hand for those who have wondered how they might “add agility” to their development processes: A book entitled Becoming Agile…in an imperfect world, by Greg Smith and Ahmed Sidky, shows how non-agile development environments can be transformed through a gradual adoption of agile techniques. This post is an extensive review of the book.

I should add a caveat before proceeding any further: this review is written from the perspective of a development manager / team lead working in corporate IT – for no better reason than it’s what I do at present. That said, I hope there’s enough detail and commentary for it to be of interest to those working in other environments too.

The book begins with a story about a mining rescue, which provides an excellent illustration of agile principles in practice. The analogy is apt because, to be successful, any rescue effort must be collaborative (it must involve many people with diverse skills), adaptive (it must be responsive to changes in conditions) and, above all, must produce results (those trapped must be rescued unharmed). Traditional project management, with its insistence on complete, up-front requirements analysis and its inflexibility to change, would be hopelessly inappropriate for any rescue effort. Why? Because one cannot know a priori what might lead to a successful rescue – it is a complex process that unfolds and evolves with time. Similarly, as Frederick Brooks emphasised more than 20 years ago, software development is intrinsically complex. What makes it so is the in-principle impossibility of obtaining and assimilating user requirements upfront. This is the essential difference between – say – a construction project and a software development effort. Recent research on project complexity suggests that agile techniques offer the best hope of dealing with this complexity. The essential advantage conferred by agile processes is built-in adaptability to change, via iterative development and continual customer involvement. In the end, this is what enables development teams to build applications that customers really want. An obvious corollary – if it needs to be stated at all – is that the adoption of agile techniques provides demonstrable business value. This is important if one wants to get management buy-in for a move to agility.

The book provides a roadmap for software development teams that want to improve their agility. Although the authors claim they do not favour a specific methodology, much of their discussion is based on Scrum. There’s nothing wrong with this per se, but I believe it is more important to focus on principles (or intent) behind the practices rather than the practices themselves. Folks working in corporate IT environments would have a better chance of introducing agility into their processes by adopting principles (or ways of working) gradually, rather than attempting to introduce a specific methodology wholesale – the latter approach being much too radical for the corporate world. The book also lists some common “roadblocks to agility” and a brief discussion of how these can be addressed. The authors emphasise that the aim should be to create a customised agile development process that is tailored to the needs of the organisation. Furthermore, instead of aiming for “agile perfection”, one should aim at reaching the right level of agility for one’s organisation. Excellent advice!

The path to agility, as laid out in the book, is as follows:

  1. Assessment: evaluating current processes and developing a path to agility. Following Boehm and Turner, the authors suggest that upfront analysis be done to identify mismatches between organisational culture / practices and the agile techniques the organisation wishes to adopt. A proper assessment will help identify mismatches (or risks) associated with the transition. The book also provides a link to an online readiness assessment (registration required!). The assessments are to be provided in an appendix to the book. However, the review draft I received did not have this appendix, so I can’t comment on the utility of the tool.
  2. Getting buy-in: Introducing an agile methodology is impossible without management support. One needs to make a case for this upfront. The authors note that the move to agility should be undertaken only if there are demonstrable benefits for the company. When canvassing support, the costs, benefits (for the company and management) and risks must be clearly articulated in a business case for the migration to agile practices. The book provides some examples of each.
  3. Understanding current processes and modifying them appropriately: The authors emphasise that one needs to understand one’s existing processes thoroughly before attempting to change them. Only when this is done can one determine which processes would benefit the most from change. The basic idea here is to make one’s processes as agile as possible, within organisational and other constraints. Transplanting another organisation’s processes into one’s own environment is unlikely to work. The book outlines how organisations can develop customised processes suited to their specific environments. I found the book’s case-study based approach very helpful, as it provided a grounded example of how a company might approach the transition. In cases where companies have no pre-existing processes (or completely dysfunctional processes), the authors suggest starting with a packaged agile methodology such as Scrum.
  4. Piloting the new process: The new processes have to be tested on a real project. The authors recommend doing a pilot project using the new methodology. Much of the book is dedicated to discussing a case study of a pilot project in a fictitious organisation. The discussion is useful because it highlights common issues that any organisation might face in using agile processes for the first time. The pilot project is a useful vehicle to illustrate how feasibility studies, estimation and planning, iterative development, release and delivery work in an agile environment. I really liked this approach as it provided a grounded context to the principles.
  5. Retrospective: A retrospective or post-mortem offers the opportunity to improve the development process. Unfortunately, post-mortems are rarely done right. The book offers excellent advice on planning retrospectives. The basic idea: improve the process, don’t dissect the specific project.

Of course, achieving agility is more than modifying or adopting processes – it involves changing organisational culture as well. One of the main cultural obstacles is the command and control management style that is so prevalent in the corporate world. Another cultural issue is the lack of communication across organisational functions. The book provides advice on how to engender an agile culture within an organisation. Essentially, executives must endorse agile principles, line managers need to become coaches rather than supervisors, and teams need to adapt and adopt agile practices. Another characteristic of an agile culture is that teams are empowered to make their own decisions. This can be a challenge for managers and teams attuned to working in corporate IT environments that subscribe to the command and control approach.

The authors recommend engaging consultants to help with the transition to agility, but I think organisations may be better served by honest self-evaluation first, followed by the development of an action plan. The action plan (in true agile fashion!) must be developed collaboratively, involving all stakeholders who will be affected by the transformation. Books (such as the one being reviewed) and training courses can help one along the way, but there’s really no substitute for introspection and change from within. On a related note, the book mentions that agile teams should be composed of generalists – people with a broad range of technical skills. Corporate IT teams, on the other hand, tend to be made up of specialists. The authors point out that this can be a barrier to agility, but not one that is insurmountable.

Finally, the authors use the Technology Adoption Cycle to illustrate the difficulties of moving to an enterprise-wide adoption of agile techniques. Given the huge culture change involved, they recommend an evolutionary transition to agile processes. In this connection, the authors identify five levels of agility – Collaborative, Evolutionary, Integrated, Adaptive and Encompassing – and recommend that enterprises progress through each of these steps on their way to agility nirvana. The book presents a chart outlining what each level of agility entails (see this article for more). This approach enables the organisation (and the people involved) to “digest and assimilate” the changes in bite-sized pieces. The really good news is that the lower levels of agility are eminently achievable, as they emphasise agile principles such as customer collaboration and evolutionary (iterative) development, whilst placing no great demands on technical skills. This puts agility within reach of most organisations. So if you work in a non-agile environment, you may want to consider getting yourself a copy of the book as a first step towards becoming agile.

References:

Greg Smith and Ahmed Sidky, Becoming Agile…in an imperfect world, Manning Publications, Manning Early Access release, Sep 2007; softbound print release, Feb 2009 (est.).

Written by K

November 18, 2008 at 8:00 pm