Eight to Late

Sensemaking and Analytics for Organizations

Archive for the ‘Corporate IT’ Category

The politics of data warehousing revisited


Introduction

An enterprise IT initiative generally affects a range of stakeholder groups, each with its own take on why the project is being undertaken and what the result should look like. This diversity of views is no surprise: an organisation-wide effort touches many divisions and departments, so there are bound to be differing – even conflicting – views regarding the initiative and its expected outcome.

The existence of many irreconcilable viewpoints is one of the main symptoms of a wicked problem – a problem that is hard to define, let alone solve. Paul Culmsee has written about the inherent wickedness of projects that involve collaborative platforms such as SharePoint. In this post I discuss how another class of enterprise-scale initiatives – efforts to consolidate and harmonise organizational data for analytical and reporting purposes (so-called data warehouse projects) – display characteristics of wickedness. I also briefly discuss a couple of approaches that can be used to manage this issue.

As some of my readers may not be familiar with the terms data warehouse or wicked problem, I’ll start with a short introduction to the two terms in order to set the stage for the main topic.

Data warehouse

A data warehouse is a repository of data that a business deems important for reporting and analysis. Ideally, a data warehouse integrates data from multiple sources – for example, CRM and financial systems – thereby serving as an authoritative source for management reports (often referred to as  a “single point of truth”). There are at least a couple of different design philosophies for data warehouses, but I won’t go into these as they are not relevant to the discussion.  What’s interesting is that most of the literature on data warehousing deals with its technical aspects – things such as data modelling and extract-transform-load processes –   yet, as anyone who has been involved in an enterprise-scale data warehousing effort will tell you, the biggest challenges are political, not technical. To be fair, this was recognized a while ago – Marc Demarest wrote an article on the politics of data warehousing in 1997. However, it is worth revisiting this issue because there are techniques to handle it that weren’t widely known at the time Demarest wrote his article. I discuss these briefly later, but first let’s look at what wickedness means and its relevance to data warehouse projects.
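To make this concrete, here is a minimal sketch of an extract-transform-load step in Python (all system names, fields and records are hypothetical): customer records are pulled from a CRM and a finance system, conformed to a common schema, and loaded into a single integrated table.

```python
# Minimal ETL sketch: conform customer records from two hypothetical
# source systems into one "customer" table for reporting.

crm_customers = [
    {"crm_id": 101, "full_name": "Acme Pty Ltd", "segment": "Enterprise"},
]
finance_customers = [
    {"acct_no": "A-101", "name": "ACME PTY LTD", "credit_limit": 50000},
]

def conform_crm(rec):
    # Transform: map CRM fields onto the warehouse's common schema.
    return {"source": "CRM", "source_key": str(rec["crm_id"]),
            "customer_name": rec["full_name"].upper()}

def conform_finance(rec):
    # Transform: map finance fields onto the same schema.
    return {"source": "FIN", "source_key": rec["acct_no"],
            "customer_name": rec["name"].upper()}

# Load: one integrated table, with source lineage preserved so that
# discrepancies can be traced back to the originating system.
warehouse_customers = ([conform_crm(r) for r in crm_customers] +
                       [conform_finance(r) for r in finance_customers])

for row in warehouse_customers:
    print(row["source"], row["customer_name"])
```

Real ETL pipelines do far more (cleansing, deduplication, history tracking), but the essential move – many source schemas in, one conformed schema out – is the one shown here.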

Wicked problems

The term wicked problem was coined by Horst Rittel and Melvin Webber in a now-classic paper entitled Dilemmas in a General Theory of Planning. The paper is essentially a critique of the traditional approach to social planning, wherein decisions are made by experts who, by virtue of their specialist knowledge and training, are assumed to know best.  Such an approach often doesn’t work because it ends up alienating stakeholders who are adversely affected by the “solution.”  This is a symptom of social complexity – messiness and conflict arising from diverse opinions as to what the problem is and how it should be solved.   Those involved in enterprise-scale IT initiatives – whether as users, managers or technical specialists – would have had first-hand experience of this social complexity.

How do we know that a problem is socially complex (or wicked)? That’s easy: In the paper, Rittel and Webber describe  ten criteria for wickedness – so a problem is wicked if it satisfies some or all of the Rittel-Webber criteria. We’ll take a look at the criteria and their relevance to data warehousing next.

The wickedness of data warehouse initiatives

To support my claim about the wickedness of data warehousing initiatives, I’ll simply list the ten Rittel-Webber criteria (in their original form) along with a brief commentary on how they can crop up in data warehouse projects.  Here we go:

  1. There is no definitive formulation of a wicked problem: Those who have worked on organisation-wide efforts at integrating data will know that the first problem is to decide “what’s in and what’s out” – that is, what data sources are considered in scope for integration. The problem arises because different business stakeholders have different views on what is important. For example, data that is critical to HR may not be a priority for the marketing function.
  2. Wicked problems have no stopping rule: Data warehouse initiatives are never definitively completed: there are always new data sources that need to be integrated;  old ones to be turned off;  business rules to be changed and so on. Any stopping rule that one might define will need to be revised as new business requirements come up and new data sources are revealed.
  3. Solutions to wicked problems are not true or false, but better or worse: This is simply an expression of the truism that there is no right or wrong way to build a data warehouse. There are a range of different architectures and approaches that can be chosen, each with their pros and cons (see this paper for a comparison of the two most popular approaches). The problem is that one often cannot tell beforehand which approach is going to be best for  a particular situation.
  4. There is no immediate or ultimate test of a solution to a wicked problem: This is a statement of the fact that one cannot tell whether or not a particular implementation completely solves the problem of data integration. As Rittel and Webber put it, “…any solution, after being implemented, will generate waves of consequences over an extended – virtually unbounded – period of time. Moreover, the next day’s consequences of the solution may yield utterly undesirable consequences…” Although these words are somewhat over-the-top, the message isn’t: for example, I have seen situations where programming errors that remained undetected for years (yes, years) have led to incorrect data being used in reports.
  5. Every solution to a wicked problem is a “one-shot” operation; because there is no opportunity to learn by trial and error, every attempt counts significantly: Because of the high costs of implementation, enterprise-scale IT initiatives tend to be one-shot affairs. Another limiting factor is that there is usually a very short window of time in which the project must be completed – as the cliché goes, “users need these reports yesterday.” Among other things, this precludes the option of learning by trial and error.
  6. Wicked problems do not have an enumerable (or exhaustively describable) set of potential solutions, nor is there a set of well-describable options that may be incorporated into the plan: This point may seem like it doesn’t apply to data warehousing initiatives – all data warehousing projects have a plan, right? Nevertheless, those who have worked on such projects will attest to the fact that the plan – such as it is – needs frequent revision because of surprises that crop up along the way.  Iterative/incremental development approaches can address these issues to some extent, but cannot eliminate them completely. Because of time constraints, it is inevitable that solutions to unexpected roadblocks occur through   improvisation rather than planning.
  7. Every wicked problem is essentially unique: This one is easy to see: every organisation is unique, and so are its data integration requirements. Methodologists and consultants may try to convince you otherwise, and tempt you into following generic approaches – but don’t be fooled, generic approaches will come unstuck. Your data is unique, treat it with the respect and seriousness it deserves.
  8. Every wicked problem can be considered to be a symptom of another problem: One of the key drivers of data warehouse projects is that organizations tend to have the same (or similar) data residing in multiple databases. As a consequence there are several different “sources of truth” for reports. These different sources of truth arise because systems used in different departments may have different definitions of the same business entity. For example, a customer might be defined in one way within the financial system but in another way in a CRM system. Seen in this light, the problem of multiple sources of truth is actually a symptom of lack of communication between different departments,  what is sometimes called silo mentality.
  9. The existence of a discrepancy representing a wicked problem can be explained in numerous ways. The choice of explanation determines the nature of the problem’s resolution: As discussed in the previous point, the discrepancy in the case of a data integration problem is the lack of congruency between different data sources. There can be a range of explanations for the discrepancy. For example, one explanation may be that the data is actually different – a customer in the CRM system is not the same as the customer in the finance system; another explanation may be that the two entities are the same but their definitions differ because the systems were developed independently of each other. The data integration solution in the two cases will differ – in other words, the solution to the problem depends on which explanation is seen as the correct one.
  10. The planner has no right to be wrong: The data warehouse designer is in a difficult position: he or she may have to reconcile contradictory requirements. Following on from the example of the previous point, whatever decision the designer makes regarding the definition of a customer, some parties will be unhappy: if she goes with the finance definition, the CRM users will be ticked off; if she goes with the CRM definition, finance will be unhappy; and if she defines a single common entity, neither side will be pleased. Yet her mandate is to satisfy all business requirements. This criterion is essentially an expression of the political aspect of data warehouse projects.
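Points 8 to 10 above can be illustrated with a small sketch (Python, entirely made-up records): whether the warehouse reports two customers or four depends on which explanation of the discrepancy – same entities with different representations, or genuinely different entities – the designer adopts.

```python
# Two hypothetical source systems holding (possibly) the same customers.
finance = [{"acct": "F-201", "name": "Smith & Co."},
           {"acct": "F-202", "name": "Jones Ltd"}]
crm = [{"id": 9, "name": "SMITH AND CO"},
       {"id": 10, "name": "Jones Ltd"}]

def normalise(name):
    # One possible matching rule: uppercase, treat "&" as "AND",
    # drop punctuation. A different rule would give different matches.
    return name.upper().replace("&", "AND").replace(".", "").strip()

# Explanation 1: records refer to the same entity if normalised
# names match, so the integrated customer list is the intersection.
matched = ({normalise(f["name"]) for f in finance} &
           {normalise(c["name"]) for c in crm})

# Explanation 2: the records are genuinely different entities, so
# every record stands on its own and nothing is reconciled.
unmatched_total = len(finance) + len(crm)

print(len(matched))      # customers if the systems are reconciled
print(unmatched_total)   # customers if they are treated as distinct
```

The arithmetic is trivial; the point is that the answer to “how many customers do we have?” is determined not by the data alone but by the explanation chosen for the discrepancy – which is a business decision, not a technical one.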

I find it quite amazing that criteria that were framed in the context of social planning problems can apply word-for-word  to data consolidation initiatives.

Managing wickedness in data warehousing

As should be evident from the above, wicked problems can’t be solved in the usual sense of the word, but they can be managed.  Although there are many techniques to manage wickedness, they all focus on the same end: to help all stakeholder groups reach a shared understanding of the problem and make a shared commitment to action.  Such a shared understanding  is absolutely critical because business and IT folks often have differing views on what a data warehouse ought to be.

One approach that I have used to help stakeholders get to a shared understanding in data warehouse projects is  dialogue mapping, a facilitation technique that maps out the conversation between stakeholders as it occurs. Dialogue mapping uses the Issue-Based Information System (IBIS) notation which was invented by Rittel as a means to document the different facets of a wicked problem. See this post for a data warehouse related example of dialogue mapping and this one for more on the IBIS notation.

Shared understanding and commitment to action is well and good, but in the end success is measured by deliverables: the data warehouse and accompanying reports must be built.  One of the challenges with a data warehouse initiative is that  customers have to wait a long (very long!) time before they see any tangible benefits. Agile approaches to data warehousing offer a way to address this issue. For those interested in the nuts and bolts of agile data warehousing, I recommend Ralph Hughes’ book, which discusses how Scrum can be adapted for data warehousing projects.

Although the juxtaposition of the terms “agile” and “data warehouse” may sound oxymoronic to some, there is evidence that it works (see this case study, for example).  Of course, no approach is a silver bullet; those who want to read about potential problems may want to look at  this thesis for a research-based view of the pros and cons of an  agile approach to data warehousing.

In the end, though, one has to keep in mind that no development technique – agile or otherwise – will succeed unless all stakeholders have a shared understanding of what the data warehouse is intended to achieve. The biggest issues are organisational rather than technical.

Conclusion

As we have seen, corporate data integration problems satisfy many – if not all – of the criteria for wickedness. The main implication is that data consolidation at an enterprise level is not just a difficult technical problem; it is also a socially complex one. Although tackling this requires skills and techniques that are outside the standard repertoire of technical staff and managers, these skills can be learnt. What’s more, they are critical for success: those who undertake data warehouse projects without an understanding of the conflicting agendas of stakeholder groups may fail for reasons that have nothing to do with technology.

Written by K

June 23, 2011 at 11:11 pm

Six common pitfalls in project risk analysis


The discussion of risk presented in most textbooks and project management courses follows the well-trodden path of risk identification, analysis, response planning and monitoring (see the PMBOK guide, for example). All good stuff, no doubt. However, much of the guidance offered is at a very high level. Among other things, there is little practical advice on what not to do. In this post I address this issue by outlining some of the common pitfalls in project risk analysis.

1. Reliance on subjective judgement: People see things differently:  one person’s risk may even be another person’s opportunity. For example, using a new technology in a project can be seen as a risk (when focusing on the increased chance of failure) or opportunity (when focusing on the opportunities afforded by being an early adopter). This is a somewhat extreme example, but the fact remains that individual perceptions influence the way risks are evaluated.  Another problem with subjective judgement is that it is subject to cognitive biases – errors in perception. Many high profile project failures can be attributed to such biases:  see my post on cognitive bias and project failure for more on this. Given these points, potential risks should be discussed from different perspectives with the aim of reaching a common understanding of what they are and how they should be dealt with.

2. Using inappropriate historical data: Purveyors of risk analysis tools and methodologies exhort project managers to determine probabilities using relevant historical data. The word relevant is important: it emphasises that the data used to calculate probabilities (or distributions) should be from situations that are similar to the one at hand.  Consider, for example, the probability of a particular risk – say,  that a particular developer will not be able to deliver a module by a specified date.  One might have historical data for the developer, but the question remains as to which data points should be used. Clearly, only those data points that are from projects that are similar to the one at hand should be used.  But how is similarity defined? Although this is not an easy question to answer, it is critical as far as the relevance of the estimate is concerned. See my post on the reference class problem for more on this point.

3. Focusing on numerical measures exclusively: There is a widespread perception that quantitative measures of risk are better than qualitative ones. However, even where reliable and relevant data is available, the measures still need to be based on sound methodologies. Unfortunately, ad-hoc techniques abound in risk analysis: see my posts on Cox’s risk matrix theorem and limitations of risk scoring methods for more on these. Risk metrics based on such techniques can be misleading. As Glen Alleman points out in this comment, in many situations qualitative measures may be more appropriate and accurate than quantitative ones.

4. Ignoring known risks: It is surprising how often known risks are ignored.  The reasons for this have to do with politics and mismanagement. I won’t dwell on this as I have dealt with it at length in an earlier post.

5. Overlooking the fact that risks are distributions, not point values: Risks are inherently uncertain, and any uncertain quantity is represented by a range of values (each with an associated probability) rather than a single number (see this post for more on this point). Because of the scarcity or unreliability of historical data, distributions are often assumed a priori: that is, analysts will assume that the risk distribution has a particular form (say, normal or lognormal) and then evaluate distribution parameters using historical data. Further, analysts often choose simple distributions that are easy to work with mathematically. These distributions often do not reflect reality. For example, they may be vulnerable to “black swan” occurrences because they do not account for outliers.

6. Failing to update risks in real time: Risks are rarely static – they evolve in time, influenced by circumstances and events both in and outside the project. For example, the acquisition of a key vendor by a mega-corporation is likely to affect the delivery of that module you are waiting on –and quite likely in an adverse way. Such a change in risk is obvious; there may be many that aren’t. Consequently, project managers need to reevaluate and update risks periodically. To be fair, this is a point that most textbooks make – but it is advice that is not followed as often as it should be.
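Point 5 above lends itself to a quick simulation (Python, with made-up parameters): a normal and a lognormal cost model can be tuned to a similar central value yet assign wildly different probabilities to a severe overrun – precisely the regime where “black swans” live.

```python
import random

random.seed(42)
N = 100_000

# Hypothetical task-cost risk: both models centre near 100 cost
# units, but their upper tails differ markedly.
normal_draws = [random.gauss(100, 20) for _ in range(N)]
# For lognormvariate(mu, sigma), the mean is exp(mu + sigma**2 / 2),
# so mu=4.5, sigma=0.5 gives a mean of about 102.
lognormal_draws = [random.lognormvariate(4.5, 0.5) for _ in range(N)]

def prob_exceeds(draws, threshold):
    # Fraction of simulated outcomes worse than the threshold.
    return sum(d > threshold for d in draws) / len(draws)

# Probability of a severe overrun (cost > 180) under each model.
p_normal = prob_exceeds(normal_draws, 180)
p_lognormal = prob_exceeds(lognormal_draws, 180)

print(f"P(cost > 180), normal model:    {p_normal:.4f}")
print(f"P(cost > 180), lognormal model: {p_lognormal:.4f}")
```

Under the normal model the severe overrun is a four-sigma event and effectively never happens; under the heavy-tailed lognormal model it occurs in a non-negligible fraction of runs. An analyst who assumes the convenient distribution will simply not see the tail risk.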

This brings me to the end of my (subjective) list of risk analysis pitfalls. Regular readers of this blog will have noticed that some of the points made in this post are similar to the ones I made in my post on estimation errors. This is no surprise: risk analysis and project estimation are activities that deal with an uncertain future, so it is to be expected that they have common problems and pitfalls. One could generalize this point:  any activity that involves gazing into a murky crystal ball will be plagued by similar problems.

Written by K

June 2, 2011 at 10:21 pm

There’s trouble ahead: early warning signs of project failure


Introduction

I’ve written a number of articles on project failure, covering topics ranging from definitions of success to the role of biases in project failure. As interesting as these issues are, they are somewhat removed from the day-to-day concerns of a project manager, who is more interested in avoiding failure than in defining or analyzing it. In a paper entitled Early warning signs of IT project failure: the dominant dozen, Leon Kappelman et al. outline the top twelve risks associated with IT project failures. This post summarises the paper and lists the top twelve signs of impending trouble on projects.

Background and research methodology

The authors  focus on early warning signs – i.e. those that occur within the initial 20% of the planned schedule. Further, to ensure comprehensive coverage of risks, their conclusions are  based on inputs from academic and industry journals as well as from experienced IT project managers.  The paper provides a detailed explanation of their research methodology, which I’ll quote directly from the paper:

The research team first searched the literature extensively to develop a preliminary list of early warning signs (EWSs). The two authors experienced in IT project management then added several EWSs based on their personal experience. Next, 19 IT project management experts were asked to assess the list. On the basis of their feedback, we added new items and modified others to develop a list of 53 EWSs. Finally, the research team invited 138 experienced IT project managers (including the original 19 experts) to participate in rating the 53 EWSs using a scale from 1 (extremely unimportant) to 7 (extremely important). Fifty-five (55) of these managers completed the survey, yielding a response rate of nearly 40 percent. The respondents had an average of more than 15 years of IT project management experience. The budgets of the largest IT projects they managed ranged from 3 million to 7 billion dollars. About 30 percent held the title of program or project manager and nearly 20 percent had consultant titles. Director or program analyst titles accounted for about 15 percent each, 10 percent were vice presidents, and the rest held titles such as CEO, CIO, chief scientist, chief technologist, or partner.

Although the list and the rankings were based on the subjective opinions of experts, the large number of participants ensures a degree of  consensus regarding  the most important factors.

The troublesome twelve

After ranking the fifty-odd risks, the authors focused on those that had scores above 6 (out of a maximum possible of 7, as discussed above). There were 17 risks that satisfied this (somewhat arbitrary) criterion. Some of these were similar, so they could be combined. For example, the four risks:

  • No documented milestone deliverables and due dates
  • No project status progress process
  • Schedule deadline not reconciled to the project schedule
  • Early project delays are ignored — no revision to the overall project schedule

were combined into: ineffective schedule planning and/or management.

This process of combining the top 17 items resulted in twelve risks, half of which turned out to be people-related and the other half process-related.  I discuss each of the risks in detail below.

People-related early warning signs

1.       Lack of top management support: This was the number one risk of the fifty-three that the authors listed. This isn’t surprising – a project that lacks executive support is unlikely to get the financial, material or human resources necessary to make it happen.

2.       Ineffective project manager: Project managers who lack the communication and managerial skills needed to move the project ahead pose a serious risk to projects. The authors point out  that this is a common risk on IT projects because project managers  are often technical folks who have been promoted to managers. As such they may lack the interest, aptitude and/or skills to manage projects. Interestingly, the authors do not comment on the converse problem – whether the project manager’s lack of technical/domain knowledge contributes to project failure.

3.       No stakeholder involvement and/or participation: A large number of projects proceed with minimal involvement of key stakeholders. Such folks often lose interest in  projects when more immediate matters consume their attention.  In such situations a project manager may find it hard to get the resources he or she needs to get the project done. Stakeholder or sponsor apathy is an obvious warning sign that a project is headed for trouble.

4.       Uncommitted project team: The commitment (preferably, full-time) of a team is essential for the success of a project.  Management needs to ensure that team members are given the time (and incentives) to work on the project. A point that is often left unconsidered is the intrinsic motivation of the team – see this post for a detailed discussion of motivation in project management.

5.       Lack of technical knowledge/skills:  Project teams need to have the technical skills and knowledge that is relevant to the project. Managers sometimes wrongly assume that project staff can pick up the required skills  whilst working on a project.  Another common management misconception is that project personnel can master new technologies solely by attending training courses.   Getting contractors to do the work  is one solution to the problem.  However,  the best option is  to give the team enough time to get familiar with the technology prior to the project or, failing this,  to switch to a technology that the team is familiar with.

6.       Subject matter experts are not available: It is often assumed that subject matter experts can provide adequate inputs into projects whilst doing their regular jobs. This seldom works – when there’s a choice between the project and their jobs, the latter always wins.  Project sponsors need to  ensure that subject matter experts are freed up to work on the project.

Process-related early warning signs

1.       Unclear scope: The authors label this one as “Lack of documented requirements and/or success criteria.” However I think it is better described by the phrase I’ve used. All project management methodologies emphasise the importance of clear, well-documented requirements and success criteria –  and with good reason too. Lack of clarity regarding project scope means that no one knows where the project is headed – a sure sign of trouble ahead.

2.       No change control process: As the cliché reminds us, change is the only constant in business environments. It is therefore inevitable that project scope will change. Changes to scope – however minor they may seem – need to be assessed for their impact on the project. The effect of several small (unanalyzed) scope changes on the project schedule should not be underestimated! Many project managers have a hard time pushing back on scope changes foisted on them by senior executives. Hence it is important that the change control process applies across the board – to everyone, regardless of their authority.

3.       Ineffective scheduling and schedule management: Many schedules are built on little more than guesswork and an unhealthy dose of optimism, often because they are drawn up without input from the folks who’ll actually do the work (see my article on estimation errors for more on this). Schedules need to be rooted in reality. For this to happen, they must be based on reliable estimates, preferably from those responsible for creating the deliverables. Once the schedule is created, it is the project manager’s responsibility to update it continually, reflecting all the detours and road-bumps that have occurred along the way.  A common failing is that time overruns are not properly recorded,  leading to a false illusion of progress.

4.       Communication breakdown: Project communication is the art of getting people on the same page when they are reading different books. In my post on obstacles to project communication, I have discussed some generic difficulties posed by differences in stakeholder backgrounds and world-views. One of the key responsibilities of a project manager is to ensure that everyone on the project has a shared understanding of the project goals and shared commitment to achieving them. This is as true in the middle or the end of a project as it is at the start.

5.       Resources assigned to another project: In my experience, resources are rarely reassigned wholesale to other projects. What usually happens is that they are reassigned on a part-time basis, as in “we’ll take 20% of Matt’s time and 10% of Nick’s time.” The problem with this is that Matt and Nick will end up spending most of their time on the other project, leaving the one at hand bereft.

6.       No business case: A not uncommon refrain in corporate hallways is, “Why are we doing this project?” No project should be given the go-ahead without a well-articulated business case. Further, since an understanding of the reason(s) for doing the project is central to its success, the business case should be made available to every stakeholder: a shared understanding of the goals of the project is a prerequisite to a shared understanding of the rationale behind it.

I’m sure there aren’t any surprises in this list –  most project managers would agree that these are indeed common (and often ignored) early warning signs of failure.  However, I suspect that there will be substantial differences of opinion regarding their ranking. Wisely, the authors have refrained from attempting to rank the risks – the list is not in order of importance.

Conclusion

Good project managers anticipate potential problems and take action to avoid them. Although the risks listed above are indeed obvious, they are often ignored. Affected projects then limp on to oblivion because those responsible failed to react to portents of trouble. Granted, it can be hard to see problems from within the system, particularly when the system is a high-pressure project. That’s where such lists are useful: they can warn the project manager of potential trouble ahead.

Written by K

January 6, 2011 at 10:42 pm