Eight to Late

Sensemaking and Analytics for Organizations


The king’s son – a project management fable


Once upon a time there was a king who was much loved by his people. The people loved him because he did many Good Things: he built roads for those who needed to travel long distances, houses for those who lacked a place to live and even initiated software projects to keep geeks in gainful employment.

All the Good Things the king did needed money and although the king was rich, his resources were not unlimited.  Naturally, the king’s treasurer wanted to ensure that the funds flowing out of the state coffers were being put to good use.

One day, at a council meeting the treasurer summoned up his courage and asked the king, “Your highness, I know your intentions are good, but how do we know that all the money we spend is being used properly?”

“It must be so because the people are happy,” replied the king.

“Yes they are happy and that is good,” said the treasurer, “but how do we know that money we spend is not being wasted?  Is it not possible that we could save money by coordinating, planning and monitoring the Good Things we do in an organized manner?”

The king (who was known to think from time to time) mulled over this for a few days.

After much mulling, he summoned his treasurer and said, “You are right. We should be more organized in the way we do all the Good Things we do. This task is so important that I will ask my second son to oversee the Good Things we do. He is, after all, a Prince Too.”

The second son (who was a Prince Too) took to his new role with relish. His first act was to set up a Governance Committee to oversee and direct all the Good Things that were being done. He ordered the committee to come up with a process that would ensure that the Good Things being done would be done in an efficient and transparent way.  His second act was to publish a decree, declaring that all those who did not follow the process would be summarily terminated.

Many expensive consultants and long meetings later, the Governance Committee announced they had a methodology (they could coin a word or two…) which, if followed to the letter, would ensure that all the Good Things being done were done efficiently and delivered value for the state. They had the assurance of those expensive consultants that the methodology was tested and proven, so they believed this would happen as a matter of course. Moreover, the rates that the consultants charged convinced the Governance Committee that this must indeed be so.

In keeping with the penchant of committees to name things, they gave the methodology the name of the king’s son (who, as we have seen earlier, was a Prince Too).

And so it came to pass that all the Good Things being done followed a process.  Those who managed the Good Things and those who actually did them underwent rigorous training in the foundations and practice of the methodology (which meant more revenue for the consultants). The planners and the doers then went out and applied the methodology in their work.

And for a while, everyone was happy: the king, the treasurer, the Governance Committee ….and of course, the Prince Too.

After some time, however, the treasurer noticed that the flow of money out of his coffers and into the Good Things had not lessened – on the contrary, it seemed to have increased. This alarmed him, so he requested a meeting with the king’s son to discuss the matter. The king’s son, on hearing the treasurer’s tale, was alarmed too (his father would not be happy if he heard that the methodology had made matters worse…).

The king’s son summoned the Governance Committee and demanded an Explanation Now! Yes, this was how he said it, he was very, very angry.

The Governance Committee were at a loss to explain the paradox. They were using a tested and proven methodology (as the expensive consultants assured them), yet the cost of all the Good Things they were doing was rising. “What gives?” they wondered. Try as they did, they could not find an answer. After much cogitation they called in the expensive consultants and demanded an explanation.

The consultants said that the methodology was Tested and Proven. It was simply not possible that it wasn’t working.  To diagnose the problem they recommended a month-long audit of all the Good Things that had been done since the methodology was imposed.

The Governance Committee agreed; they had little choice (unless they preferred summary termination, which they didn’t).

The audit thus proceeded.

A month later the consultants reported back to the Governance Committee.  “We know what the problem is,” they said. “Those who do Good Things aren’t following the methodology to the letter.  You must understand that the benefits of the methodology will be realised only if it is implemented properly. We recommend that everyone undergoes refresher training in the methodology so that they understand it properly.”

The Governance Committee went to the treasurer, explained the situation and requested that funds be granted for refresher courses.

On hearing this, the treasurer was livid. “What? We have to spend more money to fix this problem? You must be joking.”  He was very angry but he could see no other way;  the consultants were the only ones who could see them out of this mess.

The money was sanctioned and the training conducted. More Good Things were done but, unfortunately, the costs did not settle down.  Things, in fact, got so bad that the treasurer went directly to the king and mentioned the problem.

“Summon my second son,” said the king imperiously. “I must have Words with him.”

The second son (who was a Prince Too) was summoned and arrived post-haste. His retainers had warned him that the king was very, very angry.

“Father, you requested my presence?” he asked, a tad tremulously.

“Damn right, I requested your presence. I asked you to ensure that my money is being well spent on creating Good Things, and now I find that you are spending even more than we did before I put you in charge. I demand an explanation,” thundered the king.

The king’s son knew he was in trouble, but he was a quick thinker.  “Father,” he said, “I am as disappointed as you are with the performance of the Governance Committee; so disappointed am I that I shall terminate them summarily.”

“You do that son,” said the king, “and staunch the flow of funds from my coffers. I don’t know much, but I do know that when the treasurer tells me that I am running out of money, I have a serious problem.”

And so the Governance Committee was terminated. The expensive consultants, however, lived on, as did the king’s son (who was after all a Prince Too).  He knew he would try again, but with a more competent Governance Committee.  He had no choice – the present bunch of incompetents had been summarily terminated.

Acknowledgement

This piece was inspired by Craig Brown’s New Prince2 Hypothesis.

Written by K

May 2, 2012 at 7:19 pm

On the limitations of business intelligence systems


Introduction

One of the main uses of business intelligence  (BI) systems is to support decision making in organisations.  Indeed, the old term Decision Support Systems is more descriptive of such applications than the term BI systems (although the latter does have more pizzazz).   However, as Tim Van Gelder pointed out in an insightful post,  most BI tools available in the market do not offer a means to clarify the rationale behind decisions.   As he stated, “[what] business intelligence suites (and knowledge management systems) seem to lack is any way to make the thinking behind core decision processes more explicit.”

Van Gelder is absolutely right:  BI tools do not support the process of decision-making directly; all they do is present data or information on which a decision can be based.  But there is more:  BI systems are based on the view that data should be the primary consideration when making decisions.   In this post I explore some of the (largely tacit) assumptions that flow from such a data-centric view. My discussion builds on some points made by Terry Winograd and Fernando Flores in their wonderful book, Understanding Computers and Cognition.

As we will see, the assumptions regarding the centrality of data are questionable, particularly when dealing with complex decisions. Moreover, since these assumptions are implicit in all BI systems, they highlight the limitations of using BI systems for making business decisions.

An example

To keep the discussion grounded, I’ll use a scenario to illustrate how assumptions of data-centrism can sneak into decision making. Consider a sales manager who creates sales action plans for representatives based on reports extracted from his organisation’s BI system. In doing this, he makes a number of tacit assumptions. They are:

  1. The sales action plans should be based on the data provided by the BI system.
  2. The data available in the system is relevant to the sales action plan.
  3. The information provided by the system is objectively correct.
  4. The  side-effects of basing decisions (primarily) on data are negligible.

The assumptions and why they are incorrect

Below I state some of the key assumptions of the data-centric paradigm of BI and discuss their limitations using the example of the previous section.

Decisions should be based on data alone:    BI systems promote the view that decisions can be made based on data alone.  The danger in such a view is that it overlooks social, emotional, intuitive and qualitative factors that can and should influence decisions.  For example, a sales representative may have qualitative information regarding sales prospects that cannot be inferred from the data. Such information should be factored into the sales action plan provided the representative can justify it or is willing to stand by it.

The available data is relevant to the decision being made: Another tacit assumption made by users of BI systems is that the information provided is relevant to the decisions they have to make. However, most BI systems are designed to answer specific, predetermined questions. In general these cannot cover all possible questions that managers may ask in the future.

More important is the fact that the data itself may be based on assumptions that are not known to users. For example, our sales manager may be tempted to incorporate market forecasts simply because they are available in the BI system.  However, if he chooses to use the forecasts, he will likely not take the trouble to check the assumptions behind the models that generated the forecasts.

The available data is objectively correct:  Users of BI systems tend to look upon them as a source of objective truth. One of the reasons for this is that quantitative data tends to be viewed as being more reliable than qualitative data.  However, consider the following:

  1. In many cases it is impossible to establish the veracity of quantitative data, let alone its accuracy. In extreme cases, data can be deliberately distorted or fabricated (over the last few years there have been some high profile cases of this that need no elaboration…).
  2. The imposition of arbitrary quantitative scales on qualitative data can lead to meaningless numerical measures. See my post on the limitations of scoring methods in risk analysis for a deeper discussion of this point.
  3. The information that a BI system holds is based on the subjective choices (and biases) of its designers.

In short, the data in a BI system does not represent an objective truth. It is based on subjective choices of users and designers, and thus may not be an accurate reflection of the reality it allegedly represents. (Note added on 16 Feb 2013:  See my essay on data, information and truth in organisations for more on this point).
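The second point above – that arbitrary quantitative scales make qualitative data look meaningful – can be made concrete with a minimal sketch. The projects, ratings and encodings below are all invented for illustration; the point is only that the "average risk score" depends entirely on the arbitrary encoding chosen, not on the ratings themselves:

```python
# Two equally defensible numeric encodings of the same ordinal risk labels.
# The labels carry no intrinsic arithmetic, so neither encoding is "correct".
ratings_a = ["low", "medium", "high", "low"]            # project A (hypothetical)
ratings_b = ["medium", "medium", "medium", "medium"]    # project B (hypothetical)

encoding_1 = {"low": 1, "medium": 2, "high": 3}   # linear scale
encoding_2 = {"low": 1, "medium": 2, "high": 9}   # "high is much worse" scale

def mean_score(ratings, encoding):
    """Average the encoded ratings - a common but dubious practice."""
    return sum(encoding[r] for r in ratings) / len(ratings)

# Under encoding 1, project B looks riskier (2.0 vs 1.75);
# under encoding 2, project A does (3.25 vs 2.0). The ranking flips.
print(mean_score(ratings_a, encoding_1), mean_score(ratings_b, encoding_1))
print(mean_score(ratings_a, encoding_2), mean_score(ratings_b, encoding_2))
```

Both encodings respect the ordering low < medium < high, yet they rank the two projects differently – which is another way of saying the averaged number measures the encoding as much as the risk.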

Side-effects of data-based decisions are negligible:  When basing decisions on data, side-effects are often ignored. Although this point is closely related to the first one, it is worth making separately.  For example, judging a sales representative’s performance on sales figures alone may motivate the representative to push sales at the cost of building sustainable relationships with customers.  Another example of such behaviour is observed in call centres where employees are measured by the number of calls they handle rather than call quality (which is much harder to measure). The former metric incentivises employees to complete calls rather than resolve the issues that are raised in them. See my post entitled, measuring the unmeasurable, for a more detailed discussion of this point.
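The call-centre point can be sketched in a few lines. The agents and call records below are invented for illustration; the sketch simply shows that the same records yield a different "top performer" depending on whether one counts calls or measures resolutions:

```python
from collections import Counter

# Hypothetical call records: (agent, issue_resolved) per call.
calls = [
    ("alice", True), ("alice", True), ("alice", True),       # 3 calls, 3 resolved
    ("bob", True), ("bob", False), ("bob", False),
    ("bob", False), ("bob", False), ("bob", True),           # 6 calls, 2 resolved
]

call_counts = Counter(agent for agent, _ in calls)
resolved = Counter(agent for agent, ok in calls if ok)
resolution_rate = {a: resolved[a] / call_counts[a] for a in call_counts}

# Ranked by volume, Bob "wins"; ranked by resolution rate, Alice does.
top_by_volume = max(call_counts, key=call_counts.get)
top_by_quality = max(resolution_rate, key=resolution_rate.get)
```

An agent rewarded on `call_counts` alone is thus incentivised to behave like Bob – close calls quickly, resolved or not – which is precisely the side-effect the metric’s designers did not intend.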

Although I have used a scenario to highlight problems of the above assumptions, they are independent of the specifics of any particular decision or system. In short, they are inherent in BI systems that are based on data – which includes most systems in operation.

Programmable and non-programmable decisions

Of course, BI systems are perfectly adequate – even indispensable –  for certain situations. Examples include financial reporting (when done right!) and other operational reporting (inventory, logistics etc.).  These generally tend to be routine situations with clear-cut decision criteria and well-defined processes. Simply put, they are the kinds of decisions that can be programmed.

On the other hand, many decisions cannot be programmed: they have to be made based on incomplete and/or ambiguous information that can be interpreted in a variety of ways. Examples include issues such as what an organisation should do in response to increased competition or formulating a sales action plan in a rapidly changing business environment. These issues are wicked: among other things, there is a diversity of viewpoints on how they should be resolved. A business manager and a sales representative are likely to have different views on how sales action plans should be adjusted in response to a changing business environment. The shortcomings of BI systems become particularly obvious when dealing with such problems.

Some may argue that it is naïve to expect BI systems to be able to handle such problems. I agree entirely. However, it is easy to overlook the limitations of these systems, particularly when called upon to make snap decisions on complex matters. Moreover, any critical reflection regarding what BI ought to be is drowned in a deluge of vendor propaganda and advertisements masquerading as independent advice in the pages of BI trade journals.

Conclusion

In this article I have argued that BI systems have some inherent limitations as decision support tools because they focus attention on data to the exclusion of other, equally important factors.  Although the data-centric paradigm promoted by these systems is adequate for routine matters, it falls short when applied to complex decision problems.

Written by K

November 24, 2011 at 6:20 am

Chasing the mirage: the illusion of corporate IT standards


Introduction

Corporate IT environments tend to evolve in a haphazard fashion, reflecting the competing demands made on them by the organisational functions they support. This state of affairs suggests that IT is doing what it should be doing: supporting the work of  organisations.  On the other hand, this can result in unwieldy environments that are difficult and expensive to maintain. Efforts to address this  generally involve the imposition of standards relating to infrastructure, software and processes.  Unfortunately, the results of such efforts are mixed: although the adoption of standards  can reduce IT  costs, it does not lead to as much standardization as one might expect. In this post I explore why this is so. To this end I first look at intrinsic properties or characteristics that standards are assumed to have and discuss why they don’t actually have them. After that I look at some other factors that are external to standards but can also work against them.  My discussion is inspired by and partially based on a paper by Ole Hanseth and Kristin Braa entitled, Hunting for Treasure at the End of the Rainbow: Standardizing Corporate IT Infrastructure.

Assumed characteristics of standards and why they are false

Those who formulate corporate IT standards have in mind a set of specifications that have the following intrinsic characteristics:

  1. Universality –  the specifications are applicable to all users and situations.
  2. Completeness –    they include all details,  leaving nothing to the discretion of implementers.
  3. Unambiguity –     every specification has only one possible interpretation.

Unfortunately, none of these hold in the real world. Let’s take a brief look at each of them in turn.

Non-universality

To understand why the universality claimed by standards is false, it is useful to start by considering how a standard is created. Any new knowledge is necessarily local before it becomes a standard – that is, it is formed in a particular context and situation. For example, a particular IT help desk process depends, among other things, on the budget of the IT department and the skills of the helpdesk staff.  Moreover, it also depends on external factors such as organizational culture, business expectations, vendor response times and other external interfaces.

Once a process is established, however, the local context is deleted and the process is presented as being universal. The key point is that this is an abstraction – the process is presented in a way that presumes that the original context does not matter. However, when one wants to reproduce the process in another environment, one has to reconstruct the context. The problem is that this is not possible; one cannot reproduce the exact same context as the one in which the process was originally constructed.  Consequently, the standard has to be tailored to suit the new situation and context. Often this tailoring can be quite drastic. Further, different units within an organisation might need to tailor the process differently: the customisations that work for the US branch of an organisation may not work in its Australian subsidiary. So one often ends up with different organisational units implementing their own versions of the standard.
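The drift from a "universal" standard to local variants can be sketched as follows. The configuration keys, site names and tool names are all hypothetical; the sketch only illustrates the mechanism: each site merges its own overrides into the baseline, and the resulting implementations diverge from the standard and from each other:

```python
# A hypothetical "standard" help desk configuration.
standard = {
    "first_response_hours": 4,
    "escalation_levels": 3,
    "ticket_tool": "StandardDesk",   # invented product name
}

# The local tailoring each subsidiary applies to make the standard workable.
local_overrides = {
    "us_branch": {"first_response_hours": 2},                       # larger team
    "au_subsidiary": {"escalation_levels": 2,
                      "ticket_tool": "LegacyDesk"},                 # installed base
}

def tailored(base, overrides):
    """Local overrides win over the baseline standard."""
    return {**base, **overrides}

implementations = {site: tailored(standard, ov)
                   for site, ov in local_overrides.items()}

# Each site now runs its own version of the "universal" standard.
assert implementations["us_branch"] != implementations["au_subsidiary"]
```

In practice the overrides are rarely this visible – they live in undocumented deviations from the process manual – which makes the divergence harder, not easier, to detect.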

Incompleteness

Related to the above point is the fact that standards are incomplete. We have seen that standards omit context. However, that is not all: standards documents are generally written at a high level that inevitably overlooks technical detail.  As a consequence, those implementing standards have to fill in the gaps based on their knowledge of the technology. This inevitably leads to a divergence between an espoused standard and its implementation.

Ambiguity

Two people who read a set of high-level instructions will often come away with different interpretations of what exactly those instructions mean. Such differences can be overcome provided that:

  1. Those involved are aware of the differences in interpretation, and
  2. They care enough to want to do something about it.

Both points are moot. Firstly, people tend to assume that their interpretation is the right one. Secondly, even if they are aware of ambiguities, they may choose not to seek clarification because of geographical, language and other barriers.

Other factors

Some may argue that it is possible to work through some of the problems listed above. For example, it is possible – with some effort – to reduce incompleteness and ambiguity. Nevertheless, even if one does this (and the effort should not be underestimated!), there are other factors that can sabotage the implementation of standards. These include:

  1. Politics – It is a fact of life that organisations consist of stakeholder groups with different interests.  Quite often these interests will conflict with each other.  A good example is the outsourcing vs. in-house IT debate, in which  management and staff usually have opposing views.
  2. Legacy – Those who want to implement standards have to overcome the resistance of legacy – the installed base that already exists within the organisation. Typically owners and users of legacy systems will oppose the imposition of the new standards, first overtly and if that does not work, then covertly. Moreover, legacy applications make demands of their own – infrastructure requirements, interfaces, support etc., each of which may not  be compatible with the new standards.
  3. FUD factor – Finally, there is the whole issue of FUD (Fear, Uncertainty and Doubt) caused by the new standards. Many IT staff and other employees view standards negatively because they represent an unknown. Although much  is said about the need to inform and educate people, most often this is done in a half-baked way that only serves to increase FUD.

In summary

Although the implementation of corporate IT standards can reduce an organisation’s application portfolio and the attendant costs, it does not reduce complexity as much as managers might hope.   As discussed above, non-universality, incompleteness and ambiguity of standards will generally end up subverting standardization (see my post entitled The ERP paradox for an example of this at work).  Moreover, even if an organisation addresses the inherent shortcomings of standards,  the human factor remains:  individuals who might lose out  will resist change, and different groups will push to have their preferred platforms included in the standard.

In short: a standardized IT environment will remain a mirage, tantalizingly in sight but always out of reach.

Written by K

September 16, 2011 at 5:53 am