Archive for the ‘Consulting’ Category
On the limitations of business intelligence systems
Introduction
One of the main uses of business intelligence (BI) systems is to support decision making in organisations. Indeed, the old term Decision Support Systems is more descriptive of such applications than the term BI systems (although the latter does have more pizzazz). However, as Tim Van Gelder pointed out in an insightful post, most BI tools available in the market do not offer a means to clarify the rationale behind decisions. As he stated, “[what] business intelligence suites (and knowledge management systems) seem to lack is any way to make the thinking behind core decision processes more explicit.”
Van Gelder is absolutely right: BI tools do not support the process of decision-making directly; all they do is present data or information on which a decision can be based. But there is more: BI systems are based on the view that data should be the primary consideration when making decisions. In this post I explore some of the (largely tacit) assumptions that flow from such a data-centric view. My discussion builds on some points made by Terry Winograd and Fernando Flores in their wonderful book, Understanding Computers and Cognition.
As we will see, the assumptions regarding the centrality of data are questionable, particularly when dealing with complex decisions. Moreover, since these assumptions are implicit in all BI systems, they highlight the limitations of using BI systems for making business decisions.
An example
To keep the discussion grounded, I’ll use a scenario to illustrate how assumptions of data-centrism can sneak into decision making. Consider a sales manager who creates sales action plans for representatives based on reports extracted from his organisation’s BI system. In doing this, he makes a number of tacit assumptions. They are:
- The sales action plans should be based on the data provided by the BI system.
- The data available in the system is relevant to the sales action plan.
- The information provided by the system is objectively correct.
- The side-effects of basing decisions (primarily) on data are negligible.
The assumptions and why they are incorrect
Below I state some of the key assumptions of the data-centric paradigm of BI and discuss their limitations using the example of the previous section.
Decisions should be based on data alone: BI systems promote the view that decisions can be made based on data alone. The danger in such a view is that it overlooks social, emotional, intuitive and qualitative factors that can and should influence decisions. For example, a sales representative may have qualitative information regarding sales prospects that cannot be inferred from the data. Such information should be factored into the sales action plan, provided the representative can justify it or is willing to stand by it.
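To make the point concrete, here is a minimal sketch (in Python) of what factoring in qualitative judgement might look like. Everything in it is hypothetical – the field names, the 0.3 weighting, the prospects – and it illustrates the idea rather than a recommended scoring method: the representative’s qualitative call is made explicit, weighted alongside the BI figures, and carries a recorded justification.

```python
from dataclasses import dataclass

@dataclass
class Prospect:
    name: str
    pipeline_value: float  # quantitative figure from the BI system
    rep_adjustment: float  # rep's qualitative judgement, in [-1.0, 1.0]
    rationale: str         # the rep must justify the adjustment

def priority_score(p: Prospect) -> float:
    # Blend the BI figure with the rep's qualitative call. The 0.3
    # weighting is arbitrary; what matters is that the adjustment is
    # explicit and accompanied by a stated rationale.
    return p.pipeline_value * (1.0 + 0.3 * p.rep_adjustment)

prospects = [
    Prospect("Acme", 120_000, -0.8, "key contact has left; deal at risk"),
    Prospect("Globex", 90_000, 0.9, "verbal commitment at last site visit"),
]
for p in sorted(prospects, key=priority_score, reverse=True):
    print(f"{p.name}: {priority_score(p):,.0f} ({p.rationale})")
```

Note that the resulting ranking differs from one based on pipeline value alone – which is exactly the information a data-only view throws away.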
The available data is relevant to the decision being made: Another tacit assumption made by users of BI systems is that the information provided is relevant to the decisions they have to make. However, most BI systems are designed to answer specific, predetermined questions. In general these cannot cover all possible questions that managers may ask in the future.
More important is the fact that the data itself may be based on assumptions that are not known to users. For example, our sales manager may be tempted to incorporate market forecasts simply because they are available in the BI system. However, if he chooses to use the forecasts, he will likely not take the trouble to check the assumptions behind the models that generated the forecasts.
The available data is objectively correct: Users of BI systems tend to look upon them as a source of objective truth. One of the reasons for this is that quantitative data tends to be viewed as being more reliable than qualitative data. However, consider the following:
- In many cases it is impossible to establish the veracity of quantitative data, let alone its accuracy. In extreme cases, data can be deliberately distorted or fabricated (over the last few years there have been some high profile cases of this that need no elaboration…).
- The imposition of arbitrary quantitative scales on qualitative data can lead to meaningless numerical measures (see the sketch after this list). See my post on the limitations of scoring methods in risk analysis for a deeper discussion of this point.
- The information that a BI system holds is based on the subjective choices (and biases) of its designers.
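A toy illustration of the second point, with invented ratings and encodings: below, two ordinal scales are mapped to numbers in two different but equally order-preserving ways, and the resulting likelihood × impact scores rank the same two risks in opposite order.

```python
# Two risks rated on ordinal scales: (likelihood, impact). All data invented.
risks = {"Risk A": ("possible", "severe"), "Risk B": ("likely", "moderate")}

# Two numeric encodings of the same labels. Both preserve the orderings
# possible < likely and moderate < severe, so neither is "wrong".
encodings = {
    "encoding 1": {"possible": 3, "likely": 4, "moderate": 3, "severe": 5},
    "encoding 2": {"possible": 2, "likely": 5, "moderate": 4, "severe": 6},
}

for name, scale in encodings.items():
    scores = {r: scale[lik] * scale[imp] for r, (lik, imp) in risks.items()}
    top = max(scores, key=scores.get)
    print(f"{name}: {scores} -> top risk: {top}")
# encoding 1 ranks Risk A first (15 vs 12); encoding 2 ranks Risk B first (12 vs 20).
```

The product of two ordinal scores depends entirely on an arbitrary choice of numbers – which is precisely why such composite measures can be meaningless.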
In short, the data in a BI system does not represent an objective truth. It is based on subjective choices of users and designers, and thus may not be an accurate reflection of the reality it allegedly represents. (Note added on 16 Feb 2013: See my essay on data, information and truth in organisations for more on this point).
Side-effects of data-based decisions are negligible: When basing decisions on data, side-effects are often ignored. Although this point is closely related to the first one, it is worth making separately. For example, judging a sales representative’s performance on sales figures alone may motivate the representative to push sales at the cost of building sustainable relationships with customers. Another example of such behaviour is observed in call centers where employees are measured by number of calls rather than call quality (which is much harder to measure). The former metric incentivizes employees to complete calls rather than resolve the issues raised in them. See my post entitled Measuring the unmeasurable for a more detailed discussion of this point.
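The call-centre example can be made vivid with a toy simulation (all the probabilities below are invented for illustration). An agent who optimises the measured metric – calls handled – generates more calls and looks better on paper, while actually resolving fewer issues:

```python
import random
random.seed(42)

def shift(strategy: str, issues: int = 200) -> tuple[int, float]:
    """Toy model of one agent's shift; resolution probabilities are invented."""
    calls_handled, resolved = 0, 0
    for _ in range(issues):
        calls_handled += 1
        if strategy == "maximise_calls":
            # Quick close: only 40% of issues actually get fixed...
            if random.random() < 0.4:
                resolved += 1
            else:
                calls_handled += 1  # ...the rest call back, inflating the metric
        else:
            # Thorough call: slower, but 90% of issues are resolved first time
            if random.random() < 0.9:
                resolved += 1
    return calls_handled, resolved / issues

for s in ("maximise_calls", "resolve_issues"):
    calls, rate = shift(s)
    print(f"{s:16} calls handled: {calls}, resolution rate: {rate:.0%}")
```

The metric rewards exactly the behaviour the organisation does not want, and the gap remains invisible for as long as call quality goes unmeasured.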
Although I have used a scenario to highlight problems of the above assumptions, they are independent of the specifics of any particular decision or system. In short, they are inherent in BI systems that are based on data – which includes most systems in operation.
Programmable and non-programmable decisions
Of course, BI systems are perfectly adequate – even indispensable – for certain situations. Examples include financial reporting (when done right!) and other operational reporting (inventory, logistics etc.). These generally tend to be routine situations with clear-cut decision criteria and well-defined processes. Simply put, they are the kinds of decisions that can be programmed.
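An inventory replenishment decision is a good example of a programmable decision: the criterion and the action are both well defined, so the whole thing fits in a few lines. (The sketch below uses textbook reorder-point logic; the parameter names and numbers are mine, purely for illustration.)

```python
def reorder_quantity(on_hand: int, daily_demand: float,
                     lead_time_days: int, safety_stock: int,
                     order_up_to: int) -> int:
    """Classic reorder-point rule: a clear-cut criterion, a well-defined action."""
    reorder_point = daily_demand * lead_time_days + safety_stock
    if on_hand <= reorder_point:
        return max(order_up_to - on_hand, 0)  # replenish up to the target level
    return 0  # above the reorder point: do nothing

# 40 units on hand, 10/day demand, 3-day lead time, 15 units safety stock:
# on_hand (40) is below the reorder point (45), so order 120 - 40 = 80 units.
print(reorder_quantity(on_hand=40, daily_demand=10,
                       lead_time_days=3, safety_stock=15, order_up_to=120))
```

Contrast this with the sales action plan of the earlier example: there is no formula into which one can plug a changing competitive landscape.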
On the other hand, many decisions cannot be programmed: they have to be made based on incomplete and/or ambiguous information that can be interpreted in a variety of ways. Examples include issues such as what an organization should do in response to increased competition or formulating a sales action plan in a rapidly changing business environment. These issues are wicked: among other things, there is a diversity of viewpoints on how they should be resolved. A business manager and a sales representative are likely to have different views on how sales action plans should be adjusted in response to a changing business environment. The shortcomings of BI systems become particularly obvious when dealing with such problems.
Some may argue that it is naïve to expect BI systems to be able to handle such problems. I agree entirely. However, it is easy to overlook the limitations of these systems, particularly when called upon to make snap decisions on complex matters. Moreover, any critical reflection regarding what BI ought to be is drowned in a deluge of vendor propaganda and advertisements masquerading as independent advice in the pages of BI trade journals.
Conclusion
In this article I have argued that BI systems have some inherent limitations as decision support tools because they focus attention on data to the exclusion of other, equally important factors. Although the data-centric paradigm promoted by these systems is adequate for routine matters, it falls short when applied to complex decision problems.
Chasing the mirage: the illusion of corporate IT standards
Introduction
Corporate IT environments tend to evolve in a haphazard fashion, reflecting the competing demands made on them by the organisational functions they support. This state of affairs suggests that IT is doing what it should be doing: supporting the work of organisations. On the other hand, this can result in unwieldy environments that are difficult and expensive to maintain. Efforts to address this generally involve the imposition of standards relating to infrastructure, software and processes. Unfortunately, the results of such efforts are mixed: although the adoption of standards can reduce IT costs, it does not lead to as much standardization as one might expect. In this post I explore why this is so. To this end I first look at intrinsic properties or characteristics that standards are assumed to have and discuss why they don’t actually have them. After that I look at some other factors that are external to standards but can also work against them. My discussion is inspired by and partially based on a paper by Ole Hanseth and Kristin Braa entitled, Hunting for Treasure at the End of the Rainbow: Standardizing Corporate IT Infrastructure.
Assumed characteristics of standards and why they are false
Those who formulate corporate IT standards have in mind a set of specifications that have the following intrinsic characteristics:
- Universality – the specifications are applicable to all users and situations.
- Completeness – they include all details, leaving nothing to the discretion of implementers.
- Unambiguity – every specification has only one possible interpretation.
Unfortunately, none of these hold in the real world. Let’s take a brief look at each of them in turn.
Non-universality
To understand why the universality claimed by standards is false, it is useful to start by considering how a standard is created. Any new knowledge is necessarily local before it becomes a standard – that is, it is formed in a particular context and situation. For example, a particular IT help desk process depends, among other things, on the budget of the IT department and the skills of the helpdesk staff. Moreover, it also depends on external factors such as organizational culture, business expectations, vendor response times and other external interfaces.
Once a process is established, however, local context is deleted and the process is presented as being universal. The key point is that this is an abstraction – the process is presented in a way that presumes that the original context does not matter. However, when one wants to reproduce the process in another environment, one has to reconstruct the context. The problem is that this is not possible; one cannot reproduce the exact same context as the one in which the process was originally constructed. Consequently, the standard has to be tailored to suit the new situation and context. Often this tailoring can be quite drastic. Further, different units within an organisation might need to tailor the process differently: the customisations that work for the US branch of an organisation may not work in its Australian subsidiary. So one often ends up with different organisational units implementing their own versions of the standard.
Incompleteness
Related to the above point is the fact that standards are incomplete. We have seen that standards omit context. However, that is not all: standards documents are generally written at a high level that inevitably overlooks technical detail. As a consequence, those implementing standards have to fill in the gaps based on their knowledge of the technology. This inevitably leads to a divergence between an espoused standard and its implementation.
Ambiguity
Two people who read a set of high-level instructions will often come away with different interpretations of what exactly those instructions mean. Such differences can be overcome providing:
- Those involved are aware of the differences in interpretation, and
- They care enough to want to do something about it.
In practice, neither condition holds reliably. Firstly, people tend to assume that their interpretation is the right one. Secondly, even if they are aware of ambiguities, they may choose not to seek clarification because of geographical, language and other barriers.
Other factors
Some may argue that it is possible to work through some of the problems listed above. For example, it is possible – with some effort – to reduce incompleteness and ambiguity. Nevertheless, even if one does this (and the effort should not be underestimated!), there are other factors that can sabotage the implementation of standards. These include:
- Politics – It is a fact of life that organisations consist of stakeholder groups with different interests. Quite often these interests will conflict with each other. A good example is the outsourcing vs. in-house IT debate, in which management and staff usually have opposing views.
- Legacy – Those who want to implement standards have to overcome the resistance of legacy – the installed base that already exists within the organisation. Typically owners and users of legacy systems will oppose the imposition of the new standards, first overtly and if that does not work, then covertly. Moreover, legacy applications make demands of their own – infrastructure requirements, interfaces, support etc., each of which may not be compatible with the new standards.
- FUD factor – Finally, there is the whole issue of FUD (Fear, Uncertainty and Doubt) caused by the new standards. Many IT staff and other employees view standards negatively because they represent an unknown. Although much is said about the need to inform and educate people, most often this is done in a half-baked way that only serves to increase FUD.
In summary
Although the implementation of corporate IT standards can reduce an organisation’s application portfolio and the attendant costs, it does not reduce complexity as much as managers might hope. As discussed above, non-universality, incompleteness and ambiguity of standards will generally end up subverting standardization (see my post entitled The ERP paradox for an example of this at work). Moreover, even if an organisation addresses the inherent shortcomings of standards, the human factor remains: individuals who might lose out will resist change, and different groups will push to have their preferred platforms included in the standard.
In summary: a standardized IT environment will remain a mirage, tantalizingly in sight but always out of reach.
The ERP paradox
“…strategic alignment flounders in never-ending tactics and compromises…” Ole Hanseth et al. in The Control Devolution: ERP and the Side-effects of Globalization (The Database for Advances in Information Systems, Vol. 32, pp. 34-36)
Introduction
Organisations implement Enterprise Resource Planning (ERP) systems for a number of reasons. Some of the more common ones are:
- To gain control over processes within the organisation.
- To make these processes more efficient.
- To reduce the portfolio of applications that the IS department has to manage.
Yet, the end result of ERP implementations is often the opposite: less control and lower efficiency; and even though the number of applications may be reduced, this advantage is often offset by the cost and effort of maintaining ERP systems. In this post I explore this paradox, drawing on the paper from which the quote at the start of this post is taken. In essence, the paper discusses – via a case study – how the implementation of an ERP system can actually increase organisational drift and reduce efficiency.
Globalisation and its effect on IT strategy
Those who have lived through an ERP implementation will be well aware of some of the difficulties associated with these projects. This is no longer news: a fair bit of research has been done on the problems and pitfalls of ERP implementations (see this paper, for example). The question, however, is why ERP implementations run into problems. To answer this, the authors of the paper turn to the notion of globalisation and how ERP systems can be seen as a reaction to it.
Globalisation is essentially the interaction and integration between people of different cultures across geographical boundaries, facilitated by communication, trade and technology. The increasing number of corporations with a global presence is one of the manifestations of globalisation. For such organisations, ERP systems are seen as a means to facilitate globalisation and control it.
There are four strategies that an organisation can choose from when establishing a global presence. These are:
- Multinational: Where individual subsidiaries are operated autonomously.
- International: Where work practices from the parent company diffuse through the subsidiaries (in a non-formal way).
- Global: Where local business activities are closely controlled by the parent corporation.
- Transnational: This (ideal) model balances central control and local autonomy in a way that meets the needs of the corporation while taking into account the uniqueness of local conditions.
These four business strategies map to two corporate IT strategies:
- Autonomous: where individual subsidiaries have their own IT strategies, loosely governed by corporate.
- Headquarters-driven: where IT operations are tightly controlled by the parent corporation.
Neither is perfect – both have downsides that start to become evident only after a particular strategy is implemented. Given this, it is no surprise that organisations tend to cycle between the two strategies, with cycle times varying from five to ten years; a trend that corporate IT minions are all too familiar with. Typically, though, executive management tends to favour the centrally-driven approach since it holds the promise of higher control and reduced costs.
Another consequence of globalisation is the trend towards outsourcing IT infrastructure and services. This is particularly popular for operational IT – things like infrastructure and support. In view of this, it is no surprise that the organisation discussed in the paper chose to outsource their ERP operations to an external vendor. Equally unsurprising, perhaps, is that the quality of service did not match expectations.
The effect of modernity
The phenomenon of modernity forms an essential part of the backdrop against which ERP systems are implemented. According to a sociological definition due to Anthony Giddens, modernity is “associated with (1) a certain set of attitudes towards the world, the idea of the world as open to transformation, by human intervention; (2) a complex of economic institutions, especially industrial production and a market economy; (3) a certain range of political institutions, including the nation-state and mass democracy”.
Modernity is characterised by the following three “forces” that have a direct impact on our lives:
- The separation of space and time: This refers to the ability to coordinate activities across the world – be they global supply chains or virtual project teams. The ability to coordinate work across space and time is made possible by technology. The important consequence of this ability, relevant to ERP systems, is that it makes it possible for organisations to increase their level of surveillance and control of key business processes.
- The development of disembedding mechanisms: As I have discussed at length in this post, organisations often “import” procedures that have worked well in other organisations. The assumption underlying this practice is that the procedures can be lifted out of their original context and implemented in another one without change. This, in turn, tacitly assumes that those responsible for implementing the procedure in the new context understand the underlying cause-effect relationships completely. This world-view, in which organisational processes and procedures are elevated to the status of universal “best practices”, is an example of a disembedding mechanism at work. Disembedding mechanisms are essentially processes via which certain facts are abstracted from their context and ascribed a universal meaning.
- The reflexivity of knowledge and practice: Reflexive phenomena are those for which cause-effect relationships are bi-directional – i.e. causes determine effects which in turn modify the causes. Such phenomena are unstable in the sense that they are continually evolving – in potentially unpredictable ways. Organisational practices (which are based on organisational knowledge) are reflexive in the sense that they are continually modified in the light of their results or effects. This conflicts with the main rationale for ERP systems, which is to rationalise and automate organisational processes and procedures (most often in a completely inflexible manner).
Implications for organisations
One of the main implications of globalisation and modernity is that the world is now more interconnected than ever before. This is illustrated by the global repercussions of the financial crises that have occurred in recent times. For globalised organisations this manifests itself in not-so-obvious dependencies – both on events internal to the organisation and those that take place in its business, political and social environment. The important thing to note is that these events are outside of the organisation’s control. At best they can be managed as risks – i.e. events that cannot be foreseen with certainty.
A standard response to risk is to increase control. Arguably, this may well be the single most common executive-level rationale behind many ERP implementations. Yet, paradoxically, the imposition of stringent controls can lead to less control. One of the main reasons for this is that strict controls can give rise to unanticipated side effects. A good example of this is when employees learn how to game performance metrics and service level agreements.
The gap between plan and reality
The authors use a case study to illustrate how ERP implementations can be subverted by the side effects of globalisation and modernity. The organisation they studied was Norsk Hydro, a Norwegian multinational which, at that time, was undergoing an organisation-wide consolidation of its IT infrastructure and services. Up until then, the IT landscape within the organisation was heterogeneous, with individual business units and subsidiaries free to implement whatever systems they saw fit. The decision to implement a global ERP system (SAP R/3) was a direct consequence of the drive to consolidate the IT portfolio.
To reduce risk, it was decided to develop and validate a pilot at a single site and manufacturing plant. Problems started to emerge during the pilot validation. As the authors state:
When the pilot was installed, it took three months of extensive support to make it work properly. …The validation effort identified more than 1000 “issues,” each of them requiring changes in the system.
Understandably, business managers were not impressed:
Some managers also argued that the “final” version should be based on a complete redesign of the pilot, as the latter was not structured as well as the more complex “final” version would require.
Yet, this redesign never happened. One can only speculate why – but it is a pretty good guess that cost had something to do with it.
The SAP implementation unfolded against a backdrop of a large-scale business restructuring. One of the other sub-projects in this restructuring was a business re-engineering initiative. Quite logically, this was subsumed within the SAP project. One of the main outcomes of this was the establishment of “common” organisation-wide processes to replace myriad local processes. These common processes were to be modelled on “best practices.” Although this made sense from a management perspective, implementation was difficult because just about every local procedure had quirks that could not be shoe-horned into standardised global processes.
Side effects
The authors list a number of unintended “side effects” of the implementation. I will describe just a couple of these, referring the reader to the original paper for others.
Homogeneity to heterogeneity
An SAP implementation is intended to provide a single “harmonised” solution across an organisation. In practice, however, local differences and the existence of legacy systems guarantee that this ideal will be compromised. This is exactly what happened at Norsk Hydro. In the authors’ words:
…differences in business cultures and market structures in nations and regions [had to be accounted for]. In this process locals played a key role. They, in fact, took over the design process and turned SAP into an ally helping them get control over the overall change process…the SAP solution was customised for each individual site. Slowly, but irreversibly, the SAP solution had changed from one coherent common system to a complex, heterogeneous infrastructure.
Those who have lived through an ERP implementation may recognise echoes of this in their own experiences.
Side effects of integration
Although the above example illustrates that the integration was perhaps not as complete as intended, the implementation was largely successful in rationalising the organisation’s IT landscape. For one, it replaced several legacy systems, thus (in theory) reducing costs. However, as the authors point out, integration means interdependence, and interdependence can create significant maintenance problems. ERP systems are notoriously hard and expensive to maintain. Norsk Hydro’s experience was no different: SAP upgrades were horrendously expensive and time consuming. As the authors state:
Typically, SAP is subject to rapid change because the huge customer base generates lots of new requirements all the time. Moreover, as its integrated nature implies, when any module is changed, the whole system has to be modified. Thus, in spite of the fact that the number of interfaces to be maintained decreases when an organization installs SAP, their complexity and change rate increase so much that the overall maintenance costs reach very high levels.
In spite of the standard solutions applied, the upgrades of the SAP code itself are also very complex and time consuming. The last upgrade (at the time of writing) enforced the SAP application to be down for 9 days! Also here there are many explanations.
For example, when all the work processes are integrated it creates complex production lattices. Because of many errors in the software all work processes have to be tested extensively, etc.
This side effect is, in fact, an unavoidable consequence of the complexity and interconnectedness of ERP systems.
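The mechanics are easy to see in miniature. In the toy module graph below (entirely hypothetical, not Norsk Hydro’s), a change to one upstream module forces retesting of everything downstream of it – the “integration means interdependence” point in code form:

```python
from collections import deque

# Hypothetical dependency graph: an edge points from a module to the
# modules that consume its data, i.e. the direction a change propagates.
consumers = {
    "finance":     ["reporting", "procurement"],
    "procurement": ["inventory"],
    "inventory":   ["production", "sales"],
    "production":  ["sales"],
    "reporting":   [],
    "sales":       [],
}

def retest_scope(changed: str) -> set[str]:
    """Breadth-first walk: everything downstream of a change must be retested."""
    seen, queue = set(), deque([changed])
    while queue:
        for m in consumers.get(queue.popleft(), []):
            if m not in seen:
                seen.add(m)
                queue.append(m)
    return seen

print(retest_scope("finance"))
# -> {'reporting', 'procurement', 'inventory', 'production', 'sales'}
```

The number of point-to-point interfaces may well fall after consolidation, but the scope over which a single change propagates – and hence the testing effort – grows, which is exactly the cost pattern the authors describe.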
Conclusion
In closing, it is appropriate to return to the themes mentioned at the start of this post. The case study discussed by the authors highlights the fact that ERP systems can have effects that are exactly opposite to the ones intended. Specifically, they can lead to less rather than more control, lower efficiency and added complexity in the IT portfolio. Moreover, seen in a broader context, ERP systems are a microcosm of modernity: they attempt to coordinate activities at a global scale, implement disembedding mechanisms in the form of best practices, and are reflexive in the sense that they change organisational practices but are also changed by them. The resulting interconnectedness and uncertain cause-effect relationships lead to unanticipated side effects that can completely subvert the original intent of these systems.
The authors summarise this well in the last few lines of the paper:
ERP installations in global organizations conform pretty well to the image of the modern world as a juggernaut, i.e. a runaway engine of enormous power that, collectively as human beings, we can drive to some extent but that also threatens to rush out of our control in directions we cannot foresee, crushing those who resist it.
In my opinion, those thinking of committing their organisations to implementing ERP systems would do well to read this paper in addition to vendor propaganda literature.