Ironies of enterprise information technology
Introduction
On one of my random walks through Google Scholar, I stumbled on an interesting paper entitled, Ironies of Automation. The main message of the paper is nicely summarized in its first few lines:
The classic aim of automation is to replace human manual control, planning and problem solving by automatic devices and computers. However… even highly automated systems, such as electric power networks, need human beings for supervision, adjustment, maintenance, expansion and improvement. Therefore one can draw the paradoxical conclusion that automated systems still are man-machine systems, for which both technical and human factors are important. This paper suggests that the increased interest in human factors among engineers reflects the irony that the more advanced a control system is, so the more crucial may be the contribution of the human operator.
These lines were written over thirty years ago, but are ever more apt today – such paradoxes are rife, not only in automation, but in any field in which technology plays an important part. To illustrate my point, I highlight a couple of ironies drawn from a domain that is likely to be familiar to many readers of this blog: the world of enterprise IT. I also present a brief discussion of how these ironies of enterprise IT can be avoided.
Ironies of enterprise IT
In the last few decades information technology has found its way into diverse organisational functions. This trend has been accompanied by an explosive growth in new technologies. As a result of this, corporate IT infrastructures have become ever more complex and the costs of maintaining them have burgeoned. Quite naturally, the focus has thus turned to taming both complexity and cost. The favoured approaches to tackling this problem are standardisation and/or outsourcing. However, as I discuss below, both often lead to ironic outcomes.
An irony of standardisation
Enterprise IT environments tend to evolve rapidly, reflecting the many demands made on them by the organisational functions they support. This is good because it means that IT is doing what it should be doing: supporting the work of the parent organisation. On the other hand, this can result in unwieldy environments that are difficult (not to mention expensive) to maintain. One way to address this is to impose standards relating to processes (such as ITIL) and infrastructure (such as SAP or any other enterprise-wide application).
The question is, how well does such standardisation work in practice?
In his book entitled, From Control to Drift, Claudio Ciborra pointed out that IT infrastructures in organisations tend to drift – i.e. they escape processes, plans and standards, and take on a life of their own. The reason they drift is that they are subject to unpredictable forces within and outside the hosting organisation. The imposition of standards may slow the drift but cannot arrest it entirely. Infrastructures are therefore best seen as ever-evolving constructs consisting of systems, people and processes that interact with each other in often unforeseen ways. As he put it:
Corporate information infrastructures are puzzles, or better collages, and so are the design and implementation processes that lead to their construction and operation. They are embedded in larger, contextual puzzles and collages. Interdependence, intricacy, and interweaving of people, systems, and processes are the culture bed of infrastructure. Patching, alignment of heterogeneous actors and making do are the most frequent approaches…irrespective of whether management [is] planning or strategy oriented, or inclined to react to contingencies.
The essential message here is that standards and processes overlook the fact that enterprises are complex social systems that are subject to internal and external influences which cannot always be foreseen. Dealing with these, more often than not, entails the implementation of hacks and workarounds that violate the imposed standards and thus nullify the benefits of standardisation.
In summary, “standardised” IT environments often end up having a plethora of non-standard hacks and workarounds that are necessary, but are generally messy and expensive to maintain.
An irony of outsourcing
One of the main reasons for outsourcing IT is to reduce costs. Yes, I am aware that many decision-makers claim that their primary reason is to reduce complexity rather than cost, but the choices they make often belie their claims. The irony is that in their eagerness to control costs, they often end up increasing them because they overlook hidden factors. I explain this in brief below, drawing on my post on the transaction costs of outsourcing.
The basic idea is simple – it is that the upfront fee quoted by the vendor is but a fraction of the total cost that will be incurred by the customer. Some of the costs that are generally not included in the upfront quote are listed below (a rough worked example follows the list):
- Search/selection costs: these are the costs associated with searching for and shortlisting vendors.
- Bargaining costs: these are costs associated with negotiations for a mutually acceptable contract.
- Costs of coordinating work: these are costs associated with coordinating external and internal work. This is particularly important in the case of software-as-a-service because the effort required to interface cloud applications with in-house systems is often underestimated.
- Costs of enforcement and change: these are the costs associated with enforcing the terms of the contract and with accommodating changes to it.
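To make this concrete, here is a minimal sketch in Python of how these costs can stack up against the quoted fee. The figures are entirely hypothetical; the point is only that the headline quote is one line item among several:

```python
# A rough, illustrative tally of outsourcing costs. All figures are
# hypothetical -- the point is that the vendor's quote is only one
# line item among several.
quoted_fee = 500_000  # the headline figure in the vendor's proposal

transaction_costs = {
    "search/selection":    40_000,  # RFPs, evaluations, shortlisting
    "bargaining":          30_000,  # legal fees, negotiation time
    "coordination":       150_000,  # interfacing external and internal work
    "enforcement/change": 120_000,  # contract policing, variations, exit costs
}

total = quoted_fee + sum(transaction_costs.values())
print(f"Quoted fee: {quoted_fee:,}")
print(f"Total cost: {total:,}")
print(f"Quote as a fraction of total: {quoted_fee / total:.0%}")
```

Even with these made-up numbers, the quoted fee comes to only about 60% of the total outlay, and the last two items are precisely the ones that are hardest to estimate upfront.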
The point to note is that these costs are rarely if ever mentioned by the vendor, but almost always show up in one form or another. It is therefore important for the customer to try and get a handle on these before entering into any commercial agreements. The problem is, some of these costs (particularly the last two listed above) are hard if not impossible to figure out upfront. For example, if the relationship turns sour the only solution might be to switch vendors. The cost associated with this is often significant and is borne entirely by the customer. A lack of awareness of such costs will invariably result in ironical outcomes.
In summary: attempts to control costs by outsourcing IT can have the contrary effect of increasing them.
Avoiding ironical outcomes
So how does one avoid ironical outcomes?
I have only one piece of advice to offer here: when planning IT architectures or outsourcing initiatives, use an incremental or emergent approach that avoids big designs or commitments upfront. Using an emergent approach not only limits risk, it also provides opportunities for learning. Most important, it enables one to verify that the envisaged benefits are real rather than the wishful thinking of architects or managers.
Below I outline what such an approach might entail for the two ironies discussed earlier:
- For infrastructures/systems: avoid grandiose system designs that attempt to span the “enterprise” – remember that one size will not fit all of your users. Consequently, enterprise architectures and governance systems should provide guidelines rather than detailed prescriptions. As Anders Jensen-Waud puts it in this post: they should foster resilience and adaptability rather than conformance.
- For outsourcing: start small, possibly with a small project or system. This will help you get a sense for how outsourcing would work in your environment and help you figure out whether the vendor you have selected is really right for you. Remember, no two environments are identical so others’ lessons learned may be considerably less useful than you think. Finally, if you’re going to the cloud, be sure to factor in costs and technical challenges associated with interfacing external apps with in-house ones.
Yes, there’s nothing particularly profound here, it is just common sense…but you know what they say about the commonality of common sense.
Conclusion
In this post I have highlighted some ironies of enterprise information systems and have briefly outlined an emergent approach to avoiding them. I believe but cannot prove that ironical outcomes are almost guaranteed if one takes a monolithic, enterprise-style approach or a let’s-outsource-it-all attitude to enterprise information technology. Such a view overlooks the messy little details and differences that trip up big designs and grandiose plans. In the end, the only way to avoid ironical outcomes is to start small, learn from experience and incorporate that learning in an incremental manner in whatever you’re building or doing. Yes, you might end up with something you did not envisage at the start, but you will have learnt much along the way. More important, perhaps, is that you will be able to rest assured that it works.
The dilemmas of enterprise IT
Information technology (IT) is an integral part of any modern day business. Indeed, as Bill Gates once put it, “Information technology and business are becoming inextricably interwoven. I don’t think anybody can talk meaningfully about one without talking about the other.” Although this is true, decision makers often display ambivalent, even contradictory attitudes towards enterprise IT. For example, depending on the context, an executive might view IT as a cost of doing business or as a strategic advantage: the former view is common when budgets are being drawn up whereas the latter may come to the fore when a bold new e-marketing initiative is being discussed.
In this post I discuss some of these dilemmas of IT and show how the opposing viewpoints embodied in them need to be managed rather than resolved. I illustrate my point by describing one way in which this can be done.
The dilemmas in brief
Many of the dilemmas of IT are consequences of conflicting views of what IT is and/or how it should be managed. I’ll describe some of these in brief below, leaving a discussion of their implications to the next section:
- IT as a cost of doing business versus IT as a strategic asset: This distinction highlights the ambivalent attitudes that senior executives have towards IT. On the one hand, IT is seen as offering strategic advantages to the organization (for example, a custom-built application for customer segmentation). On the other, it is seen as an operational necessity (for example, core banking systems in the financial industry).
- Centralised IT versus Autonomous IT: This refers to the debate about whether an organisation’s IT environment should be tightly controlled from head office or whether subsidiaries should be given a degree of autonomy. This is essentially a debate between top-down versus bottom-up approaches to IT planning.
- Planning versus Improvisation: This refers to the tension between the structure offered by a plan and process-driven approach to IT and the necessity to step outside of plans and processes in order to come up with improvised solutions suited to the situation at hand. I have written about this paradox in a post on planning and improvisation.
There are other dilemmas – for example, technology driven IT versus business driven IT. However, for the purpose of this discussion the three listed above will suffice.
The poles of a dilemma
In his book entitled Polarity Management, Barry Johnson described how complex organizational issues can often be analysed in terms of their mutually contradictory facets. He termed these facets poles or polarities. In this and the next section, I elaborate on Johnson’s notion of polarity and show how it offers a means to understand and manage the dilemmas of enterprise IT.
The key features of poles are as follows:
- Each pole has associated positives and negatives. For example, the upside of viewing IT as a cost is that the organisation focuses on IT efficiency and value for money; the downside is that the exploration and experimentation necessary for IT innovation will likely be seen as risky. On the other hand, the positive side of viewing IT as a strategic asset is that it is seen as a means to enable the organisation’s growth and development; the negative is that it can encourage unproven technologies (since new technologies are more likely to offer competitive advantages) and uncontrolled experimentation, along with their attendant costs.
- Most organisations oscillate between poles. At any given time the organisation will be “living” in one pole. In such situations, some stakeholders will perceive the negatives of that pole strongly and will thus see the other pole as being more desirable (the “grass is greener on the other side” syndrome). Johnson labels such stakeholders “crusaders” – those who want to rush off into the new world. On the other hand, there are “tradition bearers” – those who want to stay put. When an organisation has spent a fair bit of time in one pole, the influence of crusaders tends to wax while that of the tradition bearers wanes, because the negatives of the current pole become apparent to more and more people.
A concrete example may help clarify this point:
Consider a situation where all subsidiaries of a multinational have autonomous IT units (and have had these for a while). The main benefits of such a model are responsiveness and relevance: local IT units will be able to respond quickly to local needs and will also be able to deliver solutions that are tailored to the specific needs of the local business. However, this model has many negative aspects: for example, high costs, duplication of effort, a sprawling software portfolio, the high cost of interfacing between subsidiaries and so on.
When the model has been in operation for a while, it is quite likely that IT decision makers will perceive the negatives of this pole more clearly than the positives. They will then initiate a reform to centralise IT because they perceive the positives of that pole – i.e. lower costs, centralisation of services etc. – as being worth striving for. However, when the new world is in place and has been operating for a while, the organisation will begin to see its downside: bureaucracy, lack of flexibility, applications that don’t meet specific local business needs etc. They will then start to delegate responsibility back to the subsidiaries…and thus goes the polarity merry-go-round.
Managing enterprise IT dilemmas
As discussed above, any option will have its supporters and detractors. For example, finance folks may see IT as a cost of doing business whereas those in IT will consider it to be a strategic asset. What’s important, however, is that most organisations “resolve” such contradictions by taking sides. That is, one side “wins” and their point of view gets implemented as a “solution.” The concerns of the “losing” side are overlooked entirely.
Although such a “solution” appears to solve the problem, it does not take long for the negative aspects of the chosen pole to manifest themselves; the rumbles of discontent from those whose concerns have been ignored grow louder with time. In this sense, issues that can be defined in terms of polarities are wicked problems – they are perceived in different ways by different stakeholders and so are difficult to define, let alone solve.
As we have seen above, however, the poles of a dilemma are but different facets of a single reality. Hence, the first step towards managing a dilemma lies in realizing that it cannot be resolved definitively; regardless of the path chosen, there will always be a group whose concerns remain unaddressed. The best one can do is to be aware of the positives and negatives of each pole and ensure that the entire spectrum of stakeholders is aware of these. A shared awareness can help the group in figuring out ways to mitigate the worst effects of the negatives.
One way in which this can be done is via a facilitated session involving people who represent the two sides of the issue. To begin with, the facilitator helps the group identify the poles. She then helps the group create a polarity map which shows the contradictory aspects of the issue along with their positives and negatives. A rudimentary polarity map for the autonomous/centralised IT dilemma is shown in Figure 1 below.
To ensure completeness of the map, the group must include stakeholders who represent both sides of the dilemma (and also those who hold views that lie somewhere in between).
As mentioned in the previous section, organisations are not static; they oscillate between poles. Moreover, Johnson claimed that they follow a specific path in the map. Quoting from the book I wrote with Paul Culmsee:
According to Johnson, organisations tended to oscillate between poles. If you accept the notion of a wicked problem as a polarity, the overall pattern traced as one moves between these poles resembles an infinity symbol. The typical path is L- to R+, to R-, across to L+ and Johnson argued that the trajectory could not be avoided. All we can do is focus on minimizing our time spent in the lower quadrants.
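To make the map and the trajectory concrete, here is a minimal sketch in Python, with quadrant entries drawn from the autonomous (L) versus centralised (R) IT example discussed above; the entries themselves are illustrative, not a complete map:

```python
# An illustrative polarity map for the autonomous (L) vs centralised (R)
# IT dilemma. Quadrant entries are examples from the discussion above.
polarity_map = {
    "L+": ["responsiveness to local needs", "solutions tailored to local business"],
    "L-": ["high costs", "duplication of effort", "costly interfacing between subsidiaries"],
    "R+": ["lower costs", "centralisation of services"],
    "R-": ["bureaucracy", "inflexibility", "poor fit with local needs"],
}

# Johnson's typical trajectory: discontent with L's negatives pulls the
# organisation toward R's positives, then R's negatives become apparent
# and the cycle loops back, tracing an infinity symbol through the map.
trajectory = ["L-", "R+", "R-", "L+"]

for quadrant in trajectory:
    print(f"{quadrant}: {', '.join(polarity_map[quadrant])}")
```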
Again, it is worth emphasizing that the conflict between the two groups of stakeholders cannot be resolved definitively. The best one can do is to get the two sides to understand each other’s point of view and hence attempt to minimize the downsides of each option.
Finally, polarity management is but one way to manage the dilemmas associated with enterprise IT or any other organizational decision. There are many others – and I highly recommend my book if you’re interested in finding out more about these. In the end, though, the point I wished to make in this post is less about any particular technique and more about the need to air and acknowledge differing perspectives on issues pertaining to enterprise IT or any other decision with organization-wide implications.
Wrapping up
The dilemmas of enterprise IT are essentially consequences of mutually contradictory, yet equally valid perspectives. Is IT a cost of doing business or is it a strategic asset? The answer depends on the perspective one takes…and there is no objectively right or wrong answer. Given this, it is important to be aware of both the upside and the downside of each perspective (or pole) before one makes a decision. Unfortunately, decisions are most often made on the basis of the upside of one option and the downside of the other. As should be evident by now, a decision based on such a selective consideration of viewpoints invariably invites conflict and leads to undesirable outcomes.
Towards an antifragile IT strategy
Introduction
In his thought-provoking book on antifragility, Nassim Taleb makes the point that the opposite of fragility is not robustness or resilience, rather it is the ability to thrive on or benefit from uncertainty. There is no word in the English language to describe such behavior, and that is what led him to coin the term antifragile.
Nature is an excellent example of an antifragile system: whenever subjected to a cataclysmic event (like this one that occurred ~66 million years ago), nature manages not only to recover, but does so in novel and arguably better ways. Unlike nature, however, most human-made systems tend to be fragile. An example that Taleb highlights is the global financial system, not just prior to the 2008 financial crisis but even now.
The broader lesson to be learnt from the financial crisis is that it is impossible to predict the future in any detail. Systems should therefore be designed to cope with (if not take advantage of) the irreducible uncertainty associated with this lack of predictability. Human-made systems that overlook this inescapable fact tend to be brittle by design.
The above is true not only of systems, but also of future-directed activities such as strategic planning. Overlooking the role of irreducible uncertainty in planning invariably locks an organization into an inflexible course of action. Unfortunately, this is not always appreciated by those who run organisations. As Taleb puts it:
Corporations are in love with the idea of the strategic plan. They need to pay to figure out where they are going. Yet there is no evidence that strategic planning works —we even seem to have evidence against it. A management scholar, William Starbuck, has published a few papers debunking the effectiveness of planning [see this paper, for example]—it makes the corporation option-blind, as it gets locked into a non-opportunistic course of action.
The trick, as Taleb hints in the above passage (and elaborates on in his book), is to plan in such a way as to take advantage of options that we are unaware of now, but might emerge in the future.
That, of course, is easier said than done.
In this post, I draw on Taleb’s book and my own experiences to discuss how one can formulate an IT strategy that thrives on uncertainty. Although my focus is primarily on IT, the points discussed have a wider applicability to strategic planning in general.
Towards an antifragile IT strategy
Before we get into antifragility, it is useful to take a brief look at how IT strategy is usually formulated.
Although some IT leaders will contest this point, a good majority of organisations tend to view IT as a cost rather than a strategic focus area. As a consequence, the objectives of an IT strategy are generally geared towards cost reduction and increased efficiency. The obvious ways in which to do this are through strict governance, standardization and/or outsourcing. Unfortunately, these actions tend to make organisations less flexible and hence more susceptible to uncertainty…and thus more fragile.
So, the key to an antifragile strategy is flexibility…but what exactly is flexibility?
The best definition of flexibility I have come across is the one proposed by Gregory Bateson who defined it as uncommitted potential for change (see this post for more on Bateson’s definition of flexibility). Only if one is flexible in this sense can one take advantage of unexpected events when they occur. The problem of formulating an antifragile IT (or any other!) strategy thus boils down to finding ways in which one can increase one’s flexibility. With that in mind, here are some suggestions.
Decentralisation
This one is going to raise some eyebrows because the general trend in the world of corporate IT is to move in exactly the opposite direction – i.e. towards greater centralisation. The drive to centralisation manifests itself in many different ways: from top-down decision-making to the deployment of standardized processes and pan-organisational “enterprise” applications (single instance ERP systems being an extreme example).
The justification offered by advocates of centralisation is that it increases efficiency and reduces cost by using a one-size-fits-all approach. In reality, however, such an approach almost always has undesirable features. For example:
- It overlooks the unique features of different structural units of the organization (subsidiaries in different countries, for example). Indeed, this is precisely where platform standardization efforts fail – see my post entitled, The ERP paradox, for more on this point.
- It increases coupling between different structural units. Since systems and processes have a global reach, an unexpected glitch in any of these will affect all structural units within the organization.
Decentralisation basically amounts to giving structural units the autonomy they need in order to make decisions and choices that affect them. To be sure, this must be balanced with some oversight and direction from a central authority, but the overall aim should be a federal structure rather than a centralized one. A few examples of things that can be controlled centrally include network infrastructure, security…and possibly even things such as preferred vendors, especially from the perspective of getting volume discounts on pricing. There is no black and white here: choices need to be made judiciously and revisited if they don’t work.
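As a purely illustrative sketch of what such a federal split might look like, consider a simple mapping of decision areas to the level at which they are decided (the items and their placement below are hypothetical, and would need to be negotiated in any real organisation):

```python
# A hypothetical federal split of IT decision rights. Which items sit
# where is a judgement call, to be revisited if the split doesn't work.
decision_rights = {
    "central": ["network infrastructure", "security standards", "preferred vendor list"],
    "local":   ["application selection", "delivery priorities", "local process design"],
}

def decided_by(area: str) -> str:
    """Return the level at which a given decision area is handled."""
    for level, areas in decision_rights.items():
        if area in areas:
            return level
    return "undecided: negotiate between centre and subsidiary"

print(decided_by("security standards"))     # -> central
print(decided_by("application selection"))  # -> local
print(decided_by("data retention policy"))  # -> undecided: negotiate ...
```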
Agility
I use the term agile here in the sense of adaptability rather than as a reference to the slew of methodologies that go under the banner of Agile. Indeed, agility in the sense I use it here is more about a mindset than a methodology: if you are adapting to a shifting environment by changing your approach and priorities appropriately, then you are being agile in the sense of adaptability.
So, what does agility entail? Here are some things I see as being important:
- Responding to changes within and outside the organization…but only after determining that they need to be responded to. The qualifier is important: one must be able to distinguish between changes that merit a response and those that don’t. Moreover, any change should be instituted in a gradual or incremental fashion so that one can adjust one’s approach and take corrective actions if needed. Agility does not imply rapid, large-scale change.
- Sensing (or even creating!) new opportunities and taking advantage of them. The term intrapreneurial is often used to describe such a mindset. Many IT leaders are aware of the need to do this, but don’t always know how. In my experience, instituting a dedicated innovation group isn’t the best way to go about it. Instead, it may be better to focus on creating an environment in which people feel inspired to try new things. One of the ways to do this is to actively encourage staff to learn by experimenting on company time – say, one Friday afternoon per month – with no expectation of useful outcomes.
- Building flexibility into your external contracts so that you can respond to changes that weren’t foreseen when the contract was drawn up. Essentially this amounts to building a trust-based relationship with your vendors (see the last point in the present post for more on this) and factoring in transaction costs in your outsourcing deals.
An agile mindset is unlikely to thrive in an IT department that is bogged down by overly onerous rules and procedures. To be sure, rules and processes are necessary, but not at the expense of flexibility.
Diversification
One of the keys to being antifragile in financial investing is to spread one’s investments over a range of different products. By analogy, one of the best ways to develop an antifragile IT strategy is to diversify elements of your IT environment, especially those that are likely to be negatively affected by uncertainty.
Here are some examples:
- For coverage in times of trouble, ensure that your team consists of people with overlapping sets of skills. This should be reinforced by periodic cross-training of staff in all key technologies used within your organisation (a rough sketch of a coverage check follows this list).
- Hire people with different thinking styles. Your teams should contain a mix of people with analytic and synthetic approaches to problem solving. Most uncertain situations require both types of approaches.
- Diversify your vendor base. Among other things this means do not…and I repeat, do not…tie yourself to a single vendor by signing a multi-year, multi-million dollar contract!
- Set up small, low-cost skunkworks projects to explore technologies and ideas that have the potential to provide your business an edge.
- Seek to understand diverse viewpoints. Any important decision should be made only after soliciting and understanding viewpoints that are different from yours. Such an understanding will lead to better decisions than those made by relying on gut instinct or advice from a single source.
…and I’m sure there are many other possibilities.
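On the first of these points, skill overlap is something you can check mechanically. Here is a minimal sketch in Python, with hypothetical staff and technologies, that flags any key technology covered by fewer than two people:

```python
# Hypothetical team and skill sets; flag single points of failure, i.e.
# key technologies that fewer than two people can support.
skills = {
    "Asha":  {"ERP", "SQL", "integration"},
    "Ben":   {"SQL"},
    "Carla": {"ERP", "networking", "integration"},
}
key_technologies = {"ERP", "SQL", "networking", "integration", "CRM"}

for tech in sorted(key_technologies):
    holders = [person for person, skill_set in skills.items() if tech in skill_set]
    if len(holders) < 2:
        print(f"{tech}: covered by {holders or 'nobody'} -- cross-train or hire")
```

With these made-up data, the check flags CRM (covered by nobody) and networking (covered by one person) as cross-training candidates.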
Creating an environment of trust
I kept this for last because it is possibly the hardest to put into practice. An antifragile IT strategy will only work if there is mutual trust between all parties involved in a business relationship – be they managers and employees or IT folks and the businesses they serve. Although much has been written and spoken about trust, the fact is that it is conspicuous by its absence in the present-day corporate world. Indeed, use of the word in corporate circles tends to evoke cynical reactions from the rank and file; it is seen as a platitude rather than a word of significance.
Why is trust important?
Elinor Ostrom’s prize-winning work established that trust is one of the core relationships that promote cooperation (see this post for more on this point). In situations of uncertainty, those who work in a high-trust environment will generally be willing to step outside their regular roles and work with others to fix the problem. In contrast, those in a low-trust environment are likely to switch off or, worse, start apportioning blame. On another note, people are more likely to share their ideas in a high-trust environment than in one that is riven by mistrust and unhealthy competition. I’m pretty sure most readers will have experienced low-trust environments and will know firsthand that such workplaces are fragile in that they simply fall apart under stress.
It should be noted that trust is also important in external relationships, such as those with vendors. Although purchasing and legal departments are quick to advise us about the importance of rock-solid contracts, in my experience it is far better to rely on trust. Indeed, it has been suggested that contracts can destroy trust!
Finally, just in case it is not clear: the onus for creating an environment of trust lies with management rather than the rank and file.
Summing up
I offer the above as suggestions aimed at making your IT environment less susceptible to unexpected external or internal events, and perhaps even responsive to them. Indeed, I believe that in times of uncertainty they are likely to work much better than some of the well-worn but discredited command-and-control approaches that remain inexplicably popular.
To sum up: IT strategies are invariably focused on improving efficiency and reducing cost. Typical measures to achieve this tend to reduce flexibility (tighter governance and outsourcing, for example). As a result, most IT strategies are unable to deal with, let alone benefit from, uncertainty. In this post I have outlined key elements of an antifragile IT strategy that can correct this oversight.
When I reviewed this piece just prior to posting it, I was struck by the fact that the points I have mentioned have more to do with social or ethical matters than technology. This reminded me of Heinz von Foerster’s ethical imperative:
“Act always so as to increase the number of choices.”
And that, quite possibly, is the perfect one-line summary of an antifragile strategy.