Eight to Late

Sensemaking and Analytics for Organizations


“Strategic alignment” – profundity or platitude?


Introduction

Some time ago I wrote a post entitled, Models and messes in management, wherein I discussed how a “scientific” approach to management has resulted in a one-size-fits-all approach to problem solving in organizations. This is reflected in the tendency of organizations to implement similar information technology (IT) systems, often on the “expert” advice of carbon-copy consultancies that offer commoditized solutions.

A particularly effective marketing tactic is to advertise such “solutions” as being able to help organizations achieve strategic alignment between IT and the business. In this post I discuss how the concept of “strategic alignment”, though seemingly sensible, makes no sense in the messy, real world of organization-land. My discussion is based on a brilliant paper by Claudio Ciborra entitled, De Profundis? Deconstructing the concept of strategic alignment. The paper analyses the notion of alignment as it is commonly understood in the context of IT – namely, as a process by which IT objectives are brought in line with those of the organization it serves.

Background

The paper begins with a short chronology of the term strategic alignment, starting with this highly cited paper published by Henderson and Venkatraman in 1993. That paper describes the need for alignment between the business and IT strategies of companies. More importantly, the authors detail a “Strategic Alignment Model” that purports to “guide management practice” towards achieving alignment between IT and the business. However, as Ciborra noted four years later, in 1997, it was still an open question as to what strategic alignment really meant and how it was to be achieved.

Fast forward 15 years to 2012, and it appears that the question of what strategic alignment is and how to achieve it is still an open one. Here are some excerpts from recently published papers:

In the abstract of their paper entitled, Strategic Alignment of Business Processes, Morrison et al. state:

“Strategic alignment is a mechanism by which an organization can visualize the relationship between its business processes and strategies. It enables organizational decision makers to collect meaningful insights based on their current processes. Currently it is difficult to show the sustainability of an organization and to determine an optimal set of processes that are required for realizing strategies.” (italics mine)

Even worse, the question of what strategic alignment actually is remains far from settled. It appears that it means different things to different people. In the abstract of their paper entitled, Reconsidering the Dimensions of Business-IT Alignment, Schlosser et al. state:

While the literature on business-IT alignment has become increasingly mature in the past 20 years, different definitions and conceptualizations have emerged. Several dimensions like strategic, intellectual, structural, social, and cultural alignment have been developed. However, no integrated and broadly accepted categorization exists and these dimensions are non-selective and do overlap…

This raises the question of how meaningful it is for organizations to pursue “alignment” when people are still haggling over the fine print of what it means.

Ciborra dealt with this very question 15 years ago. In the remainder of this post I summarize the central ideas of his paper, which I think are as relevant today as they were at the time it was written.

Deconstructing  strategic alignment

The whole problem with the notion of strategic alignment is nicely summarized in a paragraph that appears in Ciborra’s introduction:

…while strategic alignment may be close to a truism conceptually, in the everyday business it is far from being implemented. Strategy ends up in “tinkering” and the IT infrastructure tends to “drift”. If alignment was supposed to be the ideal “bridge” connecting the two key variables [business and IT], it must be admitted that such a conceptual bridge faces the perils of the concrete bridge always re-designed and never built between continental Italy and Sicily, (actually, between Scylla and Charybdis) its main problem being the shores: shifting and torn by small and big earthquakes….

The question, then, is how and why do dubious concepts such as strategic alignment worm their way into mainstream management?

Ciborra places the blame for this squarely in the camp of academics who really ought to know better. As he states:

[Management Science] deploys careful empirical research, claiming to identify “naturally occurring phenomena” but in reality measures theoretical (and artificial) constructs so that the messiness of everyday reality gets virtually hidden. Or it builds models that should be basic but do not last a few years and quickly fall into oblivion.

And a few lines later:

…practitioners and academics increasingly worship simplified models that have a very short lifecycle….managers who have been exposed to such illusionary models, presented as the outcome of quasi-scientific studies, are left alone and disarmed in front of the intricacies of real business processes and behaviors, which in the meantime have become even more complicated than when these managers left for their courses. People’s existence, carefully left out of the models, waits for them at their workplaces.

Brilliantly put, I think!

Boxes, arrows and platitudes

Generally, strategic alignment is defined as a fit, or a bridge, between different domains of a business. To be honest, the metaphor of a “bridge” seems to distract from reflecting on the chasm that is allegedly being crossed and the ever-shifting banks that lie on either side. Those who speak of alignment would do better to first focus on what they are trying to align. They may be surprised to find that the geometric models that pervade their PowerPoint presentations (e.g. organograms, boxes connected by arrows) are completely divorced from reality. Little surprise, then, that top-down management efforts at achieving alignment invariably fail.

Why do such tragedies play out over and over again?

Once again, Ciborra offers some brilliant insights…

The messy world, he tells us, gives us the raw materials from which we build simplified representations of the organization we work in. These representations are often built in the image of models that we have learnt or read about (or that have been spoon-fed to us by expensive consultants). Unfortunately, these models are abstractions of reality – they cannot and must not be confused with the real thing. So when we speak of alignment, we are talking of an abstraction that is not “out there in the world” but instead resides only in our heads; in textbooks and journal papers; and, of course, in business school curricula.

As he says,

…there is no pure alignment to be measured out there. It is, on the contrary, our pre-scientific understanding of and participating in the world of organizations that gives to the notion of alignment a shaky and ephemeral existence as an abstraction in our discourses and representations about the world.

This is equally true of management research programs on alignment: they are built on multiple abstractions and postulated causal connections that are simply not there.

If academics who spend their productive working lives elaborating these concepts make little headway, what hope is there for the manager who has to implement or measure strategic alignment?

Is there any hope?

Ciborra tells us that the answer to the question posed in the sub-heading is a qualified “yes”. One can pursue alignment, but one must first realize that the concept is an abstraction that exists only in an ideal world in which there are no surprises and in which things always go according to plan. Perhaps more importantly, one needs to understand that implementations of technology invariably have significant unintended consequences that require improvised responses and adaptations. Moreover, these are essential aspects of the process of alignment, not things that can be wished away by better plans or improved monitoring.

So what can one do? As Ciborra states:

…we are confronted with a choice. Either we can do what management science suggests, that is “to realize these surprises in implementation as exceptions, build an ideal world of “how things should be” and to try to operate so that the messy world in which managers operate moves towards this model….or we suspend belief about what we think we know…and reflect on what we observe. Sticking to the latter we encounter phenomena that deeply enrich our notion of alignment… (italics mine)

Ciborra then goes on to elaborate on concepts of Care (dealing with the world as it is, but in a manner that is honest and free of preconceived notions), Cultivation (allowing systems to evolve in a way that is responsive to the needs of the organization rather than a predetermined plan) and Hospitality (the notion that the organization hosts the technology, much in the way that a host hosts a guest). It would take at least a thousand words more to elaborate on these concepts, so I’ll have to leave it here. However, if you are interested in finding out more, please see my summary and review of Ciborra’s book: The Labyrinths of Information.

…and finally, who aligns whom?

The above considerations lead us to the conclusion that, despite our best efforts, technology infrastructures tend to have lives of their own – they align us as much as we (attempt to) align them. IT infrastructures are deeply entwined with the organizations that host them, so much so that they are invisible (until they break down, of course) and even have human advocates who “protect” their (i.e. the infrastructure’s) interests! Although this point may seem surprising to business folks, it is probably familiar to those who work with information systems in corporate or other organizational environments.

A final word:  many other management buzz-phrases, though impressive sounding, are just as meaningless as the term strategic alignment.  However, I think I have rambled on enough, so I will leave you here to find and deconstruct some of these on your own.

Written by K

February 21, 2013 at 9:43 pm

Some perspectives on quality


Introduction

A couple of years ago, I wrote a post entitled, A project manager’s ruminations on quality, in which I discussed the meaning of the term quality as it pertains to project work. In that article I focused on how the standard project management definition of quality differs from the usual (dictionary) meaning of the term. Below, I expand on that post by presenting some alternate perspectives on quality.

Quality in mainstream project management

Let’s begin with a couple of  dictionary definitions of quality to see how useful they are from a project management perspective:

  1. An essential or distinctive characteristic, property, or attribute.
  2. High grade; superiority; excellence

Clearly, these aren’t much help because they don’t tell us how to measure quality. Moreover, the second definition confuses quality and grade – two terms that the PMBOK assures us are as different as chalk and cheese.

So what is a good definition of quality from the perspective of a project manager? The PMBOK, quoting from the American Society for Quality (ASQ), defines quality as, “the degree to which a set of inherent characteristics fulfil requirements.” This is clearly a much more practical definition for a project manager, as it links the notion of quality to what the end-user expects from deliverables. PRINCE2, similarly, keeps the end-user firmly in focus when it defines quality, rather informally, as, “fitness for purpose.”

Project managers steeped in PRINCE2 and other methodologies would probably find the above unexceptional. The end-goal in project management is to deliver what’s agreed to, whilst working within the imposed constraints of resources and time. It is therefore no surprise that the definition of quality focuses on the characteristics of the deliverables, as they are specified in the project requirements.

Quality as an essential characteristic

The foregoing project management definitions raise the question:

Is “fitness for purpose” or the “degree to which product characteristics fulfil requirements” really a measure of quality?

The problem with these definitions is that they conflate quality with fulfilling requirements. But surely there is more to it than that. An easy way to see this is to note that one can have a high-quality product that does not satisfy user requirements or meet cost and schedule targets. For example, many people would agree that WordPress blogging software is of a high quality, yet it does not meet the requirement of, say, “a tool to manage projects.”

Indeed, Robert Glass states this plainly in his book, Facts and Fallacies of Software Engineering. Fact 47 in the book goes as follows:

Quality is not user satisfaction, meeting requirements, meeting cost and schedule targets or reliability.

So what is quality, then?

According to Glass, quality is a set of (product) attributes, including things such as:

  1. Reliability – does the product work as it should?
  2. Usability – is it easy to use?
  3. Modifiability – can it be modified (maintained) easily?
  4. Understandability – is it easy to understand how it works?
  5. Efficiency – does it make efficient use of resources (including storage, computing power and time)?
  6. Testability – can it be tested easily?
  7. Portability – can it be ported to other platforms? This isn’t an issue for all products – some programs need to run on only one operating system.

Note that the above listing is not in order of importance. For some products usability may be more important than efficiency; for others it could be the opposite – the order depends very much on the product and its applications.

Glass notes that these attributes are highly technical. Consequently, they are best dealt with by people who are directly involved in creating the product, not their managers, not even the customers. In this view, the responsibility for quality lies not with project managers, but with those who do the work. To quote from the book:

…quality is one of the most deeply technical issues in the software field. Management’s job, far from taking responsibility for achieving quality, is to facilitate and enable technical people and then get out of their way.

Another point to note is that the above characteristics are indeed measurable (if only in a qualitative sense), which addresses the objection I noted in the previous section.
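To make this concrete, here is a minimal sketch in Python of how a team might record such qualitative assessments. It is purely illustrative and not from Glass’s book: the attribute names follow the list above, while the rating scale, the weights and the example product are assumptions of my own.

# Illustrative sketch only: qualitative ratings for the quality attributes
# listed above, weighted by product-specific importance. The rating scale,
# weights and example product are invented for illustration.
from dataclasses import dataclass

RATING = {"poor": 1, "fair": 2, "good": 3, "excellent": 4}  # ordinal scale

ATTRIBUTES = [
    "reliability", "usability", "modifiability",
    "understandability", "efficiency", "testability", "portability",
]

@dataclass
class QualityAssessment:
    product: str
    ratings: dict   # attribute -> one of the RATING keywords
    weights: dict   # attribute -> relative importance for this product

    def weighted_score(self) -> float:
        """Weighted average of the ordinal ratings, normalised to the 0-1 range."""
        total_weight = sum(self.weights.get(a, 0) for a in ATTRIBUTES)
        raw = sum(self.weights.get(a, 0) * RATING[self.ratings.get(a, "fair")]
                  for a in ATTRIBUTES)
        return raw / (total_weight * max(RATING.values()))

# A hypothetical blogging tool, where usability matters more than efficiency
blog_tool = QualityAssessment(
    product="hypothetical blogging tool",
    ratings={"reliability": "good", "usability": "excellent",
             "modifiability": "good", "understandability": "fair",
             "efficiency": "fair", "testability": "good", "portability": "poor"},
    weights={"reliability": 3, "usability": 5, "modifiability": 2,
             "understandability": 2, "efficiency": 1, "testability": 2,
             "portability": 1},
)

print(f"{blog_tool.product}: weighted quality score = {blog_tool.weighted_score():.2f}")

The weights are there only to reflect the point made earlier: the relative importance of the attributes depends on the product. The numbers themselves carry no special significance.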

Quality as a means to an end

In our book, The Heretic’s Guide to Best Practices, Paul Culmsee and I discuss a couple of perspectives on quality which I summarise in this and the following section.

Our first contention is that quality cannot be an end in itself. This is a subtle point, so I’ll illustrate it with an example. Consider the two “ends-focused” definitions of quality mentioned earlier: quality as “fitness for purpose” and quality as a set of objective attributes. Chances are that different project stakeholders will have differing views on which definition is “right”. The problem, as we have seen in the earlier sections, is that the two definitions are not the same. If quality were truly an end in itself, stakeholders would at least agree on what that end is; hence quality cannot be an end in itself.

Instead, we believe that a better definition comes from asking the question: “What difference would quality make to this project?” The answer determines an appropriate definition of quality for a particular project.  Implicit here is the notion of quality as an enabler to achieve the desired project objective. In other words, quality here is a means to an end, not an end in itself.

Quality and time

Typically, project deliverables – be they software or buildings or anything else – have lifetimes that are much longer than the duration of the project itself. There are a couple of important implications of this:

  1. Deliverables may be used in ways that were not considered when the project was implemented.
  2. They may have side effects that were not foreseen.

Rarely, if ever, do project teams worry about the long-term consequences of their creations. Their time horizons are limited to the duration of their projects. This myopic view is perpetuated by the so-called iron triangle, which tells us that quality is a function of the cost, scope and time (i.e. duration) of a project.

The best way to see the short-sightedness of this view is through an example. Consider the Sydney Opera House as a project output. As we state in our book:

It is a global icon and there are people who come to Sydney just to see it. In terms of economic significance to Sydney, it is priceless and irreplaceable. The architect who designed it, Jørn Utzon, was awarded the Pritzker Prize (architecture’s highest honour) for it in 2003.

But the million dollar question is . . . “Was it a successful project?” If one was to ask one of the two million annual tourists who visit the place, we suspect that the answer would be an emphatic “Yes.” Yet, when we judge the project through the lens of the “iron triangle,” the view changes significantly. To understand why, consider these fun filled facts about the Sydney Opera House.

  • The Opera House was formally completed in 1973, having cost $102 million
  • The original cost estimate in 1957 was $7 million
  • The original completion date set by the government was 1963
  • Thus, the project was completed ten years late and over-budget by more than a factor of fourteen

If that wasn’t bad enough, Utzon, the designer of the Opera House, never set foot in the completed building. He left Australia in disgust, swearing never to come back after his abilities had been called into question and payments suspended. When the Opera House was opened in 1973 by Queen Elizabeth II, Utzon was not invited to the ceremony, nor was his name mentioned…
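A quick check of the overrun figures quoted above, using the $7 million estimate, the $102 million final cost, the 1963 target date and the 1973 completion date:

\[
\frac{\$102\ \text{million}}{\$7\ \text{million}} \approx 14.6 > 14,
\qquad
1973 - 1963 = 10\ \text{years late}.
\]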

Judged by the criteria of the iron triangle, the project was an abject failure. However, judged through the lens of time, it is an epic success! Quality must therefore also be viewed in terms of the legacy that the project leaves – how the deliverables will be viewed by future generations and what they will mean to them.

Wrapping up

As we have seen, the issue of quality is a vexed one because how one understands it  depends on which school of thought one subscribes to. We have seen that quality can refer to one of the following:

  1. The “fitness for purpose” of a product or its ability to “meet requirements.” (Source: PRINCE2 and PMBOK)
  2. An essential attribute of a product. This is based on the standard, dictionary definition of the term.
  3. A means of achieving a particular end.  Here quality is viewed as a  process rather than a project output.

Moreover, none of the above perspectives considers the legacy bequeathed by a project: how the deliverables will be perceived by future generations.

So where does that leave us?

Perhaps it is  best to leave definitions of quality to pedants, for as someone wise once said, “What is good and what is not good, need we have anyone tell us these things?”

Written by K

December 13, 2012 at 2:35 am

A consulting tragedy in five limericks


The consultant said, “be assured,
my motives are totally pure.
I guarantee
my inflated fee
is well worth my ‘best practice’ cure.”

Although it was too much to pay,
this argument carried the day:
consultants hired
can always be fired
and assigned much of the blame.

After the contract was signed,
only then did the client find
the solution bought
would definitely not
help leave their troubles behind.

Cos’ the truth was plain to see,
the ‘best practice’ methodology
had only led
to the overhead
of a ponderous bureaucracy.

The shock, the horror, the pain-
all that money and effort in vain,
but the tragedy
is the powers that be
would do it all over again.

Written by K

September 1, 2012 at 10:02 pm