Archive for the ‘Corporate IT’ Category
Some perspectives on quality
Introduction
A couple of years ago, I wrote a post entitled, A project manager’s ruminations on quality, in which I discussed the meaning of the term quality as it pertains to project work. In that article I focused on how the standard project management definition of quality differs from the usual (dictionary) meaning of the term. Below, I expand on that post by presenting some alternate perspectives on quality.
Quality in mainstream project management
Let’s begin with a couple of dictionary definitions of quality to see how useful they are from a project management perspective:
- An essential or distinctive characteristic, property, or attribute.
- High grade; superiority; excellence
Clearly, these aren’t much help because they don’t tell us how to measure quality. Moreover, the second definition confuses quality and grade – two terms that the PMBOK assures us are as different as chalk and cheese.
So what is a good definition of quality from the perspective of a project manager? The PMBOK, quoting the American Society for Quality (ASQ), defines quality as “the degree to which a set of inherent characteristics fulfil requirements.” This is clearly a much more practical definition for a project manager, as it links the notion of quality to what the end-user expects from deliverables. PRINCE2, similarly, keeps the end-user firmly in focus when it defines quality, rather informally, as “fitness for purpose.”
Project managers steeped in PRINCE2 and other methodologies would probably find the above unexceptional. The end-goal in project management is to deliver what has been agreed to, whilst working within the imposed constraints of resources and time. It is therefore no surprise that the definition of quality focuses on the characteristics of the deliverables, as they are specified in the project requirements.
Quality as an essential characteristic
The foregoing project management definitions raise the question:
Is “fitness for purpose” or the “degree to which product characteristics fulfil requirements” really a measure of quality?
The problem with these definitions is that they conflate quality with fulfilling requirements. But surely there is more to it than that. An easy way to see this is to note that one can have a high-quality product that does not satisfy user requirements or meet cost and schedule targets. For example, many people would agree that WordPress blogging software is of a high quality, yet it does not meet the requirement of, say, “a tool to manage projects.”
Indeed, Robert Glass states this plainly in his book, Facts and Fallacies of Software Engineering. Fact 47 in the book goes as follows:
Quality is not user satisfaction, meeting requirements, meeting cost and schedule targets or reliability.
So what is quality, then?
According to Glass, quality is a set of (product) attributes, including things such as:
- Reliability – does the product work as it should?
- Useability – is it easy to use?
- Modifiability – can it be modified (maintained) easily?
- Understandability – is it easy to understand how it works?
- Efficiency – does it make efficient use of resources (including storage, computing power and time)?
- Testability – can it be tested easily?
- Portability – can it be ported to other platforms? This isn’t an issue for all products – some programs need to run on only one operating system.
Note that the above listing is not in order of importance. For some products useability may be more important than efficiency; for others it could be the opposite – the order depends very much on the product and its applications.
Glass notes that these attributes are highly technical. Consequently, they are best dealt with by people who are directly involved in creating the product, not their managers, not even the customers. In this view, the responsibility for quality lies not with project managers, but with those who do the work. To quote from the book:
…quality is one of the most deeply technical issues in the software field. Management’s job, far from taking responsibility for achieving quality, is to facilitate and enable technical people and then get out of their way.
Another point to note is that the above characteristics are indeed measurable (if only in a qualitative sense), which addresses the objection I noted in the previous section.
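To make the point about measurability concrete, here is a minimal sketch of how such attributes might be rated and combined. This is not from Glass’s book: the rating scale, weights and scores below are entirely hypothetical, and the weighted average is just one way of acknowledging that different products prioritise different attributes.

```python
# Hypothetical sketch: combining qualitative ratings of Glass's quality
# attributes into a single weighted score. Both the weights (how much each
# attribute matters for this particular product) and the ratings are
# invented for illustration.

ATTRIBUTES = [
    "reliability", "useability", "modifiability",
    "understandability", "efficiency", "testability", "portability",
]

def quality_score(ratings: dict, weights: dict) -> float:
    """Weighted average of 1-5 ratings (1 = poor, 5 = excellent)."""
    return sum(ratings[a] * weights[a] for a in ATTRIBUTES) / sum(weights.values())

# For a desktop tool, useability might outweigh efficiency;
# for a batch job, the weights could well be reversed.
weights = {"reliability": 3, "useability": 5, "modifiability": 2,
           "understandability": 2, "efficiency": 1, "testability": 2,
           "portability": 1}
ratings = {"reliability": 4, "useability": 5, "modifiability": 3,
           "understandability": 3, "efficiency": 2, "testability": 4,
           "portability": 2}

print(round(quality_score(ratings, weights), 2))  # 3.81 for these inputs
```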
Quality as a means to an end
In our book, The Heretic’s Guide to Best Practices, Paul Culmsee and I discuss a couple of perspectives on quality, which I summarise in this and the following section.
Our first contention is that quality cannot be an end in itself. This is a subtle point so I’ll illustrate with an example. Consider the two “ends-focused” definitions of quality mentioned earlier: quality as “fitness for purpose” and quality as a set of objective attributes. Chances are that different project stakeholders will have differing views on which definition is “right”. The problem, as we have seen in the earlier sections, is that the two definitions are not the same. Hence quality cannot be an end in itself.
Instead, we believe that a better definition comes from asking the question: “What difference would quality make to this project?” The answer determines an appropriate definition of quality for a particular project. Implicit here is the notion of quality as an enabler to achieve the desired project objective. In other words, quality here is a means to an end, not an end in itself.
Quality and time
Typically, project deliverables – be they software or buildings or anything else – have lifetimes that are much longer than the duration of the project itself. There are a couple of important implications of this:
- Deliverables may be used in ways that were not considered when the project was implemented.
- They may have side effects that were not foreseen.
Rarely, if ever, do project teams worry about the long-term consequences of their creations. Their time horizons are limited to the duration of their projects. This myopic view is perpetuated by the so-called iron triangle, which tells us that quality is a function of the cost, scope and time (i.e. duration) of a project.
The best way to see the short-sightedness of this view is through an example. Consider the Sydney Opera House as a project output. As we state in our book:
It is a global icon and there are people who come to Sydney just to see it. In terms of economic significance to Sydney, it is priceless and irreplaceable. The architect who designed it, Jørn Utzon, was awarded the Pritzker Prize (architecture’s highest honour) for it in 2003.
But the million dollar question is . . . “Was it a successful project?” If one were to ask one of the two million annual tourists who visit the place, we suspect that the answer would be an emphatic “Yes.” Yet, when we judge the project through the lens of the “iron triangle,” the view changes significantly. To understand why, consider these fun-filled facts about the Sydney Opera House:
- The Opera House was formally completed in 1973, having cost $102 million
- The original cost estimate in 1957 was $7 million
- The original completion date set by the government was 1963
- Thus, the project was completed ten years late and over budget by more than a factor of fourteen ($102 million against the original $7 million estimate, a ratio of roughly 14.6)
If that wasn’t bad enough, Utzon, the designer of the Opera House, never returned to see the completed building. He left Australia in disgust, swearing never to come back after his abilities had been called into question and payments to him suspended. When the Opera House was opened in 1973 by Queen Elizabeth II, Utzon was not invited to the ceremony, nor was his name mentioned…
Judged by the criteria of the iron triangle, the project was an abject failure. However, judged through the lens of time, it is an epic success! Quality must therefore also be viewed in terms of the legacy that a project leaves – how the deliverables will be viewed by future generations and what they will mean to them.
Wrapping up
As we have seen, the issue of quality is a vexed one because how one understands it depends on which school of thought one subscribes to. We have seen that quality can refer to one of the following:
- The “fitness for purpose” of a product or its ability to “meet requirements.” (Source: PRINCE2 and PMBOK)
- An essential attribute of a product. This is based on the standard, dictionary definition of the term.
- A means of achieving a particular end. Here quality is viewed as a process rather than a project output.
Moreover, none of the above perspectives considers the legacy bequeathed by a project: how the deliverables will be perceived by future generations.
So where does that leave us?
Perhaps it is best to leave definitions of quality to pedants, for as someone wise once said, “What is good and what is not good, need we have anyone tell us these things?”
Out damn’d SPOT: an essay on data, information and truth in organisations
Introduction
Jack: My report tells me that we are on track to make budget this year.
Jill: That’s strange, my report tells me otherwise.
Jack: That can’t be. Have you used the right filters?
Jill: Yes – the ones you sent me yesterday.
Jack: There must be something else…my figures must be right, they come from the ERP system.
Jill: Oh, that must be it then…mine are from the reporting system.
Conversations such as the one above occur quite often in organisation-land, and they are one of the reasons why organisations chase the holy grail of a single point of truth (SPOT): an organisation-wide repository that holds the officially endorsed, true version of data, regardless of where that data originates. Such a repository is often known as an Enterprise Data Warehouse (EDW).
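As a toy illustration of why Jack and Jill can both be “right”, consider the sketch below. It is not drawn from any real system: the order data, status codes and filter rules are all invented, but it shows how two systems can legitimately report different figures for what is nominally the same metric.

```python
# Invented example: two systems compute "revenue to date" from the same
# orders, but each applies its own (defensible) definition.

orders = [
    {"id": 1, "amount": 100.0, "status": "invoiced", "cancelled": False},
    {"id": 2, "amount": 250.0, "status": "shipped",  "cancelled": False},
    {"id": 3, "amount": 400.0, "status": "invoiced", "cancelled": True},
]

# The ERP counts every invoiced order, cancelled or not.
erp_revenue = sum(o["amount"] for o in orders if o["status"] == "invoiced")

# The reporting system excludes cancellations but counts shipped orders.
reporting_revenue = sum(o["amount"] for o in orders if not o["cancelled"])

print(erp_revenue, reporting_revenue)  # 500.0 vs 350.0 -- both are "true"
```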
Like all holy grails, however, the EDW is a mythical object that exists only in the pages of textbooks (and vendor brochures…). It is at best an ideal to strive towards. But, like chasing the end of a rainbow, it is an exercise that may prove exhausting and, ultimately, futile.
Regardless of whether or not organisations can get to that mythical end of the rainbow – and there are those who claim to have got there – there is a deeper issue with the standard view of data and information that holds sway in organisation-land. In this post I examine these standard conceptions of data, information and truth, drawing largely on this paper by Bernd Carsten Stahl and a number of secondary sources.
Some truths about data and information
As Stahl observes in his introduction:
Many assume that information is central to managerial decision making and that more and higher quality information will lead to better outcomes. This assumption persists even though Russell Ackoff argued over 40 years ago that it is misleading…
The reason for the remarkable persistence of this incorrect assumption is that there is a lack of clarity as to what data and information actually are.
To begin with, let’s take a look at what these terms mean in the sense in which they are commonly used in organisations. Data typically refers to raw, unprocessed facts or the results of measurements. Information is data that is imbued with meaning and relevance because it is placed in a context of interest. For example, a piece of numerical data by itself has no meaning – it is just a number. However, its meaning becomes clear once we are provided a context – for example, that the number is the price of a particular product.
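The price example can be made concrete with a trivial sketch. The field names below are hypothetical; the point is simply that the same number is mute on its own and meaningful once context is attached.

```python
# A bare number: data, but not yet information.
datum = 49.95

# The same number with context attached: now it informs.
# All field names and values are invented for illustration.
information = {
    "value": 49.95,
    "meaning": "unit price",
    "product": "SKU-1042",
    "currency": "AUD",
}
```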
The above seems straightforward enough and embodies the standard view of data and information in organisations. However, a closer look reveals some serious problems. For example, what we call raw data is not unprocessed – the data collector always makes a choice as to what data will be collected and what will not. So in this sense, data already has meaning imposed on it. Further, there is no guarantee that what has been excluded is irrelevant. As another example, decision makers will often use data (relevant or not) just because it is available. This is a particularly common practice when defining business KPIs – people often use data that can be obtained easily rather than attempting to measure metrics that are relevant.
Four perspectives on truth
One of the tacit assumptions that managers make about the information available to them is that it is true. But what exactly does this mean? Let’s answer this question by taking a whirlwind tour of some theories of truth.
The most commonly accepted notion of truth is that of correspondence, that a statement is true if it describes something as it actually is. This is pretty much how truth is perceived in business intelligence: data/information is true or valid if it describes something – a customer, an order or whatever – as it actually is.
More generally, the term correspondence theory of truth refers to a family of theories that trace their origins back to antiquity. According to Wikipedia:
Correspondence theories claim that true beliefs and true statements correspond to the actual state of affairs. This type of theory attempts to posit a relationship between thoughts or statements on one hand, and things or facts on the other. It is a traditional model which goes back at least to some of the classical Greek philosophers such as Socrates, Plato, and Aristotle. This class of theories holds that the truth or the falsity of a representation is determined solely by how it relates to a reality; that is, by whether it accurately describes that reality.
One of the problems with correspondence theories is that they require the existence of an objective reality that can be perceived in the same way by everyone. This assumption is clearly problematic, especially for issues that have a social dimension. Such issues are perceived differently by different stakeholders, and each of these will legitimately seek data that supports their point of view. The problem is that there is often no way to determine which data is “objectively right.” More to the point, in such situations the very notion of “objective rightness” can be legitimately questioned.
Another problem with correspondence theories is that a piece of data can at best be an abstraction of a real-world object or event. This is particularly serious in the context of organisational data. For example, when a sales rep records a customer call, he or she notes down only what is required by the customer management system. Other data that may well be more important is not captured, or is relegated to a “Notes” or “Comments” field that is rarely, if ever, searched or accessed.
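A hypothetical record structure makes the abstraction problem visible. The schema below is invented, but it mirrors the situation just described: the system captures only the fields it demands, and everything else collapses into free text.

```python
# Invented CRM record: the schema dictates what counts as "data" about a
# call; anything it has no field for survives, at best, as free text.

from dataclasses import dataclass

@dataclass
class CustomerCall:
    customer_id: str
    rep_id: str
    duration_minutes: int
    outcome: str          # one of a fixed set of outcome codes
    notes: str = ""       # the catch-all for everything else

call = CustomerCall(
    customer_id="C-1187",
    rep_id="R-42",
    duration_minutes=25,
    outcome="follow-up",
    # The possibly crucial detail is relegated to the notes field:
    notes="Customer hinted they are evaluating a competitor's product.",
)
```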
Another perspective is offered by the so-called consensus theories of truth, which assert that true statements are those that are agreed to by the relevant group of people. This is often the way truth is established in organisations. For example, managers may choose to calculate Key Performance Indicators (KPIs) using certain pieces of data that are deemed to be true. The problem with this is that consensus can be achieved by means that are not necessarily democratic. For example, a KPI definition chosen by a manager may be hotly contested by an employee. Nevertheless, the employee has to accept it because organisations are typically not democratic. A more significant issue is that the notion of a “relevant group” is problematic because there is no clear criterion by which to define relevance.
Pragmatic theories of truth assert that truth is a function of utility – i.e. a statement is true if it is useful to believe it is so. In other words, the truth of a statement is to be judged by the payoff obtained by believing it to be true. One of the problems with these theories is that it may be useful for some people to believe a particular statement while it is useful for others to disbelieve it. A good example of such a statement is: there is an objective reality. Scientists may find it useful to believe this whereas social constructionists may not. Closer to home, it may be useful for a manager to believe that a particular customer is a good prospect (based on market intelligence, say), but a sales rep who knows the customer is unlikely to switch brands may find it useful to believe otherwise.
Finally, coherence theories of truth tell us that statements that are true must be consistent with a wider set of beliefs. In organisational terms, a piece of information or data is true only if it does not contradict things that others in the organisation believe to be true. Coherence theories emphasise that the truth of statements cannot be established in isolation but must be evaluated as part of a larger system of statements (or beliefs). For example, managers may believe certain KPIs to be true because they fit in with other things they know about their business.
…And so to conclude
The truth is a slippery beast: what is true and what is not depends on what exactly one means by the truth and, as we have seen, there are several different conceptions of truth.
One may well ask if this matters from a practical point of view. To put it plainly: should executives, middle managers and frontline employees (not to mention business intelligence analysts and data warehouse designers) worry about philosophical theories of truth? My contention is that they should, if only to understand that the criteria they use for determining the validity of their data and information are little more than conventions that are easily overturned by taking other, equally legitimate, points of view.
On the politics of enterprise software implementation
Introduction
Project managers who have worked on enterprise system projects will know that the hardest problems to resolve are political rather than technical. Moreover, although such stories abound, they remain largely untold, perhaps because those who have lived them are not free to speak about them. Even case studies in academic journals tend to gloss over political issues, perhaps out of fear of offending their hosts. Consequently, there are not many detailed case studies that focus on the politics of enterprise software customisation / implementation. In this post I summarise a paper by Christopher Bull entitled, Politics in Packaged Software Implementation, which describes a case study that highlights the complex and messy political issues that can arise in such projects.
Background
Given that IT tends to attract people with an analytical bent of mind, it is not surprising that those who plan enterprise-scale system implementations focus on technical issues rather than politics. On the other hand, there is a fairly rich research literature on the politics of system implementation.
In the paper, the author presents a short, selective review of the literature. The main point he makes is that the implementation of information systems is political because such systems are catalysts for organizational change. Some stakeholders may perceive benefits from certain changes whereas others will not. Given this, it is likely that the former group will be advocates for the system whereas the latter will not. Accordingly each side will present arguments that support their stance and these arguments will necessarily have a social/political dimension. That is, they are about more than just the technology.
A common way in which political behaviour manifests itself is as resistance to the proposed changes. The author mentions that there are three theories of resistance, each corresponding to a different view of its origin or cause:
- People-determined – in which resistance arises from a fear of change that is inherent in the human psyche.
- System-determined – wherein the change is resisted because the system is perceived as deficient or not useful.
- Interaction – where the system is seen as forcing a change in the culture and norms of the organisation. This is particularly the case for enterprise systems which tend to impose uniform work processes that are driven by the head office of an organisation, often to the detriment of efficiency in regional offices and subsidiaries.
Information systems academics tend to borrow heavily from other areas of the social sciences. It is therefore no surprise that there have been attempts to view the social aspects of information systems through the lens of Marxist theory. The parallels are obvious. Firstly, there are several different classes of people – management, developers and users – each with their own interests. Secondly, there are obvious inequalities in the distribution of influence and status between these groups. Case studies partially support Marxist theory – but I reckon this will not be a surprise to most readers.
The author points out that there are many different theories that can be used to make sense of social and political issues in information systems implementation. However, most of these tend to focus on one factor or another, overlooking the rest. In real life, political issues arise from diverse causes, some of which may even counteract each other! The true value of focusing on the political aspects of system implementation is to gain an understanding of the causes of conflict and thereby develop techniques to alleviate them. It is here that case studies can be particularly useful, because they allow researchers to study issues as they develop and thus build an understanding of why they are happening and what could have been done to prevent them.
The case study
The study was carried out in a midsize, UK-based manufacturing company. The author noted that the organisation had a hierarchical management structure with work organised by department. Interestingly, although management believed that communication between departments was good, other employees did not necessarily agree. Nevertheless, staff members were loyal and the company had a very low employee turnover rate.
The company was facing increasing pressure from competitors and had recently lost some key accounts. Management realised the organisation would need to become more proactive and responsive to customer needs, and this realization prompted the decision to implement a Customer Relationship Management (CRM) system.
The first decision that needed to be made was whether to build or buy – i.e. whether the system should be built in-house or purchased. This decision has a political dimension, as organisations often go down the “buy route” when they want to reduce the influence of their internal IT departments. However, building a CRM system is a huge undertaking, and the IT department did not really want to take it on. So the decision to buy rather than build proved popular with both IT and business staff.
Although the decision to buy the system was not contentious, the process of implementation turned out to be messier than either party had foreseen. Some of the problems mentioned by the author include:
- The project team (which was appointed by senior management) was widely thought to be unrepresentative. Groups that were not represented felt that their concerns would be ignored. Moreover, some felt genuinely threatened. For example, external sales staff (who were not included in the team or in project planning) felt that the system was intended to replace their roles.
- The steering committee was jointly headed by the IT and sales managers. This caused friction – the sales manager thought it undermined his authority whereas the IT manager viewed it as an unnecessary imposition on his time.
- Different departments had different views of what the system should do based on their respective departmental interests. Since it was difficult to achieve consensus, management engaged external consultants to assist the project team in requirements gathering and system sourcing/implementation.
- The consultants and IT had an adversarial relationship from the start. IT believed the consultants were biased towards a particular CRM product. There was also significant disagreement about priorities.
- Senior management appeared to trust the consultants more than the (internal) team members. This caused a degree of resentment and unease within the project team.
- Groups well represented on the steering committee (internal sales, in particular) were able to have a say in how the system should work. Consequently, other groups felt that their concerns were not adequately addressed.
As a result of the above:
- Those who felt that their concerns were not addressed adequately indulged in delaying tactics.
- The project created a rift between employee groups that had previously been homogeneous. For example, factions formed within the Sales Department, based on perceptions that certain groups (internal staff) would be better off after the system was implemented.
Moreover, effective use of the software entailed significant changes in existing work practices. Unsurprisingly, most of these changes were viewed negatively by employees. Quoting from the paper:
…new working practices were contentious because they were perceived to be unreasonable and unrealistic, particularly the scheduling and allocating of work for others. There were also complaints by certain sales staff that individuals who managed tasks at the beginning of the business chain would benefit considerably from those employed elsewhere because they were unaffected by internal organisational bottlenecks. Finally, the increasing number of surveillance features contained within the packaged system was resented by many sales support staff because of the pressures arising out of the increasing ability for managers to monitor and judge individual performance.
It is clear from the author’s description of the case study that those responsible for the project had not foreseen the political fallout of implementing the new system.
Lessons learned
I suspect the lessons the author draws from the case study will be depressingly familiar to many folks who have lived through a packaged software implementation. The main points made include:
- Senior management failed to consider the effect that the existing political tensions within the organisation would have on the project.
- There was no prior analysis of potential areas of concern that front-line employees might have.
- There was a failure to recognize that the composition of a project steering committee will have political implications. Groups under-represented on the committee will almost always be resentful.
In short: new systems will almost always reconfigure relationships between different stakeholder groups. These reconfigurations will have political implications which need to be addressed as a part of the project.
Summing up
The paper details an interesting case study on the political effects of packaged software implementation, and although it was written well over a decade ago, many of the observations made in it are still very relevant today. I suspect many readers will find that the author’s analysis and conclusions resonate with their own experiences.
The take-home lesson in a line is as follows: those implementing a packaged software system would do well to pay attention to existing relationships between different stakeholder groups and understand how these might be affected by the new system.

