Eight to Late

Sensemaking and Analytics for Organizations

Archive for the ‘Corporate IT’ Category

A walk in the park


Sydney has a wealth of bushwalking trails within the metropolitan area. Over the last month or so, a friend and I have been exploring trails in and around Lane Cove National Park, a reserve within touching distance of the city centre. Last weekend, we revisited a walk we had done a few weeks ago – a small section of the Great North Walk, which runs all the way from Sydney to Newcastle. The section we covered extends between the northern Sydney suburbs of Chatswood and Thornleigh (~14 km in all). The first time we did the walk, it took us a little over two hours; last weekend it took about three. The same trail, the same people and similar conditions (both days were sunny, with temperatures of 18-20 C) – yet, the second time around, it took us nearly 50% longer than it did before.

Why the difference?

The answer is simple: although the weather conditions were similar during both walks, there had been about a week of rain prior to the second one. Consequently, the trails were slushy and slippery, and we had to tone down our usual brisk pace. Along the way I slipped and fell, which slowed us down even further. So, although the two walks covered the same route in similar ambient conditions, one took much longer than the other owing to the difference in conditions on the ground and the events that followed. Reword this just a bit and you have a nice analogy with packaged software implementation projects: two projects following the same plan in similar environments, one taking much longer than the other owing to different conditions on the ground and the consequences thereof.

Unfortunately, I reckon bushwalkers understand and appreciate the importance of ground conditions better than many project managers.

Here’s a story from some time ago…

A company I consulted for was looking for an application to manage customer data (a CRM system by another name). After a complicated but thorough evaluation process, they settled on a particular vendor, whose name is not important. One of the reasons for the choice was that the selected vendor had a lot of experience in my client’s industry. Having done several implementations, the vendor knew the ins, the outs and all the possible complications of implementing CRM systems in that industry.

Since cost was a big concern, my client decided to go for a “near vanilla” implementation – one involving minimal customisation of the base package offered by the vendor. This decision delighted the vendor. “That’s a good move,” said the account manager. “We’ll be able to offer you excellent terms, as we know exactly what’s involved. We’ve done many vanilla implementations for similar-sized companies in this industry.” My client was offered an attractive fixed price contract. Accompanying the contract was a high-level scoping document which outlined the software and services that would be provided. I pointed out that the document didn’t provide enough detail on what the vendor would actually do. More importantly, it did not define which customisations were in scope and which weren’t. However, at a superficial level it appeared to address all my client’s concerns. Against my advice, the document was signed.

The vendor’s project manager was very experienced. He’d done a similar project (for a similar-sized company in the same industry) a year earlier. “We’ve done so many of these,” he said, “It will be a walk in the park.” He inspired confidence, as good project managers do. He had drawn up plans and schedules, accompanied by impressive Gantt charts and all sorts of project management paraphernalia. That’s not to say he didn’t consult us – he did ask for input on the tasks we were responsible for (data migration was one). This was provided to him. However, we had no idea about the duration of implementation-related tasks, so these were left entirely to him. These, he assured us, were drawn up on the basis of the scoping document which, in turn, was based on that successful “walk in the park” from the year before.

Owing to “conditions on the ground”, the project started falling behind almost immediately. To begin with, requirements gathering took double the allotted time because the initial scope (which was as plain as vanilla) excluded required functionality that wasn’t available out of the box. Most of this was easy enough to implement, and the vendor undertook to include it at no additional cost. However, as I’d expected, the analysis also revealed a handful of requirements that would be tricky to implement. The vendor – quite naturally – deemed these out of scope, and insisted that they would have to be charged separately. Much haggling followed and a compromise was struck, but it was one which left no one happy – the vendor got less than they wanted and my client paid more than they thought appropriate. It was the beginning of an extended and messy detour in the park.

I won’t go into any of the details except to mention that the project took about 50% longer and cost about 50% more than originally projected. The vendor’s experience in traversing similar terrain in similar conditions had led to undue optimism, as reflected in the statement that it would be a “walk in the park”. Every bushwalk is unique – even those on familiar trails may hold surprises. So it is with projects. As the PMBOK definition tells us, “a project is a temporary endeavour undertaken to create a unique product, service or result”. Packaged application vendors and their customers would do well to remember this.

Written by K

August 2, 2009 at 7:06 am

Maintenance matters


Corporate developers spend the majority of their programming time doing maintenance work. My basis for this claim is two years’ worth of statistics that I have been gathering at my workplace. According to these figures, my group spends about 65 percent of its programming time on maintenance (with some developers spending considerably more, depending on the applications they support). I suspect these numbers are applicable to most corporate IT shops – and possibly, to a somewhat smaller extent, to software houses as well. Unfortunately, maintenance work is often looked upon as being “inferior to” development. This being the case, it is worth dispelling some myths about maintenance programming. As it happens, I’ve just finished reading Robert Glass’ wonderful book, Facts and Fallacies of Software Engineering, in which he presents some interesting facts about software maintenance (among lots of other interesting facts). This post looks at those facts which, I think, some readers may find surprising.

Let’s get right to it.  Fact 41 in the book reads:

Maintenance typically consumes 40 to 80 percent (average 60 percent) of software costs. Therefore, it is probably the most important life cycle phase of software.

Surprised? Wait, there’s more: Fact 42 reads:

Enhancement is responsible for roughly 60 percent of software maintenance costs. Error correction is roughly 17 percent. Therefore software maintenance is largely about adding new capability to old software, not fixing it.

As a corollary to Fact 42, Glass unveils Fact 43, which simply states that:

 Maintenance is a solution, not a problem.

Developers who haven’t done any maintenance work may be surprised by these facts. Most corporate IT developers, however, have put in considerable maintenance time, so no one in my mob was surprised when I mentioned these facts during a coffee break conversation. Based on the number quoted in the first paragraph (65 percent maintenance) and Glass’s figure (60 percent of maintenance is modification work), my colleagues spend close to 40 percent of their time enhancing existing applications. All of them reckon this number is about right, and their thinking is supported by my data.
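The arithmetic behind that 40 percent figure is simple enough to check. Using the 65 percent figure from my workplace data and Glass’s 60 percent enhancement share:

```python
# Back-of-the-envelope check of the enhancement-time estimate quoted above.
maintenance_share = 0.65   # fraction of programming time spent on maintenance (workplace data)
enhancement_share = 0.60   # fraction of maintenance that is enhancement (Glass, Fact 42)

enhancement_time = maintenance_share * enhancement_share
print(f"Time spent enhancing existing applications: {enhancement_time:.0%}")
# prints "Time spent enhancing existing applications: 39%"
```

In other words, close to two-fifths of all programming time goes into adding capability to software that already exists.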

A few weeks ago, I wrote a piece entitled the legacy of legacy software in which I pointed out that legacy code is a problem for historians and programmers alike. Both have to understand legacy code, albeit in different ways. The historian needs to understand how it developed over the years so that he can understand its history; why it is the way it is and what made it so. The programmer has a more pragmatic interest – she needs to understand how it works so that she can modify it.  Now, Glass’ Fact 42 tells us that much of maintenance work is adding new functionality. New functionality implies new code, or at least substantial modifications of existing code.  Software is therefore  a palimpsest – written once, and then overwritten again and again.

The maintenance programmer whose job it is to modify legacy code has to first understand it. Like a historian or archaeologist decoding a palimpsest, she has to sort through layers of modifications made by different people at different times for different reasons. The task is often made harder by the fact that modifications are often under-documented (if not undocumented).   In Fact 44 of the book,   Glass states that this effort of understanding code – an effort that he calls undesign – makes up about 30 percent of the total time spent in maintenance. It is therefore the most significant maintenance activity.

But that’s not all. After completing “undesign”, the maintenance programmer has to design the enhancement within the context of the existing code – design under constraints, so to speak. There are at least a couple of reasons why this is hard. First, as Brooks tells us in No Silver Bullet, design itself is hard work; it is one of the essential difficulties of software engineering. Second, the original design was created with a specific understanding of requirements. By the time modifications come around, the requirements may have changed substantially. These new requirements may conflict with the original design. If so, the maintenance task becomes that much harder.

Ideally, existing design documentation should ease the burden on the maintenance programmer. However it rarely does because such documentation is typically created in the design phase – and rarely modified to reflect design changes as the product is built. As a consequence, most design documentation is hopelessly out of date by the time the original product is released into production. To quote from the book:

Common sense would tell you that the design documentation, produced as the product is being built, would be an important basis for those undesign tasks. But common sense, in this case, would be wrong. As the product is built, the as-built program veers more and more away from the original design specifications. Ongoing maintenance drives the specs and product even further apart. The fact of the matter is, design documentation is almost completely untrustworthy when it comes to maintaining a software product. The result is, almost all of that undesign work involves reading of code (which is invariably up to date) and ignoring the documentation (which commonly is not).

So, one of the main reasons maintenance work is hard is that the programmer has to expend considerable effort in decoding someone else’s code (some might argue that this is the most time consuming part of undesign). Programmers know that it is hard to infer what a program does by reading it, so the word “code” in the previous sentence could well be used in the sense of code as an obfuscated or encrypted message. As Charles Simonyi said in response to an Edge question:

 Programmers using today’s paradigm start from a problem statement, for example that a Boeing 767 requires a pilot, a copilot, and seven cabin crew with various certification requirements for each—and combine this with their knowledge of computer science and software engineering—that is how this rule can be encoded in computer language and turned into an algorithm. This act of combining is the programming process, the result of which is called the source code. Now, programming is well known to be a difficult-to-invert function, perhaps not to cryptography’s standards, but one can joke about the possibility of the airline being able to keep their proprietary scheduling rules secret by publishing the source code for the implementation since no one could figure out what the rules were—or really whether the code had to do with scheduling or spare parts inventory—by studying the source code, it can be that obscure.

Glass offers up one final maintenance-related fact in his book (Fact 45):

 Better software engineering leads to more maintenance, not less.

Huh? How’s that possible?

The answer is actually implicit in the previous facts and Simonyi’s observation: in the absence of documentation, the ease with which modifications can be made is directly related to the ease with which the code can be understood. Well designed systems are easier to understand, and hence can be modified more quickly. So, in a given time interval, a well designed system will have more modifications done to it than one that is not so well designed. Glass mentions that this is an interesting manifestation of Fact 43: Maintenance as a solution, rather than a problem.

Towards the end of the book, Glass presents the following fallacy regarding maintenance:

The way to predict future maintenance costs and to make product replacement decisions is to look at past cost data.

The reason that prediction based on past data doesn’t work is that a plot of maintenance costs versus time has a bathtub shape. Initially, when a product has just been released, there is considerable maintenance work (error fixing and enhancements) done on it. This decreases over time, until it plateaus out. This is the “stable” region, corresponding to the period when the product is being used with relatively few modifications or error fixes. Finally, towards the end of the product’s useful life, enhancements and error fixes become more expensive as technology moves on and/or the product begins to push the limits of its design. At this point costs increase again, often quite steeply. The point Glass makes is that, in general, one does not know where the product is on this bathtub curve. Hence, using past data to make predictions is fraught with risk – especially if one is near an inflection point, where the shape of the curve is changing. So what’s the solution? Glass suggests asking customers about their expectations regarding the future of the product, rather than trying to extrapolate from past data.
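To make the bathtub shape concrete, here’s a toy model (entirely hypothetical numbers – not Glass’s data) showing why extrapolating from the flat middle of the curve misleads:

```python
# Toy bathtub curve: high early maintenance cost, a stable plateau, then a
# steep rise near end of life. The numbers are invented for illustration only.
def maintenance_cost(year, life=10):
    """Hypothetical annual maintenance cost (arbitrary units) over a product's life."""
    early = 50 * max(0, 3 - year)                 # settling-in fixes taper off over ~3 years
    base = 20                                     # the stable plateau
    late = 15 * max(0, year - (life - 3)) ** 2    # costs climb steeply near retirement
    return early + base + late

costs = [maintenance_cost(y) for y in range(11)]
print(costs)  # [170, 120, 70, 20, 20, 20, 20, 20, 35, 80, 155]

# Extrapolating from the flat middle years predicts the plateau forever...
plateau_estimate = costs[5]
# ...but the actual end-of-life cost is far higher:
print(plateau_estimate, costs[10])  # 20 155
```

The extrapolation fails precisely because past data from the plateau carries no information about where the product sits on the curve – which is Glass’s point.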

Finally, Glass has this to say about replacing software:

Most companies find that retiring an existing software product is nearly impossible. To build a replacement requires a source of the requirements that match the current version of the product, and those requirements probably don’t exist anywhere. They’re not in the documentation because it wasn’t kept up to date. They’re not to be found from the original customers or users or developers because those folks are long gone…They may be discernable from reverse engineering the existing product, but that’s an error-prone and undesirable task that hardly anyone wants to tackle. To paraphrase an old saying, “Old software never dies, it just tends to fade away.”

And it’s the maintenance programmer who extends its life, often way beyond original design and intent. So, maintenance matters because it adds complexity to the  legacy of legacy software. But above all it matters because it is a solution, not a problem.

Written by K

July 16, 2009 at 10:17 pm

Visualising arguments using issue maps – an example and some general comments


The aim of an opinion piece writer is to convince his or her readers that a particular idea or point of view is reasonable or right. Typically, such pieces weave facts, interpretations and reasoning into prose, from which it can be hard to pick out the essential thread of argumentation. In an earlier post I showed how an issue map can help in clarifying the central arguments in a “difficult” piece of writing by mapping out Fred Brooks’ classic article No Silver Bullet. Note that I use the word “difficult” only because the article has, at times, been misunderstood and misquoted; not because it is particularly hard to follow. Still, Brooks’ article borders on the academic; the arguments presented therein are of interest to a relatively small group of people within the software development community. Most developers and architects aren’t terribly interested in the essential difficulties of the profession – they just want to get on with their jobs. In the present post, I develop an issue map of a piece that is of potentially wider interest to the IT community – Nicholas Carr’s 2003 article, IT Doesn’t Matter.

The main point of Carr’s article is that IT is becoming a utility,  much like electricity, water or rail. As this trend towards commoditisation gains momentum, the strategic advantage offered by in-house IT will diminish, and organisations will be better served by buying IT services from “computing utility” providers than by maintaining their own IT shops.  Although Carr makes a persuasive case, he glosses over a key difference between IT and other utilities (see this post for more). Despite this, many business and IT leaders have taken his words as the way things will be. It is therefore important for all IT professionals to understand Carr’s arguments. The consequences are likely to affect them some time soon, if they haven’t already.

Some preliminaries before proceeding with the map. First, the complete article is available here – you may want to have a read of it before proceeding (but this isn’t essential). Second, the discussion assumes a basic knowledge of  IBIS (Issue-Based Information System) –  see  this post for a quick tutorial on IBIS.  Third, the map is constructed using the open-source tool Compendium which can be downloaded here.

With the preliminaries out of the way, let’s get on with issue mapping Carr’s article.

So, what’s the root  (i.e. central) question that Carr poses in the article?  The title of the piece is  “IT Doesn’t Matter” – so one possible root question is, “Why doesn’t IT matter?” But there are other candidates:   “On what basis is IT an infrastructural technology?” or  “Why is the strategic value of IT diminishing?” for example. From this it should be clear that there’s a fair degree of subjectivity at every step of constructing an issue map. The visual representation that I construct here is but one interpretation of Carr’s argument.

Out of the above (and many other possibilities), I choose “Why doesn’t IT matter?” as the root question. Why? Well, in my view the whole point of the piece is to convince the reader that IT doesn’t matter because it is an infrastructural technology and consequently has no strategic significance. This point should become clearer as our development of the issue map progresses.

The ideas that respond to this question aren’t immediately obvious. This isn’t unusual:  as I’ve mentioned elsewhere, points can only be made sequentially – one after the other – when expressed in prose.  In some cases one may have to read a piece in its entirety to figure out the elements that respond to a root (or any other) question.

In the case at hand, the response to the root question stands out clearly after a quick browse through the article. It is:  IT is an infrastructural technology.

The map with the root question and the response is shown in Figure 1.

Figure 1: Issue Map Stage 1

Moving on, what arguments does Carr offer for (pros) and against (cons) this idea? A reading of the article reveals one con and four pros. Let’s look at the cons first:

  1. IT (which I take to mean software) is complex and malleable, unlike other infrastructural technologies. This point is mentioned, in passing, on the third page of the paper: “Although more complex and malleable than its predecessors, IT has all the hallmarks of an infrastructural technology…”

The arguments supporting the idea that IT is an infrastructural technology are:

  1. The evolution of IT closely mirrors that of other infrastructural technologies such as electricity and rail. Although this point encompasses the other points made below, I think it merits a separate mention because the analogies are quite striking. Carr makes a very persuasive, well-researched case supporting this point.
  2. IT is highly replicable. This point needs no further elaboration, I think.
  3. IT is a transport mechanism for digital information. This is true, at least as far as network and messaging infrastructure is concerned.
  4. Cost effectiveness increases as IT services are shared. This is true too, provided it is understood that flexibility is lost when services are shared.

The map, incorporating the pros and cons is shown in Figure 2.

Figure 2: Issue Map Stage 2

Now that the arguments for and against the notion that IT is an infrastructural technology are laid out, let’s look at the article again, this time with an eye out for any other issues (questions) raised.

The first question is an obvious one: What are the consequences of IT being an infrastructural technology?   

Another point to be considered is the role of proprietary technologies, which – by definition – aren’t infrastructural. The same holds true for custom built applications. This raises the question: if IT is an infrastructural technology, how do proprietary and custom built applications fit in?

The map, with these questions  added in is shown in Figure 3.

Figure 3: Issue Map Stage 3

Let’s now look at the ideas that respond to these two questions.

A point that Carr makes early in the article is that the strategic value of IT is diminishing. This is essentially a consequence of the notion that IT is an infrastructural technology. This idea is supported by the following arguments:

  1. IT is ubiquitous – it is everywhere, at least in the business world.
  2. Everyone uses it in the same way. This implies that no one gets a strategic advantage from using it.

What about proprietary technologies and custom apps? Carr reckons these are:

  1. Doomed to economic obsolescence. This idea is supported by the argument that these apps are too expensive and too hard to maintain.
  2. Related to the above: these will be replaced by generic apps that incorporate best practices. This trend is already evident in the increasing number of enterprise-type applications that are offered as services. The advantages of these are that they (a) cost little, (b) can be offered over the web and (c) spare the client all those painful maintenance headaches.

The map incorporating these ideas and their supporting arguments is shown in Figure 4.

Figure 4: Issue Map Stage 4

Finally, after painting this somewhat gloomy picture (to a corporate IT minion, such as me) Carr asks and answers the question: How should organisations deal with the changing role of IT (from strategic to operational)? His answers are:

  1. Reduce IT spend.
  2. Buy only proven technology – follow don’t lead.
  3. Focus on (operational) vulnerabilities rather than (strategic) opportunities.
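As an aside for programmers: the structure built up over the course of the mapping exercise is just a tree. The sketch below is my own minimal representation (it is not Compendium’s actual data model) of the IBIS node types used in the map – questions answered by ideas, ideas supported or challenged by arguments, and ideas raising further questions:

```python
# Minimal sketch of an IBIS issue map as a tree (illustrative only; this is
# not how Compendium stores maps internally).

class Node:
    def __init__(self, kind, text, children=None):
        self.kind = kind            # "question", "idea", "pro" or "con"
        self.text = text
        self.children = children or []

# A fragment of the map developed in this post:
issue_map = Node("question", "Why doesn't IT matter?", [
    Node("idea", "IT is an infrastructural technology", [
        Node("con", "IT (software) is complex and malleable"),
        Node("pro", "IT's evolution mirrors electricity and rail"),
        Node("pro", "IT is highly replicable"),
        Node("pro", "IT is a transport mechanism for digital information"),
        Node("pro", "Cost effectiveness increases as IT services are shared"),
        Node("question", "What are the consequences?", [
            Node("idea", "The strategic value of IT is diminishing"),
        ]),
        Node("question", "How should organisations deal with the change?", [
            Node("idea", "Reduce IT spend"),
            Node("idea", "Buy only proven technology"),
            Node("idea", "Focus on vulnerabilities, not opportunities"),
        ]),
    ]),
])

def outline(node, depth=0):
    """Print the map as an indented outline, mimicking the figures."""
    print("  " * depth + f"[{node.kind}] {node.text}")
    for child in node.children:
        outline(child, depth + 1)

outline(issue_map)
```

Seeing the map as a tree also makes the subjectivity point concrete: choosing a different root question yields a differently shaped tree over the same material.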

The map incorporating this question and the ideas that respond to it is shown in Figure 5, which is also the final map.

Figure 5: Final Issue Map

Map completed, I’m essentially done with this post. Before closing, however, I’d like to mention a couple of general points that arise from issue mapping of prose pieces.

Figure 5 is my interpretation of the article. I should emphasise that my interpretation may not coincide with what Carr intended to convey (in fact, it probably doesn’t). This highlights an important, if obvious, point: what a writer intends to convey in his or her writing may not coincide with how readers interpret it. Even worse, different readers may interpret a piece differently. Writers need to write with an awareness of the potential for being misunderstood.  So, my  first point is that issue maps can help writers clarify and improve the quality of their reasoning  before they cast it in prose.

Issue maps sketch out the logical skeleton or framework of argumentative prose. As such, they  can help highlight weak points of arguments. For example, in the above article Carr glosses over the complexity and malleability of software. This is a weak point of the argument, because it is a key difference between IT and traditional infrastructural technologies. Thus my second point is that issue maps can help readers visualise weak links in arguments which might have been obscured by rhetoric and persuasive writing.

To conclude, issue maps are valuable to writers and readers alike: writers can use issue maps to improve the quality of their arguments before committing them to writing, and readers can use such maps to understand arguments that have been thus committed.