Eight to Late

Sensemaking and Analytics for Organizations

Author Archive

A walk in the park

with 4 comments

Sydney has a wealth of bushwalking trails within the metropolitan area. Over the last month or so, a friend and I have been exploring trails in and around Lane Cove National Park, a reserve within touching distance of the city centre. Last weekend, we revisited a walk we had done a few weeks ago – a small section of the Great North Walk, which runs all the way from Sydney to Newcastle. The section we covered extends between the northern Sydney suburbs of Chatswood and Thornleigh (~14 km in all). The first time we did the walk, it took us a little over two hours; last weekend it took about three. The same trail, the same people and similar conditions (both days were sunny with a temperature of 18-20 C) – yet, the second time around, it took us nearly 50% longer than it did before.

Why the difference?

The answer is simple: although the weather conditions were similar during both walks, there had been about a week of rain prior to the second one. Consequently, the trails were slushy and slippery, and we had to tone down our usual brisk pace. Along the way I slipped and fell, which slowed us down even further. So, although the two walks covered the same route in similar ambient conditions, one took much longer than the other owing to the difference in conditions on the ground and consequent events. Reword this just a bit and you have a nice analogy with packaged software implementation projects: two projects following the same plan in similar environments, one taking much longer than the other owing to different conditions on the ground and the consequences thereof.

Unfortunately, I reckon bushwalkers understand and appreciate the importance of ground conditions better than many project managers do.

Here’s a story from some time ago…

A company I consulted for was looking for an application to manage customer data (a CRM system by another name). After a complicated but thorough evaluation process they settled on a particular vendor, whose name is not important.  One of the reasons for the choice was that the selected vendor had a lot of experience in the industry that my client was in. Having done several implementations the vendor knew the ins, outs and all the possible complications of implementing CRM systems in this industry.

Since cost was a big concern, my client decided to go for a “near vanilla” implementation; one which involved minimal customisation of the base package offered by the vendor. This decision delighted the vendor, “That’s a good move,” said the account manager, “We’ll be able to offer you excellent terms as we know exactly what’s involved in this. We’ve done many vanilla implementations for similar sized companies in this industry.” My client was offered an attractive fixed price contract. Accompanying the contract was a high level scoping document which outlined the software and services that would be provided. I pointed out that the document didn’t provide enough detail on what the vendor would actually do. More importantly, it did not define which customisations were in scope and which weren’t. However, at a superficial level it appeared to address all my client’s concerns. Against my advice, the document was signed.

The vendor’s project manager was very experienced. He’d done a similar project (for a similar sized company in the same industry) a year ago. “We’ve done so many of these,” he said, “It will be a walk in the park.” He inspired confidence, as good project managers do. He had drawn up plans and schedules, accompanied by impressive Gantt charts and all sorts of project management paraphernalia. That’s not to say he didn’t consult us – he did ask for input on the tasks we were responsible for (data migration was one). This was provided to him. However, we had no idea about the duration of implementation-related tasks, so these were left entirely to him. These, he assured us, were drawn up on the basis of the scoping document which, in turn, was based on that successful “walk in the park” from the year before.

Owing to “conditions on the ground” the project started falling behind almost immediately. To begin with, requirements gathering took double the allotted time because the initial scope (which was as plain as vanilla) excluded required functionality that wasn’t available out of the box. Most of this was easy enough to implement, and the vendor undertook to include it at no additional cost. However, as I’d expected, the analysis also revealed a handful of requirements that would be tricky to implement. The vendor – quite naturally – deemed these out of scope, and insisted that they would have to be charged separately. Much haggling followed and a compromise was struck, but it was one which left no one happy – the vendor got less than they wanted and my client paid more than they thought appropriate. It was the beginning of an extended and messy detour in the park.

I won’t go into any of the details except to mention that the project took about 50% longer and cost about 50% more than originally projected. The vendor’s experience in traversing similar terrain in similar conditions had led to undue optimism, as reflected in the statement that it would be a “walk in the park”. Every bushwalk is unique – even those on familiar trails may hold surprises. So it is with projects. As the PMBOK definition tells us, “a project is a temporary endeavour undertaken to create a unique product, service or result”. Packaged application vendors and their customers would do well to remember this.

Written by K

August 2, 2009 at 7:06 am

IBIS, dialogue mapping, and the art of collaborative knowledge creation

with 23 comments

Introduction

In earlier posts I’ve described a notation called IBIS (Issue-based information system), and demonstrated its utility in visualising reasoning and resolving complex issues through dialogue mapping. The IBIS notation consists of just three elements (issues, ideas and arguments) that can be connected in a small number of ways. Yet, despite these limitations, IBIS has been found to enhance creativity when used in collaborative design discussions. Given the simplicity of the notation and grammar, this claim is surprising, even paradoxical. The present post resolves this paradox by viewing collaborative knowledge creation as an art, and considers the aesthetic competencies required to facilitate this art.
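To make the notation's simplicity concrete, the three elements and their grammar can be sketched as a small data structure. This is a hypothetical model, not the schema of any real IBIS tool: it encodes the usual rules that an idea responds to an issue, an argument supports or objects to an idea, and an issue can question any node.

```python
from dataclasses import dataclass, field

# The three IBIS node types
ISSUE, IDEA, ARGUMENT = "issue", "idea", "argument"

# The grammar: which parent types each node type may respond to.
# Ideas respond to issues; arguments respond to ideas;
# issues can question any kind of node.
LEGAL_LINKS = {
    IDEA: {ISSUE},
    ARGUMENT: {IDEA},
    ISSUE: {ISSUE, IDEA, ARGUMENT},
}

@dataclass
class Node:
    kind: str
    text: str
    children: list = field(default_factory=list)

def attach(parent: Node, child: Node) -> None:
    """Attach child to parent, enforcing the IBIS grammar."""
    if parent.kind not in LEGAL_LINKS[child.kind]:
        raise ValueError(f"a {child.kind} cannot respond to a {parent.kind}")
    parent.children.append(child)

# A tiny dialogue map
root = Node(ISSUE, "How should we manage customer data?")
idea = Node(IDEA, "Implement the vendor's CRM package")
attach(root, idea)
attach(idea, Node(ARGUMENT, "Pro: the vendor knows our industry"))
```

The point of the sketch is how little there is to it: three node types and one table of legal links is the entire grammar, which is what makes the creativity claim seem paradoxical.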

Knowledge art

In a position paper entitled, The paradox of the “practice level” in collaborative design rationale, Al Selvin draws an analogy between design  discussions using Compendium (an open source IBIS-based argument mapping tool)  and art.  He uses the example of the artist Piet Mondrian, highlighting the difference in  style between Mondrian’s earlier and later work. To quote from the paper,

Whenever I think of surfacing design rationale as an intentional activity — something that people engaged in some effort decide to do (or have to do), I think of Piet Mondrian’s approach to painting in his later years. During this time, he departed from the naturalistic and impressionist (and more derivative, less original) work of his youth (view an image here) and produced the highly abstract geometric paintings (view an image here) most associated with his name…

Selvin points out that the difference between the first and the second paintings is essentially one of abstraction: the first one is almost instantly recognisable as a depiction of dunes on a beach whereas the second one, from Mondrian’s minimalist period, needs some effort to understand and appreciate, as it uses a very small number of elements to create a specific ambience. To quote from the paper again,

“One might think (as many in his day did) that he was betraying beauty, nature, and emotion by going in such an abstract direction. But for Mondrian it was the opposite. Each of his paintings in this vein was a fresh attempt to go as far as he could in the depiction of cosmic tensions and balances. Each mattered to him in a deeply personal way. Each was a unique foray into a depth of expression where nothing was given and everything had to be struggled for to bring into being without collapsing into imbalance and irrelevance. The depictions and the act of depicting were inseparable. We get to look at the seemingly effortless result, but there are storms behind the polished surfaces. Bringing about these perfected abstractions required emotion, expression, struggle, inspiration, failure and recovery — in short, creativity…”

In analogy, Selvin contends that a group of people who work through design issues using a minimalist notation such as IBIS can generate creative new ideas. In other words:  IBIS, when used in a group setting such as dialogue mapping,  can become a vehicle for collaborative creativity. The effectiveness of the tool, though, depends on those who wield it:

“…To my mind using tools and methods with groups is a matter of how effective, artistic, creative, etc. whoever is applying and organizing the approach can be with the situation, constraints, and people. Done effectively, even the force-fitting of rationale surfacing into a “free-flowing” design discussion can unleash creativity and imagination in the people engaged in the effort, getting people to “think different” and look at their situation through a different set of lenses. Done ineffectively, it can impede or smother creativity as so many normal methods, interventions, and attitudes do…”

Although Selvin’s discussion is framed in the context of design discussions using Compendium, this is but dialogue mapping by another name. So, in essence, he makes a case for viewing the collaborative generation of knowledge (through dialogue mapping or any other means) as an art. In fact, in another article, Selvin uses the term knowledge art to describe both the process and the product of creating knowledge as discussed above. Knowledge Art, as he sees it, is a marriage of the two forms of discourse that make up the term. On the one hand, we have knowledge which, “… in an organizational setting, can be thought of as what is needed to perform work; the tacit and explicit concepts, relationships, and rules that allow us to know how to do what we do.” On the other, we have art which “… is concerned with heightened expression, metaphor, crafting, emotion, nuance, creativity, meaning, purpose, beauty, rhythm, timbre, tone, immediacy, and connection.”

Facilitating collaborative knowledge creation

In the business world, there’s never enough time to deliberate or think through ideas (either individually or collectively): everything is done in a hurry and the result is never as good as it should or could be; the picture is never quite complete. However, as Selvin says,

“…each moment (spent discussing or thinking through ideas or designs) can yield a bit of the picture, if there is a way to capture the bits and relate them, piece them together over time. That capturing and piecing is the domain of Knowledge Art. Knowledge Art requires a spectrum of skills, regardless of how it’s practiced or what form it takes. It means listening and paying attention, determining the style and level of intervention, authenticity, engagement, providing conceptual frameworks and structures, improvisation, representational skill and fluidity, and skill in working with electronic information…”

So, knowledge art requires a wide range of technical and non-technical skills. In previous posts I’ve discussed some of the technical skills required – fluency with IBIS, for example. Let’s now look at some of the non-technical competencies.

What are the competencies needed for collaborative knowledge creation? Palus and Horth offer some suggestions in their paper entitled Leading Complexity: The Art of Making Sense. They define the concept of creative leadership as making shared sense out of complexity and chaos and the crafting of meaningful action. Creative leadership is akin to dialogue mapping, which Jeff Conklin describes as a means to achieve a shared understanding of wicked problems and a shared commitment to solving them. The connection between creative leadership and dialogue mapping is apparent once one notices the similarity between their definitions. So the competencies of creative leadership should apply to dialogue mapping (or collaborative knowledge creation) as well.

Palus  and Horth describe  six basic competencies of creative leadership. I outline these below, mentioning  their relevance to dialogue mapping:

Paying Attention:  This refers to the ability to slow down discourse  with the aim of  achieving a deep understanding of the issues at hand. A skilled dialogue mapper has to be able to listen; to pay attention to what’s being said.

Personalizing:  This refers to the ability to draw upon personal experiences, interests and passions whilst engaged in work. Although the connection to dialogue mapping isn’t immediately evident, the point Palus and Horth make is that the ability to make connections between work and one’s interests and passions helps increase involvement, enthusiasm and motivation in tackling work challenges.

Imaging: This refers to the ability to visualise problems so as to understand them better, using metaphors, pictures, stories, etc. to stimulate imagination, intuition and understanding. The connection to dialogue mapping is clear and needs no elaboration.

Serious play: This refers to the ability to experiment with new ideas; to learn by trying and doing in a non-threatening environment. This is something that software developers do when learning new technologies. A group engaged in dialogue mapping must have a sense of play; of trying out new ideas, even if they seem somewhat unusual.

Collaborative enquiry: This refers to the ability to  sustain productive dialogue in a diverse group of stakeholders. Again, the connection to dialogue mapping is evident.

Crafting: This refers to the ability to synthesise issues, ideas, arguments and actions into coherent, meaningful wholes. Yet again, the connection to dialogue mapping is clear – the end product is ideally a shared understanding of the problem and a shared commitment to a meaningful solution.

Palus and Horth suggest that these competencies have been ignored in the business world because:

  1. They are seen as threatening the status quo (creativity is to be feared because it invariably leads to change).
  2. These competencies are aesthetic, and the current emphasis on scientific management devalues competencies that are not rational or analytical.

The irony is that creative scientists have these aesthetic competencies (or qualities) in spades. At the most fundamental level science is an art – it is about constructing theories or designing experiments that make sense of the world. Where do the ideas for these new theories or experiments come from? Well, they certainly aren’t out there in the objective world; they come from the imagination of the scientist. Science, in the real sense of the word, is knowledge art. If these competencies are useful in science, they should be more than good enough for the business world.

Summing up

To sum up:  knowledge creation in an organisational context is best viewed as an art – a collaborative art.  Visual representations such as IBIS provide a medium to capture snippets of knowledge and relate them, or  piece them together over time. They provide the canvas, brush and paint to express knowledge as art  through the process of dialogue mapping.

Maintenance matters

with 8 comments

Corporate developers spend the majority of their programming time doing maintenance work. My basis for this claim is two years’ worth of statistics that I have been gathering at my workplace. According to these figures, my group spends about 65 percent of their programming time on maintenance (with some developers spending considerably more, depending on the applications they support). I suspect these numbers are applicable to most corporate IT shops – and possibly, to a somewhat smaller extent, to software houses as well. Unfortunately, maintenance work is often looked upon as being “inferior to” development. This being the case, it is worth dispelling some myths about maintenance programming. As it happens, I’ve just finished reading Robert Glass’ wonderful book, Facts and Fallacies of Software Engineering, in which he presents some interesting facts about software maintenance (among lots of other interesting facts). This post looks at those facts which, I think, some readers may find surprising.

Let’s get right to it.  Fact 41 in the book reads:

Maintenance typically consumes 40 to 80 percent (average 60 percent) of software costs. Therefore, it is probably the most important life cycle phase of software.

Surprised? Wait, there’s more: Fact 42 reads:

Enhancement is responsible for roughly 60 percent of software maintenance costs. Error correction is roughly 17 percent. Therefore software maintenance is largely about adding new capability to old software, not fixing it.

As a corollary to Fact 42, Glass unveils Fact 43, which simply states that:

 Maintenance is a solution, not a problem.

Developers who haven’t done any maintenance work may be surprised by these facts. Most corporate IT developers have done considerable maintenance work, though; so no one in my mob was surprised when I mentioned these facts during a coffee break conversation. Based on the number quoted in the first paragraph (65 percent maintenance) and Glass’s figure (60 percent of maintenance is modification work), my colleagues spend close to 40 percent of their time enhancing existing applications. All of them reckon this number is about right, and their thinking is supported by my data.
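The arithmetic behind that 40 percent figure takes only a couple of lines to check (a trivial sketch; the 65 percent figure is from my workplace data and the 60 percent split is Glass’s):

```python
maintenance_share = 0.65   # fraction of programming time spent on maintenance
enhancement_share = 0.60   # Glass: fraction of maintenance that is enhancement

# Fraction of total programming time spent enhancing existing applications
enhancement_of_total = maintenance_share * enhancement_share
print(f"{enhancement_of_total:.0%}")  # → 39%
```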

A few weeks ago, I wrote a piece entitled the legacy of legacy software in which I pointed out that legacy code is a problem for historians and programmers alike. Both have to understand legacy code, albeit in different ways. The historian needs to understand how it developed over the years so that he can understand its history; why it is the way it is and what made it so. The programmer has a more pragmatic interest – she needs to understand how it works so that she can modify it.  Now, Glass’ Fact 42 tells us that much of maintenance work is adding new functionality. New functionality implies new code, or at least substantial modifications of existing code.  Software is therefore  a palimpsest – written once, and then overwritten again and again.

The maintenance programmer whose job it is to modify legacy code has to first understand it. Like a historian or archaeologist decoding a palimpsest, she has to sort through layers of modifications made by different people at different times for different reasons. The task is often made harder by the fact that modifications are often under-documented (if not undocumented).   In Fact 44 of the book,   Glass states that this effort of understanding code – an effort that he calls undesign – makes up about 30 percent of the total time spent in maintenance. It is therefore the most significant maintenance activity.

But that’s not all.  After completing “undesign” the maintenance programmer has to design the enhancement within the context of the existing code – design under constraints, so to speak.   There are at least a couple of reasons why this is hard.  First,  as Brooks tells us in No Silver Bullet — design itself is hard work; it is one of the essential difficulties of software engineering.  Second, the original design is created with a specific understanding of requirements.  By the time modifications come around, the requirements may have changed substantially. These new requirements may conflict with the original design.  If so, the maintenance task becomes that much harder.

Ideally, existing design documentation should ease the burden on the maintenance programmer. However it rarely does because such documentation is typically created in the design phase – and rarely modified to reflect design changes as the product is built. As a consequence, most design documentation is hopelessly out of date by the time the original product is released into production. To quote from the book:

Common sense would tell you that the design documentation, produced as the product is being built, would be an important basis for those undesign tasks. But common sense, in this case, would be wrong. As the product is built, the as-built program veers more and more away from the original design specifications. Ongoing maintenance drives the specs and product even further apart. The fact of the matter is, design documentation is almost completely untrustworthy when it comes to maintaining a software product. The result is, almost all of that undesign work involves reading of code (which is invariably up to date) and ignoring the documentation (which commonly is not).

So, one of the main reasons maintenance work is hard is that the programmer has to expend considerable effort in decoding someone else’s code (some might argue that this is the most time consuming part of undesign). Programmers know that it is hard to infer what a program does by reading it, so the word “code” in the previous sentence could well be used in the sense of code as an obfuscated or encrypted message. As Charles Simonyi said in response to an Edge question:

 Programmers using today’s paradigm start from a problem statement, for example that a Boeing 767 requires a pilot, a copilot, and seven cabin crew with various certification requirements for each—and combine this with their knowledge of computer science and software engineering—that is how this rule can be encoded in computer language and turned into an algorithm. This act of combining is the programming process, the result of which is called the source code. Now, programming is well known to be a difficult-to-invert function, perhaps not to cryptography’s standards, but one can joke about the possibility of the airline being able to keep their proprietary scheduling rules secret by publishing the source code for the implementation since no one could figure out what the rules were—or really whether the code had to do with scheduling or spare parts inventory—by studying the source code, it can be that obscure.

Glass offers up one final maintenance-related fact in his book (Fact 45):

 Better software engineering leads to more maintenance, not less.

Huh? How’s that possible?

The answer is actually implicit in the previous facts and Simonyi’s observation: in the absence of documentation, the ease with which modifications can be made is directly related to the ease with which the code can be understood. Well designed systems are easier to understand, and hence can be modified more quickly. So, in a given time interval, a well designed system will have more modifications done to it than one that is not so well designed. Glass mentions that this is an interesting manifestation of Fact 43: Maintenance as a solution, rather than a problem.

Towards the end of the book, Glass presents the following fallacy regarding maintenance:

The way to predict future maintenance costs and to make product replacement decisions is to look at past cost data.

The reason that prediction based on past data doesn’t work is that a plot of maintenance costs vs. time has a bathtub shape. Initially, when a product is just released, there is considerable maintenance work (error fixing and enhancements) done on it. This decreases over time, until it plateaus out. This is the “stable” region, corresponding to the period when the product is being used with relatively few modifications or error fixes. Finally, towards the end of the product’s useful life, enhancements and error fixes become more expensive as technology moves on and/or the product begins to push the limits of its design. At this point costs increase again, often quite steeply. The point Glass makes is that, in general, one does not know where the product is on this bathtub curve. Hence, using past data to make predictions is fraught with risk – especially if one is near an inflection point, where the shape of the curve is changing. So what’s the solution? Glass suggests asking customers about their expectations regarding the future of the product, rather than trying to extrapolate from past data.
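The bathtub shape, and why extrapolating from the flat middle misleads, can be sketched with a toy cost function. The functional form and numbers below are purely illustrative, not Glass’s data:

```python
def maintenance_cost(t: float) -> float:
    """Toy bathtub-shaped maintenance cost over a product's life (t in [0, 1]).

    High just after release (fixes and early enhancements), flat in the
    stable middle, and rising steeply near end of life as the product
    outgrows its design. All numbers are illustrative only.
    """
    early = 10.0 * (1.0 - t) ** 6   # post-release fixing and enhancement
    late = 10.0 * t ** 6            # end-of-life strain on the original design
    stable = 1.0                    # background cost in the stable period
    return early + stable + late

# Costs sampled in the stable middle look low and flat; extrapolating from
# them badly underestimates both ends of the curve.
mid_life = maintenance_cost(0.5)
assert mid_life < maintenance_cost(0.05)   # just after release
assert mid_life < maintenance_cost(0.95)   # near end of life
```

The asserts capture the trap: a product sitting near either lip of the bathtub has a cost history that says almost nothing about what comes next.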

Finally, Glass has this to say about replacing software:

Most companies find that retiring an existing software product is nearly impossible. To build a replacement requires a source of the requirements that match the current version of the product, and those requirements probably don’t exist anywhere. They’re not in the documentation because it wasn’t kept up to date. They’re not to be found from the original customers or users or developers because those folks are long gone…They may be discernible from reverse engineering the existing product, but that’s an error-prone and undesirable task that hardly anyone wants to tackle. To paraphrase an old saying, “Old software never dies, it just tends to fade away.”

And it’s the maintenance programmer who extends its life, often way beyond original design and intent. So, maintenance matters because it adds complexity to the  legacy of legacy software. But above all it matters because it is a solution, not a problem.

Written by K

July 16, 2009 at 10:17 pm