Archive for the ‘Corporate IT’ Category
The legacy of legacy software
Introduction
On a recent ramble through Google Scholar, I stumbled on a fascinating paper by Michael Mahoney entitled, What Makes the History of Software Hard. History can offer interesting perspectives on the practice of a profession. So it is with this paper. In this post I review the paper, with an emphasis on the insights it provides into the practice of software development.
Mahoney’s thesis is that,
The history of software is the history of how various communities of practitioners have put their portion of the world into the computer. That has meant translating their experience and understanding of the world into computational models, which in turn has meant creating new ways of thinking about the world computationally and devising new tools for expressing that thinking in the form of working programs….
In other words, software – particularly application software – embodies real world practices. As a consequence,
…the models and tools that constitute software reflect the histories of the communities that created them and cannot be understood without knowledge of those histories, which extend beyond computers and computing to encompass the full range of human activities…
This, according to Mahoney, is what makes the history of software hard.
The standard history of computing
The standard (textbook) history of computing is hardware-focused: a history of computers rather than computing. The textbook version follows a familiar tune starting with the abacus and working its way up via analog computers, ENIAC, mainframes, micros, PCs and so forth. Further, the standard narrative suggests that each of these were invented in order to satisfy a pre-existing demand, which makes their appearance almost inevitable. In Mahoney’s words,
…Just as it places all earlier calculating devices on one or more lines leading toward the electronic digital computer, as if they were somehow all headed in its direction, so too it pulls together the various contexts in which the devices were built, as if they constituted a growing demand for the invention of the computer and as if its appearance was a response to that demand.
Mahoney says that this is misleading because,
…If people have been waiting for the computer to appear as the desired solution to their problems, it is not surprising that they then make use of it when it appears, or indeed that they know how to use it…
Further, it
…sets up a narrative of revolutionary impact, in which the computer is brought to bear on one area after another, in each case with radically transformative effect…
The second point – revolutionary impact – is interesting because we still suffer its fallout: just about every issue of any trade journal has an article hyping the Next Big Computing Revolution. It seems that their writers are simply taking their cues from history. Mahoney puts it very well,
One can hardly pick up a journal in computing today without encountering some sort of revolution in the making, usually proclaimed by someone with something to sell. Critical readers recognise most of it as hype based more on future promise than present performance…
The problem with revolutions, as Mahoney notes, is that they attempt to erase (or rewrite) history, ignoring the real continuities and connections between present and the past,
Nothing is in fact unprecedented, if only because we use precedents to recognise, accommodate and shape the new…
CIOs and other decision makers, take note!
But what about software?
The standard history of computing doesn’t say much about software,
To the extent that the standard narrative covers software, the story follows the generations of machines, with an emphasis on systems software, beginning with programming languages and touching—in most cases, just touching—on operating systems, at least up to the appearance of time-sharing. With a nod toward Unix in the 1970s, the story moves quickly to personal computing software and the story of Microsoft, seldom probing deep enough to reveal the roots of that software in the earlier period.
As far as applications software is concerned – whether in construction, airline ticketing or retail – the only accounts that exist are those of pioneering systems such as the Sabre reservation system. Typically these efforts focus on the system being built, excluding any context and connection to the past. There are some good “pioneer style” histories: an example is Scott Rosenberg’s book Dreaming in Code – an account of the Chandler software project. But these are exceptions rather than the rule.
In the revolutionary model, people react to computers. In reality, though, it’s the opposite: people figure out ways to use computers in their areas of expertise. They design and implement programs to make computers do useful things. In doing so, they make choices:
Hence, the history of computing, especially of software, should strive to preserve human agency by structuring its narratives around people facing choices and making decisions instead of around impersonal forces pushing people in a predetermined direction. Both the choices and the decisions are constrained by the limits and possibilities of the state of the art at the time, and the state of the art embodies its history to that point.
The early machines of the 1940s and 50s were almost solely dedicated to numerical computations in the mathematical and physical sciences. Thereafter, as computing became more “mainstream” other communities of practitioners started to look at how they might use computers:
These different groups saw different possibilities in the computer, and they had different experiences as they sought to realize those possibilities, often translating those experiences into demands on the computing community, which itself was only taking shape at the time.
But these different communities have their own histories and ways of doing things – i.e. their own, unique worlds. To create software that models these worlds, the worlds have to be translated into terms the computer can “understand” and work with. This translation is the process of software design. The software models thus created embody practices that have evolved over time. Hence, the models also reflect the histories of the communities that create them.
Models are imperfect
There is a gap between models and reality, though. As Mahoney states,
…Programming is where enthusiasm meets reality. The enduring experience of the communities of computing has been the huge gap between what we can imagine computers doing and what we can actually make them do.
This led to the notion of a “software crisis” and calls to reform the process of software development, which in turn gave rise to the discipline of software engineering. Many improvements resulted: better tools, more effective project management, high-level languages etc. But all these, as Brooks pointed out in his classic paper, addressed issues of implementation (writing code), not those of design (translating reality into computable representations). As Mahoney states,
…putting a portion of the world into the computer means designing an operative representation of that portion of the world that captures what we take to be its essential features. This has proved, as I say, no easy task; on the contrary it has proved difficult, frustrating and in some cases disastrous.
The problem facing the software historian is that he or she has to uncover the problem context and reality as perceived by the software designer, and thus reach an understanding of the design choices made. This is hard to do because that context is implicit in the software artefact the historian studies. Documentation is rarely any help here because,
…what programs do and what the documentation says they do are not always the same thing. Here, in a very real sense, the historian inherits the problems of software maintenance: the farther the program lies from its creators, the more difficult it is to discern its architecture and the design decisions that inform it.
There are two problems here:
- That software embodies a model of some aspect of reality.
- The only explanation of the model is the software itself.
As Mahoney puts it,
Legacy code is not just old code, but rather a continuing enactment, an operative representation, of the domain knowledge embodied in it. That may explain the difficulties software engineers have experienced in upgrading and replacing older systems.
Most software professionals will recognise the truth of this statement.
The legacy of legacy code
The problem is that new systems promise much, but are expensive and pose too many risks. As always continuity must be maintained, but this is nigh impossible because no one quite understands the legacy bequeathed by legacy code: what it does, how it does it and why it was designed so. So, customers play it safe and legacy code lives on. Despite all the advances in software engineering, software migrations and upgrades remain fraught with problems.
Mahoney concludes with the following play on the word “legacy”,
This situation (the gap between the old and the new) should be of common interest to computer people and to historians. Historians will want to know how it developed over several decades and why software systems have not kept pace with advances in hardware. That is, historians are interested in the legacy. Even as computer scientists wrestle with a solution to the problem the legacy poses, they must learn to live with it. It is part of their history, and the better they understand it, the better they will be able to move on from it.
This last point should be of interest to those running software development projects in corporate IT environments (and to a lesser extent those developing commercial software). An often unstated (but implicit) requirement is that the delivered software must maintain continuity between the past and present. This is true even for systems that claim to represent a clean break from the past; one never has the luxury of a completely blank slate, there are always arbitrary constraints placed by legacy systems. As Fred Brooks mentions in his classic article No Silver Bullet,
…In many cases, the software must conform because it is the most recent arrival on the scene. In others, it must conform because it is perceived as the most conformable. But in all cases, much complexity comes from conformation to other interfaces…
So, the legacy of legacy software is to add complexity to projects intended to replace it. Mahoney’s concluding line is therefore just as valid for project managers and software designers as it is for historians and computer scientists: project managers and software designers must learn to live with and understand this complexity before they can move on from it.
IT does matter
Does IT matter? This question has been debated endlessly since Nicholas Carr published his influential article over five years ago. Carr argues that the ubiquity of information technology has diminished its strategic importance: in other words, since every organisation uses IT in much the same way, it no longer confers a competitive advantage. Business executives and decision-makers who are familiar with Carr’s work will find a readymade rationale for restructuring their IT departments, or even doing away with them altogether. If IT isn’t a strategic asset, why bother having an in-house IT department? Salaries, servers and software add up to a sizeable stack of dollars. To an executive watching costs, particularly in these troubled times, the argument to outsource IT is compelling. Compelling, maybe, but misguided. In this post I explain why I think so.
About a year ago, I wrote a piece entitled Whither Corporate IT, where I reflected on what the commoditisation of IT meant for those who earn their daily dollar in corporate IT. In that article, I presumed that commoditisation is inevitable, thus leaving little or no room for in-house IT professionals as we know them. I say “presumed” because I had taken the inevitability of commoditisation to be a given – basically because Carr said so. Now, a year later, I’d like to take a closer look at that presumed inevitability because it is actually far from obvious that everything the IT crowd does (or should be doing!) can be commoditised as Carr suggests.
The commoditisation of IT has a long history. The evolution of the computer from the 27 tonne ENIAC to the featherweight laptop is but one manifestation of this: the former was an expensive, custom-built machine that needed an in-house supporting crew of several technicians and programmers whereas the latter is a product that can be purchased off the shelf in a consumer electronics store. More recently, IT services such as those provided by people (e.g. service desk) and software (e.g. email) have also been packaged and sold. In his 2003 article, and book published a little over a year ago, Carr extrapolates this trend towards “productising” technology to an extreme, where IT becomes a utility like electricity or water.
The IT-as-utility argument focuses on technology and packaged services. It largely ignores the creative ways in which people adapt and use technology to solve business problems. It also ignores the fact that software is easy to adapt and change. As Scott Rosenberg has noted in his book, Dreaming in Code
“…of all the capital goods in which businesses invest large sums, software is uniquely mutable. The gigantic software packages for CRM and ERP that occupy the lives of the CTOs and CIOs of big corporations may be cumbersome and expensive. But they are still made of “thought-stuff”. And so every piece of software that gets used gets changed as people decide they want to adapt it for some new purpose…”
Some might argue that packaged enterprise applications are rarely, if ever, modified by in-house IT staff. True. But it is also undeniable that no two deployments of an enterprise application are ever identical; each has its own characteristics and quirks. When viewed in the context of an organisation’s IT ecosystem – which includes the application mix, data, interfaces etc – this uniqueness is even more distinct. I would go so far as to suggest that it often reflects the unique characteristics and quirks of the organisation itself.
Perhaps an example is in order here:
Consider a company that uses a CRM application. The implementation and hosting of the application can be outsourced, as it often is. Even better, organisations can often purchase such applications as a service (this CRM vendor is a good example). The latter option is, in fact, a form of IT as a utility – the purchasing organisation is charged a usage-based fee for the service, in much the same way as one is charged (by the meter) for water or electricity. Let’s assume that our company has chosen this option to save costs.

Things seem to be working out nicely: costs are down (primarily because of the reduction in IT headcount); the application works as advertised; there’s little downtime, and routine service requests are handled in an exemplary manner. All’s well with the world until…inevitably…someone wants to do something that’s not covered by the service agreement: say, a product manager wants to explore the CRM data for (as yet unknown) relationships between customer attributes and sales (aka data mining). The patterns that emerge could give the company a competitive advantage in the market.

There is a problem, though. The product manager isn’t a database expert, and there’s no in-house technical expert to help her with the analysis and programming. To make progress she has to get external help. However, she’s uncomfortable with the idea of outsourcing this potentially important and sensitive work. Even with signed confidentiality agreements in place, would you outsource work that could give your organisation an edge in the market? Maybe you would if you had to, but I suspect you wouldn’t be entirely comfortable doing so.
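As a purely illustrative sketch of the kind of exploratory analysis the product manager has in mind – the customer records, field names and figures below are all invented – correlating a customer attribute with sales could start with something as simple as:

```python
# Toy CRM extract: hypothetical records, not a real vendor's schema.
from statistics import mean

customers = [
    {"tenure_years": 1, "annual_sales": 12_000},
    {"tenure_years": 3, "annual_sales": 25_000},
    {"tenure_years": 5, "annual_sales": 31_000},
    {"tenure_years": 8, "annual_sales": 52_000},
]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sd_x = sum((x - mx) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

r = pearson([c["tenure_years"] for c in customers],
            [c["annual_sales"] for c in customers])
print(round(r, 2))  # close to 1: a strong positive relationship
```

Real CRM data would of course be messier and the interesting relationships far less obvious – which is precisely why the product manager needs someone with both technical skills and business knowledge.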
My point: IT, if done right, has the potential to be much, much more than just a routine service. The example related above illustrates how IT can give an organisation an edge over its competitors.
The view of IT as service provider is a limited one. Unfortunately, that’s the view that many business leaders have. The IT-as-utility crowd know and exploit this. The trade media, with their continual focus on new technology, only help perpetuate the myth. In order to exploit existing technologies in new ways to solve business problems – and to even see the possibilities of doing so – companies need to have people who grok technology and the business. Who better to do this than an IT worker? Typically, these folks have good analytical abilities and the smarts to pick up new skills quickly, two traits that make them ideal internal consultants for a business. OK, some of them may have to work on their communication skills – going by the stereotype of the communication-challenged IT guy – but that’s far from a show-stopper.
Of course this needs to be a two-way street; a collaboration between business and IT.
In most organisations there is rarely any true collaboration between IT and the business. The fault, I think, lies with those on both sides of the fence. Folks in IT are well placed to offer advice on how applications and data can be used in new ways, but often lack a deep enough knowledge of the business to do so in any meaningful way. Those who do have this knowledge are generally loath to venture forth and sell their ideas to the business – primarily because they’re not likely to be taken seriously. On the other hand, those on the business side do not appreciate how technology can be adapted to solve specific business problems. What’s needed to bridge this gap is an ongoing dialogue between the two sides at all levels in the organisation. This is possible only with executive support, which won’t be forthcoming until business leaders appreciate the advantages that internal IT groups can offer.
Once in place, IT-business collaboration can evolve further. In an opinion piece published in CIO magazine, Andrew Rowsell-Jones describes four roles that IT can assume in an enterprise. These are:
- Transactional organisation: Here IT is an “order taker”. The business asks for what it needs and IT delivers. In such a role, IT is purely a technology provider; any innovation focuses only on improving operational efficiency. This is basically the outdated IT-as-service (and not much else) view.
- Business partner: Here IT engages with the business; it understands business needs and provides solutions appropriate to them.
- Consultant: This takes IT engagement to the next level: IT understands business issues and technology trends, and feels free to suggest solutions that will help drive business success – much like an external business/technology consultant.
- Strategic: This is the semi-mythical place all IT departments want to be: In such organisations IT is viewed as an asset that plays an important role in developing, implementing and executing the organisation’s strategy.
[Note that levels (2) and (3) are qualitatively the same: A business partner who understands the business and is viewed as an adviser by the business is really a consultant.]
These roles can be seen as describing the evolution of an IT department as it moves from an operational to a strategic function.
Moving up this value chain does not mean latching on to the latest fad in an uncritical manner. Yes, one has to keep up with and evaluate new offerings and ideas. But that apart, shiny, new technologies are best left alone until proven (preferably by others!). Even when proven, they should be used only when it makes strategic sense to do so. Business strategies are seldom realised by shoehorning organisational processes into alleged “best practices” or new technologies. Instead, organisations would be better served by a change in how IT is viewed; from service provider to business partner or, even better, consultant and advisor.
IT is more about business problem solving and innovation than about technology. Corporate IT folks must realise, believe and live this, because only then will they be able to begin to convince their business counterparts that IT really does matter.
Project portfolio management for the rest of us
Introduction
In small organisations, projects are often handled on a case-by-case basis, with little or no regard to the wider ramifications of the effort. As such organisations grow, there comes a point where it becomes necessary to prioritise and manage the gamut of projects from a strategic viewpoint. Why? Well, because if not, projects are undertaken on a first-come-first-served basis or worse, based on who makes the most noise (also known as the squeakiest wheel). Obviously, neither of these approaches serves the best interests of the organisation. The issue of prioritising projects is addressed by Project Portfolio Management or PPM (which should be distinguished from IT Portfolio Management). This post presents a simple approach to PPM; one that can be put to immediate use in smaller organisations which have grown to a point where an ad-hoc approach to multiple projects is starting to hurt.
Let’s begin with a few definitions:
Portfolio: The prioritised set of all projects and programs in an organisation.
Program: A set of multiple, interdependent projects which (generally, but not always) contribute to a single (or small number of) strategic objectives.
Project: A unique effort with a defined beginning and end, aimed at creating specific deliverables using defined resources.
As per the definition, an organisation’s project portfolio spans the entire application and infrastructure development effort within the organisation. In a nutshell: the basic aim of PPM is to ensure that the projects undertaken are aligned with the strategic objectives of the organisation. Clearly then, strategy precedes PPM – one can’t, by definition, have the latter without the former. This is a critical issue that is sometimes overlooked: the executive board is unlikely to be enthused by PPM unless there are demonstrable strategic benefits that flow from it.
It is worth pointing out that there are several program and portfolio management methodologies, each appropriate for a particular context. This post outlines a light-weight approach, geared towards smaller organisations.
Project portfolio management in three minutes
The main aim of PPM is to ensure that the projects undertaken within the organisation are aligned with its strategy. Outlined below is an approach to PPM that is aimed at doing this.
The broad steps in managing a project portfolio are:
- Develop project evaluation criteria.
- Develop project balancing criteria. Note: Steps (1) and (2) are often combined into a single step.
- Compile a project inventory.
- Score projects in the inventory according to the criteria developed in step (1).
- Balance the portfolio based on the criteria developed in step (2). Note: Steps (4) and (5) are often combined into one step.
- Authorise projects based on steps (4) and (5), subject to resource constraints and interdependencies.
- Review the portfolio.
I elaborate briefly on each of these below.
1. Develop project evaluation criteria: The criteria used to evaluate projects are obviously central to PPM, as they determine which projects are given priority. Suggested criteria include:
- Fit with strategic objectives of company.
- Improved operational efficiency
- Improved customer satisfaction
- Cost savings
Typically, organisations use a numerical scale for each criterion (1–5 or 1–10), with a weighting assigned to each (0 < weighting < 1); the weightings should add up to 1. Note that the above criteria are only examples. Appropriate criteria would need to be drawn up in consultation with senior management.
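By way of illustration, the weighted-scoring mechanics can be sketched in a few lines of Python. The criteria names, weights and scores below are invented for the example, not prescribed by any PPM methodology:

```python
# Hypothetical evaluation criteria and weights (weights sum to 1).
CRITERIA_WEIGHTS = {
    "strategic_fit": 0.4,
    "operational_efficiency": 0.2,
    "customer_satisfaction": 0.2,
    "cost_savings": 0.2,
}

def project_score(scores):
    """Weighted sum of per-criterion scores, each on a 1-5 scale."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

# Example: a made-up CRM upgrade project.
crm_upgrade = {"strategic_fit": 5, "operational_efficiency": 3,
               "customer_satisfaction": 4, "cost_savings": 2}
print(round(project_score(crm_upgrade), 2))  # 0.4*5 + 0.2*3 + 0.2*4 + 0.2*2 = 3.8
```

Running every project in the inventory through the same function yields the ranked list used in the balancing and authorisation steps.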
2. Develop balancing criteria: These criteria are used to ensure that the portfolio is balanced, very much like a balanced financial portfolio (on second thoughts, perhaps, this analogy doesn’t inspire much confidence in these financially turbulent times). Possible criteria include:
- Risk vs. reward.
- Internal focus vs. External (market) focus.
- External vs. internal development
3. Compile a project inventory: At its simplest this is a list of projects. Ideally the inventory should also include a business case for each project, outlining the business rationale, high level overview of implementation alternatives, cost-benefit analysis etc. Further, some organisations also include a high-level plan (including resource requirements) in the inventory.
4. Score projects: Ideally this should be done collaboratively between all operational and support units within the organisation. However, if scoring and balancing criteria are set collaboratively, scoring projects may be a straightforward, non-controversial process. The end result is a ranked list of projects.
5. Balance the portfolio: Adjust rankings arrived at in (4) based on the balancing criteria. The aim here is to ensure that the active portfolio contains the right mix of projects.
6. Authorise projects: Projects are authorised based on the rankings arrived at in the previous step, subject to constraints (financial, resource etc.) and interdependencies. Again, this process should be uncontroversial provided the previous steps are done using a consultative approach. Typically, a cut-off score is set, and all projects above the cut-off are authorised. Sounds easy enough, and it is. But it can be an exercise in managing disappointment, as executives whose projects don’t make the cut are prone to go into a sulk.
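A minimal sketch of such an authorisation pass, assuming a greedy allocation by score with an overall budget as the only constraint (project names, scores and costs are all made up, and real interdependencies would complicate the picture):

```python
# (name, portfolio score, estimated cost) - illustrative figures only.
projects = [
    ("Data warehouse", 4.1, 300_000),
    ("CRM upgrade", 3.8, 150_000),
    ("Intranet refresh", 2.9, 80_000),
    ("Mobile app", 3.5, 200_000),
]

CUT_OFF = 3.0     # minimum score for authorisation
BUDGET = 500_000  # total funds available this cycle

def authorise(projects, cut_off, budget):
    """Authorise projects above the cut-off, highest score first,
    while the remaining budget covers their estimated cost."""
    authorised, remaining = [], budget
    for name, score, cost in sorted(projects, key=lambda p: p[1], reverse=True):
        if score >= cut_off and cost <= remaining:
            authorised.append(name)
            remaining -= cost
    return authorised

print(authorise(projects, CUT_OFF, BUDGET))
```

Here the intranet refresh falls below the cut-off and the mobile app misses out on funds – exactly the kind of outcome that calls for the disappointment management mentioned above.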
7. Review the portfolio: The project portfolio should be reviewed at regular intervals, monitoring active project progress and looking at what’s in the project pipeline. The review should evaluate active projects with a view to determining whether they should be continued or not. Projects in the pipeline should be scored and added to the portfolio, and those above the cut-off score should be authorised subject to resource availability and interdependencies.
The steps outlined above provide an overview of a suggested first approach to PPM for organisations beginning down the portfolio management path. As mentioned earlier, this is one approach; there are many others.
Conclusion
Organisational strategy is generally implemented through initiatives that translate to a number of programs and projects. Often these initiatives have complex interdependencies and high risks (not to mention a host of other characteristics). Project portfolio management, as outlined in this note, offers a transparent way to ensure that the organisation gets the most bang for its project buck – i.e. that projects are implemented in order of strategic priority.

