Eight to Late

Sensemaking and Analytics for Organizations

Archive for the ‘Management’ Category

The illusion of enterprise risk management – a paper review


Introduction

Enterprise risk management (ERM) refers to the process by which uncertainties are identified, analysed and managed from an organization-wide perspective. In principle such a perspective enables organisations to deal with risks in a holistic manner, avoiding the silo mentality that plagues much of risk management practice.  This is the claim made of ERM at any rate, and most practitioners accept it as such.  However, whether the claim really holds is another matter altogether. Unfortunately,  most of the available critiques of ERM  are written for academics or risk management experts. In this post I summarise a critique of ERM presented in a paper by Michael Power entitled, The Risk Management of Nothing.

I’ll begin with a brief overview of ERM frameworks and then summarise the main points of the paper along with some of my comments and annotations.

 ERM Frameworks and Definitions

What is ERM?

The best way to answer this question is to look at a couple of well-known ERM frameworks, one from the Casualty Actuarial Society (CAS) and the other from the Committee of Sponsoring Organisations of the Treadway Commission (COSO).

CAS defines ERM as:

… the discipline by which an organization in any industry assesses, controls, exploits, finances, and monitors risks from all sources for the purpose of increasing the organization’s short- and long-term value to its stakeholders.

See this article for an overview of ERM from an actuarial perspective.

COSO defines ERM as:

…a process, effected by an entity’s board of directors, management and other personnel, applied in strategy setting and across the enterprise, designed to identify potential events that may affect the entity, and manage risk to be within its risk appetite, to provide reasonable assurance regarding the achievement of entity objectives.

The term risk appetite in the above definition refers to the risk an organisation is willing to bear. See the first article in the  June 2003 issue of Internal Auditor for more on the COSO perspective on ERM.

In both frameworks, the focus is very much on quantifying risks through (primarily) financial measures and on establishing accountability for managing these risks in a systematic way.

All this sounds very sensible and uncontroversial. So, where’s the problem?

The problems with ERM

The author of the paper begins with the observation that the basic aim of ERM is to identify risks that can affect an organisation’s objectives and then design controls and mitigation strategies that reduce these risks (collectively) to below a predetermined  value that  is specified by the organisation’s risk appetite. Operationally, identified risks are monitored and corrective action is taken when they go beyond limits specified by the controls, much like the operation of a thermostat.
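The thermostat-style control loop described above can be sketched in a few lines of code. This is purely a toy illustration – the risk names, exposure numbers and limits below are all hypothetical:

```python
# A toy sketch of the "thermostat" view of risk management: exposures are
# monitored against control limits, and corrective action is triggered
# whenever a limit is breached. All names and numbers are hypothetical.

def monitor_risks(exposures, limits):
    """Return the risks whose exposure exceeds the control limit,
    along with the size of the breach."""
    breaches = {}
    for risk, exposure in exposures.items():
        limit = limits.get(risk)
        if limit is not None and exposure > limit:
            breaches[risk] = exposure - limit
    return breaches

exposures = {"credit": 1.2, "operational": 0.4, "market": 0.9}
limits = {"credit": 1.0, "operational": 0.5, "market": 1.0}

for risk, excess in monitor_risks(exposures, limits).items():
    print(f"{risk}: limit exceeded by {excess:.2f} - corrective action needed")
```

The point of the analogy is that such a loop responds only to the risks it has been told to watch – which is precisely the limitation Power goes on to criticise.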

In this view, risk management is a mechanistic process.  Failures of risk management are seen more as being due to “not doing it right” (implementation failure) or politics getting in the way (organizational friction), rather than a problem with the framework itself. The basic design of the framework is rarely questioned.

Contrary to common wisdom, the author of the paper believes that the design of ERM is flawed in the following three ways:

  1. The idea of a single, organisation-wide risk appetite is simplistic.
  2. The assumption that risk can be dealt with by detailed, process-based rules (suitable for audit and control) is questionable.
  3. The undue focus on developing financial metrics and controls blinds it to “bigger picture”, interconnected risks, because these cannot be quantified or controlled by such mechanisms.

We’ll now take a look at each of the above in some detail.

Appetite vs. appetisation

As mentioned earlier, risk appetite is defined as the risk the organisation is willing to bear. Although ERM frameworks allow for qualitative measures of risk appetite, most organisations implementing ERM tend to prefer quantitative ones. This is a problem because the definition of risk appetite can vary significantly across an organisation. For example, the sales and audit functions within an organisation could (will!) have different appetites for risk. Another example, familiar to anyone who reads the news, is the significant gap that usually exists between the risk appetites of financial institutions and regulatory authorities.

The difference in risk appetites of different stakeholder groups is a manifestation of the fact that risk is a social construct – different stakeholder groups view a given risk in different ways, and some may not even see certain risks as risks (witness the behaviour of certain financial “masters of the universe”).

Since a single, organisation-wide risk appetite is difficult to come up with, the author suggests a different approach – one which takes into account the multiplicity of viewpoints in an organisation; a process he calls “risk appetising”. This involves getting diverse stakeholders to reach agreement on what constitutes risk appetite. Power argues that this process of reconciling different viewpoints of risk would lead to a more realistic view of the risk the organisation is willing to bear. Quoting from the paper:

Conceptualising risk appetising as a process might better direct risk management attention to where it has likely been lacking, namely to the multiplicity of interactions which shape operational and ethical boundaries at the level of organizational practice. COSO-style ERM principles effectively limit the concept of risk appetite within a capital measurement discourse. Framing risk appetite as the process through which ethics and incentives are formed and reformed would not exclude this technical conception, but would bring it closer to the insights of several decades of organization theory.

Explicitly acknowledging the diversity of viewpoints on risk is likely to be closer to reality because:

…a conflictual and pluralistic model is more descriptive of how organizations actually work, and makes lower demands on organizational and political rationality to produce a single ‘appetite’ by explicitly recognising and institutionalising processes by which different appetites and values can be mediated.

Such a process is difficult because it involves getting people who have different viewpoints to agree on what constitutes a sensible definition of risk appetite.

A process bias

A bigger problem, in Power’s view, is that the ERM frameworks overemphasise financial / accounting measures and processes as a means of quantifying and controlling risk. As he puts it ERM:

… is fundamentally an accounting-driven blueprint which emphasises a controls-based approach to risk management. This design emphasis means that efforts at implementation will have an inherent tendency to elaborate detailed controls with corresponding documents trails.

This is a problem because it leads to a “rule-based compliance” mentality wherein risks are managed in a mechanical manner, using bureaucratic processes as a substitute for real thought about risks and how they should be managed. Such a process may work in a make-believe world where all risks are known, but is unlikely to work in one in which there is a great deal of ambiguity.

Power makes the important point that rule-based compliance chews up organizational resources. The tangible effort expended on compliance serves to reassure organizations that they are doing something to manage risks.  This is dangerous because it lulls them into a false sense of security:

Rule-based compliance lays down regulations to be met, and requires extensive evidence, audit trails and box ‘checking’. All this demands considerable work and there is daily pressure on operational staff to process regulatory requirements. Yet, despite the workload volume pressure, this is also a cognitively comfortable world which focuses inwards on routine systems and controls. The auditability of this controls architecture can be theorized as a defence against anxiety and enables organizational agents to feel that their work conforms to legitimised principles.

In this comfortable, prescriptive world of process-based risk management, there is little time to imagine and explore what (else) could go wrong. Further, the latter is often avoided because it is a difficult and often uncomfortable process:

…the imagination of alternative futures is likely to involve the production of discomfort, as compared with formal ‘comfort’ of auditing. The approach can take the form of scenario analysis in which participants from different disciplines in an organization can collectively track the trajectory of potential decisions and events. The process begins as an ‘encounter’ with risk and leads to the confrontation of limitation and ambiguity.

Such a process necessarily involves debate and dialogue – it is essentially a deliberative process. And as Power puts it:

The challenge is to expand processes which support interaction and dialogue and de-emphasise due process – both within risk management practice and between regulator and regulated.

This is right of course, but that’s not all:  a lot of other process-focused disciplines such as project management would also benefit by acknowledging and responding to this challenge.

A limited view of embeddedness

One of the imperatives of ERM is to “embed” risk management within organisations. Among other things, this entails incorporating risk management explicitly into job descriptions, and making senior managers responsible for managing risks. Although this is a step in the right direction, Power argues that the concept of embeddedness as articulated in ERM remains limited because it focuses on specific business entities, ignoring the wider environment and context in which they exist. The essential (but not always obvious) connections between entities are not necessarily accounted for. As Power puts it:

ERM systems cannot represent embeddedness in the sense of interconnectedness; its proponents seem only to demand an intensification of embedding at the individual entity level. Yet, this latter kind of embedding of a compliance driven risk management, epitomised by the Sarbanes-Oxley legislation, is arguably a disaster in itself, by tying up resources and, much worse, cognition and attention in ‘auditized’ representations of business processes.

In short: the focus on following a process-oriented approach to risk management – as mandated by frameworks – has the potential to de-focus attention from risks that are less obvious, but are potentially more significant.

Addressing the limitations

Power believes the flaws in ERM can be addressed by looking to the practice of business continuity management (BCM). BCM addresses the issue of disaster management – i.e. how to keep an organisation functioning in the event of a disaster. Consequently, there is a significant overlap between the aims of BCM and ERM. However, unlike ERM, BCM draws specialists from different fields and emphasizes collective action. Such an approach is therefore more likely to take a holistic view of risk, and that is the real point.

Regardless of the approach one takes, the point is to involve diverse stakeholders and work towards a shared (enterprise-wide) understanding of risks. Only then will it be possible to develop a risk management plan that incorporates the varying, even contradictory, perspectives that exist within an organisation. There are many techniques to work towards a shared understanding of risks, or any other issues for that matter. Some of these are discussed at length in my book.

Conclusion

Power suggests that ERM, as articulated by bodies such as CAS and COSO, is flawed because:

  1. It attempts to quantify risk appetite at the organizational level – an essentially impossible task because different organizational stakeholders will have different views of risk. Risk is a social construct.
  2. It advocates a controls and rule-based approach to managing risks. Such a prescriptive “best” practice approach discourages debate and dialogue about risks. Consequently, many viewpoints are missed and quite possibly, so are many risks.
  3. Despite the rhetoric of ERM, implemented risk management controls and processes often overlook connections and dependencies between entities within organisations. So, although risk management appears to be embedded within the organisation, in reality it may not be so.

Power suggests that ERM practice could learn a few lessons from Business Continuity Management (BCM), in particular about the interconnected nature of business risks and the collective action needed to tackle them. Indeed, any approach that attempts to reconcile diverse risk viewpoints will be a huge improvement on current practice. Until then ERM will continue to be an illusion, offering false comfort to those who are responsible for managing risk.

Written by K

July 25, 2012 at 10:31 pm

On the nonlinearity of organisational phenomena


Introduction

Some time ago I wrote a post entitled, Models and Messes – from best practices to appropriate practices, in which I described the deep connection between the natural sciences and 20th century management.  In particular, I discussed how early management theorists took inspiration from physics. Quoting from that post:

Given the spectacular success of mathematical modeling in the physical and natural sciences, it is perhaps unsurprising that early management theorists attempted to follow the same approach. Fredrick Taylor stated this point of view quite clearly in the introduction to his classic monograph, The Principles of Scientific Management…Taylor’s intent was to prove that management could be reduced to a set of principles that govern all aspects of work in organizations.

In Taylor’s own words, his goal was to “prove that the best management is a true science, resting upon clearly defined laws, rules and principles, as a foundation. And further to show that the fundamental principles of scientific management  are applicable to all human activities…

In the earlier post I discussed how organisational problems elude so-called scientific solutions because they are ambiguous and have a human dimension.  Now I continue the thread, introducing a concept from physics that has permeated much of management thinking, much to the detriment of managerial research and practice. The concept is that of linearity. Simply put, linearity is a mathematical expression of the idea that complex systems can be analysed in terms of their (simpler) components.  I explain this notion in more detail in the following sections.

The post is organised as follows: I begin with a brief introduction to linearity in physics and then describe its social science equivalent.  Following this, I discuss a paper that points out some pitfalls of linear thinking in organisational research and (by extrapolation) to management practice.

Linearity in physics and mathematics

A simplifying assumption underlying much of classical physics is that of equilibrium or stability. A characteristic of a system in equilibrium is that it tends to resist change.  Specifically, if such a system is disturbed, it tends to return to its original state. Of course, physics also deals with systems that are not in equilibrium – the weather, or  a spacecraft on its way to Mars  are examples of such systems.  In general, non-equilibrium systems are described by more complex mathematical models than equilibrium systems.

Now, complex mathematical models – such as those describing the dynamics of weather or even the turbulent flow of water – can only be solved numerically using computers. The key complicating factor in such models is that they consist of many interdependent variables that are combined in complex ways. 19th and early 20th century physicists, who had no access to computers, had to resort to some tricks in order to make the mathematics of such systems tractable. One of the most common simplifying tricks was to treat the system as being linear. Linear systems have mathematical properties that roughly translate to the following in physical terms:

  1. Cause is proportional to effect (or output is proportional to input).  This property is called homogeneity.
  2. Any complex effect can be expressed as a sum of a well defined number of simpler effects.  This property is often referred to as additivity, but I prefer the term decomposability.  This notion of decomposability  is also called the principle of superposition.

In contrast, real-life systems (such as the weather) tend to be described by mathematical equations that do not satisfy the above conditions. Such systems are called nonlinear.

Linear systems are well-understood, predictable and frankly, a bit boring –   they hold no surprises and cannot display novel behaviour. The evolution of linear systems is constrained by the equations and initial conditions (where they start from). Once these are known, their future state is completely determined.  Linear systems  cannot display the  range of behaviours that are typical of complex systems. Consequently, when a complex system is converted into a linear one by simplifying the mathematical model, much of the interesting behaviour of the system is lost.
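These two properties are easy to check numerically. The sketch below uses two made-up functions (stand-ins, not models of any real physical system) to show that superposition holds for a linear response but fails as soon as a nonlinear term is added:

```python
# Checking homogeneity and additivity numerically. The two functions are
# illustrative stand-ins, not models of any real physical system.

def linear(x):
    return 3.0 * x            # output strictly proportional to input

def nonlinear(x):
    return 3.0 * x + x ** 2   # a small nonlinear term breaks superposition

x, y = 1.0, 2.0

# Linear system: both properties hold exactly
assert linear(x + y) == linear(x) + linear(y)   # additivity
assert linear(5 * x) == 5 * linear(x)           # homogeneity

# Nonlinear system: additivity fails (a cross term 2*x*y appears)
assert nonlinear(x + y) != nonlinear(x) + nonlinear(y)
print("superposition holds for linear(), fails for nonlinear()")
```

The failure of superposition is exactly why a nonlinear system cannot be analysed by decomposing it into simpler parts and adding the results back together.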

Linearity in organisational theories

It turns out that many organizational theories are based on assumptions of equilibrium (i.e. that organisations are stable) and linearity (i.e. that the socio-economic forces on the organisation are small). Much like the case of physical systems, such models will predict only small changes about the stable state – i.e. that “business as usual” will continue indefinitely. In a paper published in 1988, Andrew Abbott coined the term General Linear Reality (GLR) to describe this view of reality. GLR is based on the following assumptions:

  1. The world consists of unchanging entities which have variable attributes (eg: a fixed organisation with a varying number of employees)
  2. Small changes to attributes can have only small effects, and effects are manifested as changes to existing attributes.
  3. A given attribute can have only one causal effect – i.e. a single cause has a single effect.
  4. The sequence of events has no effect on the outcome.
  5. Entities and attributes are independent of each other (i.e. no correlation)

The connection between GLR and linearity in physics is quite evident in these assumptions.

The world isn’t linear

But reality isn’t linear – it is highly nonlinear, as many managers learn the hard way. The problem is that the tools they are taught in management schools do not equip them to deal with situations involving changing entities, feedback effects and disproportionately large effects from small causes (to mention just a few common nonlinear effects).

Nevertheless, management research is catching up with reality. For example, in a paper entitled Organizing Far from Equilibrium: Nonlinear change in organizational fields, Alan Meyer, Vibha Gaba and Kenneth Colwell highlight limitations of the GLR paradigm. The paper describes three research projects that were aimed at studying how large organisations adapt to change. Typically, when researchers plan such studies, they tacitly make GLR assumptions regarding cause-effect, independence etc. In the words of Meyer, Gaba and Colwell:

In accord with the canons of general linear reality, as graduate students each of us learned to partition the research process into sequential stages: conceptualizing, designing, observing, analyzing, and reporting. During the conceptual and design stages, researchers are enjoined to make choices that will remain in effect throughout the inquiry. They are directed, for instance, to identify theoretical models, select units and levels of analysis, specify dependent and independent variables, choose sampling frames, and so forth. During the subsequent stages of observation, analysis, and reporting, these parameters are immutable. To change them on the fly could contaminate data or be interpreted as scientific fraud. Stigma attached to “post hoc theorizing,” “data mining” and “dust-bowl empiricism” are handed down from one generation of GLR researchers to the next.

Whilst the studies were in progress, however, each of the organisations that they were studying underwent large, unanticipated changes: in one case employees went on mass strike; in another, the government changed regulations regarding competition; and in the third, boom-bust cycles caused massive changes in the business environment. The important point is that these changes invalidated GLR assumptions completely. When such “game-changing” forces are in play, it is all but impossible to define a sensible equilibrium state to which organisations can adapt.

In the last two decades a growing body of research has shown that organizations are complex systems that display emergent behaviour. Mainstream management practice is yet to catch up with these developments, but the signs are good: in the last few years, articles dealing with some of these issues have appeared in management journals that often grace the bookshelves of CEOs and senior executives.

To conclude

Mainstream management principles are based on a linear view of reality, a view that is inspired by scientific management and 19th century physics.  In reality, however, organisations evolve in ways that are substantially different from those implied by simplistic cause-effect relationships embodied in linear models.  The sciences have moved on, recognizing that most real-world phenomena are nonlinear, but much of organisational research and management practice remains mired in a linear world.  In view of this it isn’t surprising that many management “best” practices taught in business schools don’t work in the real world.

Related posts:

Models and messes – from best practices to appropriate practices

Cause and effect in management

On the origin of power laws in organizational phenomena

Written by K

July 10, 2012 at 10:48 pm

Insights, intuitions and epiphanies: some reflections on innovation and creativity


Introduction

The Merriam-Webster dictionary defines the word innovation as:

Innovation (n):  a new idea, method or device.

This definition leaves the door wide open as to what the term means: an innovation could be anything from a novel product that blows the competition away to a new way of organising paperwork that makes it easier to find the hardcopy of the contract you’re after.

Organisations hunt high and low for the magic formula that would enable them to foster and manage innovation. So management gurus, consultants and academics oblige by waxing at length on the best way to inspire and direct innovation (there has to be a process for it, right?). And there’s the paradox:  the more we chase it, the further it seems to recede. But that does not stop organisations from chasing the mirage. In this post I present a few reflections on creativity and innovation based on a couple of personal experiences.

The first story

In the early 90s I started working towards a research degree in chemical engineering at the University of Queensland. Given my theoretical leanings, I naturally gravitated towards the mathematically-oriented field of fluid dynamics. I’d spoken to a couple of folks working in the area, and finally decided to work with Tony Howes, not only because I found his work interesting, but also because I thought his quick intelligence and easygoing manner would make for a good work environment.

I spent a few weeks – or was it months – trying to define a decent research problem, but got nowhere. Tony, sensing that it was time to nudge me towards a decision, suggested a couple of problems relating to a phenomenon that is easily demonstrated in a kitchen sink. If you’re game you may want to make your way to the nearest sink and try the following:

Turn the tap on slowly until water starts to flow out as a cylindrical jet. You will notice that the jet breaks up into near spherical droplets a short distance from the mouth of the tap.

This phenomenon is called jet breakup. Instead of describing it further, I’ll follow the advice that a picture is worth several words (see figure 1).


Figure 1: A water jet breaking up

If you are interested in knowing why fluid jets tend to break up into drops, please see the next paragraph; if not, feel free to skip the bracketed section as it is not essential to the story.

[Boring details: The basic cause of breakup is surface tension – a force that makes the surface of a liquid behave like a stretched elastic skin. Surface tension arises from the unbalanced “pull” that molecules in the interior of a fluid exert on molecules at the surface. The imbalance occurs because molecules at the surface “feel” a pull only from the interior of the fluid, whereas molecules in the interior are subjected to the same force on all sides as they are surrounded by fluid. One of the effects of surface tension is that fluid bodies tend to minimise their surface area. The upshot of this for cylindrically shaped jets (such as those emerging from a tap) is that they tend to pinch off into a series of drops, because the combined surface area of the drops is less than that of the cylinder.]
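The surface-area claim at the end of the bracketed section can be checked with a quick calculation. The sketch below assumes the classical Rayleigh result that the fastest-growing disturbance on a jet of radius R has a wavelength of roughly 9R, so each drop carries the volume of one wavelength of the cylinder:

```python
from math import pi

# Does breakup reduce surface area? Compare one wavelength of the
# cylindrical jet with the single drop it collapses into (volume is
# conserved). Wavelength ~ 9 R is the classical Rayleigh estimate.

R = 1.0           # jet radius (arbitrary units)
lam = 9.0 * R     # breakup wavelength, roughly 9 R

cylinder_area = 2 * pi * R * lam              # lateral area of the segment
drop_volume = pi * R ** 2 * lam               # volume of the segment
r_drop = (3 * drop_volume / (4 * pi)) ** (1 / 3)
drop_area = 4 * pi * r_drop ** 2

print(f"cylinder segment area: {cylinder_area:.2f}")
print(f"resulting drop area:   {drop_area:.2f}")
# The drop's area comes out about 20% smaller, so breakup is
# energetically favourable - exactly the argument made above.
```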

To get back to my story: I realised that I’d already burnt up a few months of a research grant, so I agreed to work on one of the problems Tony suggested. Once I’d signed up to it, I hit the books and research journals, getting up to speed with the problem. I learnt a lot. Among other things, I learnt that the problem of jet breakup was first studied by Lord Rayleigh in 1878! I also learnt that since the late 1960s, the phenomenon of jet breakup had enjoyed a bit of a renaissance due to applications such as inkjet printing. Tony had proposed a problem of interest to the metals industry – the production of shot from jets of molten metal. However, it seemed to me that this problem was at best a minor variation on a theme that had already been done to death.

Anyway, regardless of how I felt about it, I was being paid to do research, so I plugged away at it. In the process I developed a good sense for the physics behind the phenomenon, its applications and what had been done up until then. Although I wasn’t too fired up about it, I’d also started work on modelling the molten metal shot problem. It was progress of sorts, but of the dull, desultory kind.

Then one evening in October or November 1994, I had one of those magical Aha moments…

I was washing up after dinner when I noticed a curious wave-like structure on the thin jet that emerged from the kitchen sink tap and fell onto a plate an inch or two below the tap (the dishes had piled up a while). The wave pattern was absolutely stationary and rather striking. Rather than attempt to describe it any further, I’ll just show you a photograph of the phenomenon taken by my colleague Anh Vu.


Figure 2: Stationary waves on a water jet.

The phenomenon is one that countless folks have noticed; even I’d seen it before but never paid it much attention. Having been immersed in the theory of fluid jets for so long, I realised at once that the pattern had the same underlying cause as jet breakup. I wondered if anyone had published any papers on it. Google Scholar and decent search engines weren’t available, so I rushed off to the library to find out. A few hours of searching catalogues and references confirmed that I’d stumbled onto something that could see me through my degree and perhaps even give me a couple of papers.

The next day, I told Tony about it. He was just as excited about it as I was and was more than happy for me to switch topics. I worked feverishly on the problem and within a few months had a theory that related the wavelength of the waves to jet velocity and properties of the fluid.  The work was not a major innovation, but it was novel enough to get me my degree and a couple of papers.
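For the curious, here is a back-of-envelope version of the kind of relation involved – not the actual theory I developed, which accounted for more detail. It assumes the waves are capillary waves and that a wave appears stationary when its phase speed, given by the deep-water capillary dispersion relation, matches the jet speed:

```python
from math import pi

# Stationary-wave condition: a capillary wave of phase speed c appears
# frozen on the jet when c equals the jet speed v. With the deep-water
# capillary dispersion relation c**2 = 2*pi*sigma/(rho*lam), setting
# c = v and solving for the wavelength gives:

def stationary_wavelength(sigma, rho, v):
    """Wavelength (m) of capillary waves stationary on a jet of speed v
    (m/s), for surface tension sigma (N/m) and density rho (kg/m^3)."""
    return 2 * pi * sigma / (rho * v ** 2)

# Water at room temperature, jet speed 1 m/s
lam = stationary_wavelength(sigma=0.072, rho=1000.0, v=1.0)
print(f"predicted wavelength: {lam * 1000:.2f} mm")
# Faster jets give shorter waves: doubling v quarters the wavelength.
```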

This episode taught me a few things about innovation and creativity, which I list below:

  1. Interesting opportunities lurk in unexpected places: A kitchen sink – who would have thought….
  2. …but it takes work and training to recognise opportunities for what they are: If I hadn’t had the background in the physics of fluid jets, I wouldn’t have seen the stationary waves for what they were.
  3. A sense of progress is important, even when things aren’t going well: Tony left me to my own devices initially, but then nudged me towards a productive direction when he saw I was going nowhere. This had the effect of giving me a sense of progress towards a goal (my degree), which kept my spirits up through a hard time.
  4. It is best to work on things that interest you, not those that interest others: I stuck to my primary interest (mathematical modelling) rather than do something that was not of much interest but may have been a better career choice.

 The second story

Here’s another story, from a few years later when I was working as an applied mathematician within a polymer processing laboratory.

Some background first – polymer extrusion is an industrial process that is used to create plastic tubing from raw polymer pellets. It involves melting the raw material and driving the melt through a die with the required cross-sectional profile. A common problem encountered in this process is that at high flow rates, the melt emerging from the die has sharkskin-like surface imperfections. This phenomenon is sometimes called the melt flow instability.

I was hired to work on a project to model the melt flow instability described above. I began, as researchers always do, by wading through a stack of research papers on the topic. Again this was a topic that had been over-researched in that many different groups had tried many different approaches. However none of them had answered the question definitively. I learnt a lot about modelling polymer flows (quite different from modelling flows of water-like fluids described in the earlier story) but didn’t make any progress on the problem.

Most of the other members in the research group were doing experimental projects, working in the lab doing stuff with real polymers, whilst I was engaged in modelling imaginary ones using simulations. Oddly enough, the folks engaged in the two strands of research did not meet much; I didn’t have much to do with them, and was happy working on my own little projects.

One day, after I’d been in the lab for a year or so, one of the experimentalists knocked on my door to have a chat regarding a problem he was having with a mathematical model he had developed. The reading and background work I had done up to that point enabled me to solve his problem rather quickly. Progress at last – but not in the way I’d imagined.

Encouraged by this, I started talking to others in the group and soon found that they had modelling problems that I could help with. I published a few papers through such collaborations and kept my academic score ticking along. More importantly, though, I got – for the first time – a taste of collaborative work, and I found that I really enjoyed it. One of the papers that we wrote won a minor award, which would have helped my academic career had I stayed in the field. However, later that year I decided to switch careers and move to consulting. But that’s another story…

My stint in the polymer lab, very different from my solo research experience, taught me a few more things about creativity and innovation. These are:

  1. Collaboration between diversely skilled individuals enhances creativity. It is important to interact with others, particularly professionals from other disciplines. I’m grateful to my colleagues from the lab  for drawing me out of my “comfort zone” of theoretical work.
  2. Being part of a larger effort does not preclude creativity and innovation – although I did not do any experiments, I was able to develop models that explained some of the phenomena that my colleagues found.
  3. Even modest contributions add value to the end product – great insights and epiphanies aren’t necessary – none of the modelling work that I did was particularly profound or new. It was all fairly routine stuff, done using existing methods and algorithms. Yet, my contributions to the research added a piece that was essential for completeness.

 Reflections and wrap-up

The events related above occurred in a research environment, but the lessons I took away have, I believe, a much wider applicability. Further, although the two stories are quite different – and hold different lessons – there are a few  common themes that run through them. These are:

  1. When doing creative work, one invariably ends up with results that one didn’t intend or expect to find.
  2. A shift in perspective may help in generating new ideas. Looking at things from someone else’s point of view might be just the spark you need.
  3. Things rarely go according to plan, but it is important to keep one’s spirits up.
  4. Background is important; it is critical to learn/read as much as possible about the problem you’re attempting to solve.

The above conclusions hold a warning for those who might over-plan and control innovative or creative activities. In both cases I started out by defining what I intended to solve, but ended up solving something else. By the yardstick of a project plan, I failed. But by a more flexible measure, I did alright. By definition, the process of discovery is unpredictable and somewhat opportunistic – one has to be willing and able to redefine goals as one proceeds, and at times even throw everything away and start from scratch.

Afterword

I wrote this piece in 2009, intending to post it on Eight to Late. Around that time Paul Culmsee and I were just starting out on our book, The Heretic’s Guide to Best Practices. I was pretty sure this piece would find a place in the book so I held off from blogging it. As it turned out, a modified version ended up in Chapter 4:  Managing Innovation: The Demise of Command and Control.

Written by K

May 17, 2012 at 10:05 pm

Posted in Management
