Archive for the ‘Causality’ Category
On the nonlinearity of organisational phenomena
Introduction
Some time ago I wrote a post entitled, Models and Messes – from best practices to appropriate practices, in which I described the deep connection between the natural sciences and 20th century management. In particular, I discussed how early management theorists took inspiration from physics. Quoting from that post:
Given the spectacular success of mathematical modeling in the physical and natural sciences, it is perhaps unsurprising that early management theorists attempted to follow the same approach. Frederick Taylor stated this point of view quite clearly in the introduction to his classic monograph, The Principles of Scientific Management…Taylor’s intent was to prove that management could be reduced to a set of principles that govern all aspects of work in organizations.
In Taylor’s own words, his goal was to “prove that the best management is a true science, resting upon clearly defined laws, rules and principles, as a foundation. And further to show that the fundamental principles of scientific management are applicable to all human activities…”
In the earlier post I discussed how organisational problems elude so-called scientific solutions because they are ambiguous and have a human dimension. Now I continue the thread, introducing a concept from physics that has permeated much of management thinking, much to the detriment of managerial research and practice. The concept is that of linearity. Simply put, linearity is a mathematical expression of the idea that complex systems can be analysed in terms of their (simpler) components. I explain this notion in more detail in the following sections.
The post is organised as follows: I begin with a brief introduction to linearity in physics and then describe its social science equivalent. Following this, I discuss a paper that points out some pitfalls of linear thinking in organisational research and (by extrapolation) in management practice.
Linearity in physics and mathematics
A simplifying assumption underlying much of classical physics is that of equilibrium or stability. A characteristic of a system in equilibrium is that it tends to resist change. Specifically, if such a system is disturbed, it tends to return to its original state. Of course, physics also deals with systems that are not in equilibrium – the weather, or a spacecraft on its way to Mars are examples of such systems. In general, non-equilibrium systems are described by more complex mathematical models than equilibrium systems.
Now, complex mathematical models – such as those describing the dynamics of weather or even the turbulent flow of water – can only be solved numerically using computers. The key complicating factor in such models is that they consist of many interdependent variables that are combined in complex ways. 19th and early 20th century physicists, who had no access to computers, had to resort to some tricks in order to make the mathematics of such systems tractable. One of the most common simplifying tricks was to treat the system as being linear. Linear systems have mathematical properties that roughly translate to the following in physical terms:
- Cause is proportional to effect (or output is proportional to input). This property is called homogeneity.
- Any complex effect can be expressed as a sum of a well-defined number of simpler effects. This property is often referred to as additivity, but I prefer the term decomposability. This notion of decomposability is also called the principle of superposition.
In contrast, real-life systems (such as the weather) tend to be described by mathematical equations that do not satisfy the above conditions. Such systems are called nonlinear.
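The two defining properties can be made concrete with a short numerical sketch (a hypothetical illustration, not an example from the original post): a linear function satisfies both homogeneity and additivity, while even the simplest nonlinear response violates them.

```python
# Homogeneity and additivity (superposition) for a linear system,
# and how a simple nonlinear system violates both.

def linear(x):
    return 3 * x          # output proportional to input

def nonlinear(x):
    return x ** 2         # a simple nonlinear response

x, y, a = 2.0, 5.0, 4.0

# Homogeneity: scaling the cause scales the effect by the same factor
assert linear(a * x) == a * linear(x)

# Additivity: the effect of combined causes is the sum of the
# individual effects
assert linear(x + y) == linear(x) + linear(y)

# The nonlinear system breaks both properties
assert nonlinear(a * x) != a * nonlinear(x)             # 64 vs 16
assert nonlinear(x + y) != nonlinear(x) + nonlinear(y)  # 49 vs 29
```

Superposition is precisely what lets physicists analyse a complicated linear system piece by piece and add the results; for the nonlinear function above, no such decomposition works.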
Linear systems are well-understood, predictable and frankly, a bit boring – they hold no surprises and cannot display novel behaviour. The evolution of linear systems is constrained by the equations and initial conditions (where they start from). Once these are known, their future state is completely determined. Linear systems cannot display the range of behaviours that are typical of complex systems. Consequently, when a complex system is converted into a linear one by simplifying the mathematical model, much of the interesting behaviour of the system is lost.
Linearity in organisational theories
It turns out that many organizational theories are based on assumptions of equilibrium (i.e. that organisations are stable) and linearity (i.e. that the socio-economic forces on the organisation are small). Much like the case of physical systems, such models will predict only small changes about the stable state – i.e. that “business as usual” will continue indefinitely. In a paper published in 1988, Andrew Abbott coined the term General Linear Reality (GLR) to describe this view of reality. GLR is based on the following assumptions:
- The world consists of unchanging entities which have variable attributes (e.g. a fixed organisation with a varying number of employees)
- Small changes to attributes can have only small effects, and effects are manifested as changes to existing attributes.
- A given attribute can have only one causal effect – i.e. a single cause has a single effect.
- The sequence of events has no effect on the outcome.
- Entities and attributes are independent of each other (i.e. no correlation)
The connection between GLR and linearity in physics is quite evident in these assumptions.
The world isn’t linear
But reality isn’t linear – it is very non-linear, as many managers learn the hard way. The problem is that the tools they are taught in management schools do not equip them to deal with situations that involve changing entities due to feedback effects and disproportionately large effects from small causes (to mention just a couple of common non-linear effects).
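The “disproportionately large effects from small causes” point can be illustrated with the logistic map, a standard textbook nonlinear system (my illustration, not one from the post): two starting points that differ by one part in a million soon end up far apart.

```python
# "Small causes, large effects": the logistic map x -> r*x*(1-x) is a
# classic nonlinear system in which tiny disturbances are amplified.

def max_divergence(x0, y0, r=4.0, steps=50):
    """Largest gap between two logistic-map trajectories over `steps`."""
    x, y, gap = x0, y0, abs(x0 - y0)
    for _ in range(steps):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        gap = max(gap, abs(x - y))
    return gap

# A disturbance of one millionth in the starting point grows into a
# macroscopic difference - linear intuition (small cause, small effect)
# fails completely.
print(max_divergence(0.2, 0.200001))
```

A linear model subjected to the same tiny disturbance would forever differ by a proportionally tiny amount; the nonlinear map amplifies it until the two trajectories bear no resemblance to each other.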
Nevertheless, management research is catching up with reality. For example, in a paper entitled Organizing Far From Equilibrium: Nonlinear changes in organizational fields, Alan Meyer, Vibha Gaba and Kenneth Colwell highlight limitations of the GLR paradigm. The paper describes three research projects that were aimed at studying how large organisations adapt to change. Typically, when researchers plan such studies, they tacitly make GLR assumptions regarding cause-effect, independence etc. In the words of Meyer, Gaba and Colwell:
In accord with the canons of general linear reality, as graduate students each of us learned to partition the research process into sequential stages: conceptualizing, designing, observing, analyzing, and reporting. During the conceptual and design stages, researchers are enjoined to make choices that will remain in effect throughout the inquiry. They are directed, for instance, to identify theoretical models, select units and levels of analysis, specify dependent and independent variables, choose sampling frames, and so forth. During the subsequent stages of observation, analysis, and reporting, these parameters are immutable. To change them on the fly could contaminate data or be interpreted as scientific fraud. Stigma attached to “post hoc theorizing,” “data mining” and “dust-bowl empiricism” are handed down from one generation of GLR researchers to the next.
Whilst the studies were in progress, however, each of the organisations that they were studying underwent large, unanticipated changes: in one case employees went on mass strike; in another, the government changed regulations regarding competition; and in the third, boom-bust cycles caused massive changes in the business environment. The important point is that these changes invalidated GLR assumptions completely. When such “game-changing” forces are in play, it is all but impossible to define a sensible equilibrium state to which organisations can adapt.
Over the last two decades, a growing body of research has shown that organizations are complex systems that display emergent behaviour. Mainstream management practice has yet to catch up with these new developments, but the signs are good: in the last few years there have been articles dealing with some of these issues in management journals which often grace the bookshelves of CEOs and senior executives.
To conclude
Mainstream management principles are based on a linear view of reality, a view that is inspired by scientific management and 19th century physics. In reality, however, organisations evolve in ways that are substantially different from those implied by simplistic cause-effect relationships embodied in linear models. The sciences have moved on, recognizing that most real-world phenomena are nonlinear, but much of organisational research and management practice remains mired in a linear world. In view of this it isn’t surprising that many management “best” practices taught in business schools don’t work in the real world.
Related posts:
Models and messes – from best practices to appropriate practices
Models and messes in management – from best practices to appropriate practices
Scientific models and management
Physicists build mathematical models that represent selected aspects of reality. These models are based on a mix of existing knowledge, observations, intuition and mathematical virtuosity. A good example of such a model is Newton’s law of gravity according to which the gravitational force between two objects (planets, apples or whatever) varies in inverse proportion to the square of the distance between them. The model was a brilliant generalization based on observations made by Newton and others (Johannes Kepler, in particular), supplemented by Newton’s insight that the force that keeps the planets revolving round the sun is the same as the one that made that mythical apple fall to earth. In essence Newton’s law tells us that planetary motions are caused by gravity and it tells us – very precisely – the effects of the cause. In short: it embodies a cause-effect relationship.
[Aside: The validity of a physical model depends on how well it stands up to the test of reality. Newton’s law of gravitation is remarkably successful in this regard: among many other things, it is the basis of orbital calculations for all space missions. The mathematical model expressed by Newton’s law is thus an established scientific principle. That said, it should be noted that models of the physical world are always subject to revision in the light of new information. For example, Newton’s law of gravity has been superseded by Einstein’s general theory of relativity. Nevertheless for most practical applications it remains perfectly adequate.]
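The precise, quantitative nature of this cause-effect relationship is easy to see in code. The following sketch (my illustration; the masses and distance are rough Earth-Moon figures) computes the force from the inverse-square law:

```python
# Newton's law of gravitation: F = G * m1 * m2 / r^2.
# The force between two bodies falls off as the square of the distance.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravitational_force(m1, m2, r):
    """Force in newtons between masses m1, m2 (kg) separated by r (m)."""
    return G * m1 * m2 / r**2

# Rough Earth-Moon figures: masses in kg, distance in metres
f1 = gravitational_force(5.97e24, 7.35e22, 3.84e8)
f2 = gravitational_force(5.97e24, 7.35e22, 2 * 3.84e8)

# Doubling the distance reduces the force to exactly a quarter: a
# precise, quantitative cause-effect relationship.
assert abs(f1 / f2 - 4.0) < 1e-9
```

It is exactly this kind of sharp, testable cause-effect statement that the early management theorists hoped to replicate, and, as discussed below, failed to.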
Given the spectacular success of modeling in the physical and natural sciences, it is perhaps unsurprising that early management theorists attempted to follow the same approach. Frederick Taylor stated this point of view quite clearly in the introduction to his classic monograph, The Principles of Scientific Management. Here are the relevant lines:
This paper has been written…to prove that the best management is a true science, resting upon clearly defined laws, rules and principles, as a foundation. And further to show that the fundamental principles of scientific management are applicable to all human activities, from our simplest individual activities to the work of great corporations, which call for the most elaborate cooperation. And briefly, through a series of illustrations, to convince the reader that whenever these principles are correctly applied, results must follow which are truly astounding…
From this it appears that Taylor’s intent was to prove that management could be reduced to a set of principles that govern all aspects of work in organizations.
The question is: how well did it work?
The origin of best practices
Over time, Taylor’s words were used to justify the imposition of one-size-fits-all management practices that ignored human individuality and the uniqueness of organisations. Although Taylor was aware of these factors, he believed commonalities were more important than differences. This thinking is alive and well to this day: although Taylor’s principles are no longer treated as gospel, their spirit lives on in the notion of standardized best practices.
There is now a plethora of standards or best practices for just about any area of management. They are often sold using scientific language, terms such as principles and proof. Consider the following passage taken from the Official PRINCE2 site:
Because PRINCE2 is generic and based on proven principles, organisations adopting the method as a standard can substantially improve their organisational capability and maturity across multiple areas of business activity – business change, construction, IT, mergers and acquisitions, research, product development and so on.
There are a couple of other things worth noting in the above passage. First, there is an implied cause-effect relationship between the “proven principles” and improvements in “organizational capability and maturity across multiple areas of business activity.” Second, as alluded to above, the human factor is all but factored out – there is an implication that this generic standard can be implemented by anyone anywhere and the results will inevitably be as “truly astounding” as Taylor claimed.
Why best practices are not the best
There are a number of problems with the notion of a best practice. I discuss these briefly below.
First, every organisation is unique. Yes, much is made of commonalities between organisations, but it is the differences that make them unique. Arguably, it is also the differences that give organisations their edge. As Stanley Deetz mentioned in his 2003 Becker lecture:
In today’s world unless you have exceptionally low labor costs, competitive advantage comes from high creativity, highly committed employees and the ability to customize products. All require a highly involved, participating workforce. Creativity requires letting differences make a difference. Most high-end companies are more dependent on the social and intellectual capital possessed by employees than financial investment.
Thoughtless standardization through the use of best practices is a sure way to lose those differences that could make a difference.
Second, in their paper entitled, De-Contextualising Competence: Can Business Best Practice be Bundled and Sold, Jonathan Wareham and Han Gerrits pointed out that organisations operate in vastly varying cultural and social environments. It is difficult to see how one-size-fits-all best practice approaches would work across such varied contexts.
Third, Wareham and Gerrits also pointed out that best practice is often tacit and socially embedded. This invalidates the notion that it can be transferred from an organization in which it works to another without substantial change. Context is all important.
Lastly, best practices are generally implemented in response to a perceived problem. However, they often address the symptoms rather than the root cause of the problem. For example, a project management process may attempt to improve delivery by better estimation and planning. However, the underlying cause – which may be poor communication or a dysfunctional relationship between users and the IT department – remains unaddressed.
In his 2003 Becker lecture, Stanley Deetz illustrated this point via the following fable:
… about a company formed by very short people. Since they were all short and they wanted to be highly efficient and cut costs, they chose to build their ceiling short and the doorways shorter so that they could have more work space in the same building. And, they were in fact very successful. As they became more and more successful, however, it became necessary for them to start hiring taller people. And, as they hired more and more tall people, they came to realize that tall people were at a disadvantage at this company because they had to walk around stooped over. They had to duck to go through the doorways and so forth. Of course, they hired organizational consultants to help them with the problem.
Initially they had time-and-motion experts come in. These experts taught teams of people how to walk carefully. Tall members learned to duck in stride so that going through the short doors was minimally inconvenient. And they became more efficient by learning how to walk more properly for their environment. Later, because this wasn’t working so well, they hired psychological consultants. These experts taught greater sensitivity to the difficulties of tall members of the organization. Long-term short members learned tolerance knowing that the tall people would come later to meetings, would be somewhat less able to perform their work well. They provided for tall people networks for support…
The parable is an excellent illustration of how best practices can end up addressing symptoms rather than causes.
Ambiguity + the human factor = a mess
Many organisational problems are ambiguous in that cause-effect relationships are unclear. Consequently, different stakeholders can have wildly different opinions as to what the root cause of a problem is. Moreover, there is no way to conclusively establish the validity of a particular point of view. For example, executives may see a delay in a project as being due to poor project management whereas the project manager might see it as being a consequence of poor scope definition or unreasonable timelines. The cause depends on who you ask and there is no way to establish who is right! Unlike problems in physics, organisational problems have a social dimension.
The visionary Horst Rittel coined the evocative term wicked problem to describe problems that involve many stakeholder groups with diverse and often conflicting perspectives. This makes such problems messy. Indeed, Russell Ackoff referred to wicked problems as messes. In his words, “every problem interacts with other problems and is therefore part of a set of interrelated problems, a system of problems…. I choose to call such a system a mess”.
Consider an example that is quite common in organisations: the question of how to improve efficiency. Management may frame this issue in terms of tighter managerial control and launch a solution that involves greater oversight. In contrast, a workgroup within the organisation may see their efficiency being impeded by bureaucratic control that results from increased oversight, and thus may believe that the road to efficiency lies in giving workgroups greater autonomy. In this case there is a clear difference between the aims of management (to exert greater control) and those of workgroups (to work autonomously). Ideally, the two ought to talk it over and come up with a commonly agreed approach. Unfortunately they seldom do. The power structure in organisations being what it is, management’s solution usually prevails and, as a consequence, workgroup morale plummets. See this post for an interesting case study on one such situation.
Summing up: a need for appropriate practice, not best practice
The great attraction of best practices, and one of the key reasons for their popularity, is that they offer apparently straightforward solutions to complex problems. However, such problems typically have a social dimension because they affect different stakeholders in different ways. They are messes whose definition depends on who you ask. So there is no agreement on what the problem is, let alone its solution. This fact by itself limits the utility of the best practice approach to organisational problem solving. Purveyors of best practices may use terms like “proven”, “established”, “measurable” etc. to lend an air of scientific respectability to their wares, but the truth is that unless all stakeholders have a shared understanding of the problem and a shared commitment to solving it, the practice will fail.
In our recently published book entitled, The Heretic’s Guide to Best Practices, Paul Culmsee and I describe in detail the issues with the best practice approach to organisational problem-solving. More important, we provide a practical approach that can help you work with stakeholders to achieve a shared understanding of a problem and a shared commitment to a commonly agreed course of action. The methods we discuss can be used in small settings or larger ones, so you will find the book useful regardless of where you sit in your organisation’s hierarchy. In essence our book is a manifesto for replacing the concept of best practice with that of appropriate practice – practice with a human face that is appropriate for you in your organisation and particular situation.
Processes and intentions: a note on cause and effect in project management
Introduction
In recent years the project work-form has become an increasingly popular mode of organizing activities in many industries. As the projectization bandwagon has gained momentum there have been few questions asked about the efficacy of project management methodologies. Most practitioners take this as a given and move on to seek advice on how best to implement project management processes. Industry analysts and consultants are, of course, glad to oblige with reams of white papers, strategy papers or whatever else they choose to call them (see this paper by Gartner Research, for example).
Although purveyors of methodologies do not claim their methods guarantee project success, they imply that there is a positive relationship between the two. For example, the PMBOK Guide tells us that, “…the application of appropriate knowledge, processes, skills, tools and techniques can have a significant impact on project success” (see page 4 of the 4th Edition). This post is a brief exploration of the cause-effect relationship between the two.
Necessary but not sufficient
In a post on cause and effect in management, I discussed how the link between high-level management actions and their claimed outcomes is tenuous. The basic reason for this is that there are several factors that can affect organizational outcomes, and it is impossible to know beforehand which of them are significant. The sheer number of factors, coupled with the fact that controlled experimentation is infeasible in organizations, makes it impossible to establish with certainty that a particular managerial action will lead to a desired outcome.
Most project managers are implicitly aware of this – they know that factors extrinsic to their projects can often spell the difference between success and failure. A mid-project change in organizational priorities is a classic example of such a factor. The effect of such factors can be accounted for using a concept of causality proposed by the philosopher Edgar Singer, and popularised by the management philosopher Russell Ackoff. In a paper entitled Systems, Messes and Interactive Planning, Ackoff had this to say about cause and effect in systems – i.e. entities that interact with each other and their environment, much like organizations and projects do:
It will be recalled that in the Machine Age cause-effect was the central relationship in terms of which all actions and interactions were explained. At the turn of this century the distinguished American philosopher of science E.A. Singer, Jr. noted that cause-effect was used in two different senses. First… a cause is a necessary and sufficient condition for its effect. Second, it was also used when one thing was taken to be necessary but not sufficient for the other. To use Singer’s example, an acorn is necessary but not sufficient for an oak; various soil and weather conditions are also necessary. Similarly, a parent is necessary but not sufficient for his or her child. Singer referred to this second type of cause-effect as producer-product. It has also been referred to since as probabilistic or nondeterministic cause-effect.
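Singer’s acorn example can be sketched as a toy simulation (my illustration, with made-up probabilities, not anything from Ackoff’s paper): the acorn is necessary for the oak, but the outcome also depends on conditions outside anyone’s control.

```python
# Producer-product (necessary but not sufficient) causality, after
# Singer's acorn example. The probabilities below are purely
# illustrative assumptions.
import random

random.seed(42)

def oak_grows(has_acorn):
    if not has_acorn:
        return False                      # no acorn, no oak: necessary...
    good_soil = random.random() < 0.5     # assumed chance of good soil
    good_weather = random.random() < 0.5  # assumed chance of good weather
    return good_soil and good_weather     # ...but not sufficient

trials = 10_000
oaks = sum(oak_grows(True) for _ in range(trials))
print(f"Oaks from {trials} acorns: {oaks}")  # roughly a quarter take root

# Without the acorn, no oak ever grows
assert sum(oak_grows(False) for _ in range(trials)) == 0
```

Substitute “project management process” for the acorn and “organizational priorities, sponsorship, team buy-in” for the soil and weather, and the relevance to projects is immediate.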
The role of intentions
A key point is that one cannot ignore the role of human intentions in management. As Sumantra Ghoshal wrote in a classic paper:
The basic building block in the social sciences, the elementary unit of explanation is individual action guided by some intention. In the presence of such intentionality functional [and causal] theories are suspect, except under some special and relatively rare circumstances, because there is no general law in the social sciences comparable to [say] the law of natural selection in biology
A producer-product view has room for human intentions and choices. As Ackoff stated in his paper on systems and messes,
Singer went on to show why studies that use the producer-product relationship were compatible with, but richer than, studies that used only deterministic cause-effect. Furthermore, he showed that a theory of explanation based on producer-product permitted objective study of functional, goal-seeking and purposeful behavior. The concepts free will and choice were no longer incompatible with mechanism; hence they need no longer be exiled from science.
A producer-product view of cause and effect in project management recognizes that there will be a host of factors that affect project outcomes, many of which are beyond a manager’s ken and control. Further, and possibly more important, it acknowledges the role played by intentions of individuals who work on projects. Let’s take a closer look at these two points.
Processes and intentions
To begin, it is worth noting that project management lore is rife with projects that failed despite the use of formal project management processes. Worse, in many cases it appears that failure is a consequence of over-zealous adherence to methodology (see my post entitled The myth of the lonely project for a detailed example). In such cases the failure is often attributed to causes other than the processes being used – common reasons include lack of organizational readiness and/or improper implementation of the methodology from which the processes are derived. However, these causes can usually be traced back to a lack of employee buy-in, i.e. of getting front-line project teams and managers to believe in the utility of the proposed processes and to want to use them. It stands to reason that people will use processes only if they believe they will help. So the first action should be to elicit buy-in from the people who will be required to use the proposed processes. The most obvious way to do this is by seeking their input in formulating the processes.
Most often though, processes are “sold” to employees in a very superficial way (see this post for a case in point). Worse still, many times organizations do not even bother getting buy-in: the powers that be simply mandate the process with employees having little or no say in how processes are to be used or implemented. This approach is doomed to fail because – as I have discussed in this post – standards, methodologies and best practices capture only superficial aspects of processes. The details required to make processes work can be provided only by project managers and others who work at the coal-face of projects. Consequently employee buy-in shouldn’t be an afterthought, it should be the centerpiece of any strategy to implement a methodology. Buy-in and input are essential to gaining employee commitment, and employee commitment is absolutely essential for the processes to take root.
…and so to sum up
Project management processes are necessary for project success, but they are far from sufficient. Employee intentions and buy-in are critical factors that are often overlooked when implementing them. As a first step to addressing this, project management processes should be designed and implemented in a way that makes sense to those who work on projects. Those who miss this point are setting themselves up for failure.

