Eight to Late

Sensemaking and Analytics for Organizations

Archive for the ‘Causality’ Category

On the anticipation of unintended consequences


A couple of weeks ago I bought an anthology of short stories entitled An Exploration of Unintended Consequences, written by a school friend, Ajit Chaudhuri. I started reading and couldn’t stop until I reached the last page a couple of hours later. I’ll get to the book towards the end of this piece, but first, a rather long ramble about some thoughts that have been going around in my head since I read it.

–x–

Many (most?) of the projects we undertake, or even the actions we perform, both at work and in our personal lives, have unexpected side-effects or results that overshadow their originally intended outcomes. To be clear, unexpected does not necessarily mean adverse – consider, for example, Adam Smith’s invisible hand. However, it is also true – at least at the level of collectives (organisations or states) – that negative outcomes far outnumber positive ones. The reason can be understood from a simple argument based on the notion of entropy: there are vastly more ways for an action to produce outcomes other than the intended one than there are ways to produce exactly that outcome, so disorder is far more likely than order. Anyway, as interesting as that may be, it is tangential to the question I want to address in this post, which is:

Is it possible to plan and act in a way which anticipates, or even encourages, positive unintended consequences?

Let’s deal with the “unintended” bit first. And you may ask: does the question make sense? Surely, if an outcome is unintended, then it is necessarily unanticipated (let alone positive).

But is that really so? In this paper, Frank de Zwart suggests it isn’t. In particular, he notes that “if unintended effects are anticipated, they are a different phenomenon as they follow from purposive choice and not, like unanticipated effects, from ignorance, error, or ideological blindness.”

As he puts it, “unanticipated consequences can only be unintended, but unintended consequences can be either anticipated or unanticipated.”

So the question posed earlier makes sense, and the key to answering it lies in understanding the difference between purposive and purposeful choice (or action).

–x–

In a classic paper that heralded the birth of cybernetics, Rosenblueth, Wiener and Bigelow noted that “the term purposeful is meant to denote that the act or behavior may be interpreted as directed to the attainment of a goal – i.e., to a final condition in which the behaving object reaches a definite correlation in time or in space with respect to another object or event.”

Aside 1: The reader will notice that the definition has a decidedly scientific / engineering flavour. So, it is not surprising that philosophers jumped into the fray, and arguments around finer points of the definition ensued (see this sequence of papers, for example). Although interesting, we’ll ignore the debate as it will take us down a rabbit hole from which there is no return.

Aside 2: Interestingly, the Rosenblueth-Wiener-Bigelow paper, along with this paper by Warren McCulloch and Walter Pitts, laid the foundation for cybernetics. A little-known fact is that the McCulloch-Pitts paper articulated the basic ideas behind today’s neural networks and their Nobel Prize glory, but that’s another story.

Back to our quest: the Rosenblueth-Wiener definition of purposefulness has two assumptions embedded in it:

a) that the goal is well-defined (else, how will the actor know when it has been achieved?), and

b) that the actor is aware of the goal (else, how will the actor know what to aim for?)

We’ll come back to these in a bit, but let’s continue with the purposeful / purposive distinction first.

As I noted earlier, the cybernetic distinction between purposefulness and purposiveness led to much debate and discussion. Much of the difference of opinion arises from the ways in which diverse disciplines interpret the two terms. To avoid stumbling into that rabbit hole, I’ll stick to definitions of purposefulness and purposiveness from the systems and management domains.

–x–

The first set of definitions is from a 1971 paper by Russell Ackoff in which he attempts to set out clear definitions of systems thinking concepts for management theorists and professionals.

Here are his definitions for purposive and purposeful systems:

“A purposive system is a multi-goal-seeking system the different goals of which have a common property. Production of that property is the system’s purpose. These types of systems can pursue different goals, but they do not select the goal to be pursued. The goal is determined by the initiating event. But such a system does choose the means by which to pursue its goals.”

and

“A purposeful system is one which can produce the same outcome in different ways…[and] can change its goals under constant conditions – it selects ends as well as means and thus displays will. Human beings are the most familiar examples of such systems.”

Ackoff’s purpose(!) in making the purposive/purposeful distinction was to clarify the difference between the apparent purpose displayed by machines (computers), which he calls purposiveness, and “true” or willed (human) purpose, which he calls purposefulness. Although this seems like a clear-cut distinction, it falls apart on closer inspection. The example Ackoff gives of a purposive system is that of a computer which is programmed to play multiple games – say, noughts-and-crosses and checkers. The goal differs depending on which game it plays, but the common property is winning. However, this feels like an artificial distinction: surely winning is a goal, albeit a higher-order one.
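
To make the computer example concrete, here is a minimal sketch in Python (my own toy illustration, not Ackoff’s): the little program below cannot choose which game it plays – that is fixed by the initiating event – but it does choose the means by which it pursues the fixed, higher-order goal of winning.

import random

class PurposiveGamePlayer:
    """Toy model of Ackoff's purposive system: the goal (which game to win)
    is fixed by the initiating event; only the means are chosen."""

    def __init__(self, initiating_event):
        # The goal is determined by the initiating event, not selected by the system
        self.game = initiating_event   # e.g. "noughts-and-crosses" or "checkers"
        self.goal = f"win at {self.game}"

    def choose_means(self, legal_moves):
        # The system does choose the means by which to pursue its goal
        return random.choice(legal_moves)

# A purposeful system, by contrast, could also change self.goal under constant
# conditions - it would select ends as well as means.
player = PurposiveGamePlayer("noughts-and-crosses")
print(player.goal)                                    # win at noughts-and-crosses
print(player.choose_means(["corner", "centre", "edge"]))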

–x–

The second set of definitions, due to Peter Checkland, is taken from this module of an Open University course on managing complexity:

“Two forms of behaviour in relation to purpose have also been distinguished. One is purposeful behaviour, which [can be described] as behaviour that is willed – there is thus some sense of voluntary action. The other is purposive behaviour – behaviour to which an observer can attribute purpose. Thus, in the example of the government minister, if I described his purpose as meeting some political imperative, I would be attributing purpose to him and describing purposive behaviour. I might possibly say his intention was to deflect the issue for political reasons. Of course, if I were to talk with him I might find out this was not the case at all. He might have been acting in a purposeful manner which was not evident to me.”

This distinction is strange because the two definitions are framed from different perspectives – that of the actor and that of an observer. Surely, when one makes a distinction, one should frame both sides of it from a single perspective.

…and yet, there is something in this perspective shift which I’ll come back to in a bit.

–x–

The third set of definitions is from Robert Chia and Robin Holt’s classic, Strategy Without Design: The Silent Efficacy of Indirect Action:

“Purposive action is action taken to alleviate ourselves from a negative situation we find ourselves in. In everyday engagements, we might act to distance ourselves from an undesirable situation we face, but this does not imply having a pre-established end goal in mind. It is a moving away from rather than a moving towards that constitutes purposive actions. Purposeful actions, on the other hand, presuppose having a desired and clearly articulated end goal that we aspire towards. It is a product of deliberate intention.”

Finally, here is a distinction we can work with:

  • Purposive actions are those aimed at alleviating negative situations (this can be framed in a better way, and I’ll get to that shortly).
  • Purposeful actions are those aimed at achieving a clearly defined goal.

The interesting thing is that the above definition of purposive action is consistent with the two observations I made earlier regarding the original Rosenblueth-Wiener-Bigelow definition of purposeful systems:

a) purposive actions have no well-defined end-state (alleviating a negative situation says nothing about what the end-state will look like). That said, someone observing the situation could attribute purpose to the actor because the behaviour appears to be purposeful (see Checkland’s definition above).

b) as the end-state is undefined, the purposive actor cannot know it. However, this need not stop the actor from envisioning what it ought to look like (and indeed, most purposive actors will). 

In a later paper, Chia wrote, “…[complex transformations require] an implicit awareness that the potentiality inherent in a situation can be exploited to one’s advantage without adverse costs in terms of resources. Instead of setting out a goal for our actions, we could try to discern the underlying factors whose inner configuration is favourable to the task at hand and to then allow ourselves to be carried along by the momentum and propensity of things.”

Inspired by this, I think it is appropriate to reframe the Chia-Holt definition in more positive terms, as follows:

“Purposive action is action that exploits the inherent potential in a situation so as to increase the likelihood of positive outcomes for those who have a stake in the situation.”

The above statement subsumes the Chia-Holt definition, since such an action could be a moving away from a negative situation. However, it could equally be an action that comes from recognising an opportunity that would otherwise remain unexploited.

–x–

And now, I can finally answer the question I raised at the start regarding anticipated unintended consequences. In brief:

A purposive action, as I have defined above, is one that invariably leads to anticipated unintended consequences.

Moreover, its consequences are often (usually?) positive, even though the specific outcomes are generally impossible to articulate at the start.

Purposive action is at the heart of emergent design, which is based on doing things that increase the probability of organisational success, but in an unobtrusive manner that avoids drawing attention. Examples of such low-key actions, based on recognising the inherent potential of situations, are available in the Chia-Holt book referenced above and in the book I wrote with Alex Scriven.

I should also point out that since purposive action involves recognising the potential of an unfolding situation, there is necessarily an improvisational aspect to it. Moreover, since this potential is typically latent and not obvious to all stakeholders, the action should be taken in a way that does not change the dynamics of the situation. This is why oblique or indirect actions tend to work better than highly visible, head-on ones. Developing the ability to act in such a manner is more about cultivating a disposition that tolerates ambiguity than about learning to follow prescribed rules, models or practices.

–x–

So much for purposive action at the level of collectives. Does it, can it, play a role in our individual lives?

The short answer is: yes, it can. A true story might help clarify:

“I can’t handle failure,” she said. “I’ve always been at the top of my class.”

She was being unduly hard on herself: with little programming experience or background in math, she was always going to find machine learning hard going. “Put that aside for now,” I replied. “Just focus on understanding and working your way through it, one step at a time. In four weeks, you’ll see the difference.”

“OK,” she said, “I’ll try.”

She did not sound convinced but, to her credit, that’s exactly what she did. Two months later she completed the course with a distinction.

“You did it!” I said when I met her a few weeks after the grades were announced.

“I did,” she grinned. “Do you want to know what made the difference?”

Yes, I nodded.

“Thanks to your advice, I stopped treating it like a game I had to win,” she said, “and that took the pressure right off.  I then started to enjoy learning.”

–x–

And this, finally, brings me back to the collection of short stories written by my friend Ajit. The stories are about purposive actions taken by individuals and the unintended consequences of those actions. Consistent with my discussion above, the specific outcomes in the stories could not have been foreseen by the protagonists (all women, by the way), but one can well imagine them thinking that their actions would eventually lead to a better place.

That aside, the book is worth picking up because the author is a brilliant raconteur: his stories are not only entertaining, they also give readers interesting insights into everyday life in rural and urban India. The author’s note at the end gives some background information and further reading for those interested in the contexts and settings of the stories.

I found Ajit’s use of inset stories – tales within tales – brilliant. The anthropologist Mary Catherine Bateson once wrote, “an inset story is a standard hypnotic device, a trance induction device … at the most obvious level, if we are told that Scheherazade told a tale of fantasy, we are tempted to believe that she, at least, is real.” Ajit uses this device to great effect.

Finally, to support my claim that the stories are hugely entertaining, here are a couple of direct quotes from the book:

The line “Is there anyone here at this table who, deep down, does not think that her husband is a moron?” had me laughing out loud. My dear wife asked me what was up. I told her; she had a good laugh too, and from the tone of her laughter, it was clear she agreed.

Another one: “…some days I’m the pigeon and some days I’m the statue. It’s just that on the days that I’m the pigeon, I try to remember what it is like to be the statue. And on the days that I’m the statue, I try not to think.” Great advice, which I’ve passed on to my two boys.

–x–

I called Ajit the other day and spoke to him for the first time in over 40 years – another unintended consequence of reading his book.

–x–x–

Written by K

February 18, 2025 at 5:22 am

On the shortcomings of cause-effect based models in management


Introduction

Business schools perpetuate the myth that the outcomes of changes in organizations can be managed using models that are rooted in the scientific-rational mode of enquiry. In essence, such models assume that all the important variables that affect an outcome (i.e. the causes) are known, and that the relationship between these variables and the outcomes (i.e. the effects) can be represented accurately by simple models. This is the nature of explanation in the hard sciences such as physics, and it is pretty much the official line adopted by mainstream management research and teaching – a point I have explored at length in an earlier post.
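
One way to picture the kind of model being assumed is as an explicit formula linking causes to an effect (the notation below is mine, purely for illustration):

y = f(x_1, x_2, \ldots, x_n), \qquad \text{often taken to be linear:} \quad y = \beta_0 + \beta_1 x_1 + \cdots + \beta_n x_n

where y is the outcome of interest, the x_i are the presumed causes, and f is assumed to be known, stable and simple. Much of what follows is, in essence, about why such an f is elusive when the “variables” are people.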

Now it is far from obvious that a mode of explanation that works for physics will also work for management. In fact, there is plenty of empirical evidence that most cause-effect based management models do not work in the real world. Many front-line employees and middle managers need no proof because they have likely lived through failures of such models in their organisations – for example, when the unintended consequences of organisational change swamp its intended (or predicted) effects.

In this post I look at the missing element in management models – human intentions – drawing on this paper by Sumantra Ghoshal, which explores three different modes of explanation elaborated by Jon Elster in this book. My aim in doing this is to highlight the key reason why so many management initiatives fail.

Types of explanations

According to Elster, what we can reasonably expect from an explanation differs between the natural and social sciences. Furthermore, within the natural sciences, what constitutes an explanation differs between the physical and biological sciences.

Let’s begin with the difference between physics and biology.

The dominant mode of explanation in physics (and other sciences that deal with inanimate matter) is causal – i.e. it deals with causes and effects as I have described in the introduction. For example, the phenomenon of gravity is explained as being caused by the presence of matter, the precise relationship being expressed via Newton’s Law of Gravitation (or, even more accurately, via Einstein’s General Theory of Relativity). Gravity is “explained” by these models because they tell us that it is caused by the presence of matter. More importantly, if we know the specific configuration of matter in a particular problem, we can accurately predict the effects of gravity – our success in sending unmanned spacecraft to Saturn or Mars depends rather crucially on this.
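
To make this concrete, the cause (a configuration of masses) and the effect (a gravitational force) are linked by an explicit, predictive formula:

F = \frac{G \, m_1 m_2}{r^2}

where F is the force between two point masses m_1 and m_2 separated by a distance r, and G is the gravitational constant. It is precisely this kind of tight, quantitative linkage between cause and effect that the management models mentioned in the introduction implicitly aspire to.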

In biology, the nature of explanation is somewhat different. When studying living creatures we don’t look for causes and effects. Instead we look for explanations based on function. For example,  zoologists do not need to ask how amphibians came to have webbed feet; it is enough for them to know that webbed feet are an adaptation that affords amphibians a survival advantage. They need look no further than this explanation because it is consistent with the Theory of Evolution – that changes in organisms occur by chance, and those that survive do so because they offer the organism a survival advantage. There is no need to look for a deeper explanation in terms of cause and effect.

In the social sciences the situation is very different indeed. The basic unit of explanation in the social sciences is the individual. But an individual is different from an inanimate object, or even from a non-human organism that reacts to specific stimuli in predictable ways. The key difference is that human actions are guided by intentions, and any explanation of social phenomena ought to start from these intentions.

For completeness, I should mention that functional and causal explanations are sometimes possible within the social sciences and management. Typically, functional explanations are possible in tightly controlled environments. For example, the behaviour and actions of people working within large bureaucracies or on assembly lines can be understood on the basis of function. Causal explanations are rarer still, because they are possible only when focusing on the collective behaviour of large, diverse populations in which the effects of individual intentions are swamped by group diversity. In such special cases, people can indeed be treated as molecules or atoms.

Implications for management

There are a couple of interesting implications of restoring intentionality to its rightful place in management studies.

Firstly, as Ghoshal states in his paper:

Management theories at present are overwhelmingly causal or functional in their modes of explanation. Ethics or morality, however, are mental phenomena. As a result they have had to be excluded from our theories and from the practices that such theories have shaped. In other words, a precondition for making business studies a science as well as a consequence of the resulting belief in determinism has been the explicit denial of any role of moral or ethical considerations in the practice of management.

Present-day management studies exclude considerations of morals and ethics, except, possibly, as a separate course that has little relation to the other subjects that make up the typical business school curriculum. Recognising the role of intentionality restores ethical and moral considerations to where they belong – centre-stage in management theory and practice.

Secondly, recognizing the role of intentions in determining people’s actions helps us see that organizational changes that “start from where people are” have a much better chance of succeeding than those that are initiated top-down with little or no consultation with rank-and-file employees. Unfortunately, the large majority of organizational change initiatives still start from the wrong place – the top.

Summing up

Most management practices that are taught in business schools and practiced by the countless graduates of these programs are rooted in the belief that certain actions (causes) will lead to specific, desired outcomes (effects). In this article I have discussed how explanations based on cause-effect models, though good for understanding the behaviour of molecules and possibly even mice, are misleading in the world of humans. To achieve sustainable and enduring outcomes in organisations, one has to start from where people are, and to do that one has to begin by taking their opinions and aspirations seriously.

Written by K

January 3, 2013 at 9:46 pm

Free Will – a book review


Did I write this review because I wanted to, or is it because my background and circumstances compelled me to?

Some time ago, the answer to this question would have been obvious to me but after reading Free Will by Sam Harris, I’m not so sure.

In brief: the book makes the case that the widely accepted notion of free will is little more than an illusion because our (apparently conscious) decisions originate in causes that lie outside of our conscious control.

Harris begins by noting that the notion of free will is based on the following assumptions:

  1. We could have behaved differently than we actually did in the past.
  2. We are the originators of our present thoughts and actions.

Then, in the space of eighty-odd pages (perhaps no more than 15,000 words), he argues that these assumptions are incorrect and looks into some of the implications of his arguments.

The two assumptions are actually interrelated: if it is indeed true that we are not the originators of our present thoughts and actions, then it is unlikely that we could have behaved differently than we did in the past.

A key part of Harris’ argument is the scientifically established fact that we are consciously aware of only a small fraction of the activity that takes place in our brains. This has been demonstrated (conclusively?) by some elegant experiments in neurophysiology. For example:

  • Activity in the brain’s motor cortex can be detected 300 milliseconds before a person “decides” to move, indicating that the thought about moving arises before the subject is aware of it.
  • Magnetic resonance scanning of certain brain regions can reveal the choice that will be made by a person 7 to 10 seconds before the person consciously makes the decision.

These and other similar experiments pose a direct challenge to the notion of free will: if my brain has already decided on a course of action before I am aware of it, how can I claim to be the author of my decisions and, more broadly, my destiny? As Harris puts it:

…I cannot decide what I will think next or intend until a thought or intention arises. What will my next mental state be? I do not know – it just happens. Where is the freedom in that?

The whole notion of free will, he argues, is based on the belief that we control our thoughts and actions. Harris notes that although we may feel that we are in control of the decisions we make, this is but an illusion: we feel that we are free, but this freedom is illusory because our actions are already “decided” before they appear in our consciousness. To be sure, there are causes underlying our thoughts and actions, but the majority of these lie outside our awareness.

If we accept the above, then the role that luck plays in determining our genes, circumstances, environment and attitudes cannot be overstated. Although we may choose to believe that we make our own destinies, in reality we don’t. Some people may invoke demonstrations of willpower – conscious mental effort to do certain things – as proof against Harris’ arguments. However, as Harris notes,

You can change your life and yourself through effort and discipline – but you have whatever capacity for effort and discipline you have in this moment, and not a scintilla more (or less). You are either lucky in this department or you aren’t – and you can’t make your own luck.

Although I may choose to believe that I made the key decisions in my life, a little reflection reveals the tenuous nature of this belief. Sure, some decisions I have made resulted in experiences that I would not have had otherwise. Some of those experiences undoubtedly changed my outlook on life, causing me to do things I would not have done had I not undergone those experiences. So to that extent, those original choices changed my life.

The question is: could I have decided differently when making those original choices?

Or, considering an even more immediate example: could I have chosen not to write this review? Or, having written it, could I have chosen not to publish it?

Harris tells us that this question is misguided because you will do what you do. As he states,

…you can do what you decide to do – but you cannot decide what you will decide to do.

We feel that we are free to decide, but the decision we make is the one we make. We may choose to believe that we are free to decide, but that sense of freedom is an illusion because our decisions arise from causes that we are unaware of. This is the central point of Harris’ argument.

There are important moral and ethical implications of the loss of free will. For example, what happens to the notion of moral responsibility for actions that harm others? Harris argues that we do not need to invoke the notion of free will in order to condemn such actions – as he tells us, what we condemn in others is the conscious intent to do harm.

Harris is careful to note that his argument against free will does not amount to advocating a laissez-faire approach in which people are free to do whatever comes to their minds, regardless of the consequences for society. As he writes:

…we must encourage people to work to the best of their abilities and discourage free riders wherever we can. And it is wise to hold people responsible for their actions when doing so influences their behavior and brings benefits to society…[however this does not need the] illusion of free will. We need only acknowledge that efforts matter and that people can change. [However] we do not change ourselves, precisely – because we have only ourselves with which to do the changing – but we continually influence, and are influenced by, the world around us and the world within us. [italics mine]

Before closing I should mention some shortcomings of the book:

Firstly, Harris does not offer detailed support for his argument. Much of what he claims depends on the results of experimental research in neurophysiology demonstrating the lag between the genesis of a thought in our brains and our conscious awareness of it, yet he describes only a handful of experiments in detail. That said, there are references to many others in the notes.

Secondly, those with training in philosophy may find the book superficial, as Harris does not discuss alternative perspectives on free will. Such a discussion would have provided much-needed balance, and its absence is something some critics have taken him to task for (see this analysis or this review, for example).

Although the book has the shortcomings I’ve noted, I have to say I enjoyed it because it made me think. More specifically, it made me think about the way I think. Maybe it will do the same for you, maybe not – what happens in your case may depend on thoughts that are beyond your control.

Written by K

October 28, 2012 at 9:45 pm