Desperately seeking reason(s): Franklin’s Gambit in organisational decision-making
In his wonderful book on obliquity, John Kay tells of a famous letter in which Benjamin Franklin describes his decision-making method. Here is Franklin’s description of the method, excerpted from the letter:
…my Way is, to divide half a Sheet of Paper by a Line into two Columns, writing over the one Pro, and over the other Con. Then during three or four Days Consideration I put down under the different Heads short Hints of the different Motives that at different Times occur to me for or against the Measure. When I have thus got them all together in one View, I endeavour to estimate their respective Weights; and where I find two, one on each side, that seem equal, I strike them both out: If I find a Reason pro equal to some two Reasons con, I strike out the three. If I judge some two Reasons con equal to some three Reasons pro, I strike out the five; and thus proceeding I find at length where the Ballance lies; and if after a Day or two of farther Consideration nothing new that is of Importance occurs on either side, I come to a Determination accordingly.
And tho’ the Weight of Reasons cannot be taken with the Precision of Algebraic Quantities, yet when each is thus considered separately and comparatively, and the whole lies before me, I think I can judge better, and am less likely to take a rash Step; and in fact I have found great Advantage from this kind of Equation, in what may be called Moral or Prudential Algebra.
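For readers who like to see a procedure spelt out, here is a minimal sketch of Franklin’s weighing-and-cancelling procedure in Python. The motives and weights are invented for illustration, and the reduction of his cancellation process to a simple comparison of totals is my shorthand, not something Franklin proposed:

```python
# A minimal sketch of Franklin's "Prudential Algebra", assuming each motive
# can be given a rough numeric weight (Franklin himself cautions that such
# weights cannot be taken "with the Precision of Algebraic Quantities").

def franklins_balance(pros, cons):
    """pros and cons map a short hint (str) to its estimated weight (number).

    Franklin strikes out sets of reasons of equal total weight on opposite
    sides; whatever cannot be cancelled is the residual balance. Numerically,
    that cancellation amounts to comparing the two totals.
    """
    balance = sum(pros.values()) - sum(cons.values())
    if balance > 0:
        return "pro", balance
    if balance < 0:
        return "con", -balance
    return "undecided", 0


if __name__ == "__main__":
    # Hypothetical motives for accepting a job offer, weighted 1 (minor) to 5 (major).
    pros = {"higher salary": 4, "more interesting work": 5, "shorter commute": 2}
    cons = {"longer hours": 3, "less job security": 4}
    side, weight = franklins_balance(pros, cons)
    print(f"The balance lies with the '{side}' column, by a weight of {weight}")
```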
Modern decision-making techniques often claim to do better than Franklin because they use quantitative measures to rate decision options. However, as I have pointed out in this post, measures are often misleading. There are those who claim that this can be fixed by “doing it correctly,” but this is a simplistic view for reasons I have discussed at length in this post. So, despite all the so-called “advances” in decision-making, it is still pretty much as Franklin wrote: “the Weight of Reasons cannot be taken with the Precision of Algebraic Quantities”.
With that as background, I can now get to the main point of this post. The reader may have wondered about my use of the word gambit rather than technique (or any of its synonyms) in the title. A quick look at this online dictionary tells us that the two words are very different:
Technique (noun): the body of specialized procedures and methods used in any specific field, especially in an area of applied science.
Gambit (noun): a manoeuvre by which one seeks to gain advantage.
Indeed, as Kay mentions in his book, Franklin’s method is often used to justify decisions that are already made – he calls this Franklin’s Gambit.
Think back to some of the recent decisions you have made: did you make the decision first and then find reasons for it, or did you weigh up the pros and cons of each option before reaching your decision? If I’m honest, I would have to admit that I have often done the former. This is understandable, even defensible. When we make a decision, we have to make several assumptions regarding the future and how it will unfold. Since this is based on (sometimes educated) guesswork, it is only natural that we will show a preference for a choice that we are comfortable with. Once we have settled on an option, we seek reasons that would enable us to justify our decision to others; we would not want them to think we have made a decision based on gut-feel or personal preferences.
This is not necessarily a bad thing. When decisions cannot be rated meaningfully, any choice that is justifiable is a reasonable one…provided one can convince others affected that it is so. What one should guard against is the mindless use of data and so-called rational methods to back decisions that have no buy-in.
Finally, as we all know well from experience, it is never a problem to convince ourselves of the rightness of our decisions. In fact, Mr. Franklin, despite his pronouncements on Moral Algebra, understood this. For, as he once wrote:
…so convenient a thing is it to be a reasonable creature, since it enables one to find or make a reason for everything one had a mind to do.
Indeed, “reasonable” creatures that we are, we will desperately seek reasons for the things we wish to do. The difficulty, as always, lies in convincing other reasonable creatures of our reasonableness.
On the decline and resurrection of Taylorism
Introduction
A couple of years ago Paul Culmsee and I wrote a post on the cyclical decay and recurrence of certain management concepts. The article describes how ideas and practices bubble up into mainstream management awareness and then fade away after the fad passes…only to recur in a morphed form some years later.
It recently occurred to me that this cycle of decay and recurrence is not restricted to good ideas or practices: ideas that, quite frankly, ought to remain consigned to the dustbin of management can also recur. Moreover, they may even do better the second time around because the conditions are right for them to flourish. In this post I discuss how the notion of scientific management, often referred to as Taylorism after its founder Frederick Winslow Taylor, has ebbed and flowed in the century or so since it was first proposed.
Taylorism and its alleged demise
The essence of Taylorism is summarised nicely in this quote from Taylor’s monograph, The Principles of Scientific Management:
This paper has been written…to prove that the best management is a true science, resting upon clearly defined laws, rules and principles, as a foundation. And further to show that the fundamental principles of scientific management are applicable to all human activities, from our simplest individual activities to the work of great corporations, which call for the most elaborate cooperation. And briefly, through a series of illustrations, to convince the reader that whenever these principles are correctly applied, results must follow which are truly astounding…
According to the standard storyline of management, Taylorism had its heyday in the first few decades of the 20th century and faded away after the notion of the worker as an individual emerged in the 1920s. In his wonderful paper, Understanding Taylorism, Craig Littler summarises this mainstream view as follows:
From 1900-20 Taylorism provided the dominant ideas about the worker and worker motivation. But money was not enough and ‘a great new idea was taking root. The view of the worker as an individual personality emerged strongly around 1920 to command the stage.’ From 1920-1940 the worker was seen as a psychological complex, but then ‘Psychological Man’ (sic) faltered and sociology entered industry: Man (sic) had neighbours!
In short, the official story is that Taylorism was declared dead, if not quite interred, some ninety years ago.
But as we shall see, its ghost still haunts the hallways of the modern, knowledge-based corporation…
The ghost of Taylorism
The standard storyline views Taylorism as a management ideology – a set of ideas that guide management practice. However, Littler suggests that it is more instructive to see it primarily as a means of organising work, in other words as a management practice. In his words,
If we look at Taylorism as a form of work organization then we can proceed to analyse it in terms of three general categories: the division of labour, the structure of control over task performance, and the implicit employment relationship.
To elaborate: Taylorism emphasised a scientific approach to enhancing worker productivity through things such as time and motion studies. In practice this led to a rigid fragmentation and division of labour, coupled with time/effort measurements that enabled top-down planning. Although these efforts were focused on increasing production by improving worker efficiency, they also had the effect of centralising control over task performance and skewing the terms of employment in management’s favour.
…and its new avatar
Even from this brief summary one can see how Taylorism sneaks into the modern workplace. As Martha Crowley and her co-workers state in the abstract to this paper:
The last quarter of the twentieth century has seen an erosion of job security in both manual and professional occupations…employee involvement schemes in manual production and the growth of temporary employment, outsourcing and project-based teams in the professions have influenced working conditions in both settings…these practices represent not a departure from scientific management, as is often presumed, but rather the adoption of Taylorist principles that were not fully manifested in the era of mass production.
Indeed, there is a term, Neo-Taylorism, that describes the newly resurrected avatar of this old ideology.
The resurrection of Taylorism is in no small part due to advances in technology. This is indeed an irony, because the very technology that gives us “cognitive surplus” (if one believes what some folks tell us) and enables us to inform the world about “what we are doing right now” also makes it possible for us to be monitored at the workplace in real time. A stark manifestation of this is the call centre – which Phil Taylor and Peter Bain refer to as an electronic panopticon and, in a later paper, an assembly line in the head.
Of course, one does not need to work in a call centre to see Neo-Taylorism at work; the central ideas of scientific management permeate many modern workplaces. The standard HR cycle of goal-setting, review and performance evaluation, familiar to most folks who work in organisation-land, is but a means of evaluating and/or ranking employees with a view to determining an appropriate reward or punishment. This often does more harm than good, as is highlighted in David Auerbach’s critique of Microsoft’s stack ranking process: there is nothing more effective than the threat of termination to ensure a compliant workforce…but engendering team spirit and high performance is another matter altogether.
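To make the mechanics concrete, here is a rough sketch of what forced (“stack”) ranking amounts to. The bucket labels, quotas and scores below are invented for illustration; they are not Microsoft’s actual scheme:

```python
# An illustrative sketch of forced ("stack") ranking: a fixed distribution of
# ratings is imposed on every team, regardless of how its members actually
# performed. Buckets and quotas here are hypothetical.

def stack_rank(scores, buckets=(("top", 0.2), ("middle", 0.6), ("poor", 0.2))):
    """scores maps employee name -> raw performance score.

    Returns employee -> rating, filling each bucket in order of descending
    score. Someone must land in the bottom bucket even if the whole team did
    well -- which is precisely the objection to the practice.
    """
    ranked = sorted(scores, key=scores.get, reverse=True)
    ratings, start = {}, 0
    for label, fraction in buckets:
        count = round(fraction * len(ranked))
        for name in ranked[start:start + count]:
            ratings[name] = label
        start += count
    for name in ranked[start:]:  # anyone left over due to rounding
        ratings[name] = buckets[-1][0]
    return ratings


if __name__ == "__main__":
    # A uniformly strong (and entirely fictional) team...
    team = {"Asha": 92, "Ben": 90, "Chen": 89, "Dana": 88, "Eli": 87}
    # ...yet the forced distribution still labels one member "poor".
    print(stack_rank(team))
```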
Concluding remarks
To conclude: the resurrection of Taylorism is no surprise. For although it may have become an unfashionable ideology in the latter part of the first half of the 20th century, its practices and, in particular, the forms of work organisation embodied in it live on. This is true not just in industry but also in the academic world. Indeed, some of the research done in industrial engineering departments the world over serves to burnish and propagate Taylor’s legacy. Taylorism as an ideology may be dead, but as a management practice it lives on and flourishes.
Acknowledgement
Thanks to Greg Lloyd for his pointer to David Auerbach’s critique of Microsoft’s stack ranking process.
The DRS controversy and the undue influence of technology on decision-making
The Decision Review System (DRS) is a technology that is used to reduce umpire errors in cricket. It consists of the following components:
- High-speed, camera-based tracking of the trajectory of the ball as it goes past or is hit by the batsman.
- Infra-red and sound-based devices to detect whether or not the bat has actually made contact with the ball.
There were some misgivings about the technology when it was first introduced a few years ago, but the general feeling was that it would be beneficial (see this article by Rob Steen, for example). However, because of concerns raised about the reliability of the technology, the International Cricket Council did not make the use of DRS mandatory.
In the recent Ashes series between England and Australia, there have been some questionable decisions that involved DRS. In one case, a human umpire’s decision was upheld even though DRS evidence did not support it, and in another an umpire’s decision was upheld when DRS evidence only partially supported it. See the sidebar in this news item for a summary of these decisions.
Now, as Dan Hodges points out in an astute post, DRS does not make decisions – it only presents a human decision-maker (the third umpire) with more, and allegedly better, data than is available to another human decision-maker (the on-field umpire). This is a point that is often ignored when decision support systems are used in any kind of decision-making, not just in sports: data does not make decisions, people do. Moreover, they often reach these decisions based on factors that cannot be represented as data.
This is as it should be: technology can at best provide us with more and/or better data but, in situations that really matter, we would not want it making decisions on our behalf. Would we be comfortable with machine diagnoses of our X-rays or CT scans?
Taking a broader view, it is undeniable that technology has influenced the decisions we make: from the GPS that directs us when we drive, to Facebook, LinkedIn and other social media platforms that make suggestions regarding who we might want to “friend” or “connect with.” In his book, To Save Everything, Click Here, Evgeny Morozov argues that this is not a positive development. He takes aim at what he calls technological solutionism, the tendency to view all problems as being amenable to technology-based solutions, ignoring other aspects such as social, human and ethical concerns.
Morozov’s interest is largely in the social and political sphere, so many of his examples are drawn from social networking and search engine technologies. His concerns relate to the unintended consequences of these pervasive technologies – for example, the loss of privacy that comes with using social media, or the subtle distortion of human behaviour through the use of techniques like gamification.
The point I’m making is rather more modest: it is that technology-based decision-making tools can present us with more/better/refined data, but they cannot absolve us of our responsibility for making decisions. This is particularly evident in the case of ambiguous issues. Indeed, this is why decision-making on such matters has ethical, even metaphysical, implications.
And so it is that sport needs human umpires, just as organisations need managers who can make decisions that they are willing to stand by, especially when situations are ambiguous and data is open to interpretation.

