Sherlock Holmes and the case of the failed projects
…as narrated by Dr. John H. Watson, M.D.
Foreword
Of all the problems which have been submitted to my friend, Mr. Sherlock Holmes, for consideration during the years of our friendship, there is one that stands out for the sheer simplicity of its resolution. I have (until now) been loath to disclose details of the case as I felt the resolution to be so trivial as not to merit mention.
So why bring it up after all these years?
Truth be told, I am increasingly of the mind that Holmes’ diagnosis in the Case of the Failed Projects (as I have chosen to call this narrative), though absolutely correct, has been widely ignored. Indeed, the writings of Lord Standish and others from the business press have convinced me that the real lesson from his diagnosis is yet to be learnt by those who really matter: i.e. executives and managers.
As Holmes might have said, this is symptomatic of a larger malaise: that of a widespread ignorance of elementary logic and causality.
A final word before I get into the story. As most readers know, my friend is better known for his work on criminal cases. The present case, though far more mundane in its details, is in my opinion perhaps his most important because of its remarkable implications. The story has, I believe, been told at least once before but, like all such narratives, its effect is much less striking when set forth en bloc in a single half-column of print than when the facts slowly emerge before one’s own eyes.
So, without further ado, here is the tale…
—
The narrative
Holmes was going through a lean patch that summer, and it seemed that the only cases that came his way had to do with pilfered pets or suspicious spouses. Such work, if one can call it that, held little allure for him.
He was fed up to the point that he was contemplating a foray into management consulting. Indeed, he was certain he could do as well as, if not better than, the likes of Baron McKinsey and Lord Gartner (who seemed to be doing well enough). Moreover, his success with the case of the terminated PMO had given him some credibility in management circles. As it turned out, it was that very case that led Mr. Bryant (not his real name) to invite us to his office that April morning.
As you may have surmised, Holmes accepted the invitation with alacrity.
—
The basic facts of the issue, as related by Bryant, were simple enough: his organization, which I shall call Big Enterprise, was suffering from an unduly high rate of project failure. I do not recall the exact number but offhand, it was around 70%.
Yes, that’s right: 7 out of every 10 projects that Big Enterprise undertook were over budget, late, or failed to fulfil business expectations!
Shocking, you say… yet entirely consistent with the figures presented by Lord Standish and others.
Upon hearing the facts and figures, Holmes asked the obvious question about what Big Enterprise had done to figure out why the failure rate was so high.
“I was coming to that,” said Bryant. “Typically, after every project we hold a post-mortem. The PMO (which, as you know, I manage) requires this. As a result, we have a pretty comprehensive record of ‘things that went well’ on our projects and things that didn’t. We analysed the data from failed projects and found that there were three main reasons for failure: lack of adequate user input, incomplete or changing user requirements, and inadequate executive support.”
“…but these aren’t the root cause,” said Holmes.
“You’re right, they aren’t,” said Bryant, somewhat surprised at Holmes’ interjection. “Indeed, we did an exhaustive analysis of each of the projects and even interviewed some of the key team members. We concluded that the root cause of the failures was inadequate governance on the PMO’s part.”
“I don’t understand. Hadn’t you established governance processes prior to the problem? That is, after all, the raison d’être of a PMO…”
“Yes we had, but our diagnosis implied those processes weren’t working. They needed to be tightened up.”
“I see,” said Holmes shortly. “I’ll return to that in due course. Please do go on and tell me what you did to address the issue of poor…or inadequate governance, as you put it.”
“Yes, so we put in place processes to address these problems. Specifically, we took the following actions. For the lack of user input, we recommended getting a sign-off from business managers as to how much time their people would commit to the project. For the second issue – incomplete or changing requirements – we recommended that in the short term, more attention be paid to initial requirement gathering, and that this be supported by a stricter change management regime. In the longer term, we recommended that the organization look into the possibility of implementing Agile approaches. For the third point, lack of executive support, we suggested that the problem be presented to the management board and CEO, requesting that they reinforce the importance of supporting project work to senior and middle management.”
Done with his explanation, he looked at the two of us to check if we needed any clarification. “Does this make sense?” he enquired, after a brief pause.
Holmes shook his head. “No, Mr. Bryant, the actions don’t make sense at all. When faced with problems, the kneejerk reaction is to resort to more control. I submit that your focus on control misled you.”
“Misled? What do you mean?”
“Well, it didn’t work, did it? Projects in Big Enterprise continue to fail, which is why we are having this meeting today. The reason your prescription did not work is that you misdiagnosed the issue. The problem is not governance, but something deeper.”
Bryant wore a thoughtful expression as he attempted to digest this. “I do not understand, Mr. Holmes,” he said after a brief pause. “Why don’t you just tell me what the problem is and how I can fix it? Management is breathing down my neck and I have to do something about it soon.”
“To be honest, the diagnosis is obvious, and I am rather surprised you missed it,” said Holmes. “I shall give you a hint: it is bigger, much bigger, than the PMO and its governance processes.”
“I’m lost, Mr. Holmes. I have thought about it long enough but have not been able to come up with anything. You will have to tell me,” said Bryant with a tone that conveyed both irritation and desperation.
“It is elementary, Mr. Bryant, when one has eliminated the other causes, whatever remains, however improbable, must be the truth. Your prior actions have all but established that the problem is not the PMO, but something bigger. So let me ask the simple question: what is the PMO a part of?”
“That’s obvious,” said Bryant, “it’s the organization, of course.”
“Exactly, Mr. Bryant: the problem lies in Big Enterprise’s organisational structures, rules and norms. It’s the entire system that’s the problem, not the PMO per se.”
Bryant looked at him dubiously. “I do not understand how the three points I made earlier – inadequate user involvement, changing requirements and lack of executive sponsorship – are due to Big Enterprise’s structures, rules and norms.”
“It’s obvious,” said Holmes, as he proceeded to elaborate: the lack of user input was a consequence of users having to juggle their involvement in projects with their regular responsibilities. Changes in scope and incomplete requirements were but a manifestation of the fact that users’ regular work pressures permitted only limited interaction between users and the project team – and it was impossible to gather all requirements, or build trust, through such infrequent interactions. As for lack of executive sponsorship, that was simply a reflection of the fact that executives could not stay focused on a small number of tasks: too many things competed for their attention, and these often changed from day to day. The result was a reactive management style rather than a proactive or interactive one. Each of these issues was an organizational problem that lay well beyond the PMO.
“I see,” said Bryant, somewhat overwhelmed as he realized the magnitude of the problem, “…but this is so much bigger than me. How do I even begin to address it?”
“Well, you are the Head of the PMO, aren’t you? It behooves you to explain this to your management.”
“I can’t do that!” exclaimed Bryant. “I could lose my job for stating these sorts of things, Mr. Holmes – however true they may be. Moreover, I would need incontrovertible evidence…facts demonstrating exactly how each failure was a consequence of organizational structures and norms, and was therefore out of the PMO’s control.”
Holmes chuckled sardonically. “I don’t think facts or ‘incontrovertible proof’ will help you, Mr. Bryant. Whatever you say would be refuted using specious arguments…or simply laughed off. In the end, I don’t know what to tell you except that it is a matter for your conscience; you must do as you see fit.”
We left it at that; there wasn’t much else to say. I felt sorry for Bryant. He had come to Holmes for a solution, only to find that solving the problem might involve unacceptable sacrifices.
We bid him farewell, leaving him to ponder his difficult choices.
—
Afterword
Shortly after our meeting with him, I heard that Bryant had left Big Enterprise. I don’t know what prompted his departure, but I can’t help but wonder if our conversation and his subsequent actions had something to do with it.
…and I think it is pretty clear why Lord Standish and others of his ilk still bemoan the unduly high rate of project failure.
—
Notes
- Sherlock Holmes aficionados may have noted that the foreword to this story bears some resemblance to the first paragraph of the Conan Doyle classic, The Adventure of the Engineer’s Thumb.
- See my post entitled Symptoms not causes, a systems perspective on project failure for a more detailed version of the argument outlined in this story.
- For insight into the vexed question of governance, check out this post by Paul Culmsee and the book I co-authored with him.
Six heresies for business intelligence
What is business intelligence?
I recently asked a few acquaintances to answer this question without referring to that great single point of truth in the cloud. They duly came up with a variety of responses ranging from data warehousing and the names of specific business intelligence tools to particular functions such as reporting or decision support.
After receiving their responses, I did what I asked my respondents not to: I googled the term. Here are a few samples of what I found:
According to CIO magazine, Business intelligence is an umbrella term that refers to a variety of software applications used to analyze an organization’s raw data.
Wikipedia, on the other hand, tells us that BI is a set of theories, methodologies, architectures, and technologies that transform raw data into meaningful and useful information for business purposes.
Finally, Webopedia tells us that BI [refers to] the tools and systems that play a key role in the strategic planning process of the corporation.
What’s interesting about the above responses and definitions is that they focus largely on processes and methodologies or tools and techniques. Now, without downplaying the importance of either, I think that many of the problems of business intelligence practice come from taking a perspective that is overly focused on methodology and technique. In this post, I attempt to broaden this perspective by making some potentially controversial statements – or heresies – that challenge this view. My aim is not so much to criticize current practice as to encourage – or provoke – business intelligence professionals to take a closer look at some of the assumptions that underlie their practices.
The heresies
Without further ado, here are my six heresies for business intelligence practice (in no particular order).
A single point of truth is a mirage
Many organisations embark on ambitious programs to build enterprise data warehouses – unified data repositories that serve as a single source of truth for all business-relevant data. Leaving aside the technical and business issues associated with establishing definitive data sources and harmonizing data, there is the more fundamental question of what is meant by truth.
The most commonly accepted notion of truth is that information (or data in a particular context) is true if it describes something as it actually is. A major issue with this viewpoint is that data (or information) can never fully describe a real-world object or event. For example, when a sales rep records a customer call, he or she notes down only what is required by the customer management system. Other data that may well be more important is not captured or is relegated to a “Notes” or “Comments” field that is rarely if ever searched or accessed. Indeed, data represents only a fraction of the truth, however one chooses to define it – more on this below.
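To make this concrete, here is a minimal sketch (in Python, using an entirely hypothetical call-record schema) of the kind of structure a customer management system imposes. The structured fields capture a thin, pre-selected slice of the interaction; whatever does not fit them ends up in a free-text field, if it is recorded at all:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CustomerCall:
    """A hypothetical call-record schema: only these facets are captured."""
    customer_id: int       # who was called
    rep_id: int            # who made the call
    started_at: datetime   # when the call took place
    duration_min: int      # how long it lasted
    outcome: str           # one of a handful of preset codes, e.g. "FOLLOW_UP"
    notes: str = ""        # everything else lands here, if at all

# The richest parts of the interaction (the customer's mood, a competitor's
# offer, an informal promise made by the rep) have no fields of their own:
call = CustomerCall(
    customer_id=1042, rep_id=7,
    started_at=datetime(2014, 5, 12, 9, 30),
    duration_min=14, outcome="FOLLOW_UP",
    notes="Customer unhappy with renewal pricing; mentioned a rival's offer.",
)
```

The schema decides, in advance, which aspects of reality count – and the leftover detail is buried in `notes`, which, as noted above, is rarely searched or analysed.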
Some might say that it is naïve to expect our databases to capture all aspects of reality, and that what is needed is a broad consensus among all relevant stakeholders as to what constitutes the truth. The problem with this is that such a consensus is often achieved by means that are not democratic. For example, a KPI definition chosen by a manager may be hotly contested by an employee. Nevertheless, the employee has to accept it because that is the way (many) organisations work. Another significant issue is that the notion of relevant stakeholders is itself problematic because it is often difficult to come up with clear criteria by which to define relevance.
There are other ways to approach the notion of truth: for example, one might say that a piece of data is true as long as it is practically useful to deem it so. Such a viewpoint, though common, is flawed because utility is in the eye of the beholder: a sales manager may think it useful to believe a particular KPI whereas a sales rep might disagree (particularly if the KPI portrays the rep in a bad light!).
These varied interpretations of what constitutes a truth have implications for the notion of a single point of truth. For one, the various interpretations are incommensurate – they cannot be judged by the same standard. Further, different people may interpret the same piece of data differently. This is something that BI professionals have likely come across – say, when attempting to come up with a harmonized definition for a customer record.
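The point is easy to demonstrate. In the sketch below (Python, with made-up records and hypothetical definitions), sales and finance each apply a perfectly defensible definition of an “active customer” to the same table and arrive at different answers:

```python
from datetime import date

# One shared table of customer records (made-up data).
customers = [
    {"id": 1, "last_order": date(2014, 4, 2),  "open_invoice": True},
    {"id": 2, "last_order": date(2013, 11, 5), "open_invoice": True},
    {"id": 3, "last_order": date(2014, 5, 20), "open_invoice": False},
    {"id": 4, "last_order": date(2012, 8, 14), "open_invoice": False},
    {"id": 5, "last_order": date(2013, 9, 1),  "open_invoice": True},
]
today = date(2014, 6, 1)

# Sales' definition: an active customer ordered within the last 90 days.
sales_active = [c for c in customers if (today - c["last_order"]).days <= 90]

# Finance's definition: an active customer has money owing.
finance_active = [c for c in customers if c["open_invoice"]]

print(len(sales_active), "active customers, according to sales")      # 2
print(len(finance_active), "active customers, according to finance")  # 3
```

Same data, two truths – and nothing in the data itself can settle which definition is “right”.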
In short: the notion of a single point of truth is problematic because there is a great deal of ambiguity about what constitutes a truth.
There is no such thing as raw data
In his book, Memory Practices in the Sciences, Geoffrey Bowker wrote, “Raw data is both an oxymoron and a bad idea; to the contrary, data should be cooked with care.” I love this quote because it tells a great truth (!) about so-called “raw” data.
To elaborate: raw data is never unprocessed. First, the data collector always makes a choice as to what data will be collected and what will not; in this sense, data already has meaning imposed on it. Second, and perhaps more important, the method of collection affects the data. For example, responses to a survey depend on how the questions are framed and how the survey itself is carried out (anonymously, face-to-face etc.). This is also true of more “objective” data such as costs and expenses, where the actual numbers depend on the specific accounting practices used in the organization. So, raw data is an oxymoron because data is never raw, and, as Bowker tells us, we need to ensure that the filters we apply and the methods of collection we use are such that the resulting data is “cooked with care.”
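To see how recording conventions shape “objective” numbers, consider this sketch (Python, with made-up transactions and a deliberately simplified pair of conventions): the same events yield different “raw” revenue figures depending on whether refunds are booked as negative revenue or as a separate expense – both legitimate accounting choices:

```python
# The same underlying events, recorded under two conventions (made-up data).
transactions = [("sale", 100.0), ("sale", 250.0), ("refund", 40.0), ("sale", 80.0)]

# Convention A: refunds are negative revenue (net reporting).
revenue_net = sum(amt if kind == "sale" else -amt for kind, amt in transactions)

# Convention B: refunds are an expense; revenue is reported gross.
revenue_gross = sum(amt for kind, amt in transactions if kind == "sale")
refund_expense = sum(amt for kind, amt in transactions if kind == "refund")

print(f"Convention A: revenue = {revenue_net}")  # 390.0
print(f"Convention B: revenue = {revenue_gross}, refund expense = {refund_expense}")  # 430.0, 40.0
```

Neither figure is more “raw” than the other; each has been cooked by the convention that produced it.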
In short: data is never raw, it is always “cooked.”
There are no best practices for business intelligence, only appropriate ones
Many software shops and consultancies devise frameworks and methodologies for business intelligence which they claim are based on best or proven practices. However, those who swallow that line and attempt to implement the practices often find that the results obtained are far from best.
I have discussed the shortcomings of best practices in a general context in an earlier article, and (at greater length) in my book. A problem with best practice approaches is that they assume a universal yardstick of what is best. As a corollary, this also suggests that practices can be transplanted wholesale from one organization to another, without extensive customisation. This overlooks the fact that organisations are unique, and what works in one may not work in another.
A deeper issue is that much of the knowledge pertaining to best practices is tacit – that is, it cannot be codified in written form. Indeed, what differentiates good business intelligence developers or architects from great ones is not what they learnt from a textbook (or in a training course), but how they actually practice their craft: the things they do instinctively and would find hard to put into words.
So, instead of looking to import best practices from your favourite vendor, it is better to focus on understanding what goes on in your environment. A critical examination of your environment and processes will reveal opportunities for improvement. These incremental improvements will cumulatively add up to your very own, customized “best practices.”
In short: develop your own business intelligence best practices rather than copying those peddled by “experts.”
Business intelligence does not support strategic decision-making
One of the stated aims of business intelligence systems is to support better business decision making in organisations (see the Wikipedia article, for example). It is true that business intelligence systems are perfectly adequate – even indispensable – for certain decision-making situations. Examples include financial reporting (when done right!) and other operational reporting (inventory, logistics etc.). These generally tend to be routine situations with clear-cut decision criteria and well-defined processes – i.e. decisions that can be programmed.
In contrast, decisions pertaining to strategic matters cannot be programmed. Examples of such decisions include dealing with an uncertain business environment or responding to a new competitor. The reason such decisions cannot be programmed is that they depend on a host of factors other than data and are generally made in situations that are ambiguous. Typically, people use deliberative methods – i.e. methods based on argumentation – to arrive at decisions on such matters. The sad fact is that the major business intelligence tools on the market lack support for deliberative decision-making. Check out this post for more on what can be done about this.
In short: business intelligence does not support strategic decision-making.
Big data is not the panacea it is trumpeted to be
One of the more recent trends in business intelligence is the move towards analyzing increasingly large, diverse, rapidly changing datasets – what goes under the umbrella term big data. Analysing these datasets entails the use of new technologies (e.g. Hadoop and NoSQL) as well as statistical techniques that are not familiar to many mainstream business intelligence professionals.
Much has been claimed for big data; in fact, one might say too much. In this article Tim Harford (aka the Undercover Economist) summarises the four main claims of “big data cheerleaders” as follows (the four phrases below are quoted directly from the article):
- Data analysis produces uncannily accurate results.
- Every single data point can be captured, making old statistical sampling techniques obsolete.
- It is passé to fret about what causes what, because statistical correlation tells us what we need to know.
- Scientific or statistical models aren’t needed.
The problem, as Harford points out, is that all of these claims are incorrect.
Firstly, the accuracy of the results that come out of a big data analysis depends critically on how the analysis is formulated. Moreover, even analyses based on well-founded assumptions can get it wrong, as is illustrated in this article about Google Flu Trends.
Secondly, it is pretty obvious that it is impossible to capture every single data point (also relevant here is the discussion on raw data above – i.e. how data is selected for inclusion).
The third claim is simply absurd: detecting a correlation is not the same as understanding what is going on – a point made rather nicely by Dilbert. Enough said, I think.
Fourthly, the claim that scientific or statistical models aren’t needed is simply ill-informed. As any big data practitioner will tell you, big data analysis relies on statistics. Moreover, as mentioned earlier, a correlation-based understanding is no understanding at all – it cannot be reliably extrapolated to related situations without the help of hypotheses and (possibly tentative) models of how the phenomenon under study works.
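For readers who would like to see this for themselves, here is a short simulation (a Python sketch using numpy; the pair count and the 0.5 threshold are illustrative choices of mine). Pairs of completely independent random walks – series that have nothing whatsoever to do with each other – routinely show large sample correlations, which is precisely why trend-laden datasets throw up spurious “patterns” when analysed without a model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs, n_steps = 2000, 200

# Each series is a random walk: the cumulative sum of independent normal
# steps. The two walks in a pair share nothing, so any correlation is spurious.
big = 0
for _ in range(n_pairs):
    x = rng.standard_normal(n_steps).cumsum()
    y = rng.standard_normal(n_steps).cumsum()
    if abs(np.corrcoef(x, y)[0, 1]) > 0.5:
        big += 1

print(f"{big / n_pairs:.0%} of independent pairs show |correlation| > 0.5")
# A sizeable fraction; for i.i.d. noise of the same length it would be ~0%.
```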
Finally, as Danah Boyd and Kate Crawford point out in this paper, big data changes what it means to know something…and it is highly debatable whether these changes are for the better. See the paper for more on this point. (Acknowledgement: the title of this post is inspired by the title of the Boyd–Crawford paper.)
In short: business intelligence practitioners should not uncritically accept the pronouncements of big data evangelists and vendors.
Business intelligence has ethical implications
This heresy applies to much more than business intelligence: any human activity that affects other people has an ethical dimension. Many IT professionals tend to overlook this facet of their work because they are unaware of it – and some prefer to remain so. The fact is, the decisions business intelligence professionals make with respect to usability, display, testing etc. have a potential impact on the people who use their applications. The impact may be as trivial as users having to click one button or apply one filter too many before they get their report, or as significant as a data error that leads to a poor business decision.
In short: business intelligence professionals ought to consider how their artefacts and applications affect their users.
In closing
This brings me to the end of my heresies for business intelligence. I suspect there will be a few practitioners who agree with me and (possibly many) others who don’t…and some of the latter may even find specific statements provocative. If so, I consider my job done, for my intent was to get business intelligence practitioners to question a few unquestioned tenets of their profession.
The essence of entrepreneurship
Introduction
In keeping with the standard connotation of the term, Wikipedia defines entrepreneurship as the “process of identifying and starting a business venture, sourcing and organizing the required resources and taking both the risks and rewards associated with the venture.” We are all familiar with stories of successful entrepreneurs; indeed, how can we not be – magazines and books are filled with anecdotes and case studies of entrepreneurial folks whose example we are urged to follow…the inventors of a certain search engine being particularly favoured role models.
Yet, after we are done digesting the rhetoric of gurus and ghostwriters, we seem to be none the wiser. The stories, as entertaining as they are, fail to capture the essence of entrepreneurship.
There is a good reason for this: entrepreneurship is not a process, as Wikipedia (and the books and gurus) would have us believe. Rather, it is about developing sensitivities towards anomalies or disharmonies in our day-to-day lives and then attempting to do something about them. This post, which is based on portions of a brilliant book entitled Disclosing New Worlds, is an attempt to elaborate on this point. The book is written by an unusual set of authors, including a philosopher and an entrepreneur, so it is not surprising that it offers a completely fresh perspective on the topic.
Before I dive into it, a few words about how this article is organized: I begin with some background material that is necessary in order to understand the main arguments in the book. Despite my best efforts, this section is rather long and somewhat involved (I’d appreciate any feedback and/or suggestions for improvement). Following that, I present the authors’ critique of conventional views of entrepreneurship and discuss why they are inadequate. I then (finally!) get to the main topic: a discussion of the essence of entrepreneurship, illustrating some of the key points through a concrete, though somewhat unusual example.
Background: Heidegger, rationalism and postmodernism
The central thesis of the book is based on the philosophy of Martin Heidegger, in particular his thoughts on how we perceive, encounter and deal with the world. For this reason, I will spend some time discussing Heidegger’s philosophy as it pertains to the discussion of entrepreneurship presented in the book.
The best way to understand Heidegger’s perspective is to contrast it with the two dominant worldviews of our times: the scientific-rational (or Cartesian) worldview that forms the basis of scientific thinking, and the postmodern view, which emphasizes the role of human choice and radical change. Below, I elaborate on the differences between the Heideggerian worldview on the one hand and Cartesianism and postmodernism on the other. I focus mainly on the Cartesian worldview, as it is by far the more dominant of the two, and will discuss postmodernism only briefly towards the end of this section.
A Cartesian observer perceives the world as composed of things and processes that can be observed and analysed in an objective manner. To be sure, the importance of such a mode of thinking cannot be overstated; it is, after all, what makes science and technology possible. However, and this is a key point, such a view does not come naturally to humans. As Heidegger noted, our actual day-to-day interactions with the world are not objective: we see tables, desks or computers not as objects to be analysed, but as things to be used in a natural way – i.e. without conscious thought. Heidegger coined the term ready-to-hand to denote this non-objective, natural way in which we deal with the world.
Heidegger claimed that we encounter things in the world primarily as being ready-to-hand rather than as objects in their own right. We take an objective attitude towards them only when they break down – i.e. when they stop functioning as we expect them to. For example, I become consciously aware of my computer as a computer only when it starts to malfunction. In other words, it is only when the computer stops being ready-to-hand that I see it as an object to be examined in its own right. When it is functioning correctly, however, it is simply a tool that I use without conscious awareness that it is a computer. It is in this sense that the rational-scientific way of viewing the world is not a natural one. Indeed, the rational-scientific mode of thinking completely misses this natural way in which we encounter the world.
Although the foregoing might sound a bit “out there”, it is important to note that Heidegger’s philosophy is primarily practical, for it deals with the day-to-day aspects of life. Indeed, our daily lives consist of a number of relatively self-contained worlds: home, work, friends – each with their own set of practices, i.e. things that we do within them in a natural way. A key Heideggerian concept in this connection is that of a disclosive space – which is a set of interrelated practices and ready-to-hand objects that define a particular aspect of our lives. For example, a disclosive space for a writer might include his or her equipment (computer, desk etc.) and practices (writing habits, rituals etc.) that he or she may follow when writing.
I’ll use the example of writers to illustrate another important point. Different writers may have different ways of working – each of these define different disclosive spaces although all writers engage in essentially the same activity (i.e. that of writing). The differences between similar disclosive spaces amount to differences in what Heidegger called style. Different writers have different working styles (not to be confused with their writing styles) as do different scientists, bakers, or even IT managers. A style is the way in which our practices within a disclosive space hang together as a whole – that is, it is the way in which we perform our tasks at work or when writing, doing science, baking or even when managing people, projects and processes. This is pretty much in line with the way in which we use the word “style” when we say, “that is (or is not) my style” or simply, “that’s (not) me.”
Another important aspect of Heideggerian thought is the notion of authenticity (see this article for a very readable discussion of authenticity in online interactions). According to Heidegger, being authentic means to act in a way that is true to oneself. This amounts to acting in a way that is consistent with what one really thinks or believes. Among other things, being authentic implies a deep awareness of who one is and what one stands for. Indeed, authenticity (or the lack of it) is reflected in one’s style (Reminder: style is what defines differences between similar disclosive spaces). Authenticity is inconsistent with a rational scientific worldview because it necessarily implies that one acts in an engaged and involved way – the polar opposite of the detached, dispassionate attitude that is valued by rationalism.
From the foregoing, it should be clear that Heidegger emphasizes the “involvedness” with which we engage in our day to day activities, at least, when we are immersed in what we are doing. It is impossible to be objective when one is totally involved with what one is doing. This is completely antithetical to the scientific rational view in which we are supposed to maintain a detached, dispassionate view of the world.
An important corollary of the above is that the scientific-rational view sees the world in an ahistorical (or non-historical) way – i.e. one does not consider one’s actions as being part of an ongoing story. Such an attitude can only result in partial knowledge, for to know things as they really are, one must understand their antecedents. Consider, for example, our current attitude to natural resources: we see them as objects available for uncontrolled exploitation rather than as non-renewable products of a (historical) process of evolution that ought to be used in a sustainable way. Such a mindset is common to most rational-scientific thinking – history and social consequences are considered to be sideshows that have at best a peripheral relevance to the matter at hand. The dangers of such thinking are becoming increasingly apparent.
The postmodern worldview is at the other end of the spectrum from the rational one. Postmodernism originally developed as a challenge to commonly accepted worldviews such as the scientific-rational one, as well as those rooted in cultural traditions. Postmodernism tells us that the scientific worldview does not have universal applicability, and that other modes of thinking (humanism, religion) may be more appropriate in certain domains. Apart from choice, the notion of radical change is central to postmodernism. So, although postmodernism is opposed to the rational worldview, it shares with it a lack of due consideration of history, because it advocates a discontinuous break with the past.
Before going on, it is worth summarizing the key messages of this section. In contrast to the Cartesian and postmodern views, Heidegger tells us that we experience the world (and its contents) in a ready-to-hand manner; that is, we encounter them not as objects to be analysed (as the rational view would have us believe) or to be interpreted as we please (as the postmodernists tell us), but as natural aspects of our day-to-day world. Heidegger emphasized that our identities arise largely from the way we encounter and deal with these aspects of our lives. Different people deal with the same situation in different ways – and each of these ways constitutes a style. As we shall see later, entrepreneurship is a certain style of encountering the world. However, before doing so, let us look at some conventional interpretations of entrepreneurship and see why they are deeply mistaken.
Conventional treatments of entrepreneurship
The authors critique the three major mainstream strands of thought on entrepreneurship:
- The theoretical approach
- The empirical approach
- The virtue-based (or devotional) approach
The theoretical approach is championed by writers such as Peter Drucker, who seek to build theoretical models of entrepreneurship. As he wrote in his classic, Innovation and Entrepreneurship, “Every practice rests on theory.” It is easy to see that this claim is mistaken by noting that there are many everyday practices that do not rest on theory – riding a bicycle, for example. In the case of entrepreneurship the gap between practice and theory is even wider because there is no well-defined process for entrepreneurship. In his book, Drucker claimed that entrepreneurship can be boiled down to a purposeful search for the “symptoms that indicate opportunities for innovation” and to “know and apply the principles of successful innovation” to these opportunities.
The problem with this viewpoint is that the “symptoms that indicate opportunities” are never obvious. At this very instant there are likely to be many such “symptoms” that we cannot see, simply because we are not attuned to them. Some of these might be picked up by people who are sensitive to such anomalies…and a small fraction of those who sense these anomalies might care enough to develop a concrete vision to do something about them. This is not a process in the usual sense of the word; it is a deeply personal journey that even the entrepreneur who experiences it would have difficulty articulating.
The empirical viewpoint is championed by those who believe that the “skill” of entrepreneurship is best learnt by studying examples of successful entrepreneurs. This approach consists of analysing a wide variety of case studies through which one develops an understanding of “different types” of entrepreneurship. Indeed, the whole point of case-study based learning is that it is supposed to be a substitute for real world experience – a sort of short-cut to wisdom. The flaw with this logic is easy to see – reading detailed biographies of, say, Barack Obama or Stephen Hawking will not help one internalize the qualities that make a successful politician or physicist.
The virtue-based approach takes the view that successful entrepreneurs have certain qualities or virtues that make them sensitive to potential entrepreneurial opportunities. George Gilder, a proponent of this view, suggests that the virtues of the successful entrepreneur are giving (philanthropy), humility and commitment. The problem with this view is again easy to see: there are many non-entrepreneurs who have these virtues and, perhaps more important, there are a great many entrepreneurs who have none of them. Nevertheless, the virtue-based approach is possibly closer to the mark because it highlights the importance of second-order practices – that is, practices that change the way we look at the world. Indeed, as we shall see next, entrepreneurship is a second-order practice.
History making – the essence of entrepreneurship
The concept of a disclosive space discussed above is the key to understanding what entrepreneurship is. As a reminder, a disclosive space is a set of interrelated practices and objects that define a certain aspect of our lives – driving a car or gardening, for example.
When we act within a disclosive space, we are in effect disclosing (or making apparent) an aspect of our lives. These disclosures are usually unsurprising because we act in customary or expected ways. For example, when we see someone driving or gardening, we have a pretty good idea of what they are doing without having to be told what they are up to. Their actions more or less explain themselves because they correspond to normal or well-accepted ways in which humans act. However, it is important to note that even though the practices are seen as normal, it doesn’t mean that they cannot be changed or improved – it is only that most of us do not see any scope for improvement.
This brings us to the crux of the argument: an entrepreneur is someone who sees scope for changing customary practices in a novel way. Moreover, since such changes completely transform the style of a disclosive space, in effect they disclose new worlds. Put another way, an entrepreneur is someone who sees anomalies in our customary ways of disclosing. He or she then holds on to those anomalies and attempts to fix or transform them by making changes in customary practices. Indeed, this is precisely what that much overused, overhyped and misunderstood term, innovation, is all about. Quoting from the book:
The kind of thinking that leads to innovation requires an openness to anomalies in life. It requires an interest in holding on to these anomalies in one’s daily life and in seeing clearly how the anomalies look under different conditions. If people do this in an enterprise…then they cannot see their lives and the …space in which they work as being settled…If one is living in the natural settled way of doing things then things happen as they should. The unordinary will appear unnatural and monstrous, not a truth worthy of preservation or [more important] a focus for reorganizing one’s life.
It should also be clear, now, that an entrepreneur must have a good sense of history – to understand what changes he or she wants to bring about and why, an entrepreneur must have a deep understanding of the current situation and its antecedents. Moreover, since such a person transforms established practices, in a more or less radical fashion, he or she is actually making history.
The book describes different ways in which historical disclosing can occur. These have applicability not only to entrepreneurship but also in the social and political sphere. However, for reasons of space I will not go into this. Instead, I will close this article with an example that illustrates the points I have made about entrepreneurship.
An example
In the early 1900s, a clerk at a patent office in Bern wrote a number of landmark papers that transformed physics and our understanding of the world. Indeed, Albert Einstein is a perfect example of entrepreneurship in action. I will focus on just one of his contributions to physics – the special theory of relativity – and show how the way in which he arrived at this theory embodies the points I have made in the previous section (note: I’ve glossed over some technical details below; the discussion is involved enough as it is!):
1. The pre-Einsteinian worldview was based on classical mechanics, which came out of the work of Newton and others. When Einstein proposed his theory of special relativity, classical mechanics had been around for more than 200 years, and had been successfully used to solve many scientific and engineering problems.
2. One of the consequences of classical mechanics is that the speed of any object depends on the state of motion of the person who is observing it. An example will help make this cryptic statement clearer: two trains travelling at the same speed in the same direction are motionless with respect to each other – i.e. to an observer located on one of the trains, the other train will appear to be motionless. However, an observer located on the ground will see both trains as moving.
3. The work of James Clerk Maxwell on electromagnetic theory in the late 1800s predicted that the speed of light in a vacuum is a constant – approximately 300,000 km/s. Experiments showed that the speed of light has this value regardless of the state of motion of the observer.
4. There is a contradiction between (2) and (3), for if Maxwell’s theory is to be consistent with classical mechanics, the speed of light ought to depend on the speed of the observer. However, although many experiments were devised to detect such a dependence, none was ever found.
5. Einstein realized that either Newton or Maxwell had to be wrong. He held on to this anomaly for a long time, pondering the best way to resolve it. He finally surmised (for reasons I won’t go into here) that the fault lay with classical mechanics rather than Maxwell’s electromagnetic theory. Very simply, he made the bold guess that classical mechanics is wrong at speeds close to that of light. In effect, Einstein resolved the anomaly by “fixing up” classical mechanics in such a way as to make it consistent with electromagnetic theory (the worked formula below shows how the fix plays out). The special theory of relativity is basically the resolution of this anomaly. Indeed, most major scientific advances are made through the resolution of such anomalies.
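For readers comfortable with a little algebra, the resolution can be seen in one standard textbook formula (the symbols are the usual ones: u and v are the velocities being combined, c the speed of light):

```latex
% Galilean (classical) velocity addition:
w = u + v
% Einstein's relativistic composition law:
w = \frac{u + v}{1 + uv/c^{2}}
% Setting u = c (a light signal observed from a frame moving at speed v):
w = \frac{c + v}{1 + cv/c^{2}} = \frac{c + v}{(c + v)/c} = c
```

Every observer thus measures the same speed for light, whatever their own state of motion – exactly what the experiments found. And for everyday speeds, uv/c² is vanishingly small, so the relativistic law reduces to the classical one; that is why classical mechanics had served so well for two hundred years.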
In brief, then, the special theory of relativity:
- Resolved a key anomaly of late 19th century physics in a completely novel way.
- Disclosed a new world – literally!
In developing the theory, Einstein displayed a unique style of doing physics – for example, since it is impossible to travel at speeds close to that of light, he devised thought experiments to work out the consequences of travelling at such speeds. He also displayed a deep sense of the history of the problem that he was working on: without a thorough understanding of the work of Newton, Maxwell and others, it would have been impossible for him to develop his theory.
In short, Einstein is the quintessential entrepreneur because he made history by disclosing a new world.
Conclusion
Entrepreneurs are those who care deeply about anomalies and have the ability to hold on to them and think about them over extended periods of time. In doing so they sometimes resolve the anomalies that worry them, and are then recognized as entrepreneurs. However, there are many who struggle without success, and they are no less entrepreneurial than those who succeed. Such people, whether successful or not, necessarily possess a deep sense of the history of the problem they attempt to address. Indeed, this must be so, for in resolving the anomaly they care about, they write another chapter of that history.

