Eight to Late

Sensemaking and Analytics for Organizations

Selling AI ethically – a customer perspective


Artificial intelligence (AI) applications that can communicate in human language seem to capture our attention whilst simultaneously blunting our critical capabilities. Examples of this abound, ranging from claims of AI sentience to apps that are “always here to listen and talk.”    Indeed, a key reason for the huge reach of Large Language Models (LLMs) is that humans can interact with them effortlessly. Quite apart from the contested claims that they can reason, the linguistic capabilities of these tools are truly amazing.

Vendors have been quick to exploit our avidity for AI. Through relentless marketing, backed up by over-the-top hype, they have been able to make inroads into organisations. Their sales pitches tend to focus almost entirely on the benefits of these technologies, with little or no consideration of the downsides. To put it bluntly, this is unethical. Doubly so because customers are so dazzled by the capabilities of the technology that they rarely ask the questions they should.

AI ethics frameworks (such as this one) overlook this point almost entirely. Most of them focus on things such as fairness, privacy, reliability and transparency. There is no guidance or advice to vendors on selling AI ethically, by which I mean a) avoiding overblown claims, b) being clear about the limitations of their products and c) showing customers how they can engage with AI tools meaningfully – i.e., in ways that augment human capabilities rather than replace them.

In this article, I offer some suggestions on how vendors can help their customers develop a balanced perspective on what AI can do for them. To set the scene, I will begin by recounting the public demo of an AI system in the 1950s, which was accompanied by much media noise and heightened public expectations.

Some things, it seems, do not change.

–x–

The modern history of Natural Language Processing (NLP) – the subfield of computer science that deals with enabling computers to “understand” and communicate in human language – can be traced back to the Georgetown-IBM research experiment that was publicly demonstrated in 1954. The demonstration is trivial by today’s standards. However, as noted by John Hutchins in this paper, “…Although a small-scale experiment of just 250 words and six ‘grammar’ rules it raised expectations of automatic systems capable of high quality translation in the near future…” Here’s how Hutchins describes the hype that followed the public demo:

On the 8th January 1954, the front page of the New York Times carried a report of a demonstration the previous day at the headquarters of International Business Machines (IBM) in New York under the headline “Russian is turned into English by a fast electronic translator”: A public demonstration of what is believed to be the first successful use of a machine to translate meaningful texts from one language to another took place here yesterday afternoon. This may be the cumulation of centuries of search by scholars for “a mechanical translator.” Similar reports appeared the same day in many other American newspapers (New York Herald Tribune, Christian Science Monitor, Washington Herald Tribune, Los Angeles Times) and in the following months in popular magazines (Newsweek, Time, Science, Science News Letter, Discovery, Chemical Week, Chemical Engineering News, Electrical Engineering, Mechanical World, Computers and Automation, etc.) It was probably the most widespread and influential publicity that MT (Machine Translation – or NLP by another name) has ever received.”

It has taken some 70 years, but here we are: present-day LLMs go well beyond the grail of machine translation. Among other “corporately-useful” things, LLM-based AI products such as Microsoft Copilot can draft documents, create presentations, and even analyse data. As these technologies require no training whatsoever to use, it is unsurprising that they have captured the corporate imagination like never before.

Organisations are avid for AI and vendors are keen to cash in.

Unfortunately, there is a huge information asymmetry around AI that favours vendors: organisations are typically not fully aware of the potential downsides of the technology and vendors tend to exploit this lack of knowledge. In a previous article, I discussed how non-specialists can develop a more balanced perspective by turning to the research literature. However, this requires some effort and unfairly puts the onus entirely on the buyer.  

Surely, vendors have a responsibility too.

–x–

I recently sat through a vendor demo of an LLM-based “enterprise” product. As the presentation unfolded, I made some notes on what the vendor could have said or done to help my colleagues and me make a more informed decision about the technology. I summarise them below in the hope that a vendor or two may consider incorporating them in their sales spiel. OK, here we go:

Draw attention to how LLMs do what they do: It is important that users understand how these tools work. Vendors should demystify LLM capabilities by giving users an overview of how the models generate their outputs. If users understand this, they are less likely to treat LLM outputs as error-free or oracular truths. Indeed, a recent paper claims that LLM hallucinations (aka erroneous outputs) are inevitable – see this article for a simple overview of the paper.

Demo examples of LLM failures: The research literature has several examples of LLMs failing at reasoning tasks – see this article for a summary of some. Demonstrating these failures is important, particularly in view of OpenAI’s claim that its new GPT-4o tool can reason. Another point worth highlighting is the bias present in LLM (and, more generally, Generative AI) models. For an example, see the image created by the Bing Image Creator – the prompt I used was “large language model capturing a user’s attention.”

Discourage users from outsourcing their thinking: Human nature being what it is, many users will be tempted to use these technologies to do their thinking for them. Vendors need to highlight the dangers of doing so. If users do not think a task through before handing it to an LLM, they will not be able to evaluate its output. Thinking a task through includes mapping out the steps and content (where relevant), and having an idea of what a reasonable output should look like.

Avoid anthropomorphising LLMs: Marketing copy often attributes agency to LLMs through phrases such as “the AI is thinking” or “it thinks you are asking for…”. Such language suggests that LLMs can think or reason as humans do, and biases users towards attributing agency to these tools.

Highlight potential dangers of use in enterprise settings: Vendors spend a lot of time assuring corporate customers that their organisational data will be held securely. However, exposing organisational data (such as data in corporate OneDrive folders) even within the confines of the corporate network opens up the possibility of employees querying information that they should not have access to. Moreover, formulating such queries is trivially easy because they can be asked in plain English. Vendors claim that this is not an issue if file permissions are implemented properly in the organisation. However, in my experience, people tend to overshare files within their organisations. Another danger is that the technology opens up the possibility of spying on employees. For example, a manager who wants to know what an employee is up to can ask the LLM which documents the employee has been working on.

Granted, highlighting the above might make some corporate customers wary of rushing in to implement LLM technologies within their organisations. However, I would argue that this is a good thing for vendors in the long run, as it demonstrates a commitment to implementing AI ethically.

–x–

It is appropriate to end this piece by making a final point via another historical note.

The breakthrough that led to the development of LLMs was first reported in a highly cited 2017 paper entitled “Attention is all you need”. The paper describes an architecture (called the transformer) that enables neural networks to accurately learn the multiple contexts in which words occur in a large volume of text. If the volume of text is large enough – say a representative chunk of the internet – then a big enough neural network, with billions of parameters, can be trained to encode the entire vocabulary of the English language in all possible contexts.
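For readers who want a more concrete picture, here is a minimal sketch of scaled dot-product attention – the core operation described in the transformer paper – written in plain Python/NumPy. It is an illustration of the idea only: real models add learned projection matrices, stack many such layers and run to billions of parameters, and the toy dimensions below are made up for the example.

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def scaled_dot_product_attention(Q, K, V):
        # Q, K, V: (sequence_length, d_k) arrays of query, key and value vectors
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)      # how strongly each token "attends to" every other token
        weights = softmax(scores, axis=-1)   # each row is a probability distribution over tokens
        return weights @ V                   # each output is a weighted mix of the value vectors

    # Toy example: a "sentence" of 4 tokens, each represented by an 8-dimensional vector
    rng = np.random.default_rng(0)
    Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
    print(scaled_dot_product_attention(Q, K, V).shape)   # (4, 8)

The point to note is that the attention weights are just numbers computed from the inputs; the network “attends” in a purely statistical sense.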

The authors’ choice of the “attention” metaphor is inspired because it suggests that the network “learns to attend to” what is important. In the context of humans, however, the word “attention” means much more than just attending to what is important. It also refers to the deep sense of engagement with what we are attending to. The machines we use should help us deepen that engagement, not reduce (let alone eliminate) it. And therein lies the ethical challenge for AI vendors.

–x–x–

Written by K

June 12, 2024 at 7:45 am

Seeing through AI hype – some thoughts from a journeyman


It seems that every new release of a Large Language Model (LLM) is accompanied by a firehose of vendor hype and uncritically positive analyst and media comments.

Figure 1: OpenAI tweet announcing the release of GPT-4o

The seductions of a new technology make it all too easy to overlook its shortcomings and side-effects, and rush in where “angels may fear to tread.” Although this is particularly evident in today’s age of LLMs, it is not a new phenomenon. As the anthropologist Gregory Bateson noted over forty years ago:

It seems that every important scientific advance provides tools which look to be just what the applied scientists and engineers had hoped for, and usually these [folks] jump in without more ado. Their well-intentioned (but slightly greedy and slightly anxious) efforts usually do as much harm as good, serving at best to make conspicuous the next layer of problems, which must be understood before the applied scientists can be trusted not to do gross damage. Behind every scientific advance there is always a matrix, a mother lode of unknowns out of which the new partial answers have been chiseled. But the hungry, overpopulated, sick, ambitious, and competitive world will not wait, we are told, till more is known, but must rush in where angels fear to tread.

I have very little sympathy for these arguments from the world’s “need.” I notice that those who pander to its needs are often well paid. I distrust the applied scientists’ claim that what they do is useful and necessary. I suspect that their impatient enthusiasm for action, their rarin’-to-go, is not just a symptom of impatience, nor is it pure buccaneering ambition. I suspect that it covers deep epistemological panic.”

The hype and uncritical use of LLM technology are symptoms of this panic. This article is largely about how you and I – as members of the public – can take a more considered view of these technologies and thereby avoid epistemological panic, at least partially. Specifically, I cover two areas: a) the claims that LLMs can reason (see tweet above) and b) the broader question of the impact of these technologies on our information ecosystem.

–x–

One expects hype in marketing material from technology vendors. However, these days it seems that some researchers, who really ought to know better, are not immune. As an example, in this paper a bunch of computer scientists from Microsoft Research suggest that LLMs show “sparks of AGI” (Artificial General Intelligence), by which they imply that LLMs can match or surpass human cognitive capabilities such as reasoning. I’ll have more to say about the claim shortly. However, before I go on, a few words about how LLMs work are in order.

The principle behind all LLM tools, such as GPT, is next-token prediction – i.e., the text they generate is drawn from a list of the most likely next words, based on the prompt (i.e., the input you provide and the text generated thus far). The text LLMs generate is usually coherent and grammatical, but not always factually correct (as a lawyer found out the hard way) or logically sound (I discuss examples of this below).

The coherence and grammatical correctness are to be expected because LLM responses are drawn from a massive multidimensional probability distribution derived from the data they are trained on, which is a representative chunk of the internet. This is augmented by human feedback via a process called reinforcement learning from human feedback (RLHF).
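For readers who like to see an idea in code, here is a toy illustration of next-token prediction: a bigram word model built from a made-up three-sentence corpus, which generates text by repeatedly sampling a plausible next word. Real LLMs condition on vastly richer context using neural networks trained on internet-scale data, but the generation loop is conceptually the same.

    import random
    from collections import defaultdict, Counter

    corpus = ("the model predicts the next word . "
              "the model samples the next word from a distribution . "
              "the next word is chosen by probability .").split()

    # Count how often each word follows each other word (a bigram "language model")
    follower_counts = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        follower_counts[current][nxt] += 1

    def generate(prompt_word, n_tokens=8):
        out = [prompt_word]
        for _ in range(n_tokens):
            followers = follower_counts[out[-1]]
            if not followers:
                break
            words, freqs = zip(*followers.items())
            # Sample the next token in proportion to how often it followed the previous one
            out.append(random.choices(words, weights=freqs)[0])
        return " ".join(out)

    print(generate("the"))

The output is locally plausible but has no grounding in facts or logic – which, writ very large, is also why LLM output can be fluent yet wrong.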

For those interested in finding out more about how LLMs work,  I highly recommend Stephen Wolfram’s long but excellent non-technical essay which is also available in paperback.

Given the above explanation of how LLMs work, it should be clear that any claim suggesting LLMs can reason like humans should be viewed with scepticism.

Why?

Because a next-token predictor cannot reason; it can at best match patterns. As Subbarao Kambhampati puts it, they are approximate retrieval engines. That said, LLMs’ ability to do pattern matching at scale enables them to do some pretty mind-blowing things that look like reasoning. See my post, More Than Stochastic Parrots, for some examples of this, and keep in mind that they are from a much older version of ChatGPT.

So, the question is: what exactly are LLMs doing, if not reasoning?

In the next section, I draw on recent research to provide a partial answer to this question. I’ll begin with a brief discussion of some of the popular prompting techniques that seem to demonstrate that LLMs can reason and then highlight some recent critiques of these approaches.

–x–

In a highly cited 2022 paper entitled Chain-of-Thought (CoT) Prompting Elicits Reasoning in Large Language Models, a team from Google Brain claimed that providing an LLM with a “series of intermediate reasoning steps significantly improves [its] ability to perform complex reasoning.” Figure 2 below shows an example from their paper (see this blog post from the research team for a digest version of the paper).

Figure 2: Chain of Thought Prompting (From Wei et al. 2023)

The original CoT paper was closely followed by this paper (also by a team from Google Brain) claiming that one does not even have to provide intermediate steps: simply adding “Let’s think step by step” to a prompt will do the trick. The authors called this “zero-shot prompting.” Figure 3 below, taken from the paper, compares few-shot and CoT prompting.

Figure 3: Zero-Shot Prompting (From Kojima et al. 2023)
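To make the difference between these prompting styles concrete, here is a minimal sketch of the three variants as plain strings. The call_llm function is a hypothetical stand-in for whatever chat-completion API you happen to use, and the worked example is written in the style of the problems used in the CoT paper.

    # Hypothetical stand-in for a chat-completion API call
    def call_llm(prompt: str) -> str:
        raise NotImplementedError("wire this up to your LLM provider of choice")

    question = ("Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
                "How many tennis balls does he have now?")

    # 1. Standard prompting: just ask the question
    standard_prompt = f"Q: {question}\nA:"

    # 2. Chain-of-thought prompting: include a worked example with intermediate reasoning steps
    cot_prompt = (
        "Q: A cafe had 23 apples. It used 20 to make lunch and bought 6 more. How many apples does it have?\n"
        "A: The cafe started with 23 apples and used 20, leaving 3. It bought 6 more, so 3 + 6 = 9. The answer is 9.\n"
        f"Q: {question}\nA:"
    )

    # 3. Zero-shot CoT prompting: no worked example, just the magic phrase
    zero_shot_cot_prompt = f"Q: {question}\nA: Let's think step by step."

    # response = call_llm(zero_shot_cot_prompt)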

The above approach works for many common reasoning problems. But does it imply that LLMs can reason? Here’s how Melanie Mitchell puts it in a Substack article:

While the above examples of CoT and zero-shot CoT prompting show the language model generating text that looks like correct step-by-step reasoning about the given problem, one can ask if the text the model generates is “faithful”—that is, does it describe the actual process of reasoning that the LLM uses to solve the problem?  LLMs are not trained to generate text that accurately reflects their own internal “reasoning” processes; they are trained to generate only plausible-sounding text in response to a prompt. What, then, is the connection between the generated text and the LLM’s actual processes of coming to an answer?

Incidentally, Mitchell’s Substack is well worth subscribing to for a clear-eyed, hype-busting view of AI, and this book by Arvind Narayanan, due to be released in September 2024, is also worth keeping an eye out for.

Here are a few interesting research threads that probe LLMs’ reasoning capabilities:

  • Subbarao Kambhampati’s research group has been investigating LLMs’ planning abilities. The conclusion they reach is that LLMs cannot plan, but can help in planning. In addition, you may want to view this tutorial by Kambhampati, in which he walks viewers through the details of the tests described in the papers.
  • This paper from Thomas Griffiths’ research group critiques the Microsoft paper on “sparks of AGI”. As the authors note: “Based on an analysis of the problem that LLMs are trained to solve (statistical next-word prediction), we make three predictions about how LLMs will be influenced by their origin in this task—the embers of autoregression that appear in these systems even as they might show sparks of artificial general intelligence”. In particular, they demonstrate that LLM outputs have a greater probability of being incorrect when one or more of the following three conditions are satisfied: a) the probability of the task to be performed is low, b) the probability of the output is low, and c) the probability of the input string is low. The probabilities in these three cases refer to the chances of examples of a) the task, b) the output or c) the input being found on the internet.
  • Somewhat along the same lines as the above, this paper by Zhaofeng Wu and colleagues investigates LLM reasoning capabilities through counterfactual tasks – i.e., variations of tasks commonly found on the internet. An example of a counterfactual task would be adding two numbers in base 8 as opposed to the default base 10 (see the sketch following this list). As expected, the authors find the performance of LLMs on counterfactual tasks to be substantially worse than on default tasks.
  • In this paper, Miles Turpin and colleagues show that when LLMs appear to reason, they can systematically misrepresent the reasons for their predictions. In other words, the explanations they provide for how they reached their conclusions can, in some cases, be demonstrated to be incorrect.
  • Finally, in this interesting paper (summarised here), Ben Prystawski and colleagues attempt to understand why CoT prompting works (when it does, that is!). They conclude that “we can expect CoT reasoning to help when a model is tasked with making inferences that span different topics or concepts that do not co-occur often in its training data, but can be connected through topics or concepts that do.” This is very different from human reasoning, which is a) embodied, and thus uses data that is tightly coupled – i.e., relevant to the problem at hand – and b) able to draw on the power of abstraction (e.g., theoretical models). Research of this kind, aimed at understanding the differences between LLM and human reasoning, can suggest ways to improve the former. But we are a long way from that yet.
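To make the counterfactual-task idea from the list above a little more tangible, here is a small sketch that builds matched base-10 and base-8 addition questions together with their correct answers, so that an LLM’s responses could be checked. The prompt wording is my own; the evaluation setup in the paper is considerably more elaborate.

    def to_base(n: int, base: int) -> str:
        # Represent a non-negative integer as a string of digits in the given base
        digits = []
        while True:
            n, r = divmod(n, base)
            digits.append(str(r))
            if n == 0:
                break
        return "".join(reversed(digits))

    def addition_task(a: int, b: int, base: int):
        question = (f"Assume all numbers are written in base {base}. "
                    f"What is {to_base(a, base)} + {to_base(b, base)}?")
        answer = to_base(a + b, base)
        return question, answer

    # Default task (base 10) vs counterfactual task (base 8)
    print(addition_task(27, 45, 10))   # ('... What is 27 + 45?', '72')
    print(addition_task(27, 45, 8))    # ('... What is 33 + 55?', '110')

Both tasks test the same arithmetic skill; only the second is rare on the internet, which is consistent with the paper’s finding that LLM performance drops on such tasks.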

To summarise, then:  the “reasoning” capabilities of LLMs are very different from those of humans and can be incorrect in surprising ways. I should also note that although the research described above predates the release of GPT-4o, the newer version does not address any of the shortcomings as there is no fundamental change in the way it is built. It is way too early for published research on this, but see this tweet from a researcher in Kambhampati’s group.

So much for reasoning. I now move on to the question of the pernicious effects of these technologies on information access and reliability. Although this issue has come to the fore only recently, it is far from new: search technology has been silently mediating our interactions with information for many years.

–x–

In 2008, I came across an interesting critique of Google Search by Tom Slee. The key point he makes is that Google influences what we know simply by the fact that the vast majority of people choose to click on one of the top two or three links presented to them by the search engine. This changes the dynamics of human knowledge. Here’s how Slee puts it, using an evocative analogy of Google as a (biased!) guide through a vastly complicated geography of information:

“Google’s success has changed the way people find their routes. Here is the way it happens. When a new cluster of destinations is built there may be a flurry of interest, with new signposts being erected pointing towards one or another of those competing locations. And those signposts have their own dynamics…But that’s not the end of the story. After some initial burst, no one makes new signposts to this cluster of destinations any more. And no one uses the old signposts to select which particular destination to visit. Instead everyone uses [Google]. It becomes the major determinant of the way people travel; no longer a guide to an existing geography it now shapes the geography itself, becoming the most powerful force of all in many parts of the land.”

To make matters worse, in recent years even the top results returned by Google have become increasingly tainted. As this paper notes, “we can conclude that higher-ranked pages are on average more optimized, more monetized with affiliate marketing, and they show signs of lower text quality.” In a very recent development, Google has added Generative AI capabilities to its search engine to enhance the quality of search results (Editor’s note: LLMs are a kind of Generative AI technology). However, as suggested by this tweet from Melanie Mitchell and this article, the road to accurate and trustworthy AI-powered search is likely to be a tortuous one…and to a destination that probably does not exist.

–x–

As we have seen above, by design, search engines and LLMs “decide” what information should be presented to us, and they do so in an opaque manner. Although the algorithms are opaque, we do know for certain that they use data available on the internet.  This brings up another issue: LLM-generated data is being added to (flooding?) the internet at an unknown rate. In a recent paper Chirag Shah and Emily Bender consider the effect of synthetically generated data on the quality of data on the internet. In particular, they highlight the following issues with LLM-generated data:

  • LLMs are known to propagate biases present in their training data.
  • They lack transparency – the responses generated by LLMs are presented as being authoritative, but with no reference to the original sources.
  • Users have little control over how LLMs generate responses. Often there can be an “illusion of control”, as we saw with CoT prompting.

Then there is the issue of how an information access system should work: should it just present the “right” result and be done with it, or should it encourage users to think for themselves and develop their information literacy skills? The short yet fraught history of search and AI technologies suggests that vendors are likely to prioritise the former over the latter.

–x–

Apart from the above issues of bias, transparency and control, there is the question of whether there are qualitative differences between synthetically generated and human generated data.  This question was addressed by Andrew Peterson in a recent paper entitled, AI and the Problem of Knowledge Collapse.  His argument is based on the empirical observation (in line with theoretical expectations) that any Generative AI trained on a large publicly available corpus will tend to be biased toward returning results that conform to popular opinion – i.e., given a prompt, it is most likely to return a response that reflects the “wisdom of the crowd.”  Consequently, opinions and viewpoints that are smaller in number compared to the mainstream will be underrepresented.

As LLM use becomes more widespread, AI-generated content will flood the internet and will inevitably become a significant chunk of the training data for LLMs. This will further amplify LLMs’ predilection for popular viewpoints in preference to those in the tail of the probability distribution (because the latter become increasingly underrepresented). Peterson terms this process knowledge collapse – a sort of regression to the average, leading to a homogenisation of the internet.

How to deal with this?

The obvious answer is to put in place measures that encourage knowledge diversity. As Peterson puts it:

…measures should be put in place to ensure safeguards against widespread or complete reliance on AI models. For every hundred people who read a one-paragraph summary of a book, there should be a human somewhere who takes the time to sit down and read it, in hopes that she can then provide feedback on distortions or simplifications introduced elsewhere.”

As an aside, an interesting phenomenon related to the LLM-mediated homogenisation of the information ecosystem was studied in this paper by Shumailov et al., who found that the quality of LLM responses degrades as the models are iteratively trained on their own outputs. In their experiments, they showed that if LLMs are trained solely on LLM-generated data, the responses degrade to pure nonsense within a few generations. They call this phenomenon model collapse. Recent research shows that model collapse can be avoided if training data includes a mix of human- and AI-generated text. The human element is essential to avoid the pathologies of LLM-generated data.
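The flavour of the model collapse result can be reproduced with a toy simulation: fit a simple model (here, a one-dimensional Gaussian) to some data, then train the next “generation” only on samples drawn from the fitted model, and repeat. This is a caricature of the paper’s experiments, not a reproduction of them, but it shows how the tails of the original distribution tend to get progressively lost.

    import numpy as np

    rng = np.random.default_rng(42)

    # Generation 0: "human" data drawn from a standard normal distribution
    data = rng.normal(loc=0.0, scale=1.0, size=100)

    for generation in range(1, 101):
        # "Train" a model on the current data: here, just fit a Gaussian by maximum likelihood
        mu, sigma = data.mean(), data.std()
        # The next generation is trained purely on the previous model's samples
        data = rng.normal(loc=mu, scale=sigma, size=100)
        if generation % 25 == 0:
            print(f"generation {generation:3d}: mean={mu:+.3f}, std={sigma:.3f}")

    # Over many generations the fitted standard deviation tends to drift downwards:
    # the tails of the original distribution are sampled less and less often and
    # eventually disappear – a toy analogue of model collapse.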

–x–       

In writing this piece, I found myself going down several research rabbit holes – one paper would lead to another, and then another, and so on. Clearly, it would be impossible for me to do justice to all the great work that investigates vendor claims and early adopter hype, but I realised that there is no need for me to do so. My objective in writing this piece is to discuss how we can immunise ourselves against AI-induced epistemological panic and – more importantly – to show that it is easy to do so. The simple solution is not to take what vendors say at face value and instead turn to the (mostly unbiased) research literature to better understand how these technologies work. Although the details in academic research reports can be quite technical, the practical elements of most papers are generally easy enough to follow, even for non-specialists.

So, I’ll sign off here with a final word from Bateson who, in the mid-1960s, had this to say about the uncritical, purpose- and panic-driven use of powerful technologies:

Today [human purposes] are implemented by more and more effective [technologies]. [We] are now empowered to upset the balances of the body, of society, and of the biological world around us. A pathology—a loss of balance—is threatened…Emergency is present or only just around the corner; and long-term wisdom must therefore be sacrificed to expediency, even though there is a dim awareness that expediency will never give a long-term solution……The problem is systemic and the solution must surely depend upon realizing this fact.”

Half a century later, the uncritical use of Generative AI technology threatens to dilute our cognitive capabilities and the systemic balance of the information ecosystem we rely on. It is up to us to understand and use these technologies in ways that do not outsource our thinking to mindless machines.

–x–x–   

Written by K

May 29, 2024 at 5:13 am

Posted in Understanding AI

The elusive arch – reflections on navigation and wayfinding


About a decade ago, when GPS technologies were on the cusp of ubiquity, Nicholas Carr made the following observation in a post on his blog:

“Navigation is the most elemental of our skills — “Where am I?” was the first question a creature had to answer — and it’s the one that gives us our tightest connection to the world. The loss of navigational sense is also often the first sign of a mind in decay…If “Where am I?” is the first question a creature had to answer, that suggests something else about us, something very important: memory and navigational sense may, at their source, be one and the same. The first things an animal had to remember were locational: Where’s my home? Where’s that source of food? Where are those predators? So memory may have emerged to aid in navigation.”

The interesting thing, as he notes in the post, is that the connection between memory and navigation has a scientific basis:

“In a 2013 article  in Nature Neuroscience, Edvard Moser and his colleague György Buzsáki provided extensive experimental evidence that “the neuronal mechanisms that evolved to define the spatial relationship among landmarks can also serve to embody associations among objects, events and other types of factual information.” Out of such associations we weave the memories of our lives. It may well be that the brain’s navigational sense — its ancient, intricate way of plotting and recording movement through space — is the evolutionary font of all memory.”

If this claim has even a smidgen of truth, it should make you think (very hard!) about the negative effects of following canned directions. Indeed, you’ve probably experienced some of these when your GPS – for whatever reason – decided to malfunction mid-trip.

We find our way through unfamiliar physical or mental terrain by “feeling our way” through it, a process of figuring out a route as one proceeds. This process of wayfinding is how we develop our own, personal mental maps of the unfamiliar.

–x–

A couple of weeks ago, I visited a close friend in Tasmania whom I hadn’t seen for a while. We are both keen walkers, so he had arranged for us to do the Cape Queen Elizabeth Walk on Bruny Island. The spectacular scenery and cool, cloudy weather set the scene for a great day.

From the accounts of others, we knew that the highlight of the walk is the Mars Bluff Arch, a natural formation carved out of rock over eons by continually pounding waves. We were keen to get to the arch, but the directions we gleaned from those accounts were somewhat ambiguous. Witness the following accounts from Tripadvisor:

“…it is feasible to reach the Arch even at high tide, but you will get wet. There is only one rock outcropping blocking your way when it’s not low tide (do not try to climb over/on it – it’s too dangerous). Take off your shoes, crop your pants, and walk through the ocean – just beside the visible rocks it’s all sand bottom. I did this at mid-tide and the water came up to my knees at the deepest point. It’s only about a 20 foot long section to walk. Try to time it so you don’t get splashed by waves…”

and

“…We went on low tide so we could walk the beach route as it’s really pretty. The other way around is further on and is about 30 mins longer there is a sign giving you the option once you get close to both directions so don’t worry if you do go on high tide. The Arch was a little hard to find once youre on the beach as it quite the way around through rocks and another cove looks like a solid rock from a distance but once your almost on top of it you see the arch…”

and

“…The tide was against us and so we slogged up the track over Mars Bluff with stunning panoramic views to Cape Elizabeth on one side and out to the Fluted Cape on the other. Had we taken the beach access we would not have enjoyed and marvelled at such stunning views! As we descended to the bleached white sand of the dunes it was interesting to try to determine the type of creatures that had left such an of prints and tracks in the sand. Had we not previously known of the arch’s existence, it would have been hard to find- it’s a real hidden gem, a geometric work of art, tucked away beneath the bluff!”

Daniel had looked up the tide charts, so we knew it was likely we’d have to take the longer route. Nevertheless, when we came to the fork, we thought we’d get down to the beach and check out the low tide route just in case.

As it turned out, the tide was up to the rocks. Taking the beach route would have been foolhardy.

We decided to “slog up the Mars Bluff track”. The thing is, when we got to the cove on the far side, we couldn’t find the damn arch.

–x–

In a walk – especially one that’s done for recreation and fun – exploration is the whole point. Google Maps-style directions – “walk 2 km due east on the track, turn left at the junction…” – would destroy the fun of finding things out for oneself.

In contrast, software users don’t want to spend their time exploring routes through a product, they want the most direct path from where they are to where they want to go.  Consequently, good software documentation is unambiguous. It spells out exactly what you need to do to get the product to work the way it should. Technical writers – good ones, at any rate – take great care to ensure that their instructions can be interpreted in only one way.

Surprise is anathema in software, but is welcome in a walk.  

–x–

In his celebrated book, James Carse wrote:

To be prepared against surprise is to be trained. To be prepared for surprise is to be educated.

Much of what passes for education these days is about avoiding surprise. By Carse’s definition, training to be a lawyer or data scientist is not education at all. Why?  I should let Carse speak, for he says it far more eloquently than I ever can:

Education discovers an increasing richness in the past, because it sees what is unfinished there. Training regards the past as finished and the future as to be finished. Education leads toward a continuing self-discovery; training leads toward a final self-definition. Training repeats a completed past in the future. Education continues an unfinished past into the future.”

Do you want to be defined by a label – lawyer or data scientist – or do you see yourself as continuing an unfinished past into the future?

Do you see yourself as navigating your way up a well-trodden corporate ladder, or wayfinding a route of your own making to a destination unknown?

–x–

What is the difference between navigation and wayfinding?

The former is about answering the questions “Where am I?” and “How do I get to where I want to go?”. A navigator seeks an efficient route between two spatial coordinates; the stuff in between is of little interest. In contrast, wayfinding is about finding one’s way through a physical space. A wayfinder figures out a route in an emergent manner, each step determined by the nature of the terrain, the path traversed and what lies immediately ahead.

Navigators focus on the destination; to them the journey is of little interest. Wayfinders pay attention to their surroundings; to them the journey is the main point.

The destination is a mirage. Once one arrives, there is always another horizon that beckons.

–x–

We climbed the bluff and took in the spectacular views we would have missed had we taken the beach route.

On descending the other side, we came to a long secluded beach, but there was nary a rock in sight, let alone an arch.

I looked the other way, towards the bluff we had just traversed. Only then did I make the connection – the rocks at the foot of the cliff. It should have been obvious that the arch would likely be adjacent to the cliff. But then, nothing was obvious; we had no map on which X marked the spot.

“Let’s head to the cliff,” I said, quickening my pace.  I scrambled over rocks at the foot of the cliff and turned my gaze seaward.

There it was, the elusive arch. Not a mark on a map, the real thing.

We took the mandatory photographs and selfies, of course. But we also sensed that no camera could capture the magic of the moment. Putting our devices away, we enjoyed a moment of silence, creating our own memories of the arch, the sea, and the horizon beyond.

–x–x–

Written by K

December 12, 2023 at 4:31 am