Analogy, relevance realisation and the limits of AI
The falling price and increasing pervasiveness of LLM-based AIs make it easy to succumb to the temptation to outsource one’s thinking to machines. Indeed, much of the noise from vendors is aimed at convincing you to do just that. To be sure, there is little harm and some benefit in using AI to assist with drudgework (e.g., minuting meetings), provided one ensures that the output is validated (e.g., Are the minutes accurate? Have nuances been captured? Have off-the-record items been redacted?). However, as the complexity of the task increases, there comes a point where only those with domain expertise can use AIs as assistants effectively.
This is unsurprising to those who know that the usefulness of AI output depends critically on both the quality of the prompt and the user’s ability to assess the output. But it is equally unsurprising that vendors overstate claims about their products’ capabilities and understate the knowledge and experience required to use them well.
Over the last year or so, a number of challenging benchmarks have been conquered by so-called Large Reasoning Models. This raises the question of whether there are any inherent limits to the kinds of cognitive tasks that LLM-based AIs are capable of. At this time the question cannot be answered definitively, but one can get a sense of the kinds of tasks that would challenge machines by analysing examples of high-quality human thinking.
In a previous article, I described two examples highlighting the central role that analogies play in creative scientific work. My aim in the present piece is to make the case that humans will continue to be better than machines at analogical thinking, at least for the foreseeable future.
–x–
The two analogies I described in my previous article are:
- Newton’s intuition that the fall of an apple on the surface of the earth is analogous to the motion of the moon in its orbit. This enabled him to develop arguments that led to the Universal Law of Gravitation (a rough version of his reasoning is sketched after this list).
- Einstein’s assumption that the energy associated with electromagnetic radiation is absorbed or emitted in discrete packets akin to particles. This enabled him to make an analogy between electromagnetic radiation and an ideal gas, leading to a heuristic justification for the existence of photons (particles of light).
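To get a feel for why Newton’s analogy was so compelling, here is a rough modern rendering of his famous “moon test” – my sketch using round modern values, not a calculation from the article. If the same inverse-square gravity that pulls the apple also holds the moon, then the apple’s surface acceleration g, scaled down by the square of the moon’s distance (roughly 60 Earth radii), should match the moon’s observed centripetal acceleration:

```latex
% Newton's "moon test", sketched with round modern values (illustrative).
% The moon orbits at roughly r = 60 R_E (Earth radii). If gravity falls
% off as the inverse square of distance, the apple's acceleration g at
% the surface should scale down to match the moon's centripetal
% acceleration v^2/r (with v ~ 1.02 km/s and r ~ 3.84 x 10^8 m):
\[
  a_{\mathrm{moon}} \;=\; g\left(\frac{R_E}{r}\right)^{2}
  \;=\; \frac{9.8\ \mathrm{m/s^2}}{60^{2}}
  \;\approx\; 2.7\times 10^{-3}\ \mathrm{m/s^2}
  \;\approx\; \frac{v^{2}}{r}
\]
```

The agreement of those two numbers is what turned a suggestive analogy into a quantitative argument.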
AI evangelists will point to papers that demonstrate analogical reasoning in LLMs (see this paper for example). However, most of these works suggest that AIs are nowhere near as good as humans at analogising. Enthusiasts may then argue that it is only a matter of time before AI catches up. I do not think this will happen, because there are no objective criteria by which one can judge an analogy to be logically sound. Indeed, as I discuss below, analogies have to be assessed in terms of relevance rather than truth.
–x–
The logical invalidity of analogical reasoning is best illustrated by an example drawn from a paper by Gregory Bateson, in which he compares the following two syllogisms:
All humans are mortal (premise)
Socrates is human (premise)
Therefore, Socrates is mortal (conclusion)
and
Humans die
Grass dies
Humans are grass
The first syllogism is logically valid because it infers something about a particular member of a set from a statement that applies to all members of that set. The second is invalid because it equates members of different sets on the basis of a shared characteristic (the classic fallacy of the undistributed middle) – it is akin, for example, to saying mud (member of one set) is chocolate (member of another set) because they are both brown (shared characteristic).
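The contrast between the two argument forms is easy to state in standard first-order notation (my formalisation, not Bateson’s):

```latex
% The valid form: a universal premise instantiated at one individual
% (universal instantiation followed by modus ponens).
\[
  \frac{\forall x\,\bigl(\mathrm{Human}(x) \to \mathrm{Mortal}(x)\bigr)
        \qquad \mathrm{Human}(\mathit{socrates})}
       {\mathrm{Mortal}(\mathit{socrates})}
\]
% The "syllogism in grass" has no such derivation: from the premises
% Dies(humans) and Dies(grass), no rule of inference yields the identity
% humans = grass. A shared predicate never licenses an identity between
% its bearers.
\[
  \mathrm{Dies}(\mathit{humans}),\ \mathrm{Dies}(\mathit{grass})
  \;\nvdash\; \mathit{humans} = \mathit{grass}
\]
```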
The syllogism in grass, as Bateson called it, is but analogy by another name. Though logically incorrect, such syllogisms can give rise to fruitful trains of thought. For example, Bateson’s analogy draws our attention to the fact that both humans and grass are living organisms subject to evolution. This might then lead to thoughts on the co-dependency of grass and humans – e.g. the propagation of grass via the creation of lawns for aesthetic purposes.
Though logically and scientifically unsound, syllogisms in grass can motivate new lines of thinking. Indeed, Newton’s apple and Einstein’s photon are analogies akin to Bateson’s syllogism in grass.
–x–
The moment of analogical insight is one of seeing connections between apparently unconnected phenomena. This is a process of sensemaking – i.e. one of framing a problem out of a given situation. To do this effectively, one must first understand which aspects of the situation are significant, a process called relevance realisation.
In a recent paper, Johannes Jaeger and his colleagues note that living organisms exist in a continual flux of information, most of which is irrelevant to their purposes. From this information deluge they must pick out the minuscule fraction of signals or cues that might inform their actions. However, as the authors note,
“Before they can infer (or decide on) anything, living beings must first turn ill-defined problems into well-defined ones, transform large worlds into small, translate intangible semantics into formalized syntax (defined as the rule-based processing of symbols free of contingent, vague, and ambiguous external referents). And they must do this incessantly: it is a defining feature of their mode of existence.”
This process, which living creatures engage in continually, is the central feature of relevance realisation. Again, quoting from the paper,
“…it is correct to say that “to live is to know” [Editor’s note: a quote taken from this paper by Maturana https://www.tandfonline.com/doi/abs/10.1080/03033910.1988.10557705]. At the very heart of this process is the ability to pick out what is relevant — to delimit an arena in a large world. This is not a formalizable or algorithmic process. It is the process of formalizing the world in Hilbert’s sense of turning ill-defined problems into well-defined ones.”
The process of coming up with useful analogies is, at its heart, a matter of relevance realisation.
–x–
The above may seem far removed from Newton’s apple and Einstein’s photon, but it really isn’t. The fact that Einstein’s bold hypothesis took almost twenty years to be accepted, despite strong experimental evidence supporting it, suggests that relevance realisation in science is a highly subjective, individual matter. It is only through an (often long) process of socialisation and consensus building that “facts” and “theories” become objective. As Einstein stated in a lecture at UCLA in the 1930s:
“Science as something already in existence, already completed, is the most objective, impersonal thing that we humans know. Science as something coming into being, as a goal, is just as subjectively, psychologically conditioned as are all other human endeavours.”
That is, although established scientific facts are (eventually seen as being) objective, the process by which they are initially formulated depends very much on subjective choices made by an individual scientist. Such choices are initially justified via heuristic or analogical (rather than logical) arguments which draw on commonalities between disparate objects or phenomena. Out of an infinity of possible analogies, the scientist picks the one that is most relevant to the problem at hand. And as Jaeger and his colleagues have argued, this process of relevance realisation cannot be formalised.
–x–
To conclude: unlike humans, LLMs – and AIs in general – are incapable of relevance realisation. So, although LLMs might come up with creative analogies by the thousands, they cannot use them to enhance our understanding of the world. Indeed, good analogies – like those of Newton and Einstein – do not so much solve problems as disclose new ways of knowing. They are examples of intellectual entrepreneurship, a uniquely human activity that machines cannot emulate.
–x–x–


