Abstract
The phenomenon of ‘Deep Learning,’ which has given us such science-fiction-like innovations as self-driving cars and visual search tools in photographic applications, is a new form, and subset, of ‘Machine Learning’ made possible by very recent advances in computing. Machine Learning itself has been around for some decades: essentially pattern-recognition software requiring very substantial computing resources, which were, until recently, largely theoretical and hard to come by. Machine Learning was one avenue of the field of Artificial Intelligence known as Narrow A.I. – ‘artificial intelligence’ strictly limited in scope, intended as a first step towards what came, as a result, to be known as General A.I. General A.I., known at the time simply as ‘Artificial Intelligence’, was the 1950s dream that brought us Robby the Robot, and more recently C-3PO and The Terminator: the science-fiction characters that remain the only manifestations of General Artificial Intelligence.
‘Deep Learning’ also continues a trend in engineering, dating from the 1940s, of using language in a way that I will contest in this paper: a co-opting of words that have long described human activities, using them instead to describe what engineers have managed to make machines do. These co-optations reduce the richness of a word, making its referent an algorithm: a flow diagram representing the bare essentials of what an engineer can understand and reproduce of a human activity, not the human activity itself. This diagram of the ‘engineering possible’ over-simplifies the human activity it tries to depict. With continued usage, the meaning of the word has all too often been reduced to what the engineer has newly defined it to mean: something much less than it once was.
In this paper I attempt to roll back some of these co-optations and to re-introduce some of the richness of the words that engineering has taken. I examine Turing’s seminal paper on the notion of a thinking machine. I draw on the philosophical insights of Henri Bergson, especially in his book Matter and Memory, and on the discoveries of neuroscientists and complexity scientists. I try to show that the answer to Turing’s question, ‘Can machines think?’, remains a resounding ‘No!’, and that notions such as ‘deep learning’ not only misapply a term for the very human experience of learning, but degrade that experience through such use.