Abstract
A theory of the evolution of mind cannot be complete without an explanation of how cognition became representational. Artificial approximations of cognitive evolution do not, in general, produce representational cognition. We take this as an indication that there is a gap in our understanding of what drives evolution towards representational solutions, and propose a theory to fill this gap. We suggest selection for learning and selection for second-order learning as the causal factors driving the emergence of innate and acquired forms of representation, respectively. Cognition is commonly viewed as a “black box”: selection works on externally visible behaviour alone, with little regard for implementation structure. Yet even if implementation structure is not constrained by selection on behaviour, it does affect how easy or difficult it is to make specific modifications to that behaviour. Hence selection for learning can affect the implementation structure of behaviour. Similarly, the implementation structure of learning ability itself is not under direct selection, but selection for second-order learning can affect the implementation structure of first-order learning. We argue that these indirect selection effects guide evolution towards representational implementations, because structural alignment between implementation and environment guarantees that simple changes in the environment can be met with simple changes in implementation. We illustrate the theory with examples of computational investigations, and discuss how the theory may help put representational cognition within reach of purely connectionist AI.