Explaining Neural Transitions through Resource Constraints

Philosophy of Science 89 (5):1196-1202 (2022)

Abstract

One challenge in explaining neural evolution is the formal equivalence of different computational architectures. If a simple architecture suffices, why should more complex neural architectures evolve? The answer must involve the intense competition for resources under which brains operate. I show how recurrent neural networks can be favored when increased complexity allows for more efficient use of existing resources. Although resource constraints alone can drive a change, recurrence shifts the landscape of what is later evolvable. Hence organisms on either side of a transition boundary may have similar cognitive capacities but very different potential for evolving new capacities.
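The abstract's efficiency claim can be made concrete with a simple parameter count (an illustrative sketch, not the paper's own model): a recurrent network reuses one set of weights at every time step, while a feedforward network "unrolled" to the same depth needs fresh weights per step, so its resource cost grows with sequence length. The function names and the width/length values below are hypothetical choices for illustration.

```python
def feedforward_params(width: int, depth: int) -> int:
    """Parameters for a stack of `depth` dense layers of size width x width."""
    return depth * (width * width + width)  # one weight matrix + bias per layer

def recurrent_params(width: int) -> int:
    """Parameters for a single recurrent layer reused at every time step."""
    return 2 * width * width + width  # input weights + recurrent weights + bias

T = 10   # sequence length (hypothetical)
W = 100  # layer width (hypothetical)

print(feedforward_params(W, T))  # cost grows linearly with T
print(recurrent_params(W))       # fixed cost, independent of T
```

On this toy accounting the unrolled feedforward network costs 101,000 parameters at depth 10, while the recurrent network costs 20,100 regardless of sequence length, which is the sense in which added architectural complexity (recurrence) buys more efficient use of a fixed resource budget.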


Links

PhilArchive






Author's Profile

Colin Klein
Australian National University
