Enactivism and predictive processing: A non-representational view

Philosophical Explorations 21 (2):264-281 (2018)

Abstract

This paper starts by considering an argument for thinking that predictive processing (PP) is representational. This argument suggests that the Kullback–Leibler (KL)-divergence provides an accessible measure of misrepresentation, and therefore, a measure of representational content in hierarchical Bayesian inference. The paper then argues that while the KL-divergence is a measure of information, it does not establish a sufficient measure of representational content. We argue that this follows from the fact that the KL-divergence is a measure of relative entropy, which can be shown to be the same as covariance (through a set of additional steps). It is well known that facts about covariance do not entail facts about representational content. So there is no reason to think that the KL-divergence is a measure of (mis-)representational content. This paper thus provides an enactive, non-representational account of Bayesian belief optimisation in hierarchical PP.
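As a purely illustrative aside (not part of the paper's own argument), the quantity at issue can be made concrete: the KL-divergence D_KL(p ‖ q) is the relative entropy of one distribution with respect to another. The distributions below are hypothetical toy densities, not anything from the paper; the sketch only shows that the measure quantifies informational divergence, which by itself says nothing about representational content.

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D_KL(p || q) between two discrete
    distributions, i.e. the relative entropy of p with respect to q."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical toy densities over three states (illustrative only)
p = [0.7, 0.2, 0.1]
q = [0.5, 0.3, 0.2]

print(kl_divergence(p, q))  # positive whenever p and q differ
print(kl_divergence(p, p))  # 0.0: a distribution has no divergence from itself
```

Note that the divergence is asymmetric (D_KL(p ‖ q) ≠ D_KL(q ‖ p) in general), which is one reason it is read as a directed measure of informational "distance" rather than a metric.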
