Abstract
This paper begins by considering an argument for thinking that predictive processing (PP) is representational. On this argument, the Kullback–Leibler (KL) divergence provides an accessible measure of misrepresentation, and therefore a measure of representational content, in hierarchical Bayesian inference. The paper then argues that while the KL divergence is a measure of information, it does not establish a sufficient measure of representational content. We argue that this follows from the fact that the KL divergence is a measure of relative entropy, which, through a set of additional steps, can be shown to be the same as covariance. It is well known that facts about covariance do not entail facts about representational content, so there is no reason to think that the KL divergence is a measure of (mis)representational content. The paper thus provides an enactive, non-representational account of Bayesian belief optimisation in hierarchical PP.
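For reference, the KL divergence invoked above has the standard definition (here for discrete distributions $P$ and $Q$ over a common support; this formula is the textbook definition, not a construction specific to this paper):

```latex
\[
D_{\mathrm{KL}}(P \,\|\, Q) \;=\; \sum_{x} P(x) \log \frac{P(x)}{Q(x)}
\;=\; \underbrace{-\sum_{x} P(x)\log Q(x)}_{\text{cross-entropy } H(P,Q)} \;-\; \underbrace{\Big(-\sum_{x} P(x)\log P(x)\Big)}_{\text{entropy } H(P)} .
\]
```

The second equality makes explicit why the KL divergence is called *relative* entropy: it is the cross-entropy of $P$ with respect to $Q$ minus the entropy of $P$, i.e. the expected extra information cost of encoding samples from $P$ using a code optimised for $Q$.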