The Computational Explanatory Gap
Abstract
Efforts to study consciousness using computational models over the last two decades have received a decidedly mixed reception. Investigators in mainstream AI have largely ignored this work, and some members of the philosophy community have argued that the whole endeavour is futile. Here we suggest that very substantial progress has been made, to the point where computational simulation has become an increasingly accepted approach to the scientific study of consciousness. However, efforts to create a phenomenally conscious machine have been much less successful. We believe that a major reason for this is a computational explanatory gap: our inability to understand or explain the implementation of high-level cognitive algorithms in terms of neurocomputational processing. Contrary to prevailing views, we suggest that bridging this gap is not only critical to further progress in the area of machine consciousness, but is also an important step towards understanding the hard problem of consciousness. We briefly describe some recent progress towards bridging this gap and assess whether any computational correlates of consciousness have been identified.