Abstract
In recent years, state-of-the-art artificial intelligence systems have begun to show signs of what might be regarded as human-level intelligence. Specifically, large language models such as OpenAI’s GPT-3 and, more recently, Google’s PaLM and DeepMind’s Gato have performed impressive feats of text generation. However, many researchers acknowledge that contemporary language models, and learning systems more generally, still lack capabilities essential for reaching, or at least advancing toward, general intelligence: understanding, reasoning, and the ability to employ knowledge of the world and common sense. Some believe that scaling will eventually bring about these capabilities; others think that a different architecture is needed. In this paper, we pursue the latter view, aiming to integrate a theoretical–philosophical conception of understanding as knowledge of dependence relations with the high-level requirements and engineering design of a robust AI system that combines machine learning and symbolic components.