Abstract
There are various ways of achieving an enlarged understanding of a concept of interest. One way is by giving it a proper definition. Another is by giving something else a proper definition and then using it to model or formally represent the original concept. Between the two we find varying shades of grey. We might open up a concept by a direct lexical definition of the predicate that expresses it, or by a theory whose theorems define it implicitly. At the other end of the spectrum, the modelling-this-as-that option also admits of like variation, ranging from models rooted in formal representability theorems to models conceived of as having only heuristic value. There are further differences still on both sides of this divide. In some cases, both the definiendum and the definiens of a definition are words or phrases of a common natural language. In others, the item of interest is a natural-language expression and its representation is furnished by the artificial linguistic system that models it. The modern history of these approaches is long and still growing. Much of this evolution has given too short shrift to the history of the demotion of ‘intuitive’ concepts in favour of the artificially contrived ones intended to model them. A working assumption of this article is that, in the absence of a good understanding of what motivated the modelling-turn in the foundations of mathematics and in the intuitive theory of truth, the whole notion of formal representability will remain inadequately understood. In the interests of space, I will concentrate on seminal issues in set theory as dealt with by Russell and Frege, and in the theory of truth for natural languages as dealt with by Tarski. The nub of the present focus is the representational role of model theory in the logics of formalized languages.