The Turing Ratio: A Framework for Open-Ended Task Metrics
Abstract
The Turing Test is of limited use for entities whose performance differs substantially from human levels. We suggest an extension of Turing’s idea to a more differentiated measure, the “Turing Ratio”, which provides a framework for comparing human and algorithmic task performance, up to and beyond human performance levels. Games, and talent levels derived from pairwise comparisons, provide examples of the concept. We also discuss the related notions of intelligence amplification and task breadth. Intelligence amplification measures total computational efficiency (the computational benefit gained relative to the investment of programmer time, hardware, and other resources); we argue that evolutionary computation is a key amplifier of human intelligence. Task breadth is an attempt to weight Turing Ratios by the frequency and importance of the tasks they measure; doing well at a broad range of tasks serves as an empirical definition of “intelligence”. Measuring Turing Ratios, and considering task breadth, prior knowledge, and time series of the measures, may yield long-term insight into both open-ended computational approaches and the underlying task domains being measured.
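As a minimal sketch of the “talent levels derived from pairwise comparisons” mentioned above, the snippet below rates a program and a human reference player on a shared Elo-style scale from a series of game results. The Elo model and its constants (the 400-point scale factor, the K-factor) are standard in chess rating but are assumptions here for illustration; this is not the paper’s definition of the Turing Ratio, only one way a common human/machine performance scale can arise from pairwise comparisons.

```python
# Illustrative only: Elo ratings as one example of talent levels derived from
# pairwise comparisons, placing a program and a human on the same scale.
# The scale factor (400) and K-factor (16) are conventional chess values,
# assumed here for the sake of the example.

def elo_expected(r_a: float, r_b: float) -> float:
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 16.0):
    """Update both ratings after one game (score_a: 1 = win, 0.5 = draw, 0 = loss)."""
    e_a = elo_expected(r_a, r_b)
    delta = k * (score_a - e_a)
    return r_a + delta, r_b - delta

# Rate a program against a human reference player over a short series of games,
# then compare the resulting talent levels on the shared scale.
program, human = 1500.0, 1500.0
for score in [1, 0.5, 1, 1, 0, 1]:  # program's results vs. the human
    program, human = elo_update(program, human, score)

print(f"program rating: {program:.0f}, human rating: {human:.0f}")
```

Because both parties are rated by the same pairwise-comparison mechanism, the resulting scale remains meaningful even when machine performance moves well past human levels, which is the property the abstract highlights.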