Abstract
The performance of a connectionist learning system on a simple problem has been described by Hinton and is briefly reviewed here: a finite set is learned from a finite collection of finite sets, and the system generalizes correctly from partial information by finding simple "features" of the environment. For comparison, a very similar problem is formulated in the Gold paradigm of discrete learning functions. To obtain generalization similar to that of the connectionist system, a non-conservative learning strategy is required. We define a simple non-conservative strategy that, like the connectionist system, generalizes by finding simple "features" of the environment. If an arbitrary finite bound is placed on the number and complexity of the features to be found, learning can be guaranteed relative to a probabilistic criterion of success. However, this approach to induction suffers from essentially the same problems as many other approaches that have failed.
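As an illustration of what such a non-conservative strategy can look like in the Gold setting, here is a minimal sketch. It assumes items are fixed-length binary feature vectors and that a "simple feature" is a single bit held constant across the observed data; the function name `learn`, the parameter `max_features`, and this representation are illustrative assumptions, not the paper's definitions. A conservative learner would guess exactly the observed sample; this one guesses the largest set definable by a bounded number of such features, and so may generalize beyond the data.

```python
from itertools import product

def learn(sample, n_bits, max_features=None):
    """Guess the largest set definable by 'simple features'
    (single bits held constant) that contains every observed item."""
    # A feature here is a coordinate on which all observed items agree.
    features = {}
    for i in range(n_bits):
        values = {item[i] for item in sample}
        if len(values) == 1:
            features[i] = values.pop()
    # Optional finite bound on the number of features (keep lowest indices),
    # corresponding loosely to the bound discussed above.
    if max_features is not None:
        features = dict(sorted(features.items())[:max_features])
    # Hypothesis: every bit-vector consistent with the retained features.
    return {v for v in product((0, 1), repeat=n_bits)
            if all(v[i] == b for i, b in features.items())}

sample = {(1, 0, 0, 0), (1, 0, 1, 1)}
print(sorted(learn(sample, n_bits=4)))
# Prints four vectors, including the unseen (1, 0, 0, 1) and (1, 0, 1, 0):
# the hypothesis goes beyond the observed data, which is what makes the
# strategy non-conservative in the Gold sense.
```

A conservative learner would instead output the sample itself after each datum and never commit to unseen items; the `max_features` bound in the sketch is one way to make the space of guessable hypotheses finite, in the spirit of the finite bound on the number and complexity of features mentioned above.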