Abstract
Computer simulations show that an unstructured neural-network model [Shultz, T. R., & Bale, A. C. (2001). Infancy, 2, 501–536] covers the essential features of infant learning of simple grammars in an artificial language [Marcus, G. F., Vijayan, S., Bandi Rao, S., & Vishton, P. M. (1999). Science, 283, 77–80], and generalizes to examples both outside and inside the range of the training sentences. Knowledge-representation analyses confirm that these networks discover that duplicate words in the sentences are nearly identical, and that they use this near-identity relation to distinguish sentences that are consistent with a familiar grammar from those that are not. Recent simulations claimed to show that this model did not really learn these grammars [Vilcu, M., & Hadley, R. F. (2005). Minds and Machines, 15, 359–382], but those simulations confounded syntactic types with speech sounds and did not apply standard statistical tests to their results.