A post by Cameron Buckner
Nativists in psychology like Steven Pinker and Gary Marcus often warn their readers about the dangers of empiricism. In particular, they worry that many neural network modelers are reviving the minimalist goals of behaviorist psychology, which “through most of the 20th century…tried to explain all of human behavior by appealing to a couple of simple mechanisms of association and conditioning” (Pinker 2003) without any other forms of innate structure whatsoever (see, e.g., Marcus, 2018). Unfortunately, as their fellow nativists Laurence & Margolis (2015) observe, casting current disputes in cognitive science in these terms has the consequence that “the empiricists” no longer really exist…and maybe never did. While most empiricists are like radical behaviorists in eschewing innate ideas, almost all other empiricists agree that a significant amount of innate, general-purpose cognitive machinery is required to extract abstract ideas from experience.
Recent criticisms of “Deep Learning” (LeCun, Bengio, & Hinton 2015) are a case in point (e.g. Marcus 2018; Lake et al. 2017). Critics worry that all the impressive things that Deep Neural Networks (DNNs) appear to do—from recognizing objects in photographs at human or superhuman levels of accuracy, to beating human experts at chess, Go, or Starcraft II, to predicting protein folds better than molecular biologists who have devoted their lives to the task—are just the results of massive amounts of computation being directed at “statistics”, “linear algebra”, or “curve-fitting”, which, without the structure provided by innate ideas, will never scale up to human-like intelligence. Of course, everything the brain does could be described as mere “neural firings”, so the problem can’t just be that a DNN’s operations can be thinly redescribed; there must be specific things that human brains can do which DNNs, in principle, cannot. Many other suggestions have been offered here, but to illustrate my points about empiricism and cognitive architecture, I will focus on a brilliant list of operations that Jerry Fodor (2003) thinks are required for rational cognition but that empiricists cannot explain. This list includes: 1) synthesizing exemplars of abstract categories for use in reasoning, 2) fusing together simpler ideas into novel composites (e.g. unicorn), 3) making decisions in novel contexts on the basis of simulated experience, and 4) distinguishing causal and semantic relations between thoughts.