Front Psychol. 2018 Apr 13;9:374. doi: 10.3389/fpsyg.2018.00374. eCollection 2018.

Attentional Bias in Human Category Learning: The Case of Deep Learning.

Frontiers in psychology

Catherine Hanson, Leyla Roskan Caglar, Stephen José Hanson

Affiliations

  1. Rutgers Brain Imaging Center, Newark, NJ, United States.
  2. RUBIC and Psychology Department and Center for Molecular and Behavioral Neuroscience, Rutgers University-Newark, Newark, NJ, United States.

PMID: 29706907 PMCID: PMC5909172 DOI: 10.3389/fpsyg.2018.00374

Abstract

Category learning performance is influenced by both the nature of the category's structure and the way category features are processed during learning. Shepard (1964, 1987) showed that stimuli can have structures with features that are statistically uncorrelated (separable) or statistically correlated (integral) within categories. Humans find it much easier to learn categories with separable features, especially when attention to only a subset of relevant features is required, and harder to learn categories with integral features, which require consideration and integration of all the relevant features satisfying the category rule (Garner, 1974). In contrast to humans, a single-hidden-layer backpropagation (BP) neural network has been shown to learn both separable and integral categories equally easily, independent of the category rule (Kruschke, 1993). This "failure" to replicate human category performance appeared to be strong evidence that connectionist networks were incapable of modeling human attentional bias. We tested the presumed limitations of attentional bias in networks in two ways: (1) by having networks learn categories whose exemplars have high feature complexity, in contrast to the low-dimensional stimuli previously used, and (2) by investigating whether a Deep Learning (DL) network, which has demonstrated human-like performance in many different kinds of tasks (language translation, autonomous driving, etc.), would display human-like attentional bias during category learning. We report several interesting results. First, we replicated the failure of BP to differentially process integral and separable category structures when low-dimensional stimuli are used (Garner, 1974; Kruschke, 1993). Second, we show that with the same low-dimensional stimuli, DL, unlike BP but similar to humans, learns separable category structures more quickly than integral ones. Third, we show that even BP can exhibit human-like learning differences between integral and separable category structures when high-dimensional stimuli (face exemplars) are used. We conclude, after visualizing the hidden-unit representations, that DL appears to extend initial learning through feature development, reducing destructive feature competition by incrementally refining feature detectors throughout later layers, until a tipping point (in terms of error) is reached and rapid asymptotic learning follows.
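The Kruschke (1993) baseline discussed in the abstract — a single-hidden-layer BP network learning separable (filtration) and integral (condensation) category structures equally easily on low-dimensional stimuli — can be sketched with a toy replication. The stimulus grid, network size, learning rate, and epoch count below are illustrative assumptions, not the parameters used in the paper:

```python
import numpy as np

# Toy sketch of a Kruschke-style filtration/condensation setup:
# eight stimuli on a 2-D feature grid, with a category boundary along
# one feature (separable/filtration) vs. a diagonal boundary requiring
# both features (integral/condensation).
rng = np.random.default_rng(0)

X = np.array([[1, 2], [1, 3], [2, 1], [2, 4],
              [3, 1], [3, 4], [4, 2], [4, 3]], dtype=float)
X = (X - X.mean(0)) / X.std(0)                  # normalize both features

y_filtration = (X[:, 0] > 0).astype(float)      # attend to one feature only
y_condensation = (X.sum(1) > 0).astype(float)   # integrate both features

def train_bp(y, hidden=8, lr=0.5, epochs=5000):
    """Single-hidden-layer network trained with plain backpropagation."""
    W1 = rng.normal(0, 0.5, (2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, hidden); b2 = 0.0
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1 + b1)                    # hidden activations
        p = sig(h @ W2 + b2)                    # category probability
        d2 = (p - y) / len(y)                   # cross-entropy output gradient
        d1 = np.outer(d2, W2) * h * (1 - h)     # backpropagated hidden error
        W2 -= lr * (h.T @ d2); b2 -= lr * d2.sum()
        W1 -= lr * (X.T @ d1); b1 -= lr * d1.sum(0)
    return p                                    # final network outputs

p_f = train_bp(y_filtration)
p_c = train_bp(y_condensation)
print((np.round(p_f) == y_filtration).all(),
      (np.round(p_c) == y_condensation).all())
```

The shallow network masters both boundary types, which is the "failure" to show a separable-feature advantage that the paper takes as its starting point; reproducing the DL comparison would additionally require a deeper network and the high-dimensional face stimuli.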

Keywords: attentional bias; categorization; condensation; deep learning; filtration; learning theory; neural networks

References

  1. Philos Trans R Soc Lond B Biol Sci. 1998 Aug 29;353(1373):1295-306 - PubMed
  2. Psychol Rev. 1964 Nov;71:491-504 - PubMed
  3. Science. 1987 Sep 11;237(4820):1317-23 - PubMed
  4. Psychol Bull. 1978 Nov;85(6):1256-74 - PubMed
  5. Psychon Bull Rev. 2007 Aug;14(4):560-76 - PubMed
  6. Psychol Rev. 2004 Apr;111(2):309-32 - PubMed
  7. Neural Comput. 2006 Jul;18(7):1527-54 - PubMed
  8. Nature. 2015 May 28;521(7553):436-44 - PubMed
