Neuroimage. 2021 Dec 05;246:118780. doi: 10.1016/j.neuroimage.2021.118780. Epub 2021 Dec 05.

Brain signals of a Surprise-Actor-Critic model: Evidence for multiple learning modules in human decision making.

Vasiliki Liakoni, Marco P Lehmann, Alireza Modirshanechi, Johanni Brea, Antoine Lutti, Wulfram Gerstner, Kerstin Preuschoff

Affiliations

  1. École Polytechnique Fédérale de Lausanne (EPFL), School of Computer and Communication Sciences and School of Life Sciences, Lausanne, Switzerland. Electronic address: [email protected].
  2. École Polytechnique Fédérale de Lausanne (EPFL), School of Computer and Communication Sciences and School of Life Sciences, Lausanne, Switzerland.
  3. Laboratoire de recherche en neuroimagerie (LREN), Department of Clinical Neurosciences, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland.
  4. Geneva Finance Research Institute & Interfaculty Center for Affective Sciences, University of Geneva, Geneva, Switzerland.

PMID: 34875383 DOI: 10.1016/j.neuroimage.2021.118780

Abstract

Learning how to reach a reward over long series of actions is a remarkable capability of humans, and potentially guided by multiple parallel learning modules. Current brain imaging of learning modules is limited by (i) simple experimental paradigms, (ii) entanglement of brain signals of different learning modules, and (iii) a limited number of computational models considered as candidates for explaining behavior. Here, we address these three limitations and (i) introduce a complex sequential decision making task with surprising events that allows us to (ii) dissociate correlates of reward prediction errors from those of surprise in functional magnetic resonance imaging (fMRI); and (iii) we test behavior against a large repertoire of model-free, model-based, and hybrid reinforcement learning algorithms, including a novel surprise-modulated actor-critic algorithm. Surprise, derived from an approximate Bayesian approach for learning the world-model, is extracted in our algorithm from a state prediction error. Surprise is then used to modulate the learning rate of a model-free actor, which itself learns via the reward prediction error from model-free value estimation by the critic. We find that action choices are well explained by pure model-free policy gradient, but reaction times and neural data are not. We identify signatures of both model-free and surprise-based learning signals in blood oxygen level dependent (BOLD) responses, supporting the existence of multiple parallel learning modules in the brain. Our results extend previous fMRI findings to a multi-step setting and emphasize the role of policy gradient and surprise signalling in human learning.
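The surprise-modulated actor-critic described in the abstract can be sketched as follows. This is a minimal tabular illustration under stated assumptions: the variable names, the softmax policy, the Dirichlet-style world-model update, and the exact surprise-to-learning-rate mapping are illustrative choices, not the paper's published equations.

```python
import numpy as np

n_states, n_actions = 5, 2

theta = np.zeros((n_states, n_actions))   # actor: policy parameters
V = np.zeros(n_states)                    # critic: state values
# Approximate world model: P(s' | s, a), initialized uniform.
T = np.full((n_states, n_actions, n_states), 1.0 / n_states)

alpha_critic, alpha_actor, gamma, eta = 0.1, 0.1, 0.95, 0.05

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def step(s, a, r, s_next):
    """One learning update from an observed transition (s, a, r, s_next)."""
    # Surprise as a state prediction error from the learned world model:
    # the less probable the observed transition, the larger the surprise.
    surprise = -np.log(T[s, a, s_next])

    # Nudge the approximate world model toward the observed transition.
    T[s, a] += eta * (np.eye(n_states)[s_next] - T[s, a])

    # Critic: reward prediction error (TD error) and value update.
    delta = r + gamma * V[s_next] - V[s]
    V[s] += alpha_critic * delta

    # Actor: policy-gradient update; the learning rate is scaled up
    # by surprise (one possible modulation, chosen for illustration).
    lr = alpha_actor * (1.0 + surprise)
    pi = softmax(theta[s])
    grad = np.eye(n_actions)[a] - pi      # grad of log softmax policy at a
    theta[s] += lr * delta * grad
    return surprise, delta
```

With a uniform initial world model, the first observed transition yields surprise `-log(1/n_states)`, so early, unexpected transitions drive faster policy updates than well-predicted ones.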

Copyright © 2021. Published by Elsevier Inc.

Keywords: Behavior; Human learning; Reinforcement learning; Sequential decision making; Surprise; fMRI
