
Mathematical Models of Decision Making and Learning
Makoto Ito 1, Kenji Doya 1,2
1 Okinawa Institute of Science and Technology, Neural Computation Unit
2 ATR Computational Neuroscience Laboratories
Keywords: decision making, reinforcement learning, Q-learning, action value, basal ganglia
pp. 791-798
Published Date 2008/7/1
DOI https://doi.org/10.11477/mf.1416100312
Abstract

Computational models of reinforcement learning have recently been applied to the analysis of brain imaging and neural recording data to identify neural correlates of specific processes of decision making, such as the valuation of action candidates and the parameters of value learning. For such model-based analysis, however, selecting an appropriate model is crucial. In this study we analyzed the process of choice learning in rats using stochastic rewards. We show that "Q-learning," a standard reinforcement learning algorithm, does not adequately capture the features of the animals' choice behaviors. We therefore propose a generalized reinforcement learning (GRL) algorithm that incorporates the negative reward effect of reward loss and the forgetting of the values of actions not chosen. Using a Bayesian estimation method for time-varying parameters, we demonstrate that the GRL algorithm can predict an animal's choice behavior as efficiently as the best Markov model. These results suggest the usefulness of the GRL algorithm for model-based analysis of neural processes involved in decision making.
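To illustrate the two extensions named in the abstract, a minimal sketch of a GRL-style value update is given below. The parameter names (alpha1, alpha2, kappa1, kappa2) and their exact roles are assumptions for illustration, not the paper's published equations: the idea is that reward omission is treated as a negative reward (kappa2), and the values of unchosen actions decay toward zero (forgetting rate alpha2), in contrast to standard Q-learning, which leaves unchosen action values untouched and treats no-reward as zero.

```python
def grl_update(q, action, reward, alpha1=0.3, alpha2=0.1, kappa1=1.0, kappa2=0.5):
    """One step of a generalized reinforcement-learning (GRL) value update.

    Hypothetical parameterization: alpha1 is the learning rate for the
    chosen action, alpha2 the forgetting rate for unchosen actions, and
    kappa1/kappa2 weight reward gain vs. reward omission (the 'negative
    reward effect' of losing an expected reward).
    """
    # Reward omission (reward == 0) is experienced as a loss, not as zero.
    r = kappa1 * reward - kappa2 * (1 - reward)
    q = list(q)
    # Update the chosen action value toward the (signed) reward.
    q[action] += alpha1 * (r - q[action])
    # Decay (forget) the values of all actions that were not chosen.
    for a in range(len(q)):
        if a != action:
            q[a] *= (1 - alpha2)
    return q
```

Setting alpha2 = 0 and kappa2 = 0 recovers a plain Q-learning value update for a single-state choice task, so the sketch nests the standard model as a special case.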


Copyright © 2008, Igaku-Shoin Ltd. All rights reserved.

Basic Information

Online ISSN 1344-8129, Print ISSN 1881-6096, Igaku-Shoin
