Different neural correlates of reward expectation and reward expectation error in the putamen and caudate nucleus during stimulus-action-reward association learning.
Abstract
To select appropriate behaviors leading to rewards, the brain needs to learn associations among sensory stimuli, selected behaviors, and rewards. Recent imaging and neural-recording studies have revealed that the dorsal striatum plays an important role in learning such stimulus-action-reward associations. However, the putamen and caudate nucleus are embedded in distinct cortico-striatal loop circuits, predominantly connected to motor-related cerebral cortical areas and frontal association areas, respectively. This difference in their cortical connections suggests that the putamen and caudate nucleus are engaged in different functional aspects of stimulus-action-reward association learning. To determine whether this is the case, we conducted an event-related and computational model-based functional MRI (fMRI) study with a stochastic decision-making task in which a stimulus-action-reward association must be learned. A simple reinforcement learning model not only reproduced the subjects' action selections reasonably well but also allowed us to quantitatively estimate each subject's temporal profiles of stimulus-action-reward association and reward-prediction error during learning trials. These two internal representations were used in the fMRI correlation analysis. The results revealed that neural correlates of the stimulus-action-reward association reside in the putamen, whereas a correlation with reward-prediction error was found largely in the caudate nucleus and ventral striatum. These nonuniform spatiotemporal distributions of neural correlates within the dorsal striatum were maintained consistently at various levels of task difficulty, suggesting a functional difference within the dorsal striatum between the putamen and caudate nucleus during stimulus-action-reward association learning.
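The kind of "simple reinforcement learning model" described above can be sketched as trial-by-trial Q-learning with softmax action selection. This is a minimal illustration, not the paper's actual model: the function name, parameter values (learning rate `alpha`, inverse temperature `beta`), task structure, and trial count are all assumptions. The two quantities the study correlated with BOLD signal correspond here to the pre-update value `Q(s, a)` (reward expectation) and `delta` (reward-prediction error).

```python
import math
import random

def simulate_q_learning(rewards, n_stimuli=2, n_actions=2, n_trials=200,
                        alpha=0.3, beta=3.0, seed=0):
    """Simulate stimulus-action-reward learning with Q-learning.

    rewards: dict mapping (stimulus, action) -> reward probability.
    Returns the learned Q-table plus per-trial traces of the reward
    expectation Q(s, a) and the reward-prediction error delta,
    the two regressors a model-based fMRI analysis would use.
    (Illustrative sketch only; parameters are not from the paper.)
    """
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_stimuli)]
    q_trace, delta_trace = [], []
    for _ in range(n_trials):
        s = rng.randrange(n_stimuli)          # random stimulus each trial
        # Softmax (Boltzmann) action selection over Q-values
        exps = [math.exp(beta * Q[s][a]) for a in range(n_actions)]
        z = sum(exps)
        u, a, acc = rng.random(), 0, 0.0
        for i, e in enumerate(exps):
            acc += e / z
            if u <= acc:
                a = i
                break
        # Stochastic binary reward
        r = 1.0 if rng.random() < rewards[(s, a)] else 0.0
        delta = r - Q[s][a]                   # reward-prediction error
        q_trace.append(Q[s][a])               # expectation before update
        Q[s][a] += alpha * delta              # Rescorla-Wagner style update
        delta_trace.append(delta)
    return Q, q_trace, delta_trace
```

In a model-based fMRI analysis, the `q_trace` and `delta_trace` time series would be convolved with a hemodynamic response function and entered as parametric regressors, which is how distinct correlates can emerge in the putamen versus the caudate nucleus.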
Similar articles
Heterarchical reinforcement-learning model for integration of multiple cortico-striatal loops: fMRI examination in stimulus-action-reward association learning
The brain's most difficult computation in decision-making learning is searching for essential information related to rewards among vast multimodal inputs and then integrating it into beneficial behaviors. Contextual cues consisting of limbic, cognitive, visual, auditory, somatosensory, and motor signals need to be associated with both rewards and actions by utilizing an internal representation ...
Reward processing in primate orbitofrontal cortex and basal ganglia.
This article reviews and interprets neuronal activities related to the expectation and delivery of reward in the primate orbitofrontal cortex, in comparison with slowly discharging neurons in the striatum (caudate, putamen and ventral striatum, including nucleus accumbens) and midbrain dopamine neurons. Orbitofrontal neurons showed three principal forms of reward-related activity during the per...
Many hats: intratrial and reward level-dependent BOLD activity in the striatum and premotor cortex.
Human functional magnetic resonance imaging (fMRI) studies, as well as lesion, drug, and single-cell recording studies in animals, suggest that the striatum plays a key role in associating sensory events with rewarding actions, both by facilitating reward processing and prediction (i.e., reinforcement learning) and by biasing and later updating action selection. Previous human neuroimaging rese...
Panel Session: What Does Dopamine Say? Clues from Computational Modeling
Background: Reinforcement learning models now play a central role in modern attempts to understand how the brain categorizes and values events traditionally framed by psychology as rewards and punishments. These models provide a way to design and interpret reward expectancy experiments in humans across a wide range of rewarding dimensions. They also provide a connection to computational mode...
Dose dependent dopaminergic modulation of reward-based learning in Parkinson's disease.
Learning to select optimal behavior in new and uncertain situations is a crucial aspect of living and requires the ability to quickly associate stimuli with actions that lead to rewarding outcomes. Mathematical models of reinforcement-based learning to select rewarding actions distinguish between (1) the formation of stimulus-action-reward associations, such that, at the instant a specific stim...
Journal: Journal of Neurophysiology
Volume: 95, Issue: 2
Pages: -
Published: 2006