
Basal Ganglia. Author manuscript; available in PMC Sep 1, 2013.
Published in final edited form as:
Basal Ganglia. Sep 1, 2012; 2(3): 131–138.

Published online Jul 28, 2012. doi: 10.1016/j.baga.2012.06.005

PMCID: PMC3496267
NIHMSID: NIHMS398805

Cannabinoids and value-based decision making: implications for neurodegenerative disorders

Abstract

In recent years, disturbances in cognitive function have been increasingly recognized as important symptomatic phenomena in neurodegenerative diseases, including Parkinson’s Disease (PD). Value-based decision making in particular is an important executive cognitive function that is not only impaired in patients with PD, but also shares neural substrates with PD in basal ganglia structures and the dopamine system. Interestingly, the endogenous cannabinoid system modulates dopamine function and, consequently, value-based decision making. This review provides an overview of the interdisciplinary research that has shaped our understanding of value-based decision making and the role of dopamine, particularly in the context of reinforcement learning theories, as well as of recent animal and human studies demonstrating the modulatory role of cannabinoid receptor activation by exogenous agonists or naturally occurring ligands. The implications of this research for the symptomatology of and potential treatments for PD are also discussed.

Keywords: cannabinoid, cognition, decision making, dopamine, Parkinson’s disease, reinforcement learning

Introduction

Disturbances in executive cognitive functions, including decision making, are prominent clinical features in various psychiatric disorders, such as attention-deficit hyperactivity disorder, mood and anxiety disorders, schizophrenia and substance use disorders [1]. In recent years, the notion that cognitive disturbances and impairments in decision making are important symptomatic phenomena in neurodegenerative disorders such as Parkinson’s disease (PD) has gained increasing interest [2–5]. Interestingly, recent evidence suggests that these cognitive impairments might arise in the prediagnostic and early stages of PD [6–8] and are possibly caused by functional loss in the corticostriatal circuitry subserving cognitive functions [9].

In general terms, decision making refers to the selection of appropriate actions from various available options based on cost-benefit evaluations and the subjective values of the outcomes of these actions. As such, decision making is a complex mental construct composed of several cognitive functions that should theoretically lead to adaptive behavioral outcomes or maintain psychological or physiological homeostasis [10]. These functions, and goal-directed action selection in decision making, are driven by various neurotransmitter systems in the brain and have been associated in particular with dopamine function [11,12]. Over the last decades, experimental data on decision making have accumulated rapidly, partly owing to the development and availability of laboratory tasks assessing aspects of real-life decision making in humans and in preclinical animal models [13]. Altogether, these studies have greatly increased our understanding of the scientific basis and neurobiology of decision making, not least because the topic is studied from multiple disciplines, including economics, psychology, neuroscience and computer science [14].

In addition to dopamine modulation of decision making, there is accumulating evidence of cannabinoid involvement in executive cognitive functions, including decision making [15,16]. The endocannabinoid neurotransmitter system consists of at least two receptors, cannabinoid CB1 and CB2, of which primarily the former is highly expressed in the central nervous system. These Gi/o-protein coupled receptors, the vast majority of which are expressed presynaptically, are activated by endogenous signaling molecules such as anandamide (AEA) and 2-arachidonoylglycerol (2-AG) and, in response, directly modulate the release probability of several neurotransmitters, including GABA and glutamate, and indirectly that of dopamine [17,18]. Moreover, cannabinoid CB1 receptors are densely expressed in the brain, including frontal cortical regions and several nuclei of the basal ganglia such as the striatum, globus pallidus and substantia nigra [19–21].

Interestingly, although the cannabinoid CB1 receptor antagonist rimonabant has been withdrawn from the market, cannabinoid mechanisms retain large therapeutic potential in several metabolic, psychiatric and neurodegenerative disorders [22,23].

This review aims to provide more insight into this convergence of cannabinoids, dopamine and value-based decision making in the context of neurodegenerative disorders, and PD in particular. To this end, we first provide background on different theories of reinforcement learning as a framework for value-based decision making and briefly discuss the role of dopamine in these processes. Next, we discuss the involvement of the basal ganglia and the importance of the endogenous cannabinoid system and its interactions with the dopaminergic system in decision making. Finally, we review and discuss the available empirical evidence from both clinical and preclinical studies of cannabinoid modulation of value-based decision making.

Theoretical history of reinforcement learning

Reinforcement learning (RL) is a well-supported computational framework for learning values in order to achieve optimal outcomes, which has gained popularity in the study of value-based decision making and its neural mechanisms [24]. The modern rendition of RL has grown from a fairly interdisciplinary history, beginning with animal learning paradigms of psychology and evolving through mathematical formulations and artificial learning research [25]. Both Bush and Mosteller’s first formal mathematical model [26] and Rescorla and Wagner’s subsequent version [27] postulated that learning only occurs at unexpected events [25,28]. Additionally, in the Rescorla-Wagner model, predictions for a given trial represent the sum of predictions from individual stimuli [25]. Despite its substantial explanatory power, however, the Rescorla-Wagner model could not account for either second-order conditioning, of which a common example is the conditioned value of money to humans, or temporal relationships between stimuli within a trial [25].
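As a point of reference, the Rescorla-Wagner rule can be written compactly as below; this is the standard textbook formulation rather than notation taken from the original papers, and it makes the two properties just described explicit: updates are driven only by surprise, and the prediction on a trial is summed over the stimuli present.

```latex
% Rescorla-Wagner update of the associative strength V_i of stimulus i on a trial,
% with stimulus salience \alpha_i, learning-rate parameter \beta, and asymptote \lambda:
\Delta V_i = \alpha_i \beta \Big( \lambda - \sum_{j \in \text{stimuli present}} V_j \Big)
```

When the summed prediction already equals \(\lambda\), the error term is zero and no further learning occurs; and because the rule operates once per trial, it has no way to represent temporal relationships between stimuli within a trial.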

The solution to these limitations came from two researchers working on artificial intelligence, who extended the Rescorla-Wagner model such that the decision-making agent seeks to estimate the expected sum of all future rewards, rather than just the one in the immediate future [24,25]. These temporal-difference (TD) models are much more focused on goal-directed learning than their predecessors, and they redefine the problem from one of learning values from past events to one of predicting the values of future events [24]. This distinction is important for thinking about the stimuli from which RL models learn: whereas Bush-Mosteller and Rescorla-Wagner models suggest learning from a weighted average of past rewards and the immediately experienced reward, TD models learn from information that violates the agent’s expectations about the sum of all future rewards [28]. For this theorized learning process to occur, the TD model necessitates a neural mechanism for recording prediction errors.
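In temporal-difference terms, the learned quantity and its error signal take the following standard form; again, the notation is the conventional one from the RL literature [24], not the authors' own.

```latex
% The value of state s_t is the expected, discounted sum of future rewards:
V(s_t) = \mathbb{E}\Big[ \textstyle\sum_{k=0}^{\infty} \gamma^{k}\, r_{t+k+1} \Big]
% Learning is driven by the temporal-difference prediction error,
% which compares successive value estimates:
\delta_t = r_{t+1} + \gamma\, V(s_{t+1}) - V(s_t), \qquad V(s_t) \leftarrow V(s_t) + \alpha\, \delta_t
```

The error \(\delta_t\) is nonzero whenever incoming information violates the current estimate of total future reward, and it is this quantity that the next section links to phasic dopamine activity.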

Dopamine and reinforcement learning

Neural data and computational models have converged on the midbrain dopamine system as encoding this key signal [29,30]. A substantial amount of research has implicated the dopamine system as a key player in value-based decision making, especially in instances of positive reinforcement [31]. Specifically, evidence has accumulated under the framework of the reward prediction error (RPE) hypothesis, which posits that dopamine neuronal activity encodes the difference between expected and received rewards [29,30]. Within TD models of RL, the RPE embodies an essential mechanism for the proposed trial-and-error learning process [24,32,33]. The seminal work of Schultz and colleagues illustrated this principle through recordings from the midbrain dopamine neurons of awake, behaving monkeys [30,34]. These recordings showed that when a visual or auditory stimulus (conditioned stimulus) precedes a fruit or juice reward (unconditioned stimulus), the dopamine neurons increase their phasic burst firing upon receipt of the reward. However, this response occurs only during the learning phase. After the animal learns to predict a juice reward from the visual or auditory cue, the increase in dopaminergic burst firing is seen at the unexpected cue and not at the subsequently predicted reward. If the predicted reward is not delivered, a negative prediction error has occurred, and recordings show a corresponding decrease, or pause, in the rate of dopaminergic firing [30,34,35]. These findings illustrate that dopamine neurons respond to stimuli that predict rewards rather than to the rewards themselves. Moreover, this pattern of dopaminergic activity specifically conforms to the RPE predicted by TD algorithms [29,30,36,37]. Further evidence has shown that dopaminergic responses to conditioned stimuli are proportional to the magnitude and probability of predicted rewards [38–40], as well as to rewards delivered after a delay [41,42]. Importantly, functional magnetic resonance imaging (fMRI) studies in human subjects have supported the biological and behavioral applicability of RL and TD models [e.g. 43–45].
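A minimal simulation can illustrate how a TD error of this kind migrates from the time of reward to the time of the cue over training, qualitatively reproducing the Schultz-style recordings described above. The sketch below is purely illustrative: the trial structure, parameters and the assumption that the pre-cue state keeps a value of zero (because cue timing is unpredictable) are ours, not taken from the reviewed experiments.

```python
"""Toy tabular TD(0) model of a cue -> reward trial (illustrative only)."""
import numpy as np

n_steps, alpha, gamma = 5, 0.2, 1.0   # steps from cue to reward, learning rate, discount
V = np.zeros(n_steps + 1)             # V[0] = cue state, ..., V[n_steps] = terminal state (0)
V_PRECUE = 0.0                        # cue onset is unpredictable, so the pre-cue value stays ~0

def run_trial(values, learn=True):
    """Run one trial; return the TD errors at cue onset and at reward delivery."""
    delta_cue = gamma * values[0] - V_PRECUE          # error generated when the cue appears
    delta_reward = 0.0
    for t in range(n_steps):
        r = 1.0 if t == n_steps - 1 else 0.0          # reward arrives only on the final step
        delta = r + gamma * values[t + 1] - values[t] # TD prediction error
        if learn:
            values[t] += alpha * delta                # value update
        if t == n_steps - 1:
            delta_reward = delta
    return delta_cue, delta_reward

before = run_trial(V, learn=False)    # naive values: error sits at the reward
for _ in range(300):                  # train
    run_trial(V)
after = run_trial(V, learn=False)     # trained values: error has moved to the cue

print("TD error at cue    before/after learning: %.2f / %.2f" % (before[0], after[0]))
print("TD error at reward before/after learning: %.2f / %.2f" % (before[1], after[1]))
```

Before learning, the error (and, by hypothesis, the phasic dopamine burst) occurs at reward delivery; after learning it occurs at the unpredicted cue, and omitting the reward on a trained trial would produce a negative error at the expected reward time.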

Limitations of the dopamine RPE hypothesis

Despite the accumulation of support for the dopamine RPE hypothesis, there are also noteworthy limitations which include contradictory data [46,47], as well as overarching problems concerning, for example, the treatment of Pavlovian vs. instrumental learning paradigms, limitations of the simple behavioral tasks currently in use, and facets of dopamine function that extend beyond its short-latency phasic firing [46]. Within the broad RL framework itself, the role and expression of a dopaminergic RPE are couched in subtly varying theories of value learning and action selection [32,33,48–50]. Additionally, there are several alternate theories that posit non-RPE explanations for dopamine function, with varying degrees of empirical support [31]. Such alternatives include the salience [51], incentive salience [52,53] and agency [54] hypotheses, which propose dopamine responses to salient stimuli, separate systems for “wanted” compared to “liked” stimuli, or sensory prediction errors that reinforce agency and novel actions, respectively. These hypotheses of dopamine function have proven difficult to disentangle, perhaps due in large part to a more general problem in the experimental treatment of latent variables, such as “rewards,” “predictions,” or “salience,” which are not directly observable and must therefore be related to an observable variable [55].

The axiomatic approach and its advantages

Caplin and Dean proposed an axiomatic approach to clarify the role of dopamine in decision making, and more specifically in RL [55]. Borrowed from economics, this standard methodology encapsulates core theoretical tenets in compact mathematical statements [56]. These axioms then serve as testable predictions: the criteria to which empirical data must conform in order to admit the theory in question. Caplin and Dean applied this method to the RPE hypothesis of dopamine function [28,55–57].

Experiments conducted under the axiomatic framework addressed the major problems attributed to traditional regression-based tests [55,57]. Importantly, the axioms nonparametrically define latent variables in terms of the variable of interest, namely the dopaminergic response, in order to avoid jointly testing auxiliary assumptions concerning the operationalization of latent variables and to allow categorical rejections of the entire class of RPE models if the data violate any given axiom [57]. Additionally, the strict mathematical formalization of relevant variables facilitates the differentiation between alternate explanations of dopamine activity [55,57]. Moreover, the axiomatic approach allows for hierarchical testing so that axiomatic representations can also be made for more refined sub-hypotheses. Finally, if the data violate one or more axioms, these axioms can become focal points for precise revisions to the model, creating a close link between theory and data [55].
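For concreteness, the three core RPE axioms can be paraphrased as follows for a dopaminergic response function δ(z, p), read as the response to receiving prize z from lottery p. This is our compact restatement of the criteria described in [55–57], not the authors' exact notation, and it omits technical conditions.

```latex
% A1, coherent prize dominance: the ordering of prizes is the same under every lottery
\delta(z, p) > \delta(z', p) \;\Longleftrightarrow\; \delta(z, q) > \delta(z', q)
% A2, coherent lottery dominance: the ordering of lotteries is the same for every prize
\delta(z, p) > \delta(z, q) \;\Longleftrightarrow\; \delta(z', p) > \delta(z', q)
% A3, no surprise equivalence: fully anticipated prizes evoke identical responses
\delta(z, e_z) = \delta(z', e_{z'}), \quad \text{with } e_z \text{ the degenerate lottery that delivers } z \text{ for certain}
```

A data series violating any one of these relations cannot be represented as a reward prediction error (a signal increasing in the realized prize and decreasing in the expectation), which is what licenses the categorical rejections mentioned above.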

Thus far, experiments conducted within this axiomatic framework have supported an RPE model of dopamine function in various areas of the brain. The first formal axiomatic test of a dopamine RPE found such a signal in the activity of the nucleus accumbens, a principal target of midbrain dopamine neurons discussed below [58]. Additionally, fMRI scans of the caudate, putamen, amygdala, medial prefrontal cortex, and anterior cingulate cortex showed that activity in these regions also satisfied the axiomatic RPE model [59]. Meanwhile, the anterior insula was found to be in strong violation of the RPE axioms and seems to encode salience instead [58,59]. These parallel findings illustrate a common theme in theories of dopamine function: dopamine need not be restricted to serving only one function, nor need a particular function be served only by dopamine [28,31]. It should be noted that while the regions imaged have been identified as receiving direct dopaminergic projections, the blood oxygen level dependent (BOLD) fMRI signal does not reflect dopamine activity alone. Furthermore, BOLD signals in the midbrain dopamine structures did not provide evidence for an RPE model, although the researchers note that this finding may be partly due to the difficulty of imaging these structures [58]. Nevertheless, these experiments provide proof of method for the axiomatic approach. Furthermore, the axiomatic approach can be applied to any data series, including BOLD or electrophysiological recordings, such that future studies can effectively build upon these initial findings [57].

In summary, Caplin and Dean’s axiomatic approach to the reward prediction error hypothesis addressed several central criticisms of the RPE account. The successful use of this axiomatic model also illustrates the advantages of a neuroeconomic approach and, more generally, the increased power that can be leveraged through cross-disciplinary interaction [14,60]. The development of RL theories similarly exemplifies the benefits of interdisciplinary cooperation in advancing the study of decision making. RL theories and the axiomatic approach also share another characteristic: both investigational frameworks exhibit a close interplay between theory and empirical evidence, particularly in demonstrating the role of dopamine in value-based decision making. In RL theories, the convergence of computational models and neural data enhanced the study and understanding of RL and helped identify the dopamine system as encoding an RPE in accordance with TD models [29,30,48]. Additionally, while axiomatic methods have been applied most extensively to the dopamine RPE model, the advantages of this approach can be extended more broadly to different components of decision making as well as to different neural systems [57].

Striatal involvement in value-based decision making

In addition to the well-established involvement of prefrontal cortical regions in decision making [9,61], rodent studies have provided a vast amount of evidence supporting the pivotal role of the ventral striatum in decision-making processes involving cost-benefit assessments. For example, excitotoxic lesions of the nucleus accumbens impair effort-based and delay-based decision making, as well as decision making under risk, as has been excellently reviewed elsewhere [13]. On the other hand, lesioning the dorsal part of the striatum does not seem to affect value-based decision making in rats [62].

Neuroimaging studies in healthy volunteers also strongly suggest that the ventral striatum represents an important component of the decision-making circuit. More specifically, the subjective value of delayed rewards in intertemporal choice paradigms is represented in the nucleus accumbens [e.g. 63–68]. In one of these studies, however, evaluations related to effort were found not to require ventral striatal activation [68]. Nevertheless, task-related activity of the ventral striatum has also been observed in decision making under risk [69] and uncertainty [70].

Thus, based on BOLD studies, the role of the basal ganglia in value-based decision making seems largely restricted to the ventral striatum/nucleus accumbens. In this regard, a recent primate study indicates that the caudate nucleus might also be important for cost-benefit analyses. Using single-neuron recordings in rhesus monkeys, Cai and colleagues revealed that neurons in both the ventral and the dorsal striatum encode reward value during an intertemporal choice task [71]. Taken together, currently available data primarily highlight a pivotal role of the ventral striatum in the corticostriatal circuitry subserving value-based decision making.

Cannabinoids have a modulatory role on dopamine systems in a manner that is relevant to value-based decision making

As pointed out previously, an accumulating body of evidence suggests that dopamine plays an integral role in value-based decision making [11,12]. While the precise behavioral outcome resulting from dopamine release likely varies depending on the pattern of dopaminergic neural activity and the postsynaptic target [13,72], subsecond bursts of mesolimbic dopamine release in the core region of the nucleus accumbens are theorized to modulate cost-benefit assessments by carrying information concerning reward value [73]. When animals are required to make value-based decisions using predictive environmental information (i.e., cues), for example, the concentration of subsecond dopamine release increases as a function of the expected reward magnitude [40,74–76]. These cue-evoked dopamine release events are sufficient in concentration to occupy low-affinity dopamine D1 receptors within the nucleus accumbens [77,78] and, through subsequent modulatory actions, are thought to strengthen reward seeking in a manner that results in the procurement of larger rewards [79–81].

Cannabinoid CB1 receptor agonists modulate subsecond dopamine release by disinhibiting midbrain dopamine neurons. Both the primary psychoactive component of Cannabis sativa, Δ9-tetrahydrocannabinol (Δ9-THC), and synthetic compounds that exhibit a high affinity for the cannabinoid CB1 receptor (e.g., WIN 55,212-2) increase subsecond dopamine release events [82,83]. These exogenous cannabinoids are unable to stimulate dopaminergic neural activity directly, however, because cannabinoid CB1 receptors are absent from midbrain dopamine cell bodies [84]. Rather, they are thought to increase bursts of dopaminergic neural activity by suppressing GABA release and thereby indirectly disinhibiting dopamine neurons [85]. In support of this theory, applying cannabinoid CB1 receptor agonists to ventral tegmental area (VTA) brain slices decreases GABAergic inhibitory post-synaptic currents in a GABAA receptor-dependent manner [86], while the expected increase in dopaminergic neural activity is blocked by pretreatment with GABAA receptor antagonists [87].

The finding that exogenously administered cannabinoid CB1 receptor agonists modulate dopamine signaling related to value-based decision making implies that the endogenous cannabinoid system might also contribute. 2-AG, an endogenous cannabinoid and full CB1 receptor agonist [88], is an ideal candidate to modulate subsecond dopamine release during value-based decision making. The synthetic enzymes required to generate 2-arachidonoylglycerol (e.g., diacylglycerol lipase-α (DGL-α)) [89,90] are abundantly expressed in midbrain dopamine neurons [91] and are activated exclusively during periods of high neural activity [92], as occurs during cue-evoked dopamine signaling. Based on what is found in other brain regions, we speculate that when dopamine neurons fire in high-frequency bursts (>20 Hz), thereby generating subsecond surges in dopamine concentration in the nucleus accumbens [93], intracellular Ca2+ increases within the dopamine cell bodies and leads to the on-demand synthesis of 2-AG via activation of DGL-α [90,92,94]. Once synthesized, 2-AG retrogradely activates presynaptic cannabinoid CB1 receptors [95], thus suppressing GABA-mediated inhibitory post-synaptic currents, a process that could theoretically lead to depolarization-induced suppression of inhibition [95]. This conceptualization of how 2-AG modulates dopamine neural activity is consistent with the growing consensus that 2-AG is the primary endogenous cannabinoid involved in regulating synaptic plasticity [89,90].

Augmenting 2-AG concentrations increases the motivation to procure reward, strengthens reward seeking and facilitates cue-evoked dopamine signaling. Motivation to obtain food reward, as assessed using a progressive ratio schedule, is enhanced either by systemically treating animals with 2-AG [96] or by reducing its enzymatic degradation using monoacylglycerol lipase inhibitors (e.g., JZL184) [97]. Likewise, increasing 2-AG levels in the brain energizes responding for reward, as assessed by a decrease in response latency, when reward delivery is predicted by the presentation of a conditioned stimulus [97]. This 2-AG-induced facilitation of reward seeking is accompanied by greater cue-evoked dopamine release events detected in the nucleus accumbens [97]. Importantly, increasing the 2-AG concentration in the VTA alone is sufficient to enhance cue-evoked dopamine signaling and reward seeking [97], thus supporting the theory that 2-AG is critically involved in regulating dopamine signaling within local microcircuits in the midbrain during reward-directed behavior (Figure 1).

Figure 1

A theoretical ventral tegmental area microcircuit during value-based decision making. After encountering a cue predicting a large reward, conditioned glutamate release occurs in the ventral tegmental area (1), thus resulting in Ca2+ influx into the dopamine […]
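To summarize the proposed microcircuit in Figure 1 in algorithmic form, the toy rate model below steps through the chain of events described above: cue-evoked glutamatergic drive, burst-like dopamine activity, Ca2+-dependent on-demand 2-AG synthesis, retrograde CB1-mediated suppression of GABA release, and the resulting disinhibition. Every variable, gain and time constant is invented for illustration; this is a conceptual sketch of the hypothesis, not a model taken from the source or fitted to data.

```python
"""Toy rate model of the proposed VTA 2-AG disinhibition loop (illustrative only)."""

T = 200  # time steps; the cue-evoked glutamatergic drive is on between steps 50 and 150

def run(cb1_loop=True):
    """Return the peak cue-evoked dopamine activity with the 2-AG/CB1 loop on or off."""
    gaba, two_ag, peak_da = 1.0, 0.0, 0.0
    for t in range(T):
        drive = 1.5 if 50 <= t < 150 else 0.0   # conditioned glutamate release after the cue
        da = max(0.0, 0.2 + drive - gaba)       # dopamine activity = tonic + cue drive - GABA inhibition
        if cb1_loop:
            # burst-like activity -> Ca2+ influx -> DGL-alpha -> on-demand 2-AG synthesis,
            # with slow enzymatic degradation of 2-AG
            two_ag += 0.05 * max(0.0, da - 0.5) - 0.02 * two_ag
            # retrograde 2-AG acting at presynaptic CB1 receptors suppresses GABA release
            gaba = 1.0 / (1.0 + two_ag)
        peak_da = max(peak_da, da)
    return peak_da

print("peak cue-evoked dopamine activity, 2-AG/CB1 loop on : %.2f" % run(True))
print("peak cue-evoked dopamine activity, 2-AG/CB1 loop off: %.2f" % run(False))
```

With the loop enabled, suppression of GABA release amplifies the cue-evoked dopamine response, mirroring the enhanced cue-evoked dopamine signaling reported when 2-AG levels are augmented [97].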

Empirical evidence for cannabinoid receptor modulation of value-based decision making

Consistent with findings from rodent studies, the human brain contains high densities of the cannabinoid CB1 receptor in frontocortical and striatal regions [98]. Accordingly, accumulating evidence from human neuroimaging studies employing both fMRI and positron emission tomography (PET) approaches indicates that marijuana and THC modulate the activation of prefrontal cortical and subcortical brain regions subserving dopamine function and decision-making processes [99]. Furthermore, and relevant to value-based decision making as outlined earlier, THC induces release of dopamine in the human striatum [100], matching findings in laboratory animals [82,83].

Although the effects of cannabinoids have been well documented for a variety of executive cognitive functions, including attentional processes, time estimation and working memory [15,16], to date relatively few studies have focused on cannabinoid effects on decision making in humans under laboratory settings. The value of delayed or uncertain rewards, as assessed in a delay discounting and a probability discounting task, was not affected by acute challenges with THC in humans [101]. In these decision-making tasks the subjective value of the reward was altered either by imposing hypothetical delays on the availability of the reward (delay discounting) or by manipulating the likelihood and predictability of reward (probability discounting). These findings are paralleled by preclinical data demonstrating that the synthetic cannabinoid CB1 receptor agonist WIN55,212-2 does not alter delay discounting in rats [102]. Furthermore, challenges with various cannabinoid CB1 receptor antagonists (SR141716A and O-2050) do not modulate the value of delayed reward in rats, suggesting that endogenous cannabinoid tone is not critically involved in this form of delay-based decision making [102,103]. In contrast to the effects of THC in humans, THC alters the value of delayed rewards in rats and shifts preference towards the more self-controlled choice [103]. The observation that SR141716A fully reversed the effects of THC indicates a cannabinoid CB1 receptor-mediated mechanism in promoting diminished delay discounting.
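The discounting tasks referred to here are commonly analyzed with single-parameter hyperbolic functions; the forms below are the standard ones from the behavioral literature and are shown only for orientation, since the source does not specify the equations used.

```latex
% Delay discounting: subjective value of amount A delivered after delay D,
% with k the individual discounting parameter
V = \frac{A}{1 + kD}
% Probability discounting: subjective value of amount A delivered with probability p,
% expressed via the odds against winning \Theta = (1 - p)/p and parameter h
V = \frac{A}{1 + h\Theta}
```

Steeper discounting corresponds to larger k or h; the question in the studies above is whether cannabinoid challenges shift these parameters.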

Interestingly, sensitivity to reinforcement in humans can be altered by challenges with THC. In a concurrent random interval procedure, in which one response option led to a fixed monetary gain and the other to a decreasing monetary gain, THC promoted preference for the latter, less beneficial, choice in subjects occasionally using marijuana [104]. Extending these findings, THC also induced risky decision making in occasional marijuana users in a task where subjects chose between a non-risky option (small monetary gain, probability of 1.0) and a risky option (larger monetary gain and monetary losses, probability 0.5) with zero expected value [105]. Thus, under conditions of uncertainty about the likelihood of punishment, activation of cannabinoid CB1 receptors influences the sensitivity to reinforcement as well as to punishment.

These findings have been further substantiated by several recent studies implementing neurocognitive risk-based decision-making tasks, such as the Iowa Gambling Task and related gambling tasks, in healthy volunteers and marijuana users [106,107]. Briefly, in the Iowa Gambling Task, originally developed by Bechara and coworkers [108], subjects make cost-benefit assessments based on their decisions and draw cards from one of four decks to obtain monetary reward. The expected value of cards drawn from two “risky” decks is negative and will lead to a net loss of money as a result of high gains and even higher losses, whereas the expected value of drawing cards from the other two “safe” decks is positive and will lead to monetary reward. Heavy marijuana use has been associated with an increased preference for risky decisions leading to monetary loss [109], and a positive correlation has been reported between the magnitude of use and risky decision making [107], although comparable effects of THC on decision making are not consistently observed in frequent marijuana users [110]. In line with the former findings, in a related gambling task, THC challenges in healthy volunteers increased the choice of decisions with a zero expected value and altered aspects of decision processing, for instance through reduced attention towards losses and faster reaction times for gambles with large gains [106]. This has recently been further confirmed using computational models of the Iowa Gambling Task, which showed that heavy cannabis users are indifferent to loss magnitude and perceive both small and large losses as equally minor negative outcomes [111]. Thus, cannabinoid activity modulates human cost-benefit assessments and the motivational processes therein, possibly through its modulatory role on dopamine function.
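To make the zero-expected-value contrast in these tasks concrete, the short sketch below computes expected values for a sure option and a risky option of the kind described; the payoff amounts are hypothetical placeholders, not the values used in the cited studies.

```python
"""Expected-value comparison for a sure vs. a risky option (hypothetical payoffs)."""

def expected_value(outcomes):
    """outcomes: iterable of (payoff, probability) pairs."""
    return sum(payoff * prob for payoff, prob in outcomes)

sure_option = [(+0.50, 1.0)]                   # small monetary gain, certain
risky_option = [(+2.00, 0.5), (-2.00, 0.5)]    # larger gain or an equally large loss, 50/50

print("expected value, sure option :", expected_value(sure_option))    # 0.5
print("expected value, risky option:", expected_value(risky_option))   # 0.0
```

Choosing the risky option therefore sacrifices expected value for variance, which is the behavioral signature that THC promoted in the occasional users described above [105].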
Neuroimaging studies have further uncovered how marijuana use and THC exposure might impact the neural circuits implicated in gambling behavior and risky decisions, among which the orbitofrontal cortex and dorsolateral prefrontal cortex are key regions [108]. PET studies have demonstrated that, although acute THC exposure is known to increase activity and regional blood flow in these subregions of the prefrontal cortex [112], disturbed decision making in 25-day abstinent heavy marijuana users has been associated with lowered activity in the orbitofrontal cortex and dorsolateral prefrontal cortex [113]. This contrasts with recent PET data showing that in 1-day abstinent heavy marijuana smokers regional blood flow in the ventromedial prefrontal cortex and cerebellum was increased during performance of the Iowa Gambling Task [114].

In keeping with the aforementioned behavioral findings of altered cost-benefit processing induced by THC [106] or observed in heavy marijuana users [111], fMRI approaches indicate accompanying reductions in brain activation in regions such as the anterior cingulate cortex, medial frontal cortex and cerebellum, particularly during loss of reward [115,116]. Notably, despite the high densities of cannabinoid CB1 receptors in basal ganglia structures in the human brain [19], current neuroimaging work suggests that their involvement, and possible differential activation by exogenous cannabinoids, in risky decision making is less pronounced than that of prefrontal cortical regions. In this respect, it would be highly interesting for future studies to employ neuroimaging approaches in, for example, PD patients with a history of marijuana use and to focus on prefrontal cortical activation. Although the pathophysiological mechanisms in PD are predominantly subcortical, alterations in cortico-striato-thalamo-cortical loops [117,118] may give rise to the cognitive disturbances observed in PD. Indeed, this notion is supported by neurocomputational models that strongly predict empirical findings in PD [119–121].

Concluding remarks

This review aimed at 1) providing a background in reinforcement learning as a framework to increase our understanding of different components of value-based decision making and 2) highlighting the importance of cannabinoid signaling that, via its modulatory actions on the dopaminergic system, modulates value-based decision making. This topic is gaining increasing interest, particularly in view of neurodegenerative disorders such as PD. First, there is now accumulating evidence that executive cognitive disturbances, including disturbances in value-based decision making, are prominent features of the disorder even in its early stages [2–5,8]. For example, several studies have demonstrated impaired performance on gambling tasks such as the Iowa Gambling Task in PD [8,122–124], although this finding has not been replicated in all studies [125–127]. These observed disturbances in decision making in PD might result from the ongoing neurodegenerative processes affecting the dopaminergic system, the nuclei of the basal ganglia and their cortical connectivity, which form an essential part of the corticostriatal loops subserving reinforcement learning and decision making [9,128].

Second, in view of the clinical management of PD, targeting the endogenous cannabinoid system might provide new therapeutic opportunities in addition to the existing dopamine-mimetic compounds. Although the latter class of drugs is clinically effective in ameliorating the motor symptoms of the disorder, prescription of dopaminergic medications, and in particular levodopa, in PD might result in serious adverse side-effects such as levodopa-induced dyskinesias [129]. Furthermore, levodopa use has also been linked to the development of pathological gambling and impaired decision making in PD [5,130]. With regard to endogenous cannabinoids and PD [22,131], AEA levels in cerebrospinal fluid are elevated in non-medicated PD patients [132], and cannabinoid CB1 receptor binding is increased in the basal ganglia in post-mortem brains of PD patients [133]. These findings are supported by earlier work in animal models of PD showing enhanced endocannabinoid signaling (AEA, 2-AG) in various nuclei of the basal ganglia, such as the striatum, substantia nigra and globus pallidus, related to disturbances in motor behavior [134,135]. Thus, enhanced activity of the endogenous cannabinoid system is associated with the motor symptomatology of the disorder, which would favor the development of novel cannabinoid CB1 receptor antagonist-based strategies as a therapeutic intervention for PD. Whether this observed enhanced activity of the endogenous cannabinoid system in PD also contributes to the aforementioned decision-making disturbances in the disorder is an interesting question that certainly warrants further investigation. The adverse effects of cannabinoid CB1 receptor agonists such as THC on value-based decision making reviewed here, and the proposed endogenous cannabinoid-dopamine interaction in value-based decision making (Figure 1), may offer an explanation for these phenomena. In view of this notion, second-generation cannabinoid CB1 receptor antagonist-based medications are likely to be of therapeutic potential and may exert a dual mode of action by ameliorating motor disturbances as well as improving impaired decision making in PD. A potential caveat of such a pharmacotherapeutic approach, which certainly requires further investigation, is the observed enhancement of striatal glutamatergic signaling by cannabinoid CB1 receptor antagonism in an experimental model of PD [136], a change that has been associated with the pathophysiology of levodopa-induced dyskinesia in PD [137].

Acknowledgements

Angela M. Lee was supported by a Fulbright Program grant from the U.S. Department of State, which was funded through the Netherland-America Foundation. Joseph F. Cheer and Erik B. Oleson are funded through the National Institute on Drug Abuse (R01DA022340, F32DA032266).

Footnotes

Publisher’s Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

 

References

1. American Psychiatric Association. Diagnostic and statistical manual of mental disorders. 4th ed. Washington, DC: American Psychiatric Association; 1994.
2. Gleichgerrcht E, Ibanez A, Roca M, Torralva T, Manes F. Decision-making cognition in neurodegenerative diseases. Nat Rev Neurol. 2010;6:611–623. [PubMed]
3. Milenkova M, Mohammadi B, Kollewe K, Schrader C, Fellbrich A, Wittfoth M, et al. Intertemporal choice in Parkinson’s disease. Mov Disord. 2011;26:2004–2010. [PubMed]
4. Pagonabarraga J, García-Sánchez C, Llebaria G, Pascual-Sedano B, Gironell A, Kulisevsky J. Controlled study of decision-making and cognitive impairment in Parkinson’s disease. Mov Disord. 2007;22:1430–1435. [PubMed]
5. Voon V, Dalley JW. Impulsive choice – Parkinson disease and dopaminergic therapy. Nat Rev Neurol. 2011;7:541–542. [PubMed]
6. Elgh E, Domellof M, Linder J, Edstrom M, Stenlund H, Forsgren L. Cognitive function in early Parkinson’s disease: a population-based study. Eur J Neurol. 2009;16:1278–1284. [PubMed]
7. Rodriguez-Oroz MC, Jahanshahi M, Krack P, Litvan I, Macias R, Bezard E, et al. Initial clinical manifestations of Parkinson’s disease: features and pathophysiological mechanisms. Lancet Neurol. 2009;8:1128–1139. [PubMed]
8. Ibarretxe-Bilbao N, Junque C, Tolosa E, Marti MJ, Valldeoriola F, Bargallo N, et al. Neuroanatomical correlates of impaired decision-making and facial emotion recognition in early Parkinson’s disease. Eur J Neurosci. 2009;30:1162–1171. [PubMed]
9. Miller EK, Cohen JD. An integrative theory of prefrontal cortex function. Annu Rev Neurosci. 2001;24:167–202. [PubMed]
10. Paulus MP. Decision-making dysfunctions in psychiatry–altered homeostatic processing? Science. 2007;318:602–606. [PubMed]
11. Balleine BW, Delgado MR, Hikosaka O. The role of the dorsal striatum in reward and decision-making. J Neurosci. 2007;27:8161–8165. [PubMed]
12. Rogers RD. The roles of dopamine and serotonin in decision making: evidence from pharmacological experiments in humans. Neuropsychopharmacology. 2011;36:114–132. [PMC free article] [PubMed]
13. Floresco SB, St Onge JR, Ghods-Sharifi S, Winstanley CA. Cortico-limbic-striatal circuits subserving different forms of cost-benefit decision making. Cogn Affect Behav Neurosci. 2008;8:375–389. [PubMed]
14. Rangel A, Camerer C, Montague PR. A framework for studying the neurobiology of value-based decision making. Nature Rev Neurosci. 2008;9:545–556. [PubMed]
15. Pattij T, Wiskerke J, Schoffelmeer ANM. Cannabinoid modulation of executive functions. Eur J Pharmacol. 2008;585:458–463. [PubMed]
16. Solowij N, Michie PT. Cannabis and cognitive dysfunction: parallels with endophenotypes of schizophrenia? J Psychiatry Neurosci. 2007;32:30–52. [PMC free article] [PubMed]
17. Freund TF, Katona I, Piomelli D. Role of endogenous cannabinoids in synaptic signaling. Physiol Rev. 2003;83:1017–1066. [PubMed]
18. Mackie K, Stella N. Cannabinoid receptors and endocannabinoids: evidence for new players. AAPS J. 2006;8:E298–E306. [PMC free article] [PubMed]
19. Glass M, Dragunow M, Faull RL. Cannabinoid receptors in the human brain: a detailed anatomical and quantitative autoradiographic study in the fetal, neonatal and adult human brain. Neuroscience. 1997;77:299–318. [PubMed]
20. Matsuda LA, Bonner TI, Lolait SJ. Localization of cannabinoid receptor mRNA in rat brain. J Comp Neurol. 1993;327:535–550. [PubMed]
21. Tsou K, Brown S, Sañudo-Peña MC, Mackie K, Walker JM. Immunohistochemical distribution of cannabinoid CB1 receptors in the rat central nervous system. Neuroscience. 1998;83:393–411. [PubMed]
22. Bisogno T, Di Marzo V. Cannabinoid receptors and endocannabinoids: role in neuroinflammatory and neurodegenerative disorders. CNS Neurol Disord Drug Targets. 2011;9:564–573. [PubMed]
23. Ward SJ, Raffa RB. Rimonabant redux and strategies to improve the future outlook of CB1 receptor neutral-antagonist/inverse-agonist therapies. Obesity. 2011;19:1325–1334. [PubMed]
24. Sutton RS, Barto AG. Reinforcement learning: an introduction. Cambridge, MA: MIT Press; 1998.
25. Niv Y. Reinforcement learning in the brain. J Math Psychol. 2009;53:139–154.
26. Bush RR, Mosteller F. A mathematical model for simple learning. Psychol Rev. 1951;58(5):313–323. [PubMed]
27. Rescorla R, Wagner A. A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In: Black A, Prokasy W, editors. Classical conditioning II: Current research and theory. New York, NY: Appleton-Century-Crofts; 1972. pp. 64–99.
28. Glimcher PW. Understanding dopamine and reinforcement learning: the dopamine reward prediction error hypothesis. Proc Natl Acad Sci U S A. 2011;108(Suppl 3):15647–15654. [PMC free article] [PubMed]
29. Montague PR, Dayan P, Sejnowski TJ. A framework for mesencephalic dopamine systems based on predictive Hebbian learning. J Neurosci. 1996;16:1936–1947. [PubMed]
30. Schultz W, Dayan P, Montague PR. A neural substrate of prediction and reward. Science. 1997;275:1593–1599. [PubMed]
31. Wise RA. Dopamine, learning and motivation. Nat Rev Neurosci. 2004;5:483–494. [PubMed]
32. Suri RE. TD models of reward predictive responses in dopamine neurons. Neural Netw. 2002;15:523–533. [PubMed]
33. Samson RD, Frank MJ, Fellous JM. Computational models of reinforcement learning: the role of dopamine as a reward signal. Cogn Neurodyn. 2010;4:91–105. [PMC free article] [PubMed]
34. Mirenowicz J, Schultz W. Importance of unpredictability for reward responses in primate dopamine neurons. J Neurophysiol. 1994;72:1024–1027. [PubMed]
35. Ljungberg T, Apicella P, Schultz W. Responses of monkey dopamine neurons during learning of behavioral reactions. J Neurophysiol. 1992;67:145–163. [PubMed]
36. Bayer HM, Glimcher PW. Midbrain dopamine neurons encode a quantitative reward prediction error signal. Neuron. 2005;47:129–141. [PMC free article] [PubMed]
37. Bayer HM, Lau B, Glimcher PW. Statistics of midbrain dopamine neuron spike trains in the awake primate. J Neurophysiol. 2007;98:1428–1439. [PubMed]
38. Fiorillo CD, Tobler PN, Schultz W. Discrete coding of reward probability and uncertainty by dopamine neurons. Science. 2003;299:1898–1902. [PubMed]
39. Niv Y, Duff MO, Dayan P. Dopamine, uncertainty and TD learning. Behav Brain Funct. 2005;1:6. [PMC free article] [PubMed]
40. Tobler PN, Fiorillo CD, Schultz W. Adaptive coding of reward value by dopamine neurons. Science. 2005;307:1642–1645. [PubMed]
41. Roesch MR, Calu DJ, Schoenbaum G. Dopamine neurons encode the better option in rats deciding between differently delayed or sized rewards. Nat Neurosci. 2007;10:1615–1624. [PMC free article] [PubMed]
42. Kobayashi S, Schultz W. Influence of reward delays on responses of dopamine neurons. J Neurosci. 2008;28:7837–7846. [PMC free article] [PubMed]
43. O’Doherty JP, Dayan P, Friston K, Critchley H, Dolan RJ. Temporal difference models and reward-related learning in the human brain. Neuron. 2003;38:329–337. [PubMed]
44. Christopoulos GI, Tobler PN, Bossaerts P, Dolan RJ, Schultz W. Neural correlates of value, risk, and risk aversion contributing to decision making under risk. J Neurosci. 2009;29:12574–12583. [PMC free article] [PubMed]
45. Niv Y, Edlund JA, Dayan P, O’Doherty JP. Neural prediction errors reveal a risk-sensitive reinforcement-learning process in the human brain. J Neurosci. 2012;32:551–562. [PubMed]
46. Dayan P, Niv Y. Reinforcement learning: the good, the bad and the ugly. Curr Opin Neurobiol. 2008;18:185–196. [PubMed]
47. Redgrave P, Coizet V, Reynolds J. Phasic Dopamine Signaling and Basal Ganglia Function. In: Steiner H, Tseng K, editors. Handbook of Basal Ganglia Structure and Function. Burlington, MA: Academic Press; 2010.
48. Daw ND, Doya K. The computational neurobiology of learning and reward. Curr Opin Neurobiol. 2006;16:199–204. [PubMed]
49. O’Reilly RC, Frank MJ, Hazy TE, Watz B. PVLV: the primary value and learned value Pavlovian learning algorithm. Behav Neurosci. 2007;121:31–49. [PubMed]
50. Frank MJ. Computational models of motivated action selection in corticostriatal circuits. Curr Opin Neurobiol. 2011;21:381–386. [PubMed]
51. Horvitz JC. Mesolimbocortical and nigrostriatal dopamine responses to salient non-reward events. Neuroscience. 2000;96:651–656. [PubMed]
52. Berridge KC, Robinson TE. What is the role of dopamine in reward: hedonic impact, reward learning, or incentive salience? Brain Res Brain Res Rev. 1998;28:309–369. [PubMed]
53. McClure SM, Daw ND, Montague PR. A computational substrate for incentive salience. Trends Neurosci. 2003;26:423–428. [PubMed]
54. Redgrave P, Gurney K. The short-latency dopamine signal: a role in discovering novel actions? Nat Rev Neurosci. 2006;7:967–975. [PubMed]
55. Caplin A, Dean M. Axiomatic methods, dopamine and reward prediction error. Curr Opin Neurobiol. 2008;18:197–202. [PubMed]
56. Caplin A, Dean M. Dopamine, reward prediction error, and economics. Quart J Econom. 2008;123:663–701.
57. Caplin A, Dean M. The neuroeconomic theory of learning. Am Econom Rev. 2007;97:148–152.
58. Caplin A, Dean M, Glimcher PW, Rutledge RB. Measuring beliefs and rewards: A neuroeconomic approach. Quart J Econom. 2010;125:923–960.
59. Rutledge RB, Dean M, Caplin A, Glimcher PW. Testing the reward prediction error hypothesis with an axiomatic model. J Neurosci. 2010;30:13525–13536. [PMC free article] [PubMed]
60. Camerer CF. Neuroeconomics: opening the gray box. Neuron. 2008;60:416–419. [PubMed]
61. Dalley JW, Cardinal RN, Robbins TW. Prefrontal executive and cognitive functions in rodents: neural and neurochemical substrates. Neurosci Biobehav Rev. 2004;28:771–784. [PubMed]
62. Braun S, Hauber W. The dorsomedial striatum mediates flexible choice behavior in spatial tasks. Behav Brain Res. 2011;220:288–293. [PubMed]
63. McClure SM, Laibson DI, Loewenstein G, Cohen JD. Separate neural systems value immediate and delayed monetary rewards. Science. 2004;306:503–507. [PubMed]
64. Kable JW, Glimcher PW. The neural correlates of subjective value during intertemporal choice. Nat Neurosci. 2007;10:1625–1633. [PMC free article] [PubMed]
65. Ballard K, Knutson B. Dissociable neural representations of future reward magnitude and delay during temporal discounting. Neuroimage. 2009;45:143–150. [PMC free article] [PubMed]
66. Bickel WK, Pitcock JA, Yi R, Angtuaco EJ. Congruence of BOLD response across intertemporal choice conditions: fictive and real money gains and losses. J Neurosci. 2009;29:8839–8846. [PMC free article] [PubMed]
67. Wittmann M, Leland DS, Paulus MP. Time and decision making: differential contribution of the posterior insular cortex and the striatum during a delay discounting task. Exp Brain Res. 2007;179:643–653. [PubMed]
68. Prevost C, Pessiglione M, Metereau E, Clery-Melin ML, Dreher JC. Separate valuation subsystems for delay and effort decision costs. J Neurosci. 2010;30:14080–14090. [PubMed]
69. Preuschoff K, Bossaerts P, Quartz SR. Neural differentiation of expected reward and risk in human subcortical structures. Neuron. 2006;51:381–390. [PubMed]
70. Dreher JC, Kohn P, Berman KF. Neural coding of distinct statistical properties of reward information in humans. Cereb Cortex. 2006;16:561–573. [PubMed]
71. Cai X, Kim S, Lee D. Heterogeneous coding of temporally discounted values in the dorsal and ventral striatum during intertemporal choice. Neuron. 2011;69:170–182. [PMC free article] [PubMed]
72. Schultz W. Multiple dopamine functions at different time courses. Annu Rev Neurosci. 2007;30:259–288. [PubMed]
73. Phillips PE, Walton ME, Jhou TC. Calculating utility: preclinical evidence for cost-benefit analysis by mesolimbic dopamine. Psychopharmacology (Berl). 2007;191:483–495. [PubMed]
74. Gan JO, Walton ME, Phillips PE. Dissociable cost and benefit encoding of future rewards by mesolimbic dopamine. Nat Neurosci. 2010;13:25–27. [PMC free article] [PubMed]
75. Day JJ, Jones JL, Wightman RM, Carelli RM. Phasic nucleus accumbens dopamine release encodes effort- and delay-related costs. Biol Psychiatry. 2010;68:306–309. [PMC free article] [PubMed]
76. Beyene M, Carelli RM, Wightman RM. Cue-evoked dopamine release in the nucleus accumbens shell tracks reinforcer magnitude during intracranial self-stimulation. Neuroscience. 2010;169:1682–1688. [PMC free article] [PubMed]
77. Cheer JF, Aragona BJ, Heien ML, Seipel AT, Carelli RM, Wightman RM. Coordinated accumbal dopamine release and neural activity drive goal-directed behavior. Neuron. 2007;54:237–244. [PubMed]
78. Dreyer JK, Herrik KF, Berg RW, Hounsgaard JD. Influence of phasic and tonic dopamine release on receptor activation. J Neurosci. 2010;30:14273–14283. [PubMed]
79. Hauber W, Sommer S. Prefrontostriatal circuitry regulates effort-related decision making. Cereb Cortex. 2009;19:2240–2247. [PubMed]
80. Salamone JD. The involvement of nucleus accumbens dopamine in appetitive and aversive motivation. Behav Brain Res. 1994;61:117–133. [PubMed]
81. Sokolowski J, Salamone J. The role of accumbens dopamine in lever pressing and response allocation: effects of 6-OHDA injected into core and dorsomedial shell. Pharmacol Biochem Behav. 1998;59:557–566. [PubMed]
82. Gessa GL, Melis M, Muntoni AL, Diana M. Cannabinoids activate mesolimbic dopamine neurons by an action on cannabinoid CB1 receptors. Eur J Pharmacol. 1998;341:39–44. [PubMed]
83. Cheer JF, Wassum KM, Heien ML, Phillips PE, Wightman RM. Cannabinoids enhance subsecond dopamine release in the nucleus accumbens of awake rats. J Neurosci. 2004;24:4393–4400. [PubMed]
84. Julian MD, Martin AB, Cuellar B, Rodriguez De Fonseca F, Navarro M, Moratalla R, et al. Neuroanatomical relationship between type 1 cannabinoid receptors and dopaminergic systems in the rat basal ganglia. Neuroscience. 2003;119:309–318. [PubMed]
85. Lupica CR, Riegel AC. Endocannabinoid release from midbrain dopamine neurons: a potential substrate for cannabinoid receptor antagonist treatment of addiction. Neuropharmacology. 2005;48:1105–1116. [PubMed]
86. Szabo B, Siemes S, Wallmichrath I. Inhibition of GABAergic neurotransmission in the ventral tegmental area by cannabinoids. Eur J Neurosci. 2002;15:2057–2061. [PubMed]
87. Cheer JF, Marsden CA, Kendall DA, Mason R. Lack of response suppression follows repeated ventral tegmental cannabinoid administration: an in vitro electrophysiological study. Neuroscience. 2000;99:661–667. [PubMed]
88. Savinainen JR, Jarvinen T, Laine K, Laitinen JT. Despite substantial degradation, 2-arachidonoylglycerol is a potent full efficacy agonist mediating CB(1) receptor-dependent G-protein activation in rat cerebellar membranes. Br J Pharmacol. 2001;134:664–672. [PMC free article] [PubMed]
89. Tanimura A, Yamazaki M, Hashimotodani Y, Uchigashima M, Kawata S, Abe M, et al. The endocannabinoid 2-arachidonoylglycerol produced by diacylglycerol lipase alpha mediates retrograde suppression of synaptic transmission. Neuron. 2010;65:320–327. [PubMed]
90. Melis M, Pistis M, Perra S, Muntoni AL, Pillolla G, Gessa GL. Endocannabinoids mediate presynaptic inhibition of glutamatergic transmission in rat ventral tegmental area dopamine neurons through activation of CB1 receptors. J Neurosci. 2004;24:53–62. [PubMed]
91. Matyas F, Urban GM, Watanabe M, Mackie K, Zimmer A, Freund TF, et al. Identification of the sites of 2-arachidonoylglycerol synthesis and action imply retrograde endocannabinoid signaling at both GABAergic and glutamatergic synapses in the ventral tegmental area. Neuropharmacology. 2008;54:95–107. [PMC free article] [PubMed]
92. Wilson RI, Nicoll RA. Endocannabinoid signaling in the brain. Science. 2002;296:678–682. [PubMed]
93. Sombers LA, Beyene M, Carelli RM, Wightman RM. Synaptic overflow of dopamine in the nucleus accumbens arises from neuronal activity in the ventral tegmental area. J Neurosci. 2009;29(6):1735–1742. [PMC free article] [PubMed]
94. Alger BE, Kim J. Supply and demand for endocannabinoids. Trends Neurosci. 2011;34:304–315. [PMC free article] [PubMed]
95. Wilson RI, Nicoll RA. Endogenous cannabinoids mediate retrograde signalling at hippocampal synapses. Nature. 2001;410:588–592. [PubMed]
96. Wakley AA, Rasmussen EB. Effects of cannabinoid drugs on the reinforcing properties of food in gestationally undernourished rats. Pharmacol Biochem Behav. 2009;94:30–36. [PubMed]
97. Oleson EB, Beckert MV, Morra JT, Lansink CS, Cachope R, Abdullah RA, et al. Endocannabinoids shape accumbal encoding of cue-motivated behavior via CB1 receptor activation in the ventral tegmentum. Neuron. 2012;73:360–373. [PMC free article] [PubMed]
98. Burns HD, Van Laere K, Sanabria-Bohorquez S, Hamill TG, Bormans G, Eng WS, et al. [18F]MK-9470, a positron emission tomography (PET) tracer for in vivo human PET brain imaging of the cannabinoid-1 receptor. Proc Natl Acad Sci U S A. 2007;104:9800–9805. [PMC free article] [PubMed]
99. Martin-Santos R, Fagundo AB, Crippa JA, Atakan Z, Bhattacharyya S, Allen P, et al. Neuroimaging in cannabis use: a systematic review of the literature. Psychol Med. 2010;40:383–398. [PubMed]
100. Bossong MG, van Berckel BN, Boellaard R, Zuurman L, Schuit RC, Windhorst AD, et al. Delta 9-tetrahydrocannabinol induces dopamine release in the human striatum. Neuropsychopharmacology. 2009;34:759–766. [PubMed]
101. McDonald J, Schleifer L, Richards JB, de Wit H. Effects of THC on behavioral measures of impulsivity in humans. Neuropsychopharmacology. 2003;28:1356–1365. [PubMed]
102. Pattij T, Janssen MC, Schepers I, Gonzalez-Cuevas G, De Vries TJ, Schoffelmeer AN. Effects of the cannabinoid CB1 receptor antagonist rimonabant on distinct measures of impulsive behavior in rats. Psychopharmacology (Berl). 2007;193:85–96. [PMC free article] [PubMed]
103. Wiskerke J, Stoop N, Schetters D, Schoffelmeer AN, Pattij T. Cannabinoid CB1 receptor activation mediates the opposing effects of amphetamine on impulsive action and impulsive choice. PLoS One. 2011;6:e25856. [PMC free article] [PubMed]
104. Lane SD, Cherek DR. Marijuana effects on sensitivity to reinforcement in humans. Neuropsychopharmacology. 2002;26:520–529. [PubMed]
105. Lane SD, Cherek DR, Tcheremissine OV, Lieving LM, Pietras CJ. Acute marijuana effects on human risk taking. Neuropsychopharmacology. 2005;30:800–809. [PubMed]
106. Rogers RD, Wakeley J, Robson PJ, Bhagwagar Z, Makela P. The effects of low doses of Δ9 tetrahydrocannabinol on reinforcement processing in the risky decision-making of young healthy adults. Neuropsychopharmacology. 2007;32:417–428. [PubMed]
107. Verdejo-Garcia A, Benbrook A, Funderburk F, David P, Cadet JL, Bolla KI. The differential relationship between cocaine use and marijuana use on decision-making performance over repeat testing with the Iowa Gambling Task. Drug Alc Depend. 2007;90:2–11. [PMC free article] [PubMed]
108. Bechara A, Damasio AR, Damasio H, Anderson SW. Insensitivity to future consequences following damage to human prefrontal cortex. Cognition. 1994;50:7–15. [PubMed]
109. Whitlow CT, Liguori A, Livengood LB, Hart SL, Mussat-Whitlow BJ, Lamborn CM, et al. Long-term heavy marijuana users make costly decisions on a gambling task. Drug Alc Depend. 2004;76:107–111. [PubMed]
110. Vadhan NP, Hart CL, Van Gorp WG, Gunderson EW, Haney M, Foltin RW. Acute effects of smoked marijuana on decision making, as assessed by a modified gambling task, in experienced marijuana users. J Clin Exp Neuropsychol. 2007;29:357–364. [PubMed]
111. Fridberg DJ, Queller S, Ahn WY, Kim W, Bishara AJ, Busemeyer JR, et al. Cognitive mechanisms underlying risky decision-making in chronic cannabis users. J Math Psychol. 2010;54:28–38. [PMC free article] [PubMed]
112. Mathew RJ, Wilson WH, Turkington TG, Hawk TC, Coleman RE, DeGrado TR, et al. Time course of tetrahydrocannabinol-induced changes in regional cerebral blood flow measured with positron emission tomography. Psychiatry Res. 2002;116:173–185. [PubMed]
113. Bolla KI, Eldreth DA, Matochik JA, Cadet JL. Neural substrates of faulty decision-making in abstinent marijuana users. Neuroimage. 2005;26:480–492. [PubMed]
114. Vaidya JG, Block RI, O’Leary DS, Ponto LB, Ghoneim MM, Bechara A. Effects of chronic marijuana use on brain activity during monetary decision-making. Neuropsychopharmacology. 2012;37:618–629. [PMC free article] [PubMed]
115. van Hell HH, Jager G, Bossong MG, Brouwer A, Jansma JM, Zuurman L, et al. Involvement of the endocannabinoid system in reward processing in the human brain. Psychopharmacology (Berl). 2012;219:981–990. [PMC free article] [PubMed]
116. Wesley MJ, Hanlon CA, Porrino LJ. Poor decision-making by chronic marijuana users is associated with decreased functional responsiveness to negative consequences. Psychiat Res Neuroimaging. 2011;191:51–59. [PMC free article] [PubMed]
117. Alexander GE, DeLong MR, Strick PL. Parallel organization of functionally segregated circuits linking basal ganglia and cortex. Annu Rev Neurosci. 1986;9:357–381. [PubMed]
118. Postuma RB, Dagher A. Basal ganglia functional connectivity based on a meta-analysis of 126 positron emission tomography and functional magnetic resonance imaging publications. Cereb Cortex. 2006;16:1508–1521. [PubMed]
119. Owen AM, Doyon J, Dagher A, Sadikot A, Evans AC. Abnormal basal ganglia outflow in Parkinson’s disease identified with PET – Implications for higher cortical functions. Brain. 1998;121:949–965. [PubMed]
120. Frank MJ, Seeberger LC, O’Reilly RC. By carrot or by stick: cognitive reinforcement learning in parkinsonism. Science. 2004;306:1940–1943. [PubMed]
121. Wiecki TV, Frank MJ. Neurocomputational models of motor and cognitive deficits in Parkinson’s disease. Prog Brain Res. 2010;183:275–297. [PubMed]
122. Brand M, Labudda K, Kalbe E, Hilker R, Emmans D, Fuchs G, et al. Decision-making impairments in patients with Parkinson’s disease. Behav Neurol. 2004;15:77–85. [PubMed]
123. Mimura M, Oeda R, Kawamura M. Impaired decision-making in Parkinson’s disease. Parkinsonism Relat Disord. 2006;12:169–175. [PubMed]
124. Kobayakawa M, Koyama S, Mimura M, Kawamura M. Decision making in Parkinson’s disease: Analysis of behavioral and physiological patterns in the Iowa gambling task. Mov Disord. 2008;23:547–552. [PubMed]
125. Thiel A, Hilker R, Kessler J, Habedank B, Herholz K, Heiss WD. Activation of basal ganglia loops in idiopathic Parkinson’s disease: a PET study. J Neural Transm. 2003;110:1289–1301. [PubMed]
126. Euteneuer F, Schaefer F, Stuermer R, Boucsein W, Timmermann L, Barbe MT, et al. Dissociation of decision-making under ambiguity and decision-making under risk in patients with Parkinson’s disease: a neuropsychological and psychophysiological study. Neuropsychologia. 2009;47:2882–2890. [PubMed]
127. Poletti M, Frosini D, Lucetti C, Del Dotto P, Ceravolo R, Bonuccelli U. Decision making in de novo Parkinson’s disease. Mov Disord. 2010;25:1432–1436. [PubMed]
128. Maia TV, Frank MJ. From reinforcement learning models to psychiatric and neurological disorders. Nat Neurosci. 2011;14:154–162. [PubMed]
129. Nutt JG. Levodopa-induced dyskinesias: review, observations and speculations. Neurology. 1990;40:340–345. [PubMed]
130. Djamshidian A, Cardoso F, Grosset D, Bowden-Jones H, Lees AJ. Pathological gambling in Parkinson’s disease – a review of the literature. Mov Disord. 2011;26:1976–1984. [PubMed]
131. Giuffrida A, McMahon LR. In vivo pharmacology of endocannabinoids and their metabolic inhibitors: therapeutic implications in Parkinson’s disease and abuse liability. Prostaglandins Other Lipid Mediat. 2010;91:90–103. [PMC free article] [PubMed]
132. Pisani A, Fezza F, Galati S, Battista N, Napolitano S, Finazzi-Agro A, et al. High endogenous cannabinoid levels in the cerebrospinal fluid of untreated Parkinson’s disease patients. Ann Neurol. 2005;57:777–779. [PubMed]
133. Lastres-Becker I, Cebeira M, de Ceballos ML, Zeng BY, Jenner P, Ramos JA, et al. Increased cannabinoid CB1 receptor binding and activation of GTP-binding proteins in the basal ganglia of patients with Parkinson’s syndrome and of MPTP-treated marmosets. Eur J Neurosci. 2001;14:1827–1832. [PubMed]
134. Di Marzo V, Hill MP, Bisogno T, Crossman AR, Brotchie JM. Enhanced levels of endogenous cannabinoids in the globus pallidus are associated with a reduction in movement in an animal model of Parkinson’s disease. FASEB J. 2000;14:1432–1438. [PubMed]
135. Van der Stelt M, Fox SH, Hill M, Crossman AR, Petrosino S, Di Marzo V, et al. A role for endocannabinoids in the generation of parkinsonism and levodopa-induced dyskinesia in MPTP-lesioned non-human primate models of Parkinson’s disease. FASEB J. 2005;19:1140–1142. [PubMed]
136. Garcia-Arencibia M, Ferraro L, Tanganelli S, Fernandez-Ruiz J. Neurosci Lett. 2008;438:10–13. [PubMed]
137. Brotchie JM, Lee J, Venderova K. Levodopa-induced dyskinesia in Parkinson’s disease. J Neural Transm. 2005;112:359–391. [PubMed]