J Exp Anal Behav. 2006 May; 85(3): 407–423.

Abstract

Behavioral pharmacology is a maturing science that has made significant contributions to the study of drug effects on behavior, especially in the domain of drug-behavior interactions. Less appreciated is that research in behavioral pharmacology can have, and has had, implications for the experimental analysis of behavior, especially its conceptualizations and theory. In this article, I outline three general strategies in behavioral pharmacology research that have been employed to increase understanding of behavioral processes. Examples are provided of the general characteristics of the strategies and of implications of previous research for behavior theory. Behavior analysis will advance as its theories are challenged.

Keywords: drug effects, drugs as tools, drug self-administration, drug discrimination, private events, punishment, match to sample

In their seminal textbook on behavioral pharmacology, Thompson and Schuster (1968) listed three goals for the discipline. The first goal was to develop and refine behavioral procedures that would be effective in helping to screen pharmacological agents for potential clinical effectiveness. That is an enterprise that is very much alive and well (e.g., Brunner, Nestler, & Leahy, 2002). The second goal was to employ sophisticated behavioral techniques to analyze the mechanisms of action of behaviorally active drugs. Work toward this goal has remained the core of behavioral pharmacology (Poling & Byrne, 2000; van Haaren, 1993). The third goal was “the use of drugs as a means or tool for the analysis of complex behavioral processes” (p. 4). This objective has been discussed less frequently, and it is the focus of the present paper: the thesis here is that drug effects can have implications for our understanding of behavioral processes. In what follows, I present three general strategies, and examples of each, that can be and have been used in pursuit of this aim.

The three general strategies are 1) using drugs as stimuli, 2) using drugs to produce perturbations, and 3) taking advantage of specific behavioral or physiological actions of drugs. These three strategies provide ways of using drugs as tools to expand the experimental analysis of behavior. Each of these has already yielded findings having clear importance for behavioral theory—importance which in some cases has yet to be widely acknowledged (see McKearney, 1977). There may well be other strategies that are effective in using drugs to understand behavior, but the three that are presented here cover most of what has been accomplished to date.

I. Drugs as Stimuli

Drugs can serve in a variety of stimulus functions. Two that are of particular interest to this discussion are 1) when drugs reinforce behavior, and 2) when drugs serve as discriminative stimuli.

Drugs as Reinforcers

The study of drugs as reinforcers is commonly called research on drug self-administration, and it is a very large field of investigation (Griffiths, Bigelow, & Henningfield, 1980; Meisch & Lemaire, 1993; see Grant & Bennett, 2003; Perkins, 1999; Shaham, Shalev, Lu, DeWit, & Stewart, 2003 for recent reviews and summaries of current research topics). The study of drug-reinforced behavior has contributed importantly not only to understanding the neurobiology of addiction (Ator & Griffiths, 2003) and to the development of better treatments for drug addiction (e.g., Higgins & Wong, 1998; Rose & Behm, 2004; Silverman et al., 2002), but also to our understanding of reinforcement as a behavioral process.

Although some may argue that reinforcement is strictly a descriptive term, in reality it is a theoretical term. One approach to theoretical explanation is seeing “the like in the unlike” (Duhem, 1954). For example, the time it takes a fired bullet, a falling feather, or a felled tree (three apparently quite dissimilar occurrences) to hit the ground is determined by gravitational attraction. That is, despite the obvious differences among the three events, there is a unitary physical process that governs them. Similarly, use of the common term reinforcement to apply to the occurrence of a very wide array of behavioral activities, ranging from pressing a lever to imitation, and to an almost equally wide range of consequences, implies a common behavioral process. Such an implication is a theoretical claim as it suggests that a single behavioral process, called reinforcement, governs the dynamic changes seen in seemingly dissimilar situations.

There are at least three ways that the study of drug self-administration has contributed, and continues to contribute, to our understanding of reinforcement. First, such research has extended the generality of the concept itself. Each time a new class of drug or a new self-administration procedure (e.g., intramuscular vs. intravenous injection) yields an effect that we can call reinforcement, the generality of the concept of reinforcement is extended. Second, research with drugs as reinforcers serves to challenge theories of reinforcement, particularly those theories whose aim is to be able to predict in advance what sorts of contingency relations will result in reinforcement, for example, response-deprivation theory (Timberlake & Allison, 1974). Third, careful attention to definitional issues has helped to distinguish drug reinforcement from other effects that drugs may have in procedures intended to study drugs as reinforcers. This third point will be illustrated by a recent report by Donny et al. (2003).

Just as many sorts of consequences, ranging from the opportunity to see things to the opportunity to eat, have been shown to be effective as reinforcers, the opportunity to inject many behaviorally active pharmacological compounds has been shown to be effective as well. When parameters of drug presentation such as schedule, delay, and amount have been varied, effects similar to those when the same parameters have been manipulated with nondrug reinforcers have been observed (Beardsley & Balster, 1993; Goldberg, 1973; Woods & Downs, 1974). Thus, drugs as consequences can function identically to other kinds of consequences, attesting to the reasonableness of categorizing them alike as reinforcers. In addition, the fact that drugs can reinforce behavior brings to drug dependence the power of the experimental analysis of behavior.

Current theory directed at predicting the sorts of contingency arrangements that will result in reinforcement has focused largely on the opportunity to behave afforded by presentation of the putative reinforcer (Allison, 1993; Gawley, Timberlake, & Lucas, 1986; Timberlake & Allison, 1974). That is, whether a stimulus is reinforcing depends on what the stimulus permits the subject to do. Premack (1959) originally proposed that reinforcement was relative, depending on baseline probabilities of action. For example, presenting a food pellet to a rat consequent on a lever press provides the rat the opportunity to eat. In Premack's view, eating (especially when food deprived) is high probability behavior, whereas pressing a lever is lower probability behavior, and the two probabilities can be assessed experimentally (e.g., Premack, 1962). Reinforcement occurs in this situation because the opportunity to engage in higher probability behavior follows the emission of the lower probability behavior.

Premack's formulation has been largely supplanted by the response-deprivation view suggested by Timberlake and Allison (1974), which specifies that the contingently available activity be one that has been constrained below its level when continuously available, but the core of the view remains that reinforcers are best characterized by the behavior that they occasion. That administration of a drug via an intravenous catheter can reinforce behavior (as amply demonstrated in hundreds of studies of drug self-administration) serves to challenge views like those of Premack (1959) and Timberlake and Allison. It is difficult to conceive of delivery of a drug into an animal's vein as setting the occasion for any particular activity. Intravenous drug administration requires no action of any kind on the animal's part, so theories of reinforcement that suggest the necessity of some kind of behavior instigated by the reinforcing consequence are not accommodated by drug self-administration. The phenomenon of intravenous drug self-administration therefore presents a challenge (as does reinforcement by electrical stimulation of certain brain regions; Olds & Milner, 1954) in the formulation of an overall theory of reinforcement. Perhaps the view that a reinforcer sets the occasion for behavior will need to be re-evaluated when developing such theories. In any event, it appears that additional theoretical and empirical work is needed.
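
The response-deprivation condition can be stated compactly as an inequality: a contingent activity is predicted to reinforce when the schedule forces it below its free-baseline proportion. The sketch below is mine, with hypothetical names, not Timberlake and Allison's notation:

```python
def response_deprived(inst_baseline: float, cont_baseline: float,
                      inst_required: float, cont_given: float) -> bool:
    """Response-deprivation condition (after Timberlake & Allison, 1974).

    Baselines are free-access levels of the instrumental and contingent
    activities; the schedule requires inst_required units of the
    instrumental activity per cont_given units of the contingent one.
    Reinforcement is predicted when
        inst_required / cont_given > inst_baseline / cont_baseline,
    i.e., when meeting the schedule constrains the contingent activity
    below its baseline level.
    """
    return inst_required / cont_given > inst_baseline / cont_baseline

# A rat that presses little but eats much at baseline: requiring 10
# presses per pellet constrains eating below baseline, so eating is
# predicted to reinforce pressing.
print(response_deprived(inst_baseline=5, cont_baseline=100,
                        inst_required=10, cont_given=1))  # True
```

Stated this way, the difficulty raised in the text is plain: an intravenous infusion specifies no contingent activity whose baseline could enter the inequality at all.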

Another common aspect of many theories of reinforcement is that they point to what are now called “establishing operations” (Michael, 1982, 1993; see also Kantor, 1959) as important features of situations in which reinforcement can be observed. Establishing operations are conditions implemented to enhance the effectiveness of certain stimuli as reinforcement. For example, to enhance the effectiveness of food as reinforcement one can deprive an organism of food. Here, food deprivation is the establishing operation. The response-deprivation view of reinforcement points to a reduction in eating below “free” levels as essential for reinforcement. There are other sorts of establishing operations that depend on prior learning, as when one encounters bland food, an establishing operation for the reinforcing effectiveness of spices; or when dismantling a piece of equipment one is confronted with a Phillips-head screw and the reinforcing effectiveness of a Phillips-head screwdriver increases. There are often no obvious establishing operations necessary for drug administration to serve as reinforcement, so here again, drug self-administration presents a challenge to those who are attempting to generate a general theory of reinforcement.

The third way research on drug self-administration has assisted in issues more broadly concerned with the analysis of behavior is that it provides examples of the importance of carefully defining the concept of reinforcement when trying to account for behavioral effects. A timely and noteworthy example is provided in recent research by Donny et al. (2003). Those investigators studied the phenomenon of nicotine self-administration in rats. Establishing and maintaining behavior by nicotine reinforcement has long been a difficult task, and prior to 1990 there were only scattered reports of success (e.g., Goldberg, Spealman, & Goldberg, 1981), and many (unpublished) failures. In the late 1980s, however, techniques were developed that appeared to provide reliable methods for establishing and maintaining operant responding using intravenous nicotine as the reinforcer (Corrigall & Coen, 1989). Since that time, scores of research papers, emanating from a wide array of laboratories, have employed the technique pioneered by Corrigall and colleagues.

The procedure that has become standard is the following. In a two-lever chamber, rats initially are trained to press one of the levers, and this training is accomplished by using food pellets as a reinforcer (the rats are food deprived). Once lever pressing is established, each lever press produces a food pellet and an intravenous injection of nicotine. Presses on the other lever (control lever) have no programmed effects. Each press on the active lever is followed by a 1-min timeout (TO) period, during which lights in the chamber are extinguished. Next, food presentation is terminated so that the only consequence of lever pressing is injection of nicotine, followed by the timeout. Under these conditions, rates of lever pressing, although low (usually resulting in about 10 drug deliveries per hour), are higher than those on the control lever. Importantly, when saline is substituted for nicotine rates fall to the level obtained on the control lever. Thus, two features of the data and procedure together support the view that reinforcement occurred: pressing the lever had a specific consequence (nicotine administration) and the frequency of the response increased relative to control levels. Nicotine administration therefore appeared to serve as a reinforcer, albeit one that supported relatively low response rates under the studied conditions.
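
The active-lever contingency in the final phase can be sketched as follows (a schematic only: the constants and names are mine, and the description above implies an FR 1 arrangement, although ratio schedules are also used):

```python
TIMEOUT_S = 60  # 1-min timeout (chamber lights off) after each infusion

def run_session(presses):
    """Apply the standard contingency to a list of (time_s, lever) presses.

    lever is 'active' or 'control'. An active press outside a timeout
    triggers one nicotine infusion and starts a 60-s timeout; presses
    during the timeout and all control-lever presses have no programmed
    consequence. Returns the times at which infusions were delivered.
    """
    infusions = []
    timeout_until = -1.0
    for t, lever in presses:
        if lever == 'active' and t >= timeout_until:
            infusions.append(t)
            timeout_until = t + TIMEOUT_S
    return infusions

# Presses during the timeout and on the control lever earn nothing:
print(run_session([(0, 'active'), (10, 'active'),
                   (30, 'control'), (70, 'active')]))  # [0, 70]
```

The timeout caps the obtainable infusion rate at about one per minute, which is consistent with the low rates (roughly 10 deliveries per hour) noted above.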

Missing from the assessment, however, until the research by Donny et al. (2003), was an important test. In determining whether a behavioral consequence serves as a reinforcer, it is important to determine whether the consequential relationship between the response and the suspected reinforcer is necessary to produce the observed increase in the frequency of the response (Catania, 1998). A classic example of why this test is needed is provided by a hypothetical experiment to determine if pinching reinforces crying in an infant. We might initially arrange that each time the baby cries, we administer a painful pinch. Sure enough, the frequency of crying will increase. But is the increase evidence for reinforcement? The crucial test is to pinch the baby independently of its crying and measure how much crying occurs. In this example, if we pinch the baby as often as we did in the “reinforcement” phase of the procedure, we are likely to see just as much crying. We would conclude, therefore, that merely presenting the pinches, independent of crying, increases crying, so the increase in crying when the pinches depended on crying was not evidence for reinforcement.

Essentially, Donny et al. (2003) performed this important test in the case of nicotine self-administration. Specifically, they trained one group of rats under the standard procedure to self-administer nicotine. Rats in a second group were “yoked” to rats in the self-administration group. That is, when they pressed the active lever they received the timeout, but they received nicotine injections independently of those presses. Nicotine injections occurred at exactly the same times they did for rats in the first group. Representative results are shown in Figure 1. The important message in these data is that the two groups responded essentially equivalently. When saline was substituted for nicotine, responding declined in the self-administration group (filled circles), but it declined to the same extent in the yoked group (unfilled circles). These data illustrate that the dependency between lever pressing and nicotine injection was not necessary for the maintenance of lever pressing. They also show that response-dependent nicotine administration did not serve as a reinforcer. One possible interpretation is that nicotine enhanced the effectiveness of the postresponse timeout period as reinforcement. It has been shown in rats, which are nocturnal animals, that response-dependent presentation of darkness can serve as a reinforcer that maintains relatively low response rates (Barnes, Kish, & Wood, 1959; Keller, 1941). The timeout in the Donny et al. study included turning out the lights, and may have been the stimulus maintaining behavior. In any event, their results call for additional research and analysis of what happens in experiments in which lever pressing results in the injection of nicotine. The Donny et al. study thus provides a clear and useful example of the interplay between theory in the experimental analysis of behavior and research in behavioral pharmacology.
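
The logic of the yoked-control arrangement can be sketched in a few lines (a minimal illustration; the FR 5 value follows the figure caption, but the functions and names are mine, not Donny et al.'s):

```python
def master_infusions(active_press_times, fr=5):
    """Infusion times for a self-administering (master) rat under FR 5:
    every fifth active-lever press produces an infusion. (The timeout
    contingency is omitted for brevity.)"""
    return [t for i, t in enumerate(active_press_times, start=1) if i % fr == 0]

def yoked_infusions(master_times, _yoked_press_times):
    """A yoked rat receives infusions at its master's times regardless
    of its own pressing; its presses produce only the visual stimulus,
    so they do not enter the calculation."""
    return list(master_times)

m = master_infusions(list(range(1, 11)))
print(m, yoked_infusions(m, [2, 7]))  # [5, 10] [5, 10]
```

Because drug intake is identical in the two groups, any difference in lever pressing would have to come from the response-infusion dependency itself, which is precisely what the data failed to show.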

Figure 1

Presses per session on a lever that resulted in intravenous injection of either nicotine or saline.

Filled circles show data from a group of rats that received nicotine, according to a fixed-ratio 5 schedule, in sessions 16–20 and 24–28 and that received injections of the saline vehicle according to the same schedule during sessions 21–23. Each injection was accompanied by a visual stimulus complex (VS) that included 1 min of darkness. Open circles show averages (bars show SEM) from a group of rats that received injections of nicotine or saline independent of their lever presses. The injections for a particular rat occurred at the same time as for a rat in the first group; that is, the injections were “yoked” to those of a rat in the first group. For the rats in the second group, each 5th lever press resulted in presentation of the stimulus complex. (From Donny et al., 2003. Reprinted with permission.)

Drugs as Discriminative Stimuli

As the large literature on drug discrimination (e.g., Samele, Shine, & Stolerman, 1991) reveals, drug administrations can serve a discriminative function. In a typical experiment, an animal is presented two response alternatives. If a drug is administered before the session, reinforcement is available as a consequence for one of the responses, but not for the second. If the drug vehicle is administered before the session, the second response is reinforced. Under such conditions, for a wide variety of drugs and doses, the behavior of animals has come under appropriate stimulus control. That is, animals emit primarily the response appropriate to the drug stimulus administered before the session.
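
The session contingency just described can be sketched schematically (the left/right lever assignment is arbitrary and the names are mine):

```python
def correct_lever(presession_injection):
    """In a two-lever drug-discrimination procedure, the pre-session
    injection ('drug' or 'vehicle') sets which lever's presses are
    reinforced that session. The assignment here is arbitrary."""
    return 'left' if presession_injection == 'drug' else 'right'

def reinforced(presession_injection, lever_pressed):
    """A press is eligible for reinforcement only on the injection-
    appropriate lever; presses on the other lever are extinguished."""
    return lever_pressed == correct_lever(presession_injection)

print(reinforced('drug', 'left'), reinforced('vehicle', 'left'))  # True False
```

The interoceptive drug state is thus the only cue that predicts which lever will pay off, which is what makes accurate performance evidence of stimulus control by the drug.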

One of the main uses of drug-discrimination procedures is to provide evidence related to abuse liability (Ator & Griffiths, 2003). One way this is accomplished is by training an animal to discriminate a drug with known abuse liability, for example morphine, from vehicle. Next, doses of a drug suspected to have abuse liability are tested in these animals. If the animals respond as if the drug is morphine, that is, if they predominantly press the lever associated with reinforcement when morphine was administered before sessions, then the tested drug is likely to have abuse potential. When a drug produces such results it is said to generalize to (some say “substitute for”) the drug used in original training of the discrimination. Attempts to develop drugs that have morphine's clinically useful effects but do not have abuse potential are facilitated by drug-discrimination tests like that just described (Woods, Young, & Herling, 1982).

In a related vein, drug-discrimination procedures also are employed in a quest to develop drug-abuse treatments. Accurate drug-discrimination performance is presumed to be based on interoceptive-stimulus effects of the discriminated drug. To the extent that those effects (often called “subjective effects”) play a role in drug abuse, the discovery of drugs or procedures that diminish or counter those effects may lead to candidate treatment procedures (see Winger, 1998).

Drug-discrimination procedures also have been useful in understanding basic pharmacology. It is interesting, and surprising in some ways (given the widespread distribution of specific receptors in the brain and the fact that many drugs act at multiple receptor sites), that such procedures have been valuable in determining that particular drugs share activity at the same receptor. Once a discrimination has been established with one drug, drugs that act primarily at the same receptor generally substitute completely for the original training drug.

Drug-discrimination procedures also extend our understanding of the development of stimulus control in general. Such procedures, for example, extend the generality of discrimination-training procedures to circumstances in which stimuli are presented only once per day.

For purposes of the present article, however, the feature of drug discrimination that is most interesting is that the phenomenon has implications for the stimulus control of behavior by “private events.” One of the interesting features of human behavior is that speakers provide verbal descriptions of events to which only the speaking individual has access. For example, only I have direct access to my hunger pangs, what I am thinking, what I am imagining, etc. The puzzle is how I could have learned to talk about these private events. The problem does not exist for stimuli that are available to everyone. When I am learning to talk, those who help me learn can see public stimuli, so if I called a chair a table, I could be immediately corrected. In contrast, if I reacted to a pain in my toe by saying, “I have a hunger pang,” how could anyone set me straight? B. F. Skinner (1945) suggested ways in which those who teach us to talk can solve this problem (see also Wittgenstein, 1953). One of the important ways that we can learn to speak of private events is when those who instruct us have access to “public accompaniments.” For example, we can be taught about pain when someone sticks a needle in our arm. An observer can see the needle stick, which presumably is accompanied by pain. We also are likely to react by withdrawing our arm and crying out. Based on all that publicly available information, the observer might say something like, “That hurts, doesn't it?” or “That must hurt.” By doing so the observer helps us learn when to say something hurts. Many episodes like this occur across the course of a lifetime, with many different observers, so we have ample opportunity to learn to speak of private events, even though the verbal community does not share them.

The drug-discrimination procedure models some interesting features of the interpretation offered by Skinner. In a sense, the experimenter serves as the animal's “verbal community” by reinforcing correct responses and extinguishing incorrect ones. Experimenters have access to the public accompaniments of the drug state in the sense that they know what was injected, whether drug or vehicle. The experimenter does not, however, have access to the private stimuli generated by drug administration. Just as with human learning about private events, therefore, the contingencies are arranged in accord with the public accompaniments, with the hope that they correspond to private stimuli. Successful training of a discrimination is validation that they do.

In one important and potentially useful way, the standard drug-discrimination procedure is not in accord with what happens when humans learn to speak about their private events. In the human case both the learner and the verbal community commonly have access to the associated public events. When a needle pricks my arm, usually I can see the needle and monitor my overt reactions, just as the verbal community can. Animals in drug-discrimination experiments, however, do not have available to them the public stimuli accompanying the injection to which the experimenter as verbal community reacts.

To date, little research has focused on using drug-discrimination procedures to study issues surrounding the development and maintenance of behavior control by private stimuli, although some very good beginnings have occurred (e.g., DeGrandpre, Bickel, & Higgins, 1992; Lubinski & Thompson, 1987). The field, therefore, is ripe for research. Many important questions that would be difficult, or impossible, to study in humans can be addressed using drug-discrimination procedures. For example, one of the challenges to Skinner's interpretation is why the public accompaniments do not overshadow the private events. Because the training and associated social consequences have to be based on public stimuli to which both the learner and the verbal community usually have access, why don't those stimuli come to be the primary cues for talking about the situation? The drug-discrimination procedure can be used to deal experimentally with this issue. When drug stimuli have been compounded, overshadowing has been observed (e.g., Mariathasan & Stolerman, 1993), and overshadowing also has been observed when drug and exteroceptive stimuli have been presented together (e.g., Järbe, Hiltunen, & Swedberg, 1989). Thus, it is clear that basic stimulus-compounding phenomena can be observed in drug discriminations. Because drug-discrimination procedures keep public information out of view for the animal, public information could be added systematically to see how it affects development (and maintenance, for that matter) of discriminations. Other important factors also can be systematically manipulated. For example, dose (and presumably stimulus magnitude) could be crossed with presence or absence of associated public stimuli to see if overshadowing depends on salience or magnitude of the private sensations. Or, issues of fidelity between public and private stimuli could be assessed, as could the role of different sorts of contingencies.
None of these has been examined systematically, and all should be.

To summarize, employing drugs as either reinforcing or discriminative stimuli can have, and has had, implications for our understanding of behavior. In the case of drug self-administration, the results have extended the generality of the concept of reinforcement and contributed to the development of theories about what events will function as reinforcers. In addition, as the Donny et al. (2003) study illustrates, experiments in behavioral pharmacology can serve as examples that guide the definition of reinforcement. In the case of drugs as discriminative stimuli, in addition to extending the generality of discrimination-training procedures, research can speak to the issue of stimulus control by private events, a domain that has received relatively little empirical study by those involved in the experimental analysis of behavior.

II. Using Drugs to Produce Perturbations

The second major category of procedures in which drugs help to elucidate behavioral processes is to use the drug as an outside influence. The logic of this approach is straightforward and is used widely in the physical sciences. The basic idea is to subject two entities, in our case apparently similar behavioral situations, to the same external influence. If the external influence produces different results in the two situations, then the conclusion is that although apparently the same, the two are not. Consider an analogous situation in chemistry where one has two unknown clear liquids. Both can be subjected to heat, and if they boil at different temperatures then it is known that the two are not identical. Similarly in physics, the particles resulting from collisions in a particle accelerator are differentiated by their differential reactions to a magnetic field. Because this category can be particularly helpful in a young science like the science of behavior, several examples of how it has been accomplished are presented below.

The first two examples show how drug experiments have provided provocative information about the basic behavioral process of punishment, information that has yet to be incorporated into theories of punishment. The first set of experiments to be described was reported by McKearney (1976), who studied effects of amphetamine and pentobarbital on punished responding in squirrel monkeys. Two conditions were arranged and in both, lever pressing was maintained under a fixed-interval (FI) 5-min schedule. In the first, the consequence that maintained responding was the presentation of a food pellet. In the second, responding was maintained by termination of a stimulus associated with the periodic delivery of brief electric shocks (a schedule of shock-stimulus-complex termination; see Morse & Kelleher, 1966). Similar temporal patterns and rates of responding were established in the two conditions. Next, a fixed-ratio (FR) 30 schedule of presentation of a brief electric shock was added in both conditions, and subsequent rates of responding were substantially reduced. That is, punishment was observed under both sets of conditions.

Representative data for the experiments with the drugs are shown in the cumulative records of Figure 2. Note first that control performances were quite similar. The data in the right column show the “conventional” effects on responding maintained by food presentation. d-Amphetamine either did not affect (0.1 mg/kg) or decreased (0.3 mg/kg) the already low rates of punished responding maintained by food delivery, whereas pentobarbital increased them. Contrast these results, however, with those on punished responding maintained by stimulus-shock termination. Under the schedule of stimulus-shock termination, amphetamine produced substantial increases in rates of responding, whereas pentobarbital produced only decreases. To sum up, both sets of behavioral circumstances met the definitional criteria for punishment—response-dependent presentation of a stimulus (the brief shock) resulted in decreased rates of responding. By purely behavioral criteria, therefore, the two circumstances exemplify the same process, punishment. The drug experiments, however, indicate that things are not so straightforward. The conditions of behavioral maintenance were important determinants of the drug effects. These results, therefore, suggest that applying the same term, punishment, to describe the behavioral processes involved in the two sets of contingencies may be an oversimplification. It also is possible that the difference in drug effects implies that the two different maintenance conditions do not represent instances of a single process called reinforcement. Additional parametric research, for example, varying both punishment intensity and consequence magnitude in the two maintenance conditions, will be necessary before firm conclusions can be drawn.

Figure 2

Cumulative response records from two monkeys, S-532 and S-525, responding under either an FI 5-min schedule of stimulus-shock termination (left column) or an equal-valued schedule of food presentation (right column).

The pen stepped with each lever press and reset to the baseline when each FI was completed. Pips on the record (and on the event line) indicate the delivery of brief electric shocks that were presented according to an FR 30 schedule. The top two records are from control performance. Those below show records of responding under the indicated doses of amphetamine and pentobarbital. (From McKearney, 1976).

A second example also involves the concept of punishment. Branch, Nicholson, and Dworkin (1977) studied the effects of pentobarbital on punished responding of pigeons. Most research on punishment with pigeons has employed electric shock (using the methods pioneered by Azrin; see Azrin & Holz, 1966) as the punishing stimulus, and the standard result of administering pentobarbital has been to increase response rates under punishment (i.e., an “anti-punishment effect,” Geller & Seifter, 1960; McMillan, 1973; Morse, 1964). In our research, we were interested in producing equal rates of punished and unpunished responding, and in our initial experiment we used timeout (TO) as punishment. Specifically, a two-component multiple schedule was employed. One component was a random-interval (RI) 6-min schedule of food presentation. In the other, the schedule of food presentation was RI 1 min, a schedule that results in higher response rates than RI 6 min. In this component, however, every third response, on average, resulted in a 20-s TO. The addition of the TO contingency lowered response rate in that component such that rates and patterns of responding were roughly equivalent in the two components. We then examined effects of a range of doses of pentobarbital. As Figure 3 illustrates, response rates in both components were decreased in a dose-dependent manner. A second experiment in which response-dependent and response-independent presentation of the TO were compared confirmed that punishment had occurred in the RI 1-min component, so a third experiment with two of the original pigeons replaced the TOs with brief electric shocks. The results of that experiment are shown in Figure 4. With electric shock as the punishing stimulus, pentobarbital resulted in the characteristic increases in responding usually seen in behavior whose frequency has been reduced by punishment.
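
The cost imposed by the timeout punisher in the RI 1-min component can be sketched with simple arithmetic (the function and names are mine; the actual contingency was variable, with every third response producing a timeout only on average, whereas a fixed ratio is used here for simplicity):

```python
def timeout_cost(n_responses, ratio=3, to_s=20):
    """Number of 20-s timeouts, and total seconds of timeout (chamber
    dark, food schedule suspended), produced by n_responses in the
    punishment component of Branch et al.'s (1977) procedure."""
    n_timeouts = n_responses // ratio
    return n_timeouts, n_timeouts * to_s

print(timeout_cost(30))  # (10, 200): 30 pecks -> 10 timeouts, 200 s lost
```

The arrangement thus let the richer RI 1-min food schedule be offset by accumulated timeout, which is how the two components came to support roughly equal response rates.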


Rates of key pecking under a multiple schedule in which one component involved punishment by response-dependent timeout (open circles), as a function of dose of sodium pentobarbital.

Points above C show averages from sessions without drug; points above V show effects of injecting the distilled water vehicle. Vertical bars on points show ranges. Each graph is for a separate pigeon. (From Branch, Nicholson, & Dworkin, 1977)


The Branch et al. (1977) study, therefore, showed that whether punished responding was increased by pentobarbital depended on the stimulus used as the punisher. This finding implies that although the formal definitional requirements for punishment were met in all the experiments and the rates of responding produced by the two different kinds of punishment were essentially the same, the behavioral processes involved were not identical. Once again, as with the study by McKearney (1976), experiments with drugs reveal the behavioral situation to be more complicated than a simple account based on the standard principles would suggest. One way to incorporate the Branch et al. findings into current behavioral theory would be to suggest that it is useful to distinguish positive punishment (e.g., that with electric shock) from negative punishment (e.g., punishment by timeout). Whether that sort of accommodation is reasonable remains to be investigated.

Important implications for behavioral processes other than punishment also have been generated in experiments examining drug—behavior interactions. One study whose importance has generally been overlooked, yet which is still timely, is by Newland and Marr (1985). On the surface, their study is deceptively simple. It involved training pigeons to respond under a two-alternative matching-to-sample procedure. Only two conditional discriminations were taught. If the sample (center) key was red, a peck to the red comparison stimulus (presented on one of the two side keys) was reinforced; if the sample stimulus was green, a peck to the green comparison stimulus was reinforced. Such an arrangement results in pigeons being confronted with four kinds of trials: 1) red left, red sample, green right (RRG); 2) green left, red sample, red right (GRR); 3) red left, green sample, green right (RGG); and 4) green left, green sample, red right (GGR). Newland and Marr did not simply examine the effects of chlorpromazine and imipramine on overall accuracy, as one might presume to do if assuming that the drugs might exert their effects on conditional stimulus control as a general behavioral process. Nor did they simply test for effects on accuracy on red-sample trials versus green-sample trials, as one might presume to do on the basis that two separate discriminations had been trained; that is, peck red if red and peck green if green (cf. Carter & Werner, 1978). Newland and Marr instead examined effects separately on the four different trial configurations. The results are illuminating and are depicted in Figure 5.
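The trial space Newland and Marr analyzed is small enough to enumerate directly. The following sketch (function names are mine, not theirs) lists the four trial arrays and tallies accuracy separately for each configuration rather than pooling over samples, mirroring their analysis:

```python
from itertools import product

# The four trial arrays in a two-color matching-to-sample task:
# (left comparison, sample, right comparison), comparisons always differing.
trials = [(left, sample, right)
          for left, sample, right in product("RG", "RG", "RG")
          if left != right]
assert len(trials) == 4  # RRG, GRR, RGG, GGR, read left-sample-right

def correct_side(trial):
    """The reinforced response is a peck to the comparison matching the sample."""
    left, sample, right = trial
    return "left" if left == sample else "right"

def accuracy_by_configuration(records):
    """records: iterable of (trial, chosen_side) pairs.
    Returns percent-correct per trial array, as Newland & Marr computed it,
    rather than one pooled accuracy score."""
    stats = {t: [0, 0] for t in trials}  # trial -> [correct, total]
    for trial, side in records:
        stats[trial][1] += 1
        if side == correct_side(trial):
            stats[trial][0] += 1
    return {"".join(t): c / n if n else None for t, (c, n) in stats.items()}
```

A pooled analysis would average away exactly the configuration-specific drug effects their study revealed; keying the tally on the whole array preserves them.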


Percent correct responses in a two-alternative match-to-sample procedure, as a function of dose of imipramine (left column) or chlorpromazine (right column).

Each row shows data from one pigeon, and bars on the control points indicate ranges. The functions are presented according to trial-array type. For example, GGR indicates that the data are from trials when the left comparison key was green, the sample (center) key was green, and the right comparison key was red. (From Newland & Marr, 1985).

The drugs often decreased accuracy, but did so to different degrees for the different trial configurations. In some cases the results are dramatic. Consider the chlorpromazine data for Pigeon P7, for example. The drug produced no changes in accuracy for two configurations, yet resulted in decreases, substantial in the case of the RGG configuration, for the other two. That is, the drug neither decreased accuracy generally nor selectively decreased accuracy on red-sample or green-sample trials. Instead, the drug effect depended on the precise display across the keys. Examination of data from other pigeons shows similar dissociations (e.g., P72's data under imipramine). Such findings support the notion that the entire stimulus array can play an important role in stimulus control in matching-to-sample procedures with pigeons. That is, the pigeons' behavior was not simply under conditional stimulus control by the sample stimulus (cf. Carter & Werner, 1978), nor by the sameness of colors on the sample and comparison-stimulus keys. Instead, because the drugs produced disparate effects depending on the particular array of stimuli, the results are consistent with the view that the entire stimulus array, including the locations of the various key colors, participated in the stimulus control of side-key pecks. It is interesting to note that research on stimulus control in matching-to-sample procedures conducted more than a decade after publication of the Newland and Marr (1985) paper has led to essentially the same conclusion (e.g., Lionello & Urcuioli, 1998).

As each of the foregoing examples in this section illustrates, drug-induced perturbations can provide evidence related to whether behavioral relations that appear to exemplify a particular behavioral process (or set of processes) actually do represent a unified process. In many cases, behavioral pharmacology research has revealed that matters are more complicated than normally assumed, and that our current conceptualizations may not be fully adequate to deal with that complexity. From its beginnings (e.g., Dews 1955, 1958) research in behavioral pharmacology has indicated that the precise conditions of maintenance of behavior play a signal role in how behavior is established and maintained and how it reacts to external influences. The examples provided in this section serve, I hope, to reinforce that view.

III. Taking Advantage of Particular Drug Effects

Using Predictable Physiological Effects

Drugs administered to behaving animals can alter not only overt behavior but also measurable concurrent physiological responses. To the extent that effects on the two classes of responses diverge, information is gained about causal relationships between the physiological responses and the behavioral responses. If the two classes of response are affected differently or at different doses, then the case that one causes the other is weakened. As an example of research that illustrates such a dissociation, consider a study by Hunt and Campbell (1997). They studied behavior that occurred regularly before the signaled delivery of food. Specifically, rats were exposed to a trials procedure in which a 10-s illumination of a light was followed immediately by presentation of a food pellet. Rearing and orientation were measured, as was heart rate. After exposure to several trials, both the behavioral and physiological measures revealed changes. Rearing and orientation were increased during the signal, whereas heart rate was decreased. In this case, the change in heart rate might be considered part of (or perhaps reflective of) the private events associated with the presentation of the signal, whereas the changes in overt behavior might be considered responses to those events. Such an interpretation implies that the private events are primary, and causal in a chain of events including stimulus, private events, and behavior.

Hunt and Campbell (1997) then continued their analysis by studying effects of atropine and atenolol. These drugs were chosen because of their known effects on heart rate. Figure 6 summarizes the results of their experiments. The behavioral data are plotted in the left panel. The two drugs, at the doses tested, produced no changes in the overt behavior from that observed in the control (no drug) condition. The right panel, however, illustrates that the drugs were not without effect. That graph reveals that under control conditions (filled circles), heart rate slowed during the pre-food (i.e., light-CS) signal. Especially interesting are the data from atropine. The dose chosen completely eliminated changes in heart rate during the signal. That is, those results reveal a clear dissociation between the physiological response and the behavioral responses, indicating that the physiological response is not a precursor to the behavioral ones. Of course, heart rate was likely not the only physiological response altered by the conditioning procedure, and it would be premature to claim that the results show that “feelings” and overt behavior were dissociated, but many theories of emotional behavior suggest that internal events are essential manifestations of emotions (e.g., Power, 1999). To the extent that heart-rate change was a facet of the important internal events engendered by experience with upcoming food, data like those from Hunt and Campbell challenge such a view, and in so doing inform behavioral theory. They are consistent, as well, with other research revealing dissociations between overt emotional behavior and associated physiological responses (e.g., Brady, Kelly, & Plumlee, 1969).


Left graph: Measure of general orienting (expressed as percentage of time devoted to orienting) behavior as a function of time preceding and during a stimulus (CS) that reliably occurred before presentation of food.

Filled circles show control performance (mean from a group of 16 rats), open squares show effects of preceding the test with administration of atenolol, and open circles show effects of administering atropine. Right graph: Change in heart rate under the same three sets of conditions. (From Hunt & Campbell, 1997. Reprinted with permission)

Using Predictable Behavioral Effects

As behavioral pharmacology has matured as a scientific enterprise, a corpus of data has been generated that allows prediction of certain behavioral effects. For example, it is generally well established that intermediate doses of psychomotor stimulants will increase the relatively low response rates under reinforcement schedules that require that responses be spaced in time, that is, schedules which arrange for reinforcement of interresponse times (IRTs) longer than some minimum (IRT>t schedules, also called DRL schedules). Those same doses usually result in decreases in the high rates of responding under schedules in which reinforcement is delivered after varying numbers of response (i.e., variable-ratio, or VR, schedules). Research by Smith (1986) provides an example of how these “standard” effects were used to examine a theoretical issue.

Smith (1986) was concerned with studying the “reinforcement loss” hypothesis of drug tolerance (see Corfield-Sumner & Stolerman, 1978; Schuster, Dockens, & Woods, 1966). The reinforcement-loss hypothesis holds that tolerance to a drug's effects on free-operant behavior will develop if the drug's initial effect is to reduce the frequency of reinforcement. In Smith's study, rats were trained to press a lever under a two-component multiple schedule. One component was a random-ratio (RR) 40 schedule (each response had a probability of .025 of being reinforced), and the other was a DRL 20-s schedule (interresponse times of 20 s or longer were reinforced). Smith then administered amphetamine, which produced the characteristic effect: rates under the RR schedule were decreased, and those under the DRL schedule were increased. Accompanying these changes were alterations in the frequency of reinforcement. In both components reinforcement frequency was decreased. The reinforcement-loss view, therefore, predicts that if the drug is given repeatedly before each session, tolerance will develop in both components of the multiple schedule.
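The two reinforcement rules in Smith's multiple schedule can be stated compactly in code. This is a minimal sketch under the parameters just described; the function names are mine, not Smith's:

```python
import random

def rr_reinforce(rng, p=0.025):
    """Random-ratio (RR) 40 contingency: each response is reinforced with
    probability 1/40 = .025, independent of its timing."""
    return rng.random() < p

def drl_reinforce(irt_s, t=20.0):
    """DRL 20-s contingency: a response is reinforced only if at least
    20 s have elapsed since the previous response (IRT >= t)."""
    return irt_s >= t

# High-rate responding maximizes RR payoff but forfeits DRL reinforcement;
# that is why amphetamine's rate increases in the DRL component also
# reduced reinforcement frequency there.
assert drl_reinforce(25.0) and not drl_reinforce(8.0)

rng = random.Random(7)
hits = sum(rr_reinforce(rng) for _ in range(4000))  # roughly 100 expected at p = .025
print(hits)
```

The asymmetry is the crux of the procedure: under the RR rule, reinforcement rate rises with response rate, while under the DRL rule it falls once responding becomes too fast, so one drug effect (rate change) moves reinforcement frequency in the same direction (downward) in both components.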

The results of the study are summarized in Figure 7. The upper graph shows response rates, and the lower graph displays reinforcement rates, all averaged across animals (N = 5) and blocks of sessions. The data from session blocks 8 through 17 show clear development of tolerance to effects in the RR schedule, but no tolerance to the rate increases under the DRL schedule. Thus, even though there was a reduction in the frequency of reinforcement during the DRL schedule, from about 50 per session to fewer than 10 per session, tolerance did not develop. In the next phase of the study, Smith (1986) removed the RR component, and tolerance to the rate-increasing effects under the DRL schedule quickly developed, but as soon as the RR component was reintroduced, that tolerance was no longer in evidence.

By using the known effects of amphetamine, therefore, Smith (1986) was able to discover a limitation to the reinforcement-loss view. In the context of the multiple schedule, amphetamine resulted in a substantial decrease in the very high rate of reinforcement that occurred under the RR schedule, and it may be that the decrease in some way overshadowed the contemporaneous decrease in reinforcement frequency in the DRL component. To test that possibility, Smith next removed the RR component from the schedule. When the potential for the relatively high rate of reinforcement that resulted under the RR schedule was taken out of the experimental situation, tolerance was evident in the DRL component. That result suggests that the reinforcement loss in the DRL component was sufficient to lead to tolerance. Strikingly, when the RR schedule was reintroduced, the tolerance previously evident in the DRL component immediately disappeared. Smith's results indicate circumstances in which reinforcement loss was not sufficient to result in tolerance, thus forcing an elaboration or modification of the reinforcement-loss view. Smith's (1986) data may well have more significance for theory in behavioral pharmacology than for the experimental analysis of behavior, but they are not without importance for the latter. The findings serve as additional evidence that the effectiveness of reinforcement in controlling behavior depends not on the contingencies and establishing operations alone, but also on the context in which those contingencies occur. They also relate to the role of rate of reinforcement in the control of behavior. Some current behavioral theories place substantial emphasis on the role of reinforcement rate (e.g., Herrnstein, 1970; Nevin & Grace, 2000). Smith's results suggest that effects resulting from a particular rate of reinforcement can depend on contemporaneous rates of reinforcement resulting from other sets of contingencies.

Using Unplanned Behavioral Effects

In many cases, the precise effects that a drug will have in a behavioral preparation are not known in advance of experimentation, but even in cases such as those it is possible on occasion to take advantage of drug-induced behavioral changes to enhance understanding of the behavioral processes involved. A good example of this kind of event is provided in research by Laties (1972). He was studying drug effects on behavior under a fixed-consecutive-number (FCN) procedure. Specifically, pigeons worked in a chamber with two keys, and food reinforcement occurred if the pigeon made eight or more consecutive pecks on one of the two keys before pecking the other. If the pigeon pecked the second key before completing eight or more on the first key, the count reset and the pigeon had to start over. The behavior of pigeons comes under excellent control under such a procedure, with a substantial majority of sequences of pecks on the first key equaling or slightly exceeding eight.

Of theoretical interest is how the pigeon “does it.” That is, there are at least two possible sources of stimulus control that could lead to accurate performance in the task: Switches to the second key could be under the control of the number of pecks just made on the first key (as would be suggested by the way in which the procedure is arranged); it is also possible, however, that because pecks on the first key occur at a fairly constant rate, switches could be under the control of time taken to complete eight pecks. A study of haloperidol by Laties (1972) provided information that helps to distinguish between the two possibilities. Haloperidol had the interesting effect of reducing pecking rate on the first key in a dose-dependent manner. Under nondrug conditions, pigeons pecked the key about 75 times per min. Under the largest dose of haloperidol, the rate was reduced to about 25 pecks per min. Thus, under this dose of the drug, it took a pigeon about three times as long to finish eight pecks as it had without drug. If switches were under control of the time taken to complete the eight pecks on the first key, then accuracy would be reduced substantially by this dose of the drug. Accuracy, however, was not decreased by this or any other dose of the drug. That finding, coupled with results with other drugs that decreased both rate and accuracy, supports the view that the important controlling variable in the FCN procedure is the response count, not time taken to complete the count. In this case, therefore, Laties was able to take advantage of an unpredicted drug effect to gain information about the sources of behavioral control in a complex behavioral procedure.
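The FCN contingency, and the arithmetic behind Laties's inference, can be made concrete. The sketch below is a simplification (it evaluates only the first switch in a sequence, and the names are hypothetical), assuming a constant peck rate as the text does:

```python
def fcn_outcome(sequence, n=8):
    """Fixed-consecutive-number (FCN) rule, simplified to one trial:
    a switch peck ('B') produces food only if preceded by n or more
    consecutive pecks on key 'A'; a premature switch resets the count.
    Only the first switch in the sequence is evaluated."""
    count = 0
    for peck in sequence:
        if peck == "A":
            count += 1
        else:                       # a peck on the second key
            return count >= n
    return False                    # no switch occurred

assert fcn_outcome("AAAAAAAAB")     # 8 consecutive pecks, then switch: reinforced
assert not fcn_outcome("AAAAB")     # premature switch: not reinforced

# The timing argument: at the control rate of 75 pecks/min, 8 pecks take
# about 6.4 s; at 25 pecks/min under the largest haloperidol dose, about
# 19.2 s. A timer-based switch rule would therefore fail under drug, while
# a count-based rule is unaffected, which is what Laties observed.
for rate in (75, 25):
    print(f"{rate} pecks/min -> {8 * 60 / rate:.1f} s to complete 8 pecks")
```

The point of the arithmetic is that the drug tripled the duration of a run without changing its length in pecks, cleanly pulling the two candidate controlling variables apart.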

Taking advantage of particular drug effects, either planned or unplanned, has occurred less often as a tactic in using drugs to assist in understanding behavior. The approach, nevertheless, has been and presumably can continue to be an effective method in gleaning behavioral significance from drug experiments.

Summary and Conclusions

The thesis of this article is that research in behavioral pharmacology has implications not only for issues surrounding behavioral effects of drugs but also for the understanding of behavior in general. I have outlined three general strategies that have been and can be employed. Of course, the three categories of approach are not entirely distinct, but they do help to organize the research that has been done, and may help to conceptualize such experiments in the future. One message that I hope is heeded is that research in behavioral pharmacology already has produced data that have important implications for behavioral theory, implications that may not yet have had the impact that they deserve. Frequently, research with drugs has served to challenge the adequacy of current behavioral conceptualizations. It is only by challenging existing assumptions and concepts that a theory of behavior can advance. Behavioral pharmacology has provided such challenges and promises to supply more.


Average, for a group of rats, lever-pressing rates (upper graph) and rates of food-pellet delivery (lower graph) across blocks of three sessions.

Filled circles show data from the random-ratio (RR) 40 component of a multiple schedule, and open circles are data from the DRL component of the schedule. The leftmost three points show effects of administering amphetamine after each session. The next 10 points show data from sessions preceded by injection of amphetamine. In blocks 18–20 the RR component was removed, and the remaining points show data from when it was reinstated. (From Smith, 1986. Reprinted with permission)

Acknowledgments

Preparation of this paper was aided by USPHS grants DA04074 and DA14249 from the National Institute on Drug Abuse.

References

  • Allison J. Response deprivation, reinforcement, and economics. Journal of the Experimental Analysis of Behavior. 1993;60:129–140. [PMC free article] [PubMed] [Google Scholar]
  • Ator N.A, Griffiths R.R. Principles of drug abuse liability assessment in laboratory animals. Drug and Alcohol Dependence. 2003;70:S55–72. [PubMed] [Google Scholar]
  • Azrin N.H, Holz W.C. Punishment. In: Honig W.K, editor. Operant behavior: Areas of research and application. New York: Appleton-Century-Crofts; 1966. pp. 380–447. [Google Scholar]
  • Barnes G.W, Kish G.B, Wood W.O. The effect of light intensity when onset or termination of illumination is used as reinforcing stimulus. Psychological Record. 1959;9:53–60. [Google Scholar]
  • Beardsley P.M, Balster R.L. The effects of delay of reinforcement and dose on the self-administration of cocaine and procaine in rhesus monkeys. Drug and Alcohol Dependence. 1993;34:37–43. [PubMed] [Google Scholar]
  • Brady J.V, Kelly D, Plumlee L. Autonomic and behavioral responses of the rhesus monkey to emotional conditioning. Annals of the New York Academy of Sciences. 1969;159:959–975. [PubMed] [Google Scholar]
  • Branch M.N, Nicholson G, Dworkin S.I. Punishment-specific effects of pentobarbital: Dependency on the type of punisher. Journal of the Experimental Analysis of Behavior. 1977;28:285–293. [PMC free article] [PubMed] [Google Scholar]
  • Brunner D, Nestler E, Leahy E. In need of high-throughput behavioral systems. Drug Discovery Today. 2002;7:S107–112. [PubMed] [Google Scholar]
  • Carter D.E, Werner T.J. Complex learning and information processing by pigeons: A critical analysis. Journal of the Experimental Analysis of Behavior. 1978;29:565–601. [PMC free article] [PubMed] [Google Scholar]
  • Catania A.C. Learning (4th ed) Upper Saddle River, NJ: Prentice Hall; 1998. [Google Scholar]
  • Corfield-Sumner P.K, Stolerman I.P. Behavioral tolerance. In: Blackman D.E, Sanger D.J, editors. Contemporary research in behavioral pharmacology. New York: Plenum Press; 1978. pp. 391–448. [Google Scholar]
  • Corrigall W.A, Coen K.M. Nicotine maintains robust self-administration in rats on a limited-access schedule. Psychopharmacology. 1989;99:473–478. [PubMed] [Google Scholar]
  • DeGrandpre R.J, Bickel W.K, Higgins S.T. Emergent equivalence relations between interoceptive (drug) and exteroceptive (visual) stimuli. Journal of the Experimental Analysis of Behavior. 1992;58:9–18. [PMC free article] [PubMed] [Google Scholar]
  • Dews P.B. Differential sensitivity to pentobarbital of pecking performance in pigeons depending on the schedule of reward. Journal of Pharmacology and Experimental Therapeutics. 1955;113:393–401. [PubMed] [Google Scholar]
  • Dews P.B. Analysis of effects of pharmacological agents in behavioral terms. Federation Proceedings. 1958;17:1024–1030. [PubMed] [Google Scholar]
  • Donny E.C, Chaudhri N, Caggiula A.R, Evans-Martin F.F, Booth S, Gharib M.A, Clements L.A, Sved A.F. Operant responding for a visual reinforcer in rats is enhanced by noncontingent nicotine: implications for nicotine self-administration and reinforcement. Psychopharmacology. 2003;169:68–76. [PubMed] [Google Scholar]
  • Duhem P. The aim and structure of physical theory (P.P. Weiner, Trans.) Princeton, NJ: Princeton University Press; 1954. [Google Scholar]
  • Gawley D.J, Timberlake W, Lucas G.A. Schedule constraint on the average drink burst and the regulation of wheel running and drinking in rats. Journal of Experimental Psychology: Animal Behavior Processes. 1986;12:78–94. [PubMed] [Google Scholar]
  • Geller I, Seifter J. The effects of meprobamate, barbiturates, d-amphetamine, and promazine on experimentally induced conflict in the rat. Psychopharmacologia (Berl.) 1960;1:482–492. [Google Scholar]
  • Goldberg S.R. Comparable behavior maintained under fixed-ratio and second-order schedules of food presentation, cocaine injection or d-amphetamine injection in the squirrel monkey. Journal of Pharmacology and Experimental Therapeutics. 1973;186:18–30. [PubMed] [Google Scholar]
  • Goldberg S.R, Spealman R.D, Goldberg D.M. Persistent behavior at high rates maintained by intravenous self-administration of nicotine. Science. 1981 Oct 30;214:573–575. [PubMed] [Google Scholar]
  • Grant K.A, Bennett A.J. Advances in nonhuman primate alcohol abuse and alcoholism research. Pharmacology & Therapeutics. 2003;100:235–255. [PubMed] [Google Scholar]
  • Griffiths R.R, Bigelow G.E, Henningfield J.E. Similarities in animal and human drug taking behavior. In: Mello N.K, editor. Advances in substance abuse: Behavioral and biological research: Vol. 1. London: Jessica Kingsley Publishers; 1980. pp. 1–90. [Google Scholar]
  • Herrnstein R.J. On the law of effect. Journal of the Experimental Analysis of Behavior. 1970;13:243–266. [PMC free article] [PubMed] [Google Scholar]
  • Higgins S.T, Wong C.J. Treating cocaine abuse: What does research tell us? In: Higgins S.T, Katz J.L, editors. Cocaine abuse: Behavior, pharmacology, and clinical applications. New York: Academic Press; 1998. pp. 343–361. [Google Scholar]
  • Hunt P.S, Campbell B.A. Autonomic and behavioral correlates of appetitive conditioning in rats. Behavioral Neuroscience. 1997;111:494–502. [PubMed] [Google Scholar]
  • Järbe T.U.C, Hiltunen A.J, Swedberg M.D.B. Compound drug discrimination learning. Drug Development Research. 1989;16:111–122. [Google Scholar]
  • Kantor J.R. Interbehavioral psychology. Granville, OH: Principia Press; 1959. [Google Scholar]
  • Keller F.S. Light aversion in the white rat. Psychological Record. 1941;4:235–250. [Google Scholar]
  • Laties V.G. The modification of drug effects on behavior by external discriminative stimuli. Journal of Pharmacology and Experimental Therapeutics. 1972;183:1–13. [PubMed] [Google Scholar]
  • Lionello K.M, Urcuioli P.J. Control by sample location in pigeons matching to sample. Journal of the Experimental Analysis of Behavior. 1998;70:235–251. [PMC free article] [PubMed] [Google Scholar]
  • Lubinski D, Thompson T. An animal model of the interpersonal communication of interoceptive (private) states. Journal of the Experimental Analysis of Behavior. 1987;48:1–15. [PMC free article] [PubMed] [Google Scholar]
  • Mariathasan E.A, Stolerman I.P. Overshadowing of nicotine discrimination in rats: A model for behavioural mechanisms of drug action? Behavioural Pharmacology. 1993;4:209–215. [PubMed] [Google Scholar]
  • McKearney J.W. Punishment of responding under schedules of stimulus-shock termination: Effects of d-amphetamine and pentobarbital. Journal of the Experimental Analysis of Behavior. 1976;26:281–287. [PMC free article] [PubMed] [Google Scholar]
  • McKearney J.W. Asking questions about behavior. Perspectives in Biology and Medicine. 1977;21:109–119. [PubMed] [Google Scholar]
  • McMillan D.E. Drugs and punished responding. III. Punishment intensity as a determinant of drug effect. Psychopharmacologia (Berl.) 1973;30:61–74. [PubMed] [Google Scholar]
  • Meisch R.A, Lemaire G.A. Drug self-administration. In: van Haaren F, editor. Methods in behavioral pharmacology. Amsterdam: Elsevier; 1993. pp. 257–300. [Google Scholar]
  • Michael J. Distinguishing between discriminative and motivational functions of stimuli. Journal of the Experimental Analysis of Behavior. 1982;37:149–155. [PMC free article] [PubMed] [Google Scholar]
  • Michael J. Establishing operations. The Behavior Analyst. 1993;16:191–206. [PMC free article] [PubMed] [Google Scholar]
  • Morse W.H. Effect of amobarbital and chlorpromazine on punished behavior in the pigeon. Psychopharmacologia (Berl.) 1964;6:286–294. [PubMed] [Google Scholar]
  • Morse W.H, Kelleher R.T. Schedules using noxious stimuli I: Multiple fixed-ratio and fixed-interval termination of schedule complexes. Journal of the Experimental Analysis of Behavior. 1966;9:267–290. [PMC free article] [PubMed] [Google Scholar]
  • Nevin J.A, Grace R.C. Behavioral momentum and the Law of Effect. Behavioral and Brain Sciences. 2000;23:73–90. [PubMed] [Google Scholar]
  • Newland M.C, Marr M.J. The effects of chlorpromazine and imipramine on rate and stimulus control of matching to sample. Journal of the Experimental Analysis of Behavior. 1985;44:49–68. [PMC free article] [PubMed] [Google Scholar]
  • Olds J, Milner P. Positive reinforcement produced by electrical stimulation of septal area and other regions of rat brain. Journal of Comparative and Physiological Psychology. 1954;47:419–427. [PubMed] [Google Scholar]
  • Perkins K.A. Nicotine self-administration. Nicotine & Tobacco Research. 1999;1:S133–137. [PubMed] [Google Scholar]
  • Poling A, Byrne T. Introduction to behavioral pharmacology. Reno, NV: Context Press; 2000. [Google Scholar]
  • Power M.J. Two routes to emotion: Some implications of multi-level theories of emotion for therapeutic practice. Behavioral and Cognitive Psychotherapy. 1999;27:129–141. [Google Scholar]
  • Premack D. Toward empirical behavioral laws: I. Positive reinforcement. Psychological Review. 1959;66:219–233. [PubMed] [Google Scholar]
  • Premack D. Reversibility of the reinforcement relation. Science. 1962 Apr 20;136:255–257. [PubMed] [Google Scholar]
  • Rose J.E, Behm F.M. Extinguishing the rewarding value of smoke cues: Pharmacological and behavioral treatments. Nicotine & Tobacco Research. 2004;6:523–532. [PubMed] [Google Scholar]
  • Samele C, Shine P.J, Stolerman I.P. A bibliography of drug discrimination research, 1989–1991. Behavioural Pharmacology. 1991;3:171–192. [PubMed] [Google Scholar]
  • Schuster C.R, Dockens W.S, Woods J.H. Behavioral variables affecting the development of amphetamine tolerance. Psychopharmacologia (Berl.) 1966;9:170–182. [PubMed] [Google Scholar]
  • Shaham Y, Shalev U, Lu L, DeWit H, Stewart J. The reinstatement model of drug relapse: History, methodology, and major findings. Psychopharmacology. 2003;168:3–20. [PubMed] [Google Scholar]
  • Silverman K, Svikis D, Wong C.J, Hampton J, Stitzer M.L, Bigelow G.E. A reinforcement-based therapeutic workplace for the treatment of drug abuse: Three-year abstinence outcomes. Experimental and Clinical Psychopharmacology. 2002;10:228–240. [PubMed] [Google Scholar]
  • Skinner B.F. The operational analysis of psychological terms. Psychological Review. 1945;52:270–277. [Google Scholar]
  • Smith J.B. Effects of chronically administered d-amphetamine on spaced responding maintained under multiple and single-component schedules. Psychopharmacology. 1986;88:296–300. [PubMed] [Google Scholar]
  • Thompson T, Schuster C.R. Behavioral pharmacology. Englewood Cliffs, NJ: Prentice Hall; 1968. [Google Scholar]
  • Timberlake W, Allison J. Response deprivation: An empirical approach to instrumental performance. Psychological Review. 1974;81:146–164. [Google Scholar]
  • van Haaren F. Methods in behavioral pharmacology. Amsterdam: Elsevier; 1993. [Google Scholar]
  • Winger G. Preclinical evaluations of pharmacotherapies for cocaine abuse. In: Higgins S.T, Katz J.L, editors. Cocaine abuse: Behavior, pharmacology, and clinical applications. New York: Academic Press; 1998. pp. 135–158. [Google Scholar]
  • Wittgenstein L. Philosophical investigations. New York: Macmillan; 1953. [Google Scholar]
  • Woods J.H, Downs D.A. Codeine- and cocaine-reinforced responding in rhesus monkeys: Effects of dose on response rates under a fixed-ratio schedule. Journal of Pharmacology and Experimental Therapeutics. 1974;191:179–188. [PubMed] [Google Scholar]
  • Woods J.H, Young A.M, Herling S. Classification of narcotics on the basis of their reinforcing, discriminative, and antagonist effects in rhesus monkeys. Federation Proceedings. 1982;41:221–227. [PubMed] [Google Scholar]


Articles from Journal of the Experimental Analysis of Behavior are provided here courtesy of Society for the Experimental Analysis of Behavior