


12 Hunger, Eating, and Health: Why Do Many People Eat Too Much?

12.1 Digestion, Energy Storage, and Energy Utilization

12.2 Theories of Hunger and Eating: Set Points versus Positive Incentives

12.3 Factors That Determine What, When, and How Much We Eat

12.4 Physiological Research on Hunger and Satiety

12.5 Body Weight Regulation: Set Points versus Settling Points

12.6 Human Obesity: Causes, Mechanisms, and Treatments

12.7 Anorexia and Bulimia Nervosa

Eating is a behavior that is of interest to virtually everyone. We all do it, and most of us derive great pleasure from it. But for many of us, it becomes a source of serious personal and health problems.


Most eating-related health problems in industrialized nations are associated with eating too much—the average American consumes 3,800 calories per day, about twice the average daily requirement (see Kopelman, 2000). For example, it is estimated that 65% of the adult U.S. population is either overweight or clinically obese, qualifying this problem for epidemic status (see Abelson & Kennedy, 2004; Arnold, 2009). The resulting financial and personal costs are huge. Each year in the United States, about $100 billion is spent treating obesity-related disorders (see Olshansky et al., 2005). Moreover, each year, an estimated 300,000 U.S. citizens die from disorders caused by their excessive eating (e.g., diabetes, hypertension, cardiovascular diseases, and some cancers). Although the United States is the trend-setter when it comes to overeating and obesity, many other countries are not far behind (Sofsian, 2007). Ironically, as overeating and obesity have reached epidemic proportions, there has been a related increase in disorders associated with eating too little (see Polivy & Herman, 2002). For example, almost 3% of American adolescents currently suffer from anorexia or bulimia, which can be life-threatening in extreme cases.


The massive increases in obesity and other eating-related disorders that have occurred over the last few decades in many countries stand in direct opposition to most people’s thinking about hunger and eating. Many people—and I assume that this includes you—believe that hunger and eating are normally triggered when the body’s energy resources fall below a prescribed optimal level, or set point. They appreciate that many factors influence hunger and eating, but they assume that the hunger and eating system has evolved to supply the body with just the right amount of energy.

Thinking Creatively

This chapter explores the incompatibility of the set-point assumption with the current epidemic of eating disorders. If we all have hunger and eating systems whose primary function is to maintain energy resources at optimal levels, then eating disorders should be rare. The fact that they are so prevalent suggests that hunger and eating are regulated in some other way. This chapter will repeatedly challenge you to think in new ways about issues that impact your health and longevity and will provide new insights of great personal relevance—I guarantee it.

Before you move on to the body of the chapter, I would like you to pause to consider a case study. What would a severely amnesic patient do if offered a meal shortly after finishing one? If his hunger and eating were controlled by energy set points, he would refuse the second meal. Did he?

The Case of the Man Who Forgot Not to Eat

Clinical Implications

R.H. was a 48-year-old male whose progress in graduate school was interrupted by the development of severe amnesia for long-term explicit memory. His amnesia was similar in pattern and severity to that of H.M., whom you met in Chapter 11, and an MRI examination revealed bilateral damage to the medial temporal lobes.

The meals offered to R.H. were selected on the basis of interviews with him about the foods he liked: veal parmigiana (about 750 calories) plus all the apple juice he wanted. On one occasion, he was offered a second meal about 15 minutes after he had eaten the first, and he ate it. When offered a third meal 15 minutes later, he ate that, too. When offered a fourth meal he rejected it, claiming that his “stomach was a little tight.”

Then, a few minutes later, R.H. announced that he was going out for a good walk and a meal. When asked what he was going to eat, his answer was “veal parmigiana.”

Clearly, R.H.’s hunger (i.e., motivation to eat) did not result from an energy deficit (Rozin et al., 1998). Other cases like that of R.H. have been reported by Higgs and colleagues (2008).

12.1 Digestion, Energy Storage, and Energy Utilization

The primary purpose of hunger is to increase the probability of eating, and the primary purpose of eating is to supply the body with the molecular building blocks and energy it needs to survive and function (see Blackburn, 2001). This section provides the foundation for our consideration of hunger and eating by providing a brief overview of the processes by which food is digested, stored, and converted to energy.

Digestion

The gastrointestinal tract and the process of digestion are illustrated in Figure 12.1 on page 300. Digestion is the gastrointestinal process of breaking down food and absorbing its constituents into the body. In order to appreciate the basics of digestion, it is useful to consider the body without its protuberances, as a simple living tube with a hole at each end. To supply itself with energy and other nutrients, the tube puts food into one of its two holes—the one with teeth—and passes the food along its internal canal so that the food can be broken down and partially absorbed from the canal into the body. The leftovers are jettisoned from the other end. Although this is not a particularly appetizing description of eating, it does serve to illustrate that, strictly speaking, food has not been consumed until it has been digested.

FIGURE 12.1 The gastrointestinal tract and the process of digestion.

Energy Storage in the Body

As a consequence of digestion, energy is delivered to the body in three forms: (1) lipids (fats), (2) amino acids (the breakdown products of proteins), and (3) glucose (a simple sugar that is the breakdown product of complex carbohydrates, that is, starches and sugars).

The body uses energy continuously, but its consumption is intermittent; therefore, it must store energy for use in the intervals between meals. Energy is stored in three forms: fats, glycogen, and proteins. Most of the body’s energy reserves are stored as fats, relatively little as glycogen and proteins (see Figure 12.2). Thus, changes in the body weights of adult humans are largely a consequence of changes in the amount of their stored body fat.

Why is fat the body’s preferred way of storing energy? Glycogen, which is largely stored in the liver and muscles, might be expected to be the body’s preferred mode of energy storage because it is so readily converted to glucose—the body’s main directly utilizable source of energy. But there are two reasons why fat, rather than glycogen, is the primary mode of energy storage: One is that a gram of fat can store almost twice as much energy as a gram of glycogen; the other is that glycogen, unlike fat, attracts and holds substantial quantities of water. Consequently, if all your fat calories were stored as glycogen, you would likely weigh well over 275 kilograms (600 pounds).
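To see where a figure like this comes from, here is a minimal back-of-the-envelope calculation. The energy densities (roughly 9 kcal/g for fat and 4 kcal/g for glycogen), the assumption of about 4 g of water bound per gram of stored glycogen, and the hypothetical 70-kg person carrying 20 kg of body fat are standard illustrative approximations, not values taken from this chapter.

```python
# Back-of-the-envelope comparison of fat versus hydrated-glycogen energy storage.
# All numeric values are illustrative assumptions, not figures from the chapter.

FAT_KCAL_PER_G = 9.0          # approximate energy density of fat
GLYCOGEN_KCAL_PER_G = 4.0     # approximate energy density of glycogen
WATER_G_PER_G_GLYCOGEN = 4.0  # assumed water bound per gram of stored glycogen

body_weight_kg = 70.0         # hypothetical adult
fat_mass_kg = 20.0            # hypothetical fat reserves

stored_kcal = fat_mass_kg * 1000 * FAT_KCAL_PER_G         # energy currently held in fat
glycogen_kg = stored_kcal / (GLYCOGEN_KCAL_PER_G * 1000)  # glycogen needed for the same energy
hydrated_glycogen_kg = glycogen_kg * (1 + WATER_G_PER_G_GLYCOGEN)

hypothetical_weight_kg = body_weight_kg - fat_mass_kg + hydrated_glycogen_kg
print(f"Energy stored as fat:        {stored_kcal:,.0f} kcal in {fat_mass_kg:.0f} kg")
print(f"Same energy as wet glycogen: {hydrated_glycogen_kg:.0f} kg")
print(f"Hypothetical body weight:    {hypothetical_weight_kg:.0f} kg")
```

With these assumed values the hypothetical weight lands in the same range as the chapter's estimate; the exact number depends on the assumptions, but the roughly ninefold weight penalty of hydrated glycogen relative to fat is the point.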

FIGURE 12.2 Distribution of stored energy in an average person.

Three Phases of Energy Metabolism

There are three phases of energy metabolism (the chemical changes by which energy is made available for an organism’s use): the cephalic phase, the absorptive phase, and the fasting phase. The cephalic phase is the preparatory phase; it often begins with the sight, smell, or even just the thought of food, and it ends when the food starts to be absorbed into the bloodstream. The absorptive phase is the period during which the energy absorbed into the bloodstream from the meal is meeting the body’s immediate energy needs. The fasting phase is the period during which all of the unstored energy from the previous meal has been used and the body is withdrawing energy from its reserves to meet its immediate energy requirements; it ends with the beginning of the next cephalic phase. During periods of rapid weight gain, people often go directly from one absorptive phase into the next cephalic phase, without experiencing an intervening fasting phase.

The flow of energy during the three phases of energy metabolism is controlled by two pancreatic hormones: insulin and glucagon. During the cephalic and absorptive phases, the pancreas releases a great deal of insulin into the bloodstream and very little glucagon. Insulin does three things: (1) It promotes the use of glucose as the primary source of energy by the body. (2) It promotes the conversion of bloodborne fuels to forms that can be stored: glucose to glycogen and fat, and amino acids to proteins. (3) It promotes the storage of glycogen in liver and muscle, fat in adipose tissue, and proteins in muscle. In short, the function of insulin during the cephalic phase is to lower the levels of bloodborne fuels, primarily glucose, in anticipation of the impending influx; and its function during the absorptive phase is to minimize the increasing levels of bloodborne fuels by utilizing and storing them.

In contrast to the cephalic and absorptive phases, the fasting phase is characterized by high blood levels of glucagon and low levels of insulin. Without high levels of insulin, glucose has difficulty entering most body cells; thus, glucose stops being the body’s primary fuel. In effect, this saves the body’s glucose for the brain, because insulin is not required for glucose to enter most brain cells. The low levels of insulin also promote the conversion of glycogen and protein to glucose. (The conversion of protein to glucose is called gluconeogenesis .)

On the other hand, the high levels of fasting-phase glucagon promote the release of free fatty acids from adipose tissue and their use as the body’s primary fuel. The high glucagon levels also stimulate the conversion of free fatty acids to ketones, which are used by muscles as a source of energy during the fasting phase. After a prolonged period without food, however, the brain also starts to use ketones, thus further conserving the body’s resources of glucose.
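The contrasts among the three phases can also be held in mind as a small lookup table. The sketch below simply restates the chapter's qualitative description of each phase in code form; the "high"/"low" labels are paraphrases, not measurements.

```python
# The three phases of energy metabolism, restated as a lookup table.
# Entries paraphrase the chapter's qualitative description; nothing is quantified.

ENERGY_METABOLISM_PHASES = {
    "cephalic": {
        "begins with": "sight, smell, or thought of food",
        "insulin": "high", "glucagon": "low",
        "main events": "lower bloodborne fuels in anticipation of the influx",
    },
    "absorptive": {
        "begins with": "nutrients entering the bloodstream",
        "insulin": "high", "glucagon": "low",
        "main events": "use glucose as primary fuel; store surplus as glycogen, fat, and protein",
    },
    "fasting": {
        "begins with": "unstored energy from the last meal running out",
        "insulin": "low", "glucagon": "high",
        "main events": "free fatty acids and ketones become primary fuels; glucose is spared for the brain",
    },
}

for phase, summary in ENERGY_METABOLISM_PHASES.items():
    print(f"{phase:10s} insulin={summary['insulin']:4s} glucagon={summary['glucagon']:4s}")
```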

Figure 12.3 summarizes the major metabolic events associated with the three phases of energy metabolism.

FIGURE 12.3 The major events associated with the three phases of energy metabolism: the cephalic, absorptive, and fasting phases.

12.2 Theories of Hunger and Eating: Set Points versus Positive Incentives

One of the main difficulties I have in teaching the fundamentals of hunger, eating, and body weight regulation is the set-point assumption. Although it dominates most people’s thinking about hunger and eating (Assanand, Pinel, & Lehman, 1998a, 1998b), whether they realize it or not, it is inconsistent with the bulk of the evidence. What exactly is the set-point assumption?

Set-Point Assumption

Most people attribute hunger (the motivation to eat) to the presence of an energy deficit, and they view eating as the means by which the energy resources of the body are returned to their optimal level—that is, to the energy set point. Figure 12.4 summarizes this set-point assumption. After a meal (a bout of eating), a person’s energy resources are assumed to be near their set point and to decline thereafter as the body uses energy to fuel its physiological processes. When the level of the body’s energy resources falls far enough below the set point, a person becomes motivated by hunger to initiate another meal. The meal continues, according to the set-point assumption, until the energy level returns to its set point and the person feels satiated (no longer hungry).

FIGURE 12.4 The energy set-point view that is the basis of many people’s thinking about hunger and eating.

Set-point models assume that hunger and eating work in much the same way as a thermostat-regulated heating system in a cool climate. The heater increases the house temperature until it reaches its set point (the thermostat setting). The heater then shuts off, and the temperature of the house gradually declines until it becomes low enough to turn the heater back on. All set-point systems have three components: a set-point mechanism, a detector mechanism, and an effector mechanism. The set-point mechanism defines the set point, the detector mechanism detects deviations from the set point, and the effector mechanism acts to eliminate the deviations. For example, the set-point, detector, and effector mechanisms of a heating system are the thermostat, the thermometer, and the heater, respectively.
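For readers who find it easier to see a mechanism as a few lines of code, here is a minimal simulation of the thermostat analogy. The temperatures and rates below are arbitrary illustration values; the point is only how the set-point mechanism, detector, and effector interact in a negative feedback loop.

```python
# Minimal thermostat-style set-point system: set point, detector, effector.
# All numeric values are arbitrary; this only illustrates the negative-feedback loop.

SET_POINT = 20.0   # set-point mechanism: the target temperature (deg C)
HEAT_RATE = 0.8    # effector strength per time step when the heater is on
COOL_RATE = 0.3    # passive heat loss per time step

def detector(temperature, set_point=SET_POINT):
    """Detector mechanism: report the deviation from the set point."""
    return temperature - set_point

def effector(deviation):
    """Effector mechanism: turn the heater on only when below the set point."""
    return deviation < 0  # negative feedback: act to oppose the deviation

temperature = 17.0
for step in range(20):
    heater_on = effector(detector(temperature))
    temperature += HEAT_RATE if heater_on else 0.0
    temperature -= COOL_RATE
    print(f"step {step:2d}: {temperature:5.2f} C, heater {'on' if heater_on else 'off'}")
```

Run it and the temperature climbs to the set point and then hovers just around it, which is exactly the behavior set-point theories attribute to hunger and eating.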

All set-point systems are negative feedback systems—systems in which feedback from a change in one direction elicits compensatory effects in the opposite direction. Negative feedback systems are common in mammals because they act to maintain homeostasis—a stable internal environment—which is critical for mammals’ survival (see Wenning, 1999). Set-point systems combine negative feedback with a set point to keep an internal environment fixed at the prescribed point. Because homeostatic regulation is so widespread in mammalian physiology, it is tempting to assume that hunger and eating must work this way too; however, the prevalence of negative feedback elsewhere in the body creates no logical imperative for the set-point regulation of eating. Throughout this chapter, you will need to put aside your preconceptions and base your thinking about hunger and eating entirely on the empirical evidence.

Glucostatic and Lipostatic Set-Point Theories of Hunger and Eating

In the 1940s and 1950s, researchers working under the assumption that eating is regulated by some type of set-point system speculated about the nature of the regulation. Several researchers suggested that eating is regulated by a system that is designed to maintain a blood glucose set point—the idea being that we become hungry when our blood glucose levels drop significantly below their set point and that we become satiated when eating returns our blood glucose levels to their set point. The various versions of this theory are collectively referred to as the glucostatic theory. It seemed to make good sense that the main purpose of eating is to defend a blood glucose set point, because glucose is the brain’s primary fuel.

The lipostatic theory is another set-point theory that was proposed in various forms in the 1940s and 1950s. According to this theory, every person has a set point for body fat, and deviations from this set point produce compensatory adjustments in the level of eating that return levels of body fat to their set point. The most frequently cited support for the theory is the fact that the body weights of adults stay relatively constant.

The glucostatic and lipostatic theories were viewed as complementary, not mutually exclusive. The glucostatic theory was thought to account for meal initiation and termination, whereas the lipostatic theory was thought to account for long-term regulation. Thus, the dominant view in the 1950s was that eating is regulated by the interaction between two set-point systems: a short-term glucostatic system and a long-term lipostatic system. The simplicity of these 1950s theories is appealing. Remarkably, they are still being presented as the latest word in some textbooks; perhaps you have encountered them.

Problems with Set-Point Theories of Hunger and Eating

Thinking Creatively

Set-point theories of hunger and eating have several serious weaknesses (see de Castro & Plunkett, 2002). You have already learned one fact that undermines these theories: There is an epidemic of obesity and overweight, which should not occur if eating is regulated by a set point. Let’s look at three more major weaknesses of set-point theories of hunger and eating.

Evolutionary Perspective

• First, set-point theories of hunger and eating are inconsistent with basic eating-related evolutionary pressures as we understand them. The major eating-related problem faced by our ancestors was the inconsistency and unpredictability of the food supply. Thus, in order to survive, it was important for them to eat large quantities of good food when it was available so that calories could be banked in the form of body fat. Any ancestor—human or otherwise—that stopped feeling hungry as soon as immediate energy needs were met would not have survived the first hard winter or prolonged drought. For any warm-blooded species to survive under natural conditions, it needs a hunger and eating system that prevents energy deficits, rather than one that merely responds to them once they have developed. From this perspective, it is difficult to imagine how a set-point hunger and feeding system could have evolved in mammals (see Pinel, Assanand, & Lehman, 2000).

• Second, major predictions of the set-point theories of hunger and eating have not been confirmed. Early studies seemed to support the set-point theories by showing that large reductions in body fat, produced by starvation, or large reductions in blood glucose, produced by insulin injections, induce increases in eating in laboratory animals. The problem is that reductions in blood glucose of the magnitude needed to reliably induce eating rarely occur naturally. Indeed, as you have already learned in this chapter, about 65% of U.S. adults have a significant excess of fat deposits when they begin a meal. Conversely, efforts to reduce meal size by having subjects consume a high-calorie drink before eating have been largely unsuccessful; indeed, beliefs about the caloric content of a premeal drink often influence the size of a subsequent meal more than does its actual caloric content (see Lowe, 1993).

• Third, set-point theories of hunger and eating are deficient because they fail to recognize the major influences on hunger and eating of such important factors as taste, learning, and social influences. To convince yourself of the importance of these factors, pause for a minute and imagine the sight, smell, and taste of your favorite food. Perhaps it is a succulent morsel of lobster meat covered with melted garlic butter, a piece of chocolate cheesecake, or a plate of sizzling homemade french fries. Are you starting to feel a bit hungry? If the homemade french fries—my personal weakness—were sitting in front of you right now, wouldn’t you reach out and have one, or maybe the whole plateful? Have you not on occasion felt discomfort after a large main course, only to polish off a substantial dessert? The usual positive answers to these questions lead unavoidably to the conclusion that hunger and eating are not rigidly controlled by deviations from energy set points.

Positive-Incentive Perspective

The inability of set-point theories to account for the basic phenomena of eating and hunger led to the development of an alternative theoretical perspective (see Berridge, 2004). The central assertion of this perspective, commonly referred to as positive-incentive theory, is that humans and other animals are not normally driven to eat by internal energy deficits but are drawn to eat by the anticipated pleasure of eating—the anticipated pleasure of a behavior is called its positive-incentive value (see Bolles, 1980; Booth, 1981; Collier, 1980; Rolls, 1981; Toates, 1981). There are several different positive-incentive theories, and I refer generally to all of them as the positive-incentive perspective.

Evolutionary Perspective

The major tenet of the positive-incentive perspective on eating is that eating is controlled in much the same way as sexual behavior: We engage in sexual behavior not because we have an internal deficit, but because we have evolved to crave it. The evolutionary pressures of unexpected food shortages have shaped us and all other warm-blooded animals, who need a continuous supply of energy to maintain their body temperatures, to take advantage of good food when it is present and eat it. According to the positive-incentive perspective, it is the presence of good food, or the anticipation of it, that normally makes us hungry, not an energy deficit.

According to the positive-incentive perspective, the degree of hunger you feel at any particular time depends on the interaction of all the factors that influence the positive-incentive value of eating (see Palmiter, 2007). These include the following: the flavor of the food you are likely to consume, what you have learned about the effects of this food either from eating it previously or from other people, the amount of time since you last ate, the type and quantity of food in your gut, whether or not other people are present and eating, and whether or not your blood glucose levels are within the normal range. This partial list illustrates one strength of the positive-incentive perspective. Unlike set-point theories, positive-incentive theories do not single out one factor as the major determinant of hunger and ignore the others. Instead, they acknowledge that many factors interact to determine a person’s hunger at any time, and they suggest that this interaction occurs through the influence of these various factors on the positive-incentive value of eating (see Cabanac, 1971).
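To make the idea of "many interacting factors" concrete, here is a deliberately crude toy model. The factors, weights, and 0-10 scale below are invented purely for illustration; positive-incentive theory itself specifies no numerical weights, only that such factors jointly determine the anticipated pleasure of eating.

```python
# Toy illustration of the positive-incentive perspective: hunger as the combined
# effect of many factors on the anticipated pleasure of eating. The factors,
# weights, and scale are invented for illustration and have no empirical status.

def positive_incentive_value(flavor_appeal, learned_liking, hours_since_meal,
                             gut_fullness, others_eating, low_blood_glucose):
    """Return a rough 0-10 'anticipated pleasure of eating' score."""
    score = 0.0
    score += 3.0 * flavor_appeal             # 0-1: how good the available food tastes
    score += 2.0 * learned_liking            # 0-1: prior experience with this food
    score += 0.4 * min(hours_since_meal, 8)  # time since the last meal (capped)
    score -= 3.0 * gut_fullness              # 0-1: food already in the gut
    score += 1.0 * (1.0 if others_eating else 0.0)
    score += 0.5 * (1.0 if low_blood_glucose else 0.0)  # one factor among many, not the trigger
    return max(0.0, min(10.0, score))

# A nearly full stomach plus very appealing food and company still yields some "hunger".
print(positive_incentive_value(0.9, 0.8, 0.5, 0.8, True, False))
```

The qualitative pattern, not the numbers, is what matters: no single factor acts as a trigger, and palatable food or social company can sustain an appetite that an energy-deficit account would not predict.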

In this section, you learned that most people think about hunger and eating in terms of energy set points and were introduced to an alternative way of thinking—the positive-incentive perspective. Which way is correct? If you are like most people, you have an attachment to familiar ways of thinking and a resistance to new ones. Try to put this tendency aside and base your views about this important issue entirely on the evidence.

You have already learned about some of the major weaknesses of strict set-point theories of hunger and eating. The next section describes some of the things that biopsychological research has taught us about hunger and eating. As you progress through the section, notice the superiority of the positive-incentive theories over set-point theories in accounting for the basic facts.

12.3 Factors That Determine What, When, and How Much We Eat

This section describes major factors that commonly determine what we eat, when we eat, and how much we eat. Notice that energy deficits are not included among these factors. Although major energy deficits clearly increase hunger and eating, they are not a common factor in the eating behavior of people like us, who live in food-replete societies. Although you may believe that your body is short of energy just before a meal, it is not. This misconception is one that is addressed in this section. Also, notice how research on nonhumans has played an important role in furthering understanding of human eating.

Factors That Determine What We Eat

Certain tastes have a high positive-incentive value for virtually all members of a species. For example, most humans have a special fondness for sweet, fatty, and salty tastes. This species-typical pattern of human taste preferences is adaptive because in nature sweet and fatty tastes are typically characteristic of high-energy foods that are rich in vitamins and minerals, and salty tastes are characteristic of sodium-rich foods. In contrast, bitter tastes, for which most humans have an aversion, are often associated with toxins. Superimposed on our species-typical taste preferences and aversions, each of us has the ability to learn specific taste preferences and aversions (see Rozin & Shulkin, 1990).

Evolutionary Perspective

Learned Taste Preferences and Aversions

Animals learn to prefer tastes that are followed by an infusion of calories, and they learn to avoid tastes that are followed by illness (e.g., Baker & Booth, 1989; Lucas & Sclafani, 1989; Sclafani, 1990). In addition, humans and other animals learn what to eat from their conspecifics. For example, rats learn to prefer flavors that they experience in mother’s milk and those that they smell on the breath of other rats (see Galef, 1995, 1996; Galef, Whishkin, & Bielavska, 1997). Similarly, in humans, many food preferences are culturally specific—for example, in some cultures, various nontoxic insects are considered to be a delicacy. Galef and Wright (1995) have shown that rats reared in groups, rather than in isolation, are more likely to learn to eat a healthy diet.

Learning to Eat Vitamins and Minerals

How do animals select a diet that provides all of the vitamins and minerals they need? To answer this question, researchers have studied how dietary deficiencies influence diet selection. Two patterns of results have emerged: one for sodium and one for the other essential vitamins and minerals. When an animal is deficient in sodium, it develops an immediate and compelling preference for the taste of sodium salt (see Rowland, 1990). In contrast, an animal that is deficient in some vitamin or mineral other than sodium must learn to consume foods that are rich in the missing nutrient by experiencing their positive effects; this is because vitamins and minerals other than sodium normally have no detectable taste in food. For example, rats maintained on a diet deficient in thiamine (vitamin B1) develop an aversion to the taste of that diet; and if they are offered two new diets, one deficient in thiamine and one rich in thiamine, they often develop a preference for the taste of the thiamine-rich diet over the ensuing days, as it becomes associated with improved health.

If we, like rats, are capable of learning to select diets that are rich in the vitamins and minerals we need, why are dietary deficiencies so prevalent in our society? One reason is that, in order to maximize profits, manufacturers produce foods that have the tastes we prefer but lack many of the nutrients we need to maintain our health. (Even rats prefer chocolate chip cookies to nutritionally complete rat chow.) The second reason is illustrated by the classic study of Harris and associates (1933). When thiamine-deficient rats were offered two new diets, one with thiamine and one without, almost all of them learned to eat the complete diet and avoid the deficient one. However, when they were offered ten new diets, only one of which contained the badly needed thiamine, few developed a preference for the complete diet. The number of different substances, both nutritious and not, consumed each day by most people in industrialized societies is immense, and this makes it difficult, if not impossible, for their bodies to learn which foods are beneficial and which are not.

Thinking Creatively

There is not much about nutrition in this chapter: Although it is critically important to eat a nutritious diet, nutrition seems to have little direct effect on our feelings of hunger. However, while I am on the topic, I would like to direct you to a good source of information about nutrition that could have a positive effect on your health: Some popular books on nutrition are dangerous, and even governments, inordinately influenced by economic considerations and special-interest groups, often do not provide the best nutritional advice (see Nestle, 2003). For sound research-based advice on nutrition, check out an article by Willett and Stampfer (2003) and the book on which it is based, Eat, Drink, and Be Healthy by Willett, Skerrett, and Giovannucci (2001).

Factors That Influence When We Eat

Evolutionary Perspective

Collier and his colleagues (see Collier, 1986) found that most mammals choose to eat many small meals (snacks) each day if they have ready access to a continuous supply of food. Only when there are physical costs involved in initiating meals—for example, having to travel a considerable distance—does an animal opt for a few large meals.

The number of times humans eat each day is influenced by cultural norms, work schedules, family routines, personal preferences, wealth, and a variety of other factors. However, in contrast to the usual mammalian preference, most people, particularly those living in family groups, tend to eat a few large meals each day at regular times. Interestingly, each person’s regular mealtimes are the very same times at which that person is likely to feel most hungry; in fact, many people experience attacks of malaise (headache, nausea, and an inability to concentrate) when they miss a regularly scheduled meal.

Premeal Hunger

I am sure that you have experienced attacks of premeal hunger. Subjectively, they seem to provide compelling support for set-point theories. Your body seems to be crying out: “I need more energy. I cannot function without it. Please feed me.” But things are not always the way they seem. Woods has straightened out the confusion (see Woods, 1991; Woods & Ramsay, 2000; Woods & Strubbe, 1994).

According to Woods, the key to understanding hunger is to appreciate that eating meals stresses the body. Before a meal, the body’s energy reserves are in reasonable homeostatic balance; then, as a meal is consumed, there is a homeostasis-disturbing influx of fuels into the bloodstream. The body does what it can to defend its homeostasis. At the first indication that a person will soon be eating—for example, when the usual mealtime approaches—the body enters the cephalic phase and takes steps to soften the impact of the impending homeostasis-disturbing influx by releasing insulin into the blood and thus reducing blood glucose. Woods’s message is that the strong, unpleasant feelings of hunger that you may experience at mealtimes are not cries from your body for food; they are the sensations of your body’s preparations for the expected homeostasis-disturbing meal. Mealtime hunger is caused by the expectation of food, not by an energy deficit.

Thinking Creatively

As a high school student, I ate lunch at exactly 12:05 every day and was overwhelmed by hunger as the time approached. Now, my eating schedule is different, and I never experience noontime hunger pangs; I now get hungry just before the time at which I usually eat. Have you had a similar experience?

Pavlovian Conditioning of Hunger

In a classic series of Pavlovian conditioning experiments on laboratory rats, Weingarten (1983, 1984, 1985) provided strong support for the view that hunger is often caused by the expectation of food, not by an energy deficit. During the conditioning phase of one of his experiments, Weingarten presented rats with six meals per day at irregular intervals, and he signaled the impending delivery of each meal with a buzzer-and-light conditional stimulus. This conditioning procedure was continued for 11 days. Throughout the ensuing test phase of the experiment, the food was continuously available. Despite the fact that the subjects were never deprived during the test phase, the rats started to eat each time the buzzer and light were presented—even if they had recently completed a meal.

Factors That Influence How Much We Eat

The motivational state that causes us to stop eating a meal when there is food remaining is satiety. Satiety mechanisms play a major role in determining how much we eat.

Satiety Signals

As you will learn in the next section of the chapter, food in the gut and glucose entering the blood can induce satiety signals, which inhibit subsequent consumption. These signals depend on both the volume and the nutritive density (calories per unit volume) of the food.

Evolutionary Perspective

The effects of nutritive density have been demonstrated in studies in which laboratory rats have been maintained on a single diet. Once a stable baseline of consumption has been established, the nutritive density of the diet is changed. Some rats learn to adjust the volume of food they consume to keep their caloric intake and body weights relatively stable. However, there are major limits to this adjustment: Rats rarely increase their intake sufficiently to maintain their body weights if the nutritive density of their conventional laboratory feed is reduced by more than 50% or if there are major changes in the diet’s palatability.
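The compensation described above amounts to eating a volume of roughly (target calories divided by nutritive density), up to some ceiling. The sketch below illustrates that relationship with invented numbers; the chapter reports only that compensation tends to break down when density falls by more than about 50% or palatability changes markedly.

```python
# Hypothetical sketch of volume adjustment to diet dilution. Values are invented;
# the text reports only that rats compensate until density falls by more than ~50%.

BASELINE_DENSITY = 3.5  # kcal per gram of standard chow (assumed)
TARGET_KCAL = 70.0      # daily caloric intake the rat defends (assumed)
MAX_VOLUME = 40.0       # grams per day the rat is able or willing to eat (assumed)

def daily_intake_grams(density_kcal_per_g):
    """Volume eaten: enough to hold calories constant, capped at a maximum."""
    return min(TARGET_KCAL / density_kcal_per_g, MAX_VOLUME)

for dilution in (0.0, 0.25, 0.5, 0.75):
    density = BASELINE_DENSITY * (1 - dilution)
    grams = daily_intake_grams(density)
    print(f"dilution {dilution:.0%}: eats {grams:4.1f} g -> {grams * density:5.1f} kcal")
```

In this toy version, caloric intake is defended up to a 50% dilution and then falls, mirroring the limit on compensation described in the studies above.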

Sham Eating

The study of sham eating indicates that satiety signals from the gut or blood are not necessary to terminate a meal. In sham-eating experiments, food is chewed and swallowed by the subject; but rather than passing down the subject’s esophagus into the stomach, it passes out of the body through an implanted tube (see Figure 12.5).

FIGURE 12.5 The sham-eating preparation.

Because sham eating adds no energy to the body, set-point theories predict that all sham-eaten meals should be huge. But this is not the case. Weingarten and Kulikovsky (1989) sham fed rats one of two differently flavored diets: one that the rats had naturally eaten many times before and one that they had never eaten before. The first sham meal of the rats that had previously eaten the diet was the same size as the previously eaten meals of that diet; then, on ensuing days they began to sham eat more and more (see Figure 12.6). In contrast, the rats that were presented with the unfamiliar diet sham ate large quantities right from the start. Weingarten and Kulikovsky concluded that the amount we eat is influenced largely by our previous experience with the particular food’s physiological effects, not by the immediate effect of the food on the body.

FIGURE 12.6 Change in the magnitude of sham eating over repeated sham-eating trials. The rats in one group sham ate the same diet they had eaten before the sham-eating phase; the rats in another group sham ate a diet different from the one they had previously eaten. (Based on Weingarten, 1990.)

Appetizer Effect and Satiety

The next time you attend a dinner party, you may experience a major weakness of the set-point theory of satiety.

Thinking Creatively

If appetizers are served, you will notice that small amounts of food consumed before a meal actually increase hunger rather than reduce it. This is the appetizer effect. Presumably, it occurs because the consumption of a small amount of food is particularly effective in eliciting cephalic-phase responses.

Serving Size and Satiety

Many experiments have shown that the amount of consumption is influenced by serving size (Geier, Rozin, & Doros, 2006). The larger the servings, the more we tend to eat. There is even evidence that we tend to eat more when we eat with larger spoons.

Social Influences and Satiety

Feelings of satiety may also depend on whether we are eating alone or with others. Redd and de Castro (1992) found that their subjects consumed 60% more when eating with others. Laboratory rats also eat substantially more when fed in groups.

In humans, social factors have also been shown to reduce consumption. Many people eat less than they would like in order to achieve their society’s ideal of slenderness, and others refrain from eating large amounts in front of others so as not to appear gluttonous. Unfortunately, in our culture, females are influenced by such pressures more than males are, and, as you will learn later in the chapter, some develop serious eating disorders as a result.

Sensory-Specific Satiety

The number of different tastes available at each meal has a major effect on meal size. For example, the effect of offering a laboratory rat a varied diet of highly palatable foods—a cafeteria diet—is dramatic. Adult rats that were offered bread and chocolate in addition to their usual laboratory diet increased their average intake of calories by 84%, and after 120 days they had increased their average body weights by 49% (Rogers & Blundell, 1980). The spectacular effects of cafeteria diets on consumption and body weight clearly run counter to the idea that satiety is rigidly controlled by internal energy set points.

The effect on meal size of cafeteria diets results from the fact that satiety is to a large degree sensory-specific. As you eat one food, the positive-incentive value of all foods declines slightly, but the positive-incentive value of that particular food plummets. As a result, you soon become satiated on that food and stop eating it. However, if another food is offered to you, you will often begin eating again.

In one study of sensory-specific satiety (Rolls et al., 1981), human subjects were asked to rate the palatability of eight different foods, and then they ate a meal of one of them. After the meal, they were asked to rate the palatability of the eight foods once again, and it was found that their rating of the food they had just eaten had declined substantially more than had their ratings of the other seven foods. Moreover, when the subjects were offered an unexpected second meal, they consumed most of it unless it was the same as the first.

Booth (1981) asked subjects to rate the momentary pleasure produced by the flavor, the smell, the sight, or just the thought of various foods at different times after consuming a large, high-calorie, high-carbohydrate liquid meal. There was an immediate sensory-specific decrease in the palatability of foods of the same or similar flavor as soon as the liquid meal was consumed. This was followed by a general decrease in the palatability of all substances about 30 minutes later. Thus, it appears that signals from taste receptors produce an immediate decline in the positive-incentive value of similar tastes and that signals associated with the postingestive consequences of eating produce a general decrease in the positive-incentive value of all foods.

Rolls (1990) suggested that sensory-specific satiety has two kinds of effects: relatively brief effects that influence the selection of foods within a single meal, and relatively enduring effects that influence the selection of foods from meal to meal. Some foods seem to be relatively immune to long-lasting sensory-specific satiety; foods such as rice, bread, potatoes, sweets, and green salads can be eaten almost every day with only a slight decline in their palatability (Rolls, 1986).

Evolutionary Perspective

The phenomenon of sensory-specific satiety has two adaptive consequences. First, it encourages the consumption of a varied diet. If there were no sensory-specific satiety, a person would tend to eat her or his preferred food and nothing else, and the result would be malnutrition. Second, sensory-specific satiety encourages animals that have access to a variety of foods to eat a lot; an animal that has eaten its fill of one food will often begin eating again if it encounters a different one (Raynor & Epstein, 2001). This encourages animals to take full advantage of times of abundance, which are all too rare in nature.

Thinking Creatively

This section has introduced you to several important properties of hunger and eating. How many support the set-point assumption, and how many are inconsistent with it?

Scan Your Brain

Are you ready to move on to the discussion of the physiology of hunger and satiety in the following section? Find out by completing the following sentences with the most appropriate terms. The correct answers are provided at the end of the exercise. Before proceeding, review material related to your incorrect answers and omissions.

1. The primary function of the ______ is to serve as a storage reservoir for undigested food.

2. Most of the absorption of nutrients into the body takes place through the wall of the ______, or upper intestine.

3. The phase of energy metabolism that is triggered by the expectation of food is the ______ phase.

4. During the absorptive phase, the pancreas releases a great deal of ______ into the bloodstream.

5. During the fasting phase, the primary fuels of the body are ______.

6. During the fasting phase, the primary fuel of the brain is ______.

7. The three components of a set-point system are a set-point mechanism, a detector, and an ______.

8. The theory that hunger and satiety are regulated by a blood glucose set point is the ______ theory.

9. Evidence suggests that hunger is greatly influenced by the current ______ value of food.

10. Most humans have a preference for sweet, fatty, and ______ tastes.

11. There are two mechanisms by which we learn to eat diets containing essential vitamins and minerals: one mechanism for ______ and another mechanism for the rest.

12. Satiety that is specific to the particular foods that produce it is called ______ satiety.

Scan Your Brain answers: (1) stomach, (2) duodenum, (3) cephalic, (4) insulin, (5) free fatty acids, (6) glucose, (7) effector, (8) glucostatic, (9) positive-incentive, (10) salty, (11) sodium, (12) sensory-specific.

12.4 Physiological Research on Hunger and Satiety

Now that you have been introduced to set-point theories, the positive-incentive perspective, and some basic factors that affect why, when, and how much we eat, this section introduces you to five prominent lines of research on the physiology of hunger and satiety.

Role of Blood Glucose Levels in Hunger and Satiety

As I have already explained, efforts to link blood glucose levels to eating have been largely unsuccessful. However, there was a renewed interest in the role of glucose in the regulation of eating in the 1990s, following the development of methods of continually monitoring blood glucose levels. In the classic experiment of Campfield and Smith (1990), rats were housed individually, with free access to a mixed diet and water, and their blood glucose levels were continually monitored via a chronic intravenous catheter (i.e., a hypodermic needle located in a vein). In this situation, baseline blood glucose levels rarely fluctuated more than 2%. However, about 10 minutes before a meal was initiated, the levels suddenly dropped about 8% (see Figure 12.7).

FIGURE 12.7 The meal-related changes in blood glucose levels observed by Campfield and Smith (1990).
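As a minimal sketch of how such a premeal dip might be flagged in a continuous glucose record, the snippet below uses the magnitudes reported above (baseline fluctuation under 2%, a premeal drop of roughly 8%) as thresholds; the sample readings, sampling interval, and units are invented for illustration.

```python
# Sketch of flagging a premeal dip in a continuous blood-glucose record.
# Thresholds follow the magnitudes reported in the text (baseline noise < 2%,
# premeal drop of ~8%); the sample data and sampling interval are invented.

def find_premeal_dips(readings, baseline, drop_threshold=0.05):
    """Return indices where glucose falls more than drop_threshold below baseline."""
    return [i for i, value in enumerate(readings)
            if (baseline - value) / baseline > drop_threshold]

baseline = 100.0  # arbitrary units; baseline varies by less than ~2% in the study
readings = [100, 101, 99, 100, 98, 93, 92, 91, 95, 100]  # dip, then meal and recovery

print(find_premeal_dips(readings, baseline))  # -> [5, 6, 7], the ~8% premeal dip
```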

Do the observed reductions in blood glucose before a meal lend support to the glucostatic theory of hunger? I think not, for five reasons:

• It is a simple matter to construct a situation in which drops in blood glucose levels do not precede eating (e.g., Strubbe & Steffens, 1977)—for example, by unexpectedly serving a food with a high positive-incentive value.

• The usual premeal decreases in blood glucose seem to be a response to the intention to start eating, not the other way round. The premeal decreases in blood glucose are typically preceded by increases in blood insulin levels, which indicates that the decreases do not reflect gradually declining energy reserves but are actively produced by an increase in blood levels of insulin (see Figure 12.7).

• If an expected meal is not served, blood glucose levels soon return to their previous homeostatic level.

• The glucose levels in the extracellular fluids that surround CNS neurons stay relatively constant, even when blood glucose levels drop (see Seeley & Woods, 2003).

• Injections of insulin do not reliably induce eating unless the injections are sufficiently great to reduce blood glucose levels by 50% (see Rowland, 1981), and large premeal infusions of glucose do not suppress eating (see Geiselman, 1987).

Myth of Hypothalamic Hunger and Satiety Centers

In the 1950s, experiments on rats seemed to suggest that eating behavior is controlled by two different regions of the hypothalamus: satiety by the ventromedial hypothalamus (VMH) and feeding by the lateral hypothalamus (LH)—see Figure 12.8. This theory turned out to be wrong, but it stimulated several important discoveries.

FIGURE 12.8 The locations in the rat brain of the ventromedial hypothalamus and the lateral hypothalamus.

VMH Satiety Center

In 1940, it was discovered that large bilateral electrolytic lesions to the ventromedial hypothalamus produce hyperphagia (excessive eating) and extreme obesity in rats (Hetherington & Ranson, 1940). This VMH syndrome has two different phases: dynamic and static. The dynamic phase, which begins as soon as the subject regains consciousness after the operation, is characterized by several weeks of grossly excessive eating and rapid weight gain. However, after that, consumption gradually declines to a level that is just sufficient to maintain a stable level of obesity; this marks the beginning of the static phase. Figure 12.9 illustrates the weight gain and food intake of an adult rat with bilateral VMH lesions.

The most important feature of the static phase of the VMH syndrome is that the animal maintains its new body weight. If a rat in the static phase is deprived of food until it has lost a substantial amount of weight, it will regain the lost weight once the deprivation ends; conversely, if it is made to gain weight by forced feeding, it will lose the excess weight once the forced feeding is curtailed.

Paradoxically, despite their prodigious levels of consumption, VMH-lesioned rats in some ways seem less hungry than unlesioned controls. Although VMH-lesioned rats eat much more than normal rats when palatable food is readily available, they are less willing to work for it (Teitelbaum, 1957) or to consume it if it is slightly unpalatable (Miller, Bailey, & Stevenson, 1950). Weingarten, Chang, and Jarvie (1983) showed that the finicky eating of VMH-lesioned rats is a consequence of their obesity, not a primary effect of their lesion; they are no less likely to consume unpalatable food than are unlesioned rats of equal obesity.

LH Feeding Center

In 1951, Anand and Brobeck reported that bilateral electrolytic lesions to the lateral hypothalamus produce aphagia—a complete cessation of eating. Even rats that were first made hyperphagic by VMH lesions were rendered aphagic by the addition of LH lesions. Anand and Brobeck concluded that the lateral region of the hypothalamus is a feeding center. Teitelbaum and Epstein (1962) subsequently discovered two important features of the LH syndrome. First, they found that the aphagia was accompanied by adipsia—a complete cessation of drinking. Second, they found that LH-lesioned rats partially recover if they are kept alive by tube feeding: at first, the rats will eat only wet, palatable foods, such as chocolate chip cookies soaked in milk, but eventually they will eat dry food pellets if water is concurrently available.

Reinterpretation of the Effects of VMH and LH Lesions

Thinking Creatively

The theory that the VMH is a satiety center crumbled in the face of two lines of evidence. One of these lines showed that the primary role of the hypothalamus is the regulation of energy metabolism, not the regulation of eating. The initial interpretation was that VMH-lesioned animals become obese because they overeat; however, the evidence suggests the converse—that they overeat because they become obese. Bilateral VMH lesions increase blood insulin levels, which in turn increases lipogenesis (the production of body fat) and decreases lipolysis (the breakdown of body fat to utilizable forms of energy)—see Powley et al. (1980). Because the calories ingested by VMH-lesioned rats are converted to fat at a high rate, the rats must keep eating to ensure that they have enough calories in their blood to meet their immediate energy requirements (e.g., Hustvedt & Løvø, 1972); they are like misers who run to the bank each time they make a bit of money and deposit it in a savings account from which withdrawals cannot be made.

FIGURE 12.9 Postoperative hyperphagia and obesity in a rat with bilateral VMH lesions. (Based on Teitelbaum, 1961.)

The second line of evidence that undermined the theory of a VMH satiety center has shown that many of the effects of VMH lesions are not attributable to VMH damage. A large fiber bundle, the ventral noradrenergic bundle, courses past the VMH and is thus inevitably damaged by large electrolytic VMH lesions; in particular, fibers that project from the nearby paraventricular nuclei of the hypothalamus are damaged (see Figure 12.10). Bilateral lesions of the noradrenergic bundle (e.g., Gold et al., 1977) or the paraventricular nuclei (Leibowitz, Hammer, & Chang, 1981) produce hyperphagia and obesity, just as VMH lesions do.

Most of the evidence against the notion that the LH is a feeding center has come from a thorough analysis of the effects of bilateral LH lesions. Early research focused exclusively on the aphagia and adipsia that are produced by LH lesions, but subsequent research has shown that LH lesions produce a wide range of severe motor disturbances and a general lack of responsiveness to sensory input (of which food and drink are but two examples). Consequently, the idea that the LH is a center specifically dedicated to feeding no longer warrants serious consideration.

FIGURE 12.10 Location of the paraventricular nucleus in the rat hypothalamus. Note that the section through the hypothalamus is slightly different than the one in Figure 12.8.

Role of the Gastrointestinal Tract in Satiety

One of the most influential early studies of hunger was published by Cannon and Washburn in 1912. It was a perfect collaboration: Cannon had the ideas, and Washburn had the ability to swallow a balloon. First, Washburn swallowed an empty balloon tied to the end of a thin tube. Then, Cannon pumped some air into the balloon and connected the end of the tube to a water-filled glass U-tube so that Washburn’s stomach contractions produced a momentary increase in the level of the water at the other end of the U-tube. Washburn reported a “pang” of hunger each time that a large stomach contraction was recorded (see Figure 12.11).

FIGURE 12.11 The system developed by Cannon and Washburn in 1912 for measuring stomach contractions. They found that large stomach contractions were related to pangs of hunger.

Cannon and Washburn’s finding led to the theory that hunger is the feeling of contractions caused by an empty stomach, whereas satiety is the feeling of stomach distention. However, support for this theory and interest in the role of the gastrointestinal tract in hunger and satiety quickly waned with the discovery that human patients whose stomach had been surgically removed and whose esophagus had been hooked up directly to their duodenum (the first segment of the small intestine, which normally carries food away from the stomach) continued to report feelings of hunger and satiety and continued to maintain their normal body weight by eating more meals of smaller size.

In the 1980s, there was a resurgence of interest in the role of the gastrointestinal tract in eating. It was stimulated by a series of experiments that indicated that the gastrointestinal tract is the source of satiety signals. For example, Koopmans (1981) transplanted an extra stomach and length of intestine into rats and then joined the major arteries and veins of the implants to the recipients’ circulatory systems (see Figure 12.12). Koopmans found that food injected into the transplanted stomach and kept there by a noose around the pyloric sphincter decreased eating in proportion to both its caloric content and volume. Because the transplanted stomach had no functional nerves, the gastrointestinal satiety signal had to be reaching the brain through the blood. And because nutrients are not absorbed from the stomach, the bloodborne satiety signal could not have been a nutrient. It had to be some chemical or chemicals that were released from the stomach in response to the caloric value and volume of the food—which leads us nicely into the next subsection.

Hunger and Satiety Peptides

Evolutionary Perspective

Soon after the discovery that the stomach and other parts of the gastrointestinal tract release chemical signals to the brain, evidence began to accumulate that these chemicals were peptides, short chains of amino acids that can function as hormones and neurotransmitters (see Fukuhara et al., 2005). Ingested food interacts with receptors in the gastrointestinal tract and in so doing causes the tract to release peptides into the bloodstream. In 1973, Gibbs, Young, and Smith injected one of these gut peptides, cholecystokinin (CCK), into hungry rats and found that they ate smaller meals. This led to the hypothesis that circulating gut peptides provide the brain with information about the quantity and nature of food in the gastrointestinal tract and that this information plays a role in satiety (see Badman & Flier, 2005; Flier, 2006).

There has been considerable support for the hypothesis that peptides can function as satiety signals (see Gao & Horvath, 2007; Ritter, 2004). Several gut peptides have been shown to bind to receptors in the brain, particularly in areas of the hypothalamus involved in energy metabolism, and a dozen or so (e.g., CCK, bombesin, glucagon, alpha-melanocyte-stimulating hormone, and somatostatin) have been reported to reduce food intake (see Batterham et al., 2006; Zhang et al., 2005). These have become known as satiety peptides (peptides that decrease appetite).

FIGURE 12.12 Transplantation of an extra stomach and length of intestine in a rat. Koopmans (1981) implanted an extra stomach and length of intestine in each of his experimental subjects. He then connected the major blood vessels of the implanted stomachs to the circulatory systems of the recipients. Food injected into the extra stomach and kept there by a noose around the pyloric sphincter decreased eating in proportion to its volume and caloric value.

In studying the appetite-reducing effects of peptides, researchers had to rule out the possibility that these effects are merely the consequence of illness (see Moran, 2004). Indeed, there is evidence that one peptide in particular, CCK, induces illness: CCK administered to rats after they have eaten an unfamiliar substance induces a conditioned taste aversion for that substance, and CCK induces nausea in human subjects. However, CCK reduces appetite and eating at doses substantially below those that are required to induce taste aversion in rats, and thus it qualifies as a legitimate satiety peptide.

Several hunger peptides (peptides that increase appetite) have also been discovered. These peptides tend to be synthesized in the brain, particularly in the hypothalamus. The most widely studied of these are neuropeptide Y, galanin, orexin-A, and ghrelin (e.g., Baird, Gray, & Fischer, 2006; Olszewski, Schiöth & Levine, 2008; Williams et al., 2004).

The discovery of the hunger and satiety peptides has had two major effects on the search for the neural mechanisms of hunger and satiety. First, the sheer number of these hunger and satiety peptides indicates that the neural system that controls eating likely reacts to many different signals (Nogueiras & Tschöp, 2005; Schwartz & Azzara, 2004), not just to one or two (e.g., not just to glucose and fat). Second, the discovery that many of the hunger and satiety peptides have receptors in the hypothalamus has renewed interest in the role of the hypothalamus in hunger and eating (Gao & Horvath, 2007; Lam, Schwartz, & Rossetti, 2006; Luquet et al., 2005). This interest was further stimulated by the discovery that microinjection of gut peptides into some sites in the hypothalamus can have major effects on eating. Still, there is a general acceptance that hypothalamic circuits are only one part of a much larger system (see Berthoud & Morrison, 2008; Cone, 2005).

Serotonin and Satiety

Evolutionary Perspective

The monoaminergic neurotransmitter serotonin is another chemical that plays a role in satiety. The initial evidence for this role came from a line of research in rats. In these studies, serotonin-produced satiety was found to have three major properties (see Blundell & Halford, 1998):

• It caused the rats to resist the powerful attraction of highly palatable cafeteria diets.

• It reduced the amount of food that was consumed during each meal rather than reducing the number of meals (see Clifton, 2000).

• It was associated with a shift in food preferences away from fatty foods.

This profile of effects suggested that serotonin might be useful in combating obesity in humans. Indeed, serotonin agonists (e.g., fenfluramine, dexfenfluramine, fluoxetine) have been shown to reduce hunger, eating, and body weight under some conditions (see Blundell & Halford, 1998). Later in this chapter, you will learn about the use of serotonin to treat human obesity (see De Vry & Schreiber, 2000).

Prader-Willi Syndrome: Patients with Insatiable Hunger

Prader-Willi syndrome could prove critical in the discovery of the neural mechanisms of hunger and satiety (Goldstone, 2004). Individuals with Prader-Willi syndrome, which results from an accident of chromosomal replication, experience insatiable hunger, little or no satiety, and an exceptionally slow metabolism. In short, the Prader-Willi patient acts as though he or she is starving. Other common physical and neurological symptoms include weak muscles, small hands and feet, feeding difficulties in infancy, tantrums, compulsivity, and skin picking. If untreated, most patients become extremely obese, and they often die in early adulthood from diabetes, heart disease, or other obesity-related disorders. Some have even died from gorging until their stomachs split open. Fortunately, Miss A. was diagnosed in infancy and received excellent care, which kept her from becoming obese (Martin et al., 1998).

Prader-Willi Syndrome: The Case of Miss A.

Clinical Implications

Miss A. was born with little muscle tone. Because her sucking reflex was so weak, she was tube fed. By the time she was 2 years old, her hypotonia (below-normal muscle tone) had resolved itself, but a number of characteristic deformities and developmental delays began to appear.

At 3½ years of age, Miss A. suddenly began to display a voracious appetite and quickly gained weight. Fortunately, her family maintained her on a low-calorie diet and kept all food locked away.

Miss A. is moderately retarded, and she suffers from psychiatric problems. Her major problem is her tendency to have tantrums any time anything changes in her environment (e.g., a substitute teacher at school). Thanks largely to her family and pediatrician, she has received excellent care, which has minimized the complications that arise with Prader-Willi syndrome—most notably those related to obesity and its pathological effects.

Although the study of Prader-Willi syndrome has yet to provide any direct evidence about the neural mechanisms of hunger and eating, there has been a marked surge in its investigation. This increase has been stimulated by the recent identification of the genetic cause of the condition: an accident of reproduction that deletes or disrupts a section of chromosome 15 coming from the father. This information has provided clues about genetic factors in appetite.

12.5 Body Weight Regulation: Set Points versus Settling Points

One apparent strength of set-point theories of eating is that they seem to explain body weight regulation. You have already learned that set-point theories are largely inconsistent with the facts of eating, but how well do they account for the regulation of body weight? Certainly, many people in our culture believe that body weight is regulated by a body-fat set point (Assanand, Pinel, & Lehman, 1998a, 1998b). They believe that when fat deposits fall below a person's set point, the person becomes hungrier and eats more, returning body-fat levels to the set point; conversely, when fat deposits rise above the set point, the person becomes less hungry and eats less, again returning body fat to the set point.

Set-Point Assumptions about Body Weight and Eating

Let's begin by looking at three lines of evidence that challenge fundamental aspects of many set-point theories of body weight regulation.

Variability of Body Weight

The set-point model was expressly designed to explain why adult body weights remain constant. Indeed, a set-point mechanism should make it virtually impossible for an adult to gain or lose large amounts of weight. Yet, many adults experience large and lasting changes in body weight (see Booth, 2004). Moreover, set-point thinking crumbles in the face of the epidemic of obesity that is currently sweeping fast-food societies (Rosenheck, 2008).

Set-point theories of body weight regulation suggest that the best method of maintaining a constant body weight is to eat each time there is a motivation to eat, because, according to the theory, the main function of hunger is to defend the set point. However, many people avoid obesity only by resisting their urges to eat.

Set Points and Health

One implication of set-point theories of body weight regulation is that each person’s set point is optimal for that person’s health—or at least not incompatible with good health. This is why popular psychologists commonly advise people to “listen to the wisdom of their bodies” and eat as much as they need to satisfy their hunger. Experimental results indicate that this common prescription for good health could not be further from the truth.

Two kinds of evidence suggest that typical ad libitum (free-feeding) levels of consumption are unhealthy (see Brownell & Rodin, 1994). First are the results of studies of humans who consume fewer calories than others. For example, people living on the Japanese island of Okinawa seemed to eat so few calories that their eating habits became a concern of health officials. When the health officials took a closer look, here is what they found (see Kagawa, 1978). Adult Okinawans were found to consume, on average, 20% fewer calories than other adult Japanese, and Okinawan school children were found to consume 38% fewer calories than recommended by public health officials. It was somewhat surprising then that rates of morbidity and mortality and of all aging-related diseases were found to be substantially lower in Okinawa than in other parts of Japan, a country in which overall levels of caloric intake and obesity are far below Western norms. For example, the death rates from stroke, cancer, and heart disease in Okinawa were only 59%, 69%, and 59%, respectively, of those in the rest of Japan. Indeed, the proportion of Okinawans living to be over 100 years of age was up to 40 times greater than that of inhabitants of various other regions of Japan.

Thinking Creatively

The Okinawan study and the other studies that have reported major health benefits in humans who eat less (e.g., Manson et al., 1995; Meyer et al., 2006; Walford & Walford, 1994) are not controlled experiments; therefore, they must be interpreted with caution. For example, perhaps it is not simply the consumption of fewer calories that leads to health and longevity; perhaps in some cultures people who eat less tend to eat healthier diets.

Evolutionary Perspective

Evolutionary Perspective

Controlled experimental demonstrations in over a dozen different mammalian species, including monkeys (see Coleman et al., 2009), of the beneficial effects of calorie restriction constitute the second kind of evidence that ad libitum levels of consumption are unhealthy. Fortunately, the results of such controlled experiments do not present the same problems of interpretation as do the findings of the Okinawa study and other similar correlational studies in humans. In typical calorie-restriction experiments, one group of subjects is allowed to eat as much as they choose, while other groups of subjects have their caloric intake of the same diets substantially reduced (by between 25% and 65% in various studies). Results of such experiments have been remarkably consistent (see Bucci, 1992; Masoro, 1988; Weindruch, 1996; Weindruch & Walford, 1988): In experiment after experiment, substantial reductions in the caloric intake of balanced diets have improved numerous indices of health and increased longevity. For example, in one experiment (Weindruch et al., 1986), groups of mice had their caloric intake of a well-balanced commercial diet reduced by either 25%, 55%, or 65% after weaning. All levels of dietary restriction substantially improved health and increased longevity, but the benefits were greatest in the mice whose intake was reduced the most. Those mice that consumed the least had the lowest incidence of cancer, the best immune responses, and the greatest maximum life span—they lived 67% longer than mice that ate as much as they liked. Evidence suggests that dietary restriction can have beneficial effects even if it is not initiated until later in life (Mair et al., 2003; Vaupel, Carey, & Christensen, 2003).

One important point about the results of the calorie-restriction experiments is that the health benefits of the restricted diets may not be entirely attributable to loss of body fat (see Weindruch, 1996). In some dietary restriction studies, the health of subjects has improved even though they did not reduce their body fat, and there are often no significant correlations between amount of weight loss and improvements in health. This suggests that excessive energy consumption, independent of fat accumulation, may accelerate aging, with all its attendant health problems (Lane, Ingram, & Roth, 2002; Prolla & Mattson, 2001).

Thinking Creatively

Remarkably, there is evidence that dietary restriction can be used to treat some neurological conditions. Caloric restriction has been shown to reduce seizure susceptibility in human epileptics (see Maalouf, Rho, & Mattson, 2008) and to improve memory in the elderly (Witte et al., 2009). Please stop and think about the implications of all these findings about calorie restriction. How much do you eat?

Regulation of Body Weight by Changes in the Efficiency of Energy Utilization

Implicit in many set-point theories is the premise that body weight is largely a function of how much a person eats. Of course, how much someone eats plays a role in his or her body weight, but it is now clear that the body controls its fat levels, to a large degree, by changing the efficiency with which it uses energy. As a person’s level of body fat declines, that person starts to use energy resources more efficiently, which limits further weight loss (see Martin, White, & Hulsey, 1991); conversely, weight gain is limited by a progressive decrease in the efficiency of energy utilization. Rothwell and Stock (1982) created a group of obese rats by maintaining them on a cafeteria diet, and they found that the resting level of energy expenditure in these obese rats was 45% greater than in control rats.

This point is illustrated by the progressively declining effectiveness of weight-loss programs. Initially, low-calorie diets produce substantial weight loss. But the rate of weight loss diminishes with each successive week on the diet, until an equilibrium is achieved and little or no further weight loss occurs. Most dieters are familiar with this disappointing trend. A similar effect occurs with weight-gain programs (see Figure 12.13 on page 316).

The mechanism by which the body adjusts the efficiency of its energy utilization in response to its levels of body fat has been termed diet-induced thermogenesis. Increases in the levels of body fat produce increases in body temperature, which require additional energy to maintain them—and decreases in the level of body fat have the opposite effects (see Lazar, 2008).

There are major differences among humans both in basal metabolic rate (the rate at which energy is utilized to maintain bodily processes when resting) and in the ability to adjust the metabolic rate in response to changes in the levels of body fat. We all know people who remain slim even though they eat gluttonously. However, the research on calorie-restricted diets suggests that these people may not eat with impunity: There may be a health cost to pay for overeating even in the absence of obesity.

Set Points and Settling Points in Weight Control

FIGURE 12.13 The diminishing effects on body weight of a low-calorie diet and a high-calorie diet.

The theory that eating is part of a system designed to defend a body-fat set point has long had its critics (see Booth, Fuller, & Lewis, 1981; Wirtshafter & Davis, 1977), but for many years their arguments were largely ignored and the set-point assumption ruled. This situation has been changing: Several prominent reviews of research on hunger and weight regulation generally acknowledge that a strict set-point model cannot account for the facts of weight regulation, and they argue for a more flexible model (see Berthoud, 2002; Mercer & Speakman, 2001; Woods et al., 2000). Because the body-fat set-point model still dominates the thinking of many people, I want to review the main advantages of an alternative and more flexible regulatory model: the settling-point model. Can you change your thinking?

Thinking Creatively

According to the settling-point model, body weight tends to drift around a natural settling point—the level at which the various factors that influence body weight achieve an equilibrium. The idea is that as body-fat levels increase, changes occur that tend to limit further increases until a balance is achieved between all factors that encourage weight gain and all those that discourage it.

The settling-point model provides a loose kind of homeostatic regulation, without a set-point mechanism or mechanisms to return body weight to a set point. According to the settling-point model, body weight remains stable as long as there are no long-term changes in the factors that influence it; and if there are such changes, their impact is limited by negative feedback. In the settling-point model, the negative feedback merely limits further changes in the same direction, whereas in the set-point model, negative feedback triggers a return to the set point. A neuron’s resting potential is a well-known biological settling point—see Chapter 4.

 Simulate

Leaky Barrel

www.mypsychlab.com

The seductiveness of the set-point mechanism is attributable in no small part to the existence of the thermostat model, which provides a vivid means of thinking about it. Figure 12.14 presents an analogy I like to use to think about the settling-point mechanism. I call it the leaky-barrel model: (1) The amount of water entering the hose is analogous to the amount of food available to the subject; (2) the water pressure at the nozzle is analogous to the positive-incentive value of the available food; (3) the amount of water entering the barrel is analogous to the amount of energy consumed; (4) the water level in the barrel is analogous to the level of body fat; (5) the amount of water leaking from the barrel is analogous to the amount of energy being expended; and (6) the weight of the barrel on the hose is analogous to the strength of the satiety signal.

The main advantage of the settling-point model of body weight regulation over the body-fat set-point model is that it is more consistent with the data. Another advantage is that in those cases in which both models make the same prediction, the settling-point model does so more parsimoniously—that is, with a simpler mechanism that requires fewer assumptions. Let’s use the leaky-barrel analogy to see how the two models account for four key facts of weight regulation.

• Body weight remains relatively constant in many adult animals. On the basis of this fact, it has been argued that body fat must be regulated around a set point. However, constant body weight does not require, or even imply, a set point. Consider the leaky-barrel model. As water from the tap begins to fill the barrel, the weight of the water in the barrel increases. This increases the amount of water leaking out of the barrel and decreases the amount of water entering the barrel by increasing the pressure of the barrel on the hose. Eventually, this system settles into an equilibrium where the water level stays constant; but because this level is neither predetermined nor actively defended, it is a settling point, not a set point.

FIGURE 12.14 The leaky-barrel model: a settling-point model of eating and body weight homeostasis.

• Many adult animals experience enduring changes in body weight. Set-point systems are designed to maintain internal constancy in the face of fluctuations of the external environment. Thus, the fact that many adult animals experience long-term changes in body weight is a strong argument against the set-point model. In contrast, the settling-point model predicts that when there is an enduring change in one of the parameters that affect body weight—for example, a major increase in the positive-incentive value of available food—body weight will drift to a new settling point.

• If a subject’s intake of food is reduced, metabolic changes that limit the loss of weight occur; the opposite happens when the subject overeats. This fact is often cited as evidence for set-point regulation of body weight; however, because the metabolic changes merely limit further weight changes rather than eliminating those that have occurred, they are more consistent with a settling-point model. For example, when water intake in the leaky-barrel model is reduced, the water level in the barrel begins to drop; but the drop is limited by a decrease in leakage and an increase in inflow attributable to the falling water pressure in the barrel. Eventually, a new settling point is achieved, but the reduction in water level is not as great as one might expect because of the loss-limiting changes.

• After an individual has lost a substantial amount of weight (by dieting, exercise, or the surgical removal of fat), there is a tendency for the original weight to be regained once the subject returns to the previous eating- and energy-related lifestyle. Although this finding is often offered as irrefutable evidence of a body-weight set point, the settling-point model readily accounts for it, as the numerical sketch below illustrates. When the water level in the leaky-barrel model is reduced—by temporarily decreasing input (dieting), by temporarily increasing output (exercising), or by scooping out some of the water (surgical removal of fat)—only a temporary drop in the settling point is produced. When the original conditions are reinstated, the water level inexorably drifts back to the original settling point.
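To make the leaky-barrel logic concrete, here is a minimal numerical sketch in Python. It is an illustration, not a model taken from the text: the parameter names and values are my own assumptions. Inflow is treated as the positive-incentive value minus a "satiety" pressure that grows with the water level, and leakage grows with the level; nothing in the code stores a target level, yet the level settles, drops under a temporary "diet," and drifts back when the original conditions are restored.

def simulate(steps, incentive, satiety_gain=0.05, leak_gain=0.05, level=0.0):
    """Track the water level of the leaky barrel over time (illustrative values only)."""
    history = []
    for _ in range(steps):
        inflow = max(0.0, incentive - satiety_gain * level)  # eating, damped by "satiety"
        outflow = leak_gain * level                          # expenditure grows with "fat"
        level += inflow - outflow
        history.append(level)
    return history

# Phase 1: normal conditions -- the level drifts to a settling point,
# roughly incentive / (satiety_gain + leak_gain) = 10 / 0.1 = 100.
baseline = simulate(steps=300, incentive=10.0)

# Phase 2: a temporary "diet" (reduced food availability/incentive) -- the level falls,
# but the fall is limited by reduced leakage and reduced satiety pressure.
diet = simulate(steps=300, incentive=6.0, level=baseline[-1])

# Phase 3: original conditions restored -- the level drifts back toward the original settling point.
regain = simulate(steps=300, incentive=10.0, level=diet[-1])

print(round(baseline[-1], 1), round(diet[-1], 1), round(regain[-1], 1))  # 100.0 60.0 100.0

Notice that the "regain" in phase 3 is produced without any stored set point; it falls out of the same equilibrium of opposing factors that produced the original level.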

Thinking Creatively

Does it really matter whether we think about body weight regulation in terms of set points or settling points—or is making such a distinction just splitting hairs? It certainly matters to biopsychologists: Understanding that body weight is regulated by a settling-point system helps them better understand, and more accurately predict, the changes in body weight that are likely to occur in various situations; it also indicates the kinds of physiological mechanisms that are likely to mediate these changes. And it should matter to you. If the set-point model is correct, attempting to change your body weight would be a waste of time; you would inevitably be drawn back to your body-weight set point. On the other hand, the leaky-barrel model suggests that it is possible to permanently change your body weight by permanently changing any of the factors that influence energy intake and output.

Scan Your Brain

Are you ready to move on to the final two sections of the chapter, which deal with eating disorders? This is a good place to pause and scan your brain to see if you understand the physiological mechanisms of eating and weight regulation. Complete the following sentences by filling in the blanks. The correct answers are provided at the end of the exercise. Before proceeding, review material related to your incorrect answers and omissions.

1. The expectation of a meal normally stimulates the release of ______ into the blood, which reduces blood glucose.

2. In the 1950s, the ______ hypothalamus was thought to be a satiety center.

3. A complete cessation of eating is called ______.

4. ______ is the breakdown of body fat to create usable forms of energy.

5. The classic study of Washburn and Cannon was the perfect collaboration: Cannon had the ideas, and Washburn could swallow a ______.

6. CCK is a gut peptide that is thought to be a ______ peptide.

7. ______ is the monoaminergic neurotransmitter that seems to play a role in satiety.

8. Okinawans eat less and live ______.

9. Experimental studies of ______ have shown that typical ad libitum (free-feeding) levels of consumption are unhealthy in many mammalian species.

10. As an individual grows fatter, further weight gain is minimized by diet-induced ______.

11. ______ models are more consistent with the facts of body-weight regulation than are set-point models.

12. ______ are to set points as leaky barrels are to settling points.

Scan Your Brain answers:

(1) insulin,

(2) ventromedial,

(3) aphagia,

(4) Lipolysis,

(5) balloon,

(6) satiety,

(7) Serotonin,

(8) longer,

(9) calorie restriction,

(10) thermogenesis,

(11) Settling-point,

(12) Thermostats.

12.6 Human Obesity: Causes, Mechanisms, and Treatments

You have arrived at an important point in this chapter. The chapter opened by describing the current epidemic of obesity and overweight and its adverse effects on health and longevity and then went on to discuss behavioral and physiological factors that influence eating and weight. Most importantly, as the chapter progressed, you learned that some common beliefs about eating and weight regulation are incompatible with the evidence, and you were challenged to think about eating and weight regulation in unconventional ways that are more consistent with current evidence. Now, the chapter completes the circle with two sections on eating disorders: This section focuses on obesity, and the next covers anorexia and bulimia. I hope that by this point you realize that obesity is a major health problem and that you appreciate the relevance of what you are learning to your personal life and the lives of your loved ones.

Who Needs to Be Concerned about Obesity?

Almost everyone needs to be concerned about the problem of obesity. If you are currently overweight, the reason for concern is obvious: The relation between obesity and poor health has been repeatedly documented (see Eilat-Adar, Eldar, & Goldbourt, 2005; Ferrucci & Alley, 2007; Flegal et al., 2007; Hjartåker et al., 2005; Stevens, McClain, & Truesdale, 2006). Moreover, some studies have shown that even individuals who are only a bit overweight run a greater risk of developing health problems (Adams et al., 2006; Byers, 2006; Jee et al., 2006), as do obese individuals who manage to keep their blood pressure and blood cholesterol at normal levels (Yan et al., 2006). And the risk is not only to one’s own health: Obese women are at increased risk of having infants with health problems (Nohr et al., 2007).

Even if you are currently slim, there is cause for concern about the problem of obesity. The incidence of obesity is so high that it is almost certain to be a problem for somebody you care about. Furthermore, because weight tends to increase substantially with age, many people who are slim as youths develop serious weight problems as they age.

There is cause for special concern for the next generation. Because rates of obesity are increasing in most parts of the world (Rosenheck, 2008; Sofsian, 2007), public health officials are concerned about how they are going to handle the growing problem. For example, it has been estimated that over one-third of the children born in the United States in 2000 will eventually develop diabetes, and 10% of these will develop related life-threatening conditions (see Haslam, Sattar, & Lean, 2006; Olshansky et al., 2005).

Why Is There an Epidemic of Obesity?

Evolutionary Perspective

Let’s begin our analysis of obesity by considering the pressures that are likely to have led to the evolution of our eating and weight-regulation systems (see Flier & Maratos-Flier, 2007; Lazar, 2005; Pinel et al., 2000). During the course of evolution, inconsistent food supplies were one of the main threats to survival. As a result, the fittest individuals were those who preferred high-calorie foods, ate to capacity when food was available, stored as many excess calories as possible in the form of body fat, and used their stores of calories as efficiently as possible. Individuals who did not have these characteristics were unlikely to survive a food shortage, and so these characteristics were passed on to future generations.

The development of numerous cultural practices and beliefs that promote consumption has augmented the effects of evolution. For example, in my culture, it is commonly believed that one should eat three meals per day at regular times, whether one is hungry or not; that food should be the focus of most social gatherings; that meals should be served in courses of progressively increasing palatability; and that salt, sweets (e.g., sugar), and fats (e.g., butter or cream) should be added to foods to improve their flavor and thus increase their consumption.

Each of us possesses an eating and weight-regulation system that evolved to deal effectively with periodic food shortages, and many of us live in cultures whose eating-related practices evolved for the same purpose. However, our current environment differs from our “natural” environment in critical food-related ways. We live in an environment in which an endless variety of foods of the highest positive-incentive and caloric value are readily and continuously available. The consequence is an appallingly high level of consumption.

Why Do Some People Become Obese While Others Do Not?

Why do some people become obese while others living under the same obesity-promoting conditions do not? At a superficial level, the answer is obvious: Those who are obese are those whose energy intake has exceeded their energy output; those who are slim are those whose energy intake has not exceeded their energy output (see Nestle, 2007). Although this answer provides little insight, it does serve to emphasize that two kinds of individual differences play a role in obesity: those that lead to differences in energy input and those that lead to differences in energy output.

Differences in Consumption

There are many factors that lead some people to eat more than others who have comparable access to food. For example, some people consume more energy because they have strong preferences for the taste of high-calorie foods (see Blundell & Finlayson, 2004; Epstein et al., 2007); some consume more because they were raised in families and/or cultures that promote excessive eating; and some consume more because they have particularly large cephalic-phase responses to the sight or smell of food (Rodin, 1985).

Differences in Energy Expenditure

 Watch

Eating and the Brain

www.mypsychlab.com

With respect to energy output, people differ markedly from one another in the degree to which they can dissipate excess consumed energy. The most obvious difference is that people differ substantially in the amount of exercise they get; however, there are others. You have already learned about two of them: differences in basal metabolic rate and in the ability to react to fat increases by diet-induced thermogenesis. The third factor is called NEAT, or nonexercise activity thermogenesis, which is generated by activities such as fidgeting and the maintenance of posture and muscle tone (Ravussin & Danforth, 1999) and can play a small role in dissipating excess energy (Levine, Eberhardt, & Jensen, 1999; Ravussin, 2005).

Genetic Differences

Given the number of factors that can influence food consumption and energy metabolism, it is not surprising that many genes can influence body weight. Indeed, over 100 human chromosome loci (regions) have already been linked to obesity (see Fischer et al., 2009; Rankinen et al., 2006). However, because body weight is influenced by so many genes, it is proving difficult to understand how their interactions with one another and with experience contribute to obesity in healthy people. Although it is proving difficult to unravel the various genetic factors that influence variations in body weight among the healthy, single gene mutations have been linked to pathological conditions that involve obesity. You will encounter an example of such a condition later in this section.

Why Are Weight-Loss Programs Typically Ineffective?

Figure 12.15 describes the course of the typical weight-loss program. Most weight-loss programs are unsuccessful in the sense that, as predicted by the settling-point model, most of the lost weight is regained once the dieter stops following the program and the original conditions are reestablished. The key to permanent weight loss is a permanent lifestyle change.

FIGURE 12.15 The five stages of a typical weight-loss program.

Exercise has many health-promoting effects; however, despite the general belief that exercise is the most effective method of losing weight, several studies have shown that it often contributes little to weight loss (e.g., Sweeney et al., 1993). One reason is that physical exercise normally accounts for only a small proportion of total energy expenditure: About 80% of the energy you expend is used to maintain the resting physiological processes of your body and to digest your food (Calles-Escandon & Horton, 1992). Another reason is that our bodies are efficient machines, burning only a small number of calories during a typical workout. Moreover, after exercise, many people feel free to consume extra drinks and foods that contain more calories than the relatively small number that were expended during the exercise.
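To see why a workout moves body weight so little, a bit of rough arithmetic helps. The short Python sketch below is illustrative only: the roughly 80% resting share comes from the paragraph above, but the specific calorie figures (total daily expenditure, the cost of a workout, the size of a post-workout snack) are assumptions chosen for the example, not data from the text.

# Rough, illustrative arithmetic on why exercise alone produces modest weight loss.
# All specific calorie values are assumptions for the sake of the example.
daily_expenditure_kcal = 2500    # assumed total daily energy expenditure
resting_share = 0.80             # ~80% used for resting processes and digestion (see text)
resting_kcal = daily_expenditure_kcal * resting_share

workout_kcal = 300               # assumed cost of a typical 30-minute workout
snack_kcal = 450                 # assumed post-workout muffin and sweetened coffee

print(f"Resting processes and digestion: ~{resting_kcal:.0f} kcal/day")
print(f"Workout: {workout_kcal} kcal, or "
      f"{workout_kcal / daily_expenditure_kcal:.0%} of the day's expenditure")
print(f"Net effect of workout plus snack: {workout_kcal - snack_kcal} kcal")

On these assumed numbers, the workout accounts for only about 12% of the day's energy expenditure, and a single generous snack more than cancels it, which is consistent with the modest weight-loss effects of exercise reported above.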

Leptin and the Regulation of Body Fat

Fat is more than a passive storehouse of energy; it actively releases a peptide hormone called leptin. The discovery of leptin has been extremely influential (see Elmquist & Flier, 2004). The following three subsections describe (1) the discovery of leptin, (2) how its discovery has fueled the development of a new approach to the treatment of human obesity, and (3) how the understanding that leptin (and insulin) are feedback signals led to the discovery of a hypothalamic nucleus that plays an important role in the regulation of body fat.

Obese Mice and the Discovery of Leptin

In 1950, a spontaneous genetic mutation occurred in the mouse colony being maintained in the Jackson Laboratory at Bar Harbor, Maine. The mutant mice were homozygous for the mutant gene (ob), and they were grossly obese, weighing up to three times as much as typical mice. These mutant mice are commonly referred to as ob/ob mice. See Figure 12.16.

Evolutionary Perspective

Ob/ob mice eat more than control mice; they convert calories to fat more efficiently; and they use their calories more efficiently. Coleman (1979) hypothesized that ob/ob mice lack a critical hormone that normally inhibits fat production and maintenance.

In 1994, Friedman and his colleagues characterized and cloned the gene that is mutated in ob/ob mice (Zhang et al., 1994). They found that this gene is expressed only in fat cells, and they characterized the protein that it normally encodes, a peptide hormone that they named leptin. Because of their mutation, ob/ob mice lack leptin. This finding led to an exciting hypothesis: Perhaps leptin is a negative feedback signal that is normally released from fat stores to decrease appetite and increase fat metabolism. Could leptin be administered to obese humans to reverse the current epidemic of obesity?

FIGURE 12.16 An ob/ob mouse and a control mouse.

Leptin, Insulin, and the Arcuate Melanocortin System

There was great fanfare when leptin was discovered. However, it was not the first peptide hormone to be discovered that seems to function as a negative feedback signal in the regulation of body fat (see Schwartz, 2000; Woods, 2004). More than 25 years ago, Woods and colleagues (1979) suggested that the pancreatic peptide hormone insulin serves such a function.

At first, the suggestion that insulin serves as a negative feedback signal for body fat regulation was viewed with skepticism. After all, how could the level of insulin in the body, which goes up and then comes back down to normal following each meal, provide the brain with information about gradually changing levels of body fat? It turns out that insulin does not readily penetrate the blood–brain barrier, and its levels in the brain were found to stay relatively stable. (As an aside, high levels of glucose are toxic to neurons; see Tomlinson & Gardiner, 2008.) The following findings supported the hypothesis that insulin serves as a negative feedback signal in the regulation of body fat:

• Brain levels of insulin were found to be positively correlated with levels of body fat (Seeley et al., 1996).

• Receptors for insulin were found in the brain (Baura et al., 1993).

• Infusions of insulin into the brains of laboratory animals were found to reduce eating and body weight (Campfield et al., 1995; Chavez, Seeley, & Woods, 1995).

Why are there two fat feedback signals? One reason may be that leptin levels are more closely correlated with subcutaneous fat (fat stored under the skin), whereas insulin levels are more closely correlated with visceral fat (fat stored around the internal organs of the body cavity)—see Hug & Lodish (2005). Thus, each fat signal provides different information. Visceral fat is more common in males than females and poses the greater threat to health (Wajchenberg, 2000). Insulin, but not leptin, is also involved in glucose regulation (see Schwartz & Porte, 2005).

The discovery that leptin and insulin are signals that provide information to the brain about fat levels in the body provided a means for discovering the neural circuits that participate in fat regulation. Receptors for both peptide hormones are located in many parts of the nervous system, but most are in the hypothalamus, particularly in one area of the hypothalamus: the arcuate nucleus.

A closer look at the distribution of leptin and insulin receptors in the arcuate nucleus indicated that these receptors are not randomly distributed throughout the nucleus. They are located in two classes of neurons: neurons that release neuropeptide Y (the hunger peptide that you read about earlier in the chapter), and neurons that release melanocortins, a class of peptides that includes the satiety peptide α-melanocyte-stimulating hormone. Attention has been mostly focused on the melanocortin-releasing neurons in the arcuate nucleus (often referred to as the melanocortin system) because injections of α-melanocyte-stimulating hormone have been shown to suppress eating and promote weight loss (see Horvath, 2005; Seeley & Woods, 2003). It seems, however, that the melanocortin system is only a minor component of a much larger system: Elimination of leptin receptors in the melanocortin system produces only a slight weight gain (see Münzberg & Myers, 2005).

Leptin as a Treatment for Human Obesity

The early studies of leptin seemed to confirm the hypothesis that it could function as an effective treatment for obesity. Receptors for leptin were found in the brain, and injecting it into ob/ob mice reduced both their eating and their body fat (see Seeley & Woods, 2003). All that remained was to prove leptin’s effectiveness in human patients.

However, when research on leptin turned from ob/ob mice to obese humans, the program ran into two major snags. First, obese humans—unlike ob/ob mice—were found to have high, rather than low, levels of leptin (see Münzberg & Myers, 2005). Second, injections of leptin did not reduce either the eating or the body fat of obese humans (see Heymsfield et al., 1999).

Clinical Implications

Why the actions of leptin differ in humans and ob/ob mice has yet to be explained. Nevertheless, efforts to use leptin in the treatment of human obesity have not been a total failure. Although few obese humans have a genetic mutation of the ob gene, leptin is a highly effective treatment for the few who do. Consider the following case.

The Case of the Child with No Leptin

The patient was of normal weight at birth, but her weight soon began to increase at an excessive rate. She demanded food continually and was disruptive when denied food. As a result of her extreme obesity, deformities of her legs developed, and surgery was required.

She was 9 when she was referred for treatment. At this point, she weighed 94.4 kilograms (about 210 pounds), and her weight was still increasing at an alarming rate. She was found to be homozygous for the ob gene and had no detectable leptin. Thus, leptin therapy was commenced.

The leptin therapy immediately curtailed the weight gain. She began to eat less, and she lost weight steadily over the 12-month period of the study, a total of 16.5 kilograms (about 36 pounds), almost all in the form of fat. There were no obvious side effects (Farooqi et al., 1999).

Treatment of Obesity

Because obesity is such a severe health problem, there have been many efforts to develop an effective treatment. Some of these—such as the leptin treatment you just read about—have worked for a few, but the problem of obesity continues to grow. The following two subsections discuss two treatments that are at different stages of development: serotonergic agonists and gastric surgery.

Serotonergic Agonists

Because—as you have already learned—serotonin agonists have been shown to reduce food consumption in both human and nonhuman subjects, they have considerable potential in the treatment of obesity (Halford & Blundell, 2000a). Serotonin agonists seem to act by a mechanism different from that for leptin and insulin, which produce long-term satiety signals based on fat stores. Serotonin agonists seem to increase short-term satiety signals associated with the consumption of a meal (Halford & Blundell, 2000b).

Clinical Implications

Serotonin agonists have been found in various studies of obese patients to reduce the following: the urge to eat high-calorie foods, the consumption of fat, the subjective intensity of hunger, the size of meals, the number of between-meal snacks, and bingeing. Because of this extremely positive profile of effects and the severity of the obesity problem, serotonin agonists (fenfluramine and dexfenfluramine) were rushed into clinical use. However, they were subsequently withdrawn from the market because chronic use was found to be associated with heart disease in a small, but significant, number of users. Currently, the search is on for serotonergic weight-loss medications that do not have dangerous side effects.

Gastric Surgery

Cases of extreme obesity sometimes warrant extreme treatment. Gastric bypass is a surgical treatment for extreme obesity that involves short-circuiting the normal path of food through the digestive tract so that its absorption is reduced. The first gastric bypass was done in 1967, and it is currently the most commonly prescribed surgical treatment for extreme obesity. An alternative is the adjustable gastric band procedure, which involves surgically positioning a hollow silicone band around the stomach to reduce the flow of food through it; the circumference of the band can be adjusted by injecting saline into the band through a port that is implanted in the skin. One advantage of the gastric band over the gastric bypass is that the band can readily be removed.

The gastric bypass and adjustable gastric band are illustrated in Figure 12.17. A meta-analysis of studies comparing the two procedures found both to be highly effective (Maggard et al., 2005). However, neither procedure is effective unless patients change their eating habits.

12.7 Anorexia and Bulimia Nervosa

Clinical Implications

In contrast to obesity, anorexia nervosa is a disorder of underconsumption (see Södersten, Bergh, & Zandian, 2006). Anorexics eat so little that they experience health-threatening weight loss; and despite their emaciated appearance, they often perceive themselves as fat (see Benninghoven et al., 2006). Anorexia nervosa is a serious condition; in approximately 10% of diagnosed cases, complications from starvation result in death (Birmingham et al., 2005), and there is a high rate of suicide among anorexics (Pompili et al., 2004).

Anorexia nervosa is related to bulimia nervosa. Bulimia nervosa is a disorder characterized by periods of not eating interrupted by bingeing (eating huge amounts of food in short periods of time) followed by efforts to immediately eliminate the consumed calories from the body by voluntary purging (vomiting); by excessive use of laxatives, enemas, or diuretics; or by extreme exercise. Bulimics may be obese or of normal weight. If they are underweight, they are diagnosed as bingeing anorexics.

Relation between Anorexia and Bulimia

Thinking Creatively

Are anorexia nervosa and bulimia nervosa really different disorders, as current convention dictates? The answer to this question depends on one’s perspective. From the perspective of a physician, it is important to distinguish between these disorders because starvation produces different health problems than does repeated bingeing and purging. For example, anorexics often require treatment for reduced metabolism, bradycardia (slow heart rate), hypotension (low blood pressure), hypothermia (low body temperature), and anemia (deficiency of red blood cells) (Miller et al., 2005). In contrast, bulimics often require treatment for irritation and inflammation of the esophagus, vitamin and mineral deficiencies, electrolyte imbalance, dehydration, and acid reflux.

FIGURE 12.17 Two surgical methods for treating extreme obesity: gastric bypass and adjustable gastric band. The gastric band can be tightened by injecting saline into the access port implanted just beneath the skin.

Although anorexia and bulimia nervosa may seem like very different disorders from a physician’s perspective, scientists often find it more appropriate to view them as variations of the same disorder. According to this view, both anorexia and bulimia begin with an obsession about body image and slimness and extreme efforts to lose weight. Both anorexics and bulimics attempt to lose weight by strict dieting, but bulimics are less capable of controlling their appetites and thus enter into a cycle of starvation, bingeing, and purging (see Russell, 1979). The following are other similarities that support the view that anorexia and bulimia are variants of the same disorder (see Kaye et al., 2005):

• Both anorexics and bulimics tend to have distorted body images, seeing themselves as much fatter and less attractive than they are in reality (see Grant et al., 2002).

• In practice, many patients seem to straddle the two diagnoses and cannot readily be assigned to one category or the other, and many patients flip-flop between the two diagnoses as their circumstances change (Lask & Bryant-Waugh, 2000; Santonastaso et al., 2006; Tenconi et al., 2006).

• Anorexia and bulimia show the same pattern of distribution in the population. Although their overall incidence in the population is low (lifetime incidence estimates for American adults are 0.6% and 1.0% for anorexia and bulimia, respectively; Hudson et al., 2007), both conditions occur more commonly among educated females in affluent cultural groups (Lindberg & Hjern, 2003).

• Both anorexia and bulimia are highly correlated with obsessive-compulsive disorder and depression (Kaye et al., 2004; O’Brien & Vincent, 2003).

• Neither disorder responds well to existing therapies. Short-term improvements are common, but relapse is usual (see Södersten et al., 2006).

Anorexia and Positive Incentives

Thinking Creatively

The positive-incentive perspective on eating suggests that the decline in eating that defines both anorexia and bulimia is likely a consequence of a corresponding decline in the positive-incentive value of food. However, the positive-incentive value of food for anorexic patients has received little attention—in part because anorexic patients often display substantial interest in food. The fact that many anorexic patients are obsessed with food—continually talking about it, thinking about it, and preparing it for others (Crisp, 1983)—seems to suggest that food still holds a high positive-incentive value for them. To avoid confusion, however, keep in mind that the positive-incentive value of interacting with food is not necessarily the same as the positive-incentive value of eating food—and it is the positive-incentive value of eating food that is critical when considering anorexia nervosa.

A few studies have examined the positive-incentive value of various tastes in anorexic patients (see, e.g., Drewnowski et al., 1987; Roefs et al., 2006; Sunday & Halmi, 1990). In general, these studies have found that the positive-incentive value of various tastes is lower in anorexic patients than in control participants. However, these studies grossly underestimate the importance of reductions in the positive-incentive value of food in the etiology of anorexia nervosa, because the anorexic participants and the normal-weight control participants were not matched for weight—such matching is not practical. Because starvation normally increases the positive-incentive value of food (as the next paragraph explains), starving anorexics would be expected to value food far more highly than normal-weight controls; that they value it less suggests the underlying reduction is even greater than the measured difference.

We can get some insight into how starvation normally affects the positive-incentive value of food by looking at people who are starving for other reasons. That starvation normally triggers a radical increase in the positive-incentive value of food has been best documented by the descriptions and behavior of participants voluntarily undergoing experimental semistarvation. When asked how it felt to starve, one participant replied:

I wait for mealtime. When it comes I eat slowly and make the food last as long as possible. The menu never gets monotonous even if it is the same each day or is of poor quality. It is food and all food tastes good. Even dirty crusts of bread in the street look appetizing. (Keys et al., 1950, p. 852)

Anorexia Nervosa: A Hypothesis

The dominance of set-point theories in research into the regulation of hunger and eating has resulted in widespread inattention to one of the major puzzles of anorexia: Why does the massive, adaptive increase in the positive-incentive value of eating that occurs in victims of starvation not occur in starving anorexics? Under conditions of starvation, the positive-incentive value of eating normally increases to such high levels that it is difficult to imagine how anybody who was starving—no matter how controlled, rigid, obsessive, and motivated that person was—could refrain from eating in the presence of palatable food. Why this protective mechanism is not activated in severe anorexics is a pressing question about the etiology of anorexia nervosa.

Thinking Creatively

I believe that part of the answer lies in the research of Woods and his colleagues on the aversive physiological effects of meals. At the beginning of meals, people are normally in reasonably homeostatic balance, and this homeostasis is disrupted by the sudden infusion of calories. The other part of the answer lies in the finding that the aversive effects of meals are much greater in people who have been eating little (Brooks & Melnik, 1995). Meals, which produce adverse but tolerable effects in healthy individuals, may be extremely aversive for individuals who have undergone food deprivation. Evidence for the extremely noxious effects that eating meals has on starving humans is found in the reactions of World War II concentration camp victims to refeeding—many were rendered ill and some were even killed by the food given to them by their liberators (Keys et al., 1950; see also Solomon & Kirby, 1990).

 Watch

Anorexia

www.mypsychlab.com

So why do severe anorexics not experience a massive increase in the positive-incentive value of eating, similar to the increase experienced by other starving individuals? The answer may be meals—meals forced on these patients as a result of the misconception of our society that meals are the healthy way to eat. Each meal consumed by an anorexic may produce a variety of conditioned taste aversions that reduce the motivation to eat. This hypothesis needs to be addressed because of its implication for treatment: Anorexic patients—or anybody else who is severely undernourished—should not be encouraged, or even permitted, to eat meals. They should be fed—or infused with—small amounts of food intermittently throughout the day.

Thinking Creatively

I have described the preceding hypothesis to show you the value of the new ideas that you have encountered in this chapter: The major test of a new theory is whether it leads to innovative hypotheses. A while ago, as I was perusing an article on global famine and malnutrition, I noticed an intriguing comment: One of the clinical complications that results from feeding meals to famine victims is anorexia (Blackburn, 2001). What do you make of this?

The Case of the Anorexic Student

Clinical Implications

In a society in which obesity is the main disorder of consumption, anorexics are out of step. People who are struggling to eat less have difficulty understanding those who have to struggle to eat. Still, when you stare anorexia in the face, it is difficult not to be touched by it.

She began by telling me how much she had been enjoying the course and how sorry she was to be dropping out of the university. She was articulate and personable, and her grades were high—very high. Her problem was anorexia; she weighed only 82 pounds, and she was about to be hospitalized.

“But don’t you want to eat?” I asked naively. “Don’t you see that your plan to go to medical school will go up in smoke if you don’t eat?”

“Of course I want to eat. I know I am terribly thin—my friends tell me I am. Believe me, I know this is wrecking my life. I try to eat, but I just can’t force myself. In a strange way, I am pleased with my thinness.”

She was upset, and I was embarrassed by my insensitivity. “It’s too bad you’re dropping out of the course before we cover the chapter on eating,” I said, groping for safer ground.

“Oh, I’ve read it already,” she responded. “It’s the first chapter I looked at. It had quite an effect on me; a lot of things started to make more sense. The bit about positive incentives and learning was really good. I think my problem began when eating started to lose its positive-incentive value for me—in my mind, I kind of associated eating with being fat and all the boyfriend problems I was having. This made it easy to diet, but every once in a while I would get hungry and binge, or my parents would force me to eat a big meal. I would eat so much that I would feel ill. So I would put my finger down my throat and make myself throw up. This kept me from gaining weight, but I think it also taught my body to associate my favorite foods with illness—kind of a conditioned taste aversion. What do you think of my theory?”

Her insightfulness impressed me; it made me feel all the more sorry that she was going to discontinue her studies. After a lengthy chat, she got up to leave, and I walked her to the door of my office. I wished her luck and made her promise to come back for a visit. I never saw her again, but the image of her emaciated body walking down the hallway from my office has stayed with me.

Themes Revisited

Thinking Creatively

Three of the book’s four themes played prominent roles in this chapter. The thinking creatively theme was prevalent as you were challenged to critically evaluate your own beliefs and ambiguous research findings, to consider the scientific implications of your own experiences, and to think in new ways about phenomena with major personal and clinical implications. The chapter ended by using these new ideas to develop a potentially important hypothesis about the etiology of anorexia nervosa. Because of its emphasis on thinking, this chapter is my personal favorite.

Evolutionary Perspective

Both aspects of the evolutionary perspective theme were emphasized repeatedly. First, you saw how thinking about hunger and eating from an evolutionary perspective leads to important insights. Second, you saw how controlled research on nonhuman species has contributed to our current understanding of human hunger and eating.

Clinical Implications

Finally, the clinical implications theme pervaded the chapter and was featured most prominently in the cases of the man who forgot not to eat, the child with Prader-Willi syndrome, the child with no leptin, and the anorexic student.

Think about It

1. Set-point theories suggest that attempts at permanent weight loss are a waste of time. On the basis of what you have learned in this chapter, design an effective and permanent weight-loss program.

2. Most of the eating-related health problems of people in our society occur because the conditions in which we live are different from those in which our species evolved. Discuss.

3. On the basis of what you have learned in this chapter, develop a feeding program for laboratory rats that would lead to obesity. Compare this program with the eating habits prevalent in your culture.

4. What causes anorexia nervosa? Summarize the evidence that supports your view.

5. Given the weight of evidence, why is the set-point theory of hunger and eating so prevalent?

Key Terms

Set point (p. 299)

12.1 Digestion, Energy Storage, and Energy Utilization

Digestion (p. 299)

Lipids (p. 300)

Amino acids (p. 300)

Glucose (p. 300)

Cephalic phase (p. 301)

Absorptive phase (p. 301)

Fasting phase (p. 301)

Insulin (p. 301)

Glucagon (p. 301)

Gluconeogenesis (p. 301)

Free fatty acids (p. 301)

Ketones (p. 301)

12.2 Theories of Hunger and Eating: Set Points versus Positive Incentives

Set-point assumption (p. 302)

Negative feedback systems (p. 303)

Homeostasis (p. 303)

Glucostatic theory (p. 303)

Lipostatic theory (p. 303)

Positive-incentive theory (p. 304)

Positive-incentive value (p. 304)

12.3 Factors That Determine What, When, and How Much We Eat

Satiety (p. 306)

Nutritive density (p. 306)

Sham eating (p. 306)

Appetizer effect (p. 307)

Cafeteria diet (p. 308)

Sensory-specific satiety (p. 308)

12.4 Physiological Research on Hunger and Satiety

Ventromedial hypothalamus (VMH) (p. 309)

Lateral hypothalamus (LH) (p. 310)

Hyperphagia (p. 310)

Dynamic phase (p. 310)

Static phase (p. 310)

Aphagia (p. 310)

Adipsia (p. 310)

Lipogenesis (p. 310)

Lipolysis (p. 310)

Paraventricular nuclei (p. 311)

Duodenum (p. 312)

Cholecystokinin (CCK) (p. 312)

Prader-Willi syndrome (p. 313)

12.5 Body Weight Regulation: Set Points versus Settling Points

Diet-induced thermogenesis (p. 315)

Basal metabolic rate (p. 315)

Settling point (p. 316)

Leaky-barrel model (p. 316)

12.6 Human Obesity: Causes, Mechanisms, and Treatments

NEAT (p. 319)

Leptin (p. 320)

Ob/ob mice (p. 320)

Subcutaneous fat (p. 321)

Visceral fat (p. 321)

Arcuate nucleus (p. 321)

Neuropeptide Y (p. 321)

Melanocortins (p. 321)

Melanocortin system (p. 321)

Gastric bypass (p. 322)

Adjustable gastric band procedure (p. 322)

12.7 Anorexia and Bulimia Nervosa

Anorexia nervosa (p. 322)

Bulimia nervosa (p. 322)

 Quick Review

Test your comprehension of the chapter with this brief practice test. You can find the answers to these questions as well as more practice tests, activities, and other study resources at www.mypsychlab.com.

1. The phase of energy metabolism that often begins with the sight, the smell, or even the thought of food is the

a. luteal phase.

b. absorptive phase.

c. cephalic phase.

d. fasting phase.

e. none of the above

2. The ventromedial hypothalamus (VMH) was once believed to be

a. part of the hippocampus.

b. a satiety center.

c. a hunger center.

d. static.

e. dynamic.

3. Patients with Prader-Willi syndrome suffer from

a. anorexia nervosa.

b. bulimia.

c. an inability to digest fats.

d. insatiable hunger.

e. lack of memory for eating.

4. In comparison to obese people, slim people tend to

a. have longer life expectancies.

b. be healthier.

c. be less efficient in their use of body energy.

d. all of the above

e. both a and b

5. Body fat releases a hormone called ______.

(Pinel, 10/2010, pp. 299-326)

13 Hormones and Sex What’s Wrong with the Mamawawa?

13.1 Neuroendocrine System

13.2 Hormones and Sexual Development of the Body

13.3 Hormones and Sexual Development of Brain and Behavior

13.4 Three Cases of Exceptional Human Sexual Development

13.5 Effects of Gonadal Hormones on Adults

13.6 Neural Mechanisms of Sexual Behavior

13.7 Sexual Orientation and Sexual Identity

This chapter is about hormones and sex, a topic that some regard as unfit for conversation but that fascinates many others. Perhaps the topic of hormones and sex is so fascinating because we are intrigued by the fact that our sex is so greatly influenced by the secretions of a small pair of glands. Because we each think of our gender as fundamental and immutable, it is a bit disturbing to think that it could be altered with a few surgical snips and some hormone injections. And there is something intriguing about the idea that our sex lives might be enhanced by the application of a few hormones. For whatever reason, the topic of hormones and sex is always a hit with my students. Some remarkable things await you in this chapter; let’s go directly to them.

Men-Are-Men-and-Women-Are-Women Assumption

Many students bring a piece of excess baggage to the topic of hormones and sex: the men-are-men-and-women-are-women assumption—or “mamawawa.” This assumption is seductive; it seems so right that we are continually drawn to it without considering alternative views. Unfortunately, it is fundamentally flawed.

The men-are-men-and-women-are-women assumption is the tendency to think about femaleness and maleness as discrete, mutually exclusive, opposite categories. In thinking about hormones and sex, this general attitude leads one to assume that females have female sex hormones that give them female bodies and make them do “female” things, and that males have male sex hormones that give them male bodies and make them do opposite “male” things. Despite the fact that this approach to hormones and sex is inconsistent with the evidence, its simplicity, symmetry, and comfortable social implications draw us to it. That’s why this chapter grapples with it throughout. In so doing, this chapter encourages you to think about hormones and sex in new ways that are more consistent with the evidence.

Thinking Creatively

Developmental and Activational Effects of Sex Hormones

Before we begin discussing hormones and sex, you need to know that hormones influence sex in two fundamentally different ways (see Phoenix, 2008): (1) by influencing the development from conception to sexual maturity of the anatomical, physiological, and behavioral characteristics that distinguish one as female or male; and (2) by activating the reproduction-related behavior of sexually mature adults. Both the developmental (also called organizational) and activational effects of sex hormones are discussed in different sections of this chapter. Although the distinction between the developmental and activational effects of sex hormones is not always as clear as it was once assumed to be—for example, because the brain continues to develop into the late teens, adolescent hormone surges can have both effects—the distinction is still useful (Cohen-Bendahan, van de Beek, & Berenbaum, 2005).

13.1 Neuroendocrine System

This section introduces the general principles of neuroendocrine function by focusing on the glands and hormones that are directly involved in sexual development and behavior.

FIGURE 13.1 The endocrine glands.

The endocrine glands are illustrated in Figure 13.1. By convention, only the organs whose primary function appears to be the release of hormones are referred to as endocrine glands. However, other organs (e.g., the stomach, liver, and intestine) and body fat also release hormones into general circulation (see Chapter 12), and they are thus, strictly speaking, also part of the endocrine system.

Glands

There are two types of glands: exocrine glands and endocrine glands. Exocrine glands (e.g., sweat glands) release their chemicals into ducts, which carry them to their targets, mostly on the surface of the body. Endocrine glands (ductless glands) release their chemicals, which are called hormones , directly into the circulatory system. Once released by an endocrine gland, a hormone travels via the circulatory system until it reaches the targets on which it normally exerts its effect (e.g., other endocrine glands or sites in the nervous system).

Gonads

Central to any discussion of hormones and sex are the gonads —the male testes (pronounced TEST-eez) and the female ovaries (see Figure 13.1). As you learned in Chapter 2, the primary function of the testes and ovaries is the production of sperm cells and ova, respectively. After copulation (sexual intercourse), a single sperm cell may fertilize an ovum to form one cell called a zygote , which contains all of the information necessary for the normal growth of a complete adult organism in its natural environment (see Primakoff & Myles, 2002). With the exception of ova and sperm cells, each cell of the human body has 23 pairs of chromosomes. In contrast, the ova and sperm cells contain only half that number, one member of each of the 23 pairs. Thus, when a sperm cell fertilizes an ovum, the resulting zygote ends up with the full complement of 23 pairs of chromosomes, one of each pair from the father and one of each pair from the mother.

Of particular interest in the context of this chapter is the pair of chromosomes called the sex chromosomes , so named because they contain the genetic programs that direct sexual development. The cells of females have two large sex chromosomes, called X chromosomes. In males, one sex chromosome is an X chromosome, and the other is called a Y chromosome. Consequently, the sex chromosome of every ovum is an X chromosome, whereas half the sperm cells have X chromosomes and half have Y chromosomes. Your sex with all its social, economic, and personal ramifications was determined by which of your father’s sperm cells won the dash to your mother’s ovum. If a sperm cell with an X sex chromosome won, you are a female; if one with a Y sex chromosome won, you are a male.

You might reasonably assume that X chromosomes are X-shaped and Y chromosomes are Y-shaped, but this is incorrect. Once a chromosome has duplicated, the two products remain joined at one point, producing an X shape. This is true of all chromosomes, including Y chromosomes. Because the Y chromosome is much smaller than the X chromosome, early investigators failed to discern one small arm and thus saw a Y. In humans, Y-chromosome genes encode only 27 proteins; in comparison, about 1,500 proteins are encoded by X-chromosome genes (see Arnold, 2004).

Writing this section reminded me of my seventh-grade basketball team, the “Nads.” The name puzzled our teacher because it was not at all like the names usually favored by pubescent boys—names such as the “Avengers,” the “Marauders,” and the “Vikings.” Her puzzlement ended abruptly at our first game as our fans began to chant their support. You guessed it: “Go Nads, Go! Go Nads, Go!” My 14-year-old spotted-faced teammates and I considered this to be humor of the most mature and sophisticated sort. The teacher didn’t.

Classes of Hormones

Vertebrate hormones fall into one of three classes: (1) amino acid derivatives, (2) peptides and proteins, and (3) steroids. Amino acid derivative hormones are hormones that are synthesized in a few simple steps from an amino acid molecule; an example is epinephrine, which is released from the adrenal medulla and synthesized from tyrosine. Peptide hormones and protein hormones are chains of amino acids—peptide hormones are short chains, and protein hormones are long chains. Steroid hormones are hormones that are synthesized from cholesterol, a type of fat molecule.

The hormones that influence sexual development and the activation of adult sexual behavior (i.e., the sex hormones) are all steroid hormones. Most other hormones produce their effects by binding to receptors in cell membranes. Steroid hormones can influence cells in this fashion; however, because they are small and fat-soluble, they can readily penetrate cell membranes and often affect cells in a second way. Once inside a cell, the steroid molecules can bind to receptors in the cytoplasm or nucleus and, by so doing, directly influence gene expression (amino acid derivative hormones and peptide hormones affect gene expression less commonly and by less direct mechanisms). Consequently, of all the hormones, steroid hormones tend to have the most diverse and long-lasting effects on cellular function (Brown, 1994).

Sex Steroids

The gonads do more than create sperm and egg cells; they also produce and release steroid hormones. Most people are surprised to learn that the testes and ovaries release the very same hormones. The two main classes of gonadal hormones are androgens and estrogens; testosterone is the most common androgen, and estradiol is the most common estrogen. The fact that adult ovaries tend to release more estrogens than they do androgens and that adult testes release more androgens than they do estrogens has led to the common, but misleading, practice of referring to androgens as “the male sex hormones” and to estrogens as “the female sex hormones.” This practice should be avoided because of its men-are-men-and-women-are-women implication that androgens produce maleness and estrogens produce femaleness. They don’t.

The ovaries and testes also release a third class of steroid hormones called progestins . The most common progestin is progesterone , which in women prepares the uterus and the breasts for pregnancy. Its function in men is unclear.

Because the primary function of the adrenal cortex —the outer layer of the adrenal glands (see Figure 13.1)—is the regulation of glucose and salt levels in the blood, it is not generally thought of as a sex gland. However, in addition to its principal steroid hormones, it does release small amounts of all of the sex steroids that are released by the gonads.

Hormones of the Pituitary

The pituitary gland is frequently referred to as the master gland because most of its hormones are tropic hormones. Tropic hormones are hormones whose primary function is to influence the release of hormones from other glands (tropic means “able to stimulate or change something”). For example, gonadotropin is a pituitary tropic hormone that travels through the circulatory system to the gonads, where it stimulates the release of gonadal hormones.

The pituitary gland is really two glands, the posterior pituitary and the anterior pituitary, which fuse during the course of embryological development. The posterior pituitary develops from a small outgrowth of hypothalamic tissue that eventually comes to dangle from the hypothalamus on the end of the pituitary stalk (see Figure 13.2). In contrast, the anterior pituitary begins as part of the same embryonic tissue that eventually develops into the roof of the mouth; during the course of development, it pinches off and migrates upward to assume its position next to the posterior pituitary. It is the anterior pituitary that releases tropic hormones; thus, it is the anterior pituitary in particular, rather than the pituitary in general, that qualifies as the master gland.

Female Gonadal Hormone Levels Are Cyclic; Male Gonadal Hormone Levels Are Steady

Although men and women possess the same hormones, these hormones are not present at the same levels, and they do not necessarily perform the same functions. The major difference between the endocrine function of women and men is that in women the levels of gonadal and gonadotropic hormones go through a cycle that repeats itself every 28 days or so. It is these more-or-less regular hormone fluctuations that control the female menstrual cycle . In contrast, human males are, from a neuroendocrine perspective, rather dull creatures; males’ levels of gonadal and gonadotropic hormones change little from day to day.

Evolutionary Perspective

FIGURE 13.2 A midline view of the posterior and anterior pituitary and surrounding structures.

Because the anterior pituitary is the master gland, many early scientists assumed that an inherent difference between the male and female anterior pituitary was the basis for the difference in male and female patterns of gonadotropic and gonadal hormone release. However, this hypothesis was discounted by a series of clever transplant studies conducted by Geoffrey Harris in the 1950s (see Raisman, 1997). In these studies, a cycling pituitary removed from a mature female rat became a steady-state pituitary when transplanted at the appropriate site in a male, and a steady-state pituitary removed from a mature male rat began to cycle once transplanted into a female. What these studies established was that anterior pituitaries are not inherently female (cyclical) or male (steady-state); their patterns of hormone release are controlled by some other part of the body. The master gland seemed to have its own master. Where was it?

Neural Control of the Pituitary

The nervous system was implicated in the control of the anterior pituitary by behavioral research on birds and other animals that breed only during a specific time of the year. It was found that the seasonal variations in the light–dark cycle triggered many of the breeding-related changes in hormone release. If the lighting conditions under which the animals lived were reversed, for example, by having the animals transported across the equator, the breeding seasons were also reversed. Somehow, visual input to the nervous system was controlling the release of tropic hormones from the anterior pituitary.

Evolutionary Perspective

The search for the particular neural structure that controlled the anterior pituitary turned, naturally enough, to the hypothalamus, the structure from which the pituitary is suspended. Hypothalamic stimulation and lesion experiments quickly established that the hypothalamus is the regulator of the anterior pituitary, but how the hypothalamus carries out this role remained a mystery. You see, the anterior pituitary, unlike the posterior pituitary, receives no neural input whatsoever from the hypothalamus, or from any other neural structure (see Figure 13.3).

Control of the Anterior and Posterior Pituitary by the Hypothalamus

There are two different mechanisms by which the hypothalamus controls the pituitary: one for the posterior pituitary and one for the anterior pituitary. The two major hormones of the posterior pituitary, vasopressin and oxytocin , are peptide hormones that are synthesized in the cell bodies of neurons in the paraventricular nuclei and supraoptic nuclei on each side of the hypothalamus (see Figure 13.3 and Appendix VI). They are then transported along the axons of these neurons to their terminals in the posterior pituitary and are stored there until the arrival of action potentials causes them to be released into the bloodstream. (Neurons that release hormones into general circulation are called neurosecretory cells.) Oxytocin stimulates contractions of the uterus during labor and the ejection of milk during suckling. Vasopressin (also called antidiuretic hormone) facilitates the reabsorption of water by the kidneys.

The means by which the hypothalamus controls the release of hormones from the neuron-free anterior pituitary was more difficult to explain. Harris (1955) suggested that the release of hormones from the anterior pituitary was itself regulated by hormones released from the hypothalamus. Two findings provided early support for this hypothesis. The first was the discovery of a vascular network, the hypothalamopituitary portal system , that seemed well suited to the task of carrying hormones from the hypothalamus to the anterior pituitary. As Figure 13.4 on page 332 illustrates, a network of hypothalamic capillaries feeds a bundle of portal veins that carries blood down the pituitary stalk into another network of capillaries in the anterior pituitary. (A portal vein is a vein that connects one capillary network with another.) The second finding was the discovery that cutting the portal veins of the pituitary stalk disrupts the release of anterior pituitary hormones until the damaged veins regenerate (Harris, 1955).

Discovery of Hypothalamic Releasing Hormones

It was hypothesized that the release of each anterior pituitary hormone is controlled by a different hypothalamic hormone. The hypothalamic hormones that were thought to stimulate the release of an anterior pituitary hormone were referred to as releasing hormones ; those thought to inhibit the release of an anterior pituitary hormone were referred to as release-inhibiting factors .

FIGURE 13.3 The neural connections between the hypothalamus and the pituitary. All neural input to the pituitary goes to the posterior pituitary; the anterior pituitary has no neural connections.

Efforts to isolate the putative (hypothesized) hypothalamic releasing and inhibitory factors led to a major breakthrough in the late 1960s. Guillemin and his colleagues isolated thyrotropin-releasing hormone from the hypothalamus of sheep, and Schally and his colleagues isolated the same hormone from the hypothalamus of pigs. Thyrotropin-releasing hormone triggers the release of thyrotropin from the anterior pituitary, which in turn stimulates the release of hormones from the thyroid gland. For their efforts, Guillemin and Schally were awarded Nobel Prizes in 1977.

FIGURE 13.4 Control of the anterior and posterior pituitary by the hypothalamus.

Evolutionary Perspective

Schally’s and Guillemin’s isolation of thyrotropin-releasing hormone confirmed that hypothalamic releasing hormones control the release of hormones from the anterior pituitary and thus provided the major impetus for the isolation and synthesis of several other releasing hormones. Of direct relevance to the study of sex hormones was the subsequent isolation of gonadotropin-releasing hormone by Schally and his group (Schally, Kastin, & Arimura, 1971). This releasing hormone stimulates the release of both of the anterior pituitary’s gonadotropins: follicle-stimulating hormone (FSH) and luteinizing hormone (LH) . All hypothalamic releasing hormones, like all tropic hormones, have proven to be peptides.

Regulation of Hormone Levels

Hormone release is regulated by three different kinds of signals: signals from the nervous system, signals from hormones, and signals from nonhormonal chemicals in the blood.

Regulation by Neural Signals

All endocrine glands, with the exception of the anterior pituitary, are directly regulated by signals from the nervous system. Endocrine glands located in the brain (i.e., the pituitary and pineal glands) are regulated by cerebral neurons; those located outside the CNS are innervated by the autonomic nervous system—usually by both the sympathetic and parasympathetic branches, which often have opposite effects on hormone release.

The effects of experience on hormone release are usually mediated by signals from the nervous system. It is extremely important to remember that hormone release can be regulated by experience—for example, many species that breed only in the spring are often prepared for reproduction by the release of sex hormones triggered by the increasing daily duration of daylight. This means that an explanation of any behavioral phenomenon in terms of a hormonal mechanism does not necessarily rule out an explanation in terms of an experiential mechanism. Indeed, hormonal and experiential explanations may merely be different aspects of the same hypothetical mechanism.

Thinking Creatively

Regulation by Hormonal Signals

The hormones themselves also influence hormone release. You have already learned, for example, that the tropic hormones of the anterior pituitary influence the release of hormones from their respective target glands. However, the regulation of endocrine function by the anterior pituitary is not a one-way street. Circulating hormones often provide feedback to the very structures that influence their release: the pituitary gland, the hypothalamus, and other sites in the brain. The function of most hormonal feedback is the maintenance of stable blood levels of the hormones. Thus, high gonadal hormone levels usually have effects on the hypothalamus and pituitary that decrease subsequent gonadal hormone release, and low levels usually have effects that increase hormone release.

Regulation by Nonhormonal Chemicals

Circulating chemicals other than hormones can play a role in regulating hormone levels. Glucose, calcium, and sodium levels in the blood all influence the release of particular hormones. For example, you learned in Chapter 12 that increases in blood glucose increase the release of insulin from the pancreas, and insulin, in turn, reduces blood glucose levels.

Pulsatile Hormone Release

Hormones tend to be released in pulses (see Armstrong et al., 2009; Khadra & Li, 2006); they are discharged several times per day in large surges, which typically last no more than a few minutes. Hormone levels in the blood are regulated by changes in the frequency and duration of the hormone pulses. One consequence of pulsatile hormone release is that there are often large minute-to-minute fluctuations in the levels of circulating hormones (e.g., Koolhaas, Schuurman, & Wiepkema, 1980). Accordingly, when the pattern of human male gonadal hormone release is referred to as “steady,” it means that there are no major systematic changes in circulating gonadal hormone levels from day to day, not that the levels never vary.
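To make this arithmetic concrete, here is a minimal sketch (not from the textbook) written in Python, with invented parameter values: hormone is secreted only during brief pulses and is cleared continuously, so the daily mean level depends on how frequent and how long the pulses are, even though the minute-to-minute level swings widely.

# Hypothetical sketch (not from the text): a minimal discrete-time model of
# pulsatile hormone release. Hormone is secreted during brief pulses and is
# cleared continuously; the daily mean level depends on pulse frequency and
# duration, while minute-to-minute levels fluctuate widely.

def simulate(pulses_per_day, pulse_minutes, secretion_per_min=10.0,
             clearance_per_min=0.05, minutes=24 * 60):
    """Return minute-by-minute hormone levels over one simulated day."""
    interval = minutes // pulses_per_day          # minutes between pulse onsets
    level, levels = 0.0, []
    for t in range(minutes):
        if t % interval < pulse_minutes:          # within a secretory pulse
            level += secretion_per_min
        level *= (1.0 - clearance_per_min)        # continuous clearance
        levels.append(level)
    return levels

for freq, dur in [(6, 3), (12, 3), (6, 6)]:
    trace = simulate(freq, dur)
    print(f"{freq} pulses/day, {dur} min each -> "
          f"mean {sum(trace)/len(trace):.1f}, peak {max(trace):.1f}")

In the sketch, increasing either the number of pulses per day or the duration of each pulse raises the mean circulating level, which is the sense in which pulse frequency and duration regulate hormone levels; the trace still shows large short-term swings even when the daily mean is “steady.”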

Summary Model of Gonadal Endocrine Regulation

Figure 13.5 is a summary model of the regulation of gonadal hormones. According to this model, the brain controls the release of gonadotropin-releasing hormone from the hypothalamus into the hypothalamopituitary portal system, which carries it to the anterior pituitary. In the anterior pituitary, the gonadotropin-releasing hormone stimulates the release of gonadotropin, which is carried by the circulatory system to the gonads. In response to the gonadotropin, the gonads release androgens, estrogens, and progestins, which feed back into the pituitary and hypothalamus to regulate subsequent gonadal hormone release.
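For readers who find a computational analogy useful, here is a hypothetical sketch (again in Python, not part of the textbook) of the loop that Figure 13.5 summarizes: the hypothalamic signal drives the pituitary, the pituitary signal drives the gonads, and circulating gonadal hormone feeds back to damp the hypothalamic drive, so the simulated levels settle to a stable value. All rate constants are invented for illustration.

# Hypothetical sketch (not from the text): a toy simulation of the feedback
# loop summarized in Figure 13.5. The hypothalamic signal drives the pituitary,
# the pituitary signal drives the gonads, and circulating gonadal hormone feeds
# back to reduce the hypothalamic drive. All rate constants are made up.

def step(gnrh, gonadotropin, gonadal):
    drive = max(0.0, 1.0 - 0.4 * gonadal)            # negative feedback on GnRH release
    gnrh = 0.5 * gnrh + drive                        # hypothalamus -> portal system
    gonadotropin = 0.5 * gonadotropin + 0.5 * gnrh   # anterior pituitary
    gonadal = 0.5 * gonadal + 0.5 * gonadotropin     # gonads (androgens, estrogens, progestins)
    return gnrh, gonadotropin, gonadal

levels = (0.0, 0.0, 0.0)
for _ in range(50):
    levels = step(*levels)
print("Settled levels (GnRH, gonadotropin, gonadal hormone):",
      tuple(round(x, 2) for x in levels))

Weakening the feedback constant in the sketch lets the gonadal level settle at a higher value, which mirrors the point made above: high circulating levels tend to decrease subsequent release, and low levels tend to increase it.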

Armed with this general perspective of neuroendocrine function, you are ready to consider how gonadal hormones direct sexual development and activate adult sexual behavior.

13.2 Hormones and Sexual Development of the Body

You have undoubtedly noticed that humans are dimorphic—that is, they come in two standard models: female and male. This section describes how the development of female and male bodily characteristics is directed by hormones.

FIGURE 13.5 A summary model of the regulation of gonadal hormones.

Sexual differentiation in mammals begins at fertilization with the production of one of two different kinds of zygotes: either one with an XX (female) pair of sex chromosomes or one with an XY (male) pair. It is the genetic information on the sex chromosomes that normally determines whether development will occur along female or male lines. But be cautious here: Do not fall into the seductive embrace of the men-are-men-and-women-are-women assumption. Do not begin by assuming that there are two parallel but opposite genetic programs for sexual development, one for female development and one for male development. As you are about to learn, sexual development unfolds according to an entirely different principle, one that males who still stubbornly cling to notions of male preeminence find unsettling. This principle is that we are all genetically programmed to develop female bodies; genetic males develop male bodies only because their fundamentally female program of development is overruled.

Thinking Creatively

Fetal Hormones and Development of Reproductive Organs

Gonads

Figure 13.6 illustrates the structure of the gonads as they appear 6 weeks after fertilization. Notice that at this stage of development, each fetus, regardless of its genetic sex, has the same pair of gonadal structures, called primordial gonads (primordial means “existing at the beginning”). Each primordial gonad has an outer covering, or cortex, which has the potential to develop into an ovary; and each has an internal core, or medulla, which has the potential to develop into a testis.

Six weeks after conception, the Sry gene on the Y chromosome of the male triggers the synthesis of Sry protein (see Arnold, 2004; Wu et al., 2009), and this protein causes the medulla of each primordial gonad to grow and to develop into a testis. There is no female counterpart of Sry protein; in the absence of Sry protein, the cortical cells of the primordial gonads automatically develop into ovaries. Accordingly, if Sry protein is injected into a genetic female fetus 6 weeks after conception, the result is a genetic female with testes; or if drugs that block the effects of Sry protein are injected into a male fetus, the result is a genetic male with ovaries. Such “mixed-sex” individuals expose in a dramatic fashion the weakness of mamawawa thinking (thinking of “male” and “female” as mutually exclusive, opposite categories).

Internal Reproductive Ducts

Six weeks after fertilization, both males and females have two complete sets of reproductive ducts. They have a male Wolffian system , which has the capacity to develop into the male reproductive ducts (e.g., the seminal vesicles, which hold the fluid in which sperm cells are ejaculated; and the vas deferens, through which the sperm cells travel to the seminal vesicles). And they have a female Müllerian system , which has the capacity to develop into the female ducts (e.g., the uterus; the upper part of the vagina; and the fallopian tubes, through which ova travel from the ovaries to the uterus, where they can be fertilized).

In the third month of male fetal development, the testes secrete testosterone and Müllerian-inhibiting substance . As Figure 13.7 illustrates, the testosterone stimulates the development of the Wolffian system, and the Müllerian-inhibiting substance causes the Müllerian system to degenerate and the testes to descend into the scrotum —the sac that holds the testes outside the body cavity. Because it is testosterone—not the sex chromosomes—that triggers Wolffian development, genetic females who are injected with testosterone during the appropriate fetal period develop male reproductive ducts along with their female ones.

FIGURE 13.6 The development of an ovary and a testis from the cortex and the medulla, respectively, of the primordial gonadal structure that is present 6 weeks after conception.

The differentiation of the internal ducts of the female reproductive system (see Figure 13.7) is not under the control of ovarian hormones; the ovaries are almost completely inactive during fetal development. The development of the Müllerian system occurs in any fetus that is not exposed to testicular hormones during the critical fetal period. Accordingly, normal female fetuses, ovariectomized female fetuses, and orchidectomized male fetuses all develop female reproductive ducts (Jost, 1972). Ovariectomy is the removal of the ovaries, and orchidectomy is the removal of the testes (the Greek word orchis means “testicle”). Gonadectomy , or castration, is the surgical removal of gonads—either ovaries or testes.

FIGURE 13.7 The development of the internal ducts of the male and female reproductive systems from the Wolffian and Müllerian systems, respectively.

External Reproductive Organs

There is a basic difference between the differentiation of the external reproductive organs and the differentiation of the internal reproductive organs (i.e., the gonads and reproductive ducts). As you have just read, every normal fetus develops separate precursors for the male (medulla) and female (cortex) gonads and for the male (Wolffian system) and female (Müllerian system) reproductive ducts; then, only one set, male or female, develops. In contrast, both male and female genitals —external reproductive organs—develop from the same precursor. This bipotential precursor and its subsequent differentiation are illustrated in Figure 13.8 on page 336.

 Simulate Differentiating the External Genitals: The Penis and the Vagina www.mypsychlab.com

In the second month of pregnancy, the bipotential precursor of the external reproductive organs consists of four parts: the glans, the urethral folds, the lateral bodies, and the labioscrotal swellings. Then it begins to differentiate. The glans grows into the head of the penis in the male or the clitoris in the female; the urethral folds fuse in the male or enlarge to become the labia minora in the female; the lateral bodies form the shaft of the penis in the male or the hood of the clitoris in the female; and the labioscrotal swellings form the scrotum in the male or the labia majora in the female.

Like the development of the internal reproductive ducts, the development of the external genitals is controlled by the presence or absence of testosterone. If testosterone is present at the appropriate stage of fetal development, male external genitals develop from the bipotential precursor; if testosterone is not present, development of the external genitals proceeds along female lines.

Puberty: Hormones and Development of Secondary Sex Characteristics

During childhood, levels of circulating gonadal hormones are low, reproductive organs are immature, and males and females differ little in general appearance. This period of developmental quiescence ends abruptly with the onset of puberty—the transitional period between childhood and adulthood during which fertility is achieved, the adolescent growth spurt occurs, and the secondary sex characteristics develop. Secondary sex characteristics are those features other than the reproductive organs that distinguish sexually mature men and women. The body changes that occur during puberty are illustrated in Figure 13.9 on page 337.

Puberty is associated with an increase in the release of hormones by the anterior pituitary (see Grumbach, 2002). The increase in the release of growth hormone —the only anterior pituitary hormone that does not have a gland as its primary target—acts directly on bone and muscle tissue to produce the pubertal growth spurt. Increases in the release of gonadotropic hormone and adrenocorticotropic hormone cause the gonads and adrenal cortex to increase their release of gonadal and adrenal hormones, which in turn initiate the maturation of the genitals and the development of secondary sex characteristics.

The general principle guiding normal pubertal sexual maturation is a simple one: In pubertal males, androgen levels are higher than estrogen levels, and masculinization is the result; in pubertal females, the estrogens predominate, and the result is feminization. Individuals castrated prior to puberty do not become sexually mature unless they receive replacement injections of androgens or estrogens.

But even during puberty, its only period of relevance, the men-are-men-and-women-are-women assumption stumbles badly. You see, androstenedione , an androgen that is released primarily by the adrenal cortex, is normally responsible for the growth of pubic hair and axillary hair (underarm hair) in females. It is hard to take seriously the practice of referring to androgens as “male hormones” when one of them is responsible for the development of the female pattern of pubic hair growth. The male pattern is a pyramid, and the female pattern is an inverted pyramid (see Figure 13.9).

FIGURE 13.8 The development of male and female external reproductive organs from the same bipotential precursor.

Do you remember how old you were when you started to go through puberty? In most North American and European countries, puberty begins at about 10.5 years of age for girls and 11.5 years for boys. I am sure you would have been unhappy if you had not started puberty until you were 15 or 16, but this was the norm in North America and Europe just a century and a half ago. Presumably, this acceleration of puberty has resulted from improvements in dietary, medical, and socioeconomic conditions.

13.3 Hormones and Sexual Development of Brain and Behavior

Biopsychologists have been particularly interested in the effects of hormones on the sexual differentiation of the brain and the effects of brain differences on behavior. This section reveals how seminal studies conducted in the 1930s generated theories that have gradually morphed, under the influence of subsequent research, into our current views. But first, let’s take a quick look at the differences between male and female brains.

Sex Differences in the Brain

The brains of men and women may look the same on casual inspection, and it may be politically correct to believe that they are—but they are not. The brains of men tend to be about 15% larger than those of women, and many other anatomical differences between average male and female brains have been documented. There are statistically significant sex differences in the volumes of various nuclei and fiber tracts, in the numbers and types of neural and glial cells that compose various structures, and in the numbers and types of synapses that connect the cells in various structures. Sexual dimorphisms (male–female structural differences) of the brain are typically studied in nonhuman mammals, but many have also been documented in humans (see Arnold, 2003; Cahill, 2005, 2006; de Vries & Södersten, 2009).

Let’s begin with the first functional sex difference to be identified in mammalian brains. It set the stage for everything that followed.

First Discovery of a Sex Difference in Mammalian Brain Function

The first attempts to discover sex differences in the mammalian brain focused on the factors that control the development of the steady and cyclic patterns of gonadotropin release in males and females, respectively. The seminal experiments were conducted by Pfeiffer in 1936. In his experiments, some neonatal rats (males and females) were gonadectomized and some were not, and some received gonad transplants (ovaries or testes) and some did not.

FIGURE 13.9 The changes that normally occur in males and females during puberty.

Evolutionary Perspective

Remarkably, Pfeiffer found that gonadectomizing neonatal rats of either genetic sex caused them to develop into adults with the female cyclic pattern of gonadotropin release. In contrast, transplantation of testes into gonadectomized or intact female neonatal rats caused them to develop into adults with the steady male pattern of gonadotropin release. Transplantation of ovaries had no effect on the pattern of hormone release. Pfeiffer concluded that the female cyclic pattern of gonadotropin release develops unless the preprogrammed female cyclicity is overridden by testosterone during perinatal development (see Harris & Levine, 1965).

Pfeiffer incorrectly concluded that the presence or absence of testicular hormones in neonatal rats influenced the development of the pituitary because he was not aware of something we know today: The release of gonadotropins from the anterior pituitary is controlled by the hypothalamus. Once this was discovered, it became apparent that Pfeiffer’s experiments had provided the first evidence of the role of perinatal (around the time of birth) androgens in overriding the preprogrammed cyclic female pattern of gonadotropin release from the hypothalamus and initiating the development of the steady male pattern. This 1960s modification of Pfeiffer’s theory of brain differentiation to include the hypothalamus was consistent with the facts of brain differentiation as understood at that time, but subsequent research necessitated major revisions. The first of these major revisions became known as the aromatization hypothesis.

Neuroplasticity

Aromatization Hypothesis

What is aromatization? All gonadal and adrenal sex hormones are steroid hormones, and because all steroid hormones are derived from cholesterol, they have similar structures and are readily converted from one to the other. For example, a slight change to the testosterone molecule, catalyzed by the enzyme aromatase (an enzyme is a protein that facilitates a biochemical reaction without being consumed by it), converts testosterone to estradiol. This process is called aromatization (see Balthazart & Ball, 1998).

According to the aromatization hypothesis , perinatal testosterone does not directly masculinize the brain; the brain is masculinized by estradiol that has been aromatized from perinatal testosterone. Although the idea that estradiol—the alleged female hormone—masculinizes the brain may seem counterintuitive, there is strong evidence for it. Most of the evidence is of two types, both coming from experiments on rats and mice: (1) findings demonstrating masculinizing effects on the brain of early estradiol injections, and (2) findings showing that masculinization of the brain does not occur in response to testosterone that is administered with agents that block aromatization or in response to androgens that cannot be aromatized (e.g., dihydrotestosterone).

How do genetic females of species whose brains are masculinized by estradiol keep from being masculinized by their mothers’ estradiol, which circulates through the fetal blood supply? Alpha fetoprotein is the answer. Alpha fetoprotein is present in the blood of rats during the perinatal period, and it deactivates circulating estradiol by binding to it (Bakker et al., 2006; Bakker & Baum, 2007; De Mees et al., 2006). How, then, does estradiol masculinize the brain of the male fetus in the presence of the deactivating effects of alpha fetoprotein? Because testosterone is immune to alpha fetoprotein, it can travel unaffected from the testes to brain cells, where it is converted to estradiol. The estradiol is not deactivated there because alpha fetoprotein does not readily penetrate the blood–brain barrier.

Modern Perspectives on Sexual Differentiation of Mammalian Brains

The view that the female program is the default program of brain development and is normally overridden in genetic males by perinatal exposure to testosterone aromatized to estradiol remained the preeminent theory of the sexual differentiation of the brain as long as research focused on the rat hypothalamus. Once studies of brain differentiation began to include other parts of the brain and other species, it became apparent that no single mechanism can account for the development of sexual dimorphisms of mammalian brains. The following findings have been particularly influential in shaping current views:

• Various sexual differences in brain structure and function have been found to develop by different mechanisms; for example, aromatase is found in only a few areas of the rat brain (e.g., the hypothalamus), and it is only in these areas that aromatization is critical for testosterone’s masculinizing effects (see Ball & Balthazart, 2006; Balthazart & Ball, 2006).

• Sexual differences in the brain have been found to develop by different mechanisms in different mammalian species (see McCarthy, Wright, & Schwartz, 2009); for example, aromatization plays a less prominent role in primates than in rats and mice (see Zuloaga et al., 2008).

Evolutionary Perspective

• Various sex differences in the brain have been found to develop at different stages of development (Bakker & Baum, 2007); for example, many differences do not develop until puberty (Ahmed et al., 2008; Sisk & Zehr, 2005), a possibility ignored by early theories.

• Sex chromosomes have been found to influence brain development independent of their effect on hormones (Arnold, 2009; Jazin & Cahill, 2010); for example, different patterns of gene expression exist in the brains of male and female mice before the gonads become functional (Dewing et al., 2003).

• Although the female program of brain development had been thought to proceed normally in the absence of gonadal steroids, recent evidence suggests that estradiol plays an active role; knockout mice without the gene that forms estradiol receptors do not display a normal female pattern of brain development (see Bakker & Baum, 2007).

In short, there is overwhelming evidence that various sexual differences in mammalian brains emerge at different stages of development under different genetic and hormonal influences (see Wagner, 2006). Although the conventional view that a female program of development is the default does an excellent job of explaining differentiation of the reproductive organs, it falters badly when it comes to differentiation of the brain.

In studying the many sexual differences of mammalian brains, it is easy to lose sight of the main point: We still do not understand how any of the anatomical differences that have been identified influence behavior.

Perinatal Hormones and Behavioral Development

In view of the fact that perinatal hormones influence the development of the brain, it should come as no surprise that they also influence the development of behavior. Much of the research on perinatal hormones and behavioral development was conducted before the discoveries about brain development that we have just considered. Consequently, most of the studies have been based on the idea of a female default program that can be overridden by testosterone and have assessed the effects of perinatal testosterone exposure on reproductive behaviors in laboratory animals.

Phoenix and colleagues (1959) were among the first to demonstrate that the perinatal injection of testosterone masculinizes and defeminizes a genetic female’s adult copulatory behavior. First, they injected pregnant guinea pigs with testosterone. Then, when the litters were born, the researchers ovariectomized the female offspring. Finally, when these ovariectomized female guinea pigs reached maturity, the researchers injected them with testosterone and assessed their copulatory behavior. Phoenix and his colleagues found that the females that had been exposed to perinatal testosterone displayed more male-like mounting behavior in response to testosterone injections in adulthood than did adult females that had not been exposed to perinatal testosterone. And when, as adults, the female guinea pigs were injected with progesterone and estradiol and mounted by males, they displayed less lordosis —the intromission-facilitating arched-back posture that signals female rodent receptivity.

Evolutionary Perspective

In a study complementary to that of Phoenix and colleagues, Grady, Phoenix, and Young (1965) found that the lack of early exposure of male rats to testosterone both feminizes and demasculinizes their copulatory behavior as adults. Male rats castrated shortly after birth failed to display the normal male copulatory pattern of mounting, intromission (penis insertion), and ejaculation (ejection of sperm) when they were treated with testosterone and given access to a sexually receptive female; and when they were injected with estrogen and progesterone as adults, they exhibited more lordosis than did uncastrated controls.

The aromatization of perinatal testosterone to estradiol seems to be important for both the defeminization and the masculinization of rodent copulatory behavior (Goy & McEwen, 1980; Shapiro, Levine, & Adler, 1980). In contrast, aromatization does not seem to be critical for these effects in monkeys (Wallen, 2005).

When it comes to the effects of perinatal testosterone on behavioral development, timing is critical. The ability of single injections of testosterone to masculinize and defeminize the rat brain seems to be restricted to the first 11 days after birth.

Because much of the research on hormones and behavioral development has focused on the copulatory act, we know less about the role of hormones in the development of proceptive behaviors (solicitation behaviors) and in the development of gender-related behaviors that are not directly related to reproduction. However, perinatal testosterone has been reported to disrupt the proceptive hopping, darting, and ear wiggling of receptive female rats; to increase the aggressiveness of female mice; to disrupt the maternal behavior of female rats; and to increase rough social play in female monkeys and rats.

Ethical considerations prohibit experimental studies of the developmental effects of hormones in humans. However, there have been many correlational studies of clinical cases and of ostensibly healthy individuals who received abnormal prenatal exposure to androgens (due to their own pathology or to drugs taken by their mothers). The results have been far from impressive. Cohen-Bendahan, van de Beek, and Berenbaum (2005) reviewed the extensive research literature and concluded that, despite many inconsistencies, the weight of evidence indicated that prenatal androgen exposure contributes to the differences in interests, spatial ability, and aggressiveness typically observed between men and women. However, there was no convincing evidence that differences in prenatal androgen exposure contribute to behavioral differences observed among women or among men.

Before you finish this subsection, I want to clarify an important point. If you are like many of my students, you may be wondering why biopsychologists who study the development of male–female behavioral differences always measure masculinization separately from defeminization and feminization separately from demasculinization. If you think that masculinization and defeminization are the same thing and that feminization and demasculinization are the same thing, you have likely fallen into the trap of the men-are-men-and-women-are-women assumption—that is, into the trap of thinking of maleness and femaleness as discrete, mutually exclusive, opposite categories. In fact, male behaviors and female behaviors can co-exist in the same individual, and they do not necessarily change in opposite directions if the individual receives physiological treatment such as hormones or brain lesions. For example, “male” behaviors (e.g., mounting receptive females) have been observed in the females of many different mammalian species, and “female” behaviors (e.g., lordosis) have been observed in males (see Dulac & Kimchi, 2007). And lesions in medial preoptic areas have been shown to abolish male reproductive behaviors in both male and female rats, without affecting female behaviors (Singer, 1968). Think about this idea carefully; it plays an important role in later sections of the chapter.

Thinking Creatively

Scan Your Brain

Before you proceed to a consideration of three cases of exceptional human sexual development, scan your brain to see whether you understand the basics of normal sexual development. Fill in the blanks in the following sentences. The correct answers are provided at the end of the exercise. Review material related to your errors and omissions before proceeding.

1. Six weeks after conception, the Sry gene on the Y chromosome of the human male triggers the production of ______.

2. In the absence of the Sry protein, the cortical cells of the primordial gonads develop into ______.

3. In the third month of male fetal development, the testes secrete testosterone and ______ substance.

4. The hormonal factor that triggers the development of the human Müllerian system is the lack of ______ around the third month of fetal development.

5. The scrotum and the ______ develop from the same bipotential precursor.

6. The female pattern of cyclic ______ release from the anterior pituitary develops in adulthood unless androgens are present in the body during the perinatal period.

7. It has been hypothesized that perinatal testosterone must first be changed to estradiol before it can masculinize the male rat brain. This is called the ______ hypothesis.

8. ______ is normally responsible for pubic and axillary hair growth in human females during puberty.

9. Girls usually begin puberty ______ boys do.

10. The simplistic, seductive, but incorrect assumption that sexual differentiation occurs because male and female sex hormones trigger programs of development that are parallel but opposite to one another has been termed the ______.

Scan Your Brain answers:

(1) Sry protein,

(2) ovaries,

(3) Müllerian-inhibiting,

(4) androgens (or testosterone),

(5) labia majora,

(6) gonadotropin,

(7) aromatization,

(8) Androstenedione,

(9) before,

(10) mamawawa.

13.4 Three Cases of Exceptional Human Sexual Development

This section discusses three cases of abnormal sexual development. I am sure you will be intrigued by these three cases, but that is not the only reason I have chosen to present them. My main reason is expressed by a proverb: The exception proves the rule. Most people think this proverb means that the exception “proves” the rule in the sense that it establishes its truth, but this is clearly wrong: The truth of a rule is challenged by, not confirmed by, exceptions to it. The word proof comes from the Latin probare, which means “to test”—as in proving ground or printer’s proof—and this is the sense in which it is used in the proverb. Hence, the proverb means that the explanation of exceptional cases is a major challenge for any theory.

So far in this chapter, you have learned the “rules” according to which hormones seem to influence normal sexual development. Now, three exceptional cases are offered to prove (to test) these rules.

The Case of Anne S., the Woman Who Wasn’t

Anne S., an attractive 26-year-old female, sought treatment for two sex-related disorders: lack of menstruation and pain during sexual intercourse (Jones & Park, 1971). She sought help because she and her husband of 4 years had been trying without success to have children, and she correctly surmised that her lack of a menstrual cycle was part of the problem. A physical examination revealed that Anne was a healthy young woman. Her only readily apparent peculiarity was the sparseness and fineness of her pubic and axillary hair. Examination of her external genitals revealed no abnormalities; however, there were some problems with her internal genitals. Her vagina was only 4 centimeters long, and her uterus was underdeveloped.

Clinical Implications

At the start of this chapter, I said that you would encounter some remarkable things, and the diagnosis of Anne’s case certainly qualifies as one of them. Anne’s doctors concluded that her sex chromosomes were those of a man. No, this is not a misprint; they concluded that Anne, the attractive young housewife, had the genes of a genetic male. Three lines of evidence supported their diagnosis. First, analysis of cells scraped from the inside of Anne’s mouth revealed that they were of the male XY type. Second, a tiny incision in Anne’s abdomen, which enabled Anne’s physicians to look inside, revealed a pair of internalized testes but no ovaries. Finally, hormone tests revealed that Anne’s hormone levels were those of a male.

Anne suffers from complete androgenic insensitivity syndrome ; all her symptoms stem from a mutation to the androgen receptor gene that rendered her androgen receptors totally unresponsive (see Fink et al., 1999; Goldstein, 2000). Complete androgen insensitivity is rare, occurring in about 5 of 100,000 male births.

During development, Anne’s testes released normal amounts of androgens for a male, but her body could not respond to them because of the mutation to her androgen receptor gene; and thus, her development proceeded as if no androgens had been released. Her external genitals, her brain, and her behavior developed along female lines, without the effects of androgens to override the female program, and her testes could not descend from her body cavity with no scrotum for them to descend into. Furthermore, Anne did not develop normal internal female reproductive ducts because, like other genetic males, her testes released Müllerian-inhibiting substance; that is why her vagina was short and her uterus undeveloped. At puberty, Anne’s testes released enough estrogens to feminize her body in the absence of the counteracting effects of androgens; however, adrenal androstenedione was not able to stimulate the growth of pubic and axillary hair.

Although the samples are small, patients with complete androgen insensitivity have been found to be comparable to genetic females. All aspects of their behavior that have been studied—including gender identity, sexual orientation, interests, and cognitive abilities—have been found to be typically female (see Cohen-Bendahan, van de Beek, & Berenbaum, 2005).

An interesting issue of medical ethics is raised by the androgenic insensitivity syndrome. Many people believe that physicians should always disclose all relevant findings to their patients. If you were Anne’s physician, would you tell her that she is a genetic male? Would you tell her husband? Her doctor did not. Anne’s vagina was surgically enlarged, she was counseled to consider adoption, and, as far as I know, she is still happily married and unaware of her genetic sex. On the other hand, I have heard from several women who suffer from partial androgenic insensitivity, and they recommended full disclosure. They had faced a variety of sexual ambiguities throughout their lives, and learning the cause helped them.

The Case of the Little Girl Who Grew into a Boy

The patient—let’s call her Elaine—sought treatment in 1972. Elaine was born with somewhat ambiguous external genitals, but she was raised by her parents as a girl without incident, until the onset of puberty, when she suddenly began to develop male secondary sex characteristics. This was extremely distressing. Her treatment had two aspects: surgical and hormonal. Surgical treatment was used to increase the size of her vagina and decrease the size of her clitoris; hormonal treatment was used to suppress androgen release so that her own estrogen could feminize her body. Following treatment, Elaine developed into an attractive young woman—narrow hips and a husky voice being the only signs of her brush with masculinity. Fifteen years later, she was married and enjoying a normal sex life (Money & Ehrhardt, 1972).

Clinical Implications

Elaine suffered from adrenogenital syndrome, which is the most common disorder of sexual development, affecting about 1 in 10,000. Adrenogenital syndrome is caused by congenital adrenal hyperplasia —a congenital deficiency in the release of the hormone cortisol from the adrenal cortex, which results in compensatory adrenal hyperactivity and the excessive release of adrenal androgens. This has little effect on the development of males, other than accelerating the onset of puberty, but it has major effects on the development of genetic females. Females who suffer from the adrenogenital syndrome are usually born with an enlarged clitoris and partially fused labia. Their gonads and internal ducts are usually normal because the adrenal androgens are released too late to stimulate the development of the Wolffian system.

 Watch Intersexuals www.mypsychlab.com

Most female cases of adrenogenital syndrome are diagnosed at birth. In such cases, the abnormalities of the external genitals are immediately corrected, and cortisol is administered to reduce the levels of circulating adrenal androgens. Following early treatment, adrenogenital females grow up to be physically normal except that the onset of menstruation is likely to be later than normal. This makes them good subjects for studies of the effects of fetal androgen exposure on psychosexual development.

Adrenogenital teenage girls who have received early treatment tend to display more tomboyishness, greater strength, and more aggression than most teenage girls, and they tend to prefer boys’ clothes and toys, play mainly with boys, and daydream about future careers rather than motherhood (e.g., Collaer et al., 2008; Hines, 2003; Matthews et al., 2009). However, it is important not to lose sight of the fact that many teenage girls display similar characteristics—and why not? Accordingly, the behavior of treated adrenogenital females, although tending toward the masculine, is usually within the range considered to be normal female behavior by the current standards of our culture.

The most interesting questions about the development of females with adrenogenital syndrome concern their romantic and sexual preferences as adults. They seem to lag behind normal females in dating and marriage—perhaps because of the delayed onset of their menstrual cycle. Most are heterosexual, although a few studies have found an increased tendency for these women to express interest in bisexuality or homosexuality and a tendency to be less involved in heterosexual relationships (see Gooren, 2006). Complicating the situation further is the fact that these slight differences may not be direct consequences of early androgen exposure but arise from the fact that some adrenogenital girls have ambiguous genitalia and other male characteristics (e.g., body hair), which may result in different experiential influences.

Thinking Creatively

Prior to the development of cortisol therapy in 1950, genetic females with adrenogenital syndrome were left untreated. Some were raised as boys and some as girls, but the direction of their pubertal development was unpredictable. In some cases, adrenal androgens predominated and masculinized their bodies; in others, ovarian estrogens predominated and feminized their bodies. Thus, some who were raised as boys were transformed at puberty into women and some who were raised as girls were transformed into men, with devastating emotional consequences.

 Listen Psychology in the News: Sexuality and Gender www.mypsychlab.com

The Case of the Twin Who Lost His Penis

One of the most famous cases in the literature on sexual development is that of a male identical twin whose penis was accidentally destroyed during circumcision at the age of 7 months. Because there was no satisfactory way of surgically replacing the lost penis, a respected expert in such matters, John Money, recommended that the boy be castrated, that an artificial vagina be created, that the boy be raised as a girl, and that estrogen be administered at puberty to feminize the body. After a great deal of consideration and anguish, the parents followed Money’s advice.

Money’s (1975) report of this case of ablatio penis has been influential. It has been seen by some as the ultimate test of the nature–nurture controversy (see Chapter 2) with respect to the development of sexual identity and behavior. It seemed to pit the masculinizing effects of male genes and male prenatal hormones against the effects of being reared as a female. And the availability of a genetically identical control subject, the twin brother, made the case all the more interesting. According to Money, the outcome of this case strongly supported the social-learning theory of sexual identity. Money reported in 1975, when the patient was 12, that “she” had developed as a normal female, thus confirming his prediction that being gonadectomized, having the genitals surgically altered, and being raised as a girl would override the masculinizing effects of male genes and early androgens.

Clinical Implications

A long-term follow-up study published by impartial experts tells an entirely different story (Diamond & Sigmundson, 1997). Despite having female genitalia and being treated as a female, John/Joan developed along male lines. Apparently, the organ that determines the course of psychosocial development is the brain, not the genitals (Reiner, 1997). The following paraphrases from Diamond and Sigmundson’s report give you a glimpse of John/Joan’s life:

From a very early age, Joan tended to act in a masculine way. She preferred boys’ activities and games and displayed little interest in dolls, sewing, or other conventional female activities. When she was four, she was watching her father shave and her mother put on lipstick, and she began to put shaving cream on her face. When she was told to put makeup on like her mother, she said, “No, I don’t want no makeup, I want to shave.”

“Things happened very early. As a child, I began to see that I felt different about a lot of things than I was supposed to. I suspected I was a boy from the second grade on.”

Despite the absence of a penis, Joan often tried to urinate while standing, and she would sometimes go to the boys’ lavatory.

Joan was attractive as a girl, but as soon as she moved or talked her masculinity became apparent. She was teased incessantly by the other girls, and she often retaliated violently, which resulted in her expulsion from school.

Joan was put on an estrogen regimen at the age of 12 but rebelled against it. She did not want to feminize; she hated her developing breasts and refused to wear a bra.

At 14, Joan decided to live as a male and switched to John. At that time, John’s father tearfully revealed John’s entire early history to him. “All of a sudden everything clicked. For the first time I understood who and what I was.”

John requested androgen treatment, a mastectomy (surgical removal of breasts), and phalloplasty (surgical creation of a penis). He became a handsome and popular young man. He married at the age of 25 and adopted his wife's children. He is strictly heterosexual.

John’s ability to ejaculate and experience orgasm returned following his androgen treatments. However, his early castration permanently eliminated his reproductive capacity.

“John” remained bitter about his early treatment and his inability to produce offspring. To save others from his experience, he cooperated in writing his biography, As Nature Made Him (Colapinto, 2000). His real name was David Reimer (see Figure 13.10). David never recovered from his emotional scars. On May 4, 2004, he committed suicide.

David Reimer’s case suggests that the clinical practice of surgically modifying a person’s sex at birth should be curtailed. Any such irrevocable treatments should await early puberty and the emergence of the patient’s sexual identity and sexual attraction. At that stage, a compatible course of treatment can be selected.

Do the Exceptional Cases Prove the Rule?

Do current theories of hormones and sexual development pass the test of the three preceding cases of exceptional sexual development? In my view, the answer is “yes.” Although current theories do not supply all of the answers, especially when it comes to brain dimorphisms and behavior, they have contributed greatly to the understanding of exceptional patterns of sexual differentiation of the body.

Thinking Creatively

FIGURE 13.10 David Reimer, the twin whose penis was accidentally destroyed.

For centuries, cases of abnormal sexual development have befuddled scholars, but now, armed with a basic understanding of the role of hormones in sexual development, they have been able to make sense of some of the most puzzling of such cases. Moreover, the study of sexual development has pointed the way to effective treatments. Judge these contributions for yourself by comparing your current understanding of these three cases with the understanding that you would have had if you had encountered them before beginning this chapter.

Notice one more thing about the three cases: Each of the three subjects was male in some respects and female in others. Accordingly, each case is a serious challenge to the men-are-men-and-women-are-women assumption: Male and female are not opposite, mutually exclusive categories.

13.5 Effects of Gonadal Hormones on Adults

Once an individual reaches sexual maturity, gonadal hormones begin to play a role in activating reproductive behavior. These activational effects are the focus of the first two parts of this section. They deal with the role of hormones in activating the reproduction-related behavior of men and women, respectively. The third part of this section deals with anabolic steroids, and the fourth describes the neuroprotective effects of estradiol.

Male Reproduction-Related Behavior and Testosterone

The important role played by gonadal hormones in the activation of male sexual behavior is clearly demonstrated by the asexualizing effects of orchidectomy. Bremer (1959) reviewed the cases of 157 orchidectomized Norwegians. Many had committed sex-related offenses and had agreed to castration to reduce the length of their prison terms.

Two important generalizations can be drawn from Bremer’s study. The first is that orchidectomy leads to a reduction in sexual interest and behavior; the second is that the rate and the degree of the loss are variable. About half the men became completely asexual within a few weeks of the operation; others quickly lost their ability to achieve an erection but continued to experience some sexual interest and pleasure; and a few continued to copulate successfully, although somewhat less enthusiastically, for the duration of the study. There were also body changes: a reduction of hair on the trunk, extremities, and face; the deposition of fat on the hips and chest; a softening of the skin; and a reduction in strength.

Of the 102 sex offenders in Bremer’s study, only 3 were reconvicted of sex offenses. Accordingly, he recommended castration as an effective treatment of last resort for male sex offenders.

Why do some men remain sexually active for months after orchidectomy, despite the fact that testicular hormones are cleared from their bodies within days? It has been suggested that adrenal androgens may play some role in the maintenance of sexual activity in some castrated men, but there is no direct evidence for this hypothesis.

Orchidectomy removes, in one fell swoop—or, to put it more precisely, in two fell swoops—a pair of glands that release many hormones. Because testosterone is the major testicular hormone, the major symptoms of orchidectomy have been generally attributed to the loss of testosterone, rather than to the loss of some other testicular hormone or to some nonhormonal consequence of the surgery. The therapeutic effects of replacement injections of testosterone have confirmed this assumption.

The Case of the Man Who Lost and Regained His Manhood

The very first case report of the effects of testosterone replacement therapy concerned an unfortunate 38-year-old World War I veteran, who was castrated in 1918 at the age of 19 by a shell fragment that removed his testes but left his penis undamaged.

Clinical Implications

His body was soft; it was as if he had almost no muscles at all; his hips had grown wider and his shoulders seemed narrower than when he was a soldier. He had very little drive….

Just the same this veteran had married, in 1924, and you’d wonder why, because the doctors had told him he would surely be impotent [unable to achieve an erection]. . . . he made some attempts at sexual intercourse “for his wife’s satisfaction” but he confessed that he had been unable to satisfy her at all….

Dr. Foss began injecting it [testosterone] into the feeble muscles of the castrated man….

After the fifth injection, erections were rapid and prolonged. . . . But that wasn’t all. During twelve weeks of treatment he had gained eighteen pounds, and all his clothes had become too small….testosterone had resurrected a broken man to a manhood he had lost forever. (de Kruif, 1945, pp. 97–100)

Since this first clinical trial, testosterone has breathed sexuality into the lives of many men. Testosterone does not, however, eliminate the sterility (inability to reproduce) of males who lack functional testes.

The fact that testosterone is necessary for male sexual behavior has led to two widespread assumptions: (1) that the level of a man’s sexuality is a function of the amount of testosterone he has in his blood, and (2) that a man’s sex drive can be increased by increasing his testosterone levels. Both assumptions are incorrect. Sex drive and testosterone levels are uncorrelated in healthy men, and testosterone injections do not increase their sex drive.

Evolutionary Perspective

It seems that each healthy male has far more testosterone than is required to activate the neural circuits that produce his sexual behavior and that having more than the minimum is of no advantage in this respect (Sherwin, 1988). A classic experiment by Grunt and Young (1952) clearly illustrates this point.

First, Grunt and Young rated the sexual behavior of each of the male guinea pigs in their experiment. Then, on the basis of the ratings, the researchers divided the male guinea pigs into three experimental groups: low, medium, and high sex drive. Following castration, the sexual behavior of all of the guinea pigs fell to negligible levels within a few weeks (see Figure 13.11), but it recovered after the initiation of a series of testosterone replacement injections. The important point is that although each subject received the same, very large replacement injections of testosterone, the injections simply returned each to its previous level of copulatory activity. The conclusion is clear: With respect to the effects of testosterone on sexual behavior, more is not necessarily better.

Dihydrotestosterone, a nonaromatizable androgen, restores the copulatory behavior of castrated male primates (e.g., Davidson, Kwan, & Greenleaf, 1982); however, it fails to restore the copulatory behavior of castrated male rodents (see MacLusky & Naftolin, 1981). These findings indicate that the restoration of copulatory behavior by testosterone occurs by different mechanisms in rodents and primates: It appears to be a direct effect of testosterone in primates, but appears to be produced by estradiol aromatized from testosterone in rodents (see Ball & Balthazart, 2006).

Female Reproduction-Related Behavior and Gonadal Hormones

FIGURE 13.11 The sexual behavior of male guinea pigs with low, medium, and high sex drive. Sexual behavior was disrupted by castration and returned to its original level by very large replacement injections of testosterone. (Based on Grunt & Young, 1952.)

Sexually mature female rats and guinea pigs display 4-day cycles of gonadal hormone release. There is a gradual increase in the secretion of estrogens by the developing follicle (ovarian structure in which eggs mature) in the 2 days prior to ovulation, followed by a sudden surge in progesterone as the egg is released. These surges of estrogens and progesterone initiate estrus—a period of 12 to 18 hours during which the female is fertile, receptive (likely to assume the lordosis posture when mounted), proceptive (likely to engage in behaviors that serve to attract the male), and sexually attractive (smelling of chemicals that attract males).

Evolutionary Perspective

The close relation between the cycle of hormone release and the estrous cycle—the cycle of sexual receptivity—in female rats and guinea pigs and in many other mammalian species suggests that female sexual behavior in these species is under hormonal control. The effects of ovariectomy confirm this conclusion; ovariectomy of female rats and guinea pigs produces a rapid decline of both proceptive and receptive behaviors. Furthermore, estrus can be induced in ovariectomized rats and guinea pigs by an injection of estradiol followed about a day and a half later by an injection of progesterone.

Women are different from female rats, guinea pigs, and other mammals when it comes to the hormonal control of their sexual behavior: Female primates are the only female mammals that are motivated to copulate during periods of nonfertility (Ziegler, 2007). Moreover, ovariectomy has surprisingly little direct effect on either their sexual motivation or their sexual behavior (e.g., Martin, Roberts, & Clayton, 1980). Other than sterility, the major consequence of ovariectomy in women is a decrease in vaginal lubrication.

Numerous studies have investigated the role of estradiol in the sex drive of women by relating various measures of their sexual interest and activity to phases of their menstrual cycles. The results of this research are difficult to interpret. Some women do report that their sex drive is related to their menstrual cycles, and many studies have reported statistically significant correlations. The confusion arises because many studies have found no significant correlations, and because many different patterns of correlation have been reported (see Regan, 1996; Sanders & Bancroft, 1982). No single pattern has emerged that characterizes fluctuations in human female sexual motivation. Paradoxically, there is evidence that the sex drive of women is under the control of androgens (the so-called male sex hormones), not estrogens (see Davis & Tran, 2001; Sherwin, 1988). Apparently, enough androgens are released from the human adrenal glands to maintain the sexual motivation of women even after their ovaries have been removed. Support for the theory that androgens control human female sexuality has come from three sources:

• In experiments with nonhuman female primates, replacement injections of testosterone, but not estradiol, increased the proceptivity of ovariectomized and adrenalectomized rhesus monkeys (see Everitt & Herbert, 1972; Everitt, Herbert, & Hamer, 1971).

• In correlational studies of healthy women, various measures of sexual motivation have been shown to correlate with testosterone levels but not with estradiol levels (see Bancroft et al., 1983; Morris et al., 1987).

• In clinical studies of women following ovariectomy and adrenalectomy or menopause, replacement injections of testosterone, but not of estradiol, rekindled the patients’ sexual motivation (see de Paula et al., 2007; Sherwin, Gelfand, & Brender, 1985).

This research has led to the development of a testosterone skin patch for the treatment of low sex drive in women. The patch has been shown to be effective for women who have lost their sex drive following radical hysterectomy (Buster et al., 2005). Although a few studies have reported positive correlations between blood testosterone levels and the strength of sex drive in women (e.g., Turna et al., 2004), most women with low sex drive do not have low blood levels of testosterone (Davis et al., 2005; Gerber et al., 2005). Thus, the testosterone skin patch is unlikely to help most women with libido problems.

Clinical Implications

Thinking Creatively

Although neither the sexual motivation nor the sexual activity of women has been found to be linked to their menstrual cycles, the type of men they prefer may be. Several studies have shown that women prefer masculine faces more on their fertile days than on their nonfertile days (e.g., Gangestad, Thornhill, & Garver-Apgar, 2005; Penton-Voak & Perrett, 2000).

Anabolic Steroid Abuse

Anabolic steroids are steroids, such as testosterone, that have anabolic (growth-promoting) effects. Testosterone itself is not very useful as an anabolic drug because it is broken down soon after injection and because it has undesirable side effects. Chemists have managed to synthesize a number of potent anabolic steroids that are long-acting, but they have not managed to synthesize one that does not have side effects.

According to some experts, we are currently in the midst of an epidemic of anabolic steroid abuse. Many competitive athletes and bodybuilders are self-administering appallingly large doses, and many others use them for cosmetic purposes. Because steroids are illegal, estimates of the numbers who use them are likely underestimates. Still, the results of some surveys have been disturbing: For example, a survey by the U.S. Centers for Disease Control and Prevention (Eaton et al., 2005) found that almost 5% of high school students had been steroid users.

Clinical Implications

Effects of Anabolic Steroids on Athletic Performance

Do anabolic steroids really increase the muscularity and strength of the athletes who use them? Surprisingly, the early scientific evidence was inconsistent (see Yesalis & Bahrke, 1995), even though many athletes and coaches believe that it is impossible to compete successfully at the highest levels of their sports without an anabolic steroid boost. The failure of the early experiments to confirm the benefits that had been experienced by many athletes likely results from two shortcomings of the research. First, the early experimental studies tended to use doses of steroids smaller than those used by athletes and for shorter periods of time. Second, the early studies were often conducted on volunteers who were not involved in intense training. However, despite the inconsistent experimental evidence, the results achieved by numerous individual steroid users, such as the man pictured in Figure 13.12, are convincing.

FIGURE 13.12 An athlete who used anabolic steroids to augment his training program.

Physiological Effects of Anabolic Steroids

There is general agreement (see Maravelias et al., 2005; Yesalis & Bahrke, 1995) that people who take high doses of anabolic steroids risk side effects. In men, the negative feedback from high levels of anabolic steroids reduces gonadotropin release; this leads to a reduction in testicular activity, which can result in testicular atrophy (wasting away of the testes) and sterility. Gynecomastia (breast growth in men) can also occur, presumably as the result of the aromatization of anabolic steroids to estrogens. In women, anabolic steroids can produce amenorrhea (cessation of menstruation), sterility, hirsutism (excessive growth of body hair), growth of the clitoris, development of a masculine body shape, baldness, shrinking of the breasts, and deepening and coarsening of the voice. Unfortunately, some of the masculinizing effects of anabolic steroids on women appear to be irreversible.

Both men and women who use anabolic steroids can suffer muscle spasms, muscle pains, blood in the urine, acne, general swelling from the retention of water, bleeding of the tongue, nausea, vomiting, and a variety of psychotic behaviors, including fits of depression and anger (Maravelias et al., 2005). Oral anabolic steroids can also produce cancerous liver tumors.

One controlled evaluation of the effects of exposure to anabolic steroids was conducted in adult male mice, which were exposed for 6 months to a cocktail of four anabolic steroids at relative levels comparable to those used by human athletes (Bronson & Matherne, 1997). None of the mice died during the period of steroid exposure; however, by 20 months of age (6 months after termination of the steroid exposure), 52% of the steroid-exposed mice had died, whereas only 12% of the controls had died.

Evolutionary Perspective

There are two general points of concern about the adverse health consequences of anabolic steroids: First, the use of anabolic steroids in puberty, before developmental programs of sexual differentiation are complete, is particularly risky (see Farrell & McGinnis, 2003). Second, many of the adverse effects of anabolic steroids may take years to be manifested—steroid users who experience few immediate adverse effects may pay the price later.

Behavioral Effects of Anabolic Steroids

Other than those focusing on athletic performance, which you have just read about, few studies have systematically investigated the effects of anabolic steroids on behavior. However, because of the similarity between anabolic steroids and testosterone, there has been some suggestion that anabolic steroid use might increase aggressive and sexual behaviors. Let's take a brief look at the evidence.

Thinking Creatively

Evidence that anabolic steroid use increases aggression comes almost entirely from the claims of steroid users. These anecdotal reports are unconvincing for three reasons:

• Because of the general belief that testosterone causes aggression, reports of aggressive behavior in male steroid users might be a consequence of expectation.

• Many individuals who use steroids (e.g., professional fighters or football players) are likely to have been aggressive before they started treatment.

• Aggressive behavior might be an indirect consequence of increased size and muscularity.

In one experimental assessment of the effects of anabolic steroids on aggression, Pope, Kouri, and Hudson (2000) administered either testosterone or placebo injections in a double-blind study of 53 men. Each volunteer completed tests of aggression and kept a daily aggression-related diary. Pope and colleagues found increases in aggression in only a few of the volunteers.

Although their similarity to testosterone suggests that steroids might increase sexual motivation, there is no evidence of such an effect. On the contrary, there are several anecdotal reports of the disruptive effects of anabolic steroids on human male copulatory behavior, and controlled experiments have shown that anabolic steroids disrupt the copulatory behavior of both male and female rodents (see Clark & Henderson, 2003).

Neuroprotective Effects of Estradiol

Although estradiol is best known for its sex-related organizational and activational effects, this hormone also can reduce the brain damage associated with stroke and various neurodegenerative disorders (see De Butte-Smith et al., 2009). For example, Yang and colleagues (2003) showed that estradiol administered to rats just before, during, or just after the induction of cerebral hypoxia (reduction of oxygen to the brain) reduced subsequent brain damage (see Chapter 10).

Estradiol has been shown to have several neurotrophic effects that might account for its neuroprotective properties (see Chapter 10). It has been shown to reduce inflammation, encourage axonal regeneration, promote synaptogenesis (see Stein & Hoffman, 2003; Zhang et al., 2004), and increase adult neurogenesis (see Chapter 10). Injection of estradiol initially increases the number of new neurons created in the dentate gyri of the hippocampuses of adult female rats and then, about 48 hours later, there is a period of reduced neurogenesis (see Galea et al., 2006; Ormerod, Falconer, & Galea, 2003). As well as increasing adult neurogenesis, estradiol increases the survival rate of the new neurons (see Galea et al., 2006; Ormerod & Galea, 2001b).

Neuroplasticity

The discovery of estradiol's neuroprotective properties has created a lot of excitement among neuroscientists. These properties may account for women's greater longevity and their lower incidence of several common neuropsychological disorders, such as Parkinson's disease. They may also explain why postmenopausal women, whose estradiol levels are low, experience a decline in memory and some other cognitive deficits (see Bisagno, Bowman, & Luine, 2003; Gandy, 2003).

Clinical Implications

Several studies have assessed the ability of estrogen treatments to reduce the cognitive deficits experienced by postmenopausal women. The results of some studies have been encouraging, but others have observed either no benefit or an increase in cognitive deficits (see Blaustein, 2008; Frick, 2009). Two suggestions have been made for improving the effectiveness of estradiol therapy: First, Sherwin (2007) pointed out that such therapy appears to be effective in both humans and non-humans only if the estradiol treatment is commenced at menopause or shortly thereafter. Second, Marriott and Wenk (2004) argued that the chronically high doses that have been administered to postmenopausal women are unnatural and potentially toxic; they recommend instead that estradiol therapy should mimic the natural cycle of estradiol levels in premenopausal women.

Thinking Creatively

Scan Your Brain

You encountered many clinical problems in the preceding two sections of the chapter. Do you remember them? Write the name of the appropriate condition or syndrome in each blank, based on the clues provided. The answers appear at the end of the exercise. Before proceeding, review material related to your errors and omissions.

Name of condition or syndrome (clue in parentheses):

1. ____________ (Genetic male, sparse pubic hair, short vagina)
2. ____________ (Congenital adrenal hyperplasia, elevated androgen levels)
3. ____________ (David Reimer, destruction of penis)
4. ____________ (Castrated males, gonadectomized males)
5. ____________ (Castrated females, gonadectomized females)
6. ____________ (Unable to achieve erection)
7. ____________ (Anabolic steroids, breasts on men)
8. ____________ (Anabolic steroids, cessation of menstruation)
9. ____________ (Anabolic steroids, excessive body hair)
10. ____________ (Reduction of oxygen to brain, effects can be reduced by estradiol)

Scan Your Brain answers: (1) androgenic insensitivity syndrome, (2) adrenogenital syndrome, (3) ablatio penis, (4) orchidectomized, (5) ovariectomized, (6) impotent, (7) gynecomastia, (8) amenorrhea, (9) hirsutism, (10) cerebral hypoxia.

13.6 Neural Mechanisms of Sexual Behavior

Major differences among cultures in sexual practices and preferences indicate that the control of human sexual behavior involves the highest levels of the nervous system (e.g., association cortex), and this point is reinforced by controlled demonstrations of the major role played by experience in the sexual behaviors of nonhuman animals (see Woodson, 2002; Woodson & Balleine, 2002; Woodson, Balleine, & Gorski, 2002). Nevertheless, research on the neural mechanisms of sexual behavior has focused almost exclusively on hypothalamic circuits. Consequently, I am forced to do the same here: When it comes to the study of the neural regulation of sexual behavior, the hypothalamus is virtually the only game in town.

FIGURE 13.13 Nissl-stained coronal sections through the preoptic area of male and female rats. The sexually dimorphic nuclei are larger in male rats than in female rats. (Based on Gorski et al., 1978.)

Why has research on the neural mechanisms of sexual behavior focused almost exclusively on hypothalamic circuits? There are three obvious reasons: First, because of the difficulty of studying the neural mechanisms of complex human sexual behaviors, researchers have focused on the relatively simple, controllable copulatory behaviors (e.g., ejaculation, mounting, and lordosis) of laboratory animals (see Agmo & Ellingsen, 2003), which tend to be controlled by the hypothalamus. Second, because the hypothalamus controls gonadotropin release, it was the obvious place to look for sexually dimorphic structures and circuits that might control copulation. And third, early studies confirmed that the hypothalamus does play a major role in sexual behavior, and this finding led subsequent neuroscientific research on sexual behavior to focus on that brain structure.

Structural Differences between the Male and Female Hypothalamus

You have already learned that the male hypothalamus and the female hypothalamus are functionally different in their control of anterior pituitary hormones (steady versus cyclic release, respectively). In the 1970s, structural differences between the male and female hypothalamus were discovered in rats (Raisman & Field, 1971). Most notably, Gorski and his colleagues (1978) discovered a nucleus in the medial preoptic area of the rat hypothalamus that was several times larger in males (see Figure 13.13). They called this nucleus the sexually dimorphic nucleus.

Evolutionary Perspective

At birth, the sexually dimorphic nuclei of male and female rats are the same size. In the first few days after birth, the male sexually dimorphic nuclei grow at a high rate and the female sexually dimorphic nuclei do not. The growth of the male sexually dimorphic nuclei is normally triggered by estradiol, which has been aromatized from testosterone (see McEwen, 1987). Accordingly, castrating day-old (but not 4-day-old) male rats significantly reduces the size of their sexually dimorphic nuclei as adults, whereas injecting neonatal (newborn) female rats with testosterone significantly increases the size of theirs (Gorski, 1980)—see Figure 13.14. Although the overall size of the sexually dimorphic nucleus diminishes only slightly in male rats that are castrated in adulthood, specific areas of the nucleus do display significant degeneration (Bloch & Gorski, 1988).

FIGURE 13.14 The effects of neonatal testosterone exposure on the size of the sexually dimorphic nuclei in male and female adult rats. (Based on Gorski, 1980.)

The size of a male rat’s sexually dimorphic nucleus is correlated with the rat’s testosterone levels and aspects of its sexual activity (Anderson et al., 1986). However, bilateral lesions of the sexually dimorphic nucleus have only slight disruptive effects on male rat sexual behavior (e.g., De Jonge et al., 1989; Turkenburg et al., 1988), and the specific function of this nucleus is unclear.

Since the discovery of the sexually dimorphic nuclei in rats, other sex differences in hypothalamic anatomy have been identified in rats and in other species (see Swaab & Hofman, 1995; Witelson, 1991). In humans, for example, there are nuclei in the preoptic (Swaab & Fliers, 1985), suprachiasmatic (Swaab et al., 1994), and anterior (Allen et al., 1989) regions of the hypothalamus that differ in men and women.

Hypothalamus and Male Sexual Behavior

The medial preoptic area (which includes the sexually dimorphic nucleus) is one area of the hypothalamus that plays a key role in male sexual behavior (see Dominguez & Hull, 2005). Destruction of the entire area abolishes sexual behavior in the males of all mammalian species that have been studied (see Hull et al., 1999). In contrast, medial preoptic area lesions do not eliminate the female sexual behaviors of females, but they do eliminate the male sexual behaviors (e.g., mounting) that are often observed in females (Singer, 1968). Thus, bilateral medial preoptic lesions appear to abolish male copulatory behavior in both sexes. Conversely, electrical stimulation of the medial preoptic area elicits copulatory behavior in male rats (Malsbury, 1971; Rodriguez-Manzo et al., 2000), and copulatory behavior can be reinstated in castrated male rats by medial preoptic implants of testosterone (Davidson, 1980).

Evolutionary Perspective

The medial preoptic circuits that control male sexual behavior appear to be dopaminergic (see Dominguez & Hull, 2005; Lagoda et al., 2004). Dopamine agonists microinjected into the medial preoptic area facilitate male sexual behavior, whereas dopamine antagonists block it.

It is not clear why males with medial preoptic lesions stop copulating. One possibility is that the lesions disrupt the ability of males to copulate; another is that the lesions reduce the motivation of the males to engage in sexual behavior. The evidence is mixed, but it favors the hypothesis that the medial preoptic area is involved in the motivational aspects of male sexual behavior (Paredes, 2003).

The medial preoptic area appears to control male sexual behavior via a tract that projects to an area of the midbrain called the lateral tegmental field. Destruction of this tract disrupts the sexual behavior of male rats (Brackett & Edwards, 1984). Moreover, the activity of individual neurons in the lateral tegmental field of male rats is often correlated with aspects of the copulatory act (Shimura & Shimokochi, 1990); for example, some neurons in the lateral tegmental field fire at a high rate only during intromission.

Hypothalamus and Female Sexual Behavior

The ventromedial nucleus (VMN) of the rat hypothalamus contains circuits that appear to be critical for female sexual behavior. Female rats with bilateral lesions of the VMN do not display lordosis, and they are likely to attack suitors who become too ardent.

Evolutionary Perspective

You have already learned that an injection of progesterone brings into estrus an ovariectomized female rat that received an injection of estradiol about 36 hours before. Because the progesterone by itself does not induce estrus, the estradiol must in some way prime the nervous system so that the progesterone can exert its effect. This priming effect appears to be mediated by the large increase in the number of progesterone receptors that occurs in the VMN and surrounding area following an estradiol injection (Blaustein et al., 1988); the estradiol exerts this effect by entering VMN cells and influencing gene expression. Confirming the role of the VMN in estrus is the fact that microinjections of estradiol and progesterone directly into the VMN induce estrus in ovariectomized female rats (Pleim & Barfield, 1988).

The influence of the VMN on the sexual behavior of female rats appears to be mediated by a tract that descends to the periaqueductal gray (PAG) of the tegmentum. Destruction of this tract eliminates female sexual behavior (Hennessey et al., 1990), as do lesions of the PAG itself (Sakuma & Pfaff, 1979).

In conclusion, although many parts of the brain play a role in sexual behavior, much of the research has focused on the role of the hypothalamus in the copulatory behavior of rats. Several areas of the hypothalamus influence this copulatory behavior, and several hypothalamic nuclei are sexually dimorphic in rats, but the medial preoptic area and the ventromedial nucleus are two of the most widely studied. Male rat sexual behavior is influenced by a tract that runs from the medial preoptic area to the lateral tegmental field, and female rat sexual behavior is influenced by a tract that runs from the ventromedial nucleus to the periaqueductal gray (see Figure 13.15).

13.7 Sexual Orientation and Sexual Identity

So far, this chapter has not addressed the topic of sexual orientation. As you know, some people are heterosexual (sexually attracted to members of the other sex), some are homosexual (sexually attracted to members of the same sex), and some are bisexual (sexually attracted to members of both sexes). Also, the chapter has not addressed the topic of sexual identity (the sex, male or female, that a person believes himself or herself to be). A discussion of sexual orientation and sexual identity is a fitting conclusion to this chapter because it brings together the exception-proves-the-rule and anti-mamawawa themes.

Sexual Orientation and Genes

Research has shown that differences in sexual orientation have a genetic basis. For example, Bailey and Pillard (1991) studied a group of male homosexuals who had twin brothers, and they found that 52% of the monozygotic twin brothers and 22% of the dizygotic twin brothers were homosexual. In a comparable study of female twins by the same group of researchers (Bailey et al., 1993), the concordance rates for homosexuality were 48% for monozygotic twins and 16% for dizygotic twins.

Considerable excitement was created by the claim that a gene for male homosexuality had been localized on one end of the X chromosome (Hamer et al., 1993). However, subsequent research has not confirmed this claim (see Mustanski, Chivers, & Bailey, 2002; Rahman, 2005).

Sexual Orientation and Early Hormones

FIGURE 13.15 The hypothalamus-tegmentum circuits that play a role in female and male sexual behavior in rats.

Many people mistakenly assume that homosexuals have lower levels of sex hormones. They don’t: Heterosexuals and homosexuals do not differ in their levels of circulating hormones. Moreover, orchidectomy reduces the sexual behavior of both heterosexual and homosexual males, but it does not redirect it; and replacement injections simply reactivate the preferences that existed prior to surgery.

Many people also assume that sexual preference is a matter of choice. It isn’t: People discover their sexual preferences; they don’t choose them. Sexual preferences seem to develop very early, and a child’s first indication of the direction of sexual attraction usually does not change as he or she matures. Could perinatal hormone exposure be the early event that shapes sexual orientation?

Because experiments involving levels of perinatal hormone exposure are not feasible with humans, efforts to determine whether perinatal hormone levels influence the development of sexual orientation have focused on nonhuman species. A consistent pattern of findings has emerged. In those species that have been studied (e.g., rats, hamsters, ferrets, pigs, zebra finches, and dogs), it has not been uncommon to see males engaging in female sexual behavior, being mounted by other males; nor has it been uncommon to see females engaging in male sexual behavior, mounting other females. However, because the defining feature of sexual orientation is sexual preference, the key studies have examined the effect of early hormone exposure on the sex of preferred sexual partners. In general, the perinatal castration of males has increased their preference as adults for male sex partners; similarly, prenatal testosterone exposure in females has increased their preference as adults for female sex partners (see Baum et al., 1990; Henley, Nunez, & Clemens, 2009; Hrabovszky & Hutson, 2002).

Evolutionary Perspective

On the one hand, we need to exercise prudence in drawing conclusions about the development of sexual preferences in humans based on the results of experiments on laboratory species; it would be a mistake to ignore the profound cognitive and emotional components of human sexuality, which have no counterpart in laboratory animals. On the other hand, it would also be a mistake to think that a pattern of results that runs so consistently through so many mammalian species has no relevance to humans (Swaab, 2004).

In addition, there are some indications that perinatal hormones do influence sexual orientation in humans—although the evidence is sparse (see Diamond, 2009). Support comes from the quasiexperimental study of Ehrhardt and her colleagues (1985). They interviewed adult women whose mothers had been exposed to diethylstilbestrol (a synthetic estrogen) during pregnancy. The women’s responses indicated that they were significantly more sexually attracted to women than was a group of matched controls. Ehrhardt and her colleagues concluded that perinatal estrogen exposure does encourage homosexuality and bisexuality in women but that its effect is relatively weak: The sexual behavior of all but 1 of the 30 participants was primarily heterosexual.

One promising line of research on sexual orientation focuses on the fraternal birth order effect, the finding that the probability of a man's being homosexual increases as a function of the number of older brothers he has (Blanchard, 2004; Blanchard & Lippa, 2007). A recent study of blended families (families in which biologically related siblings were raised with adopted siblings or step-siblings) found that the effect is related to the number of boys previously born to the mother, not the number of boys one is reared with (Bogaert, 2007). The effect is quite large: The probability of a male's being homosexual increases by 33.3% for every older brother he has (see Puts, Jordan, & Breedlove, 2006), and an estimated 15% of gay men can attribute their homosexuality to the fraternal birth order effect (Cantor et al., 2002). The maternal immune hypothesis has been proposed to explain the fraternal birth order effect; this hypothesis is that some mothers become progressively more immune to masculinizing hormones in male fetuses (see Blanchard, 2004), and a mother's immune system might deactivate masculinizing hormones in her younger sons.
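To get a feel for the size of this effect, consider a rough worked example. The 2% baseline rate used here is a hypothetical figure chosen purely for illustration (it is not a value reported in the studies cited above); because the 33.3% increase is relative, the probability compounds with each additional older brother:

$$P(n) \approx 0.02 \times (1.333)^{n}$$

By this reckoning, a man with one older brother would have a probability of roughly 2.7%, with two older brothers roughly 3.6%, and with three roughly 4.7%.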

What Triggers the Development of Sexual Attraction?

The evidence indicates that most girls and boys living in Western countries experience their first feelings of sexual attraction at about 10 years of age, whether they are heterosexual or homosexual (see Quinsey, 2003). This finding is at odds with the usual assumption that sexual interest is triggered by puberty, which, as you have learned, currently tends to occur at 10.5 years of age in girls and at 11.5 years in boys.

McClintock and Herdt (1996) have suggested that the emergence of sexual attraction may be stimulated by adrenal cortex steroids. Unlike gonadal maturation, adrenal maturation occurs at about the age of 10.

 Watch Adolescence: Identity and Role Development and Sexual Orientation www.mypsychlab.com

Is There a Difference in the Brains of Homosexuals and Heterosexuals?

The brains of homosexuals and heterosexuals must differ in some way, but how? Many studies have attempted to identify neuroanatomical, neuropsychological, neurophysiological, and hormonal response differences between homosexuals and heterosexuals.

In a highly publicized study, LeVay (1991) found that the structure of one hypothalamic nucleus in male homosexuals was intermediate between that in female heterosexuals and that in male heterosexuals. This study has not been consistently replicated, however. Indeed, no reliable difference between the brains of heterosexuals and homosexuals has yet been discovered (see Rahman, 2005).

Sexual Identity

Sexual identity is the sex, male or female, that a person believes himself or herself to be. Usually, sexual identity coincides with a person’s anatomical sex, but not always.

Transsexualism is a condition of sexual identity in which an individual believes that he or she is trapped in a body of the other sex. To put it mildly, the transsexual faces a bizarre conflict: “I am a woman (or man) trapped in the body of a man (or woman). Help!” It is important to appreciate the desperation of these individuals; they do not merely think that life might be better if their gender were different. Although many transsexuals do seek surgical sexual reassignment (surgery to change their sex), the desperation is better indicated by how some of them dealt with their problem before surgical sexual reassignment was an option: Some biological males (psychological females) attempted self-castration, and others consumed copious quantities of estrogen-containing face creams in order to feminize their bodies.

 Watch Transsexuality www.mypsychlab.com

How does surgical sexual reassignment work? I will describe the male-to-female procedure. The female-to-male procedure is much more complex (because a penis must be created) and far less satisfactory (for example, because a surgically created penis has no erectile potential), and male-to-female sexual reassignment is three times more prevalent.

Clinical Implications

The first step in male-to-female reassignment is psychiatric assessment to establish that the candidate for surgery is a true transsexual. Once accepted for surgical reassignment, each transsexual receives in-depth counseling to prepare for the difficulties that will ensue. If the candidate is still interested in reassignment after counseling, estrogen administration is initiated to feminize the body; the hormone regimen continues for life to maintain the changes. Then comes the surgery. The penis and testes are surgically removed, and female external genitalia and a vagina are constructed—the vagina is lined with skin and nerves from the former penis so that it will have sensory nerve endings that will respond to sexual stimulation. Finally, some patients have cosmetic surgery to feminize the face (e.g., to reduce the size of the Adam's apple). Generally, the adjustment of transsexuals after surgical sexual reassignment is good.

The causes of transsexualism are unknown. Transsexualism was once thought to be a product of social learning, that is, of inappropriate child-rearing practices (e.g., mothers dressing their little boys in dresses). The occasional case that is consistent with this view can be found, but in most cases, there is no obvious cause (see Diamond, 2009; Swaab, 2004). One of the major difficulties in identifying the causes and mechanisms of transsexualism is that there is no comparable syndrome in nonhumans (Baum, 2006).

Evolutionary Perspective

Independence of Sexual Orientation and Sexual Identity

To complete this chapter, I would like to remind you of two of its main themes and show you how useful they are in thinking about one of the puzzles of human sexuality. One of the two themes is that the exception proves the rule: that a powerful test of any theory is its ability to explain exceptional cases. The second is that the mamawawa is seriously flawed: We have seen that men and women are similar in some ways (Hyde, 2005) and different in others (Cahill, 2006), but they are certainly not opposites, and their programs of development are neither parallel nor opposite.

Thinking Creatively

Here, I want to focus on the puzzling fact that sexual attraction, sexual identity, and body type are sometimes unrelated. For example, consider transsexuals: They, by definition, have the body type of one sex and the sexual identity of the other sex, but the orientation of their sexual attraction is an independent matter. Some transsexuals with a male body type are sexually attracted to females, others are sexually attracted to males, and others are sexually attracted to neither—and this is not changed by sexual reassignment (see Van Goozen et al., 2002). Also, it is important to realize that a particular sex-related trait in an individual can lie at midpoint between the female and male norms.

Obviously, the mere existence of homosexuality and transsexualism is a challenge to the mamawawa, the assumption that males and females belong to distinct and opposite categories. Many people tend to think of “femaleness” and “maleness” as being at opposite ends of a continuum, with a few abnormal cases somewhere between the two. Perhaps this is how you tend to think. However, the fact that body type, sexual orientation, and sexual identity are often independent constitutes a serious attack on any assumption that femaleness and maleness lie at opposite ends of a single scale. Clearly, femaleness or maleness is a combination of many different attributes (e.g., body type, sexual orientation, and sexual identity), each of which can develop quite independently. This is a real puzzle for many people, including scientists, but what you have already learned in this chapter suggests a solution.

Thinking Creatively

Think back to the section on brain differentiation. Until recently, it was assumed that the differentiation of the human brain into its usual female and male forms occurred through a single testosterone-based mechanism. However, a different notion has developed from recent evidence. Now, it is clear that male and female brains differ in many ways and that the differences develop at different times and by different mechanisms. If you keep this developmental principle in mind, you will have no difficulty understanding how it is possible for some individuals to be female in some ways and male in others, and to lie between the two norms in still others.

This analysis exemplifies a point I make many times in this book. The study of biopsychology often has important personal and social implications: The search for the neural basis of a behavior frequently provides us with a greater understanding of that behavior. I hope that you now have a greater understanding of, and acceptance of, differences in human sexuality.

Themes Revisited

Three of the book’s four major themes were repeatedly emphasized in this chapter: the evolutionary perspective, clinical implications, and thinking creatively themes.

The evolutionary perspective theme was pervasive. It received frequent attention because most experimental studies of hormones and sex have been conducted in nonhuman species. The other major source of information about hormones and sex has been the study of human clinical cases, which is why the clinical implications theme was prominent in the cases of the woman who wasn’t, the little girl who grew into a boy, the twin who lost his penis, and the man who lost and regained his manhood.

Evolutionary Perspective

Clinical Implications

The thinking creatively theme was emphasized throughout the chapter because conventional ways of thinking about hormones and sex have often been at odds with the results of biopsychological research. If you are now better able to resist the seductive appeal of the men-are-men-and-women-are-women assumption, you are a more broadminded and understanding person than when you began this chapter. I hope you have gained an abiding appreciation of the fact that maleness and femaleness are multidimensional and, at times, ambiguous variations of each other.

Thinking Creatively

The fourth major theme of the book, neuroplasticity, arose during the discussions of the effects of hormones on the development of sex differences in the brain and of the neurotrophic effects of estradiol.

Neuroplasticity

Think about It

1. Over the last century and a half, the onset of puberty has changed from age 15 or 16 to age 10 or 11, but there has been no corresponding acceleration in psychological and intellectual development. Precocious puberty is like a loaded gun in the hand of a child. Discuss.

2. Do you think adult sex-change operations should be permitted? Should they be permitted in preadolescents? Explain and supply evidence.

3. What should be done about the current epidemic of anabolic steroid abuse? Would you make the same recommendation if a safe anabolic steroid were developed? If a safe drug that would dramatically improve your memory were developed, would you take it?

4. Heterosexuality cannot be understood without studying homosexuality. Discuss.

5. What treatment should be given to infants born with ambiguous external genitals? Why?

6. Sexual orientation, sexual identity, and body type are not always related. Discuss.

Key Terms

13.1 Neuroendocrine System

Exocrine glands (p. 329)

Endocrine glands (p. 329)

Hormones (p. 329)

Gonads (p. 329)

Testes (p. 329)

Ovaries (p. 329)

Copulation (p. 329)

Zygote (p. 329)

Sex chromosomes (p. 329)

Amino acid derivative hormones (p. 329)

Peptide hormones (p. 329)

Protein hormones (p. 329)

Steroid hormones (p. 329)

Androgens (p. 329)

Estrogens (p. 329)

Testosterone (p. 329)

Estradiol (p. 329)

Progestins (p. 330)

Progesterone (p. 330)

Adrenal cortex (p. 330)

Gonadotropin (p. 330)

Posterior pituitary (p. 330)

Pituitary stalk (p. 330)

Anterior pituitary (p. 330)

Menstrual cycle (p. 330)

Vasopressin (p. 331)

Oxytocin (p. 331)

Paraventricular nuclei (p. 331)

Supraoptic nuclei (p. 331)

Hypothalamopituitary portal system (p. 331)

Releasing hormones (p. 331)

Release-inhibiting factors (p. 331)

Thyrotropin-releasing hormone (p. 331)

Thyrotropin (p. 332)

Gonadotropin-releasing hormone (p. 332)

Follicle-stimulating hormone (FSH) (p. 332)

Luteinizing hormone (LH) (p. 332)

Pulsatile hormone release (p. 333)

13.2 Hormones and Sexual Development of the Body

Sry gene (p. 334)

Sry protein (p. 334)

Wolffian system (p. 334)

Müllerian system (p. 334)

Müllerian-inhibiting substance (p. 334)

Scrotum (p. 334)

Ovariectomy (p. 334)

Orchidectomy (p. 334)

Gonadectomy (p. 335)

Genitals (p. 335)

Secondary sex characteristics (p. 335)

Growth hormone (p. 335)

Adrenocorticotropic hormone (p. 335)

Androstenedione (p. 335)

13.3 Hormones and Sexual Development of Brain and Behavior

Aromatase (p. 338)

Aromatization (p. 338)

Aromatization hypothesis (p. 338)

Alpha fetoprotein (p. 338)

Masculinizes (p. 339)

Defeminizes (p. 339)

Lordosis (p. 339)

Feminizes (p. 339)

Demasculinizes (p. 339)

Intromission (p. 339)

Ejaculation (p. 339)

Proceptive behaviors (p. 339)

13.4 Three Cases of Exceptional Human Sexual Development

Androgenic insensitivity syndrome (p. 340)

Adrenogenital syndrome (p. 341)

Congenital adrenal hyperplasia (p. 341)

Ablatio penis (p. 342)

13.5 Effects of Gonadal Hormones on Adults

Replacement injections (p. 343)

Impotent (p. 344)

Estrus (p. 345)

Estrous cycle (p. 345)

Anabolic steroids (p. 345)

13.6 Neural Mechanisms of Sexual Behavior

Medial preoptic area (p. 348)

Sexually dimorphic nucleus (p. 348)

Ventromedial nucleus (VMN) (p. 349)

13.7 Sexual Orientation and Sexual Identity

Heterosexual (p. 350)

Homosexual (p. 350)

Bisexual (p. 350)

Sexual identity (p. 350)

Fraternal birth order effect (p. 351)

Maternal immune hypothesis (p. 351)

Transsexualism (p. 352)

 Quick Review Test your comprehension of the chapter with this brief practice test. You can find the answers to these questions as well as more practice tests, activities, and other study resources at www.mypsychlab.com.

1. The ovaries and testes are

a. zygotes.

b. exocrine glands.

c. gonads.

d. both a and c

e. both b and c

2. Gonadotropin is released by the

a. anterior pituitary.

b. posterior pituitary.

c. hypothalamus.

d. gonads.

e. adrenal cortex.

3. Releasing hormones are released by the

a. anterior pituitary.

b. posterior pituitary.

c. hypothalamus.

d. gonads.

e. adrenal cortex.

4. Which term refers specifically to the surgical removal of the testes?

a. orchidectomy

b. castration

c. gonadectomy

d. ovariectomy

e. both b and c

5. Adrenogenital syndrome typically has severe consequences for

a. rodents but not primates.

b. Caucasians but not other ethnic groups.

c. girls but not boys.

d. boys but not girls.

(Pinel, 2010, pp. 327-354)

14 Sleep, Dreaming, and Circadian Rhythms How Much Do You Need to Sleep?

14.1 Stages of Sleep

14.2 Why Do We Sleep, and Why Do We Sleep When We Do?

14.3 Effects of Sleep Deprivation

14.4 Circadian Sleep Cycles

14.5 Four Areas of the Brain Involved in Sleep

14.6 Drugs That Affect Sleep

14.7 Sleep Disorders

14.8 Effects of Long-Term Sleep Reduction

Most of us have a fondness for eating and sex—the two highly esteemed motivated behaviors discussed in Chapters 12 and 13. But the amount of time devoted to these behaviors by even the most amorous gourmands pales in comparison to the amount of time spent sleeping: Most of us will sleep for well over 175,000 hours in our lifetimes. This extraordinary commitment of time implies that sleep fulfills a critical biological function. But what is it? And what about dreaming: Why do we spend so much time dreaming? And why do we tend to get sleepy at about the same time every day? Answers to these questions await you in this chapter.

Almost every time I lecture about sleep, somebody asks "How much sleep do we need?" Each time, I provide the same unsatisfying answer: I explain that there are two fundamentally different answers to this question, but neither has emerged as a clear winner. One answer stresses the presumed health-promoting and recuperative powers of sleep and suggests that people need as much sleep as they can comfortably get—the usual prescription being at least 8 hours per night. The other answer is that many of us sleep more than we need to and are consequently sleeping part of our life away. Just think how your life could change if you slept 5 hours per night instead of 8. You would have an extra 21 waking hours each week, a mind-boggling 10,952 hours each decade.

Watch Sleep: How Much?

www.mypsychlab.com

As I prepared to write this chapter, I began to think of the personal implications of the idea that we get more sleep than we need. That is when I decided to do something a bit unconventional. I am going to participate in a sleep-reduction experiment—by trying to get no more than 5 hours of sleep per night—11:00 P.M. to 4:00 A.M.—until this chapter is written. As I begin, I am excited by the prospect of having more time to write, but a little worried that this extra time might cost me too dearly.

Thinking Creatively

It is now the next day—4:50 Saturday morning to be exact—and I am just sitting down to write. There was a party last night, and I didn’t make it to bed by 11:00; but considering that I slept for only 3 hours and 35 minutes, I feel quite good. I wonder what I will feel like later in the day. In any case, I will report my experiences to you at the end of the chapter.

The following case study challenges several common beliefs about sleep. Ponder its implications before proceeding to the body of the chapter.

The Case of the Woman Who Wouldn’t Sleep

Miss M . . . is a busy lady who finds her ration of twenty-three hours of wakefulness still insufficient for her needs. Even though she is now retired she is still busy in the community, helping sick friends whenever requested. She is an active painter and . . . writer. Although she becomes tired physically, when she needs to sit down to rest her legs, she does not ever report feeling sleepy. During the night she sits on her bed . . . reading, writing, crocheting or painting. At about 2:00 A.M. she falls asleep without any preceding drowsiness often while still holding a book in her hands. When she wakes about an hour later, she feels as wide awake as ever….

We invited her along to the laboratory. She came willingly but on the first evening we hit our first snag. She announced that she did not sleep at all if she had interesting things to do, and by her reckoning a visit to a university sleep laboratory counted as very interesting. Moreover, for the first time in years, she had someone to talk to for the whole of the night. So we talked.

In the morning we broke into shifts so that some could sleep while at least one person stayed with her and entertained her during the next day. The second night was a repeat performance of the first night….

In the end we prevailed upon her to allow us to apply EEG electrodes and to leave her sitting comfortably on the bed in the bedroom. She had promised that she would co-operate by not resisting sleep although she claimed not to be especially tired. . . . At approximately 1:30 A.M., the EEG record showed the first signs of sleep even though . . . she was still sitting with the book in her hands. . . .

The only substantial difference between her sleep and what we might have expected. . . was that it was of short duration….[After 99 minutes], she had no further interest in sleep and asked to …join our company again.

(“The Case of the Woman Who Wouldn’t Sleep,” from The Sleep Instinct by R. Meddis. Copyright © 1977, Routledge & Kegan Paul, London, pp. 42–44. Reprinted by permission of the Taylor & Francis Group.)

14.1 Stages of Sleep

Many changes occur in the body during sleep. This section introduces you to the major ones.

Three Standard Psychophysiological Measures of Sleep

There are major changes in the human EEG during the course of a night’s sleep. Although the EEG waves that accompany sleep are generally high-voltage and slow, there are periods throughout the night that are dominated by low-voltage, fast waves similar to those in nonsleeping individuals. In the 1950s, it was discovered that rapid eye movements (REMs) occur under the closed eyelids of sleepers during these periods of low-voltage, fast EEG activity. And in 1962, Berger and Oswald discovered that there is also a loss of electromyographic activity in the neck muscles during these same sleep periods. Subsequently, the electroencephalogram (EEG), the electrooculogram (EOG), and the neck electromyogram (EMG) became the three standard psychophysiological bases for defining stages of sleep.

Figure 14.1 depicts a volunteer participating in a sleep experiment. A participant’s first night of sleep in a laboratory is often fitful. That’s why the usual practice is to have each participant sleep several nights in the laboratory before commencing a sleep study. The disturbance of sleep observed during the first night in a sleep laboratory is called the first-night phenomenon. It is well known to graders of introductory psychology examinations because of the creative definitions of it that are offered by students who forget that it is a sleep-related, rather than a sex-related, phenomenon.

FIGURE 14.1 A participant in a sleep experiment.

Four Stages of Sleep EEG

There are four stages of sleep EEG: stage 1, stage 2, stage 3, and stage 4. Examples of these are presented in Figure 14.2.

After the eyes are shut and a person prepares to go to sleep, alpha waves—waxing and waning bursts of 8- to 12-Hz EEG waves—begin to punctuate the low-voltage, high-frequency waves of alert wakefulness. Then, as the person falls asleep, there is a sudden transition to a period of stage 1 sleep EEG. The stage 1 sleep EEG is a low-voltage, high-frequency signal that is similar to, but slower than, that of alert wakefulness.

FIGURE 14.2 The EEG of alert wakefulness, the EEG that precedes sleep onset, and the four stages of sleep EEG. Each trace is about 10 seconds long.

There is a gradual increase in EEG voltage and a decrease in EEG frequency as the person progresses from stage 1 sleep through stages 2, 3, and 4. Accordingly, the stage 2 sleep EEG has a slightly higher amplitude and a lower frequency than the stage 1 EEG; in addition, it is punctuated by two characteristic wave forms: K complexes and sleep spindles. Each K complex is a single large negative wave (upward deflection) followed immediately by a single large positive wave (downward deflection)—see Cash and colleagues (2009). Each sleep spindle is a 1- to 2-second waxing and waning burst of 12- to 14-Hz waves. The stage 3 sleep EEG is defined by the occasional presence of delta waves—the largest and slowest EEG waves, with a frequency of 1 to 2 Hz—whereas the stage 4 sleep EEG is defined by a predominance of delta waves.
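
Because these stage definitions are stated in terms of EEG frequency bands and amplitudes, a tiny spectral classifier can make them concrete. The Python sketch below is a deliberately simplified illustration, not the clinical scoring procedure: the band limits echo the descriptions above, but the sampling rate, the thresholds, and the function names (band_power, rough_stage_guess) are my own assumptions.

import numpy as np
from scipy.signal import welch

def band_power(freqs, psd, lo, hi):
    # Summed spectral power between lo and hi Hz.
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum()

def rough_stage_guess(epoch, fs=256):
    # Crude, illustrative guess for one 10-second EEG epoch sampled at fs Hz.
    freqs, psd = welch(epoch, fs=fs, nperseg=fs * 2)
    total = band_power(freqs, psd, 0.5, 30.0) + 1e-12
    delta = band_power(freqs, psd, 1.0, 2.5) / total       # delta (roughly 1-2 Hz in the text)
    spindles = band_power(freqs, psd, 12.0, 14.0) / total   # sleep-spindle band
    fast = band_power(freqs, psd, 15.0, 30.0) / total       # low-voltage, fast activity
    if delta > 0.50:                                        # thresholds are arbitrary
        return "stage 4 (delta predominates)"
    if delta > 0.20:
        return "stage 3 (occasional delta)"
    if spindles > fast:
        return "stage 2 (spindle-band activity prominent)"
    return "stage 1 or wakefulness (low-voltage, fast EEG)"

# Ten seconds of synthetic 1.5-Hz "delta-like" activity plus a little noise.
fs = 256
t = np.arange(0, 10, 1 / fs)
epoch = np.sin(2 * np.pi * 1.5 * t) + 0.2 * np.random.randn(t.size)
print(rough_stage_guess(epoch, fs))

A real polysomnographic system would also weigh the EOG and EMG channels described earlier and would score standardized epochs against formal criteria.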

Watch Stages of Sleep

www.mypsychlab.com

Once sleepers reach stage 4 EEG sleep, they stay there for a time, and then they retreat back through the stages of sleep to stage 1. However, when they return to stage 1, things are not at all the same as they were the first time through. The first period of stage 1 EEG during a night’s sleep (initial stage 1 EEG) is not marked by any striking electromyographic or electrooculographic changes, whereas subsequent periods of stage 1 sleep EEG (emergent stage 1 EEG) are accompanied by REMs and by a loss of tone in the muscles of the body core.

After the first cycle of sleep EEG—from initial stage 1 to stage 4 and back to emergent stage 1—the rest of the night is spent going back and forth through the stages. Figure 14.3 illustrates the EEG cycles of a typical night’s sleep and the close relation between emergent stage 1 sleep, REMs, and the loss of tone in core muscles. Notice that each cycle tends to be about 90 minutes long and that, as the night progresses, more and more time is spent in emergent stage 1 sleep, and less and less time is spent in the other stages, particularly stage 4. Notice also that there are brief periods during the night when the person is awake, although he or she usually does not remember these periods of wakefulness in the morning.
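
The cyclic structure described here can be mimicked with a toy hypnogram generator. The sketch below is purely illustrative: the five cycles of roughly 90 minutes follow the text, but the specific minute counts and the function name toy_hypnogram are invented.

def toy_hypnogram(cycles=5, cycle_minutes=90):
    # Each entry: (cycle number, minutes of SWS, minutes of REM, minutes of lighter NREM).
    # The shrinking-SWS / growing-REM pattern mimics Figure 14.3 only qualitatively;
    # the specific numbers are invented for illustration.
    night = []
    for c in range(cycles):
        rem = 10 + 15 * c                 # emergent stage 1 (REM) grows across the night
        sws = max(0, 50 - 15 * c)         # stages 3 and 4 shrink
        lighter = cycle_minutes - rem - sws
        night.append((c + 1, sws, rem, lighter))
    return night

for cycle, sws, rem, lighter in toy_hypnogram():
    print(f"cycle {cycle}: SWS {sws} min, REM {rem} min, stages 1-2 {lighter} min")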

Let’s pause here to get some sleep-stage terms straight. The sleep associated with emergent stage 1 EEG is usually called REM sleep (pronounced “rehm”), after the associated rapid eye movements; whereas all other stages of sleep together are called NREM sleep (non-REM sleep). Stages 3 and 4 together are referred to as slow-wave sleep (SWS), after the delta waves that characterize them.

REMs, loss of core-muscle tone, and a low-amplitude, high-frequency EEG are not the only physiological correlates of REM sleep. Cerebral activity (e.g., oxygen consumption, blood flow, and neural firing) increases to waking levels in many brain structures, and there is a general increase in the variability of autonomic nervous system activity (e.g., in blood pressure, pulse, and respiration). Also, the muscles of the extremities occasionally twitch, and there is often some degree of penile erection in males.

REM Sleep and Dreaming

Nathaniel Kleitman’s laboratory was an exciting place in 1953. REM sleep had just been discovered, and Kleitman and his students were driven by the fascinating implication of the discovery. With the exception of the loss of tone in the core muscles, all of the other measures suggested that REM sleep episodes were emotion-charged. Could REM sleep be the physiological correlate of dreaming? Could it provide researchers with a window into the subjective inner world of dreams? The researchers began by waking a few sleepers in the middle of REM episodes:

The vivid recall that could be elicited in the middle of the night when a subject was awakened while his eyes were moving rapidly was nothing short of miraculous. It [seemed to open] . . . an exciting new world to the subjects whose only previous dream memories had been the vague morning-after recall. Now, instead of perhaps some fleeting glimpse into the dream world each night, the subjects could be tuned into the middle of as many as ten or twelve dreams every night. (From Some Must Watch While Some Must Sleep by William C. Dement, Portable Stanford Books, Stanford Alumni Association, Stanford University, 1978, p. 37. Used by permission of William C. Dement.)

FIGURE 14.3 The course of EEG stages during a typical night’s sleep and the relation of emergent stage 1 EEG to REMs and lack of tone in core muscles.

Strong support for the theory that REM sleep is the physiological correlate of dreaming came from the observation that 80% of awakenings from REM sleep but only 7% of awakenings from NREM (non-REM) sleep led to dream recall. The dreams recalled from NREM sleep tended to be isolated experiences (e.g., “I was falling”), while those associated with REM sleep tended to take the form of stories, or narratives. The phenomenon of dreaming, which for centuries had been the subject of wild speculation, was finally rendered accessible to scientific investigation.

Listen Dreaming

www.mypsychlab.com

Testing Common Beliefs about Dreaming

The high correlation between REM sleep and dream recall provided an opportunity to test some common beliefs about dreaming. The following five beliefs were among the first to be addressed:

• Many people believe that external stimuli can become incorporated into their dreams. Dement and Wolpert (1958) sprayed water on sleeping volunteers after they had been in REM sleep for a few minutes, and then awakened them a few seconds later. In 14 of 33 cases, the water was incorporated into the dream report. The following narrative was reported by one participant who had been dreaming that he was acting in a play:

Watch Lucid Dreaming

www.mypsychlab.com

I was walking behind the leading lady when she suddenly collapsed and water was dripping on her. I ran over to her and water was dripping on my back and head. The roof was leaking. . . . I looked up and there was a hole in the roof. I dragged her over to the side of the stage and began pulling the curtains. Then I woke up. (p. 550)

• Some people believe that dreams last only an instant, but research suggests that dreams run on “real time.” In one study (Dement & Kleitman, 1957), volunteers were awakened 5 or 15 minutes after the beginning of a REM episode and asked to decide on the basis of the duration of the events in their dreams whether they had been dreaming for 5 or 15 minutes. They were correct in 92 of 111 cases.

• Some people claim that they do not dream. However, these people have just as much REM sleep as normal dreamers. Moreover, they report dreams if they are awakened during REM episodes (Goodenough et al., 1959), although they do so less frequently than do normal dreamers.

• Penile erections are commonly assumed to be indicative of dreams with sexual content. However, erections are no more complete during dreams with frank sexual content than during those without it (Karacan et al., 1966). Even babies have REM-related penile erections.

• Many people believe that sleeptalking (somniloquy) and sleepwalking (somnambulism) occur only during dreaming. This is not so (see Dyken, Yamada, & Lin-Dyken, 2001). Sleeptalking has no special association with REM sleep—it can occur during any stage but often occurs during a transition to wakefulness. Sleepwalking usually occurs during stage 3 or 4 sleep, and it never occurs during dreaming, when core muscles tend to be totally relaxed (Usui et al., 2007). There is no proven treatment for sleepwalking (Harris & Grunstein, 2008).

Interpretation of Dreams

Sigmund Freud believed that dreams are triggered by unacceptable repressed wishes, often of a sexual nature. He argued that because dreams represent unacceptable wishes, the dreams we experience (our manifest dreams) are merely disguised versions of our real dreams (our latent dreams): He hypothesized an unconscious censor that disguises and subtracts information from our real dreams so that we can endure them. Freud thus concluded that one of the keys to understanding people and dealing with their psychological problems is to expose the meaning of their latent dreams through the interpretation of their manifest dreams.

There is no convincing evidence for the Freudian theory of dreams; indeed, the brain science of the 1890s, which served as its foundation, is now obsolete. Yet many people accept the notion that dreams bubble up from a troubled subconscious and that they represent repressed thoughts and wishes.

The modern alternative to the Freudian theory of dreams is Hobson’s (1989) activation-synthesis theory (see Eiser, 2005). It is based on the observation that, during REM sleep, many brain-stem circuits become active and bombard the cerebral cortex with neural signals. The essence of the activation-synthesis theory is that the information supplied to the cortex during REM sleep is largely random and that the resulting dream is the cortex’s effort to make sense of these random signals.

14.2 Why Do We Sleep, and Why Do We Sleep When We Do?

Now that you have been introduced to the properties of sleep and its various stages, the focus of this chapter shifts to a consideration of two fundamental questions about sleep: Why do we sleep? And why do we sleep when we do?

Two kinds of theories of sleep have been proposed: recuperation theories and adaptation theories. The differences between these two theoretical approaches are revealed by the answers they offer to the two fundamental questions about sleep.

The essence of recuperation theories of sleep is that being awake disrupts the homeostasis (internal physiological stability) of the body in some way and sleep is required to restore it. Various recuperation theories differ in terms of the particular physiological disruption they propose as the trigger for sleep—for example, it is commonly believed that the function of sleep is to restore energy levels. However, regardless of the particular function postulated by recuperation theories of sleep, they all imply that sleepiness is triggered by a deviation from homeostasis caused by wakefulness and that sleep is terminated by a return to homeostasis.

The essence of adaptation theories of sleep is that sleep is not a reaction to the disruptive effects of being awake but the result of an internal 24-hour timing mechanism—that is, we humans are programmed to sleep at night regardless of what happens to us during the day. According to these theories, we have evolved to sleep at night because sleep protects us from accident and predation during the night. (Remember that humans evolved long before the advent of artificial lighting.)

Adaptation theories of sleep focus more on when we sleep than on the function of sleep. Some of these theories even propose that sleep plays no role in the efficient physiological functioning of the body. According to these theories, early humans had enough time to get their eating, drinking, and reproducing out of the way during the daytime, and their strong motivation to sleep at night evolved to conserve their energy resources and to make them less susceptible to mishap (e.g., predation) in the dark (Rattenborg, Martinez-Gonzales, & Lesku, 2009; Siegel, 2009). Adaptation theories suggest that sleep is like reproductive behavior in the sense that we are highly motivated to engage in it, but we don’t need it to stay healthy.

Evolutionary Perspective

Comparative Analysis of Sleep

Sleep has been studied in only a small number of species, but the evidence so far suggests that most mammals and birds sleep. Furthermore, the sleep of mammals and birds, like ours, is characterized by high-amplitude, low-frequency EEG waves punctuated by periods of low-amplitude, high-frequency waves (see Siegel, 2008). The evidence for sleep in amphibians, reptiles, fish, and insects is less clear: Some display periods of inactivity and unresponsiveness, but the relation of these periods to mammalian sleep has not been established (see Siegel, 2008; Zimmerman et al., 2008). Table 14.1 gives the average number of hours per day that various mammalian species spend sleeping.

Evolutionary Perspective

The comparative investigation of sleep has led to several important conclusions. Let’s consider four of these.

Evolutionary Perspective

First, the fact that most mammals and birds sleep suggests that sleep serves some important physiological function, rather than merely protecting animals from mishap and conserving energy. The evidence is strongest in species that are at increased risk of predation when they sleep (e.g., antelopes) and in species that have evolved complex mechanisms that enable them to sleep. For example, some marine mammals, such as dolphins, sleep with only half of their brain at a time so that the other half can control resurfacing for air (see Rattenborg, Amlaner, & Lima, 2000). It is against the logic of natural selection for some animals to risk predation while sleeping and for others to have evolved complex mechanisms to permit them to sleep, unless sleep itself serves some critical function.

Second, the fact that most mammals and birds sleep suggests that the primary function of sleep is not some special, higher-order human function. For example, suggestions that sleep helps humans reprogram our complex brains or that it permits some kind of emotional release to maintain our mental health are improbable in view of the comparative evidence.

Third, the large between-species differences in sleep time suggest that although sleep may be essential for survival, it is not necessarily needed in large quantities (refer to Table 14.1). Horses and many other animals get by quite nicely on 2 or 3 hours of sleep per day. Moreover, it is important to realize that the sleep patterns of mammals and birds in their natural environments can vary substantially from their patterns in captivity, which is where they are typically studied (see Horne, 2009). For example, some animals that sleep a great deal in captivity sleep little in the wild when food is in short supply or during periods of migration (Siegel, 2008).

TABLE 14.1 Average Number of Hours Slept per Day by Various Mammalian Species

Mammalian Species: Hours of Sleep per Day

Giant sloth: 20
Opossum, brown bat: 19
Giant armadillo: 18
Owl monkey, nine-banded armadillo: 17
Arctic ground squirrel: 16
Tree shrew: 15
Cat, golden hamster: 14
Mouse, rat, gray wolf, ground squirrel: 13
Arctic fox, chinchilla, gorilla, raccoon: 12
Mountain beaver: 11
Jaguar, vervet monkey, hedgehog: 10
Rhesus monkey, chimpanzee, baboon, red fox: 9
Human, rabbit, guinea pig, pig: 8
Gray seal, gray hyrax, Brazilian tapir: 6
Tree hyrax, rock hyrax: 5
Cow, goat, elephant, donkey, sheep: 3
Roe deer, horse: 2

FIGURE 14.4 After gorging themselves on a kill, African lions often sleep almost continuously for 2 or 3 days. And where do they sleep? Anywhere they want!

Fourth, many studies have tried to identify some characteristic that distinguishes long-sleeping species from short-sleeping species. Why do cats tend to sleep about 14 hours a day and horses only about 2? Under the influence of recuperation theories, researchers have focused on energy-related factors in their efforts. However, there is no strong relationship between a species’ sleep time and its level of activity, its body size, or its body temperature (see Siegel, 2005). The fact that giant sloths sleep 20 hours per day is a strong argument against the theory that sleep is a compensatory reaction to energy expenditure—similarly, energy expenditure has been shown to have little effect on subsequent sleep in humans (Driver & Taylor, 2000; Youngstedt & Kline, 2006). In contrast, adaptation theories correctly predict that the daily sleep time of each species is related to how vulnerable it is while it is asleep and how much time it must spend each day to feed itself and to take care of its other survival requirements. For example, zebras must graze almost continuously to get enough to eat and are extremely vulnerable to predatory attack when they are asleep—and they sleep only about 2 hours per day. In contrast, African lions often sleep more or less continuously for 2 or 3 days after they have gorged themselves on a kill. Figure 14.4 says it all.

14.3 Effects of Sleep Deprivation

One way to identify the functions of sleep is to determine what happens when a person is deprived of sleep. This section begins with a cautionary note about the interpretation of the effects of sleep deprivation, a description of the predictions that recuperation theories make about sleep deprivation, and two classic case studies of sleep deprivation. Then, it summarizes the results of sleep-deprivation research.

Interpretation of the Effects of Sleep Deprivation: The Stress Problem

I am sure that you have experienced the negative effects of sleep loss. When you sleep substantially less than you are used to, the next day you feel out of sorts and unable to function as well as you usually do. Although such experiences of sleep deprivation are compelling, you need to be cautious in interpreting them. In Western cultures, most people who sleep little or irregularly do so because they are under extreme stress (e.g., from illness, excessive work, shift work, drugs, or examinations), which could have adverse effects independent of any sleep loss. Even when sleep deprivation studies are conducted on healthy volunteers in controlled laboratory environments, stress can be a contributing factor because many of the volunteers will find the sleep-deprivation procedure itself stressful. Because it is difficult to separate the effects of sleep loss from the effects of stressful conditions that may have induced the loss, results of sleep-deprivation studies must be interpreted with particular caution.

Unfortunately, many studies of sleep deprivation, particularly those that are discussed in the popular media, do not control for stress. For example, almost weekly I read an article in my local newspaper decrying the effects of sleep loss in the general population. It will point out that many people who are pressured by the demands of their work schedule sleep little and experience a variety of health and accident problems. There is a place for this kind of research because it identifies a problem that requires public attention; however, because the low levels of sleep are hopelessly confounded with high levels of stress, many sleep-deprivation studies tell us little about the functions of sleep and how much we need.

Predictions of Recuperation Theories about Sleep Deprivation

Because recuperation theories of sleep are based on the premise that sleep is a response to the accumulation of some debilitating effect of wakefulness, they make the following three predictions about sleep deprivation:

• Long periods of wakefulness will produce physiological and behavioral disturbances.

• These disturbances will grow steadily worse as the sleep deprivation continues.

• After a period of deprivation has ended, much of the missed sleep will be regained.

Have these predictions been confirmed?

Two Classic Sleep-Deprivation Case Studies

Let’s look at two widely cited sleep-deprivation case studies. First is the study of a group of sleep-deprived students, described by Kleitman (1963); second is the case of Randy Gardner, described by Dement (1978).

The Case of the Sleep-Deprived Students

While there were differences in the many subjective experiences of the sleep-evading persons, there were several features common to most…. [D]uring the first night the subject did not feel very tired or sleepy. He could read or study or do laboratory work, without much attention from the watcher, but usually felt an attack of drowsiness between 3 A.M. and 6 A.M. . . . Next morning the subject felt well, except for a slight malaise which always appeared on sitting down and resting for any length of time. However, if he occupied himself with his ordinary daily tasks, he was likely to forget having spent a sleepless night. During the second night . . . reading or study was next to impossible because sitting quietly was conducive to even greater sleepiness. As during the first night, there came a 2–3 hour period in the early hours of the morning when the desire for sleep was almost overpowering. . . . Later in the morning the sleepiness diminished once more, and the subject could perform routine laboratory work, as usual. It was not safe for him to sit down, however, without danger of falling asleep, particularly if he attended lectures….

The third night resembled the second, and the fourth day was like the third. . . . At the end of that time the individual was as sleepy as he was likely to be. Those who continued to stay awake experienced the wavelike increase and decrease in sleepiness with the greatest drowsiness at about the same time every night. (Kleitman, 1963, pp. 220–221)

The Case of Randy Gardner

As part of a 1965 science fair project, Randy Gardner and two classmates, who were entrusted with keeping him awake, planned to break the then world record of 260 hours of consecutive wakefulness. Dement read about the project in the newspaper and, seeing an opportunity to collect some important data, joined the team, much to the comfort of Randy’s worried parents. Randy proved to be a friendly and cooperative subject, although he did complain vigorously when his team would not permit him to close his eyes for more than a few seconds at a time. However, in no sense could Randy’s behavior be considered abnormal or disturbed. Near the end of his vigil, Randy held a press conference attended by reporters and television crews from all over the United States, and he conducted himself impeccably. When asked how he had managed to stay awake for 11 days, he replied politely, “It’s just mind over matter.” Randy went to sleep exactly 264 hours and 12 minutes after his alarm clock had awakened him 11 days before. And how long did he sleep? Only 14 hours the first night, and thereafter he returned to his usual 8-hour schedule. Although it may seem amazing that Randy did not have to sleep longer to “catch up” on his lost sleep, the lack of substantial recovery sleep is typical of such cases.

(From Some Must Watch While Some Must Sleep by William C. Dement, Portable Stanford Books, Stanford Alumni Association, Stanford University, 1978, pp. 38–39. Used by permission of William C. Dement.)

Experimental Studies of Sleep Deprivation in Humans

Since the first studies of sleep deprivation by Dement and Kleitman in the mid-20th century, there have been hundreds of studies assessing the effects on humans of sleep-deprivation schedules ranging from a slightly reduced amount of sleep during one night to total sleep deprivation for several nights (see Durmer & Dinges, 2005). The studies have assessed the effects of these schedules on many different measures of sleepiness, mood, cognition, motor performance, physiological function, and even molecular function (see Cirelli, 2006).

Even moderate amounts of sleep deprivation—for example, sleeping 3 or 4 hours less than normal for one night—have been found to have three consistent effects. First, sleep-deprived individuals display an increase in sleepiness: They report being more sleepy, and they fall asleep more quickly if given the opportunity. Second, sleep-deprived individuals display negative affect on various written tests of mood. And third, they perform poorly on tests of vigilance, such as watching a computer screen and responding when a moving light flickers.

The effects of sleep deprivation on complex cognitive functions have been less consistent (see Drummond et al., 2004). Consequently, researchers have preferred to assess performance on the simple, dull, monotonous tasks most sensitive to the effects of sleep deprivation (see Harrison & Horne, 2000). Nevertheless, a growing number of studies have been able to demonstrate disruption of the performance of complex cognitive tasks by sleep deprivation (Blagrove, Alexander, & Horne, 2006; Durmer & Dinges, 2005; Killgore, Balkin, & Wesensten, 2006; Nilsson et al., 2005), although a substantial amount of sleep deprivation (e.g., 24 hours) has often been required to produce consistent disruption (e.g., Killgore, Balkin, & Wesensten, 2006; Strangman et al., 2005).

The disruptive impact of sleep deprivation on cognitive function has been clarified by the discovery that only some cognitive functions are susceptible. Many early studies of the effect of sleep deprivation on cognitive function used tests of logical deduction or critical thinking, and performance on these has proved to be largely immune to the disruptive effects of sleep loss. In contrast, performance on tests of executive function (cognitive abilities that appear to depend on the prefrontal cortex) has proven much more susceptible (see Nilsson et al., 2005). Executive function includes innovative thinking, lateral thinking, insightful thinking, and assimilating new information to update plans and strategies.

The adverse effects of sleep deprivation on physical performance have been surprisingly inconsistent considering the general belief that a good night’s sleep is essential for optimal motor performance. Only a few measures tend to be affected, even after lengthy periods of deprivation (see Van Helder & Radomski, 1989).

Sleep deprivation has been found to have a variety of physiological consequences such as reduced body temperature, increases in blood pressure, decreases in some aspects of immune function, hormonal changes, and metabolic changes (e.g., Dinges et al., 1994; Kato et al., 2000; Knutson et al., 2007; Ogawa et al., 2003). The problem is that there is little evidence that these changes have any consequences for health or performance. For example, the fact that a decline in immune function was discovered in sleep-deprived volunteers does not necessarily mean that they would be more susceptible to infection—the immune system is extremely complicated and a decline in one aspect can be compensated for by other changes. This is why I want to single out a study by Cohen and colleagues (2009) for commendation: Rather than studying immune function, these researchers focused directly on susceptibility to infection and illness. They exposed 153 healthy volunteers to a cold virus. Those who reported sleeping less than 8 hours a night were not less likely to become infected, but they were more likely to develop cold symptoms. Although this is only a correlational study (see Chapter 1) and thus cannot directly implicate sleep duration as the causal factor, experimental studies of sleep and infectious disease need to follow this example and directly measure susceptibility to infection and illness.

After 2 or 3 days of continuous sleep deprivation, most study participants experience microsleeps, unless they are in a laboratory environment where the microsleeps can be interrupted as soon as they begin. Microsleeps are brief periods of sleep, typically about 2 or 3 seconds long, during which the eyelids droop and the subjects become less responsive to external stimuli, even though they remain sitting or standing. Microsleeps disrupt performance on tests of vigilance, but such performance deficits also occur in sleep-deprived individuals who are not experiencing microsleeps (Ferrara, De Gennaro, & Bertini, 1999).

It is useful to compare the effects of sleep deprivation with those of deprivation of the motivated behaviors discussed in Chapters 12 and 13. If people were deprived of the opportunity to eat or engage in sexual activity, the effects would be severe and unavoidable: In the first case, starvation and death would ensue; in the second, there would be a total loss of reproductive capacity. Despite our powerful drive to sleep, the effects of sleep deprivation tend to be subtle, selective, and variable. This is puzzling. Another puzzling thing is that performance deficits observed after extended periods of sleep deprivation disappear so readily—for example, in one study, 4 hours of sleep eliminated the performance deficits produced by 64 hours of sleep deprivation (Rosa, Bonnett, & Warm, 2007).

Thinking Creatively

Sleep-Deprivation Studies with Laboratory Animals

Evolutionary Perspective

FIGURE 14.5 The carousel apparatus used to deprive an experimental rat of sleep while a yoked control rat is exposed to the same number and pattern of disk rotations. The disk on which both rats rest rotates every time the experimental rat has a sleep EEG. If the sleeping rat does not awaken immediately, it is deposited in the water. (Based on Rechtschaffen et al., 1983.)

The carousel apparatus (see Figure 14.5) has been used to deprive rats of sleep. Two rats, an experimental rat and its yoked control, are placed in separate chambers of the apparatus. Each time the EEG activity of the experimental rat indicates that it is sleeping, the disk, which serves as the floor of half of both chambers, starts to slowly rotate. As a result, if the sleeping experimental rat does not awaken immediately, it gets shoved off the disk into a shallow pool of water. The yoked control is exposed to exactly the same pattern of disk rotations; but if it is not sleeping, it can easily avoid getting dunked by walking in the direction opposite to the direction of disk rotation. The experimental rats typically died after about 12 days, while the yoked controls stayed reasonably healthy (see Rechtschaffen & Bergmann, 1995).
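
The yoking logic can be made explicit with a short simulation. Everything below is invented for illustration (random one-minute sleep/wake bins over a 60-minute session); the point is only that disk rotations are triggered by the experimental rat's sleep yet delivered identically to both rats.

import random

random.seed(0)
minutes = 60
exp_asleep = [random.random() < 0.4 for _ in range(minutes)]    # experimental rat, minute by minute
ctrl_asleep = [random.random() < 0.4 for _ in range(minutes)]   # yoked control, minute by minute

rotations = exp_asleep   # the disk turns only when the experimental rat is asleep
exp_dunked = sum(1 for turning, asleep in zip(rotations, exp_asleep) if turning and asleep)
ctrl_dunked = sum(1 for turning, asleep in zip(rotations, ctrl_asleep) if turning and asleep)

print(f"disk rotations: {sum(rotations)}")
print(f"experimental rat awakened or dunked: {exp_dunked} times")   # on every rotation
print(f"yoked control dunked: {ctrl_dunked} times")                 # only when it happened to be asleep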

The fact that humans and rats have been sleep-deprived by other means for similar periods of time without dire consequences argues for caution in interpreting the results of the carousel sleep-deprivation experiments (see Rial et al., 2007; Siegel, 2009). It may be that repeatedly being awakened by this apparatus kills the experimental rats not because it keeps them from sleeping but because it is stressful. This interpretation is consistent with the pathological problems in the experimental rats that were revealed by postmortem examination: swollen adrenal glands, gastric ulcers, and internal bleeding.

You have already encountered many examples in this book of the value of the comparative approach. However, sleep deprivation may be one phenomenon that cannot be productively studied in nonhumans because of the unavoidable confounding effects of extreme stress (see Benington & Heller, 1999; D’Almeida et al., 1997; Horne, 2000).

Thinking Creatively

REM-Sleep Deprivation

Because of its association with dreaming, REM sleep has been the subject of intensive investigation. In an effort to reveal the particular functions of REM sleep, sleep researchers have specifically deprived sleeping volunteers of REM sleep by waking them up each time a bout of REM sleep begins.

REM-sleep deprivation has been shown to have two consistent effects (see Figure 14.6). First, following REM-sleep deprivation, participants display a REM rebound; that is, they have more than their usual amount of REM sleep for the first two or three nights (Brunner et al., 1990). Second, with each successive night of deprivation, there is a greater tendency for participants to initiate REM sequences. Thus, as REM-sleep deprivation proceeds, participants have to be awakened more and more frequently to keep them from accumulating significant amounts of REM sleep. For example, during the first night of REM-sleep deprivation in one experiment (Webb & Agnew, 1967), the participants had to be awakened 17 times to keep them from having extended periods of REM sleep; but during the seventh night of deprivation, they had to be awakened 67 times.

The compensatory increase in REM sleep following a period of REM-sleep deprivation suggests that the amount of REM sleep is regulated separately from the amount of slow-wave sleep and that REM sleep serves a special function. This finding, coupled with the array of interesting physiological and psychological events that define REM sleep, has led to much speculation about its function.

FIGURE 14.6 The two effects of REM-sleep deprivation.

Considerable attention has focused on the potential role of REM sleep in strengthening explicit memory (see Chapter 11). Many reviewers of the literature on this topic have treated the positive effect of REM sleep on the storage of existing memories as well established, and researchers have moved on to study the memory-promoting effects of other stages of sleep (e.g., Deak & Stickgold, 2010; Rasch & Born, 2008; Stickgold & Walker, 2007) and the physiological mechanisms of these memory-promoting effects (e.g., Rasch et al., 2007). However, two eminent sleep researchers, Robert Vertes and Jerome Siegel (2005), have argued that the evidence that REM sleep strengthens memory is unconvincing (see Vertes, 2004). They point out, for example, that numerous studies failing to support a mnemonic (pertaining to memory) function of REM sleep have been ignored. They also question why the many patients who have taken antidepressant drugs that block REM sleep experience no obvious memory problems, even if they have taken the drugs for months or even years. In one study, pharmacologically blocking REM sleep in human volunteers did not disrupt consolidation of verbal memories, and it actually improved the consolidation of memory for motor tasks (Rasch et al., 2008).

The default theory of REM sleep is a different approach (Horne, 2000). According to this theory, it is difficult to stay continuously in NREM sleep, so the brain periodically switches to one of two other states. If there is any immediate bodily need to take care of (e.g., eating or drinking), the brain switches to wakefulness; if there are no immediate needs, it switches to the default state—REM sleep. According to the default theory, REM sleep and wakefulness are similar states, but REM sleep is more adaptive when there are no immediate bodily needs. Indirect support for this theory comes from the many similarities between REM sleep and wakefulness.

A study by Nykamp and colleagues (1998) supported the default theory of REM sleep. The researchers awakened young adults every time they entered REM sleep, but instead of letting them go back to sleep immediately, substituted a 15-minute period of wakefulness for each lost REM period. Under these conditions, the participants, unlike the controls, were not tired the next day, despite getting only 5 hours of sleep, and they displayed no REM rebound. In other words, there seemed to be no need for REM sleep if periods of wakefulness were substituted for it. This finding has been replicated in rats (Oniani & Lortkipanidze, 2003), and it is consistent with the finding that as antidepressants reduce REM sleep, the number of nighttime awakenings increases (see Horne, 2000).

Sleep Deprivation Increases the Efficiency of Sleep

One of the most important findings of human sleep-deprivation research is that individuals who are deprived of sleep become more efficient sleepers (see Elmenhorst et al., 2008). In particular, their sleep has a higher proportion of slow-wave sleep (stages 3 and 4), which seems to serve the main restorative function. Because this is such an important finding, let’s look at six major pieces of evidence that support it:

• Although people regain only a small proportion of their total lost sleep after a period of sleep deprivation, they regain most of their lost stage 4 sleep (e.g., Borbély et al., 1981; De Gennaro, Ferrara, & Bertini, 2000; Lucidi et al., 1997).

• After sleep deprivation, the slow-wave sleep EEG of humans is characterized by an even higher proportion than usual of slow waves (Aeschbach et al., 1996; Borbély, 1981; Borbély et al., 1981).

• People who sleep 6 hours or less per night normally get as much slow-wave sleep as people who sleep 8 hours or more (e.g., Jones & Oswald, 1966; Webb & Agnew, 1970).

• If individuals take a nap in the morning after a full night’s sleep, their naptime EEG shows few slow waves, and the nap does not reduce the duration of the following night’s sleep (e.g., Åkerstedt & Gillberg, 1981; Hume & Mills, 1977; Karacan et al., 1970).

• People who gradually reduce their usual sleep time get less stage 1 and stage 2 sleep, but the duration of their slow-wave sleep remains about the same as before (Mullaney et al., 1977; Webb & Agnew, 1975).

• Repeatedly waking individuals during REM sleep produces little increase in the sleepiness they experience the next day, whereas repeatedly waking individuals during slow-wave sleep has major effects (Nykamp et al., 1998).

The fact that sleep becomes more efficient in people who sleep less means that conventional sleep-deprivation studies are virtually useless for discovering how much sleep people need. Certainly, our bodies respond negatively when we get less sleep than we are used to getting. However, the negative consequences of sleep loss in inefficient sleepers do not indicate whether the lost sleep was really needed. The true need for sleep can be assessed only by experiments in which sleep is regularly reduced for many weeks, to give the participants the opportunity to adapt to getting less sleep by maximizing their sleep efficiency. Only when people are sleeping at their maximum efficiency is it possible to determine how much sleep they really need. Such sleep-reduction studies are discussed later in the chapter, but please pause here to think about this point—it is extremely important, and it is totally consistent with the growing appreciation of the plasticity and adaptiveness of the adult mammalian brain.

Thinking Creatively

Neuroplasticity

This is an appropriate time, here at the end of the section on sleep deprivation, for me to file a brief progress report. It has now been 2 weeks since I began my 5-hours-per-night sleep schedule. Generally, things are going well. My progress on this chapter has been faster than usual. I am not having any difficulty getting up on time or getting my work done, but I am finding that it takes a major effort to stay awake in the evening. If I try to read or watch a bit of television after 10:30, I experience microsleeps. My so-called friends delight in making sure that my transgressions are quickly interrupted.

Scan Your Brain

Before continuing with this chapter, scan your brain by completing the following exercise to make sure you understand the fundamentals of sleep. The correct answers appear at the end of the exercise. Before proceeding, review material related to your errors and omissions.

1. The three standard psychophysiological measures of sleep are the EEG, the EMG, and the ______.

2. Stage 4 sleep EEG is characterized by a predominance of ______ waves.

3. ______ stage 1 EEG is accompanied by neither REM nor loss of core-muscle tone.

4. Dreaming occurs predominantly during ______ sleep.

5. The modern alternative to Freud’s theory of dreaming is Hobson’s ______ theory.

6. There are two fundamentally different kinds of theories of sleep: recuperation theories and _________ theories.

7. The effects of sleep deprivation are often difficult to study because they are often confounded by ______.

8. Convincing evidence that REM-sleep deprivation does not produce severe memory problems comes from the study of patients taking certain ______ drugs.

9. After a lengthy period of sleep deprivation (e.g., several days), a person’s first night of sleep is only slightly longer than usual, but it contains a much higher proportion of ______ waves.

10. ______ sleep in particular, rather than sleep in general, appears to play the major restorative role.

Scan Your Brain answers: (1) EOG, (2) delta, (3) Initial, (4) REM, (5) activation-synthesis, (6) adaptation, (7) stress, (8) antidepressant, (9) slow (or delta), (10) Slow-wave (or stages 3 and 4).

14.4 Circadian Sleep Cycles

The world in which we live cycles from light to dark and back again once every 24 hours, and most surface-dwelling species have adapted to this regular change in their environment with a variety of circadian rhythms (see Foster & Kreitzman, 2004; circadian means “lasting about a day”). For example, most species display a regular circadian sleep–wake cycle. Humans take advantage of the light of day to take care of their biological needs, and then they sleep for much of the night; in contrast, nocturnal animals, such as rats, sleep for much of the day and stay awake at night.

Although the sleep–wake cycle is the most obvious circadian rhythm, it is difficult to find a physiological, biochemical, or behavioral process in animals that does not display some measure of circadian rhythmicity (Gillette & Sejnowski, 2005). Each day, our bodies adjust themselves in a variety of ways to meet the demands of the two environments in which we live: light and dark.

Our circadian cycles are kept on their once-every-24-hours schedule by temporal cues in the environment. The most important of these cues for the regulation of mammalian circadian rhythms is the daily cycle of light and dark. Environmental cues, such as the light–dark cycle, that can entrain (control the timing of) circadian rhythms are called zeitgebers (pronounced “ZITE-gay-bers”), a German word that means “time givers.” In controlled laboratory environments, it is possible to lengthen or shorten circadian cycles somewhat by adjusting the duration of the light–dark cycle; for example, when exposed to alternating 11.5-hour periods of light and 11.5-hour periods of dark, subjects’ circadian cycles begin to conform to a 23-hour day. In a world without 24-hour cycles of light and dark, other zeitgebers can entrain circadian cycles. For example, the circadian sleep–wake cycles of hamsters living in continuous darkness or in continuous light can be entrained by regular daily bouts of social interaction, hoarding, eating, or exercise (see Mistlberger et al., 1996; Sinclair & Mistlberger, 1997). Hamsters display particularly clear circadian cycles and thus are frequent subjects of research on circadian rhythms.

Free-Running Circadian Sleep–Wake Cycles

What happens to sleep–wake cycles and other circadian rhythms in an environment that is devoid of zeitgebers? Remarkably, under conditions in which there are absolutely no temporal cues, humans and other animals maintain all of their circadian rhythms. Circadian rhythms in constant environments are said to be free-running rhythms, and their duration is called the free-running period. Free-running periods vary in length from subject to subject, are of relatively constant duration within a given subject, and are usually longer than 24 hours—about 24.2 hours is typical in humans living under constant moderate illumination (see Czeisler et al., 1999). It seems that we all have an internal biological clock that habitually runs a little slow unless it is entrained by time-related cues in the environment.

A typical free-running circadian sleep–wake cycle is illustrated in Figure 14.7. Notice its regularity. Without any external cues, this man fell asleep at intervals of approximately 25.3 hours for an entire month. The regularity of free-running sleep–wake cycles despite variations in physical and mental activity provides support for the dominance of circadian factors over recuperative factors in the regulation of sleep.
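
The drift in Figure 14.7 is simple arithmetic: with a 25.3-hour free-running period and no zeitgebers, sleep onset slips about 1.3 hours later each day. The short sketch below just encodes that arithmetic; the starting bedtime and the number of days shown are arbitrary assumptions.

def sleep_onsets(free_running_period_h=25.3, first_onset_h=23.0, days=7):
    # Clock time (hours past midnight, modulo 24) of sleep onset on successive days.
    drift = free_running_period_h - 24.0          # daily delay; 1.3 h for the man in Figure 14.7
    return [(first_onset_h + day * drift) % 24 for day in range(days)]

for day, onset in enumerate(sleep_onsets(), start=1):
    print(f"day {day}: sleep onset at about {onset:.1f} h")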

Free-running circadian cycles do not have to be learned. Even rats that are born and raised in an unchanging laboratory environment (in continuous light or in continuous darkness) display regular free-running sleep–wake cycles that are slightly longer than 24 hours (Richter, 1971).

FIGURE 14.7 A free-running circadian sleep–wake cycle 25.3 hours in duration. Despite living in an unchanging environment with no time cues, the man went to sleep each day approximately 1.3 hours later than he had the day before. (Based on Wever, 1979, p. 30.)

Many animals display a circadian cycle of body temperature that is related to their circadian sleep–wake cycle: They tend to sleep during the falling phase of their circadian body temperature cycle and awaken during its rising phase. However, when subjects are housed in constant laboratory environments, their sleep–wake and body temperature cycles sometimes break away from one another. This phenomenon is called internal desynchronization (see De La Iglesia, Cambras, & Díez-Noguera, 2008). For example, in one human volunteer, the free-running periods of both the sleep–wake and body temperature cycles were initially 25.7 hours; then, for some unknown reason, there was an increase in the free-running period of the sleep–wake cycle to 33.4 hours and a decrease in the free-running period of the body temperature cycle to 25.1 hours. The potential for the simultaneous existence of two different free-running periods suggests that there is more than one circadian timing mechanism, and that sleep is not causally related to the decreases in body temperature that are normally associated with it.

There is another point about free-running circadian sleep–wake cycles that is incompatible with recuperation theories of sleep. On occasions when subjects stay awake longer than usual, the following sleep time is shorter rather than longer (Wever, 1979). Humans and other animals are programmed to have sleep–wake cycles of approximately 24 hours; hence, the more wakefulness there is during a cycle, the less time there is for sleep.

Jet Lag and Shift Work

People in modern industrialized societies are faced with two different disruptions of circadian rhythmicity: jet lag and shift work. Jet lag occurs when the zeitgebers that control the phases of various circadian rhythms are accelerated during east-bound flights (phase advances) or decelerated during west-bound flights (phase delays). In shift work, the zeitgebers stay the same, but workers are forced to adjust their natural sleep–wake cycles in order to meet the demands of changing work schedules. Both of these disruptions produce sleep disturbances, fatigue, general malaise, and deficits on tests of physical and cognitive function. The disturbances can last for many days; for example, it typically takes about 10 days to completely adjust to the phase advance of 10.5 hours that one experiences on a Tokyo-to-Boston flight.
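
The Tokyo-to-Boston example implies a re-entrainment rate of roughly 1 hour per day after a phase advance. The one-line estimate below simply encodes that back-of-the-envelope rate; it is an extrapolation from the example, not a clinical rule.

def days_to_adjust(shift_hours, hours_per_day=1.0):
    # Time to fully re-entrain after a phase shift, assuming a constant
    # re-entrainment rate (an assumption extrapolated from the example in the text).
    return shift_hours / hours_per_day

print(days_to_adjust(10.5))   # about 10 days, as for the Tokyo-to-Boston flight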

What can be done to reduce the disruptive effects of jet lag and shift work? Two behavioral approaches have been proposed for the reduction of jet lag. One is gradually shifting one’s sleep–wake cycle in the days prior to the flight. The other is administering treatments after the flight that promote the required shift in the circadian rhythm. For example, exposure to intense light early in the morning following an east-bound flight accelerates adaptation to the phase advance. Similarly, the results of a study of hamsters (Mrosovsky & Salmon, 1987) suggest that a good workout early in the morning of the first day after an east-bound flight might accelerate adaptation to the phase advance; hamsters that engaged in one 3-hour bout of wheel running 7 hours before their usual period of activity adapted quickly to an 8-hour advance in their light–dark cycle (see Figure 14.8 on page 368).

Companies that employ shift workers have had success in improving the productivity and job satisfaction of those workers by scheduling phase delays rather than phase advances; whenever possible, shift workers are transferred from their current schedule to one that begins later in the day (see Driscoll, Grunstein, & Rogers, 2007). It is much more difficult to go to sleep 4 hours earlier and get up 4 hours earlier (a phase advance) than it is to go to sleep 4 hours later and get up 4 hours later (a phase delay). This is also why east-bound flights tend to be more problematic for travelers than west-bound flights.

FIGURE 14.8 A period of forced exercise accelerates adaptation to an 8-hour phase advance in the circadian light–dark cycle. Daily activity is shown in red; periods of darkness are shown in black; and the period of forced exercise is shown in green. (Based on Mrosovsky & Salmon, 1987.)

A Circadian Clock in the Suprachiasmatic Nuclei

The fact that circadian sleep–wake cycles persist in the absence of temporal cues from the environment indicates that the physiological systems that regulate sleep are controlled by an internal timing mechanism—the circadian clock.

The first breakthrough in the search for the circadian clock was Richter’s 1967 discovery that large medial hypothalamic lesions disrupt circadian cycles of eating, drinking, and activity in rats. Next, specific lesions of the suprachiasmatic nuclei (SCN) of the medial hypothalamus were shown to disrupt various circadian cycles, including sleep–wake cycles. Although SCN lesions do not greatly affect the amount of time mammals spend sleeping, they do abolish the circadian periodicity of sleep cycles. Further support for the conclusion that the suprachiasmatic nuclei contain a circadian timing mechanism comes from the observation that the nuclei display circadian cycles of electrical, metabolic, and biochemical activity that can be entrained by the light–dark cycle (see Mistlberger, 2005; Saper et al., 2005).

If there was any lingering doubt about the location of the circadian clock, it was eliminated by the brilliant experiment of Ralph and his colleagues (1990). They removed the SCN from the fetuses of a strain of mutant hamsters that had an abnormally short (20-hour) free-running sleep–wake cycle. Then, they transplanted the SCN into normal adult hamsters whose free-running sleep–wake cycles of 25 hours had been abolished by SCN lesions. These transplants restored free-running sleep–wake cycles in the recipients; but, remarkably, the cycles were about 20 hours long rather than the original 25 hours. Transplants in the other direction—that is, from normal hamster fetuses to SCN-lesioned adult mutants—had the complementary effect: They restored free-running sleep–wake cycles that were about 25 hours long rather than the original 20 hours.

Although the suprachiasmatic nuclei are unquestionably the major circadian clocks in mammals, they are not the only ones (e.g., Tosini et al., 2008). Three lines of experiments, largely conducted in the 1980s and 1990s, pointed to the existence of other circadian timing mechanisms:

• Under certain conditions, bilateral SCN lesions have been shown to leave some circadian rhythms unaffected while abolishing others.

• Bilateral SCN lesions do not eliminate the ability of all environmental stimuli to entrain circadian rhythms; for example, SCN lesions can block entrainment by light but not by food or water availability.

• Just like suprachiasmatic neurons, cells from other parts of the body display free-running circadian cycles of activity when maintained in tissue culture.

Neural Mechanisms of Entrainment

How does the 24-hour light–dark cycle entrain the sleep–wake cycle and other circadian rhythms? To answer this question, researchers began at the obvious starting point: the eyes (see Morin & Allen, 2006). They tried to identify and track the specific neurons that left the eyes and carried the information about light and dark that entrained the biological clock. Cutting the optic nerves before they reached the optic chiasm eliminated the ability of the light–dark cycle to entrain circadian rhythms; however, when the optic tracts were cut at the point where they left the optic chiasm, the ability of the light–dark cycle to entrain circadian rhythms was unaffected. As Figure 14.9 illustrates, these two findings indicated that visual axons critical for the entrainment of circadian rhythms branch off from the optic nerve in the vicinity of the optic chiasm. This finding led to the discovery of the retinohypothalamic tracts, which leave the optic chiasm and project to the adjacent suprachiasmatic nuclei.

Surprisingly, although the retinohypothalamic tracts mediate the ability of light to entrain circadian rhythms, neither rods nor cones are necessary for that entrainment. The mystery photoreceptors have proven to be a rare type of retinal ganglion cell with distinctive functional properties (see Berson, 2003; Hattar et al., 2002). During the course of evolution, these photoreceptors have sacrificed the ability to respond quickly and briefly to rapid changes of light in favor of the ability to respond consistently to slowly changing levels of background illumination. Their photopigment is melanopsin (Hankins, Peirson, & Foster, 2007; Panda et al., 2005).

Evolutionary Perspective

Genetics of Circadian Rhythms

An important breakthrough in the study of circadian rhythms came in 1988 when routine screening of a shipment of hamsters revealed that some of them had abnormally short 20-hour free-running circadian rhythms. Subsequent breeding experiments showed that the abnormality was the result of a genetic mutation, and the gene that was mutated was named tau (Ralph & Menaker, 1988).

FIGURE 14.9 The discovery of the retinohypothalamic tracts. Neurons from each retina project to both suprachiasmatic nuclei.

Although tau was the first mammalian circadian gene to be identified, it was not the first to have its molecular structure characterized. This honor went to clock, a mammalian circadian gene discovered in mice. The structure of the clock gene was characterized in 1997, and that of the tau gene was characterized in 2000 (Lowrey et al., 2000). The molecular structures of several other mammalian circadian genes have now been specified (see Morse & Sassone-Corsi, 2002).

The identification of circadian genes has led to three important discoveries:

• The same or similar circadian genes have been found in many species of different evolutionary ages (e.g., bacteria, flies, fish, frogs, mice, and humans), indicating that circadian genes evolved early in evolutionary history and have been conserved in various descendant species (see Cirelli, 2009).

Evolutionary Perspective

• Once the circadian genes were discovered, the fundamental molecular mechanism of circadian rhythms was quickly clarified. The key mechanism seems to be cyclic gene expression: The transcription of the circadian genes, and thus the production of the proteins they encode, itself displays a circadian cycle, driven by a feedback loop in which those proteins ultimately suppress their own transcription (see Dunlap, 2006; Hardin, 2006; Meyer, Saez, & Young, 2006). (A minimal simulation of such a feedback loop appears after this list.)

• The identification of circadian genes provided a more direct method of exploring the circadian timing capacities of parts of the body other than the SCN. Molecular circadian timing mechanisms similar to those in the SCN exist in most cells of the body (see Green & Menaker, 2003; Hastings, Reddy, & Maywood, 2003; Yamaguchi et al., 2003). Although most cells contain circadian timing mechanisms, these cellular clocks are normally entrained by neural and hormonal signals from the SCN.
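To give a concrete sense of how a single cell can keep circadian time through cyclic gene expression, the following sketch simulates a generic delayed negative-feedback loop: a clock gene's mRNA is translated into a protein that, after maturing, represses the gene's own transcription. This is a toy model for illustration only, not the actual mammalian clock circuitry; the variable names, parameter values, and time units are hypothetical and are not tuned to a real 24-hour period.

# Toy transcription-translation negative-feedback loop, meant only to
# illustrate how rhythmic gene expression can arise from delayed
# self-repression.  All names and parameter values are hypothetical.

def simulate(hours=120.0, dt=0.01):
    M, P, R = 0.1, 0.1, 0.1                 # mRNA, protein, mature repressor
    k_txn, k_tln, k_mat = 1.0, 1.0, 1.0     # production rates (arbitrary units)
    d_m, d_p, d_r = 0.2, 0.2, 0.2           # first-order decay rates
    K, n = 1.0, 10                          # repression threshold and steepness
    trace = []
    for step in range(int(hours / dt)):
        txn = k_txn / (1.0 + (R / K) ** n)  # transcription repressed by R
        dM = txn - d_m * M
        dP = k_tln * M - d_p * P            # translation of mRNA into protein
        dR = k_mat * P - d_r * R            # protein matures into the repressor
        M += dM * dt; P += dP * dt; R += dR * dt
        if step % int(1.0 / dt) == 0:       # record once per simulated "hour"
            trace.append(M)
    return trace

levels = simulate()
peaks = [t for t in range(1, len(levels) - 1)
         if levels[t] > levels[t - 1] and levels[t] > levels[t + 1]]
print("mRNA peaks at simulated hours:", peaks)

The point of the sketch is that nothing outside the simulated cell drives the rhythm; the oscillation emerges from the feedback loop itself, which is consistent with the observation that cells isolated in tissue culture continue to cycle.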

14.5 Four Areas of the Brain Involved in Sleep

You have just learned about the neural structures involved in controlling the circadian timing of sleep. This section describes four areas of the brain that are directly involved in producing or reducing sleep. You will learn more about their effects in the later section on sleep disorders.

Two Areas of the Hypothalamus Involved in Sleep

It is remarkable that two areas of the brain that are involved in the regulation of sleep were discovered early in the 20th century, long before the advent of modern behavioral neuroscience. The discovery was made by Baron Constantin von Economo, a Viennese neurologist (see Saper, Scammell, & Lu, 2005).

The Case of Constantin von Economo, the Insightful Neurologist

During World War I, the world was swept by a serious viral infection of the brain: encephalitis lethargica. Many of its victims slept almost continuously. Baron Constantin von Economo discovered that the brains of deceased victims who had problems with excessive sleep all had damage in the posterior hypothalamus and adjacent parts of the midbrain. He then turned his attention to the brains of a small group of victims of encephalitis lethargica who had had the opposite sleep-related problem: In contrast to most victims, they had difficulty sleeping. He found that the brains of the deceased victims in this minority always had damage in the anterior hypothalamus and adjacent parts of the basal forebrain. On the basis of these clinical observations, von Economo concluded that the posterior hypothalamus promotes wakefulness, whereas the anterior hypothalamus promotes sleep.

Since von Economo’s discovery of the involvement of the posterior hypothalamus and the anterior hypothalamus in human wakefulness and sleep, respectively, that involvement has been confirmed by lesion and recording studies in experimental animals (see Szymusiak, Gvilia, & McGinty, 2007; Szymusiak & McGinty, 2008). The locations of the posterior and anterior hypothalamus are shown in Figure 14.10.

Evolutionary Perspective

Reticular Formation and Sleep

Another area involved in sleep was discovered through the comparison of the effects of two different brain-stem transections in cats. First, in 1936, Bremer severed the brain stems of cats between their inferior colliculi and superior colliculi in order to disconnect their forebrains from ascending sensory input (see Figure 14.11). This surgical preparation is called a cerveau isolé preparation (pronounced “ser-VOE ees-o-LAY”—literally, “isolated forebrain”).

Evolutionary Perspective

Bremer found that the cortical EEG of the isolated cat forebrains was indicative of almost continuous slow-wave sleep. Only when strong visual or olfactory stimuli were presented (the cerveau isolé has intact visual and olfactory input) could the continuous high-amplitude, slow-wave activity be changed to a desynchronized EEG—a low-amplitude, high-frequency EEG. However, this arousing effect barely outlasted the stimuli.

Next, for comparison purposes, Bremer (1937) transected (cut through) the brain stems of a different group of cats. These transections were located in the caudal brain stem, and thus, they disconnected the brain from the rest of the nervous system (see Figure 14.11). This experimental preparation is called the encéphale isolé preparation (pronounced “on-say-FELL ees-o-LAY”).

FIGURE 14.10 Two regions of the brain involved in sleep. The anterior hypothalamus and adjacent basal forebrain are thought to promote sleep; the posterior hypothalamus and adjacent midbrain are thought to promote wakefulness.

Although it cut most of the same sensory fibers as the cerveau isolé transection, the encéphale isolé transection did not disrupt the normal cycle of sleep EEG and wakefulness EEG. This suggested that a structure for maintaining wakefulness was located somewhere in the brain stem between the two transections.

Later, two important findings suggested that this wakefulness structure in the brain stem was the reticular formation. First, it was shown that partial transections at the cerveau isolé level disrupted normal sleep–wake cycles of cortical EEG only when they severed the reticular formation core of the brain stem; when the partial transections were restricted to more lateral areas, which contain the ascending sensory tracts, they had little effect on the cortical EEG (Lindsley, Bowden, & Magoun, 1949). Second, it was shown that electrical stimulation of the reticular formation of sleeping cats awakened them and produced a lengthy period of EEG desynchronization (Moruzzi & Magoun, 1949).

FIGURE 14.11 Four pieces of evidence that the reticular formation is involved in sleep.

In 1949, Moruzzi and Magoun considered these four findings together: (1) the effects on cortical EEG of the cerveau isolé preparation, (2) the effects on cortical EEG of the encéphale isolé preparation, (3) the effects of reticular formation lesions, and (4) the effects on sleep of stimulation of the reticular formation. From these four key findings, Moruzzi and Magoun proposed that low levels of activity in the reticular formation produce sleep and that high levels produce wakefulness (see McCarley, 2007). Indeed, this theory is so widely accepted that the reticular formation is commonly referred to as the reticular activating system, even though maintaining wakefulness is only one of the functions of the many nuclei that it comprises.

Reticular REM-Sleep Nuclei

The fourth area of the brain that is involved in sleep controls REM sleep and is included in the brain area I have just described—it is part of the caudal reticular formation. It makes sense that an area of the brain involved in maintaining wakefulness would also be involved in the production of REM sleep because of the similarities between the two states. Indeed, REM sleep is controlled by a variety of nuclei scattered throughout the caudal reticular formation. Each site is responsible for controlling one of the major indices of REM sleep (Datta & MacLean, 2007; Siegel, 1983; Vertes, 1983)—a site for the reduction of core-muscle tone, a site for EEG desynchronization, a site for rapid eye movements, and so on. The approximate location in the caudal brain stem of each of these REM-sleep nuclei is illustrated in Figure 14.12.

FIGURE 14.12 A sagittal section of the brain stem of the cat illustrating the areas that control the various physiological indices of REM sleep. (Based on Vertes, 1983.)

Please think for a moment about the broad implications of these various REM-sleep nuclei. In thinking about the brain mechanisms of behavior, many people assume that if there is one name for a behavior, there must be a single structure for it in the brain: In other words, they assume that evolutionary pressures have acted to shape the human brain according to our current language and theories. Here we see the weakness of this assumption: The brain is organized along different principles, and REM sleep occurs only when a network of independent structures becomes active together. Relevant to this is the fact that the physiological changes that go together to define REM sleep sometimes break apart and go their separate ways—and the same is true of the changes that define slow-wave sleep. For example, during REM-sleep deprivation, penile erections, which normally occur during REM sleep, begin to occur during slow-wave sleep. And during total sleep deprivation, slow waves, which normally occur only during slow-wave sleep, begin to occur during wakefulness. This suggests that REM sleep, slow-wave sleep, and wakefulness are not each controlled by a single mechanism. Each state seems to result from the interaction of several mechanisms that are capable under certain conditions of operating independently of one another.

Thinking Creatively

Scan Your Brain

Before continuing with this chapter, scan your brain by completing the following exercise to make sure you understand the fundamentals of sleep. The correct answers appear at the end of the exercise. Before proceeding, review material related to your errors and omissions.

1. ______ means lasting about one day.

2. Free-running rhythms are those that occur in environments devoid of ______.

3. The major circadian clock seems to be located in the ______ nuclei of the hypothalamus.

4. The ______ tracts conduct information about light–dark cycles to the circadian clock in the SCN.

5. The first mammalian circadian gene to have its structure characterized was ______.

6. Patients with damage to the ________ hypothalamus and adjacent basal forebrain often have difficulty sleeping.

7. Damage to the ______ hypothalamus and adjacent areas of the midbrain often causes excessive sleepiness.

8. The low-amplitude high-frequency EEG of wakefulness is said to be ______.

9. In Bremer’s classic study, the ______ preparation displayed an EEG characteristic of continuous sleep.

10. The indices of REM sleep are controlled by a variety of nuclei located in the caudal ______.

Scan Your Brain answers: (1) circadian, (2) zeitgebers, (3) suprachiasmatic, (4) retinohypothalamic, (5) clock, (6) anterior, (7) posterior, (8) desynchronized, (9) encéphale isolé, (10) reticular formation.

14.6 Drugs That Affect Sleep

Most drugs that influence sleep fall into two classes: hypnotic and antihypnotic. Hypnotic drugs are drugs that increase sleep; antihypnotic drugs are drugs that reduce sleep. A third class of sleep-influencing drugs comprises those that influence the circadian rhythmicity of sleep; the main drug of this class is melatonin.


Hypnotic Drugs

The benzodiazepines (e.g., Valium and Librium) were developed and tested for the treatment of anxiety, yet they are the most commonly prescribed hypnotic medications. In the short term, they increase drowsiness, decrease the time it takes to fall asleep, reduce the number of awakenings during a night’s sleep, and increase total sleep time (Krystal, 2008). Thus, they can be effective in the treatment of occasional difficulties in sleeping.

Clinical Implications

Although benzodiazepines can be effective therapeutic hypnotic agents in the short term, their prescription for the treatment of chronic sleep difficulties, though common, is ill-advised (Riemann & Perlis, 2008). Five complications are associated with the chronic use of benzodiazepines as hypnotic agents:

• Tolerance develops to the hypnotic effects of benzodiazepines; thus, patients must take larger and larger doses to maintain the drugs’ efficacy and often become addicted.

• Cessation of benzodiazepine therapy after chronic use causes insomnia (sleeplessness), which can exacerbate the very problem that the benzodiazepines were intended to correct.

• Benzodiazepines distort the normal pattern of sleep; they increase the duration of stage 2 sleep, while actually decreasing the duration of stage 4 and of REM sleep.

• Benzodiazepines lead to next-day drowsiness (Ware, 2008) and increase the incidence of traffic accidents (Gustavsen et al., 2008).

• Most troubling is that chronic use of benzodiazepines has been shown to substantially reduce life expectancy (see Siegel, 2010).

Evidence that the raphé nuclei, which are serotonergic, play a role in sleep suggested that serotonergic drugs might be effective hypnotics. Efforts to demonstrate the hypnotic effects of such drugs have focused on 5-hydroxytryptophan (5-HTP)—the precursor of serotonin—because 5-HTP, but not serotonin, readily passes through the blood–brain barrier. Injections of 5-HTP do reverse the insomnia produced in both cats and rats by the serotonin antagonist PCPA; however, they appear to be of no therapeutic benefit in the treatment of human insomnia (see Borbély, 1983).

Antihypnotic Drugs

The mechanisms of the following three classes of antihypnotic drugs are well understood: cocaine-derived stimulants, amphetamine-derived stimulants, and tricyclic antidepressants. The drugs in these three classes seem to promote wakefulness by boosting the activity of catecholamines (norepinephrine, epinephrine, and dopamine)—by increasing their release into synapses, by blocking their reuptake from synapses, or both. The antihypnotic mechanisms of two other stimulant drugs, caffeine and modafinil, are less well understood.

Clinical Implications

The regular use of antihypnotic drugs is risky. Antihypnotics tend to produce a variety of adverse side effects, such as loss of appetite, anxiety, tremor, addiction, and disturbance of normal sleep patterns. Moreover, they may mask the pathology that is causing the excessive sleepiness.

Melatonin

Melatonin is a hormone that is synthesized from the neurotransmitter serotonin in the pineal gland (see Moore, 1996). The pineal gland is an inconspicuous gland that René Descartes, whose dualistic philosophy was discussed in Chapter 2, once believed to be the seat of the soul. The pineal gland is located on the midline of the brain just ventral to the rear portion of the corpus callosum (see Figure 14.13).

The pineal gland has important functions in birds, reptiles, amphibians, and fish (see Cassone, 1990). The pineal gland of these species has inherent timing properties and regulates circadian rhythms and seasonal changes in reproductive behavior through its release of melatonin. In humans and other mammals, however, the functions of the pineal gland and melatonin are not as apparent.

Evolutionary Perspective

In humans and other mammals, circulating levels of melatonin display circadian rhythms under control of the suprachiasmatic nuclei (see Gillette & McArthur, 1996), with the highest levels being associated with darkness and sleep (see Foulkes et al., 1997). On the basis of this correlation, it has long been assumed that melatonin plays a role in promoting sleep or in regulating its timing in mammals.

In order to put the facts about melatonin in perspective, it is important to keep one significant point firmly in mind. In adult mammals, pinealectomy and the consequent elimination of melatonin appear to have little effect. The pineal gland plays a role in the development of mammalian sexual maturity, but its functions after puberty are not at all obvious.

FIGURE 14.13 The location of the pineal gland, the source of melatonin.

Does exogenous (externally produced) melatonin improve sleep, as widely believed? The evidence is mixed (see van den Heuvel et al., 2005). However, a meta-analysis (a combined analysis of results of more than one study) of 17 studies indicated that exogenous melatonin has a slight, but statistically significant, soporific (sleep-promoting) effect (Brzezinski et al., 2005).

In contrast to the controversy over the soporific effects of exogenous melatonin in mammals, there is good evidence that it can shift the timing of mammalian circadian cycles. Indeed, several researchers have argued that melatonin is better classified as a chronobiotic (a substance that adjusts the timing of internal biological rhythms) than as a soporific (see Scheer & Czeisler, 2005). Arendt and Skene (2005) have argued that administration of melatonin in the evening increases sleep by accelerating the start of the nocturnal phase of the circadian rhythm and that administration at dawn increases sleep by delaying the end of the nocturnal phase.

Exogenous melatonin has been shown to have a therapeutic potential in the treatment of two types of sleep problems (see Arendt & Skene, 2005). Melatonin before bedtime has been shown to improve the sleep of those insomniacs who are melatonin-deficient and of blind people who have sleep problems attributable to the lack of the synchronizing effects of the light–dark cycle. Melatonin’s effectiveness in the treatment of other sleep disorders remains controversial.

Clinical Implications

14.7 Sleep Disorders

Many sleep disorders fall into one of two complementary categories: insomnia and hypersomnia. Insomnia includes all disorders of initiating and maintaining sleep, whereas hypersomnia includes disorders of excessive sleep or sleepiness. A third major class of sleep disorders includes all those disorders that are specifically related to REM-sleep dysfunction. Ironically, both insomnia and hypersomnia are common symptoms of depression and other mood disorders (Kaplan & Harvey, 2009).

Clinical Implications

In various surveys, approximately 30% of respondents report significant sleep-related problems. However, it is important to recognize that complaints of sleep problems often come from people whose sleep appears normal in laboratory sleep tests. For example, many people normally sleep 6 hours or less a night and seem to do well sleeping that amount, but they are pressured by their doctors, their friends, and their own expectations to sleep more (e.g., at least 8 hours). As a result, they spend more time in bed than they should and have difficulty getting to sleep. Often, the anxiety associated with their inability to sleep more makes it even more difficult for them to sleep (see Espie, 2002). Such patients can often be helped by counseling that convinces them to go to bed only when they are very sleepy (see Anch et al., 1988). Others with disturbed sleep have more serious problems (see Mahowald & Schenck, 2005).

Insomnia

Many cases of insomnia are iatrogenic (physician-created)—in large part because sleeping pills (i.e., benzodiazepines), which are usually prescribed by physicians, are a major cause of insomnia. At first, hypnotic drugs may be effective in increasing sleep, but soon the patient may become trapped in a rising spiral of drug use, as tolerance to the drug develops and progressively more of it is required to produce its original hypnotic effect. Soon, the patient cannot stop taking the drug without running the risk of experiencing withdrawal symptoms, which include insomnia. The case of Mr. B. illustrates this problem.

Mr. B., the Case of Iatrogenic Insomnia

Mr. B. was studying for a civil service exam, the outcome of which would affect his entire future. He was terribly worried about the test and found it difficult to get to sleep at night. Feeling that the sleep loss was affecting his ability to study, he consulted his physician. . . . His doctor prescribed a moderate dose of barbiturate at bedtime, and Mr. B. found that this medication was very effective …for the first several nights. After about a week, he began having trouble sleeping again and decided to take two sleeping pills each night. Twice more the cycle was repeated, until on the night before the exam he was taking four times as many pills as his doctor had prescribed. The next night, with the pressure off, Mr. B. took no medication. He had tremendous difficulty falling asleep, and when he did, his sleep was terribly disrupted. . . . Mr. B. now decided that he had a serious case of insomnia, and returned to his sleeping pill habit. By the time he consulted our clinic several years later, he was taking approximately 1,000 mg sodium amytal every night, and his sleep was more disturbed than ever. . . . Patients may go on for years and years—from one sleeping pill to another—never realizing that their troubles are caused by the pills.

(“Mr. B., the Case of Iatrogenic Insomnia,” from Some Must Watch While Some Must Sleep by William C. Dement, Portable Stanford Books, Stanford Alumni Association, Stanford University, 1978, p. 80. Used by permission of William C. Dement.)

In one study, insomniacs claimed to take an average of 1 hour to fall asleep and to sleep an average of only 4.5 hours per night; but when they were tested in a sleep laboratory, they were found to have an average sleep latency (time to fall asleep) of only 15 minutes and an average nightly sleep duration of 6.5 hours. It used to be common medical practice to assume that people who claimed to suffer from insomnia but slept more than 6.5 hours per night were neurotic. However, this practice stopped when some of those diagnosed as neurotic pseudoinsomniacs were subsequently found to be suffering from sleep apnea, nocturnal myoclonus, or other sleep-disturbing problems. Insomnia is not necessarily a problem of too little sleep; it is often a problem of too little undisturbed sleep (Bonnet & Arand, 2002; Stepanski et al., 1987).

One of the most effective treatments for insomnia is sleep restriction therapy (Morin, Kowatch, & O’Shanick, 1990): First, the amount of time that an insomniac is allowed to spend in bed is substantially reduced. Then, after a period of sleep restriction, the amount of time spent in bed is gradually increased in small increments, as long as sleep latency remains in the normal range. Even severe insomniacs can benefit from this treatment.
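Because the therapy amounts to a simple titration rule, its logic can be summarized in a few lines of code. The sketch below is only an illustration of the rule just described, not a clinical protocol: the 15-minute increment, the 30-minute latency cutoff, and the function name are hypothetical choices made for the example.

# Hypothetical illustration of the sleep-restriction titration rule described
# above; the specific numbers are assumptions, not clinical recommendations.

NORMAL_LATENCY_MIN = 30      # assumed upper bound for a "normal" sleep latency
INCREMENT_MIN = 15           # assumed size of each extension of time in bed

def next_time_in_bed(current_time_in_bed_min, recent_latencies_min):
    """Extend time in bed only if sleep latency has stayed in the normal range."""
    avg_latency = sum(recent_latencies_min) / len(recent_latencies_min)
    if avg_latency <= NORMAL_LATENCY_MIN:
        return current_time_in_bed_min + INCREMENT_MIN
    return current_time_in_bed_min   # latency too long: hold at the current restriction

# Example: a patient restricted to 5.5 hours (330 minutes) in bed
print(next_time_in_bed(330, [12, 18, 15]))   # -> 345 (falling asleep quickly; extend)
print(next_time_in_bed(330, [45, 50, 40]))   # -> 330 (latency too long; hold)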

Some cases of insomnia have specific medical causes; sleep apnea is one such cause. The patient with sleep apnea stops breathing many times each night. Each time, the patient awakens, begins to breathe again, and drifts back to sleep. Sleep apnea usually leads to a sense of having slept poorly and is thus often diagnosed as insomnia. However, some patients are totally unaware of their multiple awakenings and instead complain of excessive sleepiness during the day, which can lead to a diagnosis of hypersomnia (Stepanski et al., 1984).

Sleep apnea disorders are of two types: (1) obstructive sleep apnea results from obstruction of the respiratory passages by muscle spasms or atonia (lack of muscle tone) and often occurs in individuals who are vigorous snorers; (2) central sleep apnea results from the failure of the central nervous system to stimulate respiration (Banno & Kryger, 2007). Sleep apnea is more common in males, in the overweight, and in the elderly (Villaneuva et al., 2005).

Two other specific causes of insomnia are related to the legs: periodic limb movement disorder and restless legs syndrome. Periodic limb movement disorder is a disorder characterized by periodic, involuntary movements of the limbs, often involving twitches of the legs during sleep. Most patients suffering from this disorder complain of poor sleep and daytime sleepiness but are unaware of the nature of their problem. In contrast, people with restless legs syndrome are all too aware of their problem. They complain of a hard-to-describe tension or uneasiness in their legs that keeps them from falling asleep. Once established, both of these disorders are chronic (see Garcia-Borreguero et al., 2006). Much more research into their treatment is needed, although some dopamine agonists can be effective (Ferini-Strambi et al., 2008; Hornyak et al., 2006).

Hypersomnia

Narcolepsy is the most widely studied disorder of hypersomnia. It occurs in about 1 out of 2,000 individuals (Ohayon, 2008) and has two prominent symptoms (see Nishino, 2007). First, narcoleptics experience severe daytime sleepiness and repeated, brief (10- to 15-minute) daytime sleep episodes. Narcoleptics typically sleep only about an hour per day more than average; it is the inappropriateness of their sleep episodes that most clearly defines their condition. Most of us occasionally fall asleep on the beach, in front of the television, or in that most soporific of all daytime sites—the large, stuffy, dimly lit lecture hall. But narcoleptics fall asleep in the middle of a conversation, while eating, while scuba diving, or even while making love.

The second prominent symptom of narcolepsy is cataplexy (Houghton, Scammell, & Thorpy, 2004). Cataplexy is characterized by recurring losses of muscle tone during wakefulness, often triggered by an emotional experience. In its mild form, it may simply force the patient to sit down for a few seconds until it passes. In its extreme form, the patient drops to the ground as if shot and remains there for a minute or two, fully conscious.

In addition to the two prominent symptoms of narcolepsy (daytime sleep attacks and cataplexy), narcoleptics often experience two other symptoms: sleep paralysis and hypnagogic hallucinations. Sleep paralysis is the inability to move (paralysis) just as one is falling asleep or waking up. Hypnagogic hallucinations are dreamlike experiences during wakefulness. Many healthy people occasionally experience sleep paralysis and hypnagogic hallucinations. Have you experienced them?

Three lines of evidence suggested to early researchers that narcolepsy results from an abnormality in the mechanisms that trigger REM sleep. First, unlike normal people, narcoleptics often go directly into REM sleep when they fall asleep. Second and third, as you have already learned, narcoleptics often experience dreamlike states and loss of muscle tone during wakefulness.

Some of the most exciting current research on the neural mechanisms of sleep in general and narcolepsy in particular began with the study of a strain of narcoleptic dogs. After 10 years of studying the genetics of these narcoleptic dogs, Lin and colleagues (1999) finally isolated the gene that causes the disorder. The gene encodes a receptor protein that binds to a neuropeptide called orexin (sometimes called hypocretin), which exists in two forms: orexin-A and orexin-B (see Sakurai, 2005). Although discovery of the orexin gene has drawn attention to genetic factors in narcolepsy, the concordance rate between identical twins is only about 25% (Raizen, Mason, & Pack, 2006).

Evolutionary Perspective

Several studies have documented reduced levels of orexin in the cerebrospinal fluid of living narcoleptics and in the brains of deceased narcoleptics (see Nishino & Kanbayashi, 2005). Also, the number of orexin-releasing neurons has been found to be reduced in the brains of narcoleptics (e.g., Peyron et al., 2000; Thannickal et al., 2000).

Where is orexin synthesized in the brain? Orexin is synthesized by neurons in the region of the hypothalamus that has been linked to the promotion of wakefulness: the posterior hypothalamus (mainly its lateral regions). The orexin-producing neurons project diffusely throughout the brain, but they show many connections with neurons of the other wakefulness-promoting area of the brain: the reticular formation. Currently, there is considerable interest in understanding the role of the orexin circuits in normal sleep–wake cycles (see Sakurai, 2007; Siegel, 2004).

Narcolepsy has traditionally been treated with stimulants (e.g., amphetamine, methylphenidate), but these have substantial addiction potential and produce many undesirable side effects. The antihypnotic stimulant modafinil has been shown to be effective in the treatment of narcolepsy, and antidepressants can be effective against cataplexy (Thorpy, 2007).

REM-Sleep–Related Disorders

Several sleep disorders are specific to REM sleep; these are classified as REM-sleep–related disorders. Even narcolepsy, which is usually classified as a hypersomnic disorder, can reasonably be considered to be a REM-sleep–related disorder—for reasons you have just encountered.

Occasionally, patients who have little or no REM sleep are discovered. Although this disorder is rare, it is important because of its theoretical implications. Lavie and others (1984) described a patient who had suffered a brain injury that presumably involved damage to the REM-sleep controllers in the caudal reticular formation. The most important finding of this case study was that the patient did not appear to be adversely affected by his lack of REM sleep. After receiving his injury, he completed high school, college, and law school and established a thriving law practice.

Some patients experience REM sleep without core-muscle atonia. It has been suggested that the function of REM-sleep atonia is to prevent the acting out of dreams. This theory receives support from case studies of people who suffer from this disorder—case studies such as the following one.

The Case of the Sleeper Who Ran Over Tackle

I was a halfback playing football, and after the quarterback received the ball from the center he lateraled it sideways to me and I’m supposed to go around end and cut back over tackle and—this is very vivid—as I cut back over tackle there is this big 280-pound tackle waiting, so I, according to football rules, was to give him my shoulder and bounce him out of the way. . . . [W]hen I came to I was standing in front of our dresser and I had [gotten up out of bed and run and] knocked lamps, mirrors and everything off the dresser, hit my head against the wall and my knee against the dresser. (Schenck et al., 1986, p. 294)

Presumably, REM sleep without atonia is caused by damage to the nucleus magnocellularis or to an interruption of its output. The nucleus magnocellularis is a structure of the caudal reticular formation that controls muscle relaxation during REM sleep. In normal dogs, it is active only during REM sleep; in narcoleptic dogs, it is also active during their cataplectic attacks.

Evolutionary Perspective

14.8 Effects of Long-Term Sleep Reduction

When people sleep less than they are used to sleeping, they do not feel or function well. I am sure that you have experienced these effects. But what do they mean? Most people—nonexperts and experts alike—believe that the adverse effects of sleep loss indicate that we need the sleep we typically get. However, there is an alternative interpretation, one that is consistent with the now acknowledged plasticity of the adult human brain. Perhaps the brain needs a small amount of sleep each day but will sleep much more under ideal conditions because of sleep’s high positive incentive value. The brain then slowly adapts to the amount of sleep it is getting—even though this amount may be far more than it needs—and is disturbed when there is a sudden reduction.

Neuroplasticity

Fortunately, there are ways to determine which of these two interpretations of the effects of sleep loss is correct. The key is to study individuals who sleep little, either because they have always done so or because they have purposefully reduced their sleep times. If people need at least 8 hours of sleep each night, short sleepers should be suffering from a variety of health and performance problems. Before I summarize the results of this key research, I want to emphasize one point: Because they are so time-consuming, few studies of long-term sleep patterns have been conducted, and some of those that have been conducted are not sufficiently thorough. Nevertheless, there have been enough of them for a clear pattern of results to have emerged. I think they will surprise you.

This final section begins with a comparison of short and long sleepers. Then, it discusses two kinds of long-term sleep-reduction studies: studies in which volunteers reduced the amount they slept each night and studies in which volunteers reduced their sleep by restricting it to naps. Next comes a discussion of studies that have examined the relation between sleep duration and health. Finally, I relate my own experience of long-term sleep reduction.

Differences between Short and Long Sleepers

Numerous studies have compared short sleepers (those who sleep 6 hours or less per night) and long sleepers (those who sleep 8 hours or more per night). I focus here on the 2004 study of Fichten and colleagues because it is the most thorough. The study had three strong features:

• It included a large sample (239) of adult short sleepers and long sleepers.

• It compared short and long sleepers in terms of 48 different measures, including daytime sleepiness, daytime naps, regularity of sleep times, busyness, regularity of meal times, stress, anxiety, depression, life satisfaction, and worrying.

• Before the study began, the researchers carefully screened out volunteers who were ill or under various kinds of stress or pressure; thus, the study was conducted with a group of healthy volunteers who slept the amount that they felt was right for them.

The findings of Fichten and colleagues are nicely captured by the title of their paper, “Long sleepers sleep more and short sleepers sleep less.” In other words, other than the differences in sleep time, there were no differences between the two groups on any of the other measures—no indication that the short sleepers were suffering in any way from their shorter sleep time. Fichten and colleagues report that these results are consistent with most previous comparisons of short and long sleepers (e.g., Monk et al., 2001), except for a few studies that did not screen out subjects who slept little because they were under pressure (e.g., from worry, illness, or a demanding work schedule). Those studies did report some negative characteristics in the short-sleep group, which likely reflected the stress experienced by some in that group.

Long-Term Reduction of Nightly Sleep

Are short sleepers able to live happy productive lives because they are genetically predisposed to be short sleepers, or is it possible for average people to adapt to a short sleep schedule? There have been only two published studies in which healthy volunteers have reduced their nightly sleep for several weeks or longer. In one (Webb & Agnew, 1974), a group of 16 volunteers slept for only 5.5 hours per night for 60 days, with only one detectable deficit on an extensive battery of mood, medical, and performance tests: a slight deficit on a test of auditory vigilance.

In the other systematic study of long-term nightly sleep reduction (Friedman et al., 1977; Mullaney et al., 1977), 8 volunteers reduced their nightly sleep by 30 minutes every 2 weeks until they reached 6.5 hours per night, then by 30 minutes every 3 weeks until they reached 5 hours, and then by 30 minutes every 4 weeks thereafter. After a participant indicated a lack of desire to reduce sleep further, the person spent 1 month sleeping the shortest duration of nightly sleep that had been achieved, then 2 months sleeping the shortest duration plus 30 minutes. Finally, each participant slept however long was preferred each night for 1 year. The minimum duration of nightly sleep achieved during this experiment was 5.5 hours for 2 participants, 5.0 hours for 4 participants, and an impressive 4.5 hours for 2 participants. In each participant, a reduction in sleep time was associated with an increase in sleep efficiency: a decrease in the amount of time it took to fall asleep after going to bed, a decrease in the number of nighttime awakenings, and an increase in the proportion of stage 4 sleep. After the participants had reduced their sleep to 6 hours per night, they began to experience daytime sleepiness, and this became a problem as sleep time was further reduced. Nevertheless, there were no deficits on any of the mood, medical, or performance tests administered throughout the experiment. The most encouraging result was that an unexpected follow-up 1 year later found that all participants were sleeping less than they had previously—between 7 and 18 hours less each week—with no excessive sleepiness.
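To make the stepwise protocol easier to follow, the sketch below generates the week-by-week schedule implied by the reduction rules just described (30 minutes every 2 weeks down to 6.5 hours, every 3 weeks down to 5 hours, every 4 weeks thereafter). The 8.0-hour starting duration and the 4.5-hour floor are illustrative assumptions; participants began from their own baselines and stopped whenever they chose not to reduce further.

# Sketch of the stepwise reduction schedule described above.  The 8.0-hour
# starting duration and the 4.5-hour floor are illustrative assumptions.

def reduction_schedule(start_hours=8.0, floor_hours=4.5):
    week, target = 0, start_hours
    schedule = [(week, target)]
    while target > floor_hours:
        if target > 6.5:
            week += 2          # a 30-minute cut every 2 weeks down to 6.5 h
        elif target > 5.0:
            week += 3          # every 3 weeks down to 5.0 h
        else:
            week += 4          # every 4 weeks thereafter
        target -= 0.5
        schedule.append((week, target))
    return schedule

for week, hours in reduction_schedule():
    print(f"week {week:2d}: target {hours} h/night")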

Long-Term Sleep Reduction by Napping

Most mammals and human infants display polyphasic sleep cycles; that is, they regularly sleep more than once per day. In contrast, most adult humans display monophasic sleep cycles; that is, they sleep once per day. Nevertheless, most adult humans do display polyphasic cycles of sleepiness, with periods of sleepiness occurring in late afternoon and late morning (Stampi, 1992a). Have you ever experienced them?

Do adult humans need to sleep in one continuous period per day, or can they sleep effectively in several naps as human infants and other mammals do? Which of the two sleep patterns is more efficient? Research has shown that naps have recuperative powers out of proportion with their brevity (e.g., Milner & Cote, 2008; Smith et al., 2007), suggesting that polyphasic sleep might be particularly efficient.

Interest in the value of polyphasic sleep was stimulated by the legend that Leonardo da Vinci managed to generate a steady stream of artistic and engineering accomplishments during his life by napping for 15 minutes every 4 hours, thereby limiting his sleep to 1.5 hours per day. As unbelievable as this sleep schedule may seem, it has been replicated in several experiments (see Stampi, 1992b). Here are the main findings of these truly mind-boggling experiments: First, participants required a long time, several weeks, to adapt to a polyphasic sleep schedule. Second, once adapted to polyphasic sleep, participants were content and displayed no deficits on the performance tests they were given. Third, Leonardo’s 4-hour schedule works quite well, but in unstructured working situations (e.g., around-the-world solo sailboat races), individuals often vary the duration of the cycle without feeling negative consequences. Fourth, most people display a strong preference for particular sleep durations (e.g., 25 minutes) and refrain from sleeping too little, which leaves them unrefreshed, or too much, which leaves them groggy for several minutes when they awake—an effect called sleep inertia (e.g., Fushimi & Hayashi, 2008; Ikeda & Hayashi, 2008; Wertz et al., 2006). Fifth, when individuals first adopt a polyphasic sleep cycle, most of their sleep is slow-wave sleep, but eventually they return to a mix of REM and slow-wave sleep.
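As a quick check on the arithmetic behind such schedules, the snippet below computes the total daily sleep implied by a regular napping regimen. The second and third schedules are hypothetical comparisons for illustration, not schedules tested in the studies cited above.

# Total daily sleep implied by a regular nap schedule (nap_minutes taken
# every interval_hours).  The second and third examples are hypothetical.

def daily_sleep_hours(nap_minutes, interval_hours):
    naps_per_day = 24 / interval_hours
    return naps_per_day * nap_minutes / 60

print(daily_sleep_hours(15, 4))   # Leonardo's purported schedule -> 1.5 h/day
print(daily_sleep_hours(25, 4))   # 25-minute naps every 4 hours -> 2.5 h/day
print(daily_sleep_hours(25, 6))   # 25-minute naps every 6 hours -> about 1.7 h/day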

The following are the words of artist Giancarlo Sbragia, who adopted Leonardo’s purported sleep schedule:

This schedule was difficult to follow at the beginning…. It took about 3 wk to get used to it. But I soon reached a point at which I felt a natural propensity for sleeping at this rate, and it turned out to be a thrilling and exciting experience.

. . . How beautiful my life became: I discovered dawns, I discovered silence, and concentration. I had more time for studying and reading—far more than I did before. I had more time for myself, for painting, and for developing my career. (Sbragia, 1992, p. 181)

Effects of Shorter Sleep Times on Health

For decades, it was believed that sleeping 8 hours or more per night is ideal for promoting optimal health and longevity. Then, a series of large-scale epidemiological studies conducted in both the United States and Japan challenged this belief (e.g., Ayas et al., 2003; Kripke et al., 2002; Patel et al., 2003; Tamakoshi & Ohno, 2004). These studies did not include participants who were a potential source of bias, for example, people who slept little because they were ill, depressed, or under stress. The studies started with a sample of healthy volunteers and followed their health for several years.

Thinking Creatively

The results of these studies are remarkably uniform (Kripke, 2004). Figure 14.14 presents data from Tamakoshi and Ohno (2004), who followed 104,010 volunteers for 10 years. You will immediately see that sleeping 8 hours per night is not the healthy ideal that it has been assumed to be: The fewest deaths occurred among people sleeping between 5 and 7 hours per night, far fewer than among those who slept 8 hours. You should be aware that other studies that are not as careful in excluding volunteers who sleep little because of stress or ill health do find more problems associated with short sleep (see Cappuccio et al., 2008), but any such finding is likely an artifact of preexisting ill health or stress, which is more prevalent among short sleepers.

Clinical Implications

Because these epidemiological data are correlational, it is important not to interpret them causally (see Grandner & Drummond, 2007; Stamatakis & Punjabi, 2007; Youngstedt & Kripke, 2004). They do not prove that sleeping 8 or more hours a night causes health problems: Perhaps there is something about people who sleep 8 hours or more per night that leads them to die sooner than people who sleep less. Thus, these studies do not prove that reducing your sleep will cause you to live longer—although some experts are advocating sleep reduction as a means of improving health (e.g., Youngstedt & Kripke, 2004). These studies do, however, provide strong evidence that sleeping less than 8 hours is not the risk to life and health that it is often made out to be.

Thinking Creatively

Long-Term Sleep Reduction: A Personal Case Study

FIGURE 14.14 The mortality rates associated with different amounts of sleep, based on 104,010 volunteers followed over 10 years. The mortality rate at 7 hours of sleep per night has been arbitrarily set at 100%, and the other mortality rates are presented in relation to it. (Based on Tamakoshi & Ohno, Sleep 2004, 27(1): 51–4.)

I began this chapter 4 weeks ago with both zeal and trepidation. I was fascinated by the idea that I could wring 2 or 3 extra hours of living out of each day by sleeping less, and I hoped that adhering to a sleep-reduction program while writing about sleep would create an enthusiasm for the subject that would color my writing and be passed on to you. I began with a positive attitude because I was aware of the relevant evidence; still I was more than a little concerned about the negative effect that reducing my sleep by 3 hours per night might have on me and my writing.

The Case of the Author Who Reduced His Sleep

Rather than using the gradual stepwise reduction method of Friedman and his colleagues, I jumped directly into my 5-hours-per-night sleep schedule. This proved to be less difficult than you might think. I took advantage of a trip to the East Coast from my home on the West Coast to reset my circadian clock. While I was in the East, I got up at 7:00 A.M., which is 4:00 A.M. on the West Coast, and I just kept on the same schedule when I got home. I decided to add my extra waking hours to the beginning of my day rather than to the end so there would be no temptation for me to waste them—there are not too many distractions around this university at 5:00 A.M.

Figure 14.15 is a record of my sleep times for the 4-week period that it took me to write a first draft of this chapter. I didn’t quite meet my goal of sleeping less than 5 hours every night, but I didn’t miss by much: My overall mean was 5.05 hours per night. Notice that in the last week, there was a tendency for my circadian clock to run a bit slow; I began sleeping in until 4:30 A.M. and staying up until 11:30 P.M.

What were the positives and negatives of my experience? The main positive was the added time to do things: Having an extra 21 hours per week was wonderful. Furthermore, because my daily routine was out of synchrony with everybody else’s, I spent little time sitting in traffic. The only negative of the experience was sleepiness. It was no problem during the day, when I was active. However, staying awake during the last hour before I went to bed—an hour during which I usually engaged in sedentary activities, such as reading—was at times a problem. This is when I became personally familiar with the phenomenon of microsleeps, and it was then that I required some assistance in order to stay awake. Each night of sleep became a highly satisfying but all too brief experience.

I began this chapter with this question: How much sleep do we need? Then, I gave you my best professorial it-could-be-this, it-could-be-that answer. However, that was a month ago. Now, after experiencing sleep reduction firsthand and reviewing the evidence yet again, I am less inclined toward wishy-washiness on the topic of sleep. The fact that most committed subjects who are active during the day can reduce their sleep to about 5.5 hours per night without great difficulty or major adverse consequences suggested to me that the answer is 5.5 hours of sleep. But that was before I learned about polyphasic sleep schedules. Now, I must revise my estimate downward.

Conclusion

In this section, you have learned that many people sleep little with no apparent ill effects and that people who are average sleepers can reduce their sleep time substantially, again with no apparent ill effects. You also learned that the health of people who sleep between 5 and 7 hours a night does not suffer; indeed, epidemiological studies indicate that they are the most healthy and live the longest. Together, this evidence challenges the widely held belief that humans have a fundamental need for at least 8 hours of sleep per night.

Thinking Creatively

FIGURE 14.15 Sleep record of Pinel during a 4-week sleep-reduction program.

Themes Revisited

The thinking creatively theme pervaded this chapter. The major purpose of the chapter was to encourage you to reevaluate conventional ideas about sleep in the light of relevant evidence. Has this chapter changed your thinking about sleep? Writing it changed mine.

Thinking Creatively

The evolutionary perspective theme also played a prominent role in this chapter. You learned how thinking about the adaptive function of sleep and comparing sleep in different species have led to interesting insights. Also, you saw how research into the physiology and genetics of sleep has been conducted on nonhuman species.

Evolutionary Perspective

The clinical implications theme received emphasis in the section on sleep disorders. Perhaps most exciting and interesting were the recent breakthroughs in the understanding of the genetics and physiology of narcolepsy.

Clinical Implications

Finally, the neuroplasticity theme arose in a fundamental way. The fact that the adult human brain has the capacity to change and adapt raises the possibility that it might successfully adapt to a consistent long-term schedule of sleep that is of shorter duration than most people currently choose.

Neuroplasticity

Think about It

1. Do you think your life could be improved by changing when or how long you sleep each day? In what ways? What negative effects do you think such changes might have on you?

2. Some people like to stay up late, some people like to get up early, others like to do both, and still others like to do neither. Design a sleep-reduction program that is tailored to your own preferences and lifestyle and that is consistent with the research literature on circadian cycles and sleep deprivation. The program should produce the greatest benefits for you with the least discomfort.

3. How has reading about sleep research changed your views about sleep? Give three specific examples.

4. Given the evidence that the long-term use of benzodiazepines actually contributes to the problems of insomnia, why are they so commonly prescribed for its treatment?

5. Your friend tells you that everybody needs 8 hours of sleep per night; she points out that every time she stays up late to study, she feels lousy the next day. What evidence would you provide to convince her that she does not need 8 hours of sleep per night?

Key Terms

14.1 Stages of Sleep
Electroencephalogram (EEG), electrooculogram (EOG), electromyogram (EMG), alpha waves, delta waves, initial stage 1 EEG, emergent stage 1 EEG, REM sleep, slow-wave sleep (SWS), activation-synthesis theory

14.2 Why Do We Sleep, and Why Do We Sleep When We Do?
Recuperation theories of sleep, adaptation theories of sleep

14.3 Effects of Sleep Deprivation
Executive function, microsleeps, carousel apparatus

14.4 Circadian Sleep Cycles
Circadian rhythms, zeitgebers, free-running rhythms, free-running period, internal desynchronization, jet lag, circadian clock, suprachiasmatic nuclei (SCN), melanopsin, tau

14.5 Four Areas of the Brain Involved in Sleep
Cerveau isolé preparation, desynchronized EEG, encéphale isolé preparation, reticular activating system

14.6 Drugs That Affect Sleep
Hypnotic drugs, antihypnotic drugs, melatonin, benzodiazepines, 5-hydroxytryptophan (5-HTP), pineal gland, chronobiotic

14.7 Sleep Disorders
Insomnia, hypersomnia, iatrogenic, sleep apnea, periodic limb movement disorder, restless legs syndrome, narcolepsy, cataplexy, sleep paralysis, hypnagogic hallucinations, orexin, nucleus magnocellularis

14.8 Effects of Long-Term Sleep Reduction
Polyphasic sleep cycles, monophasic sleep cycles, sleep inertia

Quick Review

Test your comprehension of the chapter with this brief practice test. You can find the answers to these questions as well as more practice tests, activities, and other study resources at www.mypsychlab.com.

1. In which stage of sleep do delta waves predominate?

a. initial stage 1

b. emergent stage 1

c. stage 2

d. stage 3

e. stage 4

2. The results of many sleep-deprivation studies are difficult to interpret because of the confounding effects of

a. sex.

b. dreaming.

c. shift work.

d. memory loss.

e. stress.

3. The carousel apparatus has been used to

a. entertain sleep-deprived volunteers.

b. synchronize zeitgebers.

c. synchronize circadian rhythms.

d. deprive rodents of sleep.

e. block microsleeps in sleep-deprived humans.

4. Dreaming occurs during

a. initial stage 1 sleep.

b. stage 2 sleep.

c. stage 3 sleep.

d. stage 4 sleep.

e. none of the above

5. Circadian rhythms without zeitgebers are said to be

a. entrained.

b. free-running.

c. desynchronized.

d. internal.

(Pinel, 10/2010, pp. 355-382)

15 Drug Addiction and the Brain’s Reward Circuits Chemicals That Harm with Pleasure

15.1 Basic Principles of Drug Action

15.2 Role of Learning in Drug Tolerance

15.3 Five Commonly Abused Drugs

15.4 Biopsychological Approaches to Theories of Addiction

15.5 Intracranial Self-Stimulation and the Pleasure Centers of the Brain

15.6 Early Studies of Brain Mechanisms of Addiction: Dopamine

15.7 Current Approaches to Brain Mechanisms of Addiction

15.8 A Noteworthy Case of Addiction

Drug addiction is a serious problem in most parts of the world. For example, in the United States alone, over 60 million people are addicted to nicotine, alcohol, or both; 5.5 million are addicted to illegal drugs; and many millions more are addicted to prescription drugs. Pause for a moment and think about the sheer magnitude of the problem represented by such figures—hundreds of millions of addicted people worldwide. The incidence of drug addiction is so high that it is almost certain that you, or somebody dear to you, will be adversely affected by drugs.

This chapter introduces you to some basic pharmacological (pertaining to the scientific study of drugs) principles and concepts, compares the effects of five common addictive drugs, and reviews the research on the neural mechanisms of addiction. You likely already have strong views about drug addiction; thus, as you progress through this chapter, it is particularly important that you do not let your thinking be clouded by preconceptions. In particular, it is important that you do not fall into the trap of assuming that a drug’s legal status has much to say about its safety. You will be less likely to assume that legal drugs are safe and illegal drugs are dangerous if you remember that most laws governing drug abuse in various parts of the world were enacted in the early part of the 20th century, long before there was any scientific research on the topic.

Thinking Creatively

The Case of the Drugged High School Teachers

People’s tendency to equate drug legality with drug safety was recently conveyed to me in a particularly ironic fashion: I was invited to address a convention of high school teachers on the topic of drug abuse. When I arrived at the convention center to give my talk, I was escorted to a special suite, where I was encouraged to join the executive committee in a round of drug taking—the drug being a special high-proof single-malt whiskey. Later, the irony of the situation had its full impact. As I stepped to the podium under the influence of a psychoactive drug (the whiskey), I looked out through the haze of cigarette smoke at an audience of educators who had invited me to speak to them because they were concerned about the unhealthy impact of drugs on their students. The welcoming applause gradually gave way to the melodic tinkling of ice cubes in liquor glasses, and I began. They did not like what I had to say.

15.1 Basic Principles of Drug Action

This section focuses on the basic principles of drug action, with an emphasis on psychoactive drugs —drugs that influence subjective experience and behavior by acting on the nervous system.

Drug Administration and Absorption

Drugs are usually administered in one of four ways: by oral ingestion, by injection, by inhalation, or by absorption through the mucous membranes of the nose, mouth, or rectum. The route of administration influences the rate at which and the degree to which the drug reaches its sites of action in the body.

Oral Ingestion

The oral route is the preferred route of administration for many drugs. Once they are swallowed, drugs dissolve in the fluids of the stomach and are carried to the intestine, where they are absorbed into the bloodstream. However, some drugs readily pass through the stomach wall (e.g., alcohol), and these take effect sooner because they do not have to reach the intestine to be absorbed. Drugs that are not readily absorbed from the digestive tract or that are broken down into inactive metabolites (breakdown products of the body’s chemical reactions) before they can be absorbed must be taken by some other route.

The two main advantages of the oral route of administration over other routes are its ease and relative safety. Its main disadvantage is its unpredictability: Absorption from the digestive tract into the bloodstream can be greatly influenced by such difficult-to-gauge factors as the amount and type of food in the stomach.

Injection

Drug injection is common in medical practice because the effects of injected drugs are strong, fast, and predictable. Drug injections are typically made subcutaneously (SC), into the fatty tissue just beneath the skin; intramuscularly (IM), into the large muscles; or intravenously (IV), directly into veins at points where they run just beneath the skin. Many addicts prefer the intravenous route because the bloodstream delivers the drug directly to the brain. However, the speed and directness of the intravenous route are mixed blessings; after an intravenous injection, there is little or no opportunity to counteract the effects of an overdose, an impurity, or an allergic reaction. Furthermore, many addicts develop scar tissue, infections, and collapsed veins at the few sites on their bodies where there are large accessible veins.

Inhalation

Some drugs can be absorbed into the bloodstream through the rich network of capillaries in the lungs. Many anesthetics are typically administered by inhalation, as are tobacco and marijuana. The two main shortcomings of this route are that it is difficult to precisely regulate the dose of inhaled drugs, and many substances damage the lungs if they are inhaled chronically.

Absorption through Mucous Membranes

Some drugs can be administered through the mucous membranes of the nose, mouth, and rectum. Cocaine, for example, is commonly self-administered through the nasal membranes (snorted)—but not without damaging them.

Drug Penetration of the Central Nervous System

Once a drug enters the bloodstream, it is carried in the blood to the blood vessels of the central nervous system. Fortunately, a protective filter, the blood–brain barrier, makes it difficult for many potentially dangerous blood-borne chemicals to pass from the blood vessels of the CNS into its neurons.

Mechanisms of Drug Action

Psychoactive drugs influence the nervous system in many ways (see Koob & Bloom, 1988). Some drugs (e.g., alcohol and many of the general anesthetics) act diffusely on neural membranes throughout the CNS. Others act in a more specific way: by binding to particular synaptic receptors; by influencing the synthesis, transport, release, or deactivation of particular neurotransmitters; or by influencing the chain of chemical reactions elicited in postsynaptic neurons by the activation of their receptors (see Chapter 4).

Drug Metabolism and Elimination

The actions of most drugs are terminated by enzymes synthesized by the liver. These liver enzymes stimulate the conversion of active drugs to nonactive forms—a process referred to as drug metabolism. In many cases, drug metabolism eliminates a drug’s ability to pass through lipid membranes of cells so that it can no longer penetrate the blood–brain barrier. In addition, small amounts of some psychoactive drugs are passed from the body in urine, sweat, feces, breath, and mother’s milk.

Drug Tolerance

Drug tolerance is a state of decreased sensitivity to a drug that develops as a result of exposure to it. Drug tolerance can be demonstrated in two ways: by showing that a given dose of the drug has less effect than it had before drug exposure or by showing that it takes more of the drug to produce the same effect. In essence, drug tolerance is a shift in the dose-response curve (a graph of the magnitude of the effect of different doses of the drug) to the right (see Figure 15.1).
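
This rightward shift can be made concrete with a standard pharmacological formula; this is a minimal sketch, not part of the original text, and it assumes the conventional Hill-type dose-response function with symbols E_max (the maximal effect), ED50 (the dose producing a half-maximal effect), and n (the slope parameter):

$$E(D) \;=\; \frac{E_{\max}\,D^{\,n}}{ED_{50}^{\,n} + D^{\,n}}$$

Described this way, tolerance amounts to an increase in ED50: the whole curve slides to the right, so a previously effective dose now produces a smaller effect, and a larger dose is required to reproduce the original effect.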

There are three important points to remember about the specificity of drug tolerance.

• One drug can produce tolerance to other drugs that act by the same mechanism; this is known as cross tolerance.

• Drug tolerance often develops to some effects of a drug but not to others. Failure to understand this second point can have tragic consequences for people who think that because they have become tolerant to some effects of a drug (e.g., to the nauseating effects of alcohol or tobacco), they are tolerant to all of them. In fact, tolerance may develop to some effects of a drug while sensitivity to other effects of the same drug increases. Increasing sensitivity to a drug is called drug sensitization (Robinson, 1991).

Thinking Creatively

• Drug tolerance is not a unitary phenomenon; that is, there is no single mechanism that underlies all examples of it (Littleton, 2001). When a drug is administered at doses that affect nervous system function, many kinds of adaptive changes can occur to reduce its effects.

Two categories of changes underlie drug tolerance: metabolic and functional. Drug tolerance that results from changes that reduce the amount of the drug getting to its sites of action is called metabolic tolerance. Drug tolerance that results from changes that reduce the reactivity of the sites of action to the drug is called functional tolerance.

FIGURE 15.1 Drug tolerance: A shift in the dose-response curve to the right as a result of exposure to the drug.

Tolerance to psychoactive drugs is largely functional. Functional tolerance to psychoactive drugs can result from several different types of adaptive neural changes (see Treistman & Martin, 2009). For example, exposure to a psychoactive drug can reduce the number of receptors for it, decrease the efficiency with which it binds to existing receptors, or diminish the impact of receptor binding on the activity of the cell. At least some of these adaptive neural changes are caused by epigenetic mechanisms that affect gene expression (Wang et al., 2007).

Drug Withdrawal Effects and Physical Dependence

After significant amounts of a drug have been in the body for a period of time (e.g., several days), its sudden elimination can trigger an adverse physiological reaction called a withdrawal syndrome. The effects of drug withdrawal are virtually always opposite to the initial effects of the drug. For example, the withdrawal of anticonvulsant drugs often triggers convulsions, and the withdrawal of sleeping pills often produces insomnia. Individuals who suffer withdrawal reactions when they stop taking a drug are said to be physically dependent on that drug.

Clinical Implications

The fact that withdrawal effects are frequently opposite to the initial effects of the drug suggests that withdrawal effects may be produced by the same neural changes that produce drug tolerance (see Figure 15.2). According to this theory, exposure to a drug produces compensatory changes in the nervous system that offset the drug’s effects and produce tolerance. Then, when the drug is eliminated from the body, these compensatory neural changes, without the drug to offset them, manifest themselves as withdrawal symptoms opposite to the initial effects of the drug.
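
One way to see why the same neural changes could produce both tolerance and withdrawal is with a minimal formal sketch (not part of the original text; the symbols are assumptions introduced here). Write the observed response at time t as the drug's direct effect minus a compensatory adaptation that builds up with repeated exposure:

$$R_{\text{observed}}(t) \;=\; E_{\text{drug}}(t) \;-\; A(t)$$

While the drug is present, a growing A(t) progressively cancels E_drug(t), which is experienced as tolerance; when the drug is eliminated, E_drug(t) falls to zero while A(t) decays only slowly, so the observed response is approximately -A(t), an effect opposite in direction to the drug's initial effect, which is the pattern summarized in Figure 15.2.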

The severity of withdrawal symptoms depends on the particular drug in question, on the duration and degree of the preceding drug exposure, and on the speed with which the drug is eliminated from the body. In general, longer exposure to greater doses followed by more rapid elimination produces greater withdrawal effects.

Addiction: What Is It?

Addicts are habitual drug users, but not all habitual drug users are addicts. Addicts are those habitual drug users who continue to use a drug despite its adverse effects on their health and social life, and despite their repeated efforts to stop using it (see Volkow & Li, 2004).

The greatest confusion about the nature of drug addiction concerns its relation to physical dependence. Many people equate the two: They see drug addicts as people who are trapped on a merry-go-round of drug taking, withdrawal symptoms, and further drug taking to combat the withdrawal symptoms. Although appealing in its simplicity, this conception of drug addiction is inconsistent with the evidence. Addicts sometimes take drugs to prevent or alleviate their withdrawal symptoms (Baker et al., 2006), but this is not the major motivating factor in their addiction. If it were, drug addicts could be easily cured by hospitalizing them for a few days, until their withdrawal symptoms subsided. However, most addicts renew their drug taking even after months of enforced abstinence. This is an important issue, and it will be revisited later in this chapter.

Thinking Creatively

It may have occurred to you, given the foregoing definition of addiction, that drugs are not the only substances to which humans are commonly addicted.

FIGURE 15.2 The relation between drug tolerance and withdrawal effects. The same adaptive neurophysiological changes that develop in response to drug exposure and produce drug tolerance manifest themselves as withdrawal effects once the drug is removed. As the neurophysiological changes develop, tolerance increases; as they subside, the severity of the withdrawal effects decreases.

 Watch Why Drug Addiction Is Hard to Treat

www.mypsychlab.com

Indeed, people who risk their health by continually bingeing on high-calorie foods, risk their family life by repeated illicit sex, or risk their economic stability through compulsive gambling clearly satisfy the definition of an addict (Johnson & Kenny, 2010; Pelchat, 2009; Volkow & Wise, 2005). Although this chapter focuses on drug addiction, food, sex, and gambling addictions may be based on the same neural mechanisms.

15.2 Role of Learning in Drug Tolerance

An important line of psychopharmacologic research has shown that learning plays a major role in drug tolerance. In addition to contributing to our understanding of drug tolerance, this research has established that efforts to understand the effects of psychoactive drugs without considering the experience and behavior of the subjects can provide only partial answers.

Research on the role of learning in drug tolerance has focused on two phenomena: contingent drug tolerance and conditioned drug tolerance. These two phenomena are discussed in the following subsections.

Contingent Drug Tolerance

Contingent drug tolerance refers to demonstrations that tolerance develops only to drug effects that are actually experienced. Most studies of contingent drug tolerance employ the before-and-after design. In before-and-after experiments, two groups of subjects receive the same series of drug injections and the same series of repeated tests, but the subjects in one group receive the drug before each test of the series and those in the other group receive the drug after each test. At the end of the experiment, all subjects receive the same dose of the drug followed by the test so that the degree to which the drug disrupts test performance in the two groups can be compared.

My colleagues and I (Pinel, Mana, & Kim, 1989) used the before-and-after design to study contingent tolerance to the anticonvulsant effect of alcohol. In one study, two groups of rats received exactly the same regimen of alcohol injections: one injection every 2 days for the duration of the experiment. During the tolerance development phase, the rats in one group received each alcohol injection 1 hour before a mild convulsive amygdala stimulation so that the anticonvulsant effect of the alcohol could be experienced on each trial. The rats in the other group received their injections 1 hour after each convulsive stimulation so that the anticonvulsant effect could not be experienced. At the end of the experiment, all of the subjects received a test injection of alcohol, followed 1 hour later by a convulsive stimulation so that the amount of tolerance to the anticonvulsant effect of alcohol could be compared in the two groups. As Figure 15.3 illustrates, the rats that received alcohol on each trial before a convulsive stimulation became almost totally tolerant to alcohol’s anticonvulsant effect, whereas those that received the same injections and stimulations in the reverse order developed no tolerance whatsoever to alcohol’s anticonvulsant effect. Contingent drug tolerance has been demonstrated for many other drug effects in many species, including humans (see Poulos & Cappell, 1991; Wolgin & Jakubow, 2003).

Evolutionary Perspective

FIGURE 15.3 Contingent tolerance to the anticonvulsant effect of alcohol. The rats that received alcohol on each trial before a convulsive stimulation became tolerant to its anticonvulsant effect; those that received the same injections after a convulsive stimulation on each trial did not become tolerant. (Based on Pinel et al., 1989.)

Conditioned Drug Tolerance

Whereas studies of contingent drug tolerance focus on what subjects do while they are under the influence of drugs, studies of conditioned drug tolerance focus on the situations in which drugs are taken. Conditioned drug tolerance refers to demonstrations that tolerance effects are maximally expressed only when a drug is administered in the same situation in which it has previously been administered (see McDonald & Siegel, 2004; Mitchell, Basbaum, & Fields, 2000; Weise-Kelley & Siegel, 2001).

 Listen The Effects of Drugs and Alcohol

www.mypsychlab.com

In one demonstration of conditioned drug tolerance (Crowell, Hinson, & Siegel, 1981), two groups of rats received 20 alcohol and 20 saline injections in an alternating sequence, 1 injection every other day. The only difference between the two groups was that the rats in one group received all 20 alcohol injections in a distinctive test room and the 20 saline injections in their colony room, while the rats in the other group received the alcohol in the colony room and the saline in the distinctive test room. At the end of the injection period, the tolerance of all rats to the hypothermic (temperature-reducing) effects of alcohol was assessed in both environments. As Figure 15.4 illustrates, tolerance was observed only when the rats were injected in the environment that had previously been paired with alcohol administration. There have been dozens of other demonstrations of the situational specificity of drug tolerance: The effect is large, reliable, and general.

Evolutionary Perspective

The situational specificity of drug tolerance led Siegel and his colleagues to propose that addicts may be particularly susceptible to the lethal effects of a drug overdose when the drug is administered in a new context. Their hypothesis is that addicts become tolerant when they repeatedly self-administer their drug in the same environment and, as a result, begin taking larger and larger doses to counteract the diminution of drug effects. Then, if the addict administers the usual massive dose in an unusual situation, tolerance effects are not present to counteract the effects of the drug, and there is a greater risk of death from overdose. In support of this hypothesis, Siegel and colleagues (1982) found that many more heroin-tolerant rats died following a high dose of heroin administered in a novel environment than died in the usual injection environment. (Heroin, as you will learn later in the chapter, kills by suppressing respiration.)

Siegel views each incidence of drug administration as a Pavlovian conditioning trial in which various environmental stimuli (e.g., bars, washrooms, needles, or other addicts) that regularly predict the administration of the drug are conditional stimuli and the drug effects are unconditional stimuli. The central assumption of the theory is that conditional stimuli that predict drug administration come to elicit conditional responses opposite to the unconditional effects of the drug. Siegel has termed these hypothetical opposing conditional responses conditioned compensatory responses. The theory is that as the stimuli that repeatedly predict the effects of a drug come to elicit greater and greater conditioned compensatory responses, they increasingly counteract the unconditional effects of the drug and produce situationally specific tolerance.

Thinking Creatively

FIGURE 15.4 The situational specificity of tolerance to the hypothermic effects of alcohol in rats. (Based on Crowell et al., 1981.)

Alert readers will have recognized the relation between Siegel’s theory of drug tolerance and Woods’s theory of mealtime hunger, which you learned about in Chapter 12. Stimuli that predict the homeostasis-disrupting effects of meals trigger conditioned compensatory responses to minimize a meal’s disruptive effects in the same way that stimuli that predict the homeostasis-disturbing effects of a drug trigger conditioned compensatory responses to minimize the drug’s disruptive effects.

Most demonstrations of conditioned drug tolerance have employed exteroceptive stimuli (external, public stimuli, such as the drug-administration environment) as the conditional stimuli. However, interoceptive stimuli (internal, private stimuli) are just as effective in this role. For example, both the feelings produced by the drug-taking ritual and the first mild effects of the drug experienced soon after administration can, through conditioning, come to reduce the full impact of a drug (Siegel, 2005). This point about interoceptive stimuli is important because it indicates that just thinking about a drug can evoke conditioned compensatory responses.

Although tolerance develops to many drug effects, sometimes the opposite occurs, that is, drug sensitization. Drug sensitization, like drug tolerance, can be situationally specific (see Arvanitogiannis, Sullivan, & Amir, 2000). For example, Anagnostaras and Robinson (1996) demonstrated the situational specificity of sensitization to the motor stimulant effects of amphetamine. They found that 10 amphetamine injections, 1 every 3 or 4 days, greatly increased the ability of amphetamine to activate the motor activity of rats—but only when the rats were injected and tested in the same environment in which they had experienced the previous amphetamine injections.

Drug withdrawal effects and conditioned compensatory responses are similar: They are both responses that are opposite to the unconditioned effect of the drug. The difference is that drug withdrawal effects are produced by elimination of the drug from the body, whereas conditioned compensatory responses are elicited by drug-predictive cues in the absence of the drug. In complex, real-life situations, it is often difficult to tell them apart.

Thinking about Drug Conditioning

In any situation in which drugs are repeatedly administered, conditioned effects are inevitable. That is why it is particularly important to understand them. However, most theories of drug conditioning have a serious problem: They have difficulty predicting the direction of the conditioned effects. For example, Siegel’s conditioned compensatory response theory predicts that conditioned drug effects will always be opposite to the unconditioned effects of the drug, but there are many documented instances in which conditional stimuli elicit responses similar to those of the drug.

Ramsay and Woods (1997) contend that much of the confusion about conditioned drug effects stems from a misunderstanding of Pavlovian conditioning. In particular, they criticize the common assumption that the unconditional stimulus in a drug-tolerance experiment is the drug and that the unconditional response is whatever change in physiology or behavior the experimenter happens to be recording. They argue instead that the unconditional stimulus (i.e., the stimulus to which the subject reflexively reacts) is the disruption of neural functioning that has been directly produced by the drug, and that the unconditional responses are the various neurally mediated compensatory reactions to the unconditional stimulus.

Thinking Creatively

This change in perspective makes a big difference. For example, in the previously described alcohol tolerance experiment by Crowell and colleagues (1981), alcohol was designated as the unconditional stimulus and the resulting hypothermia as the unconditional response. Instead, Ramsay and Woods would argue that the unconditional stimulus was the hypothermia directly produced by the exposure to alcohol, whereas the compensatory changes that tended to counteract the reductions in body temperature were the unconditional responses. The important point about all of this is that once one determines the unconditional stimulus and unconditional response, it is easy to predict the direction of the conditional response in any drug-conditioning experiment: The conditional response is always similar to the unconditional response.

15.3 Five Commonly Abused Drugs

This section focuses on the hazards of chronic use of five commonly abused drugs: tobacco, alcohol, marijuana, cocaine, and the opiates.

Tobacco

When a cigarette is smoked, nicotine—the major psychoactive ingredient of tobacco—and some 4,000 other chemicals, collectively referred to as tar, are absorbed through the lungs. Nicotine acts on nicotinic cholinergic receptors in the brain (see Benowitz, 2008). Tobacco is the leading preventable cause of death in Western countries. In the United States, it contributes to 400,000 premature deaths a year—about 1 in every 5 deaths (U.S. Centers for Disease Control and Prevention, 2008b).

Because considerable tolerance develops to some of the immediate effects of tobacco, the effects of smoking a cigarette on nonsmokers and smokers can be quite different. Nonsmokers often respond to a few puffs of a cigarette with various combinations of nausea, vomiting, coughing, sweating, abdominal cramps, dizziness, flushing, and diarrhea. In contrast, smokers report that they are more relaxed, more alert, and less hungry after a cigarette.

There is no question that heavy smokers are drug addicts in every sense of the word (see Hogg & Bertrand, 2004). Can you think of any other psychoactive drug that is self-administered almost continually—even while the addict is walking along the street? The compulsive drug craving, which is the major defining feature of addiction, is readily apparent in any habitual smoker who has run out of cigarettes or who is forced by circumstance to refrain from smoking for several hours. Furthermore, habitual smokers who stop smoking experience a variety of withdrawal effects, such as depression, anxiety, restlessness, irritability, constipation, and difficulties in sleeping and concentrating.

About 70% of all people who experiment with smoking become addicted—this figure compares unfavorably with 10% for alcohol and 30% for heroin. Moreover, nicotine addiction typically develops quickly, within a few weeks (Di Franza, 2008), and only about 20% of all attempts to stop smoking are successful for 2 years or more. Twin studies (Lerman et al., 1999; True et al., 1999) confirm that nicotine addiction, like other addictions, has a major genetic component. The heritability estimate is about 65%.

The consequences of long-term tobacco use are alarming. Smoker’s syndrome is characterized by chest pain, labored breathing, wheezing, coughing, and a heightened susceptibility to infections of the respiratory tract. Chronic smokers are highly susceptible to a variety of potentially lethal lung disorders, including pneumonia, bronchitis (chronic inflammation of the bronchioles of the lungs), emphysema (loss of elasticity of the lung from chronic irritation), and lung cancer. Although the increased risk of lung cancer receives the greatest publicity, smoking also increases the risk of cancer of the larynx (voice box), mouth, esophagus, kidneys, pancreas, bladder, and stomach. Smokers also run a greater risk of developing a variety of cardiovascular diseases, which may culminate in heart attack or stroke.

Clinical Implications

 Watch Smoking Damage

www.mypsychlab.com

Many smokers claim that they smoke despite the adverse effects because smoking reduces tension. However, smokers are actually more tense than nonsmokers: Their levels of tension are reasonably normal while they are smoking, but they increase markedly between cigarettes.

Thus, the apparent relaxant effect of smoking merely reflects the temporary reversal of the stress caused by the smoker’s addiction (see Parrott, 1999). Consistent with this finding is the fact that smokers are more prone than nonsmokers to experience panic attacks (Zvolensky & Bernstein, 2005).

Sufferers from Buerger’s disease provide a shocking illustration of the addictive power of nicotine. In Buerger’s disease—which occurs in about 15 of 100,000 individuals, mostly in male smokers—the blood vessels, especially those supplying the legs, become constricted.

If a patient with this condition continues to smoke, gangrene may eventually set in. First a few toes may have to be amputated, then the foot at the ankle, then the leg at the knee, and ultimately at the hip. Somewhere along this gruesome progression gangrene may also attack the other leg. Patients are strongly advised that if they will only stop smoking, it is virtually certain that the otherwise inexorable march of gangrene up the legs will be curbed. Yet surgeons report that it is not at all uncommon to find a patient with Buerger’s disease vigorously puffing away in his hospital bed following a second or third amputation operation. (Brecher, 1972, pp. 215–216)

The adverse effects of tobacco smoke are unfortunately not restricted to those who smoke. Individuals who live or work with smokers are more likely to develop heart disease and cancer than those who don’t. Even the unborn are vulnerable (see Huizink & Mulder, 2006; Thompson, Levitt, & Stanwood, 2009). Nicotine is a teratogen (an agent that can disturb the normal development of the fetus): Smoking during pregnancy increases the likelihood of miscarriage, stillbirth, and early death of the child. And the levels of nicotine in the blood of breastfed infants are often as great as those in the blood of their smoking mothers.

 Watch Prenatal Smoking

www.mypsychlab.com

If you or a loved one is a cigarette smoker, some recent findings provide both good news and bad news (see West, 2007). First the bad news: Treatments for nicotine addiction are only marginally effective—nicotine patches have been shown to help some in the short term (Shiffman & Ferguson, 2008). The good news: Many people do stop smoking, and they experience major health benefits. For example, smokers who manage to stop smoking before the age of 30 live almost as long as people who have never smoked (Doll et al., 2004).

Alcohol

 Watch Alcoholism

www.mypsychlab.com

Alcohol is involved in over 3% of all deaths in the United States, including deaths from birth defects, ill health, accidents, and violence (see Mokdad et al., 2004). Approximately 13 million Americans are heavy users, and about 80,000 die each year from alcohol-related diseases and accidents (U.S. Centers for Disease Control and Prevention, 2008a).

Because alcohol molecules are small and soluble in both fat and water, they invade all parts of the body. Alcohol is classified as a depressant because at moderate-to-high doses it depresses neural firing; however, at low doses it can stimulate neural firing and facilitate social interaction. Alcohol addiction has a major genetic component (McGue, 1999): Heritability estimates are about 55%, and several genes associated with alcoholism have been identified (Nurnberger & Bierut, 2007).

 Watch Genetic Predisposition to Alcoholism

www.mypsychlab.com

With moderate doses, the alcohol drinker experiences various degrees of cognitive, perceptual, verbal, and motor impairment, as well as a loss of control that can lead to a variety of socially unacceptable actions. High doses result in unconsciousness; and if blood levels reach 0.5%, there is a risk of death from respiratory depression. The telltale red facial flush of alcohol intoxication is produced by the dilation of blood vessels in the skin; this dilation increases the amount of heat that is lost from the blood to the air and leads to a decrease in body temperature (hypothermia). Alcohol is also a diuretic; that is, it increases the production of urine by the kidneys.

Alcohol, like many addictive drugs, produces both tolerance and physical dependence. The livers of heavy drinkers metabolize alcohol more quickly than do the livers of nondrinkers, but this increase in metabolic efficiency contributes only slightly to overall alcohol tolerance; most alcohol tolerance is functional. Alcohol withdrawal often produces a mild syndrome of headache, nausea, vomiting, and tremulousness, which is euphemistically referred to as a hangover.

 Watch Alcohol Withdrawal

www.mypsychlab.com

A full-blown alcohol withdrawal syndrome comprises three phases (see De Witte et al., 2003). The first phase begins about 5 or 6 hours after the cessation of a long bout of heavy drinking and is characterized by severe tremors, agitation, headache, nausea, vomiting, abdominal cramps, profuse sweating, and sometimes hallucinations. The defining feature of the second phase, which typically occurs between 15 and 30 hours after cessation of drinking, is convulsive activity. The third phase, which usually begins a day or two after the cessation of drinking and lasts for 3 or 4 days, is called delirium tremens (DTs). The DTs are characterized by disturbing hallucinations, bizarre delusions, agitation, confusion, hyperthermia (high body temperature), and tachycardia (rapid heartbeat). The convulsions and the DTs produced by alcohol withdrawal can be lethal.

Clinical Implications

Alcohol attacks almost every tissue in the body (see Anderson et al., 1993). Chronic alcohol consumption produces extensive brain damage. This damage is produced both directly (see Mechtcheriakov et al., 2007) and indirectly. For example, you learned in Chapter 1 that alcohol indirectly causes Korsakoff’s syndrome (a neuropsychological disorder characterized by memory loss, sensory and motor dysfunction, and, in its advanced stages, severe dementia) by inducing thiamine deficiency, and it also indirectly causes brain damage by increasing susceptibility to stroke (Rehm, 2006). Alcohol affects the brain function of drinkers in other ways, as well. For example, it reduces the flow of calcium ions into neurons by acting on ion channels; it interferes with the function of second messengers inside neurons; it disrupts GABAergic and glutamatergic transmission; and it triggers apoptosis (see Farber & Olney, 2003; Ikonomidou et al., 2000).

Chronic alcohol consumption also causes extensive scarring, or cirrhosis, of the liver, which is the major cause of death among heavy alcohol users. Alcohol erodes the muscles of the heart and thus increases the risk of heart attack. It irritates the lining of the digestive tract and, in so doing, increases the risk of oral and liver cancer, stomach ulcers, pancreatitis (inflammation of the pancreas), and gastritis (inflammation of the stomach). And not to be forgotten is the carnage that alcohol produces from accidents on our roads, in our homes, in our workplaces, and at recreational sites—in the United States, over 20,000 people die each year in alcohol-related traffic accidents alone.

Many people assume that the adverse effects of alcohol occur only in people who drink a lot—they tend to define “a lot” as “much more than they themselves consume.” But they are wrong. Several large-scale studies have shown that even low-to-moderate regular drinking (a drink or two per day) is associated with elevated levels of most cancers, including breast, prostate, ovary, and skin cancer (Allen et al., 2009; Bagnardi et al., 2001; Benedetti, Parent, & Siemiatycki, 2009).

Like nicotine, alcohol readily penetrates the placental membrane and acts as a teratogen. The result is that the offspring of mothers who consume substantial quantities of alcohol during pregnancy can develop fetal alcohol syndrome (FAS)—see Calhoun and Warren (2007). The FAS child suffers from some or all of the following symptoms: brain damage, mental retardation, poor coordination, poor muscle tone, low birth weight, retarded growth, and/or physical deformity. Because alcohol can disrupt brain development in so many ways (e.g., by disrupting neurotrophic support, by disrupting the production of cell-adhesion molecules, or by disrupting normal patterns of apoptosis), there is no time during pregnancy when alcohol consumption is safe (see Farber & Olney, 2003; Guerri, 2002). Moreover, there seems to be no safe amount. Although full-blown FAS is rarely seen in the babies of mothers who never had more than one drink a day during pregnancy, children of mothers who drank only moderately while pregnant are sometimes found to have a variety of cognitive problems, even though they are not diagnosed with FAS (see Korkman, Kettunen, & Autti-Ramo, 2003).

There is no cure for alcoholism; however, disulfiram (Antabuse) can help reduce alcohol consumption under certain conditions. Disulfiram is a drug that interferes with the metabolism of alcohol and produces an accumulation in the bloodstream of acetaldehyde (one of alcohol’s breakdown products). High levels of acetaldehyde produce flushing, dizziness, headache, vomiting, and difficulty breathing; thus, a person who is medicated with disulfiram cannot drink much alcohol without feeling ill. Unfortunately, disulfiram is not a cure for alcoholism because alcoholics simply stop taking it when they return to drinking alcohol. However, treatment with disulfiram can be useful in curtailing alcohol consumption in hospital or outpatient environments, where patients take the medication each day under supervision (Brewer, 2007).

One of the most widely publicized findings about alcohol is that moderate drinking reduces the risk of coronary heart disease. This conclusion is based on the finding that the incidence of coronary heart disease is less among moderate drinkers than among abstainers. You learned in Chapter 1 about the difficulty in basing causal interpretations on correlational data, and researchers worked diligently to identify and rule out factors other than the alcohol that might protect moderate drinkers from coronary heart disease. They seemed to rule out every other possibility. However, a thoughtful new analysis has led to a different conclusion. Let me explain. In a culture in which alcohol consumption is the norm, any large group of abstainers will always include some people who have stopped drinking because they are ill—perhaps this is why abstainers have more heart attacks than moderate drinkers. This hypothesis was tested by including in a meta-analysis only those studies that used an abstainers control group consisting of individuals who had never consumed alcohol. This meta-analysis indicated that alcohol in moderate amounts does not prevent coronary heart disease; that is, moderate drinkers did not suffer less coronary heart disease than lifelong abstainers (Fillmore et al., 2006; Stockwell et al., 2007).

Thinking Creatively

Marijuana

Marijuana is the name commonly given to the dried leaves and flowers of Cannabis sativa—the common hemp plant. Approximately 2 million Americans have used marijuana in the last month. The usual mode of consumption is to smoke these leaves in a joint (a cigarette of marijuana) or a pipe; but marijuana is also effective when ingested orally, if first baked into an oil-rich substrate, such as a chocolate brownie, to promote absorption from the gastrointestinal tract.

The psychoactive effects of marijuana are largely attributable to a constituent called THC (delta-9-tetrahydrocannabinol). However, marijuana contains over 80 cannabinoids (chemicals of the same chemical class as THC), which may also be psychoactive. Most of the cannabinoids are found in a sticky resin covering the leaves and flowers of the plant; this resin can be extracted and dried to form a dark corklike material called hashish. Hashish can be further processed into an extremely potent product called hash oil.

Written records of marijuana use go back 6,000 years in China, where its stems were used to make rope, its seeds were used as a grain, and its leaves and flowers were used for their psychoactive and medicinal effects. In the Middle Ages, cannabis cultivation spread into Europe, where it was grown primarily for the manufacture of rope. During the period of European imperialism, rope was in high demand for sailing vessels, and the American colonies responded to this demand by growing cannabis as a cash crop. George Washington was one of the more notable cannabis growers.

The practice of smoking the leaves of Cannabis sativa and the word marijuana itself seem to have been introduced to the southern United States in the early part of the 20th century. In 1926, an article appeared in a New Orleans newspaper exposing the “menace of marijuana,” and soon similar stories were appearing in newspapers all over the United States claiming that marijuana turns people into violent, drug-crazed criminals. The misrepresentation of the effects of marijuana by the news media led to the rapid enactment of laws against the drug. In many states, marijuana was legally classified a narcotic (a legal term generally used to refer to opiates), and punishment for its use was dealt out accordingly. Marijuana bears no resemblance to opiate narcotics.

Popularization of marijuana smoking among the middle and upper classes in the 1960s stimulated a massive program of research. One of the difficulties in studying the effects of marijuana is that they are subtle, difficult to measure, and greatly influenced by the social situation:

At low, usual “social” doses, the intoxicated individual may experience an increased sense of well-being: initial restlessness and hilarity followed by a dreamy, carefree state of relaxation; alteration of sensory perceptions including expansion of space and time; and a more vivid sense of touch, sight, smell, taste, and sound; a feeling of hunger, especially a craving for sweets; and subtle changes in thought formation and expression. To an unknowing observer, an individual in this state of consciousness would not appear noticeably different. (National Commission on Marijuana and Drug Abuse, 1972, p. 68)

Although the effects of typical social doses of marijuana are subtle, high doses do impair psychological functioning. At high doses, short-term memory is impaired, and the ability to carry out tasks involving multiple steps to reach a specific goal declines. Speech becomes slurred, and meaningful conversation becomes difficult. A sense of unreality, emotional intensification, sensory distortion, feelings of paranoia, and motor impairment are also common.

 Watch Marijuana and Performance

www.mypsychlab.com

Some people do become addicted to marijuana, but its addiction potential is low. Most people who use marijuana do so only occasionally, with only about 10% of them using daily; moreover, most people who try marijuana do so in their teens and curtail their use by their 30s or 40s (see Room et al., 2010). Tolerance to marijuana develops during periods of sustained use; however, obvious withdrawal symptoms (e.g., nausea, diarrhea, sweating, chills, tremor, sleep disturbance) are rare, except in contrived laboratory situations in which massive oral doses are administered.

What are the health hazards of marijuana use? Two have been documented. First, the few marijuana smokers who do smoke it regularly for long periods (estimated to be about 10%) tend to develop respiratory problems (see Aldington et al., 2008; Brambilla & Colonna, 2008; Tetrault et al., 2007): cough, bronchitis, and asthma. Second, because marijuana produces tachycardia (elevated heart rate), single large doses can trigger heart attacks in susceptible individuals who have previously suffered a heart attack.

Clinical Implications

Although many people believe that marijuana causes brain damage, almost all efforts to document brain damage in marijuana users have proven negative. The one exception is an MRI study by Yücel and colleagues (2008), who studied the brains of 15 men who had an extremely high level of marijuana exposure—at least 5 joints per day for almost 20 years. These men had hippocampuses and amygdalae with reduced volumes. Because this finding is the single positive report in a sea of negative findings, it needs to be replicated. Furthermore, because the finding is correlational, it cannot prove that extremely high doses of marijuana can cause brain damage—one can just as easily conclude that brain damage predisposes individuals to pathological patterns of marijuana use.

Because it has been difficult to directly document brain damage in marijuana users, many studies have taken an indirect approach: They have attempted to document permanent memory loss in marijuana users—the assumption being that such loss would be indicative of brain damage. Many studies have documented memory deficits in marijuana users, but these deficits tend to be acute effects associated with marijuana intoxication that disappear after a few weeks of abstinence. Indeed, there seems to be a general consensus that marijuana use is not associated with substantial permanent memory problems (see Grant et al., 2003; Jager et al., 2006; Iversen, 2005). There have been reports (see Medina et al., 2007) that people who become heavy marijuana users in adolescence display memory and other cognitive deficits; however, Pope and colleagues (2003) found that adolescents with lower verbal intelligence scores are more likely to become heavy marijuana users, which likely accounts for the poorer cognitive performance.

Several correlational studies have found that heavy marijuana users are more likely to be diagnosed with schizophrenia (see Arseneault et al., 2004). The best of these studies followed a group of Swedish males for 25 years (Zammit et al., 2002); after some of the obvious confounds had been controlled, there was a higher incidence of schizophrenia among heavy marijuana users. This correlation has led some to conclude that marijuana causes schizophrenia, but, as you know, correlational evidence cannot prove causation. In this case, it is also possible that youths in the early developmental stages of schizophrenia have a particular attraction and/or susceptibility to marijuana; however, more research is required to understand the causal factors involved in this correlation (see Pollack & Reurer, 2007). In the meantime, individuals with a history of schizophrenia in their families should avoid marijuana.

Thinking Creatively

THC has been shown to have several therapeutic effects (see Karanian & Bahr, 2006). Since the early 1990s, it has been widely used to suppress nausea and vomiting in cancer patients and to stimulate the appetite of AIDS patients (see DiMarzo & Matias, 2005). THC has also been shown to block seizures; to dilate the bronchioles of asthmatics; to decrease the severity of glaucoma (a disorder characterized by an increase in the pressure of the fluid inside the eye); and to reduce anxiety, some kinds of pain, and the symptoms of multiple sclerosis (Agarwal et al., 2007; Nicoll & Alger, 2004; Page et al., 2003). Medical use of THC does not appear to be associated with adverse side effects (Degenhardt & Hall, 2008; Wang et al., 2008).

Research on THC changed irrevocably in the early 1990s with the discovery of two receptors for it in the brain: CB1 and CB2. CB1 turned out to be the most prevalent G-protein–linked receptor in the brain (see Chapter 4); CB2 is found in the brain stem and in the cells of the immune system (see Van Sickle et al., 2005). But why are there THC receptors in the brain? They could hardly have evolved to mediate the effects of marijuana smoking. This puzzle was quickly solved with the discovery of a class of endogenous cannabinoid neurotransmitters: the endocannabinoids (see Harkany, Mackie, & Doherty, 2008). The first endocannabinoid neurotransmitter to be isolated and characterized was named anandamide, from a word that means “internal bliss” (see Nicoll & Alger, 2004).

I cannot end this discussion of marijuana (Cannabis sativa) without telling you the following story:

You can imagine how surprised I was when my colleague went to his back door, opened it, and yelled, “Sativa, here Sativa, dinner time.”

“What was that you called your dog?” I asked as he returned to his beer.

“Sativa,” he said. “The kids picked the name. I think they learned about it at school; a Greek goddess or something. Pretty, isn’t it? And catchy too: Every kid on the street seems to remember her name.”

“Yes,” I said. “Very pretty.”

Cocaine and Other Stimulants

Stimulants are drugs whose primary effect is to produce general increases in neural and behavioral activity. Although stimulants all have a similar profile of effects, they differ greatly in their potency. Coca-Cola is a mild commercial stimulant preparation consumed by many people around the world. Today, its stimulant action is attributable to caffeine, but when it was first introduced, “the pause that refreshes” packed a real wallop in the form of small amounts of cocaine. Cocaine and its derivatives are the most commonly abused stimulants, and thus they are the focus of this discussion.

Cocaine is prepared from the leaves of the coca bush, which grows primarily in Peru and Bolivia. For centuries, a crude extract called coca paste has been made directly from the leaves and eaten. Today, it is more common to treat the coca paste and extract cocaine hydrochloride, the nefarious white powder that is referred to simply as cocaine and typically consumed by snorting or by injection. Cocaine hydrochloride may be converted to its base form by boiling it in a solution of baking soda until the water has evaporated. The impure residue of this process is crack, which is a potent, cheap, smokable form of cocaine. However, because crack is impure, variable, and consumed by smoking, it is difficult to study, and most research on cocaine derivatives has thus focused on pure cocaine hydrochloride. Approximately 36 million Americans have used cocaine or crack (Substance Abuse and Mental Health Services Administration [SAMHSA], 2009).

 Watch Cocaine

www.mypsychlab.com

Cocaine hydrochloride is an effective local anesthetic and was once widely prescribed as such until it was supplanted by synthetic analogues such as procaine and lidocaine. It is not, however, cocaine’s anesthetic actions that are of interest to users. People eat, smoke, snort, or inject cocaine or its derivatives in order to experience its psychological effects. Users report being swept by a wave of well-being; they feel self-confident, alert, energetic, friendly, outgoing, fidgety, and talkative; and they have less than their usual desire for food and sleep.

Cocaine addicts tend to go on so-called cocaine sprees, binges in which extremely high levels of intake are maintained for periods of a day or two. During a cocaine spree, users become increasingly tolerant to the euphoria-producing effects of cocaine. Accordingly, larger and larger doses are often administered. The spree usually ends when the cocaine is gone or when it begins to have serious toxic effects. The effects of cocaine sprees include sleeplessness, tremors, nausea, hyperthermia, and psychotic behavior, which is called cocaine psychosis and has often been mistakenly diagnosed as paranoid schizophrenia. During cocaine sprees, there is a risk of loss of consciousness, seizures, respiratory arrest, heart attack, or stroke (Kokkinos & Levine, 1993). Although tolerance develops to most effects of cocaine (e.g., to the euphoria), repeated cocaine exposure sensitizes subjects (i.e., makes them even more responsive) to its motor and convulsive effects (see Robinson & Berridge, 1993). The withdrawal effects triggered by abrupt termination of a cocaine spree are relatively mild. Common cocaine withdrawal symptoms include a negative mood swing and insomnia.

Clinical Implications

Cocaine and its various derivatives are not the only commonly abused stimulants. Amphetamine (speed) and its relatives also present major health problems. Amphetamine has been in wide illicit use since the 1960s. It is usually consumed orally in the potent form called d-amphetamine (dextroamphetamine). The effects of d-amphetamine are comparable to those of cocaine; for example, it produces a syndrome of psychosis called amphetamine psychosis.

FIGURE 15.5 Structural MRIs have revealed widespread loss of cortical volume in methamphetamine users. Red indicates the areas of greatest loss. (From Thompson et al., 2004.)

In the 1990s, d-amphetamine was supplanted as the favored amphetamine-like drug by several more potent relatives. One is methamphetamine, or “meth” (see Cho, 1990), which is commonly used in its even more potent, smokable, crystalline form (ice or crystal). Another potent relative of amphetamine is 3,4-methylenedioxymethamphetamine (MDMA, or ecstasy), which is taken orally (see Baylen & Rosenberg, 2006).

The primary mechanism by which cocaine and its derivatives exert their effects is the blockade of dopamine transporters, molecules in the presynaptic membrane that normally remove dopamine from synapses and transfer it back into presynaptic neurons. Other stimulants increase the release of monoamines into synapses (Sulzer et al., 2005).

Do stimulants have long-term adverse effects on the health of habitual users? There is mounting evidence that they do. Users of MDMA have deficits in the performance of various neuropsychological tests; they have deficiencies in various measures of dopaminergic and serotonergic function; and functional brain imaging during tests of executive functioning, inhibitory control, and decision making often reveals abnormalities in many areas of the cortex and limbic system (see Aron & Paulus, 2007; Baicy & London, 2007; Chang et al., 2007; Volz, Fleckenstein, & Hanson, 2007). The strongest evidence that methamphetamine damages the brain comes from a structural MRI study that found decreases in volume of various parts of the brains of persons who had used methamphetamine for an average of 10 years (Thompson et al., 2004)—the reductions in cortical volume are illustrated in Figure 15.5. Controlled experiments on nonhumans have confirmed the adverse effects of stimulants on brain function (see McCann & Ricaurte, 2004).

Clinical Implications

Although research on the health hazards of stimulants has focused on brain pathology, there is also evidence of heart pathology—many methamphetamine-dependent patients have been found to have electrocardiographic abnormalities (Haning & Goebert, 2007). Also, many behavioral, neurological, and cardiovascular problems have been observed in infants born to mothers who have used stimulants while pregnant (see Harvey, 2004).

The Opiates: Heroin and Morphine

Opium—the dried form of sap exuded by the seed pods of the opium poppy—has several psychoactive ingredients. Most notable are morphine and codeine, its weaker relative. Morphine, codeine, and other drugs that have similar structures or effects are commonly referred to as opiates. The opiates exert their effects by binding to receptors whose normal function is to bind to endogenous opiates. The endogenous opiate neurotransmitters that bind to such receptors are of two classes: endorphins and enkephalins (see Chapter 4).

The opiates have a Jekyll-and-Hyde character. On their Dr. Jekyll side, the opiates are effective as analgesics (painkillers; see Watkins et al., 2005); they are also extremely effective in the treatment of cough and diarrhea. But, unfortunately, the kindly Dr. Jekyll brings with him the evil Mr. Hyde—the risk of addiction.

The practice of eating opium spread from the Middle East sometime before 4000 B.C. Three historic events fanned the flame of opiate addiction. First, in 1644, the Emperor of China banned tobacco smoking, and this contributed to a gradual increase in opium smoking in China, spurred on by the smuggling of opium into China by the British East India Company. Because smoking opium has a greater effect on the brain than does eating it, many more people became addicted. Second, morphine, the most potent constituent of opium, was isolated in 1803, and it became available commercially in the 1830s. Third, the hypodermic needle was invented in 1856, and soon the injured were introduced to morphine through a needle.

Until the early part of the 20th century, opium was available legally in many parts of the world, including Europe and North America. Indeed, opium was an ingredient in cakes, candies, and wines, as well as in a variety of over-the-counter medicinal offerings. Opium potions such as laudanum (a very popular mixture of opium and alcohol), Godfrey’s Cordial, and Dalby’s Carminative were very popular. (The word carminative should win first prize for making a sow’s ear at least sound like a silk purse: A carminative is a drug that expels gas from the digestive tract, thereby reducing stomach cramps and flatulence. Flatulence is the obvious pick for second prize.) There were even over-the-counter opium potions just for baby—such as Mrs. Winslow’s Soothing Syrup and the aptly labeled Street’s Infant Quietness. Although pure morphine required a prescription at the time, physicians prescribed it for so many different maladies that morphine addiction was common among those who could afford a doctor.

The Harrison Narcotics Act, passed in 1914, made it illegal to sell or use opium, morphine, or cocaine in the United States—although morphine and its analogues are still legally prescribed for their medicinal properties. However, the act did not include the semisynthetic opiate heroin. Heroin was synthesized in 1874 by the addition of two acetyl groups to the morphine molecule, which greatly increased its ability to penetrate the blood–brain barrier. In 1898, heroin was marketed by the Bayer Drug Company; it was freely available without prescription and was widely advertised as a superior kind of aspirin. Tests showed that it was a more potent analgesic than morphine and that it was less likely to induce nausea and vomiting. Moreover, the Bayer Drug Company, on the basis of flimsy evidence, claimed that heroin was not addictive; this is why it was not covered by the Harrison Narcotics Act. The consequence of omitting heroin from the Harrison Narcotics Act was that opiate addicts in the United States, forbidden by law to use opium or morphine, turned to the readily available and much more potent heroin—and the flames of addiction were further fanned. In 1924, the U.S. Congress made it illegal for anybody to possess, sell, or use heroin. Unfortunately, the laws enacted to stamp out opiate addiction in the United States have been far from successful: An estimated 136,000 Americans currently use heroin (National Survey on Drug Use and Health, 2005), and organized crime flourishes on the proceeds.

The effect of opiates most valued by addicts is the rush that follows intravenous injection. The heroin rush is a wave of intense abdominal, orgasmic pleasure that evolves into a state of serene, drowsy euphoria. Many opiate users, drawn by these pleasurable effects, begin to use the drug more and more frequently. Then, once they reach a point where they keep themselves drugged much of the time, tolerance and physical dependence develop and contribute to the problem. Opiate tolerance encourages addicts to progress to higher doses, to more potent drugs (e.g., heroin), and to more direct routes of administration (e.g., IV injection); and physical dependence adds to the already high motivation to take the drug.

The classic opiate withdrawal syndrome usually begins 6 to 12 hours after the last dose. The first withdrawal sign is typically an increase in restlessness; the addict begins to pace and fidget. Watering eyes, running nose, yawning, and sweating are also common during the early stages of opiate withdrawal. Then, the addict often falls into a fitful sleep, which typically lasts for several hours. Once the person wakes up, the original symptoms may be joined in extreme cases by chills, shivering, profuse sweating, gooseflesh, nausea, vomiting, diarrhea, cramps, dilated pupils, tremor, and muscle pains and spasms. The gooseflesh skin and leg spasms of the opiate withdrawal syndrome are the basis for the expressions “going cold turkey” and “kicking the habit.” The symptoms of opiate withdrawal are typically most severe in the second or third day after the last injection, and by the seventh day they have all but disappeared. In short, opiate withdrawal is about as serious as a bad case of the flu:

Opiate withdrawal is probably one of the most misunderstood aspects of drug use. This is largely because of the image of withdrawal that has been portrayed in the movies and popular literature for many years…. Few addicts . . . take enough drug to cause the . . . severe withdrawal symptoms that are shown in the movies. Even in its most severe form, however, opiate withdrawal is not as dangerous or terrifying as withdrawal from barbiturates or alcohol. (McKim, 1986, p. 199)

Although opiates are highly addictive, the direct health hazards of chronic exposure are surprisingly minor. The main direct risks are constipation, pupil constriction, menstrual irregularity, and reduced libido (sex drive). Many opiate addicts have taken pure heroin or morphine for years with no serious ill effects. In fact, opiate addiction is more prevalent among doctors, nurses, and dentists than among other professionals (e.g., Brewster, 1986):

Clinical Implications

An individual tolerant to and dependent upon an opiate who is socially or financially capable of obtaining an adequate supply of good quality drug, sterile syringes and needles, and other paraphernalia may maintain his or her proper social and occupational functions, remain in fairly good health, and suffer little serious incapacitation as a result of the dependence. (Julien, 1981, p. 117)

One such individual was Dr. William Stewart Halsted, one of the founders of Johns Hopkins Medical School and one of the most brilliant surgeons of his day . . . known as “the father of modern surgery.” And yet, during his career he was addicted to morphine, a fact that he was able to keep secret from all but his closest friends. In fact, the only time his habit caused him any trouble was when he was attempting to reduce his dosage. (McKim, 1986, p. 197)

Most medical risks of opiate addiction are indirect—that is, not entirely attributable to the drug itself. Many of the medical risks arise out of the battle between the relentless addictive power of opiates and the attempts of governments to eradicate addiction by making opiates illegal. The opiate addicts who cannot give up their habit—treatment programs report success rates of only about 10%—are caught in the middle. Because most opiate addicts must purchase their morphine and heroin from illicit dealers at greatly inflated prices, those who are not wealthy become trapped in a life of poverty and petty crime. They are poor, they are undernourished, they receive poor medical care, they are often driven to prostitution, and they run great risk of contracting AIDS and other infections (e.g., hepatitis, syphilis, and gonorrhea) from unsafe sex and unsterile needles. Moreover, they never know for sure what they are injecting: Some street drugs are poorly processed, and virtually all have been cut (stretched by the addition of some similar-appearing substance) to some unknown degree.

Thinking Creatively

Death from heroin overdose is a serious problem—high doses of heroin kill by suppressing breathing (Megarbane et al., 2005). However, death from heroin overdose is not well understood. The following are three points of confusion:

• Medical examiners often attribute death to heroin overdose without assessing blood levels of heroin. Careful toxicological analysis at autopsy often reveals that this diagnosis is questionable (Poulin, Stein, & Butt, 2000). In many cases, the deceased have low levels of heroin in the blood and high levels of other CNS depressants such as alcohol and benzodiazepines. In short, many so-called heroin overdose deaths appear to be a product of drug interaction (Darke et al., 2000; Darke & Zador, 1996; Mirakbari, 2004).

• Some deaths from heroin overdose are a consequence of its legal status. Because addicts are forced to buy their drugs from criminals, they never know for sure what they are buying. Reports of death from heroin overdose occur when a shipment of heroin that has been cut with a toxic substance hits the street or when the heroin is purer than usual (Darke et al., 1999; McGregor et al., 1998).

• In the United States, deaths from opiate overdose have increased precipitously in the last few years, and many people attribute this increase to heroin. However, the sharp increase is almost entirely due to legal synthetic opioid analgesics such as OxyContin and Lorcet (Manchikanti, 2007; Paulozzi, Budnitz, & Xi, 2006).

The primary treatment for heroin addiction in most countries is methadone. Ironically, methadone is itself an opiate with many of the same adverse effects as heroin. However, because methadone produces less pleasure than heroin, the strategy has been to block heroin withdrawal effects with methadone and then maintain addicts on methadone until they can be weaned from it. Methadone replacement has been shown to improve the success rate of some treatment programs, but its adverse effects and the high drop-out rates from such programs are problematic (see Zador, 2007). Buprenorphine is an alternative treatment for heroin addiction. Buprenorphine has a high and long-lasting affinity for opiate receptors and thus blocks the effects of other opiates on the brain without itself producing powerful euphoria. Studies suggest that it is as effective as methadone (see Davids & Gaspar, 2004; Gerra et al., 2004).

In 1994, the Swiss government took an alternative approach to the problem of heroin addiction—despite substantial opposition from the Swiss public. It established a series of clinics in which, as part of a total treatment package, Swiss heroin addicts could receive heroin injections from a physician for a small fee. The Swiss government wisely funded a major research program to evaluate the clinics (see Gschwend et al., 2002). The results have been uniformly positive. Once they had a reliable source of heroin, most addicts gave up their criminal lifestyles, and their health improved once they were exposed to the specialized medical and counseling staff at the clinics. Many addicts returned to their family and jobs, and many opted to reduce or curtail their heroin use. As a result, addicts are no longer a presence in Swiss streets and parks; drug-related crime has substantially declined, and the physical and social well-being of the addicts has greatly improved. Furthermore, the number of new heroin addicts has declined, apparently because once addiction becomes treated as an illness, it becomes less cool (see Brehmer & Iten, 2001; De Preux, Dubois-Arber, & Zobel, 2004; Gschwend et al., 2003; Nordt & Stohler, 2006; Rehm et al., 2001).

Clinical Implications

These positive results have led to the establishment of similar experimental programs in other countries (e.g., Canada, Norway, Netherlands, and Germany) with similar success (see Skeie et al., 2008; Yan, 2009). Furthermore, safe injection facilities have managed to reduce the spread of infection and death from heroin overdose in many cities (e.g., Milloy et al., 2008). Given the unqualified success of such programs in dealing with the drug problem, it is interesting to consider why some governments have not adopted them (see Fischer et al., 2007). What do you think?

Thinking Creatively

Comparison of the Hazards of Tobacco, Alcohol, Marijuana, Cocaine, and Heroin

One way of comparing the adverse effects of tobacco, alcohol, marijuana, cocaine, and heroin is to compare the prevalence of their use in society as a whole. In terms of this criterion, it is clear that tobacco and alcohol have a greater negative impact than do marijuana, cocaine, and heroin (see Figure 15.6). Another method of comparison is one based on death rates: Tobacco has been implicated in the deaths of approximately 400,000 Americans per year; alcohol, in approximately 80,000 per year; and all other drugs combined, in about 25,000 per year.

Thinking Creatively

But what about the individual drug user? Who is taking greater health risks: the cigarette smoker, the alcohol drinker, the marijuana smoker, the cocaine user, or the heroin user? You now have the information to answer this question. Complete the Scan Your Brain, which will help you appreciate the positive impact that studying biopsychology is having on your understanding of important issues. Would you have ranked the health risks of these drugs in the same way before you began this chapter? How have the laws, or lack thereof, influenced the hazards associated with the five drugs?
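One way to make the per-user comparison concrete is to divide each drug's annual death toll by its number of current users. The sketch below is my own illustration, not part of the text: the death counts are the figures cited above, but the user counts are hypothetical placeholders standing in for the prevalence data of Figure 15.6, so the printed ratios are illustrative only.

# Back-of-the-envelope per-user risk comparison.
# Death counts are the annual U.S. figures cited in the text; the
# user counts are HYPOTHETICAL placeholders standing in for the
# prevalence data of Figure 15.6 (not reproduced here).

deaths_per_year = {
    "tobacco": 400_000,
    "alcohol": 80_000,
    "all other drugs combined": 25_000,
}

current_users = {  # illustrative placeholder values only
    "tobacco": 60_000_000,
    "alcohol": 120_000_000,
    "all other drugs combined": 20_000_000,
}

for drug, deaths in deaths_per_year.items():
    per_user_risk = deaths / current_users[drug]
    print(f"{drug}: ~{per_user_risk * 100_000:.0f} deaths per 100,000 users per year")

The point of the exercise is only that total deaths and per-user risk can rank the drugs differently, which is exactly the distinction the question above asks you to consider.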

Thinking Creatively

Scan Your Brain

FIGURE 15.6 Prevalence of drug use in the United States. Figures are based on a survey of people 12 years of age and over who live in households and used the drug in question at least once in the last month. (Based on National Survey on Drug Use and Health, 2005.)

15.4 Biopsychological Approaches to Theories of Addiction

This section of the chapter introduces two diametrically different ways of thinking about an addiction: Are addicts driven to take drugs by an internal need, or are they drawn to take drugs by the anticipated positive effects? I am sure you will recognize, after having read the preceding chapters, that this is the same fundamental question that has been the focus of biopsychological research on the motivation to eat and sleep.

Physical-Dependence and Positive-Incentive Perspectives of Addiction

Early attempts to explain the phenomenon of drug addiction attributed it to physical dependence. According to various physical-dependence theories of addiction, physical dependence traps addicts in a vicious circle of drug taking and withdrawal symptoms. The idea was that drug users whose intake has reached a level sufficient to induce physical dependence are driven by their withdrawal symptoms to self-administer the drug each time they attempt to curtail their intake.

Early drug addiction treatment programs were based on the physical-dependence perspective. They attempted to break the vicious circle of drug taking by gradually withdrawing drugs from addicts in a hospital environment. Unfortunately, once discharged, almost all detoxified addicts return to their former drug-taking habits—detoxified addicts are addicts who have no drugs in their bodies and who are no longer experiencing withdrawal symptoms.

The failure of detoxification as a treatment for addiction is not surprising, for two reasons. First, some highly addictive drugs, such as cocaine and amphetamines, do not produce severe withdrawal distress (see Gawin, 1991). Second, the pattern of drug taking routinely displayed by many addicts involves an alternating cycle of binges and detoxification (Mello & Mendelson, 1972). There are a variety of reasons for this pattern of drug use. For example, some addicts adopt it because weekend binges are compatible with their work schedules, others adopt it because they do not have enough money to use drugs continuously, others have it forced on them because their binges often land them in jail, and others have it forced on them by their repeated unsuccessful efforts to shake their habit. However, whether detoxification is by choice or necessity, it does not stop addicts from renewing their drug-taking habits (see Leshner, 1997).

As a result of these problems with physical-dependence theories of addiction, a different approach began to predominate in the 1970s and 1980s (see Higgins, Heil, & Lussier, 2004). This approach was based on the assumption that most addicts take drugs not to escape or to avoid the unpleasant consequences of withdrawal, but rather to obtain the drugs’ positive effects. Theories of addiction based on this premise are called positive-incentive theories of addiction. They hold that the primary factor in most cases of addiction is the craving for the positive-incentive (expected pleasure-producing) properties of the drug.

There is no question that physical dependence does play a role in addiction: Addicts do sometimes consume the drug to alleviate their withdrawal symptoms. However, most researchers now assume that the primary factor in addiction is the drugs’ hedonic (pleasurable) effects (see Cardinal & Everitt, 2004; Everitt, Dickinson, & Robbins, 2001). All drugs with addiction potential have some pleasurable effects for users.

From Pleasure to Compulsion: Incentive-Sensitization Theory

To be useful, positive-incentive theories of drug addiction need to offer explanations for two puzzling aspects of drug addiction. First, they must explain why there is often such a big difference between the hedonic value of drug taking and the positive-incentive value of drug taking. Positive-incentive value refers specifically to the anticipated pleasure associated with an action (e.g., taking a drug), whereas hedonic value refers to the amount of pleasure that is actually experienced. Addicts often report a huge discrepancy between them: Although they are compulsively driven to take their drug by its positive-incentive value (i.e., by the anticipated pleasure), taking the drug is often not as pleasurable as it once was (see Ahmed, 2004; Redish, 2004).

The second challenge faced by positive-incentive theories of drug addiction is that they must explain the process that transforms a drug user into a drug addict. Many people periodically use addictive drugs and experience their hedonic effects without becoming addicted to them (see Everitt & Robbins, 2005; Kreek et al., 2005). What transforms some drug users into compulsive users, or addicts?

The incentive-sensitization theory of drug addiction meets these two challenges (see Berridge, Robinson, & Aldridge, 2009). The central tenet of this theory is that the positive-incentive value of addictive drugs increases (i.e., is sensitized) with drug use (see Miles et al., 2004). Robinson and Berridge (2003) have suggested that in addiction-prone individuals, the use of a drug sensitizes the drug’s positive-incentive value, thus rendering such individuals highly motivated to seek and consume the drug. A key point of Robinson and Berridge’s incentive-sensitization theory is that it isn’t the pleasure (liking) of taking the drug that is the basis of addiction; it is the anticipated pleasure (wanting) of drug taking (i.e., the drug’s positive-incentive value). Initially, a drug’s positive-incentive value is closely tied to its pleasurable effects; but tolerance often develops to the pleasurable effects, whereas the addict’s wanting for the drug is sensitized. Thus, in chronic addicts, the positive-incentive value of the drug is often out of proportion with the pleasure actually derived from it: Many addicts are miserable, their lives are in ruins, and the drug effects are not that great anymore; but they crave the drug more than ever.
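The dissociation between “liking” and “wanting” described above can be made concrete with a toy simulation. The sketch below is only an illustration under assumed parameters; the decay and growth rates and the functional forms are my own choices, not taken from Robinson and Berridge. Hedonic value shrinks with repeated exposure as tolerance develops, while positive-incentive value is sensitized and grows.

def simulate_incentive_sensitization(exposures=20, liking=1.0, wanting=1.0,
                                     tolerance_rate=0.10, sensitization_rate=0.15):
    """Toy model only: rates and functional forms are illustrative assumptions."""
    history = []
    for exposure in range(1, exposures + 1):
        liking *= (1 - tolerance_rate)        # hedonic value erodes (tolerance)
        wanting *= (1 + sensitization_rate)   # positive-incentive value sensitizes
        history.append((exposure, liking, wanting))
    return history

for exposure, liking, wanting in simulate_incentive_sensitization():
    if exposure % 5 == 0:
        print(f"exposure {exposure:2d}: liking = {liking:.2f}, wanting = {wanting:.2f}")

With these assumed rates, after 20 simulated exposures “liking” has fallen to roughly a tenth of its starting value while “wanting” has grown more than tenfold, which is the qualitative dissociation the theory describes.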

Relapse and Its Causes

The most difficult problem in treating drug addicts is not getting them to stop using their drug. The main problem is preventing those who stop from relapsing. The propensity to relapse (to return to one’s drug-taking habit), even after a long period of voluntary abstinence, is a hallmark of addiction. Thus, understanding the causes of relapse is one key to understanding addiction and its treatment.

Three fundamentally different causes of relapse in drug addicts have been identified (see Shaham & Hope, 2005):

• Many therapists and patients point to stress as a major factor in relapse. The impact of stress on drug taking was illustrated in a dramatic fashion by the marked increases in cigarette and alcohol consumption that occurred among New Yorkers following the terrorist attacks of September 11, 2001.

• Another cause of relapse in drug addicts is drug priming (a single exposure to the formerly abused drug). Many addicts who have abstained for many weeks, and thus feel that they have their addiction under control, sample their formerly abused drug just once and are immediately plunged back into full-blown addiction.

• A third cause of relapse in drug addicts is exposure to environmental cues (e.g., people, times, places, or objects) that have previously been associated with drug taking (see Conklin, 2006; Di Ciano & Everitt, 2003). Such environmental cues have been shown to precipitate relapse. The fact that many U.S. soldiers who became addicted to heroin while fighting in the Vietnam War easily shed their addiction when they returned home has been attributed to their removal from that drug-associated environment.

Explanation of the effects of environmental cues on relapse is related to our discussion of conditioned drug tolerance earlier in the chapter (see Kauer & Malenka, 2007). You may recall that cues that predict drug exposure come to elicit conditioned compensatory responses through a Pavlovian conditioning mechanism, and because conditioned compensatory responses are usually opposite to the original drug effects, they produce tolerance. The point here is that these same conditioned compensatory responses seem to increase craving in abstinent drug addicts and, in so doing, trigger relapse. Moreover, because interoceptive cues have been shown to function as conditional stimuli in conditioned tolerance experiments, they can also induce craving—that is why just thinking about drugs is enough to induce craving and relapse. Because susceptibility to relapse is a defining feature of drug addicts, conditioned drug responses play a major role in most modern theories of drug addiction (see Day & Carelli, 2007; Hellemans, Dickinson, & Everitt, 2006; Hyman, Malenka, & Nestler, 2006).
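The Pavlovian-conditioning account sketched above can be illustrated with the Rescorla–Wagner learning rule, a standard formalism for how a predictive cue acquires associative strength. The rule is not named in the text, and the parameter values below are arbitrary; the sketch simply shows how, over repeated cue–drug pairings, a cue comes to evoke a strong conditioned (compensatory) response, which on this account is also what drives craving.

def rescorla_wagner(trials=30, alpha=0.3, lam=1.0):
    """alpha: learning rate; lam: asymptote set by the drug (the unconditional stimulus)."""
    v = 0.0                      # associative strength of the drug-predictive cue
    strengths = []
    for _ in range(trials):
        v += alpha * (lam - v)   # prediction error drives learning
        strengths.append(v)
    return strengths

for trial, v in enumerate(rescorla_wagner(), start=1):
    if trial % 5 == 0:
        print(f"trial {trial:2d}: cue's associative strength = {v:.3f}")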

Scan Your Brain

So far in this chapter, you have been introduced to the principles of drug action, the role of learning in drug tolerance, five common addictive drugs, and theories of drug addiction. This is a good place to pause and reinforce what you have learned. In each blank, write the appropriate term. The correct answers are provided at the end of the exercise. Review material related to your errors and omissions before proceeding.

1. Drugs that affect the nervous system and behavior are called ______ drugs.

2. The most dangerous route of drug administration is ______ injection.

3. Drug tolerance is of two different types: metabolic and ______.

4. An individual who displays a withdrawal syndrome when intake of a drug is curtailed is said to be ______ on that drug.

5. The before-and-after design is used to study ______ drug tolerance.

6. The fact that drug tolerance is often ______ suggests that Pavlovian conditioning plays a major role in addiction.

7. ______ disease provides a compelling illustration of nicotine’s addictive power.

8. Convulsions and hyperthermia are symptoms of withdrawal from ______.

9. Anandamide was the first endogenous ______ to be identified.

10. Cocaine sprees can produce cocaine psychosis, a syndrome that is similar to paranoid ______.

11. Morphine and codeine are constituents of ______.

12. ______ is a semisynthetic opiate that penetrates the blood–brain barrier more effectively than morphine.

13. ______ heroin addicts were among the first to legally receive heroin injections from a physician for a small fee.

14. Many current theories of addiction focus on the ______ of addictive drugs.

Scan Your Brain answers: (1) psychoactive, (2) intravenous (or IV), (3) functional, (4) physically dependent, (5) contingent, (6) situationally specific, (7) Buerger’s, (8) alcohol, (9) cannabinoid, (10) schizophrenia, (11) opium, (12) Heroin, (13) Swiss, (14) positive-incentive value.

15.5 Intracranial Self-Stimulation and the Pleasure Centers of the Brain

Rats, humans, and many other species will administer brief bursts of weak electrical stimulation to specific sites in their own brains (see Figure 15.7). This phenomenon is known as intracranial self-stimulation (ICSS), and the brain sites capable of mediating the phenomenon are often called pleasure centers. When research on addiction turned to positive incentives in the 1970s and 1980s, what had been learned about the neural mechanisms of pleasure from studying intracranial self-stimulation served as a starting point for the study of the neural mechanisms of addiction.

FIGURE 15.7 A rat pressing a lever to obtain rewarding brain stimulation.

Olds and Milner (1954), the discoverers of intracranial self-stimulation, argued that the specific brain sites that mediate self-stimulation are those that normally mediate the pleasurable effects of natural rewards (i.e., food, water, and sex). Accordingly, researchers studied the self-stimulation of various brain sites in order to map the neural circuits that mediate the experience of pleasure.

Fundamental Characteristics of Intracranial Self-Stimulation

It was initially assumed that intracranial self-stimulation was a unitary phenomenon—that is, that its fundamental properties were the same regardless of the site of stimulation. Most early studies of intracranial self-stimulation involved septal or lateral hypothalamic stimulation because the rates of self-stimulation from these sites are spectacularly high: Rats typically press a lever thousands of times per hour for stimulation of these sites, stopping only when they become exhausted. However, self-stimulation of many other brain structures has been documented.

Evolutionary Perspective

Early studies of intracranial self-stimulation suggested that lever pressing for brain stimulation was fundamentally different from lever pressing for natural reinforcers such as food or water. Two puzzling observations contributed to this view. First, despite their extremely high response rates, many rats stopped pressing the self-stimulation lever almost immediately when the current delivery mechanism was shut off. This finding was puzzling because high rates of operant responding are generally assumed to indicate that the reinforcer is particularly pleasurable, whereas rapid rates of extinction are usually assumed to indicate that it is not. Would you stop pressing a lever that had been delivering $100 bills the first few times that a press did not produce one? Second, experienced self-stimulators often did not recommence lever pressing when they were returned to the apparatus after being briefly removed from it. In such cases, the rats had to be primed to get them going again: The experimenter simply pressed the lever a couple of times, to deliver a few free stimulations, and the hesitant rat immediately began to self-stimulate at a high rate once again.

These differences between lever pressing for rewarding lateral hypothalamic or septal stimulation and lever pressing for food or water seemed to discredit Olds and Milner’s original theory that intracranial self-stimulation involves the activation of natural reward circuits in the brain. However, several lines of research indicate that the circuits mediating intracranial self-stimulation are natural reward circuits. Let’s consider three of these.

First, brain stimulation through electrodes that mediate self-stimulation often elicits a natural motivated behavior such as eating, drinking, or copulation in the presence of the appropriate goal object. Second, producing increases in natural motivation (for example, by food or water deprivation, by hormone injections, or by the presence of prey objects) often increases self-stimulation rates.

The third point is a bit more complex: It became clear that differences between the situations in which the rewarding effects of brain stimulation and those of natural rewards were usually studied contribute to the impression that these effects are qualitatively different. For example, comparisons between lever pressing for food and lever pressing for brain stimulation are usually confounded by the fact that subjects pressing for brain stimulation are nondeprived and by the fact that the lever press delivers the reward directly and immediately. In contrast, in studies of lever pressing for natural rewards, subjects are often deprived, and they press a lever for a food pellet or a drop of water, which they must then approach and consume to experience the rewarding effects. This point was illustrated by a clever experiment (Panksepp & Trowill, 1967) that compared lever pressing for brain stimulation and lever pressing for a natural reinforcer in a situation in which the usual confounds were absent. In the absence of the confounds, some of the major differences between lever pressing for food and lever pressing for brain stimulation disappeared. When nondeprived rats pressed a lever to inject a small quantity of chocolate milk directly into their mouths through an intraoral tube, they behaved remarkably like self-stimulating rats: They quickly learned to press the lever, they pressed at high rates, they extinguished quickly, and some even had to be primed.

Thinking Creatively

Mesotelencephalic Dopamine System and Intracranial Self-Stimulation

The mesotelencephalic dopamine system plays an important role in intracranial self-stimulation. The mesotelencephalic dopamine system is a system of dopaminergic neurons that projects from the mesencephalon (the midbrain) into various regions of the telencephalon. As Figure 15.8 indicates, the neurons that compose the mesotelencephalic dopamine system have their cell bodies in two midbrain nuclei—the substantia nigra and the ventral tegmental area. Their axons project to a variety of telencephalic sites, including specific regions of the prefrontal neocortex, the limbic cortex, the olfactory tubercle, the amygdala, the septum, the dorsal striatum, and, in particular, the nucleus accumbens (nucleus of the ventral striatum)—see Zahm, 2000.

Most of the axons of dopaminergic neurons that have their cell bodies in the substantia nigra project to the dorsal striatum; this component of the mesotelencephalic dopamine system is called the nigrostriatal pathway. It is degeneration in this pathway that is associated with Parkinson’s disease.

FIGURE 15.8 The mesotelencephalic dopamine system in the human brain, consisting of the nigrostriatal pathway (green) and the mesocorticolimbic pathway (red). (Based on Klivington, 1992.)

Most of the axons of dopaminergic neurons that have their cell bodies in the ventral tegmental area project to various cortical and limbic sites. This component of the mesotelencephalic dopamine system is called the mesocorticolimbic pathway. Although there is some intermingling of the neurons between these two dopaminergic pathways, it is the particular neurons that project from the ventral tegmental area to the nucleus accumbens that have been most frequently implicated in the rewarding effects of brain stimulation, natural rewards, and addictive drugs.

FIGURE 15.9 The increase in dopamine release from the nucleus accumbens during consecutive periods of intracranial self-stimulation. (Based on Phillips et al., 1992.)

Several pieces of evidence have supported the view that the mesocorticolimbic pathway of the mesotelencephalic dopamine system plays an important role in mediating intracranial self-stimulation. The following are four of them:

• Many of the brain sites at which self-stimulation occurs are part of the mesotelencephalic dopamine system.

• Intracranial self-stimulation is often associated with an increase in dopamine release in the mesocorticolimbic pathway (Hernandez et al., 2006). See Figure 15.9.

• Dopamine agonists tend to increase intracranial self-stimulation, and dopamine antagonists tend to decrease it.

• Lesions of the mesocorticolimbic pathway tend to disrupt intracranial self-stimulation.

15.6 Early Studies of Brain Mechanisms of Addiction: Dopamine

The positive-incentive value of drug taking had been implicated in addiction, and the experience of pleasure had been linked to the mesocorticolimbic pathway. It was natural, therefore, that the first sustained efforts to discover the neural mechanisms of drug addiction should focus on the mesocorticolimbic pathway.

In considering the neural mechanisms of drug addiction, it is important to appreciate that specific brain mechanisms could not possibly have evolved for the purpose of mediating addiction—drug addiction is not adaptive. Thus, the key to understanding the neural mechanisms of addiction lies in understanding natural motivational mechanisms and how they are co-opted and warped by addictive drugs (Nesse & Berridge, 1997).

Thinking Creatively

Evolutionary Perspective

Two Key Methods for Measuring Drug-Produced Reinforcement in Laboratory Animals

Most of the research on the neural mechanisms of addiction has been conducted in nonhumans. Because of the presumed role of the positive-incentive value of drugs in addiction, methods used to measure the rewarding effects of drugs in the nonhuman subjects have played a key role in this research. Two such methods have played particularly important roles: the drug self-administration paradigm and the conditioned place-preference paradigm (see Aguilar, Rodríguez-Arias, & Miñarro, 2008; Sanchis-Segura & Spanagel, 2006). They are illustrated in Figure 15.10.

In the drug self-administration paradigm, laboratory rats or primates press a lever to inject drugs into themselves through implanted cannulas (thin tubes). They readily learn to self-administer intravenous injections of drugs to which humans become addicted. Furthermore, once they have learned to self-administer an addictive drug, their drug taking often mimics in major respects the drug taking of human addicts (Deroche-Gamonet, Belin, & Piazza, 2004; Vanderschuren & Everitt, 2004; Robinson, 2004). Studies in which microinjections have been self-administered directly into particular brain structures have proved particularly enlightening.

In the conditioned place-preference paradigm, rats repeatedly receive a drug in one compartment (the drug compartment) of a two-compartment box. Then, during the test phase, the drug-free rat is placed in the box, and the proportion of time it spends in the drug compartment, as opposed to the equal-sized but distinctive control compartment, is measured. Rats usually prefer the drug compartment over the control compartment when the drug compartment has been associated with the effects of drugs to which humans become addicted. The main advantage of the conditioned place-preference paradigm is that the subjects are tested while they are drug-free, which means that the measure of the incentive value of a drug is not confounded by other effects the drug might have on behavior.
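Because the dependent measure in the conditioned place-preference paradigm is simply the proportion of test time spent in the drug-paired compartment, it is easy to sketch. The function and the example numbers below are my own illustration, not a standard analysis package.

def place_preference_score(seconds_in_drug_side, seconds_in_control_side):
    """Proportion of the drug-free test spent in the drug-paired compartment."""
    total = seconds_in_drug_side + seconds_in_control_side
    return seconds_in_drug_side / total if total > 0 else 0.0

# Hypothetical 900-second test session
score = place_preference_score(seconds_in_drug_side=610, seconds_in_control_side=290)
print(f"Proportion of time in drug-paired compartment: {score:.2f}")
# Scores reliably above 0.5 across animals indicate a conditioned place preference.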

FIGURE 15.10 Two behavioral paradigms that are used extensively in the study of the neural mechanisms of addiction: the drug self-administration paradigm and the conditioned place-preference paradigm.

Early Evidence of the Involvement of Dopamine in Drug Addiction

In the 1970s, following much research on the role of dopamine in intracranial self-stimulation, experiments began to implicate dopamine in the rewarding effects of natural reinforcers and addictive drugs. For example, in rats, dopamine antagonists blocked the self-administration of, or the conditioned preference for, several different addictive drugs; and they reduced the reinforcing effects of food. These findings suggested that dopamine signaled something akin to reward value or pleasure.

Evolutionary Perspective

The Nucleus Accumbens and Drug Addiction

Once evidence had accumulated linking dopamine to natural reinforcers and drug-induced reward, investigators began to explore particular sites in the mesocorticolimbic dopamine pathway by conducting experiments on laboratory animals. Their findings soon focused attention on the nucleus accumbens. Events occurring in the nucleus accumbens and dopaminergic input to it from the ventral tegmental area appeared to be most clearly related to the experience of reward and pleasure.

The following are four kinds of findings from research on laboratory animals that focused attention on the nucleus accumbens (see Deadwyler et al., 2004; Nestler, 2005; Pierce & Kumaresan, 2006):

Evolutionary Perspective

• Laboratory animals self-administered microinjections of addictive drugs (e.g., cocaine, amphetamine, and morphine) directly into the nucleus accumbens.

• Microinjections of addictive drugs into the nucleus accumbens produced a conditioned place preference for the compartment in which they were administered.

• Lesions to either the nucleus accumbens or the ventral tegmental area blocked the self-administration of drugs into general circulation or the development of drug-associated conditioned place preferences.

• Both the self-administration of addictive drugs and the experience of natural reinforcers were found to be associated with elevated levels of extracellular dopamine in the nucleus accumbens.

Support for the Involvement of Dopamine in Addiction: Evidence from Imaging Human Brains

With the development of brain-imaging techniques for measuring dopamine in human brains, considerable evidence began to emerge that dopamine is involved in human reward in general and human addiction in particular (see O’Doherty, 2004; Volkow et al., 2004). One of the strongest of the early brain-imaging studies linking dopamine to addiction was published by Volkow and colleagues (1997). They administered various doses of radioactively labeled cocaine to addicts and asked the addicts to rate the resulting “high.” They also used positron emission tomography (PET) to measure the degree to which the labeled cocaine bound to dopamine transporters. As you learned earlier in this chapter, cocaine has its agonistic effects on dopamine by binding to these transporters, blocking reuptake, and thus increasing extracellular dopamine levels. The intensity of the “highs” experienced by the addicts was correlated with the degree to which cocaine bound to the dopamine transporters—no high at all was experienced unless the drug bound to at least 50% of the dopamine transporters.
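The occupancy finding can be pictured with a simple threshold model: no subjective “high” until roughly half the dopamine transporters are occupied, with rated intensity rising beyond that point. The linear form and the 0–10 rating scale below are assumptions for illustration only, not the relationship fitted by Volkow and colleagues.

def rated_high(occupancy_percent, threshold=50.0, max_rating=10.0):
    """Toy threshold model: zero below the occupancy threshold, rising linearly above it."""
    if occupancy_percent <= threshold:
        return 0.0
    return min(max_rating,
               (occupancy_percent - threshold) / (100.0 - threshold) * max_rating)

for occupancy in (30, 50, 60, 75, 90):
    print(f"{occupancy:3d}% transporter occupancy -> rated high about {rated_high(occupancy):.1f}/10")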

Brain-imaging studies have also indicated that the nucleus accumbens plays an important role in mediating the rewarding effects of addictive behavior. For example, in one study, healthy (i.e., nonaddicted) human subjects were given an IV injection of amphetamine (Drevets et al., 2001). As dopamine levels in the nucleus accumbens increased in response to the amphetamine injection, the subjects reported a parallel increase in their experience of euphoria.

In general, brain-imaging studies have shown that dopamine function is markedly diminished in human addicts—see Figure 15.11 (Volkow et al., 2009). However, when addicts are exposed to their drug or to stimuli associated with their drug, the nucleus accumbens and some of the other parts of the mesocorticolimbic dopamine pathway tend to become hyperactive.

Dopamine Release in the Nucleus Accumbens: What Is Its Function?

As you have just learned, substantial evidence links dopamine release, particularly in the nucleus accumbens, to the rewarding effects of addictive drugs and other reinforcers (see Kelley, 2004; Nestler & Malenka, 2004). But, reward is a complex process, with many different psychological components (see Berridge & Robinson, 2003): What exactly is the role in reward of dopamine release in the nucleus accumbens?

Several studies have found increases in extracellular dopamine levels in the nucleus accumbens following the presentation of a natural reward (e.g., food), rewarding brain stimulation (Hernandez et al., 2007), or an addictive drug (see Joseph, Datla, & Young, 2003; Ungless, 2004). Even stronger evidence for the idea that increased dopamine levels in the nucleus accumbens are related to the experience of reward came from the finding that ventral tegmental neurons, which release their dopamine into the nucleus accumbens, fire in response to a stimulus at a rate proportional to its reward value. Other studies have suggested that dopamine released in the nucleus accumbens is related to the expectation of reward, rather than to its experience. For example, some studies have shown that neutral stimuli that signal the impending delivery of a reward (e.g., food or an addictive drug) can trigger dopamine release in the nucleus accumbens (e.g., Fiorino, Coury, & Phillips, 1997; Weiss et al., 2000).

FIGURE 15.11 Chronic use of cocaine and methamphetamine reduces binding of radioactive tracers to D2 receptors in the striatum. (From Volkow et al., 2009.)

A third theory about dopamine release in the nucleus accumbens encompasses and extends the experience-of-reward and expectation-of-reward theories (Caplin & Dean, 2008). The theory was proposed by Tobler, Fiorillo, and Schultz (2005), who found that the firing of dopaminergic neurons with their cell bodies in the ventral tegmental area depended on the relation between the reward that was expected and the reward that was actually received. When the expected reward was delivered, there was no change in firing rate; when a greater-than-expected reward was delivered, firing increased; and when a less-than-expected reward was delivered, firing decreased. Thus, dopamine release in the nucleus accumbens reflected both the experience and expectation of reward, but not in a straightforward fashion: It seemed to reflect discrepancies between expected and actual rewards (see Fiorillo, Newsome, & Schultz, 2008; Schultz, 2007).
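The firing pattern just described (no change when the reward matches expectation, an increase when it is better than expected, a decrease when it is worse) is the signature of a reward-prediction-error signal. The sketch below computes that signal for three hypothetical cases; the numerical values are arbitrary and serve only to reproduce the pattern in the text.

def reward_prediction_error(actual_reward, expected_reward):
    """Difference between the reward delivered and the reward expected."""
    return actual_reward - expected_reward

cases = {
    "reward as expected":   (1.0, 1.0),   # firing unchanged
    "better than expected": (2.0, 1.0),   # firing increases
    "worse than expected":  (0.5, 1.0),   # firing decreases
}

for label, (actual, expected) in cases.items():
    delta = reward_prediction_error(actual, expected)
    print(f"{label}: prediction error = {delta:+.1f}")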

15.7 Current Approaches to Brain Mechanisms of Addiction

The previous three sections of the chapter have brought us from the beginnings of research on the brain mechanisms of addiction to current research, which will be discussed in this section. Figure 15.12 summarizes the major shifts in thinking about the brain mechanisms of addiction that have occurred over time.

Figure 15.12 shows that two lines of thinking about the brain mechanisms of addiction both had their origins in classic research on drug tolerance and physical dependence. One line developed into physical-dependence theories of addiction, which, though appealing in their simplicity, proved to be inconsistent with the evidence; these inconsistencies led to the emergence of positive-incentive theories. In turn, the positive-incentive approach to addiction, in combination with research on dopamine and pleasure centers in the brain, led to a focus on the mesocorticolimbic pathway and the mechanisms of reward. The second line of thinking about the brain mechanisms of addiction also began with early research on drug tolerance and physical dependence. This line moved ahead with the discovery that drug-associated cues come to elicit conditioned compensatory responses through a Pavlovian conditioning mechanism and that these conditioned responses are largely responsible for drug tolerance. This finding gained further prominence when researchers discovered that conditioned responses elicited by drug-associated cues were major factors in drug craving and relapse.

These two lines of research together have shaped modern thinking about the brain mechanisms of addiction, but this is not the end of the story (see Koob, 2006). In this, the final section of the chapter, you will learn about issues that are the focus of current research on the brain mechanisms of addiction and about new areas of the brain that have been linked to drug addiction.

Current Issues in Addiction Research

The early decades of research on addiction and its neural mechanisms clarified a number of issues about addiction but raised others that are the focus of current research. The following are four of these issues:

Addiction Is Psychologically Complex

FIGURE 15.12 Historic influences that shaped current thinking about the brain mechanisms of addiction.

Studies of addicted patients have found that drug addicts differ psychologically from healthy controls in a variety of ways. Drug addicts have been found to make poor decisions, to engage in excessive risk taking, and to have deficits in self-control (see Baler & Volkow, 2006; Diekhof, Falkai, & Gruber, 2008; Yucel & Lubman, 2007). There is a growing appreciation for the fact that efforts to develop theories of drug addiction and effective treatments for it must take these psychological differences into account.

Addiction Is a Disturbance of Decision Making

There has been an increasing appreciation that the primary symptom of addiction is a disturbance of decision making: Why do addicts decide to engage in harmful behaviors? This has had two beneficial effects: It has led investigators who study drug addiction to consider research on decision making from other fields (e.g., economics and social psychology), and it has led investigators in these other fields to consider research on drug addiction (e.g., Cardinal & Everitt, 2004; Lieberman & Eisenberger, 2009; Sanfey, 2007).

Addiction Is Not Limited to Drugs

There has been a growing consensus that drug addiction is a specific expression of a more general problem and that other behaviors exhibit the defining feature of drug addiction: the inability to refrain from a behavior despite its adverse effects. A lot of attention has recently been paid to overeating as an addiction because of its major adverse health consequences (e.g., Di Chiara & Bassareo, 2007; Trinko et al., 2007), but compulsive gambling, compulsive sexual behavior, kleptomania (compulsive shoplifting), and compulsive shopping also seem to share some brain mechanisms with drug addiction (see Grant, Brewer, & Potenza, 2006; Tanabe et al., 2007).

Addiction Involves Many Neurotransmitters

The evidence implicating dopamine in addiction is diverse and substantial, but it is not possible for any complex behavior to be the product of a single neurotransmitter. Some evidence has pointed to a role for glutamate in addiction (Kalivas, 2004)—of particular interest are prefrontal glutamatergic neurons that project into the nucleus accumbens. Also of interest to researchers are endogenous opioids, norepinephrine, GABA, and endocannabinoids (see Koob, 2006; Weinshenker & Schroeder, 2007).

Brain Structures That Mediate Addiction: The Current View

Although researchers do not totally agree about the neural mechanisms of drug addiction, there seems to be an emerging consensus that areas of the brain other than the nucleus accumbens are involved in its three stages (see Everitt & Robbins, 2005; Koob, 2006): (1) initial drug taking, (2) the change to craving and compulsive drug taking, and (3) relapse.

Initial Drug Taking

Initial taking of potentially addictive drugs is thought to be mediated in much the same way as any pleasurable activity, with the mesocorticolimbic pathway—in particular, the nucleus accumbens—playing a key role. But the nucleus accumbens does not act alone; its interactions with three other areas of the brain are thought to be important. The prefrontal lobes are thought to play a major role in the decision to take a drug (Grace et al., 2007); the hippocampus and related structures are assumed to provide information about previous relevant experiences; and the amygdala is thought to coordinate the positive or negative emotional reactions to the drug taking.

Change to Craving and Compulsive Drug Taking

The repeated consumption of an addictive drug brings major changes in the motivation of the developing addict. Drug taking develops into a habit and then into a compulsion; that is, despite its numerous adverse effects, drug taking starts to dominate the addict’s life. Earlier in this chapter, you learned that this change has been described as an increase in the positive-incentive value of taking the drug that occurs in the absence of any increase in its hedonic effects. It is not yet known why this change occurs and why it occurs in some drug takers but not in others. The change may be a direct neural response to the repeated experience of drug-induced pleasure; it could be a product of the myriad conditioned responses to drug-associated cues (Cardinal & Everitt, 2004; Kenny, 2007); or, more likely, it could be a product of both of these influences.

Several changes in the brain’s responses appear to contribute to the development of addiction. First, there is a change in how the striatum reacts to drugs and drug-associated cues. As addiction develops, striatal control of addiction spreads from the nucleus accumbens (i.e., the ventral striatum) to the dorsal striatum, an area that is known to play a role in habit formation and retention (see Chapter 11). Also, at the same time, the role of the prefrontal cortex in controlling drug-related behaviors apparently declines, and stress circuits in the hypothalamus (see Chapter 17) begin to interact with the dorsal striatum. In essence, the development of addiction is a pathological neuroplastic response that some people show with repeated drug taking (see Kalivas, 2005; Koob & Le Moal, 2005).

Neuroplasticity

Relapse

As you learned in an earlier section of this chapter, three factors are known to trigger relapse in abstinent addicts: priming doses of the drug, drug-associated cues, and stress. Each cause of relapse appears to be mediated by the interaction of a different brain structure with the striatum. Evidence from research on drug self-administration in laboratory animals suggests that the prefrontal cortex mediates priming-induced relapse, the amygdala mediates conditioned cue-induced relapse, and the hypothalamus mediates stress-induced relapse.

15.8 A Noteworthy Case of Addiction

To illustrate in a more personal way some of the things you have learned about addiction, this chapter concludes with a case study of one addict. The addict was Sigmund Freud, a man of great significance to psychology.

Freud’s case is particularly important for two reasons. First, it shows that nobody, no matter how powerful their intellect, is immune to the addictive effects of drugs. Second, it allows comparisons between the two drugs of addiction with which Freud had problems.

 Watch

Dr. Freud and the Self-Administration Paradigm

www.mypsychlab.com

The Case of Sigmund Freud

In 1883, a German army physician prescribed cocaine, which had recently been isolated, to Bavarian soldiers to help them deal with the demands of military maneuvers. When Freud read about this, he decided to procure some of the drug.

Clinical Implications

In addition to taking cocaine himself, Freud pressed it on his friends and associates, both for themselves and for their patients. He even sent some to his fiancée. In short, by today’s standards, Freud was a public menace.

Freud’s famous essay “Song of Praise” was about cocaine and was published in July 1884. Freud wrote in such glowing terms about his own personal experiences with cocaine that he created a wave of interest in the drug. But within a year, there was a critical reaction to Freud’s premature advocacy of the drug. As evidence accumulated that cocaine was highly addictive and produced a psychosis-like state at high doses, so too did published criticisms of Freud.

Freud continued to praise cocaine until the summer of 1887, but soon thereafter he suddenly stopped all use of cocaine—both personally and professionally. Despite the fact that he had used cocaine for 3 years, he seems to have had no difficulty stopping.

Some 7 years later, in 1894, when Freud was 38, his physician and close friend ordered him to stop smoking because it was causing a heart arrhythmia. Freud was a heavy smoker; he smoked approximately 20 cigars per day.

Freud did stop smoking, but 7 weeks later he started again. On another occasion, Freud stopped for 14 months, but at the age of 58, he was still smoking 20 cigars a day—and still struggling against his addiction. He wrote to friends that smoking was adversely affecting his heart and making it difficult for him to work . . . yet he kept smoking.

In 1923, at the age of 67, Freud developed sores in his mouth. They were cancerous. When he was recovering from oral surgery, he wrote to a friend that smoking was the cause of his cancer … yet he kept smoking.

In addition to the cancer, Freud began to experience severe heart pains (tobacco angina) whenever he smoked … still he kept smoking.

At 73, Freud was hospitalized for his heart condition and stopped smoking. He made an immediate recovery. But 23 days later, he started to smoke again.

In 1936, at the age of 79, Freud was experiencing more heart trouble, and he had had 33 operations to deal with his recurring oral cancer. His jaw had been entirely removed and replaced by an artificial one. He was in constant pain, and he could swallow, chew, and talk only with difficulty … yet he kept smoking.

Freud died of cancer in 1939 (see Sheth, Bhagwate, & Sharma, 2005).

Themes Revisited

Two of this book’s themes—thinking creatively and clinical implications—received strong emphasis in this chapter because they are integral to its major objective: to sharpen your thinking about the effects of addiction on people’s health. You were repeatedly challenged to think about drug addiction in ways that may have been new to you but are more consistent with the evidence.

Thinking Creatively

Clinical Implications

The evolutionary perspective theme was also highlighted frequently in this chapter, largely because of the nature of biopsychological research into drug addiction. Because of the risks associated with the administration of addictive drugs and the direct manipulation of brain structures, the majority of biopsychological studies of drug addiction involve nonhumans—mostly rats and monkeys. Also, in studying the neural mechanisms of addiction, there is a need to maintain an evolutionary perspective. It is important not to lose sight of the fact that brain mechanisms did not evolve to support addiction; they evolved to serve natural adaptive functions and have somehow been co-opted by addictive drugs.

Evolutionary Perspective

Although the neuroplasticity theme pervades this chapter, the neuroplasticity tag appeared only once—where the text explains that the development of addiction is a pathological neuroplastic response. The main puzzle in research on addiction is how the brain of an occasional drug user is transformed into the brain of an addict.

Neuroplasticity

Think about It

1. There are many misconceptions about drug addiction. Describe three. What factors contribute to these misconceptions? In what ways is the evidence about drug addiction often misrepresented?

2. A doctor who had been a morphine user for many years was found dead of an overdose at a holiday resort. She appeared to have been in good health, and no foul play was suspected. Explain how conditioned tolerance may have contributed to her death.

3. If you had an opportunity to redraft current laws related to drug use in light of what you have learned in this chapter, what changes would you make? Do you think that all drugs, including nicotine and alcohol, should be illegal? Explain.

4. Speculate: How might recent advances in the study of the mesotelencephalic dopamine system eventually lead to effective treatments?

5. Does somebody you love use a hard drug such as nicotine or alcohol? What should you do?

6. One of my purposes in writing this chapter was to provide you with an alternative way of thinking about drug addiction, one that might benefit you. Imagine my dismay when I received an e-mail message suggesting that this chapter was making things worse for addicts. According to this message, discussion of addiction induces craving in addicts who have stopped taking drugs, thus encouraging them to recommence their drug taking. Discuss this point, and consider its implications for the design of antidrug campaigns.

Key Terms

Pharmacological

15.1 Basic Principles of Drug Action

Psychoactive drugs
Drug metabolism
Drug tolerance
Cross tolerance
Drug sensitization
Metabolic tolerance
Functional tolerance
Withdrawal syndrome
Physically dependent
Addicts

15.2 Role of Learning in Drug Tolerance

Contingent drug tolerance
Before-and-after design
Conditioned drug tolerance
Conditioned compensatory responses
Exteroceptive stimuli
Interoceptive stimuli

15.3 Five Commonly Abused Drugs

Nicotine
Smoker’s syndrome
Buerger’s disease
Teratogen
Depressant
Delirium tremens (DTs)
Korsakoff’s syndrome
Cirrhosis
Fetal alcohol syndrome (FAS)
Disulfiram
Cannabis sativa
THC
Hashish
Narcotic
Anandamide
Stimulants
Cocaine
Crack
Cocaine sprees
Cocaine psychosis
Amphetamine
Dopamine transporters
Opium
Morphine
Codeine
Opiates
Analgesics
Harrison Narcotics Act
Heroin

15.4 Biopsychological Approaches to Theories of Addiction

Physical-dependence theories of addiction
Detoxified addicts
Positive-incentive theories of addiction
Positive-incentive value
Hedonic value
Incentive-sensitization theory
Relapse
Drug priming

15.5 Intracranial Self-Stimulation and the Pleasure Centers of the Brain

Intracranial self-stimulation (ICSS)
Primed
Mesotelencephalic dopamine system
Substantia nigra
Ventral tegmental area
Nucleus accumbens

15.6 Early Studies of Brain Mechanisms of Addiction: Dopamine

Drug self-administration paradigm
Conditioned place-preference paradigm

 Quick Review

Test your comprehension of the chapter with this brief practice test. You can find the answers to these questions as well as more practice tests, activities, and other study resources at www.mypsychlab.com.

1. Tolerance to psychoactive drugs is largely

a. nonexistent.

b. metabolic.

c. functional.

d. sensitization.

e. cross tolerance.

2. Which drug is thought to lead to about 400,000 deaths each year in the United States alone?

a. heroin

b. cocaine

c. alcohol

d. nicotine

e. marijuana

3. Delirium tremens can be produced by withdrawal from

a. heroin.

b. morphine.

c. alcohol.

d. amphetamines.

e. both a and b

4. Animals that have been previously trained to press a lever to deliver rewarding electrical stimulation to their own brains will often not begin pressing unless they have been

a. primed.

b. extinguished.

c. fed.

d. frightened.

e. punished.

5. A method of measuring drug-produced reinforcement or pleasure in laboratory animals is the

a. drug self-administration paradigm.

b. conditioned place-preference paradigm.

c. conditioned tolerance paradigm.

d. all of the above

e. both a and b

(Pinel, 10/2010, p. 383)

Pinel, J. P. (10/2010). Biopsychology, 8th Edition [VitalSource Bookshelf version]. Retrieved from http://online.vitalsource.com/books/9781269533744

