Does the mere sight of the golden arches in front of McDonald's make you feel pangs of hunger and think about hamburgers? If it does, you are displaying an elementary form of learning called classical conditioning. Classical conditioning helps explain such diverse phenomena as crying at the sight of a bride walking down the aisle, fearing the dark, and falling in love.

Classical conditioning is one of a number of different types of learning that psychologists have identified, but a general definition encompasses them all: learning is a relatively permanent change in behavior that is brought about by experience.

We are primed for learning from the beginning of life. Infants exhibit a primitive type of learning called habituation. Habituation is the decrease in response to a stimulus that occurs after repeated presentations of the same stimulus. For example, young infants may initially show interest in a novel stimulus, such as a brightly colored toy, but they will soon lose interest if they see the same toy over and over. (Adults exhibit habituation, too: newlyweds soon stop noticing that they are wearing a wedding ring.) Habituation permits us to ignore things that have stopped providing new information.

Most learning is considerably more complex than habituation, and the study of learning has been at the core of the field of psychology. Although philosophers since the time of Aristotle have speculated on the foundations of learning, the first systematic research on learning was done at the beginning of the twentieth century, when Ivan Pavlov (does the name ring a bell?) developed the framework for learning called classical conditioning.

LO1 The Basics of Classical Conditioning

LEARNING OUTCOMES

15.1 Describe the basics of classical conditioning and how they relate to learning.

15.2 Give examples of applying conditioning principles to human behavior.

15.3 Explain extinction.

15.4 Discuss stimulus generalization and discrimination.

Learning A relatively permanent change in behavior brought about by experience.

In the early twentieth century, Ivan Pavlov, a famous Russian physiologist, had been studying the secretion of stomach acids and salivation in dogs in response to the ingestion of varying amounts and kinds of food. While doing that he observed a curious phenomenon: sometimes stomach secretions and salivation would begin in the dogs when they had not yet eaten any food. The mere sight of the experimenter who normally brought the food, or even the sound of the experimenter's footsteps, was enough to produce salivation in the dogs.


Pavlov's genius lay in his ability to recognize the implications of this discovery. He saw that the dogs were responding not only on the basis of a biological need (hunger), but also as a result of learning, or, as it came to be called, classical conditioning. Classical conditioning is a type of learning in which a neutral stimulus (such as the experimenter's footsteps) comes to elicit a response after being paired with a stimulus (such as food) that naturally brings about that response.

To demonstrate classical conditioning, Pavlov (1927) attached a tube to the salivary gland of a dog, allowing him to measure precisely the dog's salivation. He then rang a bell and, just a few seconds later, presented the dog with meat. This pairing occurred repeatedly and was carefully planned so that, each time, exactly the same amount of time elapsed between the presentation of the bell and the meat. At first the dog would salivate only when the meat was presented, but soon it began to salivate at the sound of the bell. In fact, even when Pavlov stopped presenting the meat, the dog still salivated after hearing the sound. The dog had been classically conditioned to salivate to the bell.

As you can see in Figure 1, the basic processes of classical conditioning that underlie Pavlov's discovery are straightforward, although the terminology he chose is not simple. Consider first the diagram in Figure 1A. Before conditioning, there are two unrelated stimuli: the ringing of a bell and meat. We know that normally the ringing of a bell does not lead to salivation but to some irrelevant response, such as pricking up the ears or perhaps a startle reaction. The bell is therefore called the neutral stimulus because it is a stimulus that, before conditioning, does not naturally bring about the response in which we are interested. We also have meat, which naturally causes a dog to salivate (the response we are interested in conditioning). The meat is considered an unconditioned stimulus, or UCS, because food placed in a dog's mouth automatically causes salivation to occur. The response that the meat elicits (salivation) is called an unconditioned response, or UCR: a natural, innate, reflexive response that is not associated with previous learning. Unconditioned responses are always brought about by the presence of unconditioned stimuli.

Figure 1B illustrates what happens during conditioning. The bell is rung just before each presentation of the meat. The goal of conditioning is for the dog to associate the bell with the unconditioned stimulus (meat) and therefore to bring about the same sort of response as the unconditioned stimulus. After a number of pairings of the bell and meat, the bell alone causes the dog to salivate.

Ivan Pavlov (center) developed the principles of classical conditioning.

Classical conditioning A type of learning in which a neutral stimulus comes to bring about a response after it is paired with a stimulus that naturally brings about that response.

Neutral stimulus A stimulus that, before conditioning, does not naturally bring about the response of interest.

Unconditioned stimulus (UCS) A stimulus that naturally brings about a particular response without having been learned.

Unconditioned response (UCR) A response that is natural and needs no training (e.g., salivation at the smell of food).

STUDY ALERT Figure 1 can help you learn and understand the process (and terminology) of classical conditioning, which can be confusing.


FIGURE 1 The basic process of classical conditioning (panels: Before Conditioning, During Conditioning, After Conditioning; elements: sound of bell, meat, salivation). (A) Before conditioning, the ringing of a bell does not bring about salivation, making the bell a neutral stimulus. In contrast, meat naturally brings about salivation, making the meat an unconditioned stimulus and salivation an unconditioned response. (B) During conditioning, the bell is rung just before the presentation of the meat. (C) Eventually, the ringing of the bell alone brings about salivation. We now can say that conditioning has been accomplished: the previously neutral stimulus of the bell now is a conditioned stimulus that brings about the conditioned response of salivation.


When conditioning is complete, the bell has evolved from a neutral stimulus to what is now called a conditioned stimulus, or CS. At this time, salivation that occurs as a response to the conditioned stimulus (bell) is considered a conditioned response, or CR. This situation is depicted in Figure 1C. After conditioning, then, the conditioned stimulus evokes the conditioned response.

The sequence and timing of the presentation of the unconditioned stimulus and the conditioned stimulus are particularly important. Like a malfunctioning warning light at a railroad crossing that goes on after the train has passed by, a neutral stimulus that follows an unconditioned stimulus has little chance of becoming a conditioned stimulus. However, just as a warning light works best if it goes on right before a train passes, a neutral stimulus that is presented just before the unconditioned stimulus is most apt to result in successful conditioning (Bitterman, 2006).
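To make the pairing process concrete, here is a minimal Python sketch. It is not from the text: it assumes a toy associative-strength model (loosely in the spirit of the Rescorla-Wagner learning rule, which this module does not cover) in which each bell-meat pairing nudges the strength of the bell-salivation link toward a ceiling, and we simply declare that the dog "salivates to the bell" once that strength passes an arbitrary threshold. The function name, learning rate, and threshold are all illustrative assumptions.

```python
# Illustrative sketch only: a toy associative-strength model of acquisition.
# The update rule and all parameter values are assumptions for demonstration,
# not part of Pavlov's procedure as described in the text.

def run_pairings(num_trials, learning_rate=0.3, max_strength=1.0):
    """Simulate repeated bell (CS) + meat (UCS) pairings."""
    strength = 0.0  # associative strength of bell -> salivation
    history = []
    for trial in range(1, num_trials + 1):
        # Each pairing moves strength a fraction of the way toward its maximum.
        strength += learning_rate * (max_strength - strength)
        history.append((trial, strength))
    return history

THRESHOLD = 0.5  # arbitrary cutoff for "the bell alone now elicits salivation"

for trial, strength in run_pairings(10):
    responds = "salivates to bell" if strength >= THRESHOLD else "no CR yet"
    print(f"pairing {trial:2d}: strength={strength:.3f} -> {responds}")
```

Under these made-up numbers the simulated dog begins responding to the bell alone after only a couple of pairings, echoing how quickly Pavlov's dogs came to salivate at the sound.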

Although the terminology Pavlov used to describe classical conditioning may seem confusing, the following summary can help make the relationships between stimuli and responses easier to understand and remember:

• Conditioned = learned. Unconditioned = not learned.

• An unconditioned stimulus leads to an unconditioned response. Unconditioned stimulus-unconditioned response pairings are unlearned and untrained. During conditioning, a previously neutral stimulus is transformed into the conditioned stimulus. A conditioned stimulus leads to a conditioned response, and a conditioned stimulus-conditioned response pairing is a consequence of learning and training.

• An unconditioned response and a conditioned response are similar (such as salivation in Pavlov's experiment), but the unconditioned response occurs naturally, whereas the conditioned response is learned.

LO2 Applying Conditioning Principles to Human Behavior

Conditioned stimulus (CS) A once-neutral stimulus that has been paired with an unconditioned stimulus to bring about a response formerly caused only by the unconditioned stimulus.

Conditioned response (CR) A response that, after conditioning, follows a previously neutral stimulus (e.g., salivation at the ringing of a bell).

Although the initial conditioning experiments were carried out with animals, classical conditioning principles were soon found to explain many aspects of everyday human behavior. Recall, for instance, the earlier illustration of how people may experience hunger pangs at the sight of McDonald's golden arches. The cause of this reaction is classical conditioning: the previously neutral arches have become associated with the food inside the restaurant (the unconditioned stimulus), causing the arches to become a conditioned stimulus that brings about the conditioned response of hunger.

Emotional responses are especially likely to be learned through classical conditioning processes. For instance, how do some of us develop fears of mice, spiders, and other creatures that are typically harmless? In a now infamous case study, psychologist John B. Watson and colleague Rosalie Rayner (1920) showed that classical conditioning was at the root of such fears by conditioning an 11-month-old infant named Albert to be afraid of rats. "Little Albert," like most infants, initially was frightened by loud noises but had no fear of rats.

In the study, the experimenters sounded a loud noise just as they showed Little Albert a rat. The noise (the unconditioned stimulus) evoked fear (the unconditioned response). However, after just a few pairings of noise and rat, Albert began to show fear of the rat by itself, bursting into tears when he saw it. The rat, then, had become a CS that brought about the CR, fear. Furthermore, the effects of the conditioning lingered: five days later, Albert reacted with fear not only when shown a rat, but when shown objects that looked similar to the white, furry rat, including a white rabbit, a white sealskin coat, and even a white Santa Claus mask. (By the way, we don't know what happened to the unfortunate Little Albert. Watson, the experimenter, has been condemned for using ethically questionable procedures that could never be conducted today.)

Learning by means of classical conditioning also occurs during adulthood. For example, you may not go to a dentist as often as you should because of prior associations of dentists with pain. On the other hand, classical conditioning also accounts for pleasant experiences. For instance, you may have a particular fondness for the smell of a certain perfume or aftershave lotion because the feelings and thoughts of an early love come rushing back whenever you encounter it. Classical conditioning, then, explains many of the reactions we have to stimuli in the world around us.

Emotional responses are especially likely to be learned through classical conditioning processes.

LO3 Extinction

What do you think would happen if a dog that had become classically conditioned to salivate at the ringing of a bell never again received food when the bell was rung? The answer lies in one of the basic phenomena of learning: extinction. Extinction occurs when a previously conditioned response decreases in frequency and eventually disappears.

Extinction A basic phenomenon of learning that occurs when a previously conditioned response decreases in frequency and eventually disappears.

To produce extinction, one needs to end the association between conditioned stimuli and unconditioned stimuli. For instance, if we had trained a dog to salivate (the conditioned response) at the ringing of a bell (the conditioned stimulus), we could produce extinction by repeatedly ringing the bell but not providing meat. At first the dog would continue to salivate when it heard the bell, but after a few such instances, the amount of salivation would probably decline, and the dog would eventually stop responding to the bell altogether. At that point, we could say that the response had been extinguished. In sum, extinction occurs when the conditioned stimulus is presented repeatedly without the unconditioned stimulus (see Figure 2).

FIGURE 2 Acquisition, extinction, and spontaneous recovery of a classically conditioned response (strength of the conditioned response plotted against time). A conditioned response (CR) gradually increases in strength during training (A). However, if the conditioned stimulus is presented by itself enough times, the conditioned response gradually fades, and extinction occurs (B). After a pause (C) in which the conditioned stimulus is not presented, spontaneous recovery can occur (D). However, extinction typically reoccurs soon after.

Once a conditioned response has been extinguished, has it vanished forever? Not necessarily. Pavlov discovered this phenomenon when he returned to his dog a few days after the conditioned behavior had seemingly been extinguished. If he rang a bell, the dog once again salivated, an effect known as spontaneous recovery, or the reemergence of an extinguished conditioned response after a period of rest and with no further conditioning.

Spontaneous recovery The reemergence of an extinguished conditioned response after a period of rest and with no further conditioning.

Spontaneous recovery helps explain why it is so hard to overcome drug addictions. For example, cocaine addicts who are thought to be "cured" can experience an irresistible impulse to use the drug again if they are subsequently confronted by a stimulus with strong connections to the drug, such as a white powder (Di Ciano & Everitt, 2002; Rodd et al., 2004; Plowright, Simonds, & Butler, 2006).

Once a conditioned response has been extinguished, has it vanished forever?

Not necessarily.

From the perspective of ... A VETERINARY ASSISTANT: How might knowledge of classical conditioning be useful in your career?


LO4 Generalization and Discrimination

Despite differences in color and shape, to most of us a rose is a rose is a rose. The pleasure we experience at the beauty, smell, and grace of the flower is similar for different types of roses. Pavlov noticed a similar phenomenon. His dogs often salivated not only at the ringing of the bell that was used during their original conditioning but at the sound of a buzzer as well.

Such behavior is the result of stimulus generalization. Stimulus generalization occurs when a conditioned response follows a stimulus that is similar to the original conditioned stimulus. The greater the similarity between two stimuli, the greater the likelihood of stimulus generalization. Little Albert, who, as we mentioned earlier, was conditioned to be fearful of white rats, grew afraid of other furry white things as well. However, according to the principle of stimulus generalization, it is unlikely that he would have been afraid of a black dog, because its color would have differentiated it sufficiently from the original fear-evoking stimulus.

On the other hand, stimulus discrimination occurs if two stimuli are sufficiently distinct from each other that one evokes a conditioned response but the other does not. Stimulus discrimination provides the ability to differentiate between stimuli. For example, my dog, Cleo, comes running into the kitchen when she hears the sound of the electric can opener, which she has learned is used to open her dog food when her dinner is about to be served. She does not bound into the kitchen at the sound of the food processor, although it sounds similar. In other words, she discriminates between the stimuli of can opener and food processor. Similarly, our ability to discriminate between the behavior of a growling dog and that of one whose tail is wagging can lead to adaptive behavior: avoiding the growling dog and petting the friendly one.
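One way to picture the relationship between generalization and discrimination is as a similarity gradient. The Python sketch below is an invented illustration keyed to the Little Albert example: each test stimulus gets a made-up similarity score relative to the original white rat, response strength scales with similarity (generalization), and stimuli below a cutoff evoke no response at all (discrimination). None of the numbers come from the study.

```python
# Illustrative sketch only: stimulus generalization as a similarity gradient.
# The similarity scores and cutoff below are invented for demonstration.

CS_STRENGTH = 1.0             # strength of the conditioned fear response to the original CS
DISCRIMINATION_CUTOFF = 0.4   # below this similarity, no conditioned response occurs

# Hypothetical similarity of each test stimulus to the original white rat (0..1).
similarity_to_white_rat = {
    "white rat (original CS)": 1.00,
    "white rabbit": 0.80,
    "white fur coat": 0.65,
    "white Santa Claus mask": 0.50,
    "black dog": 0.15,
}

for stimulus, similarity in similarity_to_white_rat.items():
    # Generalization: the more similar the stimulus, the stronger the response.
    response = CS_STRENGTH * similarity
    # Discrimination: sufficiently different stimuli evoke no response at all.
    if similarity >= DISCRIMINATION_CUTOFF:
        label = f"fear response ~ {response:.2f}"
    else:
        label = "no fear (discriminated)"
    print(f"{stimulus:28s} similarity={similarity:.2f} -> {label}")
```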

STUDY ALERT Remember that stimulus generalization relates to stimuli that are similar to one another, while stimulus discrimination relates to stimuli that are different from one another.

The greater the similarity between two stimuli, the greater the likelihood of stimulus generalization.

Stimulus generalization Occurs when a conditioned response follows a stimulus that is similar to the original conditioned stimulus; the more similar the two stimuli are, the more likely generalization is to occur.

Stimulus discrimination The process that occurs if two stimuli are sufficiently distinct from each other that one evokes a conditioned response but the other does not; the ability to differentiate between stimuli.

Because of a previous unpleasant experience, a person may expect a similar occurrence when faced with a comparable situation in the future, a process known as stimulus generalization. Can you think of ways this process is used in everyday life?


Operant Conditioning

LEARNING OUTCOMES

16.1 Define the basics of operant conditioning.

16.2 Explain reinforcers and punishment.

16.3 Present the pros and cons of punishment.

16.4 Discuss schedules of reinforcement.

16.5 Explain the concept of shaping.

Operant conditioning Learning in which a voluntary response is strengthened or weakened, depending on its favorable or unfavorable consequences.

Very good ... What a clever idea ... Fantastic ... I agree ... Thank you ... Excellent ... Super ... Right on ... This is the best paper you've ever written; you get an A ... You are really getting the hang of it ... I'm impressed ... You're getting a raise ... Have a cookie ... You look great ... I love you ...

Few of us mind being the recipient of any of the preceding comments. But what is especially noteworthy about them is that each of these simple statements can be used, through a process known as operant conditioning, to bring about powerful changes in behavior and to teach the most complex tasks. Operant conditioning is the basis for many of the most important kinds of human, and animal, learning.

Operant conditioning is learning in which a voluntary response is strengthened or weakened, depending on its favorable or unfavorable consequences. When we say that a response has been strengthened or weakened, we mean that it has been made more or less likely to recur regularly.

Unlike classical conditioning, in which the original behaviors are the natural, biological responses to the presence of a stimulus such as food, water, or pain, operant conditioning applies to voluntary responses, which an organism performs deliberately to produce a desirable outcome. The term operant emphasizes this point: the organism operates on its environment to produce a desirable result. Operant conditioning is at work when we learn that toiling industriously can bring about a raise or that exercising hard results in a good physique.

LO1 The Basics of Operant Conditioning

The inspiration for a whole generation of psychologists studying operant conditioning was one of the twentieth century's most influential psychologists, B. F. Skinner (1904-1990). Skinner was interested in specifying how behavior varies as a result of alterations in the environment.

Skinner conducted his research using an apparatus called the Skinner box (shown in Figure 1), a chamber with a highly controlled environment that was used to study operant conditioning processes with laboratory animals. Let's consider what happens to a rat in the typical Skinner box (Pascual & Rodriguez, 2006).

Suppose you want to teach a hungry rat to press a lever that is in its box. At first the rat will wander around the box, exploring the environment in a relatively random fashion. At some point, however, it will probably press the lever by chance, and when it does, it will receive a food pellet. The first time this happens, the rat will not learn the connection between pressing a lever and receiving food and will continue to explore the box. Sooner or later the rat will press the lever again and receive a pellet, and in time the frequency of the pressing response will increase. Eventually, the rat will press the lever continually until it satisfies its hunger, thereby demonstrating that it has learned that the receipt of food is contingent on pressing the lever.

Reinforcement: The Central Concept of Operant Conditioning

FIGURE 1 B. F. Skinner with a Skinner box used to study operant conditioning. Laboratory rats learn to press the lever in order to obtain food, which is delivered in the tray. (Labeled parts include the response lever and the food dispenser.)

Reinforcement The process by which a stimulus increases the probability that a preceding behavior will be repeated.

Reinforcer Any stimulus that increases the probability that a preceding behavior will occur again.

Skinner called the process that leads the rat to continue pressing the lever "reinforcement." Reinforcement is the process by which a stimulus increases the probability that a preceding behavior will be repeated. In other words, pressing the lever is more likely to occur again because of the stimulus of food.

In a situation such as this one, the food is called a reinforcer. A reinforcer is any stimulus that increases the probability that a preceding behavior will occur again. Hence, food is a reinforcer because it increases the probability that the behavior of pressing (formally referred to as the response of pressing) will take place.

What kind of stimuli can act as reinforcers? Bonuses, toys, and good grades can serve as reinforcers-if they strengthen the probability of the response that occurred before their introduction.

There are two major types of reinforcers. A primary reinforcer satisfies some biological need and works naturally, regardless of a person's prior experience. Food for a hungry person, warmth for a cold person, and relief for a person in pain all would be classified as primary reinforcers. A secondary reinforcer, in contrast, is a stimulus that becomes reinforcing because of its association with a primary reinforcer. For instance, we know that money is valuable because we have learned that it allows us to obtain other desirable objects, including primary reinforcers such as food and shelter. Money thus becomes a secondary reinforcer.

Bonuses, toys, and good grades can serve as reinforcers-if they strengthen the probability of the response that occurred before their introduction.

LO2 Positive Reinforcers, Negative Reinforcers, and Punishment

In many respects, reinforcers can be thought of in terms of rewards; both a reinforcer and a reward increase the probability that a preceding response will occur again. But the term reward is limited to positive occurrences, and this is where it differs from a reinforcer, for it turns out that reinforcers can be positive or negative.

STUDY ALERT Remember that primary reinforcers satisfy a biological need; secondary reinforcers are effective due to previous association with a primary reinforcer.


From the perspective of ... A RETAIL SUPERVISOR: How might you use the principles of operant conditioning to change employee behavior involving tardiness, customer service, or store cleanliness?

Positive reinforcer A stimulus added to the environment that brings about an increase in a preceding response.

Negative reinforcer An unpleasant stimulus whose removal leads to an increase in the probability that a preceding response will be repeated in the future.

Punishment A stimulus that decreases the probability that a previous behavior will occur again.


A positive reinforcer is a stimulus added to the environment that brings about an increase in a preceding response. If food, water, money, or praise is provided after a response, it is more likely that that response will occur again in the future. The paychecks that workers get at the end of the week, for example, increase the likelihood that they will return to their jobs the following week.

In contrast, a negative reinforcer refers to an unpleasant stimulus whose removal leads to an increase in the probability that a preceding response will be repeated in the future. For example, if you have an itchy rash (an unpleasant stimulus) that is relieved when you apply a certain brand of ointment, you are more likely to use that ointment the next time you have an itchy rash. Using the ointment, then, is negatively reinforcing, because it removes the unpleasant itch. Negative reinforcement, then, teaches the individual that taking an action removes a negative condition that exists in the environment. Like positive reinforcers, negative reinforcers increase the likelihood that preceding behaviors will be repeated.

Note that negative reinforcement is not the same as punishment. Punishment refers to a stimulus that decreases the probability that a prior behavior will occur again. Unlike negative reinforcement, which produces an increase in behavior, punishment reduces the likelihood of a prior response. If we receive a shock that is meant to decrease a certain behavior, then, we are receiving punishment, but if we are already receiving a shock and do something to stop that shock, the behavior that stops the shock is considered to be negatively reinforced. In the first case, the specific behavior is apt to decrease because of the punishment; in the second, it is likely to increase because of the negative reinforcement.

There are two types of punishment: positive punishment and negative punishment, just as there are positive reinforcement and negative reinforcement. (In both cases, "positive" means adding something, and "negative" means removing something.) Positive punishment weakens a response through the application of an unpleasant stimulus. For instance, spanking a child for misbehaving, or spending 10 years in jail for committing a crime, is positive punishment. In contrast, negative punishment consists of the removal of something pleasant. For instance, when a teenager is told she is "grounded" and will no longer be able to use the family car because of her poor grades, or when an employee is informed that he has been demoted with a cut in pay because of a poor job evaluation, negative punishment is being administered. Both positive and negative punishment result in a decrease in the likelihood that a prior behavior will be repeated.

The following rules (and the summary in Figure 2) can help you distinguish these concepts from one another:


• Reinforcement increases the frequency of the behavior preceding it; punishment decreases the frequency of the behavior preceding it.

• The application of a positive stimulus brings about an increase in the frequency of behavior and is referred to as positive reinforcement; the application of a negative stimulus decreases or reduces the frequency of behavior and is called punishment. The removal of a negative stimulus that results in an increase in the frequency of behavior is negative reinforcement; the removal of a positive stimulus that decreases the frequency of behavior is negative punishment.

FIGURE 2 Types of reinforcement and punishment. Positive reinforcement (stimulus added, behavior increases): giving a raise for good performance; result, an increase in the response of good performance. Negative reinforcement (stimulus removed, behavior increases): applying ointment to relieve an itchy rash leads to a higher future likelihood of applying the ointment; result, an increase in the response of using ointment. Positive punishment (stimulus added, behavior decreases): yelling at a teenager when she steals a bracelet; result, a decrease in the frequency of the response of stealing. Negative punishment (stimulus removed, behavior decreases): a teenager's access to the car is restricted by her parents due to the teenager's breaking curfew; result, a decrease in the response of breaking curfew.
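These two rules boil down to a two-way classification: whether a stimulus is added or removed, and whether the preceding behavior becomes more or less frequent. The short Python sketch below simply restates that logic; the function name and example comments are ours, not from the text.

```python
# Restates the rules above: "positive" = stimulus added, "negative" = stimulus removed;
# reinforcement = behavior becomes more frequent, punishment = behavior becomes less frequent.

def classify(stimulus_is_added: bool, behavior_increases: bool) -> str:
    kind = "reinforcement" if behavior_increases else "punishment"
    sign = "positive" if stimulus_is_added else "negative"
    return f"{sign} {kind}"

# Examples drawn from the text:
print(classify(stimulus_is_added=True,  behavior_increases=True))   # raise for good work -> positive reinforcement
print(classify(stimulus_is_added=False, behavior_increases=True))   # ointment removes itch -> negative reinforcement
print(classify(stimulus_is_added=True,  behavior_increases=False))  # yelling at stealing -> positive punishment
print(classify(stimulus_is_added=False, behavior_increases=False))  # losing car privileges -> negative punishment
```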

STUDY ALERT The differences between positive reinforcement, negative reinforcement, positive punishment, and negative punishment are tricky, so pay special attention to Figure 2 and the rules in the text.

LO3 The Pros and Cons of Punishment: Why Reinforcement Beats Punishment

Is punishment an effective way to modify behavior? Punishment often presents the quickest route to changing behavior that, if allowed to continue, might be dangerous to an individual. For instance, a parent may not have a second chance to warn a child not to run into a busy street, and so punishing the first incidence of this behavior may prove to be wise. Moreover, the use of punishment to suppress behavior, even temporarily, provides an opportunity to reinforce a person for subsequently behaving in a more desirable way.

Punishment has several disadvantages that make its routine use questionable. For one thing, punishment is frequently ineffective, particularly if it is not delivered shortly after the undesired behavior or if the individual is able to leave the setting in which the punishment is being given. An employee who is reprimanded by the boss may quit; a teenager who loses the use of the family car may borrow a friend's car instead. In such instances, the initial behavior that is being punished may be replaced by one that is even less desirable.

Punishment has several disadvantages that make its routine use questionable.

Even worse, physical punishment can convey to the recipient the idea that physical aggression is permissible and perhaps even desirable. A father who yells at and hits his son for misbehaving teaches the son that aggression is an appropriate, adult response. The son soon may copy his father's behavior by acting aggressively toward others. In addition, physical punishment is often administered by people who are themselves angry or enraged. It is unlikely that individuals in such an emotional state will be able to think through what they are doing or control carefully the degree of punishment they are inflicting (Baumrind, Larzelere, & Cowan, 2002; Sorbring, Deater-Deckard, & Palmerus, 2006).

In short, the research findings are clear: reinforcing desired behavior is a more appropriate technique for modifying behavior than using punishment (Hiby, Rooney, & Bradshaw, 2004; Sidman, 2006).

Schedules of reinforcement Different patterns of frequency and timing of reinforcement following desired behavior.

Continuous reinforcement schedule Reinforcing of a behavior every time it occurs.

Partial (or intermittent) reinforcement schedule Reinforcing of a behavior some but not all of the time.

LO4 Schedules of Reinforcement: Timing Life's Rewards

The world would be a different place if poker players never played cards again after the first losing hand, fishermen returned to shore as soon as they missed a catch, or telemarketers never made another phone call after their first hang-up. The fact that such unreinforced behaviors continue, often with great frequency and persistence, illustrates that reinforcement need not be received continually for behavior to be learned and maintained. In fact, behavior that is reinforced only occasionally can ultimately be learned better than can behavior that is always reinforced.

When we refer to the frequency and timing of reinforcement that follows desired behavior, we are talking about schedules of reinforcement. Behavior that is reinforced every time it occurs is said to be on a continuous reinforcement schedule; if it is reinforced some but not all of the time, it is on a partial (or intermittent) reinforcement schedule. Although learning occurs more rapidly under a continuous reinforcement schedule, behavior lasts longer after reinforcement stops when it is learned under a partial reinforcement schedule (Staddon & Cerutti, 2003; Gottlieb, 2004; Casey, Cooper-Brown, & Wacher, 2006).

Why should intermittent reinforcement result in stronger, longer-lasting learning than continuous reinforcement? We can answer the question by examining how we might behave when using a candy vending machine compared with a Las Vegas slot machine. When we use a vending machine, prior experience has taught us that every time we put in the appropriate amount of money, the reinforcement, a candy bar, ought to be delivered. In other words, the schedule of reinforcement is continuous. In comparison, a slot machine offers intermittent reinforcement. We have learned that after putting in our cash, most of the time we will not receive anything in return. At the same time, though, we know that we will occasionally win something.

Now suppose that, unknown to us, both the candy vending machine and the slot machine are broken, and so neither one is able to dispense anything. It would not be very long before we stopped depositing coins into the broken candy machine. Probably at most we would try only two or three times before leaving the machine in disgust. But the story would be quite different with the broken slot machine. Here, we would drop in money for a considerably longer time, even though there would be no payoff. In formal terms, we can see the difference between the two reinforcement schedules: partial reinforcement schedules (such as those provided by slot machines) maintain performance longer than do continuous reinforcement schedules (such as those established in candy vending machines) before extinction, the disappearance of the conditioned response, occurs.

Certain kinds of partial reinforcement schedules produce stronger and lengthier responding before extinction than do others. Although many different partial reinforcement schedules have been examined, they can most readily be put into two categories: schedules that consider the number of responses made before reinforcement is given, called fixed-ratio and variable-ratio schedules, and those that consider the amount of time that elapses before reinforcement is provided, called fixed-interval and variable-interval schedules (Svartdal, 2003; Pellegrini et al., 2004; Gottlieb, 2006).

STUDY ALERT Remember that the different schedules of reinforcement affect the rapidity with which a response is learned and how long it lasts after reinforcement is no longer provided.

Fixed- and Variable-Ratio Schedules

In a fixed-ratio schedule, reinforcement is given only after a specific number of responses. For instance, a rat might receive a food pellet every 10th time it pressed a lever; here, the ratio would be 1:10. Similarly, garment workers are generally paid on fixed-ratio schedules: they receive a specific number of dollars for every blouse they sew. Because a greater rate of production means more reinforcement, people on fixed-ratio schedules are apt to work as quickly as possible (see Figure 3).

In a variable-ratio schedule, reinforcement occurs after a varying number of responses rather than after a fixed number. Although the specific number of responses necessary to receive reinforcement varies, the number of responses usually hovers around a specific average. A good example of a variable-ratio schedule is a telephone salesperson's job. She might make a sale during the third, eighth, ninth, and twentieth calls without being successful during any call in between. Although the number of responses that must be made before making a sale varies, it averages out to a 20 percent success rate. Under these circumstances, you might expect that the salesperson would try to make as many calls as possible in as short a time as possible. This is the case with all variable-ratio schedules, which lead to a high rate of response and resistance to extinction.

Fixed-ratio schedule A schedule by which reinforcement is given only after a specific number of responses are made.

Variable-ratio schedule A schedule by which reinforcement occurs after a varying number of responses rather than after a fixed number.

Fixed- and Variable-Interval Schedules: The Passage of Time

In contrast to fixed- and variable-ratio schedules, in which the crucial factor is the number of responses, fixed-interval and variable-interval schedules focus on the amount of time that has elapsed since a person or animal was rewarded.


FIGURE 3 Typical outcomes of different reinforcement schedules (panels: Fixed-Ratio Schedule, Variable-Ratio Schedule, Fixed-Interval Schedule, Variable-Interval Schedule; each plots cumulative frequency of responses against time). (A) In a fixed-ratio schedule, short pauses occur after each response. Because the more responses, the more reinforcement, fixed-ratio schedules produce a high rate of responding. (B) In a variable-ratio schedule, responding also occurs at a high, steady rate. (C) A fixed-interval schedule produces lower rates of responding, with typically long pauses after each reinforced response, because the organism learns that a specified time period must elapse between reinforcements. (D) A variable-interval schedule produces a fairly steady stream of responses.

One example of a fixed-interval schedule is a weekly paycheck. For people who receive regular, weekly paychecks, it typically makes relatively little difference exactly how much they produce in a given week.

Because a fixed-interval schedule provides reinforcement for a response only if a fixed time period has elapsed, overall rates of response are relatively low. This is especially true in the period just after reinforcement, when the time before another reinforcement is relatively great. Students' study habits often exemplify this reality. If the periods between exams are relatively long (meaning that the opportunity for reinforcement for good performance is given fairly infrequently), students often study minimally or not at all until the day of the exam draws near. Just before the exam, however, students begin to cram for it, signaling a rapid increase in the rate of their studying response. As you might expect, immediately after the exam there is a rapid decline in the rate of responding, with few people opening a book the day after a test. Fixed-interval schedules produce the kind of "scalloping effect" shown in Figure 3.

Fixed-interval schedule A schedule that provides reinforcement for a response only if a fixed time period has elapsed, making overall rates of response relatively low.

One way to decrease the delay in responding that occurs just after reinforcement, and to maintain the desired behavior more consistently throughout an interval, is to use a variable-interval schedule. In a variable-interval schedule, the time between reinforcements varies around some average rather than being fixed. For example, a professor who gives surprise quizzes that vary from one every three days to one every three weeks, averaging one every two weeks, is using a variable-interval schedule. Compared to the study habits we observed with a fixed-interval schedule, students' study habits under such a variable-interval schedule would most likely be very different. Students would be apt to study more regularly because they would never know when the next surprise quiz was coming. Variable-interval schedules, in general, are more likely to produce relatively steady rates of responding than are fixed-interval schedules, with responses that take longer to extinguish after reinforcement ends.
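Because the four partial schedules differ only in the rule that decides when a response earns reinforcement, they can be sketched as four small decision functions. The Python below is a rough illustration under assumed numbers (a 1:10 ratio, a 60-second interval, and so on); it is not a model from the text, and real schedules operate over an organism's ongoing stream of behavior rather than isolated calls like these.

```python
# Illustrative sketch only: the four partial reinforcement schedules as decision rules.
# All numbers (ratios, intervals) are invented for demonstration.
import random

random.seed(0)

def fixed_ratio(n_responses, ratio=10):
    """Reinforce every `ratio`-th response (e.g., payment per 10 blouses sewn)."""
    return n_responses % ratio == 0

def variable_ratio(avg_ratio=5):
    """Reinforce after a varying number of responses averaging `avg_ratio` (slot machine)."""
    return random.random() < 1.0 / avg_ratio

def fixed_interval(seconds_since_last_reward, response_made, interval=60):
    """Reinforce the first response after a fixed time has elapsed (weekly paycheck)."""
    return response_made and seconds_since_last_reward >= interval

def variable_interval(seconds_since_last_reward, response_made, avg_interval=60):
    """Like fixed_interval, but the required wait varies around an average (surprise quizzes)."""
    required = random.uniform(0.5 * avg_interval, 1.5 * avg_interval)
    return response_made and seconds_since_last_reward >= required

# Example: reinforcements earned by 100 lever presses on a fixed-ratio 1:10 schedule.
rewards = sum(fixed_ratio(press) for press in range(1, 101))
print(f"fixed-ratio 1:10, 100 presses -> {rewards} reinforcements")
print(f"fixed-interval 60 s, response made 70 s after last reward -> {fixed_interval(70, True)}")
```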


LO5 Shaping: Reinforcing What Doesn't Come Naturally

Consider the difficulty of using operant conditioning to teach people to repair an automobile transmission. If you had to wait until they chanced to fix a transmission perfectly before you provided them with reinforcement, the Model T Ford might be back in style long before they mastered the repair process.

There are many complex behaviors, ranging from auto repair to zoo management, that we would not expect to occur naturally as part of anyone's spontaneous behavior. For such behaviors, for which there might otherwise be no opportunity to provide reinforcement (because the behavior would never occur in the first place), a procedure known as shaping is used. Shaping is the process of teaching a complex behavior by rewarding closer and closer approximations of the desired behavior. In shaping, you start by reinforcing any behavior that is at all similar to the behavior you want the person to learn. Later, you reinforce only responses that are closer to the behavior you ultimately want to teach. Finally, you reinforce only the desired response. Each step in shaping, then, moves only slightly beyond the previously learned behavior, permitting the person to link the new step to the behavior learned earlier. Shaping allows even lower animals to learn complex responses that would never occur naturally, ranging from lions jumping through hoops to dolphins rescuing divers lost at sea to rodents finding hidden land mines.

Variable-interval schedule A schedule by which the time between reinforcements varies around some average rather than being fixed.

Shaping The process of teaching a complex behavior by rewarding closer and closer approximations of the desired behavior.
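As a rough illustration of reinforcing closer and closer approximations, the sketch below (ours, not the text's) treats the desired behavior as a target value on an arbitrary scale and reinforces, at each stage, only responses that land nearer the target than the current baseline, with the required closeness tightening stage by stage. All numbers are invented.

```python
# Illustrative sketch only: shaping as reinforcement of successive approximations.
# The target value, stage tolerances, and random responses are invented.
import random

random.seed(1)

TARGET = 100.0                # the full desired behavior, on an arbitrary scale
STAGES = [60.0, 30.0, 10.0]   # each stage demands a closer approximation to the target

behavior = 0.0                # the learner's current typical response
for stage, tolerance in enumerate(STAGES, start=1):
    while abs(TARGET - behavior) > tolerance:
        # The learner emits a response that varies around its current behavior.
        attempt = behavior + random.uniform(-5.0, 15.0)
        # Reinforce only responses closer to the target than the current baseline;
        # reinforced responses become the new baseline behavior.
        if abs(TARGET - attempt) < abs(TARGET - behavior):
            behavior = attempt
    print(f"stage {stage}: responses now within {tolerance} of the target (behavior = {behavior:.1f})")

print("shaping complete: only the desired response is now being reinforced")
```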

Comparing Classical and Operant Conditioning

We've considered classical conditioning and operant conditioning as two completely different processes. And, as summarized in Figure 4, there are a number of key distinctions between the two forms of learning. For example, the key concept in classical conditioning is the association between stimuli, whereas in operant conditioning it is reinforcement. Furthermore, classical conditioning involves an involuntary, natural, innate behavior, but operant conditioning is based on voluntary responses made by an organism.

FIGURE 4 Comparing key concepts in classical conditioning and operant conditioning.

Basic principle. Classical conditioning: building associations between a conditioned stimulus and a conditioned response. Operant conditioning: reinforcement increases the frequency of the behavior preceding it; punishment decreases the frequency of the behavior preceding it.

Nature of behavior. Classical conditioning: based on involuntary, natural, innate behavior; behavior is elicited by the unconditioned or conditioned stimulus. Operant conditioning: the organism voluntarily operates on its environment to produce particular consequences.

Order of events. Classical conditioning: before conditioning, an unconditioned stimulus leads to an unconditioned response; after conditioning, a conditioned stimulus leads to a conditioned response. Operant conditioning: after a behavior occurs, the likelihood of the behavior occurring again is increased or decreased by the behavior's consequences.

Example. Classical conditioning: after a physician gives a child a series of painful injections (an unconditioned stimulus) that produce an emotional reaction (an unconditioned response), the child develops an emotional reaction (a conditioned response) whenever he sees the physician (the conditioned stimulus). Operant conditioning: a student who, after studying hard for a test, earns an A (the positive reinforcer) is more likely to study hard in the future; a student who, after going out drinking the night before a test, fails the test (punishment) is less likely to go out drinking the night before the next test.

becoming an informed consumer OF PSYCHOLOGY

Using Behavior Analysis and Behavior Modification

A couple who had been living together for three years began to fight frequently. The issues of disagreement ranged from who was going to do the dishes to the quality of their love life.

Disturbed, the couple went to a behavior analyst, a psychologist who specialized in behavior-modification techniques. He asked them to keep a detailed written record of their interactions over the next two weeks.

When they returned with the data, he carefully reviewed the records with them. In doing so, he noticed a pattern: each of their arguments had occurred just after one or the other had left a household chore undone, such as leaving dirty dishes in the sink or draping clothes on the only chair in the bedroom.


Using the data the couple had collected, the behavior analyst asked them to list all the chores that could possibly arise and assign each one a point value depending on how long it took to complete. Then he had them divide the chores equally and agree in a written contract to fulfill the ones assigned to them. If either failed to carry out one of the assigned chores, he or she would have to place $1 per point in a fund for the other to spend. They also agreed to a program of verbal praise, promising to reward each other verbally for completing a chore.

The couple agreed to try it for a month and to keep careful records of the number of arguments they had during that period. To their surprise, the number declined rapidly.


Behavior modification A formalized technique for promoting the frequency of desirable behaviors and decreasing the incidence of unwanted ones.

This case provides an illustration of behavior modification, a formalized technique for promoting the frequency of desirable behaviors and decreasing the incidence of unwanted ones. Using the basic principles of learning theory, behavior-modification techniques have proved to be helpful in a variety of situations. People with severe mental retardation have, for the first time in their lives, started dressing and feeding themselves. Behavior modification has also helped people lose weight, give up smoking, and behave more safely (Wadden, Crerand, & Brock, 2005; Delinsky, Latner, & Wilson, 2006; Ntinas, 2007).

The techniques used by behavior analysts are as varied as the list of processes that modify behavior. They include reinforcement scheduling, shaping, generalization training, discrimination training, and extinction. Participants in a behavior-change program do, however, typically follow a series of similar basic steps that include the following:

• Identifying goals and target behaviors. The first step is to define desired behavior. Is it an increase in time spent studying? A decrease in weight? A reduction in the amount of aggression displayed by a child? The goals must be stated in observable terms and must lead to specific targets. For instance, a goal might be "to increase study time," whereas the target behavior would be "to study at least two hours per day on weekdays and an hour on Saturdays."

• Designing a data-recording system and recording preliminary data. To determine whether behavior has changed, it is necessary to collect data before any changes are made in the situation. This information provides a baseline against which future changes can be measured (see the sketch after this list).

• Selecting a behavior-change strategy. The most crucial step is to select an appropriate strategy. Because all the principles of learning can be employed to bring about behavior change, a "package" of treatments is normally used. This might include the systematic use of positive reinforcement for desired behavior (verbal praise or something more tangible, such as food), as well as a program of extinction for undesirable behavior (ignoring a child who throws a tantrum). Selecting the right reinforcers is critical, and it may be necessary to experiment a bit to find out what is important to a particular individual.

• Implementing the program. Probably the most important aspect of program implementation is consistency. It is also important to reinforce the intended behavior. For example, suppose a mother wants her daughter to spend more time on her homework, but as soon as the child sits down to study, she asks for a snack. If the mother gets a snack for her, she is likely to be reinforcing her daughter's delaying tactic, not her studying.
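The record-keeping side of these steps can be sketched in a few lines of Python. The example below is purely illustrative: the target behavior, the daily counts, and the one-week windows are invented (loosely echoing the couple's argument log), and a real program would track data over a longer period alongside the chosen reinforcement strategy.

```python
# Illustrative sketch only: baseline recording and comparison, per the steps above.
# The target behavior, daily counts, and time windows are invented for demonstration.

# Step 1: the target behavior, stated in observable terms.
target_behavior = "arguments per day about undone household chores"

# Step 2: preliminary (baseline) data recorded before any change is made.
baseline_counts = [3, 2, 4, 3, 2, 5, 3]           # one week before the intervention

# Steps 3-4: after selecting and consistently implementing a strategy
# (e.g., a written chore contract plus verbal praise), keep recording.
post_intervention_counts = [2, 1, 1, 0, 1, 0, 0]  # one week after the intervention

baseline_rate = sum(baseline_counts) / len(baseline_counts)
post_rate = sum(post_intervention_counts) / len(post_intervention_counts)

print(f"Target behavior: {target_behavior}")
print(f"Baseline: {baseline_rate:.1f} per day; after intervention: {post_rate:.1f} per day")
if post_rate < baseline_rate:
    print("Recorded frequency decreased; the program appears to be working.")
else:
    print("No decrease; revisit the reinforcers or the consistency of the program.")
```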

Module 16 OPERANT CONDITIONING 179

Cognitive Approaches to Learning

LEARNING OUTCOMES

17.1 Explain latent learning and how it works in humans.

17.2 Discuss the influence of observational learning in acquiring skills.

17.3 Describe research findings about observational learning and media violence.

Consider what happens when people learn to drive a car. They don't get behind the wheel and stumble around until they randomly put the key into the ignition, and later, after many false starts, accidentally manage to get the car to move forward, thereby receiving positive reinforcement. Instead, they already know the basic elements of driving from prior experience as passengers, when they more than likely noticed how the key was inserted into the ignition, the car was put in drive, and the gas pedal was pressed to make the car go forward.

Clearly, not all learning is due to operant and classical conditioning. In fact, activities like learning to drive a car imply that some kinds of learning must involve higher-order processes in which people's thoughts and memories and the way they process information account for their responses. Such situations argue against regarding learning as the unthinking, mechanical, and automatic acquisition of associations between stimuli and responses, as in classical conditioning, or the presentation of reinforcement, as in operant conditioning.

Some psychologists view learning in terms of the thought processes, or cognitions, that underlie it, an approach known as cognitive learning theory. Although psychologists working from the cognitive learning perspective do not deny the importance of classical and operant conditioning, they have developed approaches that focus on the unseen mental processes that occur during learning, rather than concentrating solely on external stimuli, responses, and reinforcements.

In its most basic formulation, cognitive learning theory suggests that it is not enough to say that people make responses because there is an assumed link between a stimulus and a response-a link that is the result of a past history of reinforcement for a response. Instead, according to this point of view, people, and even lower animals, develop an expectation that they will receive a reinforcer after making a response. Two types of learning in which no obvious prior reinforcement is present are latent learning and observational learning.

