Applied Behavioral Analysis 2
Resource: How to Make a Graph Using Microsoft Excel
The Unit 6 Assignment requires you to apply the theories, concepts, and research that you have covered so far this term to a hypothetical case study. Your answers to the questions and completed graph should consist of information from the text and supplemental readings. You also may use sources from the Kaplan library or other credible Internet sources, but your primary sources should be the readings assigned for the course.
Read each Case Study and answer the questions below. You will need to write 2–3 typed pages for each case in order to address all required parts of the project. Answers to the questions should be typed in an APA-formatted Word document, double-spaced in 12-point font, and submitted to the Dropbox.
Your final paper must be your original work; plagiarism will not be tolerated. Be sure to review the Syllabus in terms of what constitutes plagiarism. Please make sure to provide proper credit, in proper APA format, for the sources used in your case study analysis. Please see the APA Quick Reference for any questions related to APA citations. You must credit authors when you:
Summarize a concept, theory or research
Use direct quotes from the text or articles
Read Case Study 1: Martin
Martin, a behavior analyst, is working with Sara, a 14-year-old girl with severe developmental delays who exhibits self-injurious behavior (SIB). Sara's target behavior is defined as pulling her hair, biting her arm, and banging her head against the wall. After conducting a functional analysis, Martin decided to employ an intervention program consisting of differential reinforcement of other behavior (DRO). Martin collected data on Sara's SIB before and during the intervention. Below is a depiction of the data that Martin collected:
Sara’s Frequency of SIB
Baseline Occurrences | DRO Occurrences
22 | 5
25 | 5
27 | 3
26 | 2
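If you would like to sanity-check the shape of your Excel graph, the same figure can be sketched programmatically. The following Python/matplotlib snippet is only an illustrative sketch (Excel remains the tool the assignment requires), and it assumes the four rows above correspond to sessions 1 through 4 of baseline and sessions 5 through 8 of the DRO intervention:

```python
import matplotlib.pyplot as plt

# Sara's SIB data from the case study; the session numbering is an assumption
baseline_sessions = [1, 2, 3, 4]
baseline_sib = [22, 25, 27, 26]
dro_sessions = [5, 6, 7, 8]
dro_sib = [5, 5, 3, 2]

# Plot each phase as its own series so data points are not connected across the phase line
plt.plot(baseline_sessions, baseline_sib, marker="o", color="black")
plt.plot(dro_sessions, dro_sib, marker="o", color="black")

# Phase line between the last baseline session and the first DRO session, plus phase labels
plt.axvline(x=4.5, color="black", linestyle="--")
plt.text(2.5, 28, "Baseline", ha="center")
plt.text(6.5, 28, "DRO", ha="center")

plt.xlabel("Sessions")
plt.ylabel("Frequency of SIB")
plt.xticks(range(1, 9))
plt.ylim(0, 30)
plt.title("Sara's Frequency of SIB")
plt.show()
```

Plotting the two phases as separate series keeps the data points from being connected across the phase line, which matches the graphing conventions described in the chapter excerpt below.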
Address the following questions, and complete the following requirements:
Create a basic line graph using Microsoft Excel, to be included in your Word document. The graph should depict the data provided in this case study. You should only need to create one graph, with SIB depicted, both in baseline and in intervention.
What type of research design did Martin employ when working with Sara? What is an advantage and a disadvantage of using this research design?
According to the data in the graph, was the intervention that Martin selected effective in modifying Sara's self-injurious behavior?
Martin had considered using an ABAB reversal design when working with Sara. What are some ethical implications of selecting a reversal design when working with the type of behavior problems that Sara was exhibiting?
Martin's supervisor requested a graph of the data he collected when working with Sara. Why are graphs useful in evaluating behavior change?
Discuss how a graph demonstrates a functional relationship. Identify whether the graph that you created using the data provided in this section depicts a functional relationship.
Chapter 3
3 Graphing Behavior and Measuring Change
· ▪ What are the six essential components of a behavior modification graph?
· ▪ How do you graph behavioral data?
· ▪ What different dimensions of behavior can be shown on a graph?
· ▪ What is a functional relationship, and how do you demonstrate a functional relationship in behavior modification?
· ▪ What different research designs can be used in behavior modification research?
As we saw in Chapter 2 , people who use behavior modification define their target behavior carefully, and directly observe and record the behavior. In this way, they can document whether the behavior has indeed changed when a behavior modification procedure is implemented. The primary tool used to document behavior change is the graph.
A graph is a visual representation of the occurrence of a behavior over time. After instances of the target behavior are recorded (on a data sheet or otherwise), the information is transferred to a graph. A graph is an efficient way to view the occurrence of the behavior because it shows the results of recording during many observation periods.
Behavior analysts use graphs to identify the level of behavior before treatment and after treatment begins. In this way, they can document changes in the behavior during treatment and make decisions about the continued use of the treatment. The graph makes it easier to compare the levels of the behavior before, during, and after treatment because the levels are presented visually for comparison. In Figure 3-1, for example, it is easy to see that the frequency of the behavior is much lower during treatment (competing response) than before treatment (baseline). This particular graph is from a student's self-management project. The student's target behavior involved biting the insides of her mouth when she studied. She recorded the behavior on a data sheet each time it occurred. After 10 days of recording the behavior without any treatment (baseline), she implemented a behavior modification plan in which she used a competing response (a behavior that is incompatible with mouth-biting and interrupts each occurrence of mouth-biting) to help her control the mouth-biting behavior. After implementing this competing response procedure, she continued to record the behavior for 20 more days. She then recorded the behavior four more times, after 1, 5, 10, and 20 weeks. The long period after treatment has been implemented is called the follow-up period. From this graph, we can conclude that the mouth-biting behavior (as recorded by the student) decreased substantially while the student implemented the treatment. We can also see that the behavior continued to occur at a low level up to 20 weeks after treatment was implemented.
Components of a Graph
In the typical behavior modification graph, time and behavior are the two variables illustrated. Each data point on a graph gives you two pieces of information: It tells you when the behavior was recorded (time) and the level of the behavior at that time. Time is indicated on the horizontal axis (also called the x-axis, or the abscissa ), and the level of the behavior is indicated on the vertical axis (also called the y-axis, or the ordinate ). In Figure 3-1 , the frequency of mouth-biting is indicated on the vertical axis, and days and weeks are indicated on the horizontal axis. By looking at this graph, you can determine the frequency of mouth-biting on any particular day, before or after treatment was implemented. Because follow-up is reported, you can also see the frequency of the behavior at intervals of up to 20 weeks.
▪Six components are necessary for a graph to be complete.
▪The y-axis and the x-axis. The vertical axis (y-axis) and the horizontal axis (x-axis) meet at the bottom left of the page. On most graphs, the x-axis is longer than the y-axis; it is usually one to two times as long ( Figure 3-2 ).
▪The labels for the y-axis and the x-axis. The y-axis label usually tells you the behavior and the dimension of the behavior that is recorded. The x-axis label usually tells you the unit of time during which the behavior is recorded. In Figure 3-3 , the y-axis label is “Hours of Studying” and the x-axis label is “Days.”
Thus, you know that the hours of studying will be recorded each day for this particular person.
▪The numbers on the y-axis and the x-axis. On the y-axis, the numbers indicate the units of measurement of the behavior; on the x-axis, the numbers indicate the units of measurement of time. There should be a hash mark on the y-axis and the x-axis to correspond to each of the numbers. In Figure 3-4 , the numbers on the y-axis indicate the number of hours the studying behavior occurred, and the numbers on the x-axis indicate the days on which studying was measured.
▪Data points. The data points must be plotted correctly to indicate the level of the behavior that occurred at each particular time period. The information on the level of the behavior and the time periods is taken from the data sheet or other behavior-recording instrument. Each data point is connected to the adjacent data points by a line ( Figure 3-5 ).
▪Phase lines. A phase line is a vertical line on a graph that indicates a change in treatment. The change can be from a no-treatment phase to a treatment phase, from a treatment phase to a no-treatment phase, or from one treatment phase to another treatment phase. A phase is a period in which the same treatment (or no treatment) is in effect. In Figure 3-6 , the phase line separates baseline (no treatment) and treatment phases. Data points are not connected across phase lines. This allows you to see differences in the level of the behavior in different phases more easily.
▪Phase labels. Each phase in a graph must be labeled. The phase label appears at the top of the graph above the particular phase (Figure 3-7). Most behavior modification graphs have at least two phases that are labeled: the no-treatment phase and the treatment phase. "Baseline" is the label most often given to the no-treatment phase. The label for the treatment phase should identify the particular treatment being used. In Figure 3-7, the two phase labels are "Baseline" and "Behavioral Contract." The behavioral contract is the particular treatment the student is using to increase studying. Some graphs have more than one treatment phase or more than one baseline phase.
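As a rough illustration of these components, the matplotlib sketch below builds a graph like the studying example in Figures 3-3 through 3-7. The hours-of-studying values are invented; only the structure of the graph matters here:

```python
import matplotlib.pyplot as plt

# Hypothetical studying data; the values are invented for illustration only
baseline_days = list(range(1, 8))
baseline_hours = [0.5, 1.0, 0.5, 0.0, 1.0, 0.5, 0.5]
treatment_days = list(range(8, 15))
treatment_hours = [2.0, 2.5, 3.0, 2.5, 3.0, 3.5, 3.0]

fig, ax = plt.subplots()

# Component 4: data points, connected within each phase but not across the phase line
ax.plot(baseline_days, baseline_hours, marker="o", color="black")
ax.plot(treatment_days, treatment_hours, marker="o", color="black")

# Component 5: phase line between days 7 and 8
ax.axvline(x=7.5, color="black", linestyle="--")

# Component 6: phase labels at the top of each phase
ax.text(4, 3.7, "Baseline", ha="center")
ax.text(11, 3.7, "Behavioral Contract", ha="center")

# Components 1-3: the two axes, their labels, and their numbering
ax.set_xlabel("Days")
ax.set_ylabel("Hours of Studying")
ax.set_xticks(range(1, 15))
ax.set_yticks([0, 1, 2, 3, 4])
ax.set_ylim(0, 4)

plt.show()
```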
Graphing Behavioral Data
As discussed in Chapter 2, behavioral data are collected through direct observation and recording of the behavior on a data sheet or other instrument. Once the behavior has been recorded on the data sheet, it can be transferred to a graph. For example, Figure 3-8a is a frequency data sheet that shows 2 weeks of behavior recording, and Figure 3-8b is a graph of the behavioral data from the data sheet. Notice that days 1–14 on the data sheet correspond to days 1–14 on the graph, and that the treatment in this example was a behavioral contract in which the client agreed to smoke one fewer cigarette per day every second day. Behavioral contracts are described in Chapter 23.
Also notice that the frequency of the behavior listed on the data sheet for each day corresponds to the frequency recorded on the graph for that day. As you look at the graph, you can immediately determine that the frequency of the behavior is much lower during treatment than during baseline. You have to look more closely at the data sheet to be able to detect the difference between baseline and treatment. Finally, notice that all six essential components of a graph are included in this graph.
Consider a second example. A completed duration data sheet is shown in Figure 3-9 a, and Figure 3-9 b is a table that summarizes the daily duration of the behavior recorded on the data sheet. Notice that the duration of the behavior listed in the summary table for each of the 20 days corresponds to the duration that was recorded each day on the data sheet.
To complete Figure 3-9 c, you must add four components. First, you should add the data points for days 8–20 and connect them. Second, include the phase line between days 7 and 8. Data points on days 7 and 8 should not be connected across the phase line. Third, add the phase label “Behavioral Contract,” to the right of the phase line. Fourth, add the label “Days” to the x-axis. When these four components are added, the graph includes all six essential components ( Figure 3-10 ).
FOR FURTHER READING Graphing in Excel
Although it is easy to construct a graph with a piece of graph paper, a ruler, and a pencil, there are graphing programs that allow you to construct a graph on your computer. Graphs can be constructed in two different Microsoft Office programs: PowerPoint and Excel (Vanselow & Bourret, 2012). Carr and Burkholder (1998) and Dixon et al. (2007) published articles in the Journal of Applied Behavior Analysis providing step-by-step instructions on how to use Microsoft Excel to construct the types of graphs used in applied behavior analysis or behavior modification. Vanselow and Bourret (2012) described an online tutorial about constructing graphs using Microsoft Excel. Students interested in learning how to construct graphs in Excel are encouraged to read these articles.
Graphing Data from Different Recording Procedures
Figures 3-8 and 3-10 illustrate graphs of frequency data and duration data, respectively. Because other types of data can be recorded, other types of graphs are possible. Regardless of the dimension of behavior or type of data that is being graphed, however, the same six components of a graph must be present. What will change with different recording procedures are the y-axis label and the numbering on the y-axis. For example, if you are recording the percentage of math problems a student completes correctly during each math class, you would label the y-axis “Percentage of Correct Math Problems” and number the y-axis from 0% to 100%. As you can see, the y-axis label identifies the behavior (correct math problems) and the type of data (percentage) that is recorded.
Consider another example. A researcher is studying Tourette's syndrome, a neurological disorder in which certain muscles in the body twitch or jerk involuntarily (these are called motor tics). The researcher uses an interval recording system and records whether a motor tic occurs during each consecutive 10-second interval in 30-minute observation periods. At the end of each observation period, the researcher calculates the percentage of intervals in which a tic occurred. The researcher labels the y-axis of the graph “Percentage of Intervals of Tics” and numbers the y-axis from 0% to 100%. Whenever an interval recording system is used, the y-axis is labeled “Percentage of Intervals of (Behavior).” The x-axis label indicates the time periods in which the behavior was recorded (e.g., “Sessions” or “Days”). The x-axis is then numbered accordingly. A session is a period in which a target behavior is observed and recorded. Once treatment is started, it is also implemented during the session.
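To make the arithmetic concrete, the short sketch below (using invented observations) shows how one session's interval data would be converted into the percentage plotted on the graph:

```python
# interval_data holds one True/False entry per consecutive 10-second interval in a
# 30-minute observation period (180 intervals); the entries here are invented.
interval_data = [True, False, False, True, True] + [False] * 175

intervals_with_tics = sum(interval_data)
percentage_of_intervals = 100 * intervals_with_tics / len(interval_data)

# This single value becomes one data point for the session on a y-axis labeled
# "Percentage of Intervals of Tics" and numbered from 0% to 100%.
print(f"{percentage_of_intervals:.1f}% of intervals contained a tic")
```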
Other aspects of a behavior may be recorded and graphed, such as intensity or product data. In each case, the y-axis label should clearly reflect the behavior and the dimension or aspect of the behavior that is recorded. For example, as a measure of how intense or serious a child's tantrums are, you might use the label “Tantrum Intensity Rating” and put the numbers of the rating scale on the y-axis. For a measure of loudness of speech, the y-axis label might be “Decibels of Speech,” with decibel levels numbered on the y-axis. To graph product recording data, you would label the y-axis to indicate the unit of measurement and the behavior. For example, “Number of Brakes Assembled” is a y-axis label that indicates the work output of a person who puts together bicycle brakes.
Research Designs
When people conduct research in behavior modification, they use research designs that include more complex types of graphs. The purpose of a research design is to determine whether the treatment (independent variable) was responsible for the observed change in the target behavior (dependent variable) and to rule out the possibility that extraneous variables caused the behavior to change. In research, an independent variable is what the researcher manipulates to produce a change in the target behavior. The target behavior is called the dependent variable . An extraneous variable, also called a confounding variable, is any event that the researcher did not plan that may have affected the behavior. For a person with a problem, it may be enough to know that the behavior changed for the better after using behavior modification procedures. However, a researcher also wants to demonstrate that the behavior modification procedure is what caused the behavior to change.
When a researcher shows that a behavior modification procedure causes a target behavior to change, the researcher is demonstrating a functional relationship between the procedure and the target behavior. That is, the researcher demonstrates that the behavior changes as a function of the procedure.
A functional relationship is established if:
· (a) a target behavior changes when an independent variable is manipulated (a procedure is implemented), while all other variables are held constant, and
· (b) the process is replicated or repeated one or more times and the behavior changes each time.
A behavior modification researcher uses a research design to demonstrate a functional relationship. A research design involves both treatment implementation and replication. If the behavior changes each time the procedure is implemented and only when the procedure is implemented, a functional relationship is demonstrated.
In this case, we would say that the researcher has demonstrated experimental control over the target behavior. It is unlikely that an extraneous variable caused the behavior change if it changed only when the treatment was implemented. This section reviews research designs used in behavior modification (for further information on behavior modification research designs, see Bailey, 1977 ; Barlow & Hersen, 1984 ; Gast, 2009 ; Hayes, Barlow, & Nelson-Gray, 1999 ; Kazdin, 2010 ; Poling & Grossett, 1986 ).
A-B Design
The simplest type of design used in behavior modification has just two phases: baseline and treatment. This is called an A-B design , where A = baseline and B = treatment. A-B designs are illustrated in Figures 3-1, 3-7, 3-8b, and 3-10. By means of an A-B design, we can compare baseline and treatment to determine whether the behavior changed in the expected way after treatment. However, the A-B design does not demonstrate a functional relationship because treatment is not replicated (implemented a second time). Therefore, the A-B design is not a true research design; it does not rule out the possibility that an extraneous variable was responsible for the behavior change. For example, although mouth-biting decreased when the competing response treatment was implemented in Figure 3-1 , it is possible that some other event (extraneous variable) occurred at the same time as treatment was implemented. In that case, the decrease in mouth-biting may have resulted from the other event or a combination of treatment and the other event. For example, the person may have seen a TV show about controlling nervous habits and learned from that how to control her mouth-biting.
The A-B design is not a true research design. Because the A-B design does not include a replication and thus does not demonstrate a functional relationship, it is rarely used by behavior modification researchers. It is most often used in applied, nonresearch situations, in which people are more interested in demonstrating that behavior change has occurred than in proving that the behavior modification procedure caused the behavior change. You probably would use an A-B graph in a self-management project to show whether your behavior changed after you implemented a behavior modification procedure.
A-B-A-B Reversal Design
The A-B-A-B reversal design is an extension of the simple A-B design (where A = baseline and B = treatment). In the A-B-A-B design, baseline and treatment phases are implemented twice. It is called a reversal design because after the first treatment phase, the researcher removes the treatment and reverses back to baseline. This second baseline is followed by replication of the treatment. Figure 3-11 illustrates an A-B-A-B design.
The A-B-A-B graph in Figure 3-11 shows the effect of a teacher's demands on the aggressive behavior of an adolescent with intellectual disability named Bob. Carr and his colleagues ( Carr, Newsom, & Binkoff, 1980 ) studied the influence of demands on Bob's aggressive behavior by alternating phases in which teachers made frequent demands with phases in which teachers made no demands. In Figure 3-11 , you can see that the behavior changed three times. In the baseline phase (“Demands”), the aggressive behavior occurred frequently. When the treatment phase (“No Demands”) was first implemented, the behavior decreased. When the second “Demands” phase occurred, the behavior returned to its level during the first “Demands” phase. Finally, when the “No Demands” phase was implemented a second time, the behavior decreased again. The fact that the behavior changed three times, and only when the phase changed, is evidence that the change in demands (rather than some extraneous variable) caused the behavior change. When the independent variable was manipulated (demands were turned on and off each time), the behavior changed accordingly. It is highly unlikely that an extraneous variable was turned on and off at exactly the same time as the demands, so it is highly unlikely that any other variable except the independent variable (change in demands) caused the behavior change.
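For readers who want to see the layout of such a graph, the sketch below plots an A-B-A-B design with invented numbers that are only loosely patterned on the demands/no-demands example:

```python
import matplotlib.pyplot as plt

# Hypothetical A-B-A-B data; the values are invented for illustration
sessions = list(range(1, 21))
aggression = [20, 22, 19, 21, 23,   # Demands (baseline)
              4, 3, 2, 2, 1,        # No Demands (treatment)
              18, 20, 22, 21, 19,   # Demands (return to baseline)
              3, 2, 2, 1, 1]        # No Demands (treatment replicated)

phases = [("Demands", 0, 5), ("No Demands", 5, 10),
          ("Demands", 10, 15), ("No Demands", 15, 20)]

fig, ax = plt.subplots()
for label, start, end in phases:
    # Plot each phase separately so data points are not connected across phase lines
    ax.plot(sessions[start:end], aggression[start:end], marker="o", color="black")
    ax.text((sessions[start] + sessions[end - 1]) / 2, 25, label, ha="center")
    if end < len(sessions):
        ax.axvline(x=sessions[end - 1] + 0.5, color="black", linestyle="--")

ax.set_xlabel("Sessions")
ax.set_ylabel("Frequency of Aggressive Behavior")
ax.set_ylim(0, 27)
plt.show()
```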
Variations of the A-B-A-B reversal design may be used in which more than one treatment is evaluated. Suppose, for example, you implemented one treatment (B) and it did not work, so you implemented a second treatment (C) and it did work. To replicate this treatment and show experimental control, you might use an A-B-C-A-C design. If the second treatment (C) resulted in a change in the target behavior each time it was implemented, you have demonstrated a functional relationship between this treatment and the behavior.
A number of considerations must be taken into account in deciding whether to use the A-B-A-B research design. First, it may not be ethical to remove the treatment in the second baseline if the behavior is dangerous (e.g., self-injurious behavior). Second, you must be fairly certain that the level of the behavior will reverse when treatment is withdrawn. If the behavior fails to change when the treatment is withdrawn, a functional relationship is not demonstrated. Another consideration is whether you can actually remove the treatment after it is implemented. For example, if the treatment is a teaching procedure and the subject learns a new behavior, you cannot take away the learning that took place. (For a more detailed discussion of considerations in the use of the A-B-A-B design, see Bailey [1977] , Bailey and Burch [2002] , Barlow and Hersen [1984] , Gast [2009] , and Kazdin [2010] .)
Multiple-Baseline Design
There are three types of multiple-baseline designs.
▪In a multiple-baseline-across-subjects design , there is a baseline and a treatment phase for the same target behavior of two or more subjects.
▪In a multiple-baseline-across-behaviors design , there is a baseline and treatment phase for two or more behaviors of the same subject.
▪In a multiple-baseline-across-settings design , there is a baseline and treatment phase for two or more settings in which the same behavior of the same subject is measured.
Remember that the A-B-A-B design can also have two baseline phases and two treatment phases, but both baseline and treatment phases occur for the same behavior of the same subject in the same setting. With the multiple-baseline design, the different baseline and treatment phases occur for different subjects, or for different behaviors, or in different settings.
A multiple-baseline design may be used:
· (a) when you are interested in the same target behavior exhibited by multiple subjects,
· (b) when you have targeted more than one behavior of the same subject, or
· (c) when you are measuring a subject's behavior across two or more settings.
A multiple-baseline design is useful when you cannot use an A-B-A-B design for the reasons listed earlier. The multiple-baseline design and the appropriate time to use it are described in more detail by Bailey (1977) , Bailey and Burch (2002) , Barlow and Hersen (1984) , Gast (2009) , and Kazdin (2010) .
Figure 3-12 illustrates the multiple-baseline-across-subjects design. This graph, from a study by DeVries, Burnette, and Redmon (1991), shows the effect of an intervention involving feedback on the percentage of time that emergency department nurses wore rubber gloves when they had contact with patients. Notice that there is a baseline and treatment phase for four different subjects (nurses). Figure 3-12 also illustrates a critical feature of the multiple-baseline design: The baselines for each subject are of different lengths. Treatment is implemented for subject 1, while subjects 2, 3, and 4 are still in baseline. Then, treatment is implemented for subject 2, while subjects 3 and 4 are still in baseline. Next, treatment is implemented for subject 3 and, finally, for subject 4. When treatment is implemented at different times, we say that treatment is staggered over time. Notice that the behavior increased for each subject only after the treatment phase was started for that subject. When treatment was implemented for subject 1, the behavior increased, but the behavior did not increase at that time for subjects 2, 3, and 4, who were still in baseline and had not yet received treatment. The fact that the behavior changed for each subject only after treatment started is evidence that the treatment, rather than an extraneous variable, caused the behavior change. It is highly unlikely that an extraneous variable would happen to occur at exactly the same time that treatment started for each of the four subjects.
A multiple-baseline-across-behaviors design is illustrated in Figure 3-13 . This graph, from a study by Franco, Christoff, Crimmins, and Kelly (1983) , shows the effect of treatment (social skills training) on four different social behaviors of a shy adolescent: asking questions, acknowledging other people's comments, making eye contact, and showing affect (e.g., smiling). Notice in this graph that treatment is staggered across the four behaviors, and that each of the behaviors changes only after treatment is implemented for that particular behavior. Because each of the four behaviors changed only after treatment was implemented for that behavior, the researchers demonstrated that treatment, rather than some extraneous variable, was responsible for the behavior change.
A graph used in a multiple-baseline-across-settings design would look like those in Figures 3-12 and 3-13. The difference is that in a multiple-baseline-across-settings graph, the same behavior of the same subject is recorded in baseline and treatment phases in two or more different settings, and treatment is staggered across the settings.
Draw a graph of a multiple-baseline-across-settings design with hypothetical data. Be sure to include all six components of a complete graph. Assume that you have recorded the disruptive behavior of a student in two different classrooms using an interval recording system. Include baseline and treatment across the two settings in the graph.
The graph in Figure 3-14 , from a study by Dunlap, Kern-Dunlap, Clarke, and Robbins (1991) , shows the percentage of intervals of disruptive behavior by a student during baseline and treatment (revised curriculum) in two settings, the morning and afternoon classrooms. It also shows follow-up, in which the researchers collected data once a week for 10 weeks. Notice that treatment is staggered across settings; it was implemented first in one setting and then in the other, and the student's disruptive behavior changed only after treatment was implemented in each setting. Your graph of a multiple-baseline-across-settings design would look like Figure 3-14 .
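One possible answer to the drawing exercise above, sketched with hypothetical data (all values invented), stacks one panel per setting and staggers the treatment phase across the two classrooms:

```python
import matplotlib.pyplot as plt

# Hypothetical multiple-baseline-across-settings data: percentage of intervals of
# disruptive behavior in two classrooms, with treatment staggered across settings.
days = list(range(1, 16))
classroom_1 = [80, 75, 85, 70, 80, 20, 15, 10, 10, 5, 10, 5, 5, 10, 5]   # treatment from day 6
classroom_2 = [70, 80, 75, 85, 70, 75, 80, 70, 75, 15, 10, 10, 5, 5, 10]  # treatment from day 10

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)

for ax, data, start, title in [(ax1, classroom_1, 6, "Morning Classroom"),
                               (ax2, classroom_2, 10, "Afternoon Classroom")]:
    split = start - 1
    # Plot baseline and treatment separately so points are not connected across the phase line
    ax.plot(days[:split], data[:split], marker="o", color="black")
    ax.plot(days[split:], data[split:], marker="o", color="black")
    ax.axvline(x=start - 0.5, color="black", linestyle="--")
    ax.set_ylabel("% Intervals of\nDisruptive Behavior")
    ax.set_ylim(0, 100)
    ax.set_title(title, loc="left", fontsize=9)

ax1.text(3, 90, "Baseline", ha="center")
ax1.text(10, 90, "Revised Curriculum", ha="center")
ax2.set_xlabel("Days")
plt.show()
```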
FOR FURTHER READING Nonconcurrent Multiple-Baseline-Across-Subjects Design
In a multiple-baseline-across-subjects design, data collection starts in each of the baselines (for each of the subjects) at around the same time and the treatment phase is then staggered across time. However, in a nonconcurrent multiple baseline (MBL) across subjects design ( Carr, 2005 ; Watson & Workman, 1981 ) the subjects do not participate in the study concurrently. In a nonconcurrent MBL design, the baselines for two or more subjects may begin at different points in time. The nonconcurrent MBL is equivalent to a number of different A-B designs with each participant having a different baseline length. Treatment is then staggered across baselines of different lengths rather than across time. As long as each of the subjects has a different number of baseline data points before treatment is implemented, the research design is considered a nonconcurrent MBL. The advantage of a nonconcurrent MBL is that participants may be evaluated at different points in time; they may be brought into the study consecutively rather than concurrently, which is often more practical for researchers to carry out ( Carr, 2005 ).
Alternating-Treatments Design
The alternating-treatments design (ATD) , also called a multi-element design, differs from the research designs just reviewed in that baseline and treatment conditions (or two treatment conditions) are conducted in rapid succession and compared with each other. For example, treatment is implemented on one day, baseline the next day, treatment the next day, baseline the next day, and so on. In the A-B, A-B-A-B, or multiple-baseline designs, a treatment phase occurs after a baseline phase has been implemented for a period of time; that is, baseline and treatment occur sequentially. In these designs, a baseline or treatment phase is conducted until a number of data points are collected (usually at least three) and there is no trend in the data. A trend means the data are increasing or decreasing across a phase. In the ATD, two conditions (baseline and treatment or two different treatments) occur during alternating days or sessions. Therefore, the two conditions can be compared within the same time period. This is valuable because any extraneous variables would have a similar effect on both conditions, and thus an extraneous variable could not be the cause of any differences between conditions.
Consider the following example of an ATD. A teacher wants to determine whether violent cartoons lead to aggressive behavior in preschool children. The teacher uses an ATD to demonstrate a functional relationship between violent cartoons and aggressive behavior. On one day, the preschoolers do not watch any cartoons (baseline) and the teacher records the students' aggressive behavior. The next day, the students watch a violent cartoon and the teacher again records their aggressive behavior. The teacher continues to alternate a day with no cartoons and a day with cartoons. After a few weeks, the teacher can determine whether a functional relationship exists. If there is consistently more aggressive behavior on cartoon days and less aggressive behavior on no-cartoon days, the teacher has demonstrated a functional relationship between violent cartoons and aggressive behavior in the preschoolers. An example of a graph from this hypothetical ATD is shown in Figure 3-15 .
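A sketch of such an ATD graph, using invented counts of aggressive behavior, is shown below. In an ATD graph the two conditions form separate data paths that can be compared across the same days:

```python
import matplotlib.pyplot as plt

# Hypothetical alternating-treatments data for the cartoon example; counts are invented.
days = list(range(1, 15))
# Odd days: no cartoons (baseline); even days: violent cartoons
no_cartoon_days = days[0::2]
cartoon_days = days[1::2]
no_cartoon_aggression = [2, 3, 2, 1, 2, 3, 2]
cartoon_aggression = [9, 10, 8, 11, 9, 10, 9]

fig, ax = plt.subplots()
# Each condition's data points are connected to each other, producing two
# data paths that can be compared within the same time period.
ax.plot(no_cartoon_days, no_cartoon_aggression, marker="o", color="black",
        linestyle="-", label="No Cartoons")
ax.plot(cartoon_days, cartoon_aggression, marker="s", color="black",
        linestyle="--", label="Violent Cartoons")

ax.set_xlabel("Days")
ax.set_ylabel("Frequency of Aggressive Behavior")
ax.set_ylim(0, 12)
ax.legend()
plt.show()
```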
Changing-Criterion Design
A changing-criterion design typically includes a baseline and a treatment phase. What makes a changing-criterion design different from an A-B design is that, within the treatment phase, sequential performance criteria are specified; that is, successive goal levels for the target behavior specify how much the target behavior should change during treatment. The effectiveness of treatment is determined by whether the subject's behavior changes to meet the changing performance criteria. That is, does the subject's behavior change each time the goal level changes? A graph used in a changing-criterion design indicates each criterion level so that when the behavior is plotted on the graph, we can determine whether the level of the behavior matches the criterion level.
Consider the graph in Figure 3-16, from a study by Foxx and Rubinoff (1979). These researchers helped people reduce their excessive caffeine consumption through a positive reinforcement and response cost procedure. (These procedures are discussed in Chapters 15 and 17.) As you can see in the graph, they set four different criterion levels for caffeine consumption, each lower than the previous level. When subjects consumed less caffeine than the criterion level, they earned money. If they drank more, they lost money. This graph shows that treatment was successful: This subject's caffeine consumption level was always below each of the criterion levels. Because the subject's behavior changed each time the performance criterion changed, the researchers demonstrated a functional relationship; it is unlikely that an extraneous variable was responsible for the change in behavior. DeLuca and Holborn (1992) used a changing-criterion design in a study designed to help obese boys exercise more. The boys rode exercise bikes and received points for the amount of pedaling that they did on the bikes. They later exchanged the points for toys and other rewards. In this study, each time the exercise performance criterion was raised (the boys had to pedal more to earn points), the boys' exercise level increased accordingly, thus demonstrating a functional relationship between treatment and the amount of pedaling.
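The sketch below shows the layout of a changing-criterion graph using invented values (the actual data and criterion levels from Foxx and Rubinoff are not reproduced here); each treatment phase includes a horizontal line marking its criterion so the plotted behavior can be checked against it:

```python
import matplotlib.pyplot as plt

# Hypothetical changing-criterion data patterned on the caffeine example;
# the consumption values and criterion levels are invented.
fig, ax = plt.subplots()

# Baseline phase
baseline_days = list(range(1, 6))
baseline_mg = [1100, 1000, 1050, 1100, 1000]
ax.plot(baseline_days, baseline_mg, marker="o", color="black")
ax.text(3, 1250, "Baseline", ha="center")

# Treatment phases, each with a successively lower criterion level
criteria = [(6, 10, 900), (11, 15, 700), (16, 20, 500), (21, 25, 300)]
consumption = [[850, 880, 870, 860, 890],
               [680, 650, 690, 660, 670],
               [480, 490, 460, 470, 480],
               [280, 260, 290, 270, 250]]

for (start, end, criterion), values in zip(criteria, consumption):
    phase_days = list(range(start, end + 1))
    ax.plot(phase_days, values, marker="o", color="black")
    # Horizontal line marking the criterion level for this phase
    ax.hlines(criterion, start - 0.5, end + 0.5, colors="black", linestyles="--")
    ax.axvline(x=start - 0.5, color="black", linestyle=":")

ax.text(15, 1250, "Treatment (Changing Criteria)", ha="center")
ax.set_xlabel("Days")
ax.set_ylabel("Caffeine Consumption (mg)")
ax.set_ylim(0, 1300)
plt.show()
```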
CHAPTER SUMMARY
· 1. The six essential features of a complete behavior modification graph are the y-axis and x-axis, labels for the y-axis and x-axis, numbers on the y-axis and x-axis, data points, phase lines, and phase labels.
· 2. To graph behavioral data, you plot the data points on the graph to reflect the level of the behavior on the vertical axis (y-axis) and the unit of time on the horizontal axis (x-axis).
· 3. The different dimensions of behavior you can show on a graph include the frequency, duration, intensity, and latency of the behavior. A graph may also show the percentage of intervals of the behavior derived from interval recording or time sample recording or the percentage of opportunities in which the behavior occurred (e.g., percentage correct).
· 4. A functional relationship between the treatment (independent variable) and the target behavior (dependent variable) exists when the treatment causes the behavior to change. A functional relationship or experimental control is demonstrated when a target behavior changes after the implementation of treatment and the treatment procedure is repeated or replicated one or more times and the behavior changes each time.
· 5. The different research designs you can use in behavior modification research include the following:
· ▪ The A-B design shows baseline and treatment for the behavior of one subject. It is not a true research design.
· ▪ The A-B-A-B design shows two baseline and treatment phases repeated for the behavior of one subject.
· ▪ A multiple-baseline design presents baseline and treatment phases for one of the following options: multiple behaviors of one subject, one behavior of multiple subjects, or one behavior of one subject across multiple settings. In each type of multiple-baseline design, treatment is staggered across behaviors, subjects, or settings.
· ▪ The alternating-treatments design presents data from two (or more) experimental conditions that are rapidly alternated (baseline and treatment or two treatments).
· ▪ Finally, in the changing-criterion design, a baseline phase is followed by a treatment phase in which sequential performance criteria are specified.
All research designs, except the A-B design, control for the influence of extraneous variables, so that the effectiveness of a treatment can be evaluated.
6 Punishment
· ▪ What is the principle of punishment?
· ▪ What is a common misconception about the definition of punishment in behavior modification?
· ▪ How does positive punishment differ from negative punishment?
· ▪ How are unconditioned punishers different from conditioned punishers?
· ▪ What factors influence the effectiveness of punishment?
· ▪ What are the problems with punishment?
In Chapters 4 and 5 , we discussed the basic principles of reinforcement and extinction. Positive and negative reinforcement are processes that strengthen operant behavior, and extinction is a process that weakens operant behavior. In this chapter, we focus on punishment, another process that weakens operant behavior ( Lerman & Vorndran, 2002 ). Consider the following examples.
Kathy, a college senior, moved into a new apartment near campus. On her way to class, she passed a fenced-in yard with a big, friendly looking dog. One day, when the dog was near the fence, Kathy reached over to pet the dog.
At once, the dog growled, bared its teeth, and bit her hand. After this, she never again tried to pet the dog.
On Mother's Day, Otis decided to get up early and make breakfast for his mom. He put the cast-iron skillet on the stove and turned the burner on high. Then he mixed a couple of eggs in a bowl with some milk to make scrambled eggs. After about 5 minutes, he poured the eggs from the bowl into the skillet. Immediately, the eggs started to burn and smoke rose from the skillet. Otis grabbed the handle of the skillet to move it off of the burner. As soon as he touched the handle, pain shot through his hand; he screamed and dropped the skillet. After that episode, Otis never again grabbed the handle of a hot cast-iron skillet. He always used a hot pad to avoid burning himself.
Defining Punishment
The preceding two examples illustrate the behavioral principle of punishment. In each example, a person engaged in a behavior and there was an immediate consequence that made it less likely that the person would repeat the behavior in similar situations in the future. Kathy reached over the fence to pet the dog, and the dog immediately bit her. As a result, Kathy is less likely to reach over the fence to pet that dog or other unfamiliar dogs. Otis grabbed the hot handle of a cast-iron skillet, which resulted immediately in painful stimulation as he burned his hand. As a result, Otis is much less likely to grab the handle of a cast-iron skillet on a hot stove again (at least not without a hot pad).
As demonstrated in these examples, there are three parts to the definition of punishment .
· 1. A particular behavior occurs.
· 2. A consequence immediately follows the behavior.
· 3. As a result, the behavior is less likely to occur again in the future. (The behavior is weakened.)
A punisher (also called an aversive stimulus) is a consequence that makes a particular behavior less likely to occur in the future. For Kathy, the dog bite was a punisher for her behavior of reaching over the fence. For Otis, the painful stimulus (burning his hand) was the punisher for grabbing the handle of the cast-iron skillet. A punisher is defined by its effect on the behavior it follows. A stimulus or event is a punisher when it decreases the frequency of the behavior it follows.
Consider the case of 5-year-old Juan, who teases and hits his sisters until they cry. His mother scolds him and spanks him each time he teases or hits his sisters. Although Juan stops teasing and hitting his sisters at the moment that his mother scolds him and spanks him, he continues to engage in these aggressive and disruptive behaviors with his sisters day after day. Do the scolding and spanking function as punishers for Juan's teasing and hitting?
No, the scolding and spanking do not function as punishers. They have not resulted in a decrease in Juan's problem behavior over time. This example actually illustrates positive reinforcement. Juan's behavior (teasing and hitting) results in the presentation of a consequence (scolding and spanking by his mother and crying by his sisters), and the outcome is that Juan continues to engage in the behavior day after day. These are the three parts of the definition of positive reinforcement.
This raises an important point about the definition of punishment. You cannot define punishment by whether the consequence appears unfavorable, unpleasant, or aversive. You can conclude that a particular consequence is punishing only if the behavior decreases in the future. In Juan's case, scolding and spanking appear to be unfavorable consequences, but he continues to hit and tease his sisters. If the scolding and spanking functioned as a punisher, Juan would stop hitting and teasing his sisters over time. When we define punishment (or reinforcement) according to whether the behavior decreases (or increases) in the future as a result of the consequences, we are adopting a functional definition. See Table 6-1 for examples of punishment.
One other point to consider is whether a behavior decreases or stops only at the time the consequence is administered, or whether the behavior decreases in the future. Juan stopped hitting his sisters at the time that he received a spanking from his mother, but he did not stop hitting his sisters in the future. Some parents continue to scold or spank their children because it puts an immediate stop to the problem behavior, even though their scolding and spanking do not make the child's problem behavior less likely to occur in the future. The parents believe they are using punishment. However, if the behavior continues to occur in the future, the scolding and spanking do not function as punishers and may actually function as reinforcers.
TABLE 6-1 Examples for Self-Assessment (Punishment)
· 1. Ed was riding his bike down the street and looking down at the ground as he pedaled. All of a sudden he ran into the back of a parked car, flew off the bike, and hit the roof of the car with his face. In the process, he knocked his front teeth loose. In the future, Ed was much less likely to look down at the ground when he rode his bike.
· 2. When Alma was in the day care program, she sometimes hit the other kids if they played with her toys. Alma's teacher made her quit playing and sit in a chair in another room for 2 minutes each time she hit someone. As a result, Alma stopped hitting the other children.
· 3. Carlton made money in the summer by mowing his neighbor's lawn each week. One week, Carlton ran over the garden hose with the lawn mower and ruined the hose. His neighbor made Carlton pay for the hose. Since then, whenever Carlton mows the lawn, he never runs over a hose or any other objects lying in the grass.
· 4. Sarah was driving down the interstate on her way to see a friend who lived a few hours away. Feeling a little bored, she picked up the newspaper on the seat next to her and began to read it. As she was reading, her car gradually veered to the right without her noticing. Suddenly, the car was sliding on gravel and sideswiped a speed limit sign. As a result, Sarah no longer reads when she drives on the highway.
· 5. Helen goes to school in a special class for children with behavior disorders. Her teachers use poker chips as conditioned reinforcers for her academic performance. The teachers place a poker chip in a container to reinforce her correct answers. However, each time Helen gets out of her seat without permission, the teachers take one chip away from her. As a result, Helen stopped getting out of her seat without permission.
· 6. At parties, Kevin used to make jokes about his wife's cooking and got a lot of laughs from his friends. At first, his wife smiled at his jokes, but eventually she got upset; whenever Kevin made a joke about her cooking, she gave him an icy stare. As a result, Kevin stopped joking about his wife's cooking.
© Cengage Learning®
What reinforces the parents' behavior of scolding and spanking the child?
Because the child temporarily stops the problem behavior after the scolding or spanking, the parents' behavior of scolding or spanking is negatively reinforced, so the parents continue to scold or spank the child in the future when he or she misbehaves.
A Common Misconception about Punishment
In behavior modification, punishment is a technical term with a specific meaning. Whenever behavior analysts speak of punishment, they are referring to a process in which the consequence of a behavior results in a future decrease in the occurrence of that behavior. This is quite different from what most people think of as punishment. In general usage, punishment can mean many different things, most of them unpleasant.
Many people define punishment as something meted out to a person who has committed a crime or other inappropriate behavior. In this context, punishment involves not only the hope that the behavior will cease, but also elements of retribution or retaliation; part of the intent is to hurt the person who has committed the crime. Seen as something that a wrongdoer deserves, punishment has moral or ethical connotations. Authority figures such as governments, police, churches, or parents impose punishment to inhibit inappropriate behavior—that is, to keep people from breaking laws or rules. Punishment may involve prison time, a death sentence, fines, the threat of going to hell, spanking, or scolding. However, the everyday meaning of punishment is quite different from the technical definition of punishment used in behavior modification.
People who are unfamiliar with the technical definition of punishment may believe that the use of punishment in behavior modification is wrong or dangerous. It is unfortunate that Skinner adopted the term punishment, a term that has an existing meaning and many negative connotations. As a student, it is important for you to understand the technical definition of punishment in behavior modification and to realize that it is very different from the common view of punishment in society.
On Terms: Punish Behavior, not People
· ▪ It is correct to say that you punish a behavior (or a response). You are weakening a behavior by punishing it. To say “The teacher punished Sarah's disruptive behavior with time out” is correct.
· ▪ It is incorrect to say that you punish a person. You don't weaken a person, you weaken a person's behavior. To say, “The teacher punished Sarah for disruptive behavior” is not correct.
Positive and Negative Punishment
The two basic procedural variations of punishment are positive punishment and negative punishment. The difference between positive and negative punishment is determined by the consequence of the behavior.
Positive punishment is defined as follows.
· 1. The occurrence of a behavior
· 2. is followed by the presentation of an aversive stimulus,
· 3. and as a result, the behavior is less likely to occur in the future.
Negative punishment is defined as follows.
· 1. The occurrence of a behavior
· 2. is followed by the removal of a reinforcing stimulus,
· 3. and as a result, the behavior is less likely to occur in the future.
Notice that these definitions parallel the definitions of positive and negative reinforcement (see Chapter 4 ). The critical difference is that reinforcement strengthens a behavior or makes it more likely to occur in the future, whereas punishment weakens a behavior or makes it less likely to occur in the future.
Many researchers have examined the effects of punishment on the behavior of laboratory animals. Azrin and Holz (1966) discussed the early animal research on punishment, much of which they had conducted themselves. Since then, researchers have investigated the effects of positive and negative punishment on human behavior (Axelrod & Apsche, 1983). For example, Corte, Wolf, and Locke (1971) helped institutionalized adolescents with intellectual disabilities decrease self-injurious behavior by using punishment. One subject slapped herself in the face. Each time she did so, the researchers immediately applied a brief electric shock with a handheld shock device. (Although the shock was painful, it did not harm the girl.) As a result of this procedure, the number of times she slapped herself in the face each hour decreased immediately from 300–400 to almost zero. (Note that this study is from 1971. Electric shock is rarely, if ever, used as a punisher today because of ethical concerns. This study is cited to illustrate the basic principle of positive punishment, not to support the use of electric shock as a punisher.)
Why is this an example of positive punishment?
This is an example of positive punishment because the painful stimulus was presented each time the girl slapped her face, and the behavior decreased as a result. Sajwaj, Libet, and Agras (1974) also used positive punishment to decrease life-threatening rumination behavior in a 6-month-old infant. Rumination in infants involves repeatedly regurgitating food into the mouth and swallowing it again. It can result in dehydration, malnutrition, and even death. In this study, each time the infant engaged in rumination, the researchers squirted a small amount of lemon juice into her mouth. As a result, the rumination behavior immediately decreased, and the infant began to gain weight.
One other form of positive punishment is based on the Premack principle, which states that when a person is made to engage in a low-probability behavior contingent on a high-probability behavior, the high-probability behavior will decrease in frequency (Miltenberger & Fuqua, 1981). That is, if, after engaging in a problem behavior, a person has to do something he or she doesn't want to do, the person will be less likely to engage in the problem behavior in the future. Luce, Delquadri, and Hall (1980) used this principle to help a 6-year-old boy with a developmental disability stop engaging in aggressive behavior. Each time the boy hit someone in the classroom, he was required to stand up and sit down on the floor ten times in a row. As shown in Figure 6-1, this punishment procedure, called contingent exercise, resulted in an immediate decrease in the hitting behavior.
One thing you should notice in Figure 6-1 is that punishment results in an immediate decrease in the target behavior. Although extinction also decreases a behavior, it usually takes longer for the behavior to decrease, and an extinction burst often occurs where the behavior increases briefly before it decreases. With punishment, the decrease in behavior typically is immediate and there is no extinction burst. However, other side effects are associated with the use of punishment; these are described later in this chapter.
Negative punishment has also been the subject of extensive research. Two common examples of negative punishment are time-out from positive reinforcement and response cost (see Chapter 17 for a more detailed discussion). Both involve the loss of a reinforcing stimulus or activity after the occurrence of a problem behavior. Some students may confuse negative punishment and extinction. They both weaken behavior. Extinction involves withholding the reinforcer that was maintaining the behavior. Negative punishment, by contrast, involves removing or withdrawing a positive reinforcer after the behavior; the reinforcer that is removed in negative punishment is one the individual had already acquired and is not necessarily the same reinforcer that was maintaining the behavior. For example, Johnny interrupts his parents and the behavior is reinforced by his parents' attention. (They scold him each time he interrupts.) In this case, extinction would involve withholding the parents' attention each time Johnny interrupts. Negative punishment would involve the loss of some other reinforcer—such as allowance money or the opportunity to watch TV—each time he interrupted. Both procedures would result in a decrease in the frequency of interrupting.
Clark, Rowbury, Baer, and Baer (1973) used time-out to decrease aggressive and disruptive behavior in an 8-year-old girl with Down syndrome. In time-out, the person is removed from a reinforcing situation for a brief period after the problem behavior occurs. Each time the girl engaged in the problem behavior in the classroom, she had to sit by herself in a small time-out room for 3 minutes. As a result of time-out, her problem behaviors decreased immediately ( Figure 6-2 ). Through the use of time-out, the problem behavior was followed by the loss of access to attention (social reinforcement) from the teacher and other reinforcers in the classroom ( Figure 6-3 ).
In a study by Phillips, Phillips, Fixsen, and Wolf (1971) , “predelinquent” youths with serious behavior problems in a residential treatment program earned points for engaging in appropriate behavior and traded in their points for backup reinforcers such as snacks, money, and privileges. The points were conditioned reinforcers. The researchers then used a negative punishment procedure called response cost to decrease late arrivals for supper. When the youths arrived late, they lost some of the points they had earned. As a result, late arrivals decreased until the youths always showed up on time.
In all of these examples, the process resulted in a decrease in the future occurrence of the behavior. Therefore, in each example, the presentation or removal of a stimulus as a consequence of the behavior functioned as punishment.
On Terms: Distinguishing between Positive and Negative Punishment
Some students have difficulty distinguishing between positive and negative punishment. Both are types of punishment; therefore, both weaken behavior. The only difference is whether a stimulus is added (positive punishment) or removed (negative punishment) following the behavior. Think of positive as a plus or addition (+) sign and negative as a minus or subtraction (−) sign. In + punishment, you add a stimulus (an aversive stimulus) after the behavior. In − punishment, you subtract or take away a stimulus (a reinforcer) after the behavior. If you think of positive and negative in terms of adding or subtracting a stimulus after the behavior, the distinction should be clearer.
Unconditioned and Conditioned Punishers
Like reinforcement, punishment is a natural process that affects human behavior. Some events or stimuli are naturally punishing because avoiding or minimizing contact with these stimuli has survival value ( Cooper et al., 1987 ). Painful stimuli or extreme levels of stimulation are often dangerous. Behaviors that produce painful or extreme stimulation are naturally weakened, and behaviors that result in escape or avoidance of such stimulation are naturally strengthened. For this reason, painful stimuli or extreme levels of stimulation have biological importance. Such stimuli are called unconditioned punishers . No prior conditioning is needed for an unconditioned punisher to function as a punisher. Through the process of evolution, humans have developed the capacity for their behavior to be punished by these naturally aversive events without any prior training or experience. For example, extreme heat or cold, extreme levels of auditory or visual stimulation, or any painful stimulus (e.g., from electric shock, a sharp object, or a forceful blow) naturally weakens the behavior that produces it. If these were not unconditioned punishers, we would be more likely to engage in dangerous behaviors that could result in injury or death. We quickly learn not to put our hands into a fire, look directly into the sun, touch sharp objects, or go barefoot in the snow or on hot asphalt because each of these behaviors results in a naturally punishing consequence.
A second type of punishing stimulus is called a conditioned punisher . Conditioned punishers are stimuli or events that function as punishers only after being paired with unconditioned punishers or other existing conditioned punishers. Any stimulus or event may become a conditioned punisher if it is paired with an established punisher.
The word no is a common conditioned punisher. Because it is often paired with many other punishing stimuli, it eventually becomes a punisher itself. For example, if a child reaches for an electrical outlet and the parent says “no,” the child may be less likely to reach for the outlet in the future. When the child spells a word incorrectly in the classroom and the teacher says “no,” the child will be less likely to spell that word incorrectly in the future. The word no is considered a generalized conditioned punisher because it has been paired with a variety of other unconditioned and conditioned punishers over the course of a person's life. Van Houten and his colleagues ( Van Houten, Nau, MacKenzie-Keating, Sameoto, & Colavecchia, 1982 ) found that if firm reprimands were delivered to students in the classroom when they engaged in disruptive behavior, their disruptive behavior decreased. In this study, reprimands were conditioned punishers for the students' disruptive behavior. Threats of harm often are conditioned punishers. Because threats have often been associated with painful stimulation in the past, threats may become conditioned punishers.
Stimuli that are associated with the loss of reinforcers may become conditioned punishers. A parking ticket or a speeding ticket is associated with the loss of money (paying a fine), so the ticket is a conditioned punisher for many people. In reality, whether speeding tickets or parking tickets function as conditioned punishers depends on a number of factors, including the schedule of punishment (how likely is it that you will get caught speeding?) and the magnitude of the punishing stimulus (how big is the fine?). These and other factors that influence the effectiveness of punishment are discussed later in this chapter.
A warning from a parent may become a conditioned punisher if it has been paired with the loss of reinforcers such as allowance money, privileges, or preferred activities. As a result, when a child misbehaves and the parent gives the child a warning, the child may be less likely to engage in the same misbehavior in the future. A facial expression or look of disapproval may be a conditioned punisher when it is associated with the loss of attention or approval from an important person (such as a parent or teacher). A facial expression may also be associated with an aversive event such as a scolding or a spanking, and thus may function as a conditioned punisher ( Doleys, Wells, Hobbs, Roberts, & Cartelli, 1976 ; Jones & Miller, 1974 ).
Once again, it is important to remember that a conditioned punisher is defined functionally. It is defined as a punisher only if it weakens the behavior that it follows. If a person exceeds the speed limit and receives a speeding ticket, and the outcome is that the person is less likely to speed in the future, the ticket functioned as a punisher. However, if the person continues to speed after receiving a ticket, the ticket was not a punisher. Consider the following example: each time a child belches loudly at the dinner table, his mother gives him a stern look, yet the child continues to belch at the table just as often as before.
The look is not a conditioned punisher because the child's behavior of belching at the table was not weakened; the child did not stop engaging in the behavior. The mother's look may have functioned as a positive reinforcer, or perhaps other family members laughed when the child belched, and thus reinforced the belching behavior. Alternatively, belching may be naturally reinforcing because it relieves an unpleasant sensation in the stomach.
Contrasting Reinforcement and Punishment
Important similarities and differences exist between positive and negative reinforcement on one hand and positive and negative punishment on the other. The defining features of each principle are that a behavior is followed by a consequence, and the consequence influences the future occurrence of the behavior.
The similarities and differences between the two types of reinforcement and punishment can be summarized as follows (a brief sketch of this decision logic appears after the list):
· ▪ When a stimulus is presented after a behavior, the process may be positive reinforcement or positive punishment, depending on whether the behavior is strengthened (reinforcement) or weakened (punishment) in the future.
· ▪ When a stimulus is removed after the behavior, the process may be negative reinforcement or negative punishment. It is negative reinforcement if the behavior is strengthened and negative punishment if the behavior is weakened.
· ▪ When a behavior is strengthened, the process is reinforcement (positive or negative).
· ▪ When a behavior is weakened, the process is punishment (positive or negative).
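Taken together, the points above amount to two questions: Was the stimulus presented or removed after the behavior, and was the behavior strengthened or weakened in the future? The short Python sketch below is not part of the text; it is only an illustrative way of laying out that decision rule, and all names in it are arbitrary.

```python
def classify_consequence(stimulus_change, future_behavior):
    """Classify a consequence using the two questions above.

    stimulus_change: 'presented' or 'removed' (what happens right after the behavior)
    future_behavior: 'strengthened' or 'weakened' (effect on future occurrence)
    """
    if future_behavior == "strengthened":
        process = "reinforcement"   # behavior strengthened = reinforcement
    elif future_behavior == "weakened":
        process = "punishment"      # behavior weakened = punishment
    else:
        raise ValueError("future_behavior must be 'strengthened' or 'weakened'")

    if stimulus_change == "presented":
        return "positive " + process   # stimulus presented after the behavior
    elif stimulus_change == "removed":
        return "negative " + process   # stimulus removed after the behavior
    else:
        raise ValueError("stimulus_change must be 'presented' or 'removed'")

# The dog-bite example discussed next: the bite is presented after reaching over
# the fence and that behavior is weakened; pulling the hand back removes the bite
# and that behavior is strengthened.
print(classify_consequence("presented", "weakened"))    # positive punishment
print(classify_consequence("removed", "strengthened"))  # negative reinforcement
```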
One particular stimulus may be involved in reinforcement and punishment of different behaviors in the same situation, depending on whether the stimulus is presented or removed after the behavior. Consider the example of Kathy and the dog. When Kathy reached over the fence, this behavior was followed immediately by the presentation of an aversive stimulus (the dog bit her). The dog's bite served as a punisher: Kathy was less likely to reach over the fence in the future. However, when Kathy pulled her hand back quickly, she terminated the dog bite. Because pulling her hand back removed the pain of being bitten, this behavior was strengthened. This is an example of negative reinforcement. As you can see, when the dog bite was presented after one behavior, the behavior was weakened; when the dog bite was removed after another behavior, that behavior was strengthened.
In the example of Otis and the hot skillet, the immediate consequence of grabbing the skillet handle was a painful stimulus. The outcome was that Otis was less likely to grab a hot skillet in the future. This is positive punishment.
How is negative reinforcement involved in this example?
When Otis used a hot pad, he avoided the painful stimulus. As a result, he is more likely to use a hot pad when grabbing a hot skillet in the future (negative reinforcement). Touching the hot skillet is punished by the presentation of a painful stimulus; using the hot pad is reinforced by avoidance of the painful stimulus.
Now consider how the same stimulus may be involved in negative punishment of one behavior and positive reinforcement of another behavior. If a reinforcing stimulus is removed after a behavior, the behavior will decrease in the future (negative punishment), but if a reinforcing stimulus is presented after a behavior, the behavior will increase in the future (positive reinforcement). You know that a stimulus is functioning as a positive reinforcer when its presentation after a behavior increases that behavior and its removal after a behavior decreases that behavior. For example, Fred's parents take his bicycle away for a week whenever they catch him riding after dark. This makes Fred less likely to ride his bike after dark (negative punishment). However, after a few days, Fred pleads with his parents to let him ride his bike again and promises never to ride after dark. They give in and give him his bike back. As a result, he is more likely to plead with his parents in the future when his bike is taken away (positive reinforcement).
Factors That Influence the Effectiveness of Punishment
The factors that influence the effectiveness of punishment are similar to those that influence reinforcement. They include immediacy, contingency, motivating operations, individual differences, and magnitude.
Immediacy
When a punishing stimulus immediately follows a behavior, or when the loss of a reinforcer occurs immediately after the behavior, the behavior is more likely to be weakened. That is, for punishment to be most effective, the consequence must follow the behavior immediately. As the delay between the behavior and the consequence increases, the effectiveness of the consequence as a punisher decreases. To illustrate this point, contrast an immediate consequence with a delayed one. A student makes a sarcastic comment in class and the teacher immediately gives her an angry look. As a result, the student is less likely to make a sarcastic comment in class. If the teacher had given the student an angry look 30 minutes after the student made the sarcastic comment, the look would not function as a punisher for the behavior of making sarcastic comments. Instead, the teacher's angry look probably would have functioned as a punisher for whatever behavior the student had engaged in immediately before the look.
Contingency
For punishment to be most effective, the punishing stimulus should occur every time the behavior occurs. We would say that the punishing consequence is contingent on the behavior when the punisher follows the behavior each time the behavior occurs and the punisher does not occur when the behavior does not occur. A punisher is most likely to weaken a behavior when it is contingent on the behavior. This means that punishment is less effective when it is applied inconsistently—that is, when the punisher follows only some occurrences of the behavior or when the punisher is presented in the absence of the behavior. If a reinforcement schedule continues to be in effect for the behavior, and punishment is applied inconsistently, some occurrences of the behavior may be followed by a punisher and some occurrences of the behavior may be followed by a reinforcer. In this case, the behavior is being influenced by an intermittent schedule of reinforcement at the same time that it is resulting in an intermittent punishment schedule. When a concurrent schedule of reinforcement is competing with punishment, the effects of punishment are likely to be diminished.
If a hungry rat presses a bar in an experimental chamber and receives food pellets, the rat will continue to press the bar. However, if punishment is implemented and the rat receives an electric shock each time it presses the bar, the bar-pressing behavior will stop. Now suppose that the rat continues to receive food for pressing the bar and receives a shock only occasionally when it presses the bar. In this case, the punishing stimulus would not be effective because it is applied inconsistently or intermittently. The effect of the punishing stimulus in this case depends on the magnitude of the stimulus (how strong the shock is), how often it follows the behavior, and the magnitude of the establishing operation for food (how hungry the rat is).
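The rat example can be made concrete with a toy simulation. The sketch below is not from the text: the response-strength model, the update sizes, and the trial count are illustrative assumptions, chosen only to show how intermittent punishment that competes with continuous reinforcement fails to suppress responding, whereas continuous punishment does.

```python
import random

def simulate_bar_pressing(punish_prob, trials=500, seed=0):
    """Toy model (illustrative assumptions, not from the text).

    Each bar press always produces food (strengthens pressing by 0.05) and
    produces a shock (weakens pressing by 0.20) with probability punish_prob.
    'strength' is treated as the probability of pressing on a given trial.
    """
    rng = random.Random(seed)
    strength = 0.9   # a well-established, food-reinforced response
    presses = 0
    for _ in range(trials):
        if rng.random() < strength:                    # the rat presses the bar
            presses += 1
            strength = min(1.0, strength + 0.05)       # food strengthens pressing
            if rng.random() < punish_prob:             # shock follows some presses
                strength = max(0.0, strength - 0.20)   # shock weakens pressing
    return presses

print("continuous punishment  :", simulate_bar_pressing(punish_prob=1.0))  # few presses
print("intermittent punishment:", simulate_bar_pressing(punish_prob=0.2))  # many presses
```

In this toy model, making the shock decrement larger (greater magnitude) or making the shock follow every press (a consistent contingency) suppresses responding again, which parallels the factors discussed in this section.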
Motivating Operations
Just as establishing operations (EOs) and abolishing operations (AOs) may influence the effectiveness of reinforcers, they also influence the effectiveness of punishers. An establishing operation is an event or a condition that makes a consequence more effective as a punisher (or a reinforcer). An abolishing operation is an event or a condition that makes a consequence less effective as a punisher (or a reinforcer).
In the case of negative punishment, deprivation is an EO that makes the loss of reinforcers more effective as a punisher and satiation is an AO that makes the loss of reinforcers less effective as a punisher. For example, telling a child who misbehaves at the dinner table that dessert will be taken away will: (a) be a more effective punisher if the child has not eaten any dessert yet and is still hungry (EO), (b) be a less effective punisher if the child has had two or three helpings of the dessert already and is no longer hungry (AO). Losing allowance money for misbehavior will: (a) be a more effective punisher if the child has no other money and plans to buy a toy with the allowance money (EO), (b) be a less effective punisher if the child has recently received money from other sources (AO).
In the case of positive punishment, any event or condition that enhances the aversiveness of a stimulus event makes that event a more effective punisher (EO), whereas events that minimize the aversiveness of a stimulus event make it less effective as a punisher (AO). For example, some drugs (e.g., morphine) minimize the effectiveness of a painful stimulus as a punisher. Other drugs (e.g., alcohol) may reduce the effectiveness of social stimuli (e.g., peer disapproval) as punishers.
Are these examples of AOs or EOs?
These are examples of AOs because in each case the drugs made punishers less effective. Instructions or rules may enhance the effectiveness of certain stimuli as punishers. For example, a carpenter tells his apprentice that when the electric saw starts to vibrate, it may damage the saw or break the blade. As a result of this instruction, vibration from the electric saw is established as a punisher. The behavior that produces the vibration (e.g., sawing at an angle, pushing too hard on the saw) is weakened.
Is this an example of an EO or an AO?
This is an example of an EO because the instruction made the presence of vibration more aversive or more effective as a punisher for using the saw incorrectly. In addition, using the saw correctly avoids the vibration and this behavior is strengthened through negative reinforcement.
Effects of Motivating Operations on Reinforcement and Punishment
An establishing operation (EO) makes a reinforcer more potent, so it increases:
· ▪ the effectiveness of positive reinforcement
· ▪ the effectiveness of negative punishment
An establishing operation (EO) makes an aversive stimulus more potent, so it increases:
· ▪ the effectiveness of negative reinforcement
· ▪ the effectiveness of positive punishment
An abolishing operation (AO) makes a reinforcer less potent, so it decreases:
· ▪ the effectiveness of positive reinforcement
· ▪ the effectiveness of negative punishment
An abolishing operation (AO) makes an aversive stimulus less potent, so it decreases:
· ▪ the effectiveness of negative reinforcement
· ▪ the effectiveness of positive punishment
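These relationships reduce to a small lookup. The sketch below is not from the text; it simply restates the summary above in code form, with illustrative names.

```python
def mo_effect(operation, stimulus_role):
    """operation: 'EO' (establishing) or 'AO' (abolishing).
    stimulus_role: 'reinforcer' or 'aversive'.
    Returns which contingencies become more effective (EO) or less effective (AO).
    """
    assert operation in ("EO", "AO") and stimulus_role in ("reinforcer", "aversive")
    direction = "more effective" if operation == "EO" else "less effective"
    if stimulus_role == "reinforcer":
        contingencies = ["positive reinforcement", "negative punishment"]
    else:
        contingencies = ["negative reinforcement", "positive punishment"]
    return direction, contingencies

# Example from the text: morphine blunting a painful stimulus is an AO for an aversive stimulus.
print(mo_effect("AO", "aversive"))
# ('less effective', ['negative reinforcement', 'positive punishment'])
```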
Factors That Influence the Effectiveness of Punishment
Immediacy
A stimulus is more effective as a punisher when presented immediately after the behavior.
Contingency
A stimulus is more effective as a punisher when presented contingent on the behavior.
Motivating operations
Some antecedent events make a stimulus more effective as a punisher at a particular time (EO). Some events make a stimulus a less effective punisher at a particular time (AO).
Individual differences and magnitude
Punishers vary from person to person. In general, a more intense aversive stimulus is a more effective punisher.
Individual Differences and Magnitude of the Punisher
Another factor that influences the effectiveness of punishment is the nature of the punishing consequence. The events that function as punishers vary from person to person ( Fisher et al., 1994 ). Some events may be established as conditioned punishers for some people and not for others because people have different experiences or conditioning histories. Likewise, whether a stimulus functions as a punisher depends on its magnitude or intensity. In general, a more intense aversive stimulus is more likely to function as a punisher. This also varies from person to person. For example, a mosquito bite is a mildly aversive stimulus for most people; thus, the behavior of wearing shorts in the woods may be punished by mosquito bites on the legs, and wearing long pants may be negatively reinforced by the avoidance of mosquito bites. However, some people refuse to go outside at all when the mosquitoes are biting, whereas others go outside and do not seem to be bothered by mosquito bites. This suggests that mosquito bites may be a punishing stimulus for some people but not others. The more intense pain of a bee sting, by contrast, probably is a punisher for most people. People will stop engaging in the behavior that resulted in a bee sting and will engage in other behaviors to avoid a bee sting. Because the bee sting is more intense than a mosquito bite, it is more likely to be an effective punisher.
FOR FURTHER READING Factors That Influence the Effectiveness of Punishment
The behavior modification principle of punishment has been studied by researchers for years. One important recommendation when using punishment is to use a reinforcement procedure in conjunction with punishment. For example, Thompson, Iwata, Conners, and Roscoe (1999) showed that punishment for self-injurious behavior was more effective when a differential reinforcement procedure was used with punishment (they reinforced a desirable behavior at the same time they used punishment for self-injurious behavior). Similarly, Hanley, Piazza, Fisher, and Maglieri (2005) showed that when punishment was added to a differential reinforcement procedure, the reinforcement procedure was more effective. Interestingly, the children in this study preferred the procedure involving reinforcement and punishment over reinforcement alone. These two studies demonstrate the importance of combining reinforcement and punishment. In an investigation of different intensities of punishment, Vorndran and Lerman (2006) showed that a less intense punishment procedure was not effective until it was paired with a more intense punishment procedure. Finally, Lerman, Iwata, Shore, and DeLeon (1997) showed that intermittent punishment is less effective than continuous punishment, although for some participants, intermittent punishment was effective when it followed the use of continuous punishment. Together, these two studies suggest that the punishment contingency and intensity are important factors in the effectiveness of punishment.
Problems with Punishment
A number of problems or issues must be considered with the use of punishment, especially positive punishment involving the use of painful or other aversive stimuli.
· ▪ Punishment may produce elicited aggression or other emotional side effects.
· ▪ The use of punishment may result in escape or avoidance behaviors by the person whose behavior is being punished.
· ▪ The use of punishment may be negatively reinforcing for the person using punishment, and thus may result in the misuse or overuse of punishment.
· ▪ When punishment is used, its use is modeled, and observers or people whose behavior is punished may be more likely to use punishment themselves in the future.
· ▪ Finally, punishment is associated with a number of ethical issues and issues of acceptability. These issues are addressed in detail in Chapter 18 .
Emotional Reactions to Punishment
Behavioral research with nonhuman subjects has demonstrated that aggressive behavior and other emotional responses may occur when painful stimuli are presented as punishers. For example, Azrin, Hutchinson, and Hake (1963) showed that presenting a painful stimulus (shock) results in aggressive behavior in laboratory animals. In this study, when one monkey received a shock, it immediately attacked another monkey that was present when the shock was delivered. When such aggressive behaviors or other emotional responses result in the termination of the painful or aversive stimulus, they are negatively reinforced. Thus, the tendency to engage in aggressive behavior (especially when it is directed at the source of the aversive stimulus) may have survival value.
Escape and Avoidance
Whenever an aversive stimulus is used in a punishment procedure, an opportunity for escape and avoidance behavior is created. Any behavior that functions to avoid or escape from the presentation of an aversive stimulus is strengthened through negative reinforcement. Therefore, although an aversive stimulus may be presented after a target behavior to decrease the target behavior, any behavior the person engages in to terminate or avoid that aversive stimulus is reinforced ( Azrin, Hake, Holz, & Hutchinson, 1965 ). For example, a child might run away or hide from a parent who is about to spank the child. Sometimes people learn to lie to avoid punishment, or learn to avoid the person who delivers the punishing stimulus. When implementing a punishment procedure, you have to be careful that inappropriate escape and avoidance behaviors do not develop.
Negative Reinforcement for the Use of Punishment
Some authors argue that punishment may be too easily misused or overused because its use is negatively reinforcing to the person implementing it ( Sulzer-Azaroff & Mayer, 1991 ).
Describe how the use of punishment may be negatively reinforcing.
When punishment is used, it results in an immediate decrease in the problem behavior. If the behavior decreased by punishment is aversive to the person using punishment, the use of punishment is negatively reinforced by the termination of the aversive behavior. As a result, the person is more likely to use punishment in the future in similar circumstances. For example, Dr. Hopkins hated it when her students talked in class while she was teaching. Whenever someone talked in class, Dr. Hopkins stopped teaching and stared at the student with her meanest look. When she did this, the student immediately stopped talking in class. As a result, Dr. Hopkins's behavior of staring at students was reinforced by the termination of the students' talking in class. Dr. Hopkins used the stare frequently, and she was known all over the university for it.
Punishment and Modeling
People who observe someone making frequent use of punishment may themselves be more likely to use punishment when they are in similar situations. This is especially true with children, for whom observational learning plays a major role in the development of appropriate and inappropriate behaviors ( Figure 6-4 ). For example, children who experience frequent spanking or observe aggressive behavior may be more likely to engage in aggressive behavior themselves ( Bandura, 1969 ; Bandura, Ross, & Ross, 1963 ).
Ethical Issues
Some debate exists among professionals about whether it is ethical to use punishment, especially painful or aversive stimuli, to change the behavior of others ( Repp & Singh, 1990 ). Some argue that the use of punishment cannot be justified ( Meyer & Evans, 1989 ). Others argue that the use of punishment may be justified if the behavior is harmful or serious enough and, therefore, the potential benefits to the individual are great ( Linscheid, Iwata, Ricketts, Williams, & Griffin, 1990 ). Clearly, ethical issues must be considered before punishment is used as a behavior modification procedure. The ethical guidelines that Board Certified Behavior Analysts must follow state that (a) reinforcement should be used before punishment is considered and (b) if punishment is necessary, it should be used in conjunction with reinforcement for alternative behavior (see Chapter 15) ( Bailey & Burch, 2011 ). Surveys show that procedures involving punishment are much less acceptable in the profession than are behavior modification procedures that use reinforcement or other principles ( Kazdin, 1980; Miltenberger, Lennox, & Erfanian, 1989 ). Professionals must consider a number of issues before they decide to use behavior modification procedures based on punishment. In addition, punishment procedures are always used in conjunction with functional assessment and functional interventions emphasizing extinction, strategies to prevent problem behaviors, and positive reinforcement procedures to strengthen the desirable behavior. (See Chapters 13–18 for further discussion of these issues.)
CHAPTER SUMMARY
· 1. Punishment is a basic principle of behavior. Its definition has three basic components: (a) a behavior occurs, (b) the behavior is followed by an immediate consequence, and (c) as a result, the behavior is less likely to occur in the future.
· 2. A common misconception about punishment is that it means doing harm to another person or exacting retribution on another person for that person's misbehavior. Instead, punishment is a label for a behavioral principle devoid of the legal or moral connotations usually associated with the word.
· 3. There are two procedural variations of punishment: positive and negative punishment. In positive punishment, an aversive stimulus is presented after the behavior. In negative punishment, a reinforcing stimulus is removed after the behavior. In both cases, the behavior is less likely to occur in the future.
· 4. The two types of punishing stimuli are unconditioned punishers and conditioned punishers. An unconditioned punisher is naturally punishing. A conditioned punisher is developed by pairing a neutral stimulus with an unconditioned punisher or another conditioned punisher.
· 5. Factors that influence the effectiveness of punishment include immediacy, contingency, motivating operations, individual differences, and magnitude.
· 6. Potential problems associated with the use of punishment include emotional reactions to punishment, the development of escape and avoidance behaviors, negative reinforcement for the use of punishment, modeling of the use of punishment, and ethical issues.