Generalization

Amy became fearful of all sorts of dogs. She was uncomfortable around collies and perplexed by poodles. In fact, even a poster of a chow chow scared her. This is due to a phenomenon called stimulus generalization. Amy, like all of us, generalized her fear to similar stimuli. The shape of the dog, the size of the dog, or the disposition of the dog did not matter. Anything in her “fear category” was defined as a dog, and to Amy all dogs provoked fear. By definition, generalization is the tendency of stimuli similar to the conditioned stimulus (CS) to elicit the conditioned response (CR).

We often see stimulus generalization when young children begin to learn language. Most parents have at least one embarrassing incident from when their baby finally calls her father “Dada.” Over the next few days or weeks she calls every man she sees “Dada.” This generalization is the natural progression of learning. It tends to decrease as a person gets older. However, it is still a powerful reality even for adults. I have seen many adults in the corporate world experience stimulus generalization to their own detriment.

Mr. Carlson was having angry thoughts about his new supervisor. He openly stated that he barely knew the new boss. He had no history of trouble with other supervisors. When I asked him who his new supervisor reminded him of, he shed light on his internal conflict. “Now that you mention it,” he pondered. “He looks a lot like my brother. He is such a dishonest bastard I can’t stand him!” Even in his explanation I wasn’t sure if he was talking about his brother or the new supervisor. Mr. Carlson was experiencing stimulus generalization.

Desensitization

Earlier we met Mrs. Rizzo, a woman who was overwhelmed in the presence of balloons. Her fear brought much discomfort to her work and home life. Mrs. Rizzo and I used a learning process called desensitization to decrease her fear of balloons.

Desensitization simply involved reteaching Mrs. Rizzo that balloons are not fear provoking. This reteaching took shape by having Mrs. Rizzo spend time with balloons when she did not feel anxious. This sounds so simple, but it tends to be very tricky. In Mrs. Rizzo’s case, the fact that she was coming to my office to deal with her fear of balloons caused her to feel nervous.

While we sat privately in the waiting room, I explained to her that we were going to solve her problem with balloons, and that we would solve it at her speed. This was very important to her. She was trying to trust me, but in the back of her mind she was concerned about having to confront her fear. If you think about it, she had spent most of her life avoiding her fear. By avoiding balloons she did not have to deal with her discomfort.

Mrs. Rizzo and I continued our session sitting at a round table. I asked her to teach me what it was about balloons that scared her. I softly asked questions about her discomfort.

 

Dr. Phil: How close to a balloon can you come before feeling discomfort?

Mrs. Rizzo: I can’t touch them.

Dr. Phil: I understand, touching them would make you feel uncomfortable. Could you watch a balloon on TV?

Mrs. Rizzo: Sure. It can’t hurt me.

Dr. Phil: Could you hold a photograph of a balloon?

Mrs. Rizzo: I could, it isn’t real.

Dr. Phil: Would you be okay with a balloon on a chair across a large room?

Mrs. Rizzo:  Yes, but I couldn’t go over to it. You’re not going to make me do that are you?

Dr. Phil: As I told you in the waiting room, I work for you. I will not make you do anything. Is that okay with you?

Mrs. Rizzo: It sure is. I don’t want to touch a balloon! (She said with a sour face.)

Dr. Phil: Would you feel relaxed if a balloon was sitting over there on the couch? (I pointed to the couch 15 feet away.)

Mrs. Rizzo: Would I have to touch it?

Dr. Phil: No.

Mrs. Rizzo: I could do that.

Dr. Phil: How about if the balloon was halfway between us and the couch?

Mrs. Rizzo: No, no way ... That is just too close.

 

Just the thought of the balloon 7 feet away on the floor was causing her agitation.

 

Dr. Phil: Could you hold a balloon?

Mrs. Rizzo: I don’t think so.

Dr. Phil: Could you blow up a balloon?

Mrs. Rizzo: Are you nuts! (She looked astonished.)

 

What this gentle line of questioning accomplished was a hierarchy of fear, a way of measuring and categorizing Mrs. Rizzo’s level of fear. At the top of the hierarchy, the most fear-provoking activity was blowing up a balloon; at the other end was watching a balloon on TV. The bottom of the hierarchy was the starting point of Mrs. Rizzo’s relearning, and the top of the hierarchy was Mrs. Rizzo’s goal. (At this point we didn’t even talk about the specifics of “our goal.” That could send Mrs. Rizzo running into the street yelling, “Dr. Phil is crazy, Dr. Phil is crazy ... He wants me to blow up a balloon!” That would really get the eye doctor’s staff next door talking. I simply told Mrs. Rizzo that our goal was for her to become comfortable around balloons.)

Mrs. Rizzo’s balloon fear hierarchy was pyramid shaped. At the top we have the most fear-provoking activity, while at the bottom we have the least fear-provoking activity. If we extend the pyramid, we would have all the things that cause no fear whatsoever. Sometimes we need to start dealing with a problem very far from the top of the pyramid. We start where we can start, without passing judgment.

I met with Mrs. Rizzo every few days over a two-week period. Each time we started at the highest place on the hierarchical pyramid that she was comfortable with. Then we slowly moved up the pyramid. This movement was simply a question of choice: “May I bring the balloon one foot, a half foot, or an inch closer?” If she said no, we talked about controlling our discomfort by breathing. If she said yes, I slowly moved the balloon closer. At no point was pressure used. The goal was for Mrs. Rizzo to get used to the presence of the balloon, for her to relearn the pairing of the balloon as a neutral stimulus. Mrs. Rizzo was being reconditioned to the true nature of the balloon.

The conditioned stimulus (presence of balloons) was weakened when it appeared alone so often that Mrs. Rizzo no longer exhibited the conditioned response (fear of balloons). At this point Mrs. Rizzo’s fear of balloons is said to be desensitized.

By our fourth session, Mrs. Rizzo was comfortable sitting and holding an inflated balloon. During the fifth session she was comfortable blowing air into a deflated balloon. At the end of only two weeks, Mrs. Rizzo was happily talking about the upcoming birthday party. Through the process of extinction we unpaired the initial conditioning by reconditioning a new response to balloons. Mrs. Rizzo felt triumphant.

The same process was implemented to recondition Amy with her fear of dogs. We developed a hierarchy of the contact with dogs that caused Amy’s fear. We methodically taught Amy a more comfortable relationship with dogs. Within a month, Amy and I went to the county pound to befriend dogs and cats.

 

Due to the powerful effect of classical conditioning, we parents need to keep our eyes open to what we may be accidentally teaching our children by pairing conditioned and unconditioned stimuli. Children are sponges needing to soak in information. I was in a local supermarket and heard one mom say, “I told you not to touch that. If you touch that, the policeman (pointing to a uniformed guard) will take you off to jail! Do you want to go to jail?” The guard, playing along with the mom, put his hand on his gun and leered at the toddler. It will not take many pairings such as this for a toddler to learn that “police” take you to jail and that police are scary. I point this out because this innocent-looking interaction is well learned by our children.

In classical conditioning the process of learning is thought to be “passive.” A stimulus elicits a response from the child. This response is reflexive in nature. The loud noise makes the child jump. The puff of air makes the child’s eye blink. The child is passively being influenced by the stimuli. Next we will learn about an interactive teaching method called operant conditioning.

Billy the Bomber

Billy was referred to my office by his probation officer. That in itself was a cause for alarm. You see, Billy was a scrawny, blond-haired, mouse of a child. He was nine years old on the day I met him. He was frail looking and at first glance he looked all of six. Billy had been expelled from three schools in a nine-week period. Other students’ parents had formed impromptu groups in two of the schools to demand that Billy be stopped. Things got so out of hand that the principal of the third school called the police and requested assistance from the county probation department.

Earlier in the week, when I talked to the probation officer by phone, he asked if I was willing to see this lad. “Sure, he sounds like my kind of kid,” I exclaimed. “Really?” he queried. “What did I say that makes you think this is a nice kid?” I explained:

 

I didn’t say he was a nice kid. I said he was my kind of kid. It seems to me that this kid must be brilliant. He has three principals perplexed and is able to get parents to come to school. Parents don’t even show up for teacher conferences any more. I’m looking forward to meeting him.

 

Through no fault of his own, Billy lived in a foster home. His mother had severe mental problems and his father was not known. Billy had not had contact with any family members since he was three.

Billy was brought to my office by his foster mother. He had lived with her for about three months. (He was asked to leave his last foster home due to his negative behaviors.) When I first met Billy he bopped into my office and plopped into a stuffed rocking chair. He looked me straight in the eye and announced, “So, you’re my new shrink. Why you so !@#$% fat?”

“Because I eat too much,” I answered. “How come you’re here?” (At this point I knew I was in for a great therapeutic ride; Billy was most definitely my kind of kid.)

 

Billy:  I had to come here because the !@#$% told me to.

Dr. Phil: You do what you’re told?

Billy:  Most of the time, I only *@%! up some times. How about you, are you a !@#$%& up?

Dr. Phil: Sometimes, mainly when I try hard.

Billy:  You don’t mind if I !@#$% swear?

Dr. Phil: Swear? What do you mean?

Billy:  You know fat man!

Dr. Phil: I don’t care about words very much. You can swear in here if you need to. So, how come you had to come here?

Billy:  Mr. Dickhead (probation officer) told my new mom that I had to.

Dr. Phil: How come?

Billy:  Because the kids all call me ‘Billy the Bomber.’

Dr. Phil: How come?

Billy:  ‘Cause I can spit so good.

Dr. Phil: Spit good?

Billy:  Yeah, don’t you @##@$, oh, I’m sorry. Don’t you listen?

Dr. Phil: Sure I do, but I just don’t understand.

Billy:  Man! When the kids get in my face I spit at them.

Dr. Phil: Does it work?

Billy:  Sure enough. They all fall down, or run. No one likes to get spit on. Don’t you know nothin’?

Dr. Phil: Give me a break, I’m fat. I just don’t get it. You like to spit on your friends?

Billy:  No man! I don’t have no friends. I spit on the other kids.

Dr. Phil: Oh, I get it. You don’t have any friends.

Billy:  I didn’t come here to talk about friends, I came here to talk about spittin’!

 

As it turned out, Billy had learned to keep people away by spitting at them. He had quite a record. He wasn’t allowed on any school bus. He wasn’t allowed out at recess. He wasn’t allowed in a classroom desk row. His spitting worked very well for him. He had pushed his world away. Parents in the school secretly talked about Billy the Bomber. It got back to the principal of school number two that Billy must be crazed because of the AIDS virus and that he was endangering the other children by spitting on them. The facts didn’t seem to matter. Billy didn’t have AIDS. But the rumors persisted.

Billy was in need of crisis counseling. Before Billy could attend another school, his spitting behavior had to stop. As we discussed earlier, any behavior that can be learned can be unlearned. Since Billy was in control of his behavior (spitting), desensitization through classical conditioning would not work. As you recall, in classical conditioning, the child is passively taught. In this case, Billy was actively involved in his learning.

When the responses of a child influence their surroundings, the learning is known as operant conditioning. In the model of learning called operant conditioning, the responses of the child “operate” on the environment to produce rewarding consequences.

In 1911, E.L. Thorndike explained the Law of Effect. Thorndike showed that responses may be altered by their effects on the environment. What this means in plain speak is that behaviors (responses) that lead to positive outcomes (as defined by the child) are increased, and behaviors that lead to negative outcomes are decreased. Simply, Thorndike noted what we all know: if something works for us, we do it again; if it doesn’t work for us, we stop doing it.

Let’s go back to Billy. Somehow he had learned to spit when he was upset. When I asked Billy how he started spitting, he told me, “I always have.” I’m pretty sure Billy didn’t pop out of the womb and spit at the hospital staff, at least not intentionally. Spitting, for Billy, was an adaptive behavior, one he learned to use to get a need met: “...get kids out of my face.”

I would imagine it all started something like this. Little Billy was feeling picked on by some other child. He felt frustrated. He felt angry. He wanted the other child to leave him alone, then it happened. Splat! Billy spat at the other child. The other child was not overcome with joy and he ran off to some adult to tell on Billy. Did you see the reinforcement? Billy was reinforced when the other kid got out of his face right after Billy spat at him. Billy got his needs met. Billy was rewarded (kid left) for his behavior (spitting). A behavior that is reinforced will increase.

The reward in Billy’s case was the removal of an aversive stimulus (the other kid). This is an example of negative reinforcement. By definition, negative reinforcement is the removal of an aversive (negative) stimulus that increases a response. We know it increased Billy’s response (spitting); he was raining down like El Niño.

Negative reinforcement tends to be tricky to understand. Many confuse it with punishment. But you can keep it clear in your thoughts if you remember that it is a reinforcement; it increases the likelihood of a behavior. It is a reward. The reward in negative reinforcement is the removal of the aversive stimulus.

One day, for the fun of it, when I was setting the dinner table, I placed a small piece of candy under each of my children’s plates. By the end of the meal I had forgotten about their little surprise. When one of the boys started to clear his plate he found the candy. What fun! Both boys thought that this treat should be repeated at every meal. The next morning, my youngest, then four, came to the breakfast table and tilted his bowl, spilling milk and cereal. “What are you doing?” I questioned. “Where is my candy?” he asked sadly. I had accidentally taught my four-year-old to spill his cereal, which, incidentally, he did regularly without my help. For days my kids looked under stuff at every meal. The boys were positively reinforced to look under their plates. This is an example of positive reinforcement. By definition, a positive reinforcer is a stimulus that, when presented following a response, increases the likelihood of that response. Finding the candy under the plate increased the likelihood of the children looking under the plates.

Extinction

Mrs. Messick was concerned about her four year old daughter. She explained:

 

Wendy is a very bright little girl. She is so sweet, but... she refuses to pick up after herself. It has become a huge battle. I’m getting to the point that I’m afraid to ask her to pick up her own toys. When I do she throws a fit. She yells and screams as if she is being beaten. I’m concerned that my neighbors may think that I’m beating her. But she has to learn how to pick up after herself.

 

My advice to Mrs. Messick was to ignore the screaming and hold firm to her reasonable request. This turned out to be a real chore for Mrs. Messick. The following week she told me what had happened.

 

I did just what you told me. At first I thought that this would never work. I even told my husband that you were silly. I just didn’t think that something this major could be solved so simply.

I visited all my neighbors and told them a little about my problem. I told them that we never beat Wendy and that the increased screaming over the next week was your fault. I’m sorry Dr. Phil, I just didn’t think it was going to work.

That same afternoon I told Wendy that she was a big girl and would have to pick up after herself. She seemed to accept this with no problem. Just before dinner I asked her to pick up her toys in the living room. She went nuts. In minutes she was screaming at the top of her lungs. I ignored her. After three or four minutes she was the loudest I had ever heard her. After about five minutes of this screaming she came into the kitchen and calmly asked why she had to pick up her toys. I told her again about being a big girl and her responsibility. She went nuts, again! I did what you told me and calmly walked away. She followed me— screaming! I went into the bathroom and she sat at the door crying and yelling about how picking up her toys was too hard to do all by herself. She even kicked at the bathroom door a few times. This went on for another five minutes. Then she stopped. The house was too quiet so I went looking for her. She was in her room playing with the toys that had been in the living room. She seemed just fine ... as if she had never cried at all. Over the next few days we played the same game, but only for a few minutes each time. It seems like a miracle. The last few days she just picked up her toys as if she had never had a problem. She just complains a little, just like a kid should.

 

With a big smile on her face she asked, “How do I get my husband to pick up his dirty clothing?”

 

What seemed like a miracle to Mrs. Messick is actually called extinction. Mrs. Messick extinguished Wendy’s learned behavior (the tantrum) by not reinforcing Wendy’s crying and screaming behaviors. Remember, behaviors have to be reinforced or they weaken (occur less). When mom backed down and picked up Wendy’s toys she was teaching Wendy to tantrum. The positive reinforcer of picking up Wendy’s toys was working quite well for Wendy. But Mrs. Messick did not want to teach this, so she had to extinguish that learned behavior. By not reinforcing Wendy’s crying tantrum, the tantrum stopped working for Wendy. By definition, extinction is the process in which a learned response, which is no longer reinforced, reverts to its preconditioned level.

It is also interesting to note that Wendy’s crying negatively reinforced her mom’s behavior of picking up Wendy’s toys. When mom backed down and picked up the toys she was able to turn off the aversive stimulus of Wendy’s crying. This removal of an aversive stimulus reinforced mom to continue picking up the toys. (This circular reinforcement can keep us parents up at night if we think too hard about all the layers of reinforcement in our life.)

Punishment

For most parents, punishment is the most often used behavior-controlling mechanism. While shopping in a local supermarket I observed a little boy who looked to be about four years old. His mother’s patience was wearing thin. Over the next four or five minutes her statements went from, “Not now Tommy!” to “Tommy, damn it! Do you want a spanking?”

When I got to the checkout, by chance, there was Tommy, sitting in the child seat looking angry. Mom was conducting business with the cashier when she spied Tommy reaching for the candy shelf just inches from his grasp. Slap! Like a king cobra, mom slapped Tommy’s hand without missing a beat of her checkout conversation.

Tommy recoiled, held his hand to his chest, and seemed only a little bothered by the slap. This is what most parents understand punishment to be. Tommy got his hand slapped so he should now know not to reach for candy. Specifically, Tommy’s reaching is the response that his mother found to be undesirable. The slap is an aversive stimulus designed to decrease the likelihood of the undesired response. The slap caused Tommy to replace the undesirable behavior (reaching for candy) with a different behavior, not reaching for candy (desired response).


By definition, punishment is the presentation of an aversive stimulus, following an undesired response, that decreases the likelihood of that undesired response.

The actual use of punishment is quite complicated. A few seconds after mom slapped Tommy’s hand, Tommy reached his left foot out towards the candy shelf. Mom slapped the foot and exclaimed, “Tommy, stop being a pain, just sit still!” Tommy then reached with his right foot and was able to kick the candy box off the shelf. M & M packets covered the floor. Mom got angry. Tommy got angry. The cashier seemed unfazed. As mom pushed the cart with screaming Tommy out of the store, the cashier said to me, “People should leave their damn kids at home, they give me a headache.”

Punishment is a powerful teaching tool. However, it has two major drawbacks to its effectiveness. First, for punishment to be effective it must be severe. If not, its effectiveness is only temporary. Second, punishment brings to the relationship powerful feelings such as anger and revenge which can destroy a positive learning situation. Let’s look at each of these drawbacks individually.

For punishment to be effective it must be severe; if not, the lesson is only learned for the short term. Tommy did not truly learn not to reach for the candy. If the punishment were severe enough to teach that, we would call it child abuse. My friend Stephan is a locksmith and he is very good at his job. I bring this up because he once suffered an accident to his hand. He was preparing to take out the garbage. When he was pushing the refuse down into the can, a glass jar under the top papers broke. A large piece of glass protruded up and severely cut his hand between his thumb and palm. The damage was massive, limiting the movement of his thumb. Years later I watched him while he was closing up his shop. As he was taking out the garbage I asked him what he was looking for. “I’m looking for something to push the garbage down with,” he said as he held up his damaged hand. “You won’t catch me using my hand to do it.” Stephan had learned, through the learning process of punishment, not to push garbage into a garbage can with his hand. The lesson had been well learned over fifteen years earlier.

For Stephan, the act of pushing garbage down (response) was followed by the severe pain of the glass cutting deep into his hand (punisher) causing a different behavior. He now uses something to push the garbage down (desired response). Please note, this just happened. Stephan was punished by chance, severely. Due to the severity, the lesson was well learned.

In the real world of parenting, for punishment to work, the punisher would have to be much too severe for a parent to implement. Tommy’s mom could probably teach Tommy not to reach for candy by criminally (child abuse) hurting Tommy’s little hand. Obviously, this would be outrageous. Most parents learn that punishment produces only short-term learning.

The second major undesirable effect of punishment is the emotional turmoil that can develop.

On a regular basis I have parents tell me:

I can’t believe I can’t get through to my kids.

Will my children ever learn?

What do I have to do to get through to them? I grounded them last week for the exact same problem!

And I hear children say:

My parents don’t understand ... all they do is yell at me.

My folks don’t even know who I am!

What do I have to do to get my parents to understand? They grounded me last week for the same thing. I just don’t care!

 

Mr. Knapp came to my office out of frustration. He said his fourteen-year-old daughter, Ellen, was out of control. “I can’t get her to do anything. Even when I spank her she just yells, ‘You don’t own me!’ I’ve grounded her, taken away her stuff and told her that she can’t have her driver’s license next year. Nothing seems to get through to her!”

Ellen was a bright and stubborn young lady. During our first family counseling session, she stated the situation quite clearly. “I don’t care what my mom and dad do! They can’t hurt me. I haven’t cried from a spanking since I was seven!”

What Mr. Knapp was experiencing was the disruptiveness of emotions that destroy the limited effect of punishment. The act of punishing a child brings up in the child many disruptive thoughts and feelings. These feelings tend to get in the way of our goal as parents, to teach our children appropriate social behaviors. It is common for a child to receive punishment for a misbehavior, only to stomp off to their room and spend the next few hours focusing on their parents’ behavior, the act of punishment, rather than their own inappropriate behavior.

We will discuss this more in the section, The Art of Discipline, in Chapter 4.

Timing is Very Important

Mrs. Babcock read in a children’s magazine about a parenting technique that helped toddlers pick up after themselves with little fuss. So she tried it. She made a chart on a piece of paper showing the days of the week followed by three circles. She told her daughter, Mandy (age 3), that after breakfast, lunch, and dinner it was clean up time. She explained to Mandy that if she helped clean up her toys with mommy, mommy would put a sticker on the correct circle. Mandy was very excited by the colorful stickers. Over the next two weeks Mandy was reasonably helpful in picking up her belongings.

On the two-week anniversary of the sticker chart mom couldn’t locate the stickers. She looked high and low but had to tell Mandy that she had no stickers to give. Mandy went ballistic and threw her toys at her mom. From what I was told, Mandy was an accurate little pitcher, future Baseball Hall of Fame material.

To this point we have discussed a continuous schedule of reinforcement, which simply means that every time the correct behavior is shown, the child receives a reward. Every time Mandy helped with the clean up, she received a sticker for her chart. One act of helping earned one reward sticker. This is written as 1:1 (1 to 1).

In the real world, reinforcement is seldom 1:1. In the real world reinforcement is usually partial (not 1:1). Psychologists have investigated this fact and have grouped reinforcement into schedules. There are four major schedules of reinforcement parents need to understand. Each type of reinforcement has its place in teaching our children.

Fixed-ratio schedule of reinforcement

In a fixed-ratio schedule, reinforcement is given after a fixed number of correct behaviors. So, in the example above, Mandy was reinforced on a 1:1 schedule: every time. However, if she had been given a sticker after three helpful cleanup experiences, the ratio would be 3:1 (3 behaviors to 1 reward).

Learning with a fixed-ratio schedule tends to be the quickest of all schedules. This makes sense. Mandy was able to connect the reward with the desired behavior easily. However, unlearning behavior (extinction) is also quick with a fixed-ratio schedule of reinforcement. Mandy knew right away that she wasn’t receiving any reinforcement so she stopped picking up.

Workers who get paid by the piece or by commission tend to be highly productive as long as the reinforcement is received. I once knew a commissioned car salesman who almost got into a fist fight with the sales manager when asked to vacuum the showroom. In the salesman’s eyes he was there to sell cars. He didn’t get paid to pick up around the showroom. Without the direct reinforcement both the car salesman and Mandy refused to help out around the place.

Fixed-interval schedule of reinforcement

In a fixed-interval schedule, reinforcement is given to the first desired behavior following a specific period of time. For example, Mr. Randel was a strict parent. He believed that his military training was pivotal in his success as an adult. In the Randel family, bedroom inspections were Sunday at 5 PM. At 5 PM sharp, Mr. Randel walked through his children’s rooms. If they were shipshape he would leave their allowance on their pillows. He was proud to say, “At 5:01 my children’s rooms are perfect.” He was sheepish to say, “By Monday morning the rooms look like a war zone. I can’t believe how undisciplined my children are.”

A fixed-interval schedule of reinforcement tends to show rapid learning, but the results are sporadic. The Randel children, all nine of them, cleaned their rooms Sunday afternoon. The Randel children learned to anticipate the reinforcement and were ready to receive it at 5 PM Sunday.

A fixed-interval schedule of reinforcement is used in most school testing situations. The student knows when the test is and studies (crams) just before. Right after the test the desired behavior, studying, rapidly decreases.

Variable-ratio schedule of reinforcement

My beloved wife, Mrs. Copitch, is an elementary school teacher. She has a classroom currency called Copitch Cash. If you are caught being kind, helpful, or just downright wonderful you receive $1 Copitch Cash. At the end of the week the Copitch Cash can be used in the class store or saved for bigger and better goodies. Don’t tell the students, but Mrs. Copitch wants every child to earn lots of Copitch Cash throughout the week. She knows, hopefully from all her proofreading of my work over the years, that a variable-ratio schedule of reinforcement is a powerful and long-term learning mechanism. Let’s say, for example, that Mrs. Copitch wants every child to earn at least three dollars in Copitch Cash per day. The students know that there is potential cash to be had. They just don’t know when they are going to be “caught being nice.” Sometimes they do nice things 36 times before getting caught. Other times they get caught on their third nice behavior. This schedule of reinforcement takes a little longer to teach, but the durability of the behavior is greater.

In a variable-ratio schedule, the reinforcement is given after a varying number of desired responses. In the adult world, a variable-ratio schedule of reinforcement is the key to how a casino gets gamblers to play slot machines. The slot machine is programmed to let the gambler win, get reinforcement, at a varying ratio, over a set number of plays. This reinforces the gambler to put money into the machine. The gambler knows, “This machine is just about to pay off.” A variable-ratio schedule of reinforcement is very powerful. I have had patients, after putting their rent money into a slot machine, tell me, “If I only had a few more dollars. I just know the machine was ready to pay.” What the person doesn’t see is that the machine is programmed to let you feel that the reward is just one pull away.

Variable-interval schedule of reinforcement

Jason was 16 when I met him. He had been expelled from school for smoking in the bathroom on three separate occasions. He asked his parents to help him stop smoking and, after months of failed attempts, Jason and his family were convinced that Jason had, what his father described as, an “Addictive Personality.”

Jason told me how he got started smoking. When he was 13 he thought that he had to try cigarettes. He stole a smoke from his father’s jacket. Jason whispered as he told his story.

 

I snuck out into the backyard and lit it up. It was so exciting. Getting over on my parents was great. I hated the cigarette. It was nasty and made me choke. But every night for a month I stole another.

Then my father became suspicious. He started taking his pack of cigarettes up to his room at night. Some nights he forgot. I would steal one and smoke it. This went on for months. I guess I was smoking 2 or 3 cigarettes a week. I was hooked. I now smoke a pack and a half a day. I can’t stop.

 

Without knowing it, Jason was describing a learning schedule. Every night for a month, Jason was reinforced each time he stole and smoked a cigarette (a fixed-ratio schedule of reinforcement). The reinforcement was the excitement of “getting over” on his parents. Then, when his father became suspicious and began taking his cigarette pack to his room, Jason was reinforced only at a variable interval, every few nights. Jason never knew if tonight was the night. Jason didn’t know the reinforcement schedule, but the reinforcement was powerful. Even with a nicotine patch, Jason could not stop smoking.

A variable-interval schedule of reinforcement taught Jason that the reinforcement was some unpredictable interval of time away: maybe one day, maybe six. This is why Jason was having such a hard time staying away from cigarettes, even after weeks with no reinforcement.

Schedules of reinforcement and extinction

A fixed-ratio schedule in which every desired response is rewarded is usually referred to as constant, or continuous, reinforcement. Because the learner expects the reward every time, extinction of the new behavior tends to be rapid once the reward stops. The other three schedules of reinforcement are usually referred to as partial schedules of reinforcement. Partial reinforcement is extremely effective in maintaining a behavior. The learner is not expecting a reward every time, so the behavior is not weakened as quickly when the reward is not received. Sometimes this works wonderfully, such as when your child is motivated to clean their room without you asking. Sometimes it is a disaster, as Jason found when he tried to stop smoking.

As parents, we need to be attuned to the schedules of reinforcement so that we can teach our children effectively. We must also understand schedules of reinforcement so we can use extinction effectively. As humans we are prone to frustration when what we expect does not occur. Earlier, we found Mandy throwing things at her mother when she did not get the sticker she expected. By understanding the powerful influence schedules of reinforcement have, we can be more patient and understanding when our children experience frustrating situations.

Dr. Phil’s Rule of 10:1, 100:1, or 1000:1

Schedules of reinforcement, and children over the years, have taught me that changing an undesirable behavior is very, very difficult. I have a simple but admittedly unscientific Dr. Phil Rule. I say it is unscientific because I have no empirical data for the numbers in the rule. But I have not yet found a parent who hasn’t experienced the wrath of this rule.

 

Dr. Phil’s Rule of 10:1, 100:1, or 1000:1

Usually shortened to Dr. Phil’s 10:1 Rule:

 

However many times your child has been

reinforced for an undesired behavior,

it will take at least 10 times that number

to change that behavior.

 

For example, if your child learns that when he whines you will sometimes back down, you are teaching your child to whine on a variable schedule of reinforcement.

An example of this would be when you say “no” to your darling seven-year-old and he says, “But, Mom!” or “Please, please, please...” and then, after a while, you get worn down and change your mind (usually just before you lose it). Take the number of whines you taught him to produce (variable reinforcement) and multiply it by at least ten to find the number of times he will whine before he believes whining no longer works (extinction). Use the multiplier of ten if your child is not too bright. The brighter your child, the greater the multiplier. For the average kid, multiply by 100. For a smart kid, one who will someday run the world, multiply by 1000.

What this means is that if you teach your child to whine seven times before you back down, you will have to un-teach him 70, 700, or 7000 times. It is important that parents be careful about what they inadvertently teach their children. (See Whining if you need encouragement to continue.)


How Children Learn