Applied behavior analysis (ABA) is a way to help people change their behavior. ABA can also be used to find out why a behavior happens or changes. ABA is based on the science of psychology.[1] ABA is used as a treatment for some psychological disorders and developmental disabilities. A treatment based on ABA is called a behavioral intervention.

ABA is based on behavior principles that were discovered through experiments. Psychologists discovered these principles by doing experiments with animals and people. ABA applies these experimental principles to real-life behavior. ABA is used to change a person's behavior and to make their socially important behavior better. Socially important behavior is behavior that matters in a person's everyday life. Improving this behavior can increase the quality of a person's life.

History

Scientific experiments and articles about ABA are published in the Journal of Applied Behavior Analysis (JABA). This journal was founded in 1968. Because of this, many people think ABA started in 1968. In the first issue of JABA, Baer, Wolf, and Risley wrote an article explaining what applied behavior analysis is. This article is still used today to define what ABA is. Even though many people think ABA started in 1968, the ideas behind ABA are much older.

Behaviorism

The scientific study of behavior is the basis of ABA. Psychologists who study the way the environment influences behavior are called behaviorists. Behaviorists believe that behavior is caused by things that happen in the environment, called external events. An external event can be any change in the environment. Behaviorists study the external causes of behavior instead of a person's feelings. Feelings are internal because they happen inside a person's body, and they cannot be seen just by looking at the person. Since feelings cannot be seen, behaviorists think they cannot be studied scientifically and without bias.

Psychologist John B. Watson founded behaviorism in 1913[2] and is regarded as one of the first psychologists to study the applied science of behavior.[3] He thought that internal feelings should not be studied at all. He also thought that behavior is shaped by the environment over time, based on the outcomes of a person's actions. He believed all behavior is learned and can be changed. If a person gets something good as a result of a behavior, they are more likely to repeat that behavior. If something bad happens as a result of a behavior, the person is less likely to do it again. This idea is used a lot in ABA.

One of the most famous behaviorists was the psychologist B. F. Skinner. Skinner first published his ideas about behaviorism in 1938, and he wrote about behaviorism throughout his career. In 1945, Skinner came up with the concept of radical behaviorism.[4] Radical behaviorism is different from earlier behaviorism because it recognizes that internal feelings exist and can affect behavior. But it says that people should only focus on behavior, because behavior can be observed. Skinner thought that behavior is caused partly by the environment and partly by genetics. He came up with behavior principles that were based on the results of his experiments. These behavior principles are still used a lot in ABA. He was also involved with the founding of JABA. He is often believed to be one of the founders of applied behavior analysis. But he never actually applied his behavior principles to social situations the way ABA does, so he is not considered the founder of ABA.[4] He still made great contributions to ABA, because he discovered some of the behavior principles that ABA uses.

Other factors in the development of ABA

Wilhelm Wundt established the first psychology laboratory in Leipzig, Germany, in 1879.[5] The founding of an experimental laboratory was an important moment in the history of psychology, because psychologists could now study behavior scientifically. Since ABA is based on the science of behavior, the founding of experimental laboratories was very important for ABA.

Ivan Pavlov was a Russian scientist who studied physiology and behavior. He studied respondent behavior. Respondent behavior is behavior that is involuntary. It is caused by physiological reactions of the body. A physiological reaction is an automatic bodily response, such as salivation or an increased heart rate. In 1897, Pavlov started to do experiments about respondent behavior. From these experiments he came up with the theory of classical conditioning, which is also called Pavlovian conditioning. A stimulus is a physical event. A neutral stimulus is a stimulus that does not cause a physiological reaction. Classical conditioning says that if a neutral stimulus keeps being paired with a stimulus that causes a physiological reaction, the neutral stimulus will eventually cause the physiological reaction by itself. To pair two stimuli, a person makes them happen at the same time, so that they become associated with each other. In this way a stimulus that starts out neutral can be changed into a stimulus that causes a physiological reaction. Classical conditioning is based on the idea that pairing stimuli together can change behavior. Pavlov's work was important to the development of ABA. It was one of the first times that behavior was studied scientifically, and Pavlov was one of the first people to show that behavior could be changed by the environment.

ABA emerged when experimental behavior principles were applied to social behavior. Several people began applying these principles to social behavior. One of the most famous was Ole Ivar Lovaas, a psychologist. In the 1960s, Lovaas became one of the first people to use applied behavior analysis as a treatment for autism.[6] ABA is now one of the most widely used treatments for autism, but this was not always the case. When Lovaas began using ABA to treat children with autism, he changed the way developmental disorders were treated and made the treatments much better. Lovaas also thought that treatments would be more successful if they were started early in a child's life. The idea of starting treatment with a child as soon as possible is called early intervention. Early intervention is still thought to be very important in ABA.

One of the more recent developments in applied behavior analysis is the founding of the Behavior Analyst Certification Board (BACB). The BACB is a non-profit corporation that was started in 1998. The BACB certifies behavior analysts. In order to become a behavior analyst, a person must complete special schooling and then pass an exam. Once they pass this exam they become a board certified behavior analyst (BCBA). The BACB sets standards to make sure BCBAs are good at their jobs. It makes sure that BCBAs use ethical, legal, and professional practices when they work.[7]

Characteristics

There are seven characteristics of ABA. They were described in the article "Some Current Dimensions of Applied Behavior Analysis" by Baer, Wolf, and Risley. The article was published in the first issue of the Journal of Applied Behavior Analysis in 1968.[8] A treatment that is based on ABA must have all seven characteristics.

Applied

The term applied means that the behavior being studied must be socially important. Socially important behavior is behavior that happens in everyday life. Bad social behavior will hurt a person or the people around them. Making a person's social behavior better will help them. If a treatment does not try to change a behavior that is socially important then it is not applied.

Behavioral

A behavior is anything that a person does. ABA has to be based on behaviors. What a person actually does is the thing that is studied. A person's feelings are not studied in ABA. Feelings are not studied because they are internal so they cannot be measured. ABA treatments measure rates of behavior. A rate is how often the behavior happens.

Analytic

For a treatment to be analytic, the behavior analyst has to have control over the behavior. This means they have to be able to make the behavior happen at certain times. There are two ways that behavior analysts can prove they have control over a behavior. Both ways involve the use of an experimental variable. Behavior analysts change a part of the environment that they think will cause a change in behavior. The part of the environment that they change is the experimental variable. Behavior analysts have to show that the variable they used is what caused the change in behavior. To do this, they compare data from before the variable was applied to data collected after it was applied. The data from before the experimental variable is applied is called baseline data.

The first way to prove control is called the reversal technique. With the reversal technique, an experimental variable is added. Then the behavior is measured to see if the variable made it change. If a change in behavior is found, it could be because of the experimental variable, but it could also have been caused by something else. To prove it was the experimental variable that changed the behavior, the variable is taken away. Taking the variable away is the reversal. The behavior is measured again after the reversal and compared to its original rate, which is the rate measured during baseline. If the behavior goes back to its original rate, the variable caused the change. If it does not go back to the original rate, another factor caused the change in behavior.
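
Here is a minimal sketch, written in Python, of the kind of comparison the reversal technique makes. The rates and the cutoff of 2 are made up for the example.

 # Made-up rates of a behavior (times per hour) for this example.
 baseline_rate = 12     # before the experimental variable is added
 treatment_rate = 3     # while the experimental variable is in place
 reversal_rate = 11     # after the experimental variable is taken away
 # The behavior changed while the variable was in place, and it went
 # back close to the baseline rate after the reversal, so the variable
 # most likely caused the change.
 changed = abs(baseline_rate - treatment_rate) > 2
 returned = abs(baseline_rate - reversal_rate) <= 2
 if changed and returned:
     print("The experimental variable most likely caused the change.")
 else:
     print("Another factor may have caused the change in behavior.")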

The second way to prove control is called the multiple baseline technique. This is used when the behavior cannot be reversed. Baseline rates for different behaviors are taken. The behavior analyst then adds an experimental variable to the different behaviors one at a time. They study the effect the variable has on the rates of behavior. If a single experimental variable changes the rates of multiple different kinds of behavior it will help to prove control.

Both of these techniques help behavior analysts prove reliability. Reliability is the likelihood that a certain thing caused a change in behavior. If something has high reliability it is thought that the thing was the reason for a change in behavior. To show reliability behavior analysts must be able to analyze behavior. To analyze behavior they must have control over the behavior.

Technological

Baer, Wolf, and Risley use the word technological to describe the amount of explanation necessary for a treatment. A behavioral intervention must be explained very well. If an intervention is not explained well it is not technological. Anyone with the proper training who reads a treatment plan should be able to do the intervention and get the same results. A treatment plan must define the behavior being studied and all of the things that could happen during the behavioral intervention. ABA relies on technological treatments because they are universal. Universal means that they can be used in different places by different people and get the same results.

Conceptually systematic

For an ABA treatment to be conceptually systematic it must be related to behavior principles that are already known. ABA relies on behavior principles that were tested in experimental laboratories. They are known to be true because they have been scientifically tested. Using principles that are already known makes ABA more effective.

Effective

This characteristic says that behavioral interventions must produce good results to be used in ABA. The purpose of an applied intervention is to improve an aspect of social behavior. If an intervention does not improve a person's socially important behavior, it is not effective. Interventions that are not effective should not be used.

Data must be taken to measure how much the behavior changes. It can be difficult to tell how much change makes an intervention effective. Different people can have different opinions about what rate of behavior is acceptable. The people who have to deal with the behavior decide if an intervention is effective or not.

Generality

Generality is a term used to show how meaningful a behavior change is. A behavior change has high generality if:

1. The change keeps happening for a long time after the intervention is over.
2. The change happens in multiple environments.
3. The change causes a change in other behaviors.

Behavior analysts want behavior changes to have high generality. High generality means that the change in behavior was meaningful. Behavior changes that have high generality are better than changes with low generality.

Definitions and concepts

The following terms and ideas are used in applied behavior analysis. It is important to understand these ideas when using ABA-based interventions. All of the following definitions come from Behavior Modification: Basic Principles.[9]

Problem behavior

In ABA, a behavior that is bad is called a problem behavior. Problem behavior is something that hurts the person doing it or hurts other people around them. It could be a behavior that physically harms a person. It could also be a behavior that causes emotional harm. A problem behavior makes it harder for a person to learn new things. A lot of ABA treatments try to decrease problem behaviors. 

Operant behavior

Operant behavior is the opposite of respondent behavior. Operant behavior is voluntary and can be changed. A person has control over their own operant behavior and decides which behaviors they want to do. Respondent behavior cannot be controlled in this way. ABA studies operant behavior more than respondent behavior because operant behavior can be controlled. If a behavior can be controlled, it can be changed.

Three-term contingency

The three-term contingency is a concept for understanding operant behavior that was first used by B. F. Skinner. It is still used in ABA. The three-term contingency says that every behavior has an antecedent and a consequence. An antecedent is what happens right before the behavior. The antecedent can change the chance that a behavior will happen. Some antecedents make a behavior more likely to happen, and others make it less likely. The consequence is what happens after the behavior. It is the response to the behavior. So an antecedent happens before a behavior and a consequence happens after it. Keeping track of antecedents, behaviors, and consequences is called taking ABC data. If the same thing always happens before or after a behavior, it can help behavior analysts find the cause of that behavior. ABC data is not enough to prove the cause of a behavior. Behavior analysts have to use experimental variables to prove the cause. But ABC data can be used to find things that may be causing a behavior, and those possible causes can then be tested using experimental variables.
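
Below is a minimal Python sketch of what one line of ABC data could look like. The antecedent, behavior, and consequence in it are invented for illustration.

 # One made-up ABC data record.
 abc_record = {
     "antecedent": "Teacher asks the student to start a worksheet",
     "behavior": "Student pushes the worksheet off the desk",
     "consequence": "Teacher takes the worksheet away",
 }
 # Many records like this can suggest a possible cause of the behavior,
 # which then still has to be tested with an experimental variable.
 for part in ("antecedent", "behavior", "consequence"):
     print(part, "->", abc_record[part])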

Operational definition

An operational definition tells people exactly what a target behavior looks like. The target behavior is the behavior that someone is trying to change. The operational definition of the target behavior needs to be clear and exact. People who have never seen the target behavior before should be able to tell when it happens based only on the operational definition. A good operational definition makes the study of behavior better.

Reinforcement

When something "increases" the likelihood of a response happening again, it is called a reinforcer. The likelihood of a response is the chance that it will happen. If the likelihood is higher, there is more of a chance the behavior will happen. A reinforcer is often thought of as a reward, but there is an important difference between a reward and a reinforcer. A reward is anything that is given after a response happens. A reward is only a reinforcer if it increases the rate of the response in the future. If it does not increase the rate of response, it is not a reinforcer. Not all reinforcers work for all people. So it is important to try different types of reinforcers to find out what works best for a person.

Primary reinforcers are reinforcers that fill a biological need. The four primary reinforcers are food, water, warmth and sex. The ability for them to change behavior is genetic and natural. This means it happens in every person and does not need to be learned. Since primary reinforcers do not need to be learned they are also called unconditioned reinforcers.

Reinforcers that do not fill biological needs are called secondary reinforcers. Secondary reinforcers are not naturally reinforcing. A person needs to learn that they are reinforcing. They learn that they are reinforcing because they are paired with other things that are known to be reinforcing. They are also called conditioned reinforcers.

A discriminative stimulus (SD) is something that tells a person that reinforcement is available. The presentation of an SD should cause a response. It will only cause a response if the person makes the connection between the SD and the reinforcement.

Reinforcement should be given right after the behavior happens. If it is not given right away the person may not realize what they are getting reinforced for. The sooner a reinforcer is given the more effective it will be.

Reinforcement can either be positive or negative.

Positive reinforcement

Positive reinforcement (SR+) is when the addition of a stimulus leads to an increase in responding. So something is added to the environment after a behavior happens. This increases the likelihood the person will do that behavior again. Positive is not the same as good. Positive just means that something is added to the environment. There are four different types of positive reinforcement:

1. Tangible reinforcers: Tangible reinforcers are things that can be held. Toys or food would be tangible reinforcers. Tangible reinforcers can be expensive. They should not be the first choice for positive reinforcement.

2. Social reinforcers: Social reinforcement is one of the best types of positive reinforcement. It is good because it is not expensive. It is also easy to use. Social reinforcers are things that have social value or meaning. Examples of social reinforcers would be awards, praise, or compliments. Social reinforcement can be very effective.

3. Activity reinforcers: Activity reinforcers are events or activities that a person can earn. Examples would be getting to play a game, going to the movies, or earning a break. Activity reinforcers can work very well if the person really wants to earn the activity. But an activity often cannot be given right after the behavior happens, which results in a delay of reinforcement. A reinforcer is less effective if it is given a long time after the behavior happens.

4. Token reinforcers: A token reinforcer is a neutral stimulus that can be traded for an object or activity. The objects or activities the tokens can be traded for are known as back-up reinforcers. The tokens do not have value by themselves. They are only valuable because the person knows they can trade them for something they want to earn. Token reinforcers are similar to money. They can be very effective because the person gets to choose the reward they want. They are also effective because they can be given right after the behavior happens, so there is no delay in reinforcement.
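
The short Python sketch below shows the idea behind token reinforcers. The number of tokens, the back-up reinforcers, and their prices are made up for the example.

 # Made-up token economy: tokens are earned for correct responses
 # and traded later for back-up reinforcers.
 prices = {"extra playtime": 5, "small toy": 8}   # back-up reinforcers
 tokens = 0
 for response in range(6):   # six correct responses, each earns a token
     tokens += 1
 choice = "extra playtime"
 if tokens >= prices[choice]:
     tokens -= prices[choice]
     print("Traded tokens for", choice)
 print("Tokens left:", tokens)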

Negative reinforcement

Negative reinforcement (SR-) happens when the removal of a stimulus increases the likelihood a person will do a behavior in the future. In this case the term negative does not mean bad. The term negative means that something from the environment is removed. The stimulus that is removed from the environment is something aversive. This means that the person does not like it. People will want to do a behavior if they know that something they do not like will be taken away.

Reinforcement schedules

Reinforcers are given based on schedules of reinforcement. The schedule explains how reinforcement can be earned. Behavior analysts decide which reinforcement schedule is best. Different schedules are useful for different things. The schedule used depends on the person and on the behavior that is being reinforced.

Continuous reinforcement schedules

A continuous reinforcement schedule (CRF) reinforces a person every time they do the target behavior. This leads to high rates of response. CRF is good for teaching a person new skills. A high response rate helps the person get better at the new skill.

There are a few problems with CRF schedules, though. The rates of behavior are not long-lasting: if the person stops being reinforced, they will stop doing the target behavior. Another issue is satiation. A person becomes satiated when they have received the reinforcer so many times that they no longer want it. The reinforcer must be changed when using CRF so the person does not become satiated. CRF schedules should be changed into intermittent schedules over time.

Intermittent reinforcement schedules

CRF schedules reinforce a person every time they perform a target behavior. Intermittent schedules of reinforcement only reinforce the target behavior some of the time. The four most common types of intermittent schedules are listed below, followed by a short sketch showing how two of them could be written as rules:

1. Fixed-ratio schedules: Fixed-ratio (FR) schedules give reinforcement after a certain number of responses. Once a person does the target behavior a certain number of times, they get reinforcement. On an FR 5 schedule, reinforcement is given after every 5 responses. On an FR 15 schedule, reinforcement is given after every 15 responses. FR schedules produce high and steady rates of response.

2. Variable-ratio schedules: With variable-ratio (VR) schedules, the number of responses needed for reinforcement changes. The number of responses needed is different every time, but it is based on an average. With a VR 10 schedule, reinforcement is given after an average of 10 responses. This means that reinforcement could be given after the 2nd, 14th, 7th, 9th, or 18th response. If it is given after the 4th response one time, it may be given after the 12th response the next time. It is random and not predictable. VR schedules also produce high and steady rates of response.

3. Fixed-interval schedules: With fixed-interval (FI) schedules, reinforcement is given for the first response that happens after a fixed amount of time has passed. An FI 10 schedule means that the first response after 10 minutes have passed is reinforced. Reinforcement is not given until the 10 minutes have passed. With FI schedules there are low rates of responding at the beginning of the time interval and higher rates at the end, so the rate of response is not steady.

4. Variable-interval schedules: Variable-interval (VI) schedules give reinforcement for the first response after a variable amount of time has passed. The amount of time that needs to pass before reinforcement is available changes each time, but it is based on an average time period. A VI 10 schedule gives reinforcement for the first response after an average of 10 minutes has passed. The response rate for VI schedules is higher than for FI schedules, but not as high as for fixed-ratio or variable-ratio schedules.
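
Here is a minimal sketch, in Python, of how a fixed-ratio and a variable-ratio schedule could be written as simple rules. The schedule sizes are only examples.

 import random
 def fixed_ratio_reinforce(response_count, ratio=5):
     # FR 5: reinforce after every 5th response.
     return response_count % ratio == 0
 def variable_ratio_requirement(average_ratio=10):
     # VR 10: the number of responses needed changes every time,
     # but it averages out to about 10.
     return random.randint(1, 2 * average_ratio - 1)
 # Example: check the first 15 responses on an FR 5 schedule.
 for count in range(1, 16):
     if fixed_ratio_reinforce(count):
         print("Reinforce after response", count)
 print("Next VR requirement:", variable_ratio_requirement())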

Differential reinforcement

Differential reinforcement is a way that people can learn to make correct responses over time. The correct behavior is reinforced and the incorrect behavior is not. Over time people learn to make the correct response because they want to get the reinforcement. Differential reinforcement teaches a person the difference between an SD and a non-discriminative stimulus (S-delta). An SD signals that reinforcement is available. An S-delta does not tell the person anything about the availability of reinforcement. An S-delta is not likely to produce responses, because reinforcement is not given for them.

It is also used to decrease problem behavior. There are four types of differential reinforcement that can be used to decrease a problem behavior.

DRL stands for differential reinforcement of a "low-rate" of response. DRL is used when the problem behavior happens at very high rates. The goal of a DRL program is to get the behavior to happen at a low rate. Before starting a DRL program, a target frequency needs to be set. The target frequency is how often the behavior should happen at the end of the program. It is the end goal. A DRL program uses reinforcement to decrease rates of behavior a little at a time. Guidelines are set on how many behaviors are allowed to happen in a time period. If the person has fewer than the number of behaviors allowed, they get reinforcement. If they have more behaviors than the number that is allowed, they do not get the reinforcement. The number of behaviors allowed in the time period decreases with time, so the person must have fewer behaviors if they want to keep getting reinforcement. The number keeps decreasing until the person reaches the target frequency. DRL is helpful when trying to decrease a behavior but not get rid of it completely.

DRO stands for differential reinforcement of "other" behavior. With DRO, a person gets a reinforcer if they do not do the problem behavior for a set amount of time. If they have a problem behavior at any time during the period they will not get reinforcement. The time period will increase as the program goes on. This means the person will have to go longer without doing the problem behavior.
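
The small Python sketch below shows the decision rules behind DRL and DRO. The counts and the allowed number are made up for the example.

 def drl_reinforce(behavior_count, allowed):
     # DRL: reinforce only if the problem behavior happened fewer
     # times than the number allowed in the time period.
     return behavior_count < allowed
 def dro_reinforce(behavior_count):
     # DRO: reinforce only if the problem behavior did not happen
     # at all during the time period.
     return behavior_count == 0
 # Example: 3 problem behaviors this period, 5 allowed under DRL.
 print(drl_reinforce(3, allowed=5))   # True: reinforcement is given
 print(dro_reinforce(3))              # False: no reinforcement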

DRI stands for differential reinforcement of "incompatible" behavior. A DRI program reinforces a behavior that is incompatible with the problem behavior. An incompatible behavior is one that stops the person from being able to do the problem behavior at the same time. For example, a person cannot keep their hands in their pockets and hit someone at the same time. The incompatible behavior increases because it is being reinforced. If the incompatible behavior increases, the problem behavior has to decrease.

DRA stands for differential reinforcement of "alternative" behavior. In a DRA program, the person is taught an alternative behavior that they can do instead of the problem behavior. The alternative behavior serves the same purpose as the problem behavior, so the person gets the thing they want by doing the alternative behavior. The alternative behavior is reinforced and the problem behavior is not, so the alternative behavior replaces the problem behavior. Before a DRA program is used, the reason for the problem behavior must be understood. If the behavior analyst does not know why the person is doing the problem behavior, they cannot find an alternative behavior for them. This is because the alternative behavior must be something that lets the person get what they want. So the behavior analyst must first find out exactly what the person is trying to get by doing the problem behavior.

Extinction

Extinction is a way to decrease rates of problem behavior. Sometimes a problem behavior will be reinforced by accident. In order to fix accidental reinforcement the behavior has to be put through extinction. A problem behavior can only go through extinction if it has been reinforced before. To put a behavior through extinction the person must stop getting the reinforcement they want when they do the problem behavior. The problem behavior will decrease because the person will learn that this behavior no longer gets them what they want. Extinction can take a long time especially if the problem behavior has been reinforced for a long time. A DRA program should be used at the same time as an extinction program. This will make the extinction program more effective.

Punishment

Reinforcement is always tried first when changing a behavior. Reinforcement produces long-lasting results and is better for the person. However, reinforcement does not always reduce problem behavior. If reinforcement does not work, it may be necessary to try punishment. Punishment is effective, but it can have bad consequences. It can cause emotional harm if it is not used correctly.

Punishment is any response to a behavior that "decreases" the likelihood of a behavior happening in the future. Punishment of a problem behavior should be used with reinforcement of the appropriate behavior. Appropriate behavior should always be reinforced even when using punishment for problem behavior. Just like reinforcement, punishment can be both positive and negative.

Positive punishment

Positive punishment is the addition of a stimulus that decreases the chance of future responses. The stimulus that is added is something aversive, which means the person does not like it. Adding something that a person does not like makes them less likely to repeat the problem behavior.

Negative punishment

Negative punishment is the removal of a stimulus that decreases the chance of future behaviors happening. The stimulus that is removed is something that the person likes. Taking away something that a person likes will make them less likely to repeat the problem behavior.

Prompting

Prompting increases the likelihood that a behavior will be done the right way. Using prompts makes the learning process easier and quicker. Someone who is teaching a new behavior can use prompts to help the person who is learning it. The two broad categories of prompts are stimulus prompts and response prompts.

Stimulus prompts

With stimulus prompts, the teacher changes the antecedent to make a correct response more likely to happen. So they change something in the environment before the learner responds. This change is meant to help the learner make the correct response, because it helps the learner notice the discriminative stimulus. Some examples of stimulus prompts would be lists or pictures that help the learner remember what the correct response is.

Response prompts

Response prompts do not involve a change in the antecedent. Instead the teacher helps the learner with the actual behavior. There are three types of response prompts.

The first type is verbal prompts. Verbal prompts are instructions that the teacher gives to the learner. Verbal prompts are useful for teaching new behaviors.

The second type of response prompt is a gestural prompt. A gestural prompt is when a teacher uses some kind of movement of their body parts to show the learner the correct response. The most common gestural prompt is pointing to the correct response.

The third type of response prompt is a physical prompt. A physical prompt is when the teacher touches the learner in order to help them perform the correct behavior. Physical prompts are often used when teaching new skills that involve movement of the body. If learners have serious disabilities they may always need physical prompts.

Fading

Fading prompts is a way to increase the independence of the learner. Once the learner can do the behavior on their own they will not need the help of the teacher. So, once the behavior is learned the teacher should stop using prompts. If a teacher stops using prompts quickly it may confuse the learner. So prompts should be decreased slowly. The process of gradually decreasing prompting is called fading.

Fading is easier when weak prompts are used. Verbal and gestural prompts are weaker; physical prompts are the strongest kind of prompt. A teacher should always use the weakest prompt that still helps the learner, and should only use a stronger prompt if they have to, because strong prompts are harder to fade. How much a teacher can fade prompts depends on the learner. Learners with serious disabilities may always need some form of prompting to make correct responses.

Shaping

Shaping is a way to teach new behaviors or improve behaviors that have already been taught. It works by reinforcing successive approximations of a target behavior until the end behavior, called the terminal behavior, is reached. The terminal behavior is the end goal. Successive approximations are small steps or improvements that bring the person closer to the terminal behavior. The behavior improves slowly as new successive approximations are reached. As the person improves, they are only reinforced for the newest successive approximation. Once they improve the behavior, the old version of the behavior is no longer reinforced. This keeps happening until they reach the terminal behavior.
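
Below is a minimal Python sketch of shaping, using a made-up example where a person is learning to speak more loudly. The loudness numbers and the way the criterion moves up are invented for illustration.

 # Made-up example: shaping louder speech. Loudness is a number,
 # and the terminal behavior is a loudness of 10.
 criterion = 4                        # current successive approximation
 terminal = 10                        # the end goal
 attempts = [3, 4, 5, 5, 7, 8, 10]    # invented attempts over time
 for loudness in attempts:
     if loudness >= criterion:
         print("Reinforce attempt at loudness", loudness)
         # Only the newest approximation keeps being reinforced,
         # so the criterion moves up toward the terminal behavior.
         criterion = min(loudness + 1, terminal)
     else:
         print("No reinforcement for loudness", loudness)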

Chaining

Complex behaviors are made up of smaller simple behaviors. This is what the concept of behavior chains is based on. A behavior chain is when smaller behaviors are combined to form a complex behavior. These smaller behaviors are done in a certain order. Each of the smaller behaviors is a step in the behavior chain. A behavior chain is usually presented as a list of steps.

Each step in a behavior chain is considered a response on its own. Each response serves as the discriminative stimulus (SD) for the next step. The SD reminds the person of the next step in the chain. It also reminds them that reinforcement will be available at the end of the chain. Chaining a behavior is a good way to teach new skills because it breaks the behavior down into smaller steps. There are three different ways to teach a behavior chain:

Forward chaining is the most common way to teach a behavior chain. The chain is taught starting with the first step. Once the learner can do the first step, they move on to the second step. This continues until they have completed all of the steps. The learner learns the steps in order and does not move on to the next step until they can do the step they are on. Forward chaining can take a long time because each step needs to be taught on its own.

Backward chaining is the opposite of forward chaining. With backward chaining, the last step is taught to the learner first. Once they can do the last step, they move on to the second-to-last step. This continues until they reach the first step, and by then they have learned all of the steps. The steps are not actually done backwards. The behavior is done normally, starting with the first step. The teacher helps the learner do all of the steps until they get to the last step, and then the learner does the last step by themselves. Next, the teacher helps with all of the steps up to the second-to-last step, and the learner does the last two steps by themselves. This continues until the learner can do the entire chain on their own. There are some advantages to backward chaining. The learner gets help with the whole behavior chain before they have to do it by themselves. The first steps are repeated a lot, so they are easier for the learner to remember. The learner also gets reinforcement quickly in the beginning, because at first they only have to complete one or two steps before they get reinforcement. This makes them want to learn the behavior more.

Total task presentation is the third way to teach a behavior chain. It can also be called the total task procedure. With forward and backward chaining, only one step in the behavior chain is taught at a time. With total task presentation, the learner completes every step in the chain, in the correct order, every time they do the program. Total task presentation requires that the learner knows how to do all of the steps in the chain before starting the program. The learner may make more mistakes at first, but they learn the behavior more quickly, because they do every step of the chain every time and get more practice. Teachers may have to prompt the learner if they need help on any of the steps.
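
Here is a small Python sketch of a behavior chain written as a list of steps, and of which steps the learner does alone during backward chaining. The hand-washing steps are a common teaching example, not taken from the source.

 # A made-up behavior chain for washing hands, written as steps.
 chain = ["turn on water", "wet hands", "use soap",
          "rub hands", "rinse hands", "dry hands"]
 def backward_chaining_plan(chain, steps_learned):
     # In backward chaining the learner does the last steps alone,
     # and the teacher helps with all of the earlier steps.
     helped = chain[:len(chain) - steps_learned]
     alone = chain[len(chain) - steps_learned:]
     return helped, alone
 helped, alone = backward_chaining_plan(chain, steps_learned=2)
 print("Teacher helps with:", helped)
 print("Learner does alone:", alone)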

Analyzing behavior

ABA relies on data to make decisions about behavior. This makes decisions better because they are scientific and objective. Objective means that they are based on facts rather than feelings. To make objective decisions, behavior must be measured a lot. So behavior analysts are always collecting data about a person's behavior.

Data collection

There are different ways that behavior analysts collect data about a behavior. First, they use direct observation. Direct observation is when behavior is watched and recorded as it happens. In ABA, direct observation is often used together with naturalistic observation.[10] Naturalistic observation means the behavior is studied in its natural environment, so the behavior is observed in the place where it normally happens.

Indirect measures can also be used, but they are not as good. Indirect measures get information about a person by comparing their behavior to other people's behavior. An example would be an intelligence test, where one person's score is compared to the scores other people got. Indirect measures are not as good because everyone is different, and scores on indirect measures can also change a lot.

There are three common methods of data collection. They are frequency, interval, and time sampling.[11] These methods are used for direct observation. These methods of data collection are how the people observing the behavior actually record the behavior when it happens.

Frequency is how often a behavior happens, so frequency recording simply counts the number of times the behavior happens. Every time the behavior happens, the observer records it. Frequency data is easy to collect, so it is used a lot. Behavior analysts use frequency data to determine the rate of response. Rate of response is the number of responses in a specific time period. To find the rate of response, the frequency is divided by the total time of the observation. For example, if the target behavior happened 100 times in 10 hours, the rate of response would be 10 times per hour. Rate of response is the best measure of how often a behavior happens, and it is one of the most commonly used measures in ABA.
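
A small Python sketch of the rate calculation in the example above:

 # Rate of response = frequency divided by total observation time.
 frequency = 100        # times the target behavior happened
 hours_observed = 10    # total observation time in hours
 rate = frequency / hours_observed
 print(rate, "responses per hour")   # 10.0 responses per hour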

Interval recording breaks the observation time into smaller pieces. The total time is divided into smaller intervals. If the behavior happens at any time during one of the intervals, it is recorded, but only one behavior per interval is recorded. So the total number of behaviors is not recorded. Interval recording gives behavior analysts an idea of how often the behavior occurs, but it is not as accurate as frequency recording. It can also be hard to do, because the observer has to keep track of the intervals and must pay attention the entire time.

Time sampling methods are a little different from frequency and interval methods. They can also be called momentary time sampling, because the observer only watches the behavior for a moment. The observer only looks to see if the behavior is happening every once in a while. If the behavior is happening when the observer looks, it is recorded. If the behavior happens at any other time, it is not recorded. So the behavior is only recorded some of the time. The observer does not have to pay attention the whole time with time sampling. But because the observer is not watching the whole time, they will not record all of the behaviors that happen. For this reason it is not as accurate as frequency recording.
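
The sketch below, in Python, contrasts interval recording and momentary time sampling. The observation length, interval size, and times when the behavior happened are made up for the example.

 # Made-up seconds (within a 60-second observation) when the behavior happened.
 behavior_times = [4, 9, 23, 24, 41]
 interval_length = 10   # seconds per interval
 # Interval recording: an interval is marked if the behavior happened
 # at any time during it, no matter how many times.
 marked_intervals = set(t // interval_length for t in behavior_times)
 print("Intervals with behavior:", len(marked_intervals))   # 3
 # Momentary time sampling: only look at the end of each interval
 # and record whether the behavior is happening at that exact moment.
 check_points = range(interval_length, 61, interval_length)
 seen = [t for t in check_points if t in behavior_times]
 print("Checks where behavior was seen:", len(seen))   # 0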

Functional analysis

A functional analysis (FA) is done to find the reason for a problem behavior. The reason for the problem behavior must be known before a behavioral intervention can be designed, so a functional analysis is usually done before treatment for a problem behavior begins. Behavior analysts came up with the idea of a functional analysis in 1982.[12] Since then it has become an important part of ABA. Functional analyses use data collection to prove the reason for problem behaviors. Problem behavior is usually reinforced accidentally, and this reinforcement increases the rate of the problem behavior. FAs use this idea to find the cause of the problem behavior. A functional analysis is done like an experiment. During an FA, a behavior analyst goes through different trials with the person. Each trial puts the person in a different situation, and the behavior analyst takes data on the person's reactions. During these situations the behavior analyst reinforces the problem behavior on purpose. This is done to make the problem behavior happen like it would in real life. The behavior analyst is looking for the situations that cause the highest rates of problem behavior, because those situations show the reason for the behavior. Once the reason for the problem behavior is known, treatment can begin.
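
Below is a minimal Python sketch of the last step of a functional analysis: comparing rates of problem behavior across trial situations and picking the highest. The situation names and rates are invented; real functional analyses use carefully designed conditions.

 # Made-up rates of problem behavior (per trial) in different situations.
 rates_by_situation = {
     "adult gives attention after the behavior": 9,
     "task demand is removed after the behavior": 2,
     "toy is given after the behavior": 1,
     "person is alone, nothing happens after the behavior": 0,
 }
 # The situation with the highest rate suggests the reason
 # for the problem behavior.
 likely_reason = max(rates_by_situation, key=rates_by_situation.get)
 print("Likely reason:", likely_reason)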

Behavioral interventions

Behavioral interventions can be done without doing an FA. A functional analysis is not always done, because FAs are difficult and take a lot of time. But behavioral interventions done without a functional analysis are not as effective. Treatments based on the results of an FA are more effective.[13] FAs improve behavioral interventions for two reasons. First, if the reason for the problem behavior is known, it is easier to stop reinforcing it, so the problem behavior can be put through extinction more easily. Second, the person can learn an alternative behavior using a DRA program. The alternative behavior gets the person the same consequence as the problem behavior, and an FA is needed to know what consequence the person wants.

Use for treatment of autism

Applied behavior analysis is commonly associated with autism. This is because ABA is the only treatment for autism that has been scientifically proven to be effective.[14] For this reason, ABA is most often used as a treatment for autism. The earlier an intervention is started, the more effective it will be. That is why behavior analysts try to start treatment with children with autism when they are young. ABA has also been shown to be effective for older children and adults with autism, but it is not as effective with older people because their problem behaviors have been reinforced for a longer time.

References

  1. Cooper, John O.; Heron, Timothy E.; Heward, William L. (1987). Applied Behavior Analysis. Merrill/Prentice Hall. p. 20. ISBN 978-0-675-20223-7.
  2. Kardas, E. P. (2013). History of psychology: Making of a science. Cengage Learning.
  3. Matson, J. L., & Neal, D. (2009). Applied behavior analysis for children with autism spectrum disorders. (pp. 1-13). Springer.
  4. Morris, E. K., & Smith, N. G. (2005). B. F. Skinner's contributions to applied behavior analysis. The Behavior Analyst, 28, 99-131. Retrieved from http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2755377/pdf/behavan00002-0017.pdf
  5. Matson, J. L., & Neal, D. (2009). Applied behavior analysis for children with autism spectrum disorders. (pp. 1-13). Springer.
  6. Smith, T., & Eikeseth, S. (2011). O. Ivar Lovaas: Pioneer of applied behavior analysis and intervention for children with autism. Journal of Autism and Developmental Disorders, 41, 375-378.
  7. About the BACB. (n.d.). Retrieved from http://www.bacb.com/index.php?page=1
  8. Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1, 91-97.
  9. Lee, D., & Axelrod, S. (2005). Behavior Modification: Basic Principles (3rd ed.). Pro-Ed, Texas. ISBN 1416400583.
  10. Kazdin, A. E. (1979). Unobtrusive measures in behavioral assessment. Journal of Applied Behavior Analysis, 12, 713-724.
  11. Repp, A. C., Roberts, D. M., Slack, D. J., Repp, C. F., & Berkler, M. S. (1976). A comparison of frequency, interval and time-sampling methods of data collection. Journal of Applied Behavior Analysis, 9, 501-508.
  12. Iwata, B. A., Dorsey, M. F., Slifer, K. J., Bauman, K. E., & Richman, G. S. (1994). Toward a functional analysis of self-injury. Journal of Applied Behavior Analysis, 27, 197-209.
  13. Mace, F. C. (1994). The significance and future of functional analysis methodologies. Journal of Applied Behavior Analysis, 27, 385-392.
  14. Granpeesheh, D., Tarbox, J., & Dixon, D. (2009). Applied behavior analytic interventions for children with autism: A description and review of treatment research. Annals of Clinical Psychiatry, 21(3), 162-173.
