Applied Behavioral Analysis
Analysis of All Figures
Programs are developed and implemented in various sectors by individuals, groups, or organizations. After a program is executed, an evaluation is conducted to determine whether its goals were met. Behavioral intervention plans (BIPs) are no exception. These programs are designed to help children who struggle with challenging behavior and have been shown to improve behaviors and learning outcomes for those children. After BIPs are implemented, they should be evaluated to determine their effectiveness in meeting the desired goals.
Program evaluation requires the collection of relevant data, such as incidents of challenging behavior over a certain period and feedback from observers. These data provide the basis upon which BIPs are evaluated. Staff feedback, as in figure one, can be used to determine the staff's views on the effectiveness, challenges, and shortcomings of a BIP, while frequencies of challenging behavior, as in figure two, can help determine the extent to which the program has affected incidents of challenging behavior and thus whether the program was effective. Comparisons are made between incidents before the implementation of the program, incidents a day or a few days after implementation, and incidents several months after implementation. More data are also collected by observers, as in figure three, to provide real-time observations of the persons subject to a BIP. These data enable observers to record the number of incidents of aggressive behavior and to calculate interobserver agreement (IOA) ratios. The data in figure four involve observation of staff to determine whether they adhere to the standards set under the BIP.
Data collection and analysis are essential to the evaluation of BIPs. While data collection involves gathering information on a subject of study, data analysis refers to the process of making those data useful by generating insights from them. Data collection in the design and evaluation of a program is essential because it makes available relevant data on a program's projected or expected performance as well as on its actual performance after implementation. Data analysis can take place only once data have been made available through collection; it reveals patterns and key features of the data, such as means, medians, and correlations, that form the basis for interpretation.
Employee engagement and social validity are important concepts in Applied Behavior Analysis (ABA). Social validity determines whether, and to what extent, the parties involved in a program accept it and are satisfied with its measures and procedures (Miramontes, Marchant, Heath & Fischer, 2011). Programs that engage employees are more likely to succeed because employees feel that they are part of the program and work toward its success. The opposite is true when employee engagement is neglected. When employees and other parties to a program are fully involved, they tend to be more committed and energetic toward the program's success because engagement induces a sense of responsibility.
Analysis of Figure One
The data collected in figure one, that is, staff responses and feedback, are qualitative. Qualitative data are non-statistical or non-numerical data that describe the qualities and characteristics of variables. Quantitative data, on the other hand, can be counted, measured, and expressed in numerical form. The data in figure one are qualitative because they express staff feedback and are descriptive in nature. These data are appropriate in the context of the program evaluation because they capture responses that are otherwise difficult to express in numerical form. The purpose of the data collected in this figure is to determine the staff's views on the general effectiveness of the BIP and whether they gained new ideas on ABC. The data should be used to determine what the staff think of the program and whether it provided learning opportunities for them. The implication of these data for the program is that they can be used to determine, from a generalized point of view, the success of the program and its learning outcomes. If a majority of the staff are dissatisfied or did not learn anything new, the program may be characterized as generally ineffective.
Analysis of Figure Two
The data used in figure two are quantitative and are appropriate for measuring frequencies of challenging behavior because frequencies are countable and thus quantifiable. Visual analysis involves the direct interpretation of graphs by examining trend, variability, and level (Harrington & Velicer, 2015; Kahng et al., 2010). In this case, before the implementation of the program (the baseline), the data showed a high level, low variability, and an increasing trend in the frequency of challenging behavior. After the implementation of the program, the graph displays a high level, low variability, and a decreasing trend. From the graph, it is observed that the intervention reduced the incidents of challenging behavior, hence the decreasing trend. Although incidents increased during the first month of the intervention, they decreased gradually over the following months to fewer than fifty. From the decreasing trend, a conclusion can be drawn that the program was successful.
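The level and trend components of visual analysis can also be quantified numerically. The sketch below (using invented counts, not the figure's actual values) computes each phase's level as its mean and its trend as a least-squares slope:

```python
# Hypothetical single-case data (counts per month), invented for
# illustration: the baseline rises while the intervention phase falls.
baseline = [70, 74, 78, 83]
intervention = [85, 72, 60, 51, 44]

def level_and_trend(ys):
    """Return (level, trend): level is the phase mean, and trend is the
    least-squares slope of the counts against the observation index."""
    n = len(ys)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) \
        / sum((x - x_mean) ** 2 for x in xs)
    return y_mean, slope

print(level_and_trend(baseline))      # positive slope: increasing trend
print(level_and_trend(intervention))  # negative slope: decreasing trend
```

A positive baseline slope followed by a negative intervention slope mirrors the pattern described above: behavior worsening before the BIP and improving after it.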
Analysis of Figure Three
The data in figure three are quantitative, quantifying the frequency of a student's aggressive behavior as observed by two teacher observers. The data are relevant because they can reveal, without bias, the number of incidents of aggressive behavior and allow the interobserver agreement (IOA) ratio to be used to determine whether the results are consistent. Total count IOA is given by (smaller count / larger count) × 100 (Reed & Azulay, 2011). The total frequency of physical aggression is 19 for observer one and also 19 for observer two; thus, the total count IOA is 100%. The conventionally acceptable IOA is no less than 80%, although higher percentages are preferred (Reed & Azulay, 2011). Therefore, this IOA is within the acceptable range. Factors that lead to a high IOA include consistency in observations and the use of adequately trained and skilled observers. IOA measures are used in determining the effectiveness of intervention programs (Reed & Azulay, 2011). A high IOA means that the data collected are consistent and therefore reliable for analysis. Higher IOAs also indicate that the observers possess strong observation and data-recording skills and that the data collected reflect actual changes in behavior. A higher IOA may suggest that an intervention program was successful.
Analysis of Figure Four
The data in figure four are qualitative in nature, describing the behavior of staff and marking it as correct or incorrect. The data are appropriate because they describe behavior, yet they can also be quantified as percentages of correct and incorrect behaviors. There are a total of ten observations at baseline, of which four are correct, representing 40% correct results. One day after the intervention, there are nine correct responses out of ten, representing 90% correct results. Three months after the intervention, there are six correct observations out of ten, representing 60% correct results. The higher the percentage of correct results, the greater the success of a BIP, and vice versa. Higher percentages show that staff are maintaining treatment integrity, while lower percentages indicate a lapse in treatment integrity. Treatment integrity is important because it reflects the extent to which staff follow a program's integrity checklists. The higher the treatment integrity, the more successful a BIP becomes (Fryling, Wallace & Yassine, 2012).
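The percentage calculations above can be sketched as follows, using the counts reported for figure four (the phase labels are shorthand for the three observation points described in the text):

```python
def integrity_pct(correct, total):
    """Percentage of staff behaviors marked correct on the checklist."""
    return correct / total * 100

# Correct observations out of ten, per phase, as reported for figure four.
phases = {"baseline": 4, "one day post": 9, "three months post": 6}
for phase, correct in phases.items():
    print(phase, integrity_pct(correct, 10))  # 40.0, 90.0, 60.0
```

The drop from 90% one day after the intervention to 60% three months later is the kind of pattern that would prompt a closer look at how well treatment integrity is being sustained over time.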
References
Fryling, M. J., Wallace, M. D., & Yassine, J. N. (2012). Impact of treatment integrity on intervention effectiveness. Journal of Applied Behavior Analysis, 45(2), 449-453.
Harrington, M., & Velicer, W. F. (2015). Comparing visual and statistical analysis in single-case studies using published studies. Multivariate Behavioral Research, 50(2), 162-183.
Kahng, S. W., Chung, K. M., Gutshall, K., Pitts, S. C., Kao, J., & Girolami, K. (2010). Consistent visual analyses of intrasubject data. Journal of Applied Behavior Analysis, 43(1), 35-45.
Miramontes, N. Y., Marchant, M., Heath, M. A., & Fischer, L. (2011). Social validity of a positive behavior interventions and support model. Education and Treatment of Children, 445-468.
Reed, D. D., & Azulay, R. L. (2011). A Microsoft Excel® 2010 based tool for calculating inter-observer agreement. Behavior Analysis in Practice, 4(2), 45-52.