A comparison of three effect size indices for count-based outcomes in Single-Case Design studies

Date
2023
Publisher
University of Delaware
Abstract
In Single-Case Designs (SCD), the outcome variable most commonly involves some form of count data. However, statistical analyses and associated effect size (ES) calculations for count outcomes have only recently been proposed. Three recently proposed ES methods for count data are the Nonlinear Bayesian (NLB) effect size (Rindskopf, 2014), the Log Response Ratio (LRR) effect size (Pustejovsky, 2018), and the Bayesian Rate Ratio (BRR) effect size (Natesan Batley, Shukla Mehta, & Hitchcock, 2021). Although all three methods calculate an ES for count outcome data and can be used with an ABAB design, they rely on different statistical models or different estimation frameworks (Bayesian or frequentist), and they differ in whether they assume the presence or absence of autocorrelation, which is frequently present in SCD data. Moreover, it has yet to be examined how the ES and standard error estimates from these three indices are affected by overdispersion, a common occurrence in count data. These fundamental differences call for a closer examination and comparison of the methods and the estimates they produce. This dissertation aims to (a) investigate the interpretability and understandability of the estimates produced, as proposed by May (2004); (b) examine whether the three ES indices can be converted to a common metric to facilitate comparison of the ES estimates; (c) document the benefits and challenges of implementing each method; and (d) examine the performance of these ES methods under positive autocorrelation and overdispersion using Monte Carlo simulation. Schmidt (2007), a published SCD study that examined the effect of Class-Wide Function-related Intervention Teams (CW-FIT) on reducing the disruptive behavior of three first-grade students using an ABAB design, was used to examine the interpretability and understandability of the estimates and whether the indices can be converted to a common metric. The study consisted of 3 cases with 4 phases (ABAB) per case.
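Because all three indices are built on ratios of phase mean counts, the LRR in particular can be illustrated directly: it is the natural log of the ratio of the intervention-phase mean to the baseline-phase mean. The sketch below is a minimal Python illustration, not the dissertation's own code (the analyses were run in R); the function name and the toy counts are hypothetical, and Pustejovsky's (2018) small-sample bias correction and standard error are omitted.

```python
import math

def log_response_ratio(baseline, intervention):
    """Basic Log Response Ratio: ln(mean of phase B / mean of phase A).

    Illustrative sketch only -- omits the bias correction and
    delta-method standard error described by Pustejovsky (2018).
    """
    mean_a = sum(baseline) / len(baseline)
    mean_b = sum(intervention) / len(intervention)
    return math.log(mean_b / mean_a)

# Hypothetical disruptive-behavior counts for one A-B phase pair
baseline = [12, 15, 14, 13]   # phase A sessions
intervention = [4, 5, 3, 4]   # phase B sessions

lrr = log_response_ratio(baseline, intervention)
print(round(lrr, 3))          # -1.216 (behavior decreased)

# Exponentiating recovers the rate ratio of the phase means,
# which is the common metric shared by the three indices.
rate_ratio = math.exp(lrr)    # equals mean_b / mean_a
```

A negative LRR indicates the count dropped from baseline to intervention; exponentiating it gives the multiplicative change in the mean count, which is how the estimates can be compared on one scale.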
For the simulation study, 1,000 datasets per case were generated using pre-specified data parameters (number of cases, number of data points within each phase of a case, and phase means) taken from Schmidt (2007), under various conditions of autocorrelation and overdispersion. A fully crossed factorial design with three autocorrelation levels (0.0, 0.2, 0.4) and four overdispersion levels (0.0001, 0.05, 0.1, 0.3), yielding 12 simulation conditions per case, was used for data generation. All analyses were carried out in R. Results indicate that all three ES estimates are interpretable. LRR meets the understandability criterion; however, both BRR and NLB require advanced statistical knowledge to fit the models. The three ES indices can be converted to a common metric because all are ratios of the phase mean counts. In the simulation, all three methods produced nearly unbiased estimates of the effect size under the different data conditions; however, the standard errors were affected by autocorrelation and overdispersion. This dissertation can serve as a resource for other SCD researchers and applied practitioners seeking to understand and interpret the different ES values from the LRR, NLB, and BRR methods, and can help them make better-informed decisions about which of the three ES indices to use when autocorrelation and overdispersion are present in their data.
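The fully crossed factorial layout can be sketched as a simple condition grid. This is an illustrative Python enumeration of the design, not the dissertation's R code; the count-generating model itself (e.g., an autocorrelated overdispersed count process) is not specified here.

```python
from itertools import product

# Fully crossed factorial design from the simulation study:
# three autocorrelation levels x four overdispersion levels.
autocorrelation = [0.0, 0.2, 0.4]
overdispersion = [0.0001, 0.05, 0.1, 0.3]

conditions = list(product(autocorrelation, overdispersion))
print(len(conditions))  # 12 simulation conditions per case

# 1,000 replicated datasets would then be generated per condition,
# using phase means and phase lengths taken from Schmidt (2007).
REPLICATIONS = 1000
```

Each `(rho, phi)` pair in `conditions` identifies one cell of the design, with 1,000 datasets simulated per cell.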
Keywords
Count outcomes, Effect size, Single-Case Designs, Bayesian effect, First grade students