Description
Single case research designs are widely used in special education and related fields to evaluate interventions. The most common approach to analyzing the resulting graphs is visual analysis. Despite known challenges such as low interrater reliability, systematic bias, and a lack of rater training, the method remains the most frequently applied because of its perceived simplicity and speed. Tarlow et al. (2021) proposed Pairwise Comparisons as a new method to address the challenges of traditional visual analysis. They compared it with two other visual methods (Rating and Ranking) and three statistical methods (PND, Baseline-Corrected Tau, and ITSSIM). The study's results appeared promising; however, the sample was very small and consisted solely of research team members. In this replication study with a larger, independent German-speaking sample, our results were less encouraging: interrater reliability was unsatisfactory across all methods (Rating α = .56; Ranking α = .65; Pairwise Comparisons α = .62), and correlations between the results of visual and statistical analyses were lower than in Tarlow et al. (2021): PND = .56–.60; Baseline-Corrected Tau = .37–.40. Although a large portion of the data and methods from the original study were openly available, full replication was not possible due to missing information on ITSSIM. Our analyses do not support the original positive findings on the reliability of Pairwise Comparisons. Developing evaluation methods for single-case graphs and verifying seemingly effective methods remain crucial research interests for advancing single-case experimental research.
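To make one of the statistical benchmarks mentioned above concrete, here is a minimal Python sketch of PND (Percentage of Non-overlapping Data): the share of treatment-phase points that fall beyond the most extreme baseline point. The AB-phase data below are invented for illustration and are not taken from the original or the replication study.

```python
import numpy as np

def pnd(baseline, treatment, increase_expected=True):
    """Percentage of Non-overlapping Data for a single-case AB graph.

    Counts the proportion of treatment-phase observations that exceed
    the baseline maximum (or fall below the baseline minimum when a
    decrease is the expected intervention effect).
    """
    baseline = np.asarray(baseline, dtype=float)
    treatment = np.asarray(treatment, dtype=float)
    if increase_expected:
        return 100.0 * np.mean(treatment > baseline.max())
    return 100.0 * np.mean(treatment < baseline.min())

# Hypothetical graph: 5 baseline (A) and 7 treatment (B) observations.
a_phase = [3, 4, 2, 5, 4]
b_phase = [6, 5, 7, 8, 6, 9, 7]
print(f"PND = {pnd(a_phase, b_phase):.1f}%")  # 85.7% (6 of 7 points above the baseline max of 5)
```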
Keywords: Single Case Experimental Design, Visual Analysis, CBM, Replication, n=1, SCD