Speaker
Description
When analyzing electroencephalographic (EEG) data, researchers face a maze of potential methods and analytical approaches. But how much do these choices actually influence the results? The EEGManyPipelines (EMP) project addressed this question through a large-scale collaborative effort to test the robustness of EEG findings across different analysis strategies. In this unprecedented study, a single EEG dataset and a set of research questions were provided to 168 expert teams worldwide. Each team analyzed the dataset with its own methods and later submitted preprocessed data, analysis scripts, and hypothesis-testing results. Our findings show that while EEG processing pipelines vary widely, three steps (the baseline window length, averaging across the time window, and the approach to multiple comparisons) were differentially associated with whether significant differences between conditions emerged for early and late ERP components. Further analysis of the preprocessed data revealed that choices such as the reference type, the high-pass filter cutoff, and the use of plugins for artifact correction significantly affected the magnitude of these effects. In this poster, I will present results for three of the ERP hypotheses.
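To make these pipeline choices concrete, the minimal sketch below marks where each of them enters a generic ERP analysis. It uses MNE-Python purely for illustration and is not the EMP teams' code; the file name, event codes, time windows, and all parameter values are hypothetical placeholders.

```python
# Illustrative sketch only: where the analysis choices discussed above appear
# in a generic MNE-Python ERP pipeline. All names and values are hypothetical.
import mne
from mne.stats import fdr_correction
from scipy import stats

raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)  # hypothetical file

# Reference type: e.g. average reference (other pipelines might use linked mastoids)
raw.set_eeg_reference("average")

# High-pass filter cutoff: e.g. 0.1 Hz (cutoffs such as 1 Hz are also common)
raw.filter(l_freq=0.1, h_freq=40.0)

# Artifact correction (e.g. an ICA-based plugin) would typically be applied here.

events = mne.find_events(raw)

# Baseline window length: e.g. -200 ms to stimulus onset
epochs = mne.Epochs(raw, events, event_id={"condA": 1, "condB": 2},
                    tmin=-0.2, tmax=0.8, baseline=(-0.2, 0.0), preload=True)

# Averaging across a time window rather than testing every sample:
win = (0.08, 0.12)  # hypothetical early-component window, in seconds
a = epochs["condA"].copy().crop(*win).get_data().mean(axis=2)  # trials x channels
b = epochs["condB"].copy().crop(*win).get_data().mean(axis=2)

# Approach to multiple comparisons across channels: here FDR, one of several options
t_vals, p_vals = stats.ttest_ind(a, b, axis=0)
reject, p_fdr = fdr_correction(p_vals, alpha=0.05)
```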