Optimal Experimental Design in Experiments With Samples of Stimuli

Jacob Westfall, University of Colorado Boulder
David A. Kenny, University of Connecticut
Charles M. Judd, University of Colorado Boulder

• Studies involving participants responding to stimuli
  [Hypothetical data matrix: participants in rows, stimuli in columns, each cell a response]
• Just in the domain of implicit prejudice and stereotyping:
  – IAT (Greenwald et al.)
  – Affective Priming (Fazio et al.)
  – Shooter task (Correll et al.)
  – Affect Misattribution Procedure (Payne et al.)
  – Go/No-Go task (Nosek et al.)
  – Primed Lexical Decision task (Wittenbrink et al.)
  – Many non-paradigmatic studies

Hard questions

• “How many stimuli should I use?”
• “How similar or variable should the stimuli be?”
• “When should I counterbalance the assignment of stimuli to conditions?”
• “Is it better to have all participants respond to the same set of stimuli, or should each participant receive different stimuli?”
• “Should participants make multiple responses to each stimulus, or should every response by a participant be to a unique stimulus?”

Stimuli as a source of random variation

• Judd, C. M., Westfall, J., & Kenny, D. A. (2012). Treating stimuli as a random factor in social psychology: A new and comprehensive solution to a pervasive but largely ignored problem. Journal of Personality and Social Psychology, 103(1), 54-69.
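The crossed participants-by-stimuli structure behind this point can be illustrated with a minimal simulation. All sizes and variance components below are made-up illustrative values, not estimates from any study:

```python
import numpy as np

rng = np.random.default_rng(2014)

# Illustrative values only: p participants, q stimuli, variance components
p, q = 30, 16
d = 0.5                           # true condition effect (standardized)
v_p, v_s, v_e = 0.2, 0.2, 0.6     # participant, stimulus, residual variance

cond = np.array([-0.5, 0.5])                  # contrast-coded condition
u = rng.normal(0.0, np.sqrt(v_p), p)          # random participant intercepts
w = rng.normal(0.0, np.sqrt(v_s), q)          # random stimulus intercepts

# y[i, j, k]: response of participant i to stimulus j in condition k
y = (d * cond[None, None, :]
     + u[:, None, None]
     + w[None, :, None]
     + rng.normal(0.0, np.sqrt(v_e), size=(p, q, 2)))

# Stimuli are a random factor: per-stimulus means vary around the grand
# mean, so a different stimulus sample would give a somewhat different
# estimate of the condition effect.
stim_means = y.mean(axis=(0, 2))
print(stim_means.std())
```

Refitting with a fresh `rng` seed draws a new sample of stimuli as well as participants, which is exactly the sampling variability that treating stimuli as fixed ignores.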
Power analysis in crossed designs

• Power determined by several parameters:
  – 1 effect size (Cohen’s d)
  – 2 sample sizes
    • p = # of participants
    • q = # of stimuli
  – Set of Variance Partitioning Coefficients (VPCs)
• VPCs describe what proportion of the random variation in the data comes from which sources
• Different designs depend on different VPCs

Definitions of VPCs

• VP: Participant variance (variance in participant intercepts)
• VS: Stimulus variance (variance in stimulus intercepts)
• VP×C: Participant-by-Condition variance (variance in participant slopes)
• VS×C: Stimulus-by-Condition variance (variance in stimulus slopes)
• VP×S: Participant-by-Stimulus variance (variance in participant-by-stimulus intercepts)

Four common experimental designs

Stimuli-within-Condition design vs. Participants-within-Condition design

• S-w-C is more powerful than P-w-C when: VP / p > VS / q
  – where p = # of participants, q = # of stimuli
• If VP is relatively large and/or p is small: Choose Stimuli-within-Condition
• If VS is relatively large and/or q is small: Choose Participants-within-Condition

Fully Crossed design vs. Counterbalanced design

• If q is held constant, then the Fully Crossed design is more powerful.
• If the total number of responses per participant is held constant, then the Counterbalanced design is more powerful when: VP×S < p × VS×C
  – This condition will almost always be true!
• If the number of unique stimuli is the limiting factor: Choose Fully Crossed design
• If # of responses per participant is the limiting factor: Choose Counterbalanced design

[Figure: power curves; for power = 0.80, need q ≈ 50; for power = 0.80, need p ≈ 20]

Maximum attainable power

• In crossed designs, power asymptotes at a maximum theoretically attainable value that depends on:
  – Effect size
  – Number of stimuli
  – Stimulus variability
• Under realistic assumptions, maximum attainable power can be quite low!

[Figure: power curve; when q = 16, max power = .84]

Minimum number of stimuli to use?
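The asymptote can be made concrete with a rough power calculation. The sketch below is not the exact formula from the manuscript or the power app: it assumes a simplified variance expression for the estimated condition difference in a fully crossed design, made-up VPC values, and a normal approximation to the test statistic.

```python
import numpy as np
from scipy.stats import norm

def crossed_power(d, p, q, v_pc, v_sc, v_res, alpha=0.05):
    """Approximate power for the condition effect in a fully crossed
    design, under a simplified variance expression (illustrative only).

    d          : standardized effect size (Cohen's d)
    p, q       : number of participants, number of stimuli
    v_pc, v_sc : participant-by-condition and stimulus-by-condition VPCs
    v_res      : residual VPC (error plus participant-by-stimulus terms)
    """
    # Slope variances shrink only with their own sample size; the
    # residual shrinks with both. This is why power asymptotes: as p
    # grows without bound, the v_sc / q term remains.
    se = np.sqrt(4 * v_pc / p + 4 * v_sc / q + 4 * v_res / (p * q))
    return norm.sf(norm.ppf(1 - alpha / 2) - d / se)

# Adding participants cannot push power past the stimulus-limited ceiling:
for p in (20, 50, 200, 10**6):
    print(p, round(crossed_power(d=0.5, p=p, q=16,
                                 v_pc=0.05, v_sc=0.08, v_res=0.4), 3))
```

With these hypothetical VPC values, power levels off near .94 no matter how many participants are added; only using more (or less variable) stimuli raises the ceiling.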
• A reasonable rule of thumb: Use at least 16 stimuli per condition! (preferably more)

Implications of maximum attainable power

• Think hard about your experimental stimuli before you begin collecting data!
  – Once data collection begins, maximum attainable power is pretty much determined.

Conclusion

• There is a growing awareness and appreciation in experimental psychology of the importance of running adequately powered studies.
  – (Asendorpf et al., 2013; Bakker, van Dijk, & Wicherts, 2012; Button et al., 2013; Ioannidis, 2008; Schimmack, 2012)
• Discussions of how to maximize power almost always focus simply on recruiting as many participants as possible.
• We hope that the present research begins a discussion of how stimuli ought to be sampled in order to maximize statistical power.

The end

• URL for power app: JakeWestfall.org/power/
• Manuscript reference: Westfall, J., Kenny, D. A., & Judd, C. M. (under review). Statistical power and optimal design in experiments in which samples of participants respond to samples of stimuli. Journal of Experimental Psychology: General.