I was recently contacted by the author of the blog PsychBrief (@PsyBrief) about a post they were writing on our recent article, “Value added or misattributed? A multi-institution study on the educational benefit of labs for reinforcing physics content” in the journal Physical Review Physics Education Research. They were using the article as an example of how to deal with selection biases in education and psychology research.
I was very interested to see a thorough power analysis associated with our study. Given that we found a null result, how do we know we aren’t just missing a subtle, yet interesting effect? This is something I often try to teach in our intro physics labs, so I was very excited to see how the analysis would pan out. It turns out that, given the sample size and other features of the study, we had more than enough power to detect some pretty small effect sizes, and yet saw none. (Phew!) I highly recommend checking out the description of the analysis on the PsychBrief blog here. I certainly plan to refer to the analysis there next time I find myself dealing with null effects!
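To give a feel for the kind of calculation involved, here is a minimal sketch of a power computation for a two-sample comparison, using the normal approximation to the two-sample t-test. The function, its name, and the sample sizes below are all illustrative assumptions for this post, not the actual numbers or method from the study or the PsychBrief analysis.

```python
from statistics import NormalDist

def power_two_sample(effect_size: float, n_per_group: int,
                     alpha: float = 0.05) -> float:
    """Approximate power of a two-sided, two-sample test for a
    standardized mean difference (Cohen's d), using the normal
    approximation (accurate for large samples).
    """
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    # Noncentrality: how many standard errors the true effect is from zero.
    noncentrality = effect_size * (n_per_group / 2) ** 0.5
    # Probability of rejecting in either tail when the effect is real.
    return (NormalDist().cdf(noncentrality - z_crit)
            + NormalDist().cdf(-noncentrality - z_crit))

# Hypothetical numbers for illustration (not the study's actual counts):
# with 1000 students per condition, even a small effect (d = 0.2)
# is detected with very high probability.
print(power_two_sample(0.2, 1000))
```

The general pattern is the useful part: when a study reports a null result, plugging its sample size into a calculation like this tells you how small an effect it could plausibly have detected, which is what makes the null informative.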