When power analyses based on pilot data are biased: inaccurate effect size estimators and follow-up bias

C. Albers, D. Lakens

Research output: Contribution to journal › Article › Academic › peer-review

23 Citations (Scopus)

Abstract

When designing a study, the planned sample size is often based on power analyses. One way to choose an effect size for power analyses is by relying on pilot data. A-priori power analyses are only accurate when the effect size estimate is accurate. In this paper we highlight two sources of bias when performing a-priori power analyses for between-subject designs based on pilot data. First, we examine how the choice of the effect size index (η², ω² and ε²) affects the sample size and power of the main study. Based on our observations, we recommend against the use of η² in a-priori power analyses. Second, we examine how the maximum sample size researchers are willing to collect in a main study (e.g. due to time or financial constraints) leads to overestimated effect size estimates in the studies that are performed. Determining the required sample size exclusively based on the effect size estimates from pilot data, and following up on pilot studies only when the sample size estimate for the main study is considered feasible, creates what we term follow-up bias. We explain how follow-up bias leads to underpowered main studies. Our simulations show that designing main studies based on effect sizes estimated from small pilot studies does not yield desired levels of power due to accuracy bias and follow-up bias, even when publication bias is not an issue. We urge researchers to consider alternative approaches to determining the sample size of their studies, and discuss several options.
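The three effect size indices compared in the abstract (η², ω², ε²) can all be computed from the sums of squares of a one-way between-subjects ANOVA. As a minimal sketch of the standard formulas (the function name and example data below are illustrative, not taken from the paper): η² is the raw proportion of variance explained and is upwardly biased in small samples, while ω² and ε² subtract an error-variance correction from the numerator.

```python
import numpy as np

def effect_sizes(groups):
    """Return (eta², omega², epsilon²) for a one-way between-subjects design.

    `groups` is a list of 1-D NumPy arrays, one per condition.
    """
    all_data = np.concatenate(groups)
    grand_mean = all_data.mean()
    n_total = all_data.size
    k = len(groups)

    # Partition the total sum of squares into between- and within-group parts.
    ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ss_total = ss_between + ss_within

    df_between = k - 1
    ms_within = ss_within / (n_total - k)

    # eta² is the uncorrected proportion of explained variance.
    eta2 = ss_between / ss_total
    # omega² and epsilon² correct the numerator for sampling error;
    # omega² additionally inflates the denominator.
    omega2 = (ss_between - df_between * ms_within) / (ss_total + ms_within)
    epsilon2 = (ss_between - df_between * ms_within) / ss_total
    return eta2, omega2, epsilon2

# Hypothetical three-group pilot data.
groups = [np.array([1.0, 2.0, 3.0]),
          np.array([2.0, 3.0, 4.0]),
          np.array([3.0, 4.0, 5.0])]
eta2, omega2, epsilon2 = effect_sizes(groups)
```

For any data where the between-group effect is positive, η² ≥ ε² ≥ ω², which is one way to see why plugging η² from a small pilot into an a-priori power analysis tends to understate the required sample size.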

Original language: English
Pages (from-to): 187-195
Number of pages: 9
Journal: Journal of Experimental Social Psychology
Volume: 74
DOI: 10.1016/j.jesp.2017.09.004
Publication status: Published - 1 Jan 2018

Keywords

  • Effect size
  • Epsilon-squared
  • Eta-squared
  • Follow-up bias
  • Omega-squared
  • Power analysis

Cite this

@article{fb1f8e2969094c0ca6bb3a93aa54dadf,
title = "When power analyses based on pilot data are biased: inaccurate effect size estimators and follow-up bias",
abstract = "When designing a study, the planned sample size is often based on power analyses. One way to choose an effect size for power analyses is by relying on pilot data. A-priori power analyses are only accurate when the effect size estimate is accurate. In this paper we highlight two sources of bias when performing a-priori power analyses for between-subject designs based on pilot data. First, we examine how the choice of the effect size index (η², ω² and ε²) affects the sample size and power of the main study. Based on our observations, we recommend against the use of η² in a-priori power analyses. Second, we examine how the maximum sample size researchers are willing to collect in a main study (e.g. due to time or financial constraints) leads to overestimated effect size estimates in the studies that are performed. Determining the required sample size exclusively based on the effect size estimates from pilot data, and following up on pilot studies only when the sample size estimate for the main study is considered feasible, creates what we term follow-up bias. We explain how follow-up bias leads to underpowered main studies. Our simulations show that designing main studies based on effect sizes estimated from small pilot studies does not yield desired levels of power due to accuracy bias and follow-up bias, even when publication bias is not an issue. We urge researchers to consider alternative approaches to determining the sample size of their studies, and discuss several options.",
keywords = "Effect size, Epsilon-squared, Eta-squared, Follow-up bias, Omega-squared, Power analysis",
author = "C. Albers and D. Lakens",
year = "2018",
month = "1",
day = "1",
doi = "10.1016/j.jesp.2017.09.004",
language = "English",
volume = "74",
pages = "187--195",
journal = "Journal of Experimental Social Psychology",
issn = "0022-1031",
publisher = "Academic Press Inc.",
}

When power analyses based on pilot data are biased: inaccurate effect size estimators and follow-up bias. / Albers, C.; Lakens, D.

In: Journal of Experimental Social Psychology, Vol. 74, 01.01.2018, p. 187-195.

Research output: Contribution to journal › Article › Academic › peer-review

TY - JOUR

T1 - When power analyses based on pilot data are biased

T2 - inaccurate effect size estimators and follow-up bias

AU - Albers, C.

AU - Lakens, D.

PY - 2018/1/1

Y1 - 2018/1/1

N2 - When designing a study, the planned sample size is often based on power analyses. One way to choose an effect size for power analyses is by relying on pilot data. A-priori power analyses are only accurate when the effect size estimate is accurate. In this paper we highlight two sources of bias when performing a-priori power analyses for between-subject designs based on pilot data. First, we examine how the choice of the effect size index (η², ω² and ε²) affects the sample size and power of the main study. Based on our observations, we recommend against the use of η² in a-priori power analyses. Second, we examine how the maximum sample size researchers are willing to collect in a main study (e.g. due to time or financial constraints) leads to overestimated effect size estimates in the studies that are performed. Determining the required sample size exclusively based on the effect size estimates from pilot data, and following up on pilot studies only when the sample size estimate for the main study is considered feasible, creates what we term follow-up bias. We explain how follow-up bias leads to underpowered main studies. Our simulations show that designing main studies based on effect sizes estimated from small pilot studies does not yield desired levels of power due to accuracy bias and follow-up bias, even when publication bias is not an issue. We urge researchers to consider alternative approaches to determining the sample size of their studies, and discuss several options.

AB - When designing a study, the planned sample size is often based on power analyses. One way to choose an effect size for power analyses is by relying on pilot data. A-priori power analyses are only accurate when the effect size estimate is accurate. In this paper we highlight two sources of bias when performing a-priori power analyses for between-subject designs based on pilot data. First, we examine how the choice of the effect size index (η², ω² and ε²) affects the sample size and power of the main study. Based on our observations, we recommend against the use of η² in a-priori power analyses. Second, we examine how the maximum sample size researchers are willing to collect in a main study (e.g. due to time or financial constraints) leads to overestimated effect size estimates in the studies that are performed. Determining the required sample size exclusively based on the effect size estimates from pilot data, and following up on pilot studies only when the sample size estimate for the main study is considered feasible, creates what we term follow-up bias. We explain how follow-up bias leads to underpowered main studies. Our simulations show that designing main studies based on effect sizes estimated from small pilot studies does not yield desired levels of power due to accuracy bias and follow-up bias, even when publication bias is not an issue. We urge researchers to consider alternative approaches to determining the sample size of their studies, and discuss several options.

KW - Effect size

KW - Epsilon-squared

KW - Eta-squared

KW - Follow-up bias

KW - Omega-squared

KW - Power analysis

UR - http://www.scopus.com/inward/record.url?scp=85031300248&partnerID=8YFLogxK

U2 - 10.1016/j.jesp.2017.09.004

DO - 10.1016/j.jesp.2017.09.004

M3 - Article

AN - SCOPUS:85031300248

VL - 74

SP - 187

EP - 195

JO - Journal of Experimental Social Psychology

JF - Journal of Experimental Social Psychology

SN - 0022-1031

ER -