Why Sample Size Matters
An undersized study lacks the statistical power to detect real effects — wasting resources and producing false negatives. An oversized study is unnecessarily expensive and, in clinical research, may expose more participants to experimental conditions than needed. Proper sample-size calculation balances precision, power, and practical constraints before data collection begins.
The Diminishing Returns of Larger Samples
The margin of error shrinks with the square root of the sample size, so quadrupling n only halves the margin. Going from 400 to 1,600 respondents halves the margin of error — but the cost quadruples. Most researchers find a practical sweet spot where precision is sufficient without being prohibitively expensive.
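The square-root relationship above can be sketched directly. This is a minimal illustration (the function name is ours, not from the tool), assuming a simple random sample, a proportion near 50%, and the usual normal-approximation confidence interval:

```python
from statistics import NormalDist

def margin_of_error(n, p=0.5, confidence=0.95):
    """Half-width of the normal-approximation CI for a sample proportion."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # ~1.96 for 95%
    return z * (p * (1 - p) / n) ** 0.5

print(f"n=400:  +/-{margin_of_error(400):.1%}")   # +/-4.9%
print(f"n=1600: +/-{margin_of_error(1600):.1%}")  # +/-2.4%
```

Quadrupling n from 400 to 1,600 cuts the margin exactly in half — at four times the data-collection cost.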
Power Analysis: The Often-Forgotten Step
Many researchers focus solely on confidence level and margin of error, ignoring power. A study can have a tight 95% CI and still be underpowered — well-suited to describe what it observed, but not sensitive enough to reliably detect small effects. Power analysis (the Power Analysis tab) lets you work backwards: given your budget (a fixed n), what is the smallest effect you can reliably detect?
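The "work backwards" direction can be sketched for one common case — a two-sample comparison of means under the normal approximation. This is an illustrative helper of our own (not the tool's implementation), returning the smallest standardized effect size (Cohen's d) detectable at a given per-group n:

```python
from statistics import NormalDist

def min_detectable_effect(n_per_group, alpha=0.05, power=0.80):
    """Smallest standardized effect (Cohen's d) a two-sample z-test on means
    can reliably detect, given per-group n, alpha, and power (normal approx.)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_power = NormalDist().inv_cdf(power)
    return (z_alpha + z_power) * (2 / n_per_group) ** 0.5

# With 100 participants per group, only effects of about d = 0.40 or larger
# are detectable at 80% power — anything smaller is likely to be missed.
print(round(min_detectable_effect(100), 2))
```

If that minimum detectable effect is larger than any effect you would plausibly care about, the budget-limited study is not worth running as designed.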
Finite Populations and the Census Myth
A common misconception is that you need to survey a large fraction of a population to get reliable results. In reality, for populations above ~10,000, the required sample size barely changes. The precision of a survey is determined almost entirely by the absolute size of the sample, not by what fraction of the population it represents.
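The standard way to express this is the finite population correction, which shrinks the infinite-population sample size by a factor that only matters when the sample is a large fraction of the population. A short sketch (our own helper, assuming a proportion estimate at a ±5% margin and 95% confidence):

```python
import math
from statistics import NormalDist

def required_n(population, margin=0.05, confidence=0.95, p=0.5):
    """Required sample size for a proportion, with finite population correction."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    n0 = z**2 * p * (1 - p) / margin**2           # infinite-population size (~384)
    return math.ceil(n0 / (1 + (n0 - 1) / population))  # finite pop. correction

for N in (2_000, 10_000, 1_000_000):
    print(f"N={N:>9,}: n = {required_n(N)}")
```

For these settings the requirement is 323 at N = 2,000, 370 at N = 10,000, and 384 at N = 1,000,000 — essentially flat beyond ten thousand, even as the population grows a hundredfold.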