Data Input

Method
Confidence Level
Sample Datasets
Separate values with commas, spaces, or newlines. Paste two-column data from Excel — it auto-splits.
Pearson Correlation
Enter X and Y data above
Pearson r
r² (Determination)
Strength
Direction
Count (n)
p-Value
95% CI for r
r = Σ(x−x̄)(y−ȳ) / √(Σ(x−x̄)²·Σ(y−ȳ)²)
t = r√(n−2) / √(1−r²)
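The two formulas above translate directly into code. This is a minimal standard-library sketch, not the calculator's actual implementation; the function names and sample data are illustrative:

```python
import math

def pearson_r(xs, ys):
    # r = sum((x - x_bar)(y - y_bar)) / sqrt(sum((x - x_bar)^2) * sum((y - y_bar)^2))
    n = len(xs)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    sxx = sum((x - x_bar) ** 2 for x in xs)
    syy = sum((y - y_bar) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def t_statistic(r, n):
    # t = r * sqrt(n - 2) / sqrt(1 - r^2), with df = n - 2
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

xs = [1, 2, 3, 4, 5]
ys = [2, 4, 5, 4, 5]
r = pearson_r(xs, ys)
print(round(r, 4))
print(round(t_statistic(r, len(xs)), 4))
```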

Step-by-Step Calculation Breakdown

x | y | x − x̄ | y − ȳ | (x−x̄)(y−ȳ) | (x−x̄)² | (y−ȳ)²
Calculate first to see breakdown
Outlier Detection (|Std. Residual| > 2.5)
Calculate first to see outlier analysis.
Fisher z Confidence Interval for r
Calculate first to see Fisher z confidence interval details.
Regression Equation Details
Calculate first to see the regression equation and slope details.

Correlation Strength Interpretation

|r| Range | Strength | Practical Meaning
0.00 – 0.10 | Negligible | No meaningful linear relationship
0.10 – 0.30 | Weak | Small effect, often not practically significant
0.30 – 0.50 | Moderate | Noticeable but limited relationship
0.50 – 0.70 | Strong | Substantial linear dependency
0.70 – 0.90 | Very Strong | High degree of linear dependency
0.90 – 1.00 | Near Perfect | Variables move almost in lockstep

Pearson r vs. Spearman ρ — When to Use Which

Attribute | Pearson r | Spearman ρ
Measures | Linear relationship (raw values) | Monotonic relationship (ranks)
Assumes | Continuous, normally distributed data | Ordinal or any monotonic data
Outlier sensitivity | High — one outlier can dominate r | Low — ranks limit outlier influence
Nonlinear data | May give r ≈ 0 even for strong curves | Detects monotonic curves (e.g., log, sqrt)
Typical use | Lab measurements, financial returns | Survey scales, ranked data, data with outliers
Formula | r = Σ(x−x̄)(y−ȳ) / √(Σ(x−x̄)²·Σ(y−ȳ)²) | ρ = Pearson(rank(X), rank(Y))

Tip: Try the "Nonlinear" sample dataset — Pearson r will be near zero, while Spearman ρ may still pick up the monotonic portion of the data.
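The contrast in the table can be demonstrated in a few lines: on a strictly increasing but strongly curved dataset, Spearman ρ (the Pearson r of the ranks) reaches 1.0 while Pearson r stays below it. A stdlib sketch with illustrative data and function names:

```python
import math

def pearson_r(xs, ys):
    n = len(xs)
    xb, yb = sum(xs) / n, sum(ys) / n
    sxy = sum((x - xb) * (y - yb) for x, y in zip(xs, ys))
    sxx = sum((x - xb) ** 2 for x in xs)
    syy = sum((y - yb) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def ranks(values):
    # 1-based ranks, averaging tied positions as Spearman rho requires.
    order = sorted(range(len(values)), key=lambda i: values[i])
    result = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            result[order[k]] = avg
        i = j + 1
    return result

xs = [1, 2, 3, 4, 5, 6]
ys = [x ** 3 for x in xs]          # strictly increasing, strongly curved

r = pearson_r(xs, ys)
rho = pearson_r(ranks(xs), ranks(ys))
print(r, rho)   # rho is 1.0 (perfectly monotonic); r is lower (not a line)
```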

Why You Must Always Visualize — Anscombe's Quartet

Francis Anscombe created four datasets in 1973 that all share nearly identical statistics: mean, variance, and Pearson r ≈ 0.816. Yet they look completely different when plotted. One is a perfect parabola. One has a single outlier pulling the entire line. Always check the scatter plot — a number alone never tells the full story. This calculator's chart is your sanity check.

How to Use This Calculator

01

Enter Your Data

Type X values and Y values as comma-separated numbers. Or paste two-column data from Excel — it auto-splits into X and Y.

02

Choose Method & CI

Select Pearson r, Spearman ρ, or Both. Pick your confidence level (90/95/99%) for the Fisher z interval.

03

Read the Results

The hero value shows r (or ρ), the scatter chart visualizes the data, and the strength meter gives instant context. Check the Analysis tab for outliers and CI details.

04

Export & Share

Click Share URL to copy a link with your data pre-filled. Export CSV to get the full data table with residuals and outlier flags.

Key Formulas

Pearson r r = Σ(xᵢ−x̄)(yᵢ−ȳ) / √(Σ(xᵢ−x̄)² · Σ(yᵢ−ȳ)²)

Measures the linear relationship between two continuous variables. Ranges from −1 to +1.

Spearman ρ ρ = Pearson(rank(X), rank(Y))

Rank-based correlation. More robust to outliers and detects monotonic (not just linear) relationships.

Fisher z CI z' = 0.5·ln((1+r)/(1−r)), SE = 1/√(n−3)

Transforms r into a normally distributed quantity to build confidence intervals. Requires n > 3.
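The interval can be built with nothing beyond the standard library's normal distribution. A sketch under the assumption of a 95% level; `fisher_ci` and the r and n values are illustrative:

```python
import math
from statistics import NormalDist

def fisher_ci(r, n, confidence=0.95):
    # z' = 0.5 * ln((1 + r) / (1 - r)), SE = 1 / sqrt(n - 3); requires n > 3.
    z = 0.5 * math.log((1 + r) / (1 - r))
    se = 1 / math.sqrt(n - 3)
    zcrit = NormalDist().inv_cdf(0.5 + confidence / 2)
    # Build the interval on the z scale, then back-transform: r = tanh(z').
    return math.tanh(z - zcrit * se), math.tanh(z + zcrit * se)

lo, hi = fisher_ci(r=0.62, n=40, confidence=0.95)
print(round(lo, 3), round(hi, 3))
```

Note the asymmetry of the back-transformed interval around r = 0.62 — a direct consequence of the skew that the transformation corrects for.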

Significance Test t = r·√(n−2) / √(1−r²), df = n−2

Tests H₀: ρ = 0. The two-tailed p-value indicates if the correlation is statistically significant.
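The exact p-value requires the Student t CDF with df = n − 2 (e.g. via scipy.stats), which the standard library does not provide. A stdlib sketch can still compute the t statistic and approximate the two-tailed p-value through the Fisher z transform, since z'·√(n−3) is approximately standard normal under H₀; the values of r and n here are illustrative:

```python
import math
from statistics import NormalDist

def t_statistic(r, n):
    # t = r * sqrt(n - 2) / sqrt(1 - r^2), df = n - 2
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

def p_value_approx(r, n):
    # Normal approximation via Fisher z; close to, but not identical with,
    # the exact Student-t p-value.
    z = 0.5 * math.log((1 + r) / (1 - r)) * math.sqrt(n - 3)
    return 2 * (1 - NormalDist().cdf(abs(z)))

r, n = 0.45, 30
print(round(t_statistic(r, n), 3))    # compare against t critical, df = 28
print(round(p_value_approx(r, n), 4))
```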

Key Terms

Pearson r
The product-moment correlation coefficient. Measures the strength and direction of a linear relationship between two continuous variables.
Spearman ρ (rho)
Rank correlation coefficient. Assesses monotonic relationships and is resistant to outliers and non-normal distributions.
r² (Coefficient of Determination)
The fraction of variance in Y explained by X. r = 0.80 → r² = 0.64 → 64% of Y's variation is accounted for by X.
p-Value
The probability of observing your r (or more extreme) under H₀: ρ = 0. p < 0.05 is typically considered statistically significant.
Fisher z Confidence Interval
A range of plausible values for the true population correlation ρ. A 95% CI means: if you repeated the study, 95% of such intervals would contain the true ρ.
Standardized Residual
Each data point's vertical distance from the regression line, scaled by the residual standard deviation. Points beyond ±2.5 are flagged as potential outliers.
Regression Line
The best-fit line ŷ = mx + b that minimizes the sum of squared residuals. The slope m = r × (σY/σX).
Monotonic Relationship
A relationship where Y consistently increases (or decreases) as X increases — but not necessarily at a constant rate. Spearman ρ detects this; Pearson r may not.

Real-World Examples

Finance
Stock Price vs. Interest Rates

Historically, 10-year Treasury yields and the S&P 500 have a moderate negative correlation (r ≈ −0.35 to −0.50). Rising rates tend to compress stock valuations, but the relationship is noisy.

r ≈ −0.40 → Moderate Negative
Health
Study Hours vs. Exam Score

Studies on student performance typically find r = 0.55–0.70 between hours studied and test scores. Strong, but roughly 51–70% of the variance remains unexplained by study time alone.

r ≈ 0.62 → Strong Positive
Science
Temperature vs. Ice Cream Sales

A textbook example of strong positive correlation. As daily temperature rises, ice cream sales increase nearly in lockstep. Classic warning: correlation does not imply causation.

r ≈ 0.85 → Very Strong Positive
Engineering
Component Voltage vs. Measured Output

In precision electronics, calibration data should show near-perfect linear correlation. An r below 0.99 typically signals a faulty sensor, noise, or a nonlinear component.

r ≥ 0.99 → Near Perfect

Correlation vs. Causation — The Essential Distinction

Correlation measures the degree to which two variables move together. It says nothing about why they move together. This distinction — correlation vs. causation — is one of the most important in data analysis, yet it is violated constantly in headlines, research summaries, and business reports.

Why Correlation Is Not Causation

Two variables can be correlated for three reasons: (1) X causes Y, (2) Y causes X, or (3) a third variable Z causes both. Ice cream sales and drowning rates correlate strongly — not because ice cream causes drowning, but because summer heat drives both. Here Z (heat) is called a confounding variable, and the resulting relationship a spurious correlation.

When Correlation Is Useful Without Causation

You do not need causation for correlation to be valuable. If credit scores correlate strongly with loan defaults, a bank can use the score to predict risk without knowing the causal mechanism. Prediction and explanation are different goals. For prediction alone, correlation is sufficient.

Establishing Causation

The gold standard is a randomized controlled trial (RCT), in which subjects are randomly assigned to conditions, ruling out confounders. Observational data can support causal inference through methods like difference-in-differences, instrumental variables, or regression discontinuity — but these require strong assumptions that correlation alone cannot satisfy.

Anscombe's Quartet — Always Plot Your Data

In 1973, Francis Anscombe demonstrated that four completely different datasets can share the same Pearson r, mean, and variance. One dataset is linear. One has a curved relationship. One has an outlier distorting a perfectly linear relationship. This is why the scatter plot in this calculator is not optional decoration — it is essential information. No number alone replaces visual inspection.

Frequently Asked Questions

What is a good correlation coefficient value?

Context determines what "good" means. Social scientists often consider r = 0.50 strong, while physicists or engineers may need r > 0.99 for a relationship to be useful. As a general guide: |r| below 0.10 is negligible; 0.10–0.30 weak; 0.30–0.50 moderate; 0.50–0.70 strong; 0.70–0.90 very strong; above 0.90 near perfect.

What is Spearman's ρ and when should I use it instead of Pearson r?

Spearman's ρ (rho) computes the Pearson r of the rank-transformed data, making it robust to outliers and suitable for ordinal data or monotonic but non-linear relationships. Use Spearman when your data are ordinal (e.g. Likert scale), when you have outliers that would distort Pearson r, or when you expect a curved (but one-directional) relationship.

How do I interpret r² (the coefficient of determination)?

r² is the proportion of variance in Y that is explained by X. If r = 0.80, then r² = 0.64, meaning 64% of the variation in Y is accounted for by its linear relationship with X. The remaining 36% is due to other factors or measurement noise.

What sample size do I need for a reliable correlation?

A minimum of n = 30 is widely recommended for a stable correlation estimate. With small samples, the confidence interval for r is wide and the estimate is unreliable. For n = 10 you need |r| > 0.63 for significance at α = 0.05; for n = 100 any |r| > 0.20 is significant. Always report n alongside r.

How do outliers affect the Pearson correlation coefficient?

Pearson r is sensitive to outliers because it is based on raw values. A single extreme point can inflate r from 0 to 0.9 or deflate it from 0.9 to near zero. This calculator flags points with standardized residuals beyond ±2.5 as potential outliers, shown as red triangles on the scatter plot. If outliers are present, compare Pearson r with Spearman ρ — a large discrepancy indicates the outliers are influential.
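The flagging rule can be sketched in stdlib Python: fit the least-squares line, divide each residual by the residual standard deviation, and flag anything beyond 2.5. The data and function name are illustrative (the tenth point is a deliberate outlier); this is a sketch of the rule, not the calculator's actual code:

```python
import math

def flag_outliers(xs, ys, threshold=2.5):
    # Fit y-hat = m*x + b by least squares.
    n = len(xs)
    xb, yb = sum(xs) / n, sum(ys) / n
    sxx = sum((x - xb) ** 2 for x in xs)
    sxy = sum((x - xb) * (y - yb) for x, y in zip(xs, ys))
    m = sxy / sxx
    b = yb - m * xb
    residuals = [y - (m * x + b) for x, y in zip(xs, ys)]
    # Residual standard deviation with n - 2 degrees of freedom.
    sd = math.sqrt(sum(e * e for e in residuals) / (n - 2))
    return [abs(e / sd) > threshold for e in residuals]

xs = list(range(1, 13))
ys = [2, 4, 6, 8, 10, 12, 14, 16, 18, 35, 22, 24]  # y = 2x except x = 10
flags = flag_outliers(xs, ys)
print(flags)   # only the x = 10 point is flagged
```

Note that with very small samples a single outlier inflates the residual standard deviation so much that it can mask itself; the rule works best with a dozen or more points.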

What does the p-value mean for a correlation test?

The p-value tests H₀: ρ = 0 (no linear relationship in the population). A p-value below 0.05 means there is less than a 5% probability of observing your r (or a more extreme value) purely by chance if the true correlation is zero. Important: p-value does not measure the size or practical importance of the correlation — only whether it is statistically distinguishable from zero.

Can correlation be used to predict one variable from another?

Correlation quantifies the relationship's strength, but prediction requires linear regression: ŷ = mx + b, where slope m = r × (σY/σX). The r² value tells you the fraction of variance explained by the model. This calculator shows the regression equation in the Analysis tab and draws the regression line on the scatter chart.
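The step from r to a prediction is small. A stdlib sketch of the slope identity m = r × (σY/σX) with illustrative data and names:

```python
import math

def fit_line(xs, ys):
    n = len(xs)
    xb, yb = sum(xs) / n, sum(ys) / n
    sxy = sum((x - xb) * (y - yb) for x, y in zip(xs, ys))
    sxx = sum((x - xb) ** 2 for x in xs)
    syy = sum((y - yb) ** 2 for y in ys)
    r = sxy / math.sqrt(sxx * syy)
    m = r * math.sqrt(syy / sxx)   # equivalent to r * (sigma_y / sigma_x)
    b = yb - m * xb                # line passes through (x_bar, y_bar)
    return m, b, r

m, b, r = fit_line([1, 2, 3, 4, 5], [2, 4, 5, 4, 5])
predict = lambda x: m * x + b
print(round(m, 3), round(b, 3), round(predict(6), 3))
```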

What is the Fisher z-transformation used for?

Pearson r has a skewed sampling distribution, especially near ±1. Fisher's z' = 0.5 × ln((1+r)/(1−r)) transforms r into a quantity that is approximately normally distributed with standard error SE = 1/√(n−3). This makes it possible to construct confidence intervals for r and formally test whether two correlations differ.

What is Anscombe's Quartet and why does it matter?

Anscombe's Quartet is four datasets constructed by Francis Anscombe in 1973. All four have nearly identical Pearson r ≈ 0.816, identical means, and nearly identical variances — yet they look completely different on a scatter plot. One is linear. One is a perfect parabola where Pearson r is misleading. One has a single outlier dominating the regression line. This is why you should never rely on r alone without plotting the data.

What is the difference between correlation and covariance?

Covariance = Σ(xᵢ−x̄)(yᵢ−ȳ) / (n−1) has units that depend on the scales of X and Y, making it hard to interpret or compare across datasets. Pearson r divides the covariance by the product of the standard deviations (σX × σY), producing a dimensionless value bounded between −1 and +1 that is comparable across any datasets regardless of units.
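The units problem is easy to demonstrate: rescaling X (say, meters to centimeters) multiplies the covariance by 100 but leaves Pearson r untouched. A stdlib sketch with illustrative data:

```python
import math

def covariance(xs, ys):
    # Sample covariance: sum((x - x_bar)(y - y_bar)) / (n - 1)
    n = len(xs)
    xb, yb = sum(xs) / n, sum(ys) / n
    return sum((x - xb) * (y - yb) for x, y in zip(xs, ys)) / (n - 1)

def pearson_r(xs, ys):
    n = len(xs)
    xb, yb = sum(xs) / n, sum(ys) / n
    sxy = sum((x - xb) * (y - yb) for x, y in zip(xs, ys))
    sxx = sum((x - xb) ** 2 for x in xs)
    syy = sum((y - yb) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 4.0, 5.0, 4.0, 5.0]
cm = [x * 100 for x in xs]          # same X in different units

print(covariance(xs, ys), covariance(cm, ys))   # covariance scales by 100
print(pearson_r(xs, ys), pearson_r(cm, ys))     # r is unchanged
```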
