
The Empirical Rule (68–95–99.7)

For any normal distribution, the empirical rule states:

68% of data falls within ±1σ

95% of data falls within ±2σ

99.7% of data falls within ±3σ
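
The three coverage figures can be checked with nothing more than the error function from Python's standard library (a quick verification sketch, not the calculator's own code):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF: P(Z <= z)."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

for k in (1, 2, 3):
    # Coverage within ±k standard deviations of the mean
    print(f"±{k}σ covers {phi(k) - phi(-k):.2%}")
```

The exact values are 68.27%, 95.45%, and 99.73%, which round to the familiar 68-95-99.7.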

Critical Z-Score Reference

±1.282: 80% CI (one-tail: 10%)
±1.645: 90% CI (one-tail: 5%)
±1.96: 95% CI (one-tail: 2.5%)
±2.576: 99% CI (one-tail: 0.5%)
±3.29: 99.9% CI (one-tail: 0.05%)
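
Each threshold is just the inverse CDF evaluated at 1 − α/2. Python's `statistics.NormalDist` (3.8+) reproduces the whole table (a sketch, not the calculator's internals):

```python
from statistics import NormalDist

std = NormalDist()  # standard normal: mean 0, sd 1
for ci in (0.80, 0.90, 0.95, 0.99, 0.999):
    alpha = 1.0 - ci
    z = std.inv_cdf(1.0 - alpha / 2.0)  # two-tailed critical value
    print(f"{ci:.1%} CI -> z = ±{z:.3f} (one-tail: {alpha / 2:.2%})")
```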

Practical example: Heights are normally distributed with mean 5′10″ and SD 3″. Then 68% of people are between 5′7″ and 6′1″, 95% are between 5′4″ and 6′4″, and 99.7% are between 5′1″ and 6′7″.

Hypothesis testing: z = ±1.96 marks the 95% threshold (p < 0.05, two-tail). z = ±2.576 marks the 99% threshold (p < 0.01). Values beyond ±3 are rare outliers (only 0.3% of data).

When the rule doesn’t apply: Skewed, bimodal, or non-normal distributions may not follow the empirical rule. Always verify normality before applying.


How to Use This Calculator

1. Choose a Mode

Select what you want to find: Z-Score, Value, Percentile, or the probability Between Two Z-scores.

2. Enter Your Values

Type your observed value (x), population mean (μ), and standard deviation (σ). Results update instantly.

3. Read the Results

The bell curve shades the probability area. The significance badge tells you if the value is statistically unusual.

4. Use Critical Value Pills

Click ±1.96, ±2.576, or any critical value pill to instantly fill your inputs with common hypothesis-testing thresholds.

Formula & Methodology

Z-Score Formula
z = (x − μ) / σ
Subtract the mean from the observed value and divide by the standard deviation. The result tells you how many standard deviations the value lies from the mean.
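
In code, the forward and inverse conversions are one-liners (a minimal sketch; the function names are mine, not the calculator's):

```python
def z_score(x, mu, sigma):
    """How many standard deviations x lies from the mean."""
    return (x - mu) / sigma

def value_from_z(z, mu, sigma):
    """Invert the formula: x = mu + z * sigma."""
    return mu + z * sigma

print(z_score(85, 70, 10))        # 1.5
print(value_from_z(1.5, 70, 10))  # 85.0
```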
Cumulative Probability
P(Z ≤ z) = Φ(z)
The CDF of the standard normal distribution gives the proportion of values at or below the z-score. Computed using the Abramowitz-Stegun approximation (error < 7.5×10⁻⁸).
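
For reference, the classic Abramowitz-Stegun formula 26.2.17 takes only a few lines. This is the standard published approximation with its tabulated coefficients; the calculator's exact implementation may differ:

```python
from math import exp, pi, sqrt

# Polynomial coefficients from Abramowitz & Stegun 26.2.17 (|error| < 7.5e-8)
B = (0.319381530, -0.356563782, 1.781477937, -1.821255978, 1.330274429)
P = 0.2316419

def norm_cdf(z):
    """Approximate the standard normal CDF, P(Z <= z)."""
    if z < 0:
        # Use symmetry: Phi(-z) = 1 - Phi(z)
        return 1.0 - norm_cdf(-z)
    t = 1.0 / (1.0 + P * z)
    pdf = exp(-z * z / 2.0) / sqrt(2.0 * pi)       # standard normal density
    poly = sum(b * t ** (i + 1) for i, b in enumerate(B))
    return 1.0 - pdf * poly
```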
Between Two Z-Scores
P(z₁ ≤ Z ≤ z₂) = Φ(z₂) − Φ(z₁)
Subtract the smaller CDF from the larger. For example, P(−1 ≤ Z ≤ 1) = Φ(1) − Φ(−1) = 0.8413 − 0.1587 = 68.27%.

Key Terms

Z-Score: The number of standard deviations a data point is above or below the mean of its distribution.
Standard Normal Distribution: A normal distribution with mean 0 and standard deviation 1; all z-scores follow this distribution.
Percentile: The percentage of values in a distribution that fall at or below a given value.
CDF (Φ): The cumulative distribution function maps each z-score to its cumulative probability (area under the curve to the left).
Critical Value: A z-score that marks a significance boundary; z = ±1.96 defines the 95% confidence interval threshold.
Two-Tailed Test: Considers extreme values in both directions; the p-value is doubled compared to a one-tailed test.

Real-World Examples


SAT Score

Input: x = 1350, μ = 1060, σ = 217

Result: z = 1.34 — 91st percentile; scored higher than about 91% of test-takers.
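
The SAT numbers can be reproduced directly (here with Python's `statistics.NormalDist` rather than the calculator itself):

```python
from statistics import NormalDist

x, mu, sigma = 1350, 1060, 217
z = (x - mu) / sigma              # standard deviations above the mean
percentile = NormalDist().cdf(z)  # proportion of test-takers at or below x
print(f"z = {z:.2f}, percentile = {percentile:.0%}")
```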


Z-Score Quick Reference

Z-Score   Percentile   Description
−3.0      0.13%        Extreme outlier (below)
−2.0      2.28%        Far below average
−1.0      15.87%       Below average
0.0       50.00%       Exactly average
+1.0      84.13%       Above average
+2.0      97.72%       Far above average
+3.0      99.87%       Extreme outlier (above)
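
This table is just the standard normal CDF sampled at whole z values; a few lines regenerate it (a sketch using the standard library):

```python
from statistics import NormalDist

std = NormalDist()
for z in (-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0):
    # Percentile = cumulative probability P(Z <= z)
    print(f"{z:+.1f}  {std.cdf(z):7.2%}")
```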

Z-Scores: Comparing Apples and Oranges

Why Z-Scores Enable Fair Comparison

A student who scores 85 on a history exam and 75 on a physics exam may actually have performed better in physics if the physics test was harder, i.e. had a lower class mean relative to its spread. Z-scores remove the influence of different scales and spreads, expressing every score in the universal language of standard deviations. This makes them indispensable for comparing results across different tests, populations, or measurement systems.
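
To make the exam comparison concrete, here is a sketch with hypothetical class statistics (the means and standard deviations below are illustrative, not from the text):

```python
# Hypothetical class statistics -- illustrative numbers only
history_z = (85 - 78) / 5   # raw 85, class mean 78, SD 5
physics_z = (75 - 62) / 8   # raw 75, class mean 62, SD 8

print(f"history z = {history_z:.2f}")
print(f"physics z = {physics_z:.2f}")
# The lower raw physics score is further above its own class mean
# in standard-deviation units, so it is the stronger relative result.
```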

Z-Scores in Real Life

Credit scoring models, medical lab results, and standardized tests all use z-scores internally. A lab result reported as “within normal limits” typically means the z-score falls between roughly −2 and +2. Growth charts for children plot height and weight as z-scores (standard deviation scores) relative to age-matched populations, allowing pediatricians to quickly identify children who fall outside the expected range.

One-Tail vs. Two-Tail Tests

When performing hypothesis testing, choose a one-tailed test if you care only whether the value is above (or below) the threshold, and a two-tailed test if deviations in either direction are meaningful. The p-value in a two-tailed test is exactly double the one-tailed p-value for the same |z|.
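
The doubling relationship is easy to state in code (a sketch using the standard library, not the calculator's internals):

```python
from statistics import NormalDist

def p_values(z):
    """One-tailed and two-tailed p-values for an observed z."""
    one_tail = 1.0 - NormalDist().cdf(abs(z))  # P(Z >= |z|)
    return one_tail, 2.0 * one_tail            # two-tail is exactly double

one, two = p_values(1.96)
print(f"one-tail p = {one:.4f}, two-tail p = {two:.4f}")
```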

Frequently Asked Questions

What is a z-score?

A z-score tells you how many standard deviations a data point is from the mean. A z-score of 2 means the value is 2 standard deviations above the mean, while a z-score of −1.5 means it is 1.5 standard deviations below.

How do I calculate a z-score?

Subtract the mean from your data point, then divide by the standard deviation: z equals (x minus mean) divided by standard deviation. Enter your values into the calculator and it computes the z-score along with the associated probability.

What z-score is considered statistically significant?

In most research, a z-score beyond plus or minus 1.96 is considered significant at the 95 percent confidence level. For 99 percent confidence, the threshold is plus or minus 2.576. These correspond to p-values below 0.05 and 0.01 respectively.

How are z-scores used in real life?

Z-scores are used in standardized testing (SAT, GRE), quality control (Six Sigma), medical diagnostics (bone density T-scores), finance (risk assessment), and any field where comparing values across different scales is necessary.

Can z-scores be used with non-normal distributions?

Z-scores can be calculated for any distribution, but the probability interpretations (like the 68-95-99.7 rule) only apply to normally distributed data. For non-normal data, use percentile ranks or distribution-specific methods instead.