Z-Score Calculator
Calculate the z-score (standard score) from a raw value, mean, and standard deviation.
Find how many standard deviations a value is from the mean.
A Z-score (also called a standard score) measures how many standard deviations a data point is from the mean of a dataset. It standardizes values from different scales so they can be compared on a common basis.
Z-score formula:
Z = (X − μ) / σ
What each variable means:
- X: the individual data point (observed value)
- μ (mu): the population mean (or the sample mean x̄ if using a sample)
- σ (sigma): the population standard deviation (or s for the sample standard deviation)
- Z: the resulting score; positive means above average, negative means below average
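The formula translates directly into code. A minimal Python sketch (the function name `z_score` is our own choice, not from the source):

```python
def z_score(x, mu, sigma):
    """Standard score: how many standard deviations x lies from the mean mu."""
    if sigma <= 0:
        raise ValueError("standard deviation must be positive")
    return (x - mu) / sigma

# A value equal to the mean always gives Z = 0
print(z_score(74, 74, 8))   # 0.0
# One SD above the mean gives Z = +1
print(z_score(82, 74, 8))   # 1.0
```

The guard against a non-positive standard deviation matters in practice: with σ = 0 every value equals the mean and the score is undefined.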
Standard Normal Distribution reference:
- Z = 0 → exactly at the mean (50th percentile)
- Z = +1 → 84.13th percentile (1 SD above mean)
- Z = −1 → 15.87th percentile (1 SD below mean)
- Z = +2 → 97.72nd percentile
- Z = −2 → 2.28th percentile
- Z = +3 → 99.87th percentile (the familiar “three-sigma” threshold in quality control)
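The percentiles in this table come from the standard normal cumulative distribution function, which can be computed from the error function available in Python's standard library (the helper name `std_normal_cdf` is illustrative):

```python
import math

def std_normal_cdf(z):
    """Φ(z): fraction of a standard normal distribution lying below z."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

for z in (-2, -1, 0, 1, 2, 3):
    print(f"Z = {z:+d} -> {std_normal_cdf(z) * 100:.2f}th percentile")
```

Running this reproduces the reference values above (50.00, 84.13, 97.72, and so on).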
Worked example 1 — Exam scores: Class mean: 74 points. Standard deviation: 8 points. Student scored 90. Z = (90 − 74) / 8 = 16 / 8 = +2.0. This student is at the 97.72nd percentile, i.e. in the top 2.28% of the class.
Worked example 2 — Height: Average male height in the US: 69.1 inches (5'9"), SD = 2.9 inches. A man is 6'3" (75 inches). Z = (75 − 69.1) / 2.9 = 5.9 / 2.9 ≈ +2.03. He is taller than approximately 97.9% of men.
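Both worked examples can be checked with `statistics.NormalDist` from Python's standard library (3.8+; the `zscore` method requires 3.9+):

```python
from statistics import NormalDist

# Example 1: exam scores (mean 74, SD 8, student scored 90)
exam = NormalDist(mu=74, sigma=8)
z_exam = exam.zscore(90)                      # (90 - 74) / 8 = 2.0
pct_exam = NormalDist().cdf(z_exam) * 100     # ≈ 97.7th percentile

# Example 2: height (mean 69.1 in, SD 2.9 in, man is 75 in)
height = NormalDist(mu=69.1, sigma=2.9)
z_height = height.zscore(75)                  # ≈ 2.03
pct_height = NormalDist().cdf(z_height) * 100 # ≈ 97.9th percentile

print(z_exam, pct_exam)
print(z_height, pct_height)
```

Note that the z-score itself uses the original distribution (its mean and SD), while the percentile lookup uses the standard normal, `NormalDist()` with μ = 0 and σ = 1.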
Worked example 3 — Finding a raw score: If the mean test score is 500 and SD is 100 (like the SAT), what score corresponds to the 90th percentile? The Z-value for the 90th percentile is ≈ +1.282, so X = μ + Z × σ = 500 + 1.282 × 100 ≈ 628.
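The reverse lookup in this example, percentile → Z → raw score, corresponds to the inverse CDF, which `statistics.NormalDist` also provides:

```python
from statistics import NormalDist

# Z-value at the 90th percentile of the standard normal
z90 = NormalDist().inv_cdf(0.90)    # ≈ 1.2816

# Rearranged formula: X = mu + Z * sigma, on an SAT-like scale
score = 500 + z90 * 100
print(round(score))                 # 628
```

The same `inv_cdf` call with 0.975 or 0.995 gives the familiar 1.96 and 2.576 cutoffs used for confidence intervals.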
Uses: Quality control (detecting defects), finance (risk assessment — 2-sigma and 3-sigma events), academic grading (curved grades), medical research (identifying outlier lab values), and standardized testing (SAT, GRE scores).