Scientific Notation Calculator

Convert numbers to/from scientific notation, perform arithmetic operations, and explore the scale of the universe.


Scale runs from 10^−15 (femtometre) to 10^26 (observable universe)

Powers-of-Ten Reference Table
Power | Name / Object | Example

How to Use This Calculator

1. Converter tab: Select your direction (Standard → Scientific or Scientific → Standard), enter your number, and read off the scientific notation, engineering notation, and word name instantly.
2. Operations tab: Enter two numbers already in scientific notation (coefficient + exponent each), choose an operation (×, ÷, +, −), and see the result with a full step-by-step breakdown.
3. Scale Reference tab: Enter any number to see where it sits on a visual scale from subatomic to cosmic. The reference table maps common scientific quantities to their powers of ten.

Key Formulas

a × 10^n where 1 ≤ |a| < 10
Multiply: (a × 10^m)(b × 10^n) = ab × 10^(m+n)
Divide: (a × 10^m) ÷ (b × 10^n) = (a/b) × 10^(m−n)
Add/Sub: match exponents first, then operate on coefficients
Engineering: exponent is a multiple of 3 (kilo, mega, giga…)
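The four rules above reduce to a few lines of code. A minimal Python sketch, with illustrative helper names (`normalize`, `multiply`, `divide`, `add`) that are not part of the calculator itself:

```python
import math

def normalize(coeff, exp):
    """Shift coeff into [1, 10) and adjust exp so the value is unchanged."""
    if coeff == 0:
        return 0.0, 0
    shift = math.floor(math.log10(abs(coeff)))
    return coeff / 10**shift, exp + shift

def multiply(a, m, b, n):
    # (a × 10^m)(b × 10^n) = ab × 10^(m+n), then renormalize
    return normalize(a * b, m + n)

def divide(a, m, b, n):
    # (a × 10^m) ÷ (b × 10^n) = (a/b) × 10^(m−n)
    return normalize(a / b, m - n)

def add(a, m, b, n):
    # Match exponents first: rewrite the smaller-exponent term, then add coefficients
    if m < n:
        a, m, b, n = b, n, a, m
    return normalize(a + b * 10**(n - m), m)
```

Note how `normalize` also fixes non-standard inputs: `normalize(32, 5)` returns `(3.2, 6)`, the unique normalized form.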

Key Terms

Coefficient
The number multiplied by the power of ten. In standard scientific notation, it must satisfy 1 ≤ |a| < 10.
Mantissa
Another word for the coefficient — the significant part of the number before the power of ten. The term is borrowed from logarithm and floating-point terminology.
Exponent
The power of ten that scales the coefficient. Positive exponents make numbers larger; negative exponents make them smaller (fractions).
Order of Magnitude
A rough measure of the size of a number, given by its exponent. Two numbers that differ by one order of magnitude differ by a factor of 10.
Significant Figures
The meaningful digits in a measurement. Scientific notation makes it easy to express exactly how many significant figures a number has (the digits in the coefficient).
Normalized Form
Scientific notation where the coefficient is in the range [1, 10). A number is in normalized form when it has exactly one non-zero digit before the decimal point.

Real-World Examples

Speed of Light
2.998 × 10^8 m/s
299,792,458 metres per second — the universal speed limit.
Mass of an Electron
9.109 × 10^−31 kg
One of the smallest measurable masses; about 1/1,836 the mass of a proton.
Avogadro’s Number
6.022 × 10^23
Number of atoms/molecules in one mole of a substance.
US National Debt
~3.5 × 10^13 USD
Roughly $35 trillion — a number too large for everyday notation.
Planck Length
1.616 × 10^−35 m
The smallest meaningful length in physics; far smaller than any particle.
Distance to Andromeda
2.365 × 10^22 m
Our nearest large galactic neighbour, about 2.5 million light-years away.

Why Scientists Use Scientific Notation

The universe spans an almost incomprehensible range of scales. A proton is roughly 10^−15 metres across. The observable universe is roughly 10^26 metres wide. Writing these numbers in full would require strings of zeros so long they’d be unreadable — and prone to counting errors. Scientific notation solves this by encoding size as a power of ten, separating “how big” from “how precisely we know.”

The form a × 10^n (where 1 ≤ |a| < 10) is universally adopted across physics, chemistry, astronomy, and engineering. The exponent n gives the order of magnitude — a rough sense of size — while the coefficient a carries the precision. Two numbers with the same exponent are the same order of magnitude; each step of 1 in the exponent represents a factor of 10.
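Python's built-in exponent formatting produces exactly this normalized form, so a converter needs very little code. A sketch, assuming a hypothetical helper `to_scientific` (not the calculator's actual implementation):

```python
def to_scientific(x, sig_figs=4):
    """Format x as 'a × 10^n' with 1 ≤ |a| < 10, via Python's '%e' formatting."""
    if x == 0:
        return "0"
    coeff, exp = f"{x:.{sig_figs - 1}e}".split("e")
    return f"{coeff} × 10^{int(exp)}"

to_scientific(299792458)      # '2.998 × 10^8' — the speed of light in m/s
to_scientific(0.0000045, 2)   # '4.5 × 10^-6' — negative exponent for numbers below 1
```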

Engineering notation is a practical variant used in electronics and anywhere SI prefixes appear. Here the exponent is always a multiple of 3, matching the prefix system: 10^3 = kilo, 10^6 = mega, 10^9 = giga, 10^−3 = milli, 10^−6 = micro. A capacitor value of 4.7 × 10^−6 farads is more naturally written as 4.7 µF in engineering notation.
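The conversion is just "round the exponent down to a multiple of 3". A Python sketch, with an illustrative prefix subset (the names `to_engineering` and `SI_PREFIXES` are assumptions, not the calculator's code):

```python
import math

# Subset of SI prefixes keyed by exponent (multiples of 3); illustrative only
SI_PREFIXES = {-9: "n", -6: "µ", -3: "m", 0: "", 3: "k", 6: "M", 9: "G"}

def to_engineering(x):
    """Rewrite x as (coefficient, exponent, prefix) with the exponent a multiple of 3."""
    if x == 0:
        return 0.0, 0, ""
    eng_exp = 3 * math.floor(math.log10(abs(x)) / 3)  # round exponent down to a multiple of 3
    return x / 10**eng_exp, eng_exp, SI_PREFIXES.get(eng_exp, "")
```

So 47,000 Ω comes back as (47.0, 3, "k"), i.e. 47 kΩ, and 4.7 × 10^−6 F as approximately (4.7, −6, "µ"), i.e. 4.7 µF.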

Arithmetic rules are what make scientific notation especially powerful. Multiplication becomes coefficient multiplication plus exponent addition; division becomes coefficient division plus exponent subtraction. These rules reduce multi-digit calculations to simple arithmetic on small numbers, the same principle that let logarithm tables and slide rules serve scientists for centuries before computers, and it is why physicists still reach for a back-of-the-envelope estimate in powers of ten today.

Frequently Asked Questions

Why must the coefficient be between 1 and 10?

The convention ensures a unique, normalized representation. If any coefficient were allowed, the same number could be written as 32 × 10^5, 3.2 × 10^6, or 0.32 × 10^7 — all equal, but confusing. The 1–10 rule gives every number exactly one standard form.

What does a negative exponent mean?

A negative exponent means the number is very small (less than 1 in magnitude). For example, 10^−3 = 0.001 and 4.5 × 10^−6 = 0.0000045. The magnitude of the exponent tells you how many places to move the decimal point to the left.

Why is addition harder than multiplication in scientific notation?

Multiplication works coefficient-by-coefficient (multiply) and exponent-by-exponent (add). Addition requires both numbers to share the same exponent first — you have to shift one number to match the other before the coefficients can be combined. This extra step makes addition more error-prone than multiplication.
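The shift step can be made explicit. A plain-Python sketch of adding 3.0 × 10^8 and 5.0 × 10^7, with the numbers chosen purely for illustration:

```python
# Add 3.0 × 10^8 + 5.0 × 10^7: first rewrite the smaller term at exponent 8.
a, m = 3.0, 8
b, n = 5.0, 7
b_shifted = b * 10 ** (n - m)   # 5.0 × 10^7 = 0.5 × 10^8
result = (a + b_shifted, m)     # (3.5, 8), i.e. 3.5 × 10^8
```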

What is engineering notation and when is it used?

Engineering notation restricts the exponent to multiples of 3, aligning with the SI prefix system (kilo, mega, giga, milli, micro, nano). It’s widely used in electronics and electrical engineering so component values map directly to prefix labels (e.g., 4.7 nF for a capacitor).

What is the difference between scientific notation and standard form?

In the US, “standard form” usually means the plain decimal number (e.g., 3,200,000). In UK/Commonwealth usage, “standard form” and “scientific notation” are synonymous (both mean a × 10^n). Our calculator shows the plain decimal version as “Standard Form” for clarity.

What is an order of magnitude?

The order of magnitude is the exponent in scientific notation — a rough measure of a number’s size. Two numbers that differ by one order of magnitude differ by a factor of 10. A rough estimate that gets the order of magnitude right is often good enough for physics and engineering decisions.
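In code, the order of magnitude is just the floor of the base-10 logarithm. A one-line sketch (the function name `order_of_magnitude` is illustrative):

```python
import math

def order_of_magnitude(x):
    """Exponent n in the normalized form a × 10^n: floor(log10 |x|)."""
    return math.floor(math.log10(abs(x)))

order_of_magnitude(299792458)   # 8 — the speed of light in m/s
order_of_magnitude(0.0000045)   # -6
```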