Many functions encountered in physics, engineering, statistics, and applied mathematics simply do not have closed-form antiderivatives. The Gaussian e^(−x²), the sinc function, most arc-length integrals, and the vast majority of integrals arising from real measurement data fall into this category. Numerical integration — approximating an integral by sampling the function at finitely many points — is how working scientists and engineers compute these. The three methods this calculator implements span the spectrum from simplest (Trapezoidal) to dramatically more efficient (Simpson's), and understanding their convergence properties is essential to choosing the right tool.

Why Three Methods Instead of One

The Trapezoidal Rule is the simplest: connect adjacent sample points with straight lines and sum the resulting trapezoid areas. Its error scales as O(h²), meaning halving the step size h cuts the error by a factor of four. The Midpoint Rule evaluates the function at the center of each subinterval — somewhat counter-intuitively, this performs better than Trapezoidal at the same n, because its leading error term is roughly half the size of Trapezoidal's and of opposite sign. Simpson's Rule fits a parabola through each consecutive triple of sample points (one parabola per pair of subintervals); this captures quadratic curvature exactly, leaving an error that scales as O(h⁴). The practical result is dramatic: at n=100, Simpson's typically delivers 8–10 correct digits where Trapezoidal delivers 3–4. The cost is just one extra constraint — Simpson's needs an even number of subintervals — and the requirement that the integrand be smooth enough that its fourth derivative is bounded.
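The three rules can be sketched in a few lines of Python. This is a minimal illustration, not the calculator's own implementation; the function names `trapezoid`, `midpoint`, and `simpson` are chosen here for clarity. Running it on ∫₀^π sin x dx = 2 makes the convergence gap at n=100 visible.

```python
import math

def trapezoid(f, a, b, n):
    # Straight lines between n+1 sample points; endpoints get half weight.
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def midpoint(f, a, b, n):
    # Sample at the center of each of the n subintervals.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def simpson(f, a, b, n):
    # Parabola through each consecutive triple of points; n must be even.
    if n % 2:
        raise ValueError("Simpson's rule needs an even number of subintervals")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # odd-index points
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # interior even points
    return s * h / 3

exact = 2.0  # integral of sin x over [0, pi]
for rule in (trapezoid, midpoint, simpson):
    err = abs(rule(math.sin, 0.0, math.pi, 100) - exact)
    print(f"{rule.__name__:9s} error at n=100: {err:.2e}")
```

At n=100 the two O(h²) rules land in the 10⁻⁴ range while Simpson's error drops to around 10⁻⁸, matching the orders quoted above.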

Reading the Error Column

The calculator reports an exact value for the seven built-in functions because closed-form antiderivatives are available: ∫xⁿ dx = xⁿ⁺¹/(n+1), ∫sin x dx = −cos x, ∫eˣ dx = eˣ, ∫(1/x) dx = ln x, ∫√x dx = (2/3)x^(3/2). Comparing the numerical estimate to the exact value gives an honest measurement of method error rather than a self-reported confidence interval. For functions without antiderivatives in elementary form (the Gaussian, the error function, Fresnel integrals), practitioners run two methods at the same n and treat the disagreement between them as an error estimate. A more rigorous approach uses adaptive quadrature — refining h only where the integrand is changing fast — but for smooth, well-behaved integrands, fixed-n Simpson's with n in the 50–200 range is usually sufficient and faster.
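The two-methods-at-the-same-n trick is easy to demonstrate on the Gaussian, which has no elementary antiderivative. The sketch below (a standalone illustration, with `midpoint` and `simpson` redefined locally) uses the spread between the two estimates as the error bar; the true value ∫₀¹ e^(−x²) dx = (√π/2)·erf(1) is available via `math.erf` only as a check.

```python
import math

def midpoint(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def simpson(f, a, b, n):
    h = (b - a) / n
    return (h / 3) * (f(a) + f(b)
                      + 4 * sum(f(a + i * h) for i in range(1, n, 2))
                      + 2 * sum(f(a + i * h) for i in range(2, n, 2)))

gauss = lambda x: math.exp(-x * x)  # no elementary antiderivative
n = 100
hi = simpson(gauss, 0.0, 1.0, n)
lo = midpoint(gauss, 0.0, 1.0, n)
spread = abs(hi - lo)  # disagreement between methods doubles as an error estimate
print(f"Simpson estimate: {hi:.10f}  (methods agree to within {spread:.1e})")
```

The spread is dominated by the less accurate method's error, so it overestimates Simpson's actual error — a conservative bar, which is usually what you want.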

When Numerical Integration Breaks Down

The convergence rates quoted above assume the integrand is smooth — specifically, that the relevant derivatives are bounded on [a, b]. For 1/x integrated near zero, or √x integrated at x=0 (where the derivative is infinite), convergence degrades significantly because the methods' error analyses rely on Taylor expansions that fail. The same is true for oscillatory integrands like sin(1000x) over [0, 1] — you need an n large enough to resolve every oscillation, which can be punishing. For improper integrals (infinite bounds, or integrands that blow up at the endpoints), variable-transformation tricks (substitution to compress an infinite interval into a finite one) are the standard remedy. The Riemann Visualization tab makes this concrete: drag the slider to small n and watch how the approximating rectangles miss steep regions of the curve. As n grows the rectangles tighten against the curve, but for problematic integrands you can see the convergence stall.
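The variable-transformation remedy can be shown on the improper integral ∫₀^∞ e^(−x²) dx = √π/2. The substitution x = t/(1−t) (one standard choice among several) compresses [0, ∞) onto [0, 1), after which fixed-n Simpson's applies; the helper names below are illustrative, not from the calculator.

```python
import math

def simpson(f, a, b, n):
    h = (b - a) / n
    return (h / 3) * (f(a) + f(b)
                      + 4 * sum(f(a + i * h) for i in range(1, n, 2))
                      + 2 * sum(f(a + i * h) for i in range(2, n, 2)))

# Substitution x = t/(1-t) maps [0, infinity) onto [0, 1),
# with dx = dt/(1-t)^2, so integrate f(t/(1-t)) / (1-t)^2 over [0, 1].
def transformed(t):
    if t >= 1.0:
        return 0.0  # the Gaussian tail vanishes faster than 1/(1-t)^2 grows
    x = t / (1.0 - t)
    return math.exp(-x * x) / (1.0 - t) ** 2

approx = simpson(transformed, 0.0, 1.0, 200)
exact = math.sqrt(math.pi) / 2  # known value of the Gaussian integral on [0, inf)
print(f"estimate {approx:.10f}  vs  exact {exact:.10f}")
```

The transformed integrand is smooth on all of [0, 1] (every derivative vanishes at t = 1), so Simpson's recovers its full O(h⁴) convergence even though the original interval was infinite.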