Monte Carlo Simulation (Basic)
(Interactive demo output: result ≈ 3.14159, the classic Monte Carlo estimate of π.)
How it works
Monte Carlo simulation estimates the output of a mathematical model by running it thousands of times with randomly sampled inputs drawn from probability distributions. The aggregate distribution of outputs reveals the range, likelihood, and sensitivity of outcomes — a technique central to financial risk analysis, engineering reliability, and project scheduling.
**The procedure** (a code sketch follows these steps):
1. **Define input distributions:** revenue could follow a normal distribution with mean $10M and std $2M; costs could be uniformly distributed between $6M and $8M.
2. **Sample randomly:** draw one value from each input distribution.
3. **Compute:** run the model with those sampled inputs (e.g., profit = revenue − cost).
4. **Repeat:** typically 10,000–100,000 iterations.
5. **Analyze:** histogram the output and read off percentile values (P10, P50, P90) as scenario bounds.
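A minimal sketch of these five steps in Python with NumPy, using the illustrative revenue and cost figures above (the seed and iteration count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(seed=42)
N = 100_000  # number of iterations

# Steps 1-2: define input distributions and sample from them
revenue = rng.normal(loc=10.0, scale=2.0, size=N)  # mean $10M, std $2M
cost = rng.uniform(low=6.0, high=8.0, size=N)      # between $6M and $8M

# Steps 3-4: compute the model for every sampled input (vectorized repeat)
profit = revenue - cost

# Step 5: analyze the output distribution
p10, p50, p90 = np.percentile(profit, [10, 50, 90])
print(f"P10 = {p10:.2f}  P50 = {p50:.2f}  P90 = {p90:.2f}  ($M)")
print(f"P(loss) = {(profit < 0).mean():.1%}")
```

Vectorizing the loop (one array operation over all N samples) is the idiomatic NumPy form of "repeat 100,000 times."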
**Applications**
- Project planning: PERT estimates with triangular distributions for task durations → P80 completion date (sketched after this list).
- Financial modeling: DCF with uncertain discount rates and growth rates → distribution of intrinsic value.
- Engineering reliability: combining component failure rates → system MTBF distribution.
- Portfolio VaR: correlated stock returns → portfolio loss distribution at the 95th percentile.
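As an illustration of the project-planning case, here is a sketch that sums three sequential tasks with triangular duration estimates; the task names and (min, most-likely, max) figures are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
N = 50_000

# Hypothetical expert estimates per task: (minimum, most likely, maximum) in days
tasks = {
    "design": (5, 10, 20),
    "build":  (10, 15, 30),
    "test":   (3, 5, 12),
}

# Sample a triangular duration for each task and sum along the critical path
total = sum(
    rng.triangular(left=lo, mode=ml, right=hi, size=N)
    for lo, ml, hi in tasks.values()
)

print(f"P50 completion: {np.percentile(total, 50):.1f} days")
print(f"P80 completion: {np.percentile(total, 80):.1f} days")  # common planning bound
```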
**Number of iterations** 1,000 iterations: rough estimate with visible statistical noise. 10,000: sufficient for P10–P90 percentiles. 100,000: stable results for extreme percentiles (P1/P99). The law of large numbers guarantees convergence, and by the central limit theorem the estimation error scales as 1/√N.
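A quick way to see the 1/√N behavior: estimate the same quantity at several values of N and watch the spread of the estimate shrink by roughly √10 per tenfold increase. A sketch (the standard normal target, whose true mean is 0, is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Repeat each N-sample estimate 50 times; the standard deviation of the
# estimate across repeats should shrink like 1/sqrt(N)
for n in [1_000, 10_000, 100_000, 1_000_000]:
    estimates = [rng.normal(size=n).mean() for _ in range(50)]
    print(f"N={n:>9,}  std of estimate = {np.std(estimates):.5f}")
```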
Frequently Asked Questions
- **How many iterations do I need?** Estimation error scales as 1/√N (the law of large numbers guarantees convergence; the central limit theorem gives the rate). For percentile estimates: 1,000 iterations gives rough estimates with visible statistical noise; 10,000 is sufficient for P10–P90 estimates; 100,000 gives stable results for extreme percentiles (P1/P99); 1,000,000 for P0.1/P99.9. As a practical rule: run at least 10,000 iterations and verify that results stabilize by comparing a 10,000-run to a 100,000-run; if percentiles shift significantly, run more (see the sketch below).
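The stability check might look like this, reusing the illustrative profit model from earlier (the 2% tolerance is an arbitrary threshold, and shifts are measured against the overall spread to avoid dividing by percentiles near zero):

```python
import numpy as np

def profit_percentiles(n, rng):
    # Same illustrative model as above: normal revenue minus uniform cost
    revenue = rng.normal(10.0, 2.0, size=n)
    cost = rng.uniform(6.0, 8.0, size=n)
    return np.percentile(revenue - cost, [1, 10, 50, 90, 99])

rng = np.random.default_rng(seed=7)
small = profit_percentiles(10_000, rng)
large = profit_percentiles(100_000, rng)

# Express each percentile's shift as a fraction of the P1-P99 spread
spread = large[-1] - large[0]
shift = np.abs(small - large) / spread
print(f"max percentile shift: {shift.max():.2%}")
if shift.max() > 0.02:  # arbitrary 2% tolerance
    print("results not yet stable; run more iterations")
```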
- **Which distribution should I use for each input?** Normal: for quantities subject to many small independent random errors (measurement errors, manufacturing tolerances). Log-normal: for quantities that are always positive and right-skewed (project costs, time estimates, where actuals almost always exceed estimates). Triangular: for expert estimates with a defined minimum, most-likely, and maximum; simple and intuitive. Uniform: for quantities with equal probability across a range. PERT: like triangular but with a smoother distribution. Never use a normal distribution for a quantity with a hard physical limit: a normal assigns nonzero probability to impossible values, such as a negative cost or duration.
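In NumPy terms, these choices map to generator methods like the following (parameters are illustrative; note that `rng.lognormal` takes the mean and sigma of the underlying normal, not of the output, and PERT is not built in, so the helper below rescales a Beta distribution using the standard PERT shape parameters):

```python
import numpy as np

rng = np.random.default_rng(seed=3)
N = 10_000

def pert(rng, lo, mode, hi, size):
    # PERT as a rescaled Beta distribution (standard PERT shape parameters)
    alpha = 1 + 4 * (mode - lo) / (hi - lo)
    beta = 1 + 4 * (hi - mode) / (hi - lo)
    return lo + (hi - lo) * rng.beta(alpha, beta, size=size)

samples = {
    # Many small independent errors: symmetric around the mean
    "normal": rng.normal(loc=100.0, scale=5.0, size=N),
    # Always positive and right-skewed; mean/sigma describe log(x), not x
    "log-normal": rng.lognormal(mean=2.0, sigma=0.5, size=N),
    # Expert estimate with minimum / most-likely / maximum
    "triangular": rng.triangular(left=8.0, mode=10.0, right=20.0, size=N),
    # Equal probability across a range
    "uniform": rng.uniform(low=6.0, high=8.0, size=N),
    # Like triangular but smoother
    "PERT": pert(rng, 8.0, 10.0, 20.0, size=N),
}

for name, x in samples.items():
    print(f"{name:>10}: min={x.min():7.2f}  median={np.median(x):7.2f}  max={x.max():7.2f}")
```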
- **How is Monte Carlo different from sensitivity analysis?** Sensitivity analysis ('tornado charts') varies one input at a time while holding others fixed, showing which input has the most impact on the output; it does not account for simultaneous variation of all inputs. Monte Carlo samples all inputs from their distributions on every iteration, capturing the combined effect and correlations between inputs. Monte Carlo provides a full output distribution; sensitivity analysis provides marginal impact rankings.
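The contrast can be made concrete with the same illustrative profit model: a one-at-a-time sweep versus a full joint simulation (a sketch; the swing values approximate each input's P10/P90):

```python
import numpy as np

rng = np.random.default_rng(seed=5)

def profit(revenue, cost):
    return revenue - cost

# Base-case inputs and their approximate P10/P90 swings
base = {"revenue": 10.0, "cost": 7.0}
swings = {"revenue": (7.44, 12.56),  # ~P10/P90 of Normal(10, 2)
          "cost": (6.2, 7.8)}        # ~P10/P90 of Uniform(6, 8)

# Sensitivity analysis: vary ONE input at a time, hold the other at base
for name, (lo, hi) in swings.items():
    p_lo = profit(**{**base, name: lo})
    p_hi = profit(**{**base, name: hi})
    print(f"{name}: profit swings {p_lo:.2f} .. {p_hi:.2f}")

# Monte Carlo: vary ALL inputs jointly, yielding a full output distribution
N = 100_000
p = profit(rng.normal(10.0, 2.0, N), rng.uniform(6.0, 8.0, N))
print(f"MC P10/P50/P90: {np.percentile(p, [10, 50, 90]).round(2)}")
```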
- **Can Monte Carlo handle correlated inputs?** Yes, but it requires modeling the correlation structure. For two positively correlated inputs (e.g., revenue and market size move together), drawing independent samples from each distribution ignores the correlation and understates risk. Use Cholesky decomposition to generate correlated normal samples, or copula methods for non-normal distributions. In practice, incorrectly assuming independence when inputs are correlated is a common Monte Carlo modeling error that underestimates tail risk.
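A sketch of the Cholesky approach for two correlated normal inputs; the 0.7 correlation and the marginal parameters are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(seed=9)
N = 100_000

# Target correlation between revenue and market size
corr = np.array([[1.0, 0.7],
                 [0.7, 1.0]])
L = np.linalg.cholesky(corr)  # lower-triangular factor: L @ L.T == corr

# Independent standard normals -> correlated standard normals
z = rng.standard_normal(size=(2, N))
correlated = L @ z

# Scale to the desired marginals: revenue ~ N(10, 2), market ~ N(100, 15)
revenue = 10.0 + 2.0 * correlated[0]
market = 100.0 + 15.0 * correlated[1]

print(f"achieved correlation: {np.corrcoef(revenue, market)[0, 1]:.3f}")
```

NumPy's `rng.multivariate_normal` does the same in one call; the explicit decomposition is shown here because it makes the mechanism visible.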