The problem with point estimates
When you ask a regression model who will win tonight’s game, it gives you one number: maybe 58% probability for the home team. That’s it, a single point on the probability line, as if the future were a solved problem with one correct answer.
The problem is that the future isn’t like that. Basketball games are chaotic. Star players go cold. Foul trouble reshapes rotations. A team that’s been playing at 70% of capacity for two weeks due to travel finally gets rest. None of this uncertainty is captured in a regression coefficient.
What Monte Carlo gives you instead
Monte Carlo simulation doesn’t give you one answer. It gives you a distribution — a full shape of what might happen, sampled thousands of times, each run drawing from the real probability distributions of individual player performance.
Run 1,000 simulations of tonight’s Lakers vs. Celtics game and you don’t get “Lakers win, 58%.” You get:
- Lakers win 612 out of 1,000 simulations
- The margin distribution peaks at +3 but has fat tails out to ±20
- In the 388 Celtics wins, a specific player goes 3-for-15 from three in 71% of those games
That last insight is the difference. Regression gives you the center of gravity. Monte Carlo gives you the terrain.
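A minimal sketch of this kind of simulation in Python. Every roster, scoring mean, and standard deviation below is an illustrative assumption for the sketch, not a real player parameter or DunkSim’s actual model:

```python
import random

random.seed(42)

# Illustrative (mean, std dev) scoring parameters for five players per team.
# These numbers are made up for the sketch, not real player data.
HOME_PLAYERS = [(27, 8), (22, 7), (15, 6), (11, 5), (8, 4)]
AWAY_PLAYERS = [(29, 9), (20, 7), (14, 6), (10, 5), (9, 4)]

def simulate_team_score(players):
    # Sample each player's points from a normal distribution,
    # floored at zero since a player can't score negative points.
    return sum(max(0.0, random.gauss(mean, sd)) for mean, sd in players)

def simulate_margin():
    return simulate_team_score(HOME_PLAYERS) - simulate_team_score(AWAY_PLAYERS)

N = 1000
margins = [simulate_margin() for _ in range(N)]
home_wins = sum(m > 0 for m in margins)

print(f"Home team wins {home_wins} of {N} simulations")
print(f"Median margin: {sorted(margins)[N // 2]:+.1f}")
```

Because each run resamples every player, the 1,000 margins form a full distribution rather than a single point: you can read off the win count, the peak, and the tails.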
Why this matters for prediction accuracy
Our DunkSim model runs 1,000 game simulations per prediction, sampling from:
- Per-player shooting distributions (not averages — actual variance-adjusted distributions)
- Pace and possession parameters fit to recent games
- Home/away adjustments calibrated against multi-season trends
The regression layer is still there — but it feeds parameters into the simulation, not outputs to the user. The user gets a distribution, not a point.
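One way to wire that architecture together, with hypothetical parameter names and a deliberately simplified points-per-possession model standing in for the real simulator:

```python
import random

random.seed(7)

# Hypothetical regression outputs. In this architecture the regression
# layer estimates parameters like these; the simulation consumes them,
# and only the resulting distribution reaches the user.
pace_mean = 99.5        # expected possessions per team
home_ppp_mean = 1.14    # points per possession, home (incl. home adjustment)
away_ppp_mean = 1.10
ppp_sd = 0.12           # game-to-game variance in scoring efficiency

def simulate_margin():
    # Sample each team's efficiency for this simulated game,
    # then play it out over a sampled number of possessions.
    home_eff = random.gauss(home_ppp_mean, ppp_sd)
    away_eff = random.gauss(away_ppp_mean, ppp_sd)
    possessions = round(random.gauss(pace_mean, 3.0))
    return (home_eff - away_eff) * possessions

margins = [simulate_margin() for _ in range(1000)]
win_prob = sum(m > 0 for m in margins) / len(margins)
print(f"Simulated home win probability: {win_prob:.1%}")
```

The point of the design is the division of labor: the regression does what it is good at (estimating parameters from data), and the simulation does what regression cannot (turning those parameters into a distribution of outcomes).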
Across 1,230 out-of-sample validation games from the 2023-24 season, this architecture produced 63.01% winner-prediction accuracy, meaningfully above naive baselines.
The honest answer is a distribution
Most prediction products hide their uncertainty. They show you 58% because it’s clean and confident-looking. We show you the distribution because that’s what honesty requires when the future is uncertain.
A 58% model that never communicates its uncertainty is worse than a 55% model that tells you “this game is essentially a coin flip — the distributions overlap heavily.” One helps you decide; the other creates false confidence.
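A statement like that can be generated directly from the simulated margins. The 5-point “close game” threshold and the 50% cutoff below are illustrative choices for the sketch, not DunkSim’s actual rules:

```python
import random

def describe_uncertainty(margins, close_threshold=5, coin_flip_cutoff=0.5):
    # Summarize a margin distribution as a win probability, plus an
    # explicit warning when most simulations land in close-game territory.
    n = len(margins)
    win_prob = sum(m > 0 for m in margins) / n
    close_share = sum(abs(m) <= close_threshold for m in margins) / n
    summary = f"{win_prob:.0%} home win"
    if close_share > coin_flip_cutoff:
        summary += (f", but essentially a coin flip: {close_share:.0%} of "
                    f"simulations finish within {close_threshold} points")
    return summary

# Illustrative margins from a tightly contested matchup.
random.seed(1)
tight_margins = [random.gauss(1.0, 4.0) for _ in range(1000)]
print(describe_uncertainty(tight_margins))
```

Note that the warning comes for free: no extra model is needed, because the overlap between the two teams’ outcomes is already present in the sampled margins.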
Monte Carlo forces that honesty by design.