The cognitive biases behind bad forecasting

Most business forecasts fail because of predictable cognitive biases, not unpredictable events. Understanding these biases helps you forecast more realistically.


The founder projected 50% growth. They achieved 15%. The next year, they projected 40% growth. They achieved 12%. Year after year, projections exceeded reality by similar margins. The forecasting errors weren’t random—they were systematically optimistic. This pattern is remarkably common. Forecasting failures aren’t primarily about unpredictable events. They’re about predictable cognitive biases that distort how humans think about the future.

Understanding these biases doesn’t guarantee accurate forecasts. But it helps you recognize when your projections are likely inflated and adjust accordingly.

The planning fallacy

The foundational forecasting bias:

What it is

The tendency to underestimate time, costs, and risks of future actions while overestimating benefits. Plans focus on ideal scenarios rather than realistic ones.

How it appears in business forecasting

“We’ll launch the product in three months and it will generate $X.” The three months becomes six. The $X becomes $X/3. The plan assumed everything would go right. Reality included things going wrong.

Why it persists

Plans are based on imagined scenarios. Imagined scenarios tend toward best case. Past experience with similar projects suggests longer timelines and smaller returns, but past experience gets ignored in favor of the specific optimistic vision.

The inside view problem

Planners focus on the specifics of their situation (inside view) rather than base rates of similar projects (outside view). Your project feels unique. But statistically, it probably isn’t.

Optimism bias

Believing your future will be better:

What it is

Systematic tendency to believe that negative events are less likely to happen to you than to others, and positive events are more likely.

In forecasting

“Our growth will outpace the market.” Most businesses believe this. Mathematically, most can’t be right: market growth is the weighted average of every participant’s growth, so above-market growers must be offset by below-market ones. Optimism bias makes your forecast the exception to statistical reality.

Founder optimism amplification

Founders are selected for optimism—you have to be optimistic to start a business. This selection effect means founders are disproportionately susceptible to optimism bias. The trait that enables starting also distorts forecasting.

Motivational function

Optimism motivates action. Realistic forecasts might be demotivating. There’s psychological benefit to optimism even when it produces inaccurate forecasts. The benefit competes with accuracy.

Anchoring on recent performance

The past as misleading anchor:

What it is

Using recent numbers as the starting point for projections, then adjusting insufficiently. The anchor dominates the forecast even when adjustment should be larger.

In forecasting

“We grew 30% last year, so we’ll grow 35% next year.” Last year’s growth anchors the projection. But last year might have been unusual. Growth rates typically decelerate. The anchor pulls the forecast toward the recent past even when the future will differ.

The reversion neglect

Strong performance tends to revert toward average. Weak performance also reverts. But forecasts anchored on recent performance don’t account for reversion. Exceptional periods anchor expectations for continued exceptionalism.

Compounding anchor errors

Forecast this year from last year. Then forecast next year from this year’s forecast. Anchor errors compound. Multi-year projections built from sequential anchoring diverge increasingly from reality.
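
To see how quickly this compounds, here is a minimal Python sketch. The 30% anchored growth rate, the 12% actual rate, and the starting revenue are assumptions for illustration, loosely echoing the gap in the opening anecdote.

```python
# Illustrative only: how sequential anchoring compounds forecast error.
# All numbers are assumed, not drawn from any real business.

anchored_growth = 0.30   # each year's forecast anchors on the prior forecast
actual_growth = 0.12     # what the business actually achieves each year

forecast = actual = 100.0  # same starting revenue
for year in range(1, 6):
    forecast *= 1 + anchored_growth
    actual *= 1 + actual_growth
    gap = forecast / actual - 1
    print(f"Year {year}: forecast {forecast:,.0f}, actual {actual:,.0f}, "
          f"overshoot {gap:.0%}")
```

The overshoot grows from about 16% in year one to over 110% by year five: the five-year projection ends up more than double the actual figure.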

Confirmation bias in forecasting

Seeing what supports the forecast:

What it is

Seeking and weighting information that confirms existing beliefs while ignoring or dismissing contradicting information.

In forecasting

The forecast assumes growth. Information supporting growth gets noticed. Information suggesting challenges gets minimized. The forecast confirms itself through selective attention to confirming data.

Scenario selection

Of many possible scenarios, the one matching the forecast feels most likely. Alternative scenarios feel less probable because they don’t match the existing expectation. Confirmation bias narrows perceived possibility space.

Post-forecast rationalization

When forecasts fail, reasons are found that preserve original logic. “The forecast would have been right if not for X.” The forecasting approach isn’t questioned; only the unexpected interference is blamed.

Overconfidence in knowledge

Thinking you know more than you do:

What it is

Overestimating the accuracy of your knowledge and predictions. Confidence exceeds actual predictive ability.

In forecasting

“I’m confident revenue will be between $X and $Y.” The range is too narrow. Actual outcomes fall outside “confident” ranges far more often than confidence levels suggest.
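
One way to confront this is to back-test your own stated ranges. A hedged sketch in Python; the records below are invented stand-ins for forecasts you would pull from your own history.

```python
# Hypothetical back-test: how often did actuals land inside the ranges
# you called "90% confident"? All records below are made up.

forecasts = [
    # (low, high, actual)
    (900, 1100, 780),
    (1200, 1400, 1250),
    (1500, 1800, 1320),
    (2000, 2300, 1710),
]

hits = sum(low <= actual <= high for low, high, actual in forecasts)
print(f"Claimed confidence: 90%, actual coverage: {hits / len(forecasts):.0%}")
# Coverage far below the claimed level means the ranges are too narrow.
```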

Expertise illusion

“I know this market.” Expertise creates confidence. But expertise in a domain doesn’t equal forecasting accuracy in that domain. Market knowledge and market prediction are different skills.

The precision trap

Detailed forecasts feel more credible. But detail doesn’t equal accuracy. A forecast with monthly projections and segment breakdowns isn’t more likely to be right than a simple annual number. Precision creates false confidence.

Neglecting base rates

Ignoring what usually happens:

What it is

Focusing on specific case information while ignoring statistical base rates for similar situations.

In forecasting

“Our new product will succeed because...” followed by reasons specific to your product. But what percentage of new products succeed generally? The base rate of product success is low. Specific reasons feel compelling but don’t override base rate reality.
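
If you want specific reasons to count without letting them override the base rate, a simple odds update keeps both in view. A sketch with assumed numbers; the 20% base rate and the evidence strength are placeholders, not real product statistics.

```python
# Base rate first, specifics second. All numbers are assumptions.

base_rate = 0.20        # assumed share of comparable products that succeed
likelihood_ratio = 2.0  # assumed: how much likelier your positive evidence
                        # is if the product will succeed than if it won't

prior_odds = base_rate / (1 - base_rate)
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)
print(f"Base rate {base_rate:.0%} -> updated estimate {posterior:.0%}")
# 20% -> ~33%: genuine evidence moves the needle, but nowhere near certainty.
```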

Startup base rates

Most startups fail. Most growth projections aren’t achieved. Most timelines slip. These base rates should inform forecasts. Instead, each forecast treats itself as exception to base rates.

The reference class problem

What is the appropriate reference class for your forecast? Finding comparable situations and their outcomes would improve forecasting. But this requires admitting your situation isn’t unique.

Availability bias

What comes to mind shapes expectations:

What it is

Estimating likelihood based on how easily examples come to mind. Memorable events seem more probable than they are.

In forecasting

Success stories are memorable and frequently shared. Failure stories are forgotten or hidden. This creates the perception that success is more common than it is. Forecasts absorb this distorted sense of probability.

Survivorship bias

“Company X did it, so we can too.” You see Company X because they succeeded. You don’t see the hundreds of similar companies that failed. Surviving examples create a false sense of achievability.

Media amplification

Business media covers exceptional outcomes. Reading about rapid growth makes rapid growth seem common. Your media diet shapes forecast expectations in ways disconnected from statistical reality.

The illusion of control

Overestimating your influence:

What it is

Believing you have more control over outcomes than you actually do. Skill feels like it determines results when luck often dominates.

In forecasting

“We’ll achieve this growth because we’ll execute these strategies.” The forecast assumes execution translates directly to outcomes. But market conditions, competitor actions, and chance also determine results. Control is partial at best.

Scenario planning limitation

Even scenario planning tends to assume control over scenario navigation. “If X happens, we’ll do Y.” But you might not be able to do Y. External constraints limit response options. Plans assume agency that may not exist.

Mitigating forecasting biases

Practical approaches:

Use reference class forecasting

Find similar businesses, products, or projects. What were their outcomes? Use those outcomes as your starting point. Adjust only modestly for what makes your situation different.
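
In practice this can be as simple as summarizing the outcome distribution of your comparables and anchoring on its median. A minimal Python sketch; the growth figures and the small adjustment are placeholders for data you would actually collect.

```python
# Reference-class forecast: anchor on comparable outcomes, not your plan.
# All figures below are assumed placeholders.

from statistics import quantiles

# First-year revenue growth achieved by comparable businesses (assumed)
reference_outcomes = [0.05, 0.08, 0.10, 0.12, 0.15, 0.18, 0.22, 0.30, 0.45]

p25, p50, p75 = quantiles(reference_outcomes, n=4)
print(f"Reference class: 25th pct {p25:.0%}, median {p50:.0%}, 75th pct {p75:.0%}")

# Adjust only modestly for genuine, evidenced differences.
adjustment = 0.02  # assumed, and deliberately small
print(f"Base-rate-anchored forecast: {p50 + adjustment:.0%}")
```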

Pre-mortem analysis

Imagine the forecast failed. Why did it fail? Working backward from imagined failure surfaces risks that optimistic forward planning misses.

Widen confidence intervals

Whatever range you think is reasonable, widen it. Your “worst case” probably isn’t the true worst case, and your “best case” may be no more than the plausible case. Wider ranges are more honest.
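
Mechanically, widening can be as blunt as scaling the stated interval around its midpoint. A sketch with an assumed factor; ideally the factor comes from your own track record, as in the calibration sketch below.

```python
# Widen a stated range around its midpoint. The factor of 2 is an
# assumption; derive it from how often past ranges contained reality.

low, high = 1200, 1500   # your stated "confident" range (illustrative)
widen = 2.0              # assumed widening factor

mid = (low + high) / 2
half = (high - low) / 2 * widen
print(f"Widened range: {mid - half:.0f} to {mid + half:.0f}")
# 1200-1500 becomes 1050-1650
```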

Track forecast accuracy

Compare past forecasts to actual outcomes. What was your typical error? Apply that historical error rate to current forecasts. Self-calibration improves over time.
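
A sketch of that self-calibration, using the growth multipliers from the opening anecdote (50% projected versus 15% achieved, then 40% versus 12%); the current projection is an assumed example.

```python
# Self-calibration: measure your typical optimism, apply it forward.
# Past pairs are growth multipliers from the opening anecdote; the
# raw projection for this year is assumed.

past = [
    (1.50, 1.15),  # (forecast multiplier, actual multiplier)
    (1.40, 1.12),
]

calibration = sum(a / f for f, a in past) / len(past)
raw = 1.35  # this year's uncorrected projection (assumed)

print(f"Calibration factor: {calibration:.2f}")
print(f"Projected growth {raw - 1:.0%} -> calibrated {raw * calibration - 1:.0%}")
```

Here two years of history imply a calibration factor of about 0.78, turning a 35% growth projection into roughly 6%. Harsh, but closer to this founder’s actual track record.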

Seek disconfirming information

Actively look for reasons the forecast might be wrong. What would have to be true for the forecast to fail? Give disconfirming information the same weight as confirming information.

Use outside evaluators

Have someone without your optimism or investment review the forecast. An external perspective counterbalances internal biases. Others see what your biases hide.

When biased forecasting is chosen

Intentional versus accidental bias:

Motivation and fundraising

Optimistic forecasts motivate teams and attract investors. There’s strategic value in optimistic projections even if they’re not realistic. The bias is sometimes chosen, not just experienced.

Goal setting versus prediction

Stretch goals differ from likely outcomes. A forecast might serve as target rather than prediction. Confusing the two creates problems, but they can be deliberately separated.

The honest alternative

Present realistic forecasts with ambitious targets. “We expect X; we’re aiming for Y.” Separating expectation from aspiration maintains honesty while preserving motivation.

Frequently asked questions

If biases are universal, can forecasts ever be accurate?

Accuracy improves with awareness and methods that counteract biases. Perfect accuracy isn’t achievable, but substantial improvement over naive forecasting is. The goal is better, not perfect.

Don’t some businesses actually achieve optimistic forecasts?

Yes, some do. But for each one that does, many don’t. The ones who achieve optimistic forecasts are visible; the ones who don’t are forgotten. Survivorship bias makes achievement look more common than it is.

Is pessimistic forecasting better?

Pessimistic forecasting has its own biases (loss aversion, negativity bias). The goal is realistic forecasting, which may look pessimistic compared to biased optimism but isn’t actually pessimistic—it’s just accurate.

How do I forecast when my business is genuinely unprecedented?

Almost no business is truly unprecedented. Find the closest reference classes. Weight them by relevance. Use their outcomes as anchors. Even imperfect reference classes improve on pure speculation.

Peasy delivers key metrics—sales, orders, conversion rate, top products—to your inbox at 6 AM with period comparisons.

Start simple. Get daily reports.

Try free for 14 days →

Starting at $49/month
