When the Plan Looked Credible, the Spend Is Happening, But the Sales Just Aren’t Coming In

You sit on the board of a digital direct-to-consumer business. Last year, the executive team built a compelling case: to hit the sales targets, marketing spend had to increase. The budget was modelled, approved, and locked in. It all stacked up. The sales forecast was credible. The customer acquisition projections were aligned. The marketing plan looked balanced. So you signed off.

Fast forward: the spend is happening, but sales are falling short, cash flow is weakening, and covenant adherence is touch and go.

There’s no obvious overspend. The CCO is spending the budget to plan. The marketing team is executing. But you’re not getting the revenue lift you expected.

So what’s gone wrong?


Was the Sales Target Reverse-Engineered?

In many organisations, the Marketing Director or CCO is given a top-line revenue target—say, £35 million for the year—and then asked to figure out what level of marketing investment is needed to deliver it. From there, a financial model is built that works backwards: assume a CAC, a conversion rate, and a customer LTV, then calculate the marketing budget needed to support the sales plan.

This is a logical exercise, but it carries risk. If the inputs are too optimistic or taken from outdated performance benchmarks, the entire plan is built on a fragile foundation. What results is a budget that looks mathematically sound—but is designed to hit a target, not necessarily built from live customer insight.

It’s worth asking: “Was this budget grounded in real, current behaviour—or was it a backwards justification for a pre-set number?”
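The reverse-engineering described above reduces to a few lines of arithmetic. A minimal sketch, using illustrative figures rather than numbers from any real plan:

```python
# Illustrative reverse-engineered marketing budget.
# Every input here is a hypothetical assumption, not a real benchmark.

revenue_target = 35_000_000   # top-down sales target (£)
ltv = 700                     # assumed revenue per customer (£)
cac = 120                     # assumed cost to acquire one customer (£)

# Work backwards: how many customers must we acquire,
# and what budget does that imply at the assumed CAC?
customers_needed = revenue_target / ltv
marketing_budget = customers_needed * cac

print(f"Customers needed: {customers_needed:,.0f}")
print(f"Implied budget:  £{marketing_budget:,.0f}")
```

Note the fragility: every input is an assumption, and nudging the assumed CAC up by just 25% adds £1.5 million to the implied budget with no change in the revenue target.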


Why Reverse Engineering Often Fails in the Real World

Reverse-engineering a marketing budget from a top-down sales target may look tidy in a spreadsheet, but it often doesn’t hold up in practice.

Why?

  • It treats marketing spend like a linear input: spend more, get more. But customer acquisition is rarely that simple. Diminishing returns are real—channels saturate, creative fatigues, and costs rise faster than outcomes.
  • It assumes the past will repeat itself. Yet marketing performance can swing dramatically based on external factors: platform algorithm changes, competitive bids, or shifting customer behaviour.
  • It often skips scenario planning. These models are designed to show how the plan will work if everything holds—not what happens if just one input falters.
  • It can confuse correlation with causation. A strong ROAS or CAC from last year may have had little to do with budget size and everything to do with a one-off campaign, seasonal uplift, or pricing shift.

The result? Budgets that look sound at sign-off but collapse under the weight of real-world variability.

A good financial model should describe what might happen, not only what needs to happen to justify a target.
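The first failure mode above, treating spend as a linear input, can be made concrete with a simple saturating response curve. The curve shape and parameters below are hypothetical, chosen only to illustrate diminishing returns:

```python
# Compare a linear spend model against a hypothetical saturating one.
# A linear model assumes customers = spend / CAC; a saturating model caps out.

def customers_linear(spend, cac=120):
    return spend / cac

def customers_saturating(spend, ceiling=60_000, half_sat=4_000_000):
    # Hyperbolic saturation: approaches `ceiling`, reaching half of it
    # when spend equals `half_sat`. Both parameters are invented.
    return ceiling * spend / (spend + half_sat)

for spend in (2_000_000, 6_000_000, 12_000_000):
    lin = customers_linear(spend)
    sat = customers_saturating(spend)
    print(f"£{spend:,}: linear {lin:,.0f} vs saturating {sat:,.0f}")
```

Under these (invented) numbers, doubling spend from £6m to £12m adds only 9,000 customers under saturation, while the linear model promises 50,000 more. A budget built on the linear view will look sound at sign-off and fail in-year.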


Real-World Example: The Risk of Sticking Too Rigidly to the Model

In one business we supported, competitors unexpectedly slashed their prices in Q2. The CCO proposed a responsive pricing strategy to defend market share, but the CFO resisted—concerned that discounting would undermine the gross margin assumptions agreed in the budget.

The outcome?

  • The marketing team kept spending according to plan
  • Conversion rates collapsed as the business became uncompetitive
  • Sales fell sharply
  • The profitability jaws widened: marketing costs ran to budget while the sales side of the model collapsed

The lesson? Protecting the assumptions became more important than protecting the outcome.


Did the Budget Process Leave Enough Room for Challenge?

Before we even look at the model itself, it’s worth considering how the budget was actually built. In many businesses, the CFO depends on inputs from the CCO to complete the revenue model. But timelines are tight. Often, the CFO is under pressure to consolidate, submit, or finalise the broader financial plan—so the marketing numbers are needed quickly.

That pressure can lead to a scenario where the CCO provides quick estimates based on past performance or optimistic projections, without fully pressure-testing them.

It’s worth asking: “Were these numbers built collaboratively and rigorously—or were they assembled to fit a deadline?”

Understanding the process that led to the budget is as important as challenging the assumptions within it.


The Assumptions Were Logical—but Possibly Too Simplistic

The budget model likely worked on a handful of key inputs:

  • Spend X → Acquire Y customers
  • Conversion rate = Z
  • Target revenue per customer over a period of time

But what looked like a straightforward sales engine may have underestimated a few critical variables:

  • Was customer behaviour shifting faster than expected?
  • Did the marketing channels underperform compared to last year?
  • Was the budget built with enough sensitivity to conversion risk and timing?
  • Has competition increased, discounting risen, or pricing power weakened?

This isn’t about blame. It’s about recalibrating what we should expect from static models in dynamic markets.


Does the C-Suite Fully Understand the Metrics?

While CFOs, CCOs, and other senior leaders are increasingly expected to collaborate on commercial models, many of the marketing metrics they rely on come from tools like Google Ads, Meta dashboards, and agency reports. These platforms use attribution models that often overstate their contribution to sales and simplify complex customer behaviour.

It’s easy—even common—for C-suite leaders to fall into the “Google says it’s working” trap: accepting metrics like ROAS or CAC at face value without challenging the assumptions behind them or validating them against actual outcomes, such as profit margin, retention, or long-term value.

Board members should ask: “Are we building financial logic off marketing metrics we really understand—and have we validated them in our specific context?”

If the budget was built on platform-driven dashboards that don’t reflect real customer dynamics, that could explain why performance diverges from the plan.


What the Board Should Be Asking Now

Before jumping into the questions, it’s worth breaking down the key metrics often used in marketing and sales planning—particularly if you’re not deep in the weeds of acquisition or funnel analytics.


Customer Acquisition Cost (CAC)
What it is: The average cost to acquire a new customer.
How it’s calculated: Total marketing spend ÷ number of new customers acquired.
Why it matters: A low CAC can signal efficient growth, while a high or rising CAC may mean it’s costing more to win each customer—possibly eroding margin.
Good sign: CAC is going down while customer quality stays high.
Warning sign: CAC is down, but only because we’re attracting low-LTV or one-time customers.


Customer Lifetime Value (LTV)
What it is: The projected revenue a customer generates over their full relationship with the business.
How it’s calculated: Average order value × number of purchases × gross margin. Often adjusted for churn or time period.
Why it matters: A high LTV means you can afford to spend more on acquisition. If LTV falls short of CAC, growth becomes unprofitable.
Good sign: LTV is rising due to better retention or higher average spend.
Warning sign: LTV is overstated in the model and doesn’t match real customer behaviour.


Conversion Rate
What it is: The percentage of site visitors (or leads) that go on to make a purchase.
How it’s calculated: (Number of conversions ÷ total visitors or leads) × 100.
Why it matters: It determines how efficiently your marketing and product are turning interest into revenue.
Good sign: Conversion rate increases without needing aggressive discounting.
Warning sign: Conversion is stable but traffic quality or purchase value is declining.


Average Order Value (AOV)
What it is: The average amount spent per order.
How it’s calculated: Total revenue ÷ number of orders.
Why it matters: Higher AOV can dramatically improve revenue efficiency, especially when fixed costs per transaction are significant.
Good sign: Customers are adding more to basket or upgrading.
Warning sign: AOV is dropping due to over-reliance on low-ticket items or discounting.
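The four definitions above are simple arithmetic, and they chain together. A worked sketch with invented figures shows how, and why the LTV-to-CAC ratio is the number to watch:

```python
# Worked example of the four metrics, using invented illustrative figures.
marketing_spend = 1_200_000   # total spend in period (£)
new_customers   = 10_000
visitors        = 400_000
orders          = 25_000
revenue         = 2_000_000   # total revenue in period (£)
gross_margin    = 0.55        # assumed
purchases_per_customer = 3    # assumed lifetime purchase count

cac = marketing_spend / new_customers                 # cost per new customer
aov = revenue / orders                                # average order value
conversion_rate = orders / visitors * 100             # % of visitors who buy
ltv = aov * purchases_per_customer * gross_margin     # margin over the lifetime

print(f"CAC £{cac:.0f}, AOV £{aov:.0f}, conversion {conversion_rate:.2f}%, LTV £{ltv:.0f}")
print(f"LTV / CAC = {ltv / cac:.2f}")
```

With these figures, LTV/CAC comes out at roughly 1.1: each customer barely covers their own acquisition cost. That is exactly the warning-sign territory the definitions above describe, and it is invisible if the board only sees the headline revenue number.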


Armed with those definitions, here are the key questions board members should now be asking:

1. Is the cost per acquisition aligned to the actual revenue per customer—not the assumed figure? It’s easy to base forecasts on a blended LTV figure, but real customer behaviour often diverges. Ask:

“What’s the LTV of the customers we’ve actually acquired this year, not what we modelled?”

2. Are we sure the increased budget is still being spent in the right areas? Just because we agreed to the spend doesn’t mean it should be deployed without scrutiny. Ask:

“Have we adjusted the media mix, channels, or campaigns based on what’s working in real time?”

3. Is the funnel underperforming at a specific stage? Sometimes traffic is healthy, but conversion tanks. Or leads are converting, but AOV has dropped. Ask:

“Where exactly is the drop-off—and what’s changed since the model was built?”

4. What learning loops are in place? In high-performing businesses, marketing and finance aren’t locked into last year’s logic. They adjust monthly, even weekly. Ask:

“What are we learning, and how fast are we responding?”


What We’ve Seen at NorthCo

In one client situation, the CCO was given a top-down sales target and asked to build a marketing plan to support it. The CFO reverse-engineered the budget, working backwards from the revenue goal to determine how much marketing spend was needed. The assumptions—on CAC, conversion rates, and LTV—looked credible on paper, and the board signed off.

But six months in, sales were underperforming. The budget was being spent exactly as agreed, yet performance lagged expectations. Because the plan had been built around a static model rather than dynamic insight, the marketing team felt unable to adapt. Channels that had worked the previous year were no longer converting, but the team was focused on “spending to plan” rather than adjusting course.

Compounding the issue, competitors had begun aggressive discounting. The CCO flagged this as a critical factor undermining conversion, yet the business stuck rigidly to its pricing strategy to protect the gross margin assumptions locked into the plan. The brand remained visible but uncompetitive—marketing spend continued, but conversion did not.


Rethinking the Sales & Marketing Cadence

After seeing how rigid budget adherence was blocking commercial responsiveness, we helped the team reshape their internal operating rhythm—not just the numbers.

Key changes included:

  • Weekly check-ins between marketing and finance, focused on live outcomes rather than budget adherence.
  • A shift from static dashboards to rolling 4-week trend reviews, enabling early detection of performance shifts.
  • Incorporating competitive intelligence discussions into monthly S&OP sessions, ensuring the team stayed alert to market movements.
  • Introducing a shared set of “action thresholds”—pre-agreed indicators (e.g. drop in ROAS, competitor discounting) that would trigger immediate review of spend allocation or pricing strategy.

The result? The process became sharper and more alive to real-world trading dynamics. Conversations were more commercially grounded, and decisions happened weeks earlier than they would have under the old model.
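The “action thresholds” idea can be expressed as a small rule check. The metric names and trigger levels below are illustrative, not the ones the client actually agreed:

```python
# Hypothetical pre-agreed action thresholds: each maps a metric to a
# trigger condition that forces a review of spend allocation or pricing.
THRESHOLDS = {
    "roas":                lambda v: v < 2.5,    # return on ad spend below floor
    "cac":                 lambda v: v > 150,    # acquisition cost above ceiling (£)
    "competitor_discount": lambda v: v > 0.15,   # rivals discounting more than 15%
}

def triggered_reviews(metrics):
    """Return the names of thresholds breached by this period's metrics."""
    return [name for name, breached in THRESHOLDS.items()
            if name in metrics and breached(metrics[name])]

# One week's (invented) readings: ROAS has slipped and rivals are discounting.
week = {"roas": 2.1, "cac": 140, "competitor_discount": 0.20}
print(triggered_reviews(week))
```

The value is not the code but the pre-agreement: once the trigger levels are written down before the year starts, “should we react?” stops being a debate and becomes a standing agenda item.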


What We Did Mid-Year—and What You Can Learn From It

Even the most well-intentioned budgets can fail if they’re too rigid, too optimistic, or built in isolation. Here’s what we advised—and how it helped the team close the gap:

1. Build the Budget Around Scenarios, Not Just Targets

Rather than locking in a single sales outcome, model three cases:

  • Base case: What’s most likely based on recent trends
  • Upside: If CAC improves or conversion increases
  • Downside: If cost of traffic rises or AOV slips

Boards can then ask: “How does the plan flex if any one of our assumptions doesn’t hold?”
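The three cases can live in one small model rather than three spreadsheets. A sketch, where the base-case inputs and the upside/downside adjustments are illustrative assumptions:

```python
# Illustrative three-case revenue model. Base-case inputs and the
# upside/downside adjustments are invented for the sketch.

def revenue(budget, cac, conversion_uplift, ltv):
    customers = budget / cac
    return customers * conversion_uplift * ltv

budget, ltv = 6_000_000, 700

scenarios = {
    "base":     revenue(budget, cac=120, conversion_uplift=1.00, ltv=ltv),
    "upside":   revenue(budget, cac=105, conversion_uplift=1.05, ltv=ltv),  # CAC improves
    "downside": revenue(budget, cac=140, conversion_uplift=0.95, ltv=ltv),  # traffic cost rises
}

for name, rev in scenarios.items():
    print(f"{name:9s} £{rev:,.0f}")
```

With these figures the base case hits £35m, but the downside delivers roughly £28.5m. The board question then has a concrete answer before the year starts: how, specifically, would a gap of that size be closed?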

2. Make Learning Loops a Formal Part of the Plan

Include monthly or quarterly checkpoints where marketing and finance sit down together to:

  • Reassess assumptions (e.g. CAC, conversion rate)
  • Reallocate spend if necessary
  • Identify early warning signs of underperformance

This reduces the risk of sleepwalking through the year on a faulty plan.

3. Align CFO and CCO on Commercial Reality, Not Just the Model

Encourage deeper collaboration between finance and marketing, not just during budgeting, but throughout the year. Both parties should agree on what good looks like—and how it will be measured in practice.

4. Protect Budget Flexibility

Ringfence a portion (e.g. 10–15%) of the marketing budget as a flexible pot for strategic adjustments. Use it to double down on what’s working or respond to competitive shifts.


Board-Level Takeaways

✅ Approving a budget is not the same as guaranteeing results.
✅ Static models need dynamic monitoring.
✅ Real-time data must trump last year’s logic.
✅ CFOs and marketers need freedom to adapt—but accountability to explain how.

If the spend is happening but sales aren’t, your job as a board member is to press for clarity—not just on the inputs, but on the feedback loops, the learning process, and the pace of course correction.

📩 For more boardroom-ready insight, follow NorthCo or subscribe to our newsletter.
