Episode 010 - 20th Dec 25

Incrementality Testing in B2B: How to Measure True Marketing Impact

In this episode of Unqualified Leads, we break down incrementality testing: what it is, how it works, and when B2B teams should (and shouldn’t) use it. While attribution shows where conversions come from, it often fails to show what actually caused them. Incrementality testing fills that gap by measuring true cause and effect.

We explain the core differences between attribution, incrementality testing, and marketing mix models, and why correlation alone can lead to poor budget decisions. You’ll learn how control groups and holdouts work, what sample size and spend levels are required, and why timing and organisational buy-in matter just as much as the data itself.

We also cover real-world scenarios where incrementality testing becomes essential: rising CAC, overlapping channels, over-credited brand and retargeting campaigns, and leadership pressure to justify spend. Finally, we outline the practical prerequisites, common pitfalls, and how modern teams can run incrementality tests without massive data science resources.

If you’re making high-stakes marketing decisions and want confidence that your spend is truly driving incremental revenue, this episode is essential listening.

Transcript

Unqualified Leads – Episode 010 Highlights

Hosts: Harry Hughes & Daniel

Topic: What incrementality testing is, why it’s different from attribution, when you should (and shouldn’t) run it, and the prerequisites required to run reliable tests that actually inform budget decisions.

Incrementality Testing: The Difference Between Correlation and Causation

  • Incrementality testing is one of the most effective ways to validate true marketing impact.

  • Attribution shows where conversions came from.

Incrementality testing answers the real question: Would those conversions have happened anyway if the campaign didn’t exist?

That’s the core difference:

  • Attribution = correlation (what happened)

  • Incrementality = causation (what caused it)

Harry explains it using a simple analogy:

Attribution is like a police lineup: it shows who was present in the buyer journey. Incrementality tells you who pulled the trigger.

  1. What Incrementality Testing Actually Is

Incrementality testing compares two groups:

  • Control group (not exposed to the campaign/channel)

  • Exposed group (does receive the campaign/channel)

You then measure the difference in outcomes between the two groups. That difference is the incremental effect.

Example:

  • Control group generates 100 SQLs

  • Exposed group generates 120 SQLs

Incremental lift = 20%

In B2B, the “ideal” metric is closed-won revenue. But for long sales cycles, you may need to use shorter-cycle outcomes like:

  • Demo booked

  • SQLs

  • Opportunities created
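The lift arithmetic in the SQL example above can be sketched in a few lines. This is a minimal illustration of the calculation, not a full testing framework; the function name and numbers are just the example from the text.

```python
def incremental_lift(control_outcomes: float, exposed_outcomes: float) -> float:
    """Relative lift of the exposed group over the control group."""
    return (exposed_outcomes - control_outcomes) / control_outcomes

# From the example: control generates 100 SQLs, exposed generates 120
lift = incremental_lift(control_outcomes=100, exposed_outcomes=120)
print(f"Incremental lift: {lift:.0%}")  # → 20%
```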

  2. When NOT to Run Incrementality Tests

Daniel outlines when incrementality testing is usually a waste of time or produces misleading conclusions.

Avoid incrementality testing when:

  • Your spend is too low (e.g., under ~£20k/month): sample sizes are too small and noise overwhelms signal

  • You’re only running 1–2 channels: attribution plus common sense is typically enough

  • Your sales cycle is very short (under ~14 days): attribution becomes directionally accurate

  • You can’t afford to pause activity or hold out audience segments: testing requires tolerance for variance

  • You can’t run clean splits (geo splits, audience holdouts, campaign switch-offs) without contaminating results

The key point:

Incrementality only becomes useful when the waters are muddy and attribution can’t guide big decisions reliably.

  3. How Incrementality Tests Go Wrong

Harry highlights why many incrementality tests fail:

  • Insufficient sample size = results aren’t statistically significant

  • Poorly matched groups = the test is contaminated from the start

  • Bad geo selection = different levels of demand, maturity, brand presence, seasonality

  • Contamination from sales/outbound = control group gets touched during the test

  • Changing too many variables mid-test = promos, messaging, positioning, other launches

If the groups aren’t balanced, the result can’t be trusted.
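The "insufficient sample size" failure mode can be made concrete with a standard two-proportion z-test, one common way to check whether an observed lift is distinguishable from noise. This is a hedged sketch using only the standard library; the conversion counts are illustrative, not from the episode.

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-statistic: is the exposed group's conversion
    rate significantly different from the control group's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Illustrative: a 20% relative lift (2.0% -> 2.4%) on 5,000 accounts per group
z = two_proportion_z(conv_a=100, n_a=5000, conv_b=120, n_b=5000)
print(round(z, 2))  # ≈ 1.36, below the ~1.96 bar for 95% confidence
```

Even a healthy-looking 20% lift fails to reach significance here, which is exactly why low-spend, low-volume tests produce results that can't be trusted.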

  4. The 3 Core Conditions That Signal You Should Run Incrementality

Daniel shares three clear triggers where incrementality testing becomes necessary.

You should run incrementality tests when:

  • CAC / CPA is rising and attribution can’t explain why

  • You’re spending enough that misallocation is expensive (moving £10k–£50k/month incorrectly can break pipeline)

  • You have overlapping channels influencing each other (Meta, Google, organic, dark social, events)

At that point the key question becomes:

Is this channel creating pipeline, or just capturing demand created elsewhere?

  5. Common Scenarios Where Incrementality Adds Clarity

Once you’re scaling (often beyond ~£50k–£80k/month), these issues show up repeatedly:

  • Retargeting looks “too good to be true” and you risk over-investing

  • Branded search is over-credited and may be cannibalising organic demand

  • Organic/brand efforts are clearly influencing paid CAC, but attribution can’t prove it

  • Leadership starts questioning channel impact and wants causal answers

  • Blended CAC doesn’t match attributed CAC, creating a measurement gap

The takeaway:

Incrementality becomes the truth layer when attribution hits its ceiling.

  6. Timing Matters: Too Early vs Too Late

If you run incrementality too early:

  • You get statistically meaningless results

  • You waste time and budget

If you run it too late:

  • You risk misallocating hundreds of thousands

  • CAC drifts 20–40% higher with no clear explanation

  • Pipeline targets get missed and payback windows extend

The best time to run incrementality:

When you’re about to make a major budget decision but you don’t have enough confidence in the data to justify it.

  7. Practical Prerequisites for Reliable Incrementality Testing

Harry breaks down what needs to be in place before running a test.

You need:

  • Clean splits (audience holdouts, geo splits, campaign switch-offs)

  • A clear baseline period for comparison (previous conversion rates)

  • Clean CRM tagging and tracking so test vs control is identifiable

  • Sales + rev ops alignment so the control group isn’t touched

  • Stable conditions during the test (avoid peak holiday periods, avoid launching promos mid-test)

  • Buy-in from leadership and budget owners, otherwise the test won’t lead to action

  • A predefined decision rule (e.g., “If we see 20% lift in demos, we scale”)

  • A realistic test duration aligned to sales cycle length (if your sales cycle is 6 months, a 3-week test is useless)

If you can’t wait for closed-won outcomes, measure the earliest meaningful proxy first (like demo booked), then validate down-funnel later.

  8. Tools, Complexity, and the Role of AI

There’s a perception that incrementality requires data scientists or external providers. That’s true at very large scale (spending millions per month).

But incrementality can also be run internally with:

  • Online calculators for sample size and power

  • Statistical significance tools

  • MDE / MDA calculation (minimum detectable effect)

  • AI support for planning, analysis, and interpretation

The caveat:

AI helps with execution, but it can’t compensate for poor experimental design.
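The MDE / sample-size calculation mentioned above can be approximated without a data science team, using the standard formula for two proportions at ~95% confidence and 80% power. This is a sketch with illustrative inputs; the baseline rate and lift are assumptions, not figures from the episode.

```python
import math

def sample_size_per_group(base_rate: float, mde_rel: float,
                          z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate sample size needed per group to detect a relative
    lift (MDE) over a baseline conversion rate, at ~95% confidence
    (z_alpha) and ~80% power (z_beta)."""
    p1 = base_rate
    p2 = base_rate * (1 + mde_rel)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Illustrative: 2% baseline demo-booked rate, detecting a 20% relative lift
n = sample_size_per_group(base_rate=0.02, mde_rel=0.20)
print(n)  # roughly 21,000 accounts per group
```

Plugging in your own baseline rate makes the episode's point tangible: small lifts on low-conversion B2B metrics demand surprisingly large samples, which is why spend thresholds matter.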


  9. The Cultural Requirement: Be Willing to Accept the Outcome

A critical point Harry makes:

Some teams avoid incrementality because they’re scared of a null or negative result. If you’ve spent big for 12 months, the test might reveal uncomfortable truths. But that’s the whole purpose.

You need a culture where results are accepted:

Positive, null, or negative: even a “failed” test creates clarity and prevents future waste.

Final Takeaways

  • Incrementality testing is one of the most powerful methods for validating cause and effect in marketing.

  • It outperforms attribution for big decisions, but only if the experiment is designed properly.

To do it well:

  • Run it only when spend and complexity justify it

  • Control variables and avoid contamination

  • Align on what success looks like before you start

  • Have leadership buy-in and a decision rule

  • Use proxies for long sales cycles, then validate downstream later

When executed properly, incrementality provides the clearest answer to the question:

Is this channel truly driving growth, or just taking credit for demand created elsewhere?



Quarterly Intake Open

Your current agency is buying clicks. We're building an asset.

Look, we get it. Most agencies are happy to hand you a report full of "likes" and "impressions" while your bottom line stays flat. We’re not that agency.

We don’t just buy ads, we architect demand engines designed for one thing: making your business significantly larger than it was yesterday. If you're looking for a partner that treats your ad spend like their own capital, let’s talk.

We partner with a limited number of brands per quarter to ensure focus, impact, and results.

Engagements Start From £5,000 / $7,000 p/m

© Mayfair Media Group Ltd. All Rights Reserved. | Company Number: 16663912 | Privacy & Cookie Policy

Mayfair Media Group
