Why most attribution models lie to you

Every attribution model is a story. Last-click tells one story. First-click tells another. Data-driven tells a third — one that sounds more rigorous because it's produced by an algorithm, but is still built on assumptions your vendor chose for you.

The problem isn't the models. The problem is that we treat them as truth rather than as a lens. When a channel looks good in the model, it gets more budget. When it looks bad, it gets cut. We optimise to the model rather than to the outcome — and over time, the model and reality drift apart.

What I've found that actually works

The most useful thing I've done with attribution is run incrementality tests alongside whatever model is in place. Not instead of it — alongside it. The model tells you what the data shows. The incrementality test tells you what would have happened without the spend. The gap between those two numbers is where the truth lives.

Holdout groups are uncomfortable because they mean letting some customers convert without being touched by your best campaigns. The business cost of that feels real. The cost of optimising to a broken model for years feels invisible — until you cut a channel that was holding everything else together.
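The gap between attributed and incremental conversions can be made concrete with a little arithmetic. Here's a minimal sketch of a holdout comparison; every number in it is illustrative, not real campaign data.

```python
# A minimal sketch of comparing model-attributed conversions with an
# incrementality test. All numbers are illustrative, not real campaign data.

def incremental_conversions(exposed_users, exposed_conversions,
                            holdout_users, holdout_conversions):
    """Estimate conversions actually caused by the campaign via a holdout."""
    baseline_rate = holdout_conversions / holdout_users
    # Conversions the exposed group would likely have produced anyway
    expected_baseline = baseline_rate * exposed_users
    return exposed_conversions - expected_baseline

# Illustrative: the model credits the channel with 500 conversions,
# but the holdout suggests many would have happened regardless.
incremental = incremental_conversions(
    exposed_users=100_000, exposed_conversions=500,
    holdout_users=10_000, holdout_conversions=30,
)
model_attributed = 500
print(round(incremental))                      # conversions the spend caused
print(model_attributed - round(incremental))   # the gap: credit vs value
```

The point of the sketch is the last line: the difference between what the model credited and what the holdout implies is exactly the "gap where the truth lives."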

Attribution tells you where credit was assigned. Incrementality tells you where value was created. They are not the same thing.

If you're managing seven-figure budgets and still relying solely on platform-reported attribution, you're flying with instruments you haven't calibrated. Start small — one channel, one holdout, one quarter. The result will change how you look at your whole account.

The case for boring channels

Everyone wants to be on the channel that's growing. TikTok a few years ago. Whatever's next now. There's a logic to it — early movers get cheap reach before the auction fills up. But most brands aren't early movers. They're followers who arrive after the CPMs have already risen, when the format is mature and the creative advantage has evaporated.

Boring channels — paid search, email, remarketing — have compounding advantages that shiny channels rarely do. The data is richer. The intent signal is clearer. The infrastructure is more sophisticated. And because everyone is chasing the new thing, the incumbents are often underpriced relative to their actual performance.

This isn't an argument against testing

Test new channels. Allocate 10–15% of budget to experimentation. But don't mistake novelty for opportunity. The question isn't whether a channel is exciting — it's whether it can move revenue at a cost your business can sustain.

I've seen brands abandon profitable search campaigns to chase social reach, then spend two years trying to rebuild the commercial infrastructure they dismantled. The boring channel didn't fail them. The planning horizon did.

What regulated industries taught me about creative risk

Spend enough time running campaigns for financial services, pharma, or government clients and you learn something counterintuitive: constraints make you more creative, not less.

When you can't make bold claims, you have to be precise. When you can't use certain words, you find better ones. When legal review will kill anything vague, you write tighter briefs. The discipline that compliance demands bleeds into everything — the creative, the targeting, the measurement.

The specific skill is brevity under pressure

You have less room to be mediocre. A generic FMCG campaign can get away with fuzzy messaging because the category is low-stakes. A financial services ad that's unclear about what it's actually offering will get pulled — and rightly so. The clarity you're forced into tends to perform better even before you account for compliance.

I now apply that same standard to every brief I write, regulated or not. What is the one thing this needs to communicate? What would make a thoughtful person reject it? Answer those two questions honestly before you start, and most of the creative problems solve themselves.

How to brief a programmatic campaign that finance will actually fund

The briefing problem is almost always a translation problem. Marketing thinks in reach, frequency, and brand safety. Finance thinks in cost per acquisition, payback period, and incremental revenue. The campaign that gets funded is the one that speaks finance's language without losing the strategic intent of the marketing one.

Here's the structure I use. Start with the commercial objective — not "increase brand awareness" but "acquire 400 new customers at a CPA under $X with a 6-month LTV of $Y." Then work backwards to the media strategy. Why programmatic? Because the audience data allows us to suppress existing customers, concentrate budget against highest-intent segments, and vary creative by funnel stage — all of which compress CAC.
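Working backwards from the commercial objective is mechanical once the targets are stated. A quick sketch, using hypothetical placeholder values for the brief's $X and $Y and an assumed conversion rate and CPC:

```python
# Sketch of working backwards from a commercial objective to media numbers.
# The targets below are hypothetical placeholders for the brief's $X and $Y.
target_customers = 400
target_cpa = 150.0        # hypothetical $X
six_month_ltv = 600.0     # hypothetical $Y

budget = target_customers * target_cpa
payback_ratio = six_month_ltv / target_cpa   # 6-month LTV:CAC

# Work back to required funnel volume under assumed media economics
cvr_visit_to_customer = 0.02   # assumed visit-to-customer conversion rate
cpc = 1.50                     # assumed cost per click
required_visits = target_customers / cvr_visit_to_customer
implied_cpa_from_media = cpc / cvr_visit_to_customer

print(budget)                  # 60000.0 — total budget implied by the target
print(payback_ratio)           # 4.0
print(required_visits)         # 20000.0
print(implied_cpa_from_media)  # 75.0 — headroom against the 150.0 target
```

A brief that shows this chain — target, budget, payback, required volume — answers finance's first three questions before they're asked.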

Numbers finance wants to see

Historical CPA by channel. Forecasted volume at target CPA. Sensitivity analysis — what happens to cost if volume target rises 30%? What's the floor on CPA given audience size? These aren't complex to produce and they signal that you've thought about risk the same way finance has.
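The sensitivity question — what happens to CPA if the volume target rises 30%? — can be sketched with a simple diminishing-returns assumption. The elasticity here is a placeholder, not a measured figure; in practice you'd fit it from historical spend-versus-volume data.

```python
# A toy sensitivity sketch: how blended CPA might rise with the volume
# target, under an assumed diminishing-returns elasticity. The elasticity
# value is a placeholder assumption, not a measured figure.

def projected_cpa(base_cpa, base_volume, new_volume, elasticity=0.3):
    """Assume CPA scales with (volume ratio) ** elasticity."""
    return base_cpa * (new_volume / base_volume) ** elasticity

base = projected_cpa(150.0, 400, 400)        # at target volume
stretched = projected_cpa(150.0, 400, 520)   # at +30% volume
print(round(base, 2), round(stretched, 2))
```

Even a rough curve like this signals the thing finance cares about: you've thought about what the target costs if it moves.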

A brief that opens with audience strategy will get questioned. A brief that opens with revenue targets and works backwards to audience strategy will get funded. Same campaign. Different entry point.

Full-funnel is not a channel mix. It's a way of thinking.

The phrase "full-funnel" has been so thoroughly absorbed into marketing vocabulary that it's nearly meaningless now. Agencies use it to sell more channels. Platforms use it to justify broader spend. It's become a euphemism for "spend more everywhere" rather than what it actually means.

Full-funnel thinking is about understanding the causal chain between a customer's first exposure to a brand and the moment they buy. What breaks that chain? Where do people drop off, and why? What would shorten the journey without sacrificing the quality of the customer you're acquiring?

The test

If you can't describe what changes at the top of your funnel and trace how that flows through to revenue, you don't have a full-funnel strategy. You have a collection of campaigns running simultaneously and calling it one.

The discipline I try to apply is: for every piece of activity, I want to know the counterfactual. If we didn't run this, what would have been different downstream? If the answer is "nothing much," the activity is decorative. If the answer requires careful thought, it's probably doing real work.

The metric that never lies: revenue per visitor

Every channel has its vanity metric. Search has Quality Score. Display has viewability. Social has engagement rate. These metrics matter at the operational level — they help you diagnose what's working inside a channel. But they're terrible for comparing across channels, because they're not denominated in the same currency as business outcomes.

Revenue per visitor strips that away. It doesn't care where the visitor came from, what the CPM was, or whether the campaign won an award. It asks: did this person spend money, and how much? Applied consistently across channels, it produces a ranking that's hard to argue with.

The objections are usually valid but solvable

"Paid search gets unfair credit because intent is higher." True. Segment by new vs. returning visitors. Run the analysis on new visitors only. If search still wins, it's probably earning it.

"We can't track revenue from display." Fix the tracking before you run the campaign. If you can't measure the outcome, you can't justify the spend — and eventually, you won't be able to defend it when someone asks. Revenue per visitor is blunt. That's the point.
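The new-versus-returning segmentation described above is a small amount of code once visits carry a revenue figure. A sketch over made-up illustrative rows:

```python
# Sketch: revenue per visitor by channel, segmented new vs returning.
# The rows below are made-up illustrative data.
from collections import defaultdict

visits = [
    # (channel, is_new_visitor, revenue)
    ("search", True, 40.0), ("search", True, 0.0), ("search", False, 120.0),
    ("display", True, 0.0), ("display", True, 25.0), ("display", False, 0.0),
    ("email", False, 60.0), ("email", False, 0.0),
]

totals = defaultdict(lambda: [0.0, 0])  # (channel, segment) -> [revenue, visits]
for channel, is_new, revenue in visits:
    key = (channel, "new" if is_new else "returning")
    totals[key][0] += revenue
    totals[key][1] += 1

rpv = {key: rev / n for key, (rev, n) in totals.items()}
print(rpv[("search", "new")])   # 20.0 — new-visitor RPV, stripping intent bias
print(rpv[("display", "new")])  # 12.5
```

Running the comparison on new visitors only is the cheapest defensible answer to the "search gets unfair credit" objection.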

7 things that changed since I left agencies

My perspective has changed since I moved from agency life to in-house. Here are the main shifts.

1. ROAS is calculated differently now

I track YoY revenue growth against ad spend, sometimes adjusting for estimated organic growth. This gives a clearer picture of actual impact. The quality of in-house data helps too: better datasets mean faster, more accurate budget management, planning, and forecasting, based on real, timely information rather than scattered or outdated numbers.
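The adjusted view is simple to compute. A sketch with illustrative figures; the organic-growth estimate is an assumption you'd set from pre-spend baselines, not a known quantity.

```python
# Sketch of the adjusted view: YoY revenue growth set against ad spend,
# with an assumed organic-growth estimate removed. Figures are illustrative.
revenue_this_year = 1_200_000.0
revenue_last_year = 1_000_000.0
ad_spend = 100_000.0
estimated_organic_growth = 0.05   # assumption: 5% growth without ads

yoy_growth = revenue_this_year - revenue_last_year
organic_component = revenue_last_year * estimated_organic_growth
paid_attributable_growth = yoy_growth - organic_component

adjusted_roas = paid_attributable_growth / ad_spend
print(adjusted_roas)   # 1.5 — incremental growth per ad dollar, net of organic
```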

2. I ask questions in reverse

When I see a product relevant to me but don't recall any ads, I wonder: why didn't I see it? I want our brands to be known not only by people who need us now but also by those who might need us someday. I want us to become common knowledge, something people mention casually even if they're not customers.

3. I learn directly from users

I read user comments and reviews for fun. People often drop questions under ads instead of clicking through; that's a chance to show kindness and helpfulness. It also reminds me how varied and unpredictable real user behaviour can be, in ways that UX research and CRO testing, which generalise from small segments, can miss. Business doesn't always need to be innovative. Sometimes it's about going back to basics and being consistently helpful.

4. Writing skill matters more than many think

GenAI is already a key part of life and business, and we prompt it every day. The clearer our thinking, the better the results: people who break ideas down into detailed, logical steps get more from AI. After years away from it, I'm documenting my thoughts and writing again.

5. AI-generated creative still looks like AI, and I like that

We can often tell when something is AI-made, and that's okay. Like an animated character, it doesn't have to be perfectly human to be expressive or useful. People prompt in different styles, and AI responds in different voices, which you can see by comparing how OpenAI's models and Perplexity reason through the same question. GPT-4 and newer models can browse the web but still lean mainly on training data, while Perplexity was built around retrieval-augmented generation (RAG) from the start.

6. I'm part of a team built on trust

My manager brought together self-aware, candid people, and I feel I can be fully transparent with them, which is something I've always wanted. I've had to unlearn some agency habits that don't transfer well and were slowing my growth. I used to value brutal honesty; over time I softened it, and now I bring it back constructively and no longer shy away from difficult conversations.

7. I'm learning to balance productivity with reflection

Stricter working hours have pushed me to prioritise better and use resources wisely. I still feel the urge to take online courses on weekends to stay ahead, like when I took Google Cloud courses to feed my curiosity or spent months starting an online degree in Construction. Now, I'm accepting a different pace and seeing that slowing down and making space to reflect leads to better decisions and stronger results in the long run.