Every attribution model is a story. Last-click tells one story. First-click tells another. Data-driven tells a third — one that sounds more rigorous because it's produced by an algorithm, but is still built on assumptions your vendor chose for you.
The problem isn't the models. The problem is that we treat them as truth rather than as a lens. When a channel looks good in the model, it gets more budget. When it looks bad, it gets cut. We optimise to the model rather than to the outcome — and over time, the model and reality drift apart.
What I've found that actually works
The most useful thing I've done with attribution is run incrementality tests alongside whatever model is in place. Not instead of it; alongside it. The model tells you how observed conversions were distributed across touchpoints. The incrementality test tells you how many of those conversions would have happened without the spend. The gap between those two numbers is where the truth lives.
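To make that gap concrete, here's a minimal Python sketch of the comparison. Everything in it is hypothetical: the function, the channel's conversion counts, and the model-attributed figure are illustrative, not numbers from a real account.

```python
def incremental_conversions(treated_users, treated_conversions,
                            holdout_users, holdout_conversions):
    """Estimate conversions caused by the spend, not merely touched by it.

    Scales the holdout group's conversion rate up to the treated group's
    size to estimate how many treated users would have converted anyway,
    then subtracts that baseline from observed treated conversions.
    """
    baseline_rate = holdout_conversions / holdout_users
    expected_without_spend = baseline_rate * treated_users
    return treated_conversions - expected_without_spend

# Hypothetical quarter for one channel:
attributed = 1_200  # conversions the attribution model credits to the channel
incremental = incremental_conversions(
    treated_users=100_000, treated_conversions=2_400,
    holdout_users=10_000, holdout_conversions=180,
)
print(f"Model says {attributed}, holdout says {incremental:.0f} incremental")
print(f"Gap: {attributed - incremental:.0f} conversions of assigned credit")
```

In this invented example the model assigns twice the credit the holdout supports. That gap is the thing worth investigating, not the attributed number on its own.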
Holdout groups are uncomfortable because they mean letting some customers convert without being touched by your best campaigns. The business cost of that feels real. The cost of optimising to a broken model for years feels invisible — until you cut a channel that was holding everything else together.
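One practical way to build a holdout without maintaining user lists is deterministic bucketing. This is a sketch of that pattern under assumed names, not a prescription: the experiment label, the 10% split, and the user ID are placeholders.

```python
import hashlib

HOLDOUT_PCT = 10  # hypothetical: hold out 10% of users from the channel

def in_holdout(user_id: str, experiment: str = "channel_x_q3") -> bool:
    """Deterministically assign a user to the holdout group.

    Hashing user_id together with the experiment name gives a stable,
    effectively random bucket in [0, 100): the same user always lands in
    the same group, and each experiment gets an independent split.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < HOLDOUT_PCT

# Suppress the channel's campaigns for holdout users at audience-build time:
if in_holdout("user_84721"):
    pass  # skip this user when assembling the campaign audience
```

Salting the hash with the experiment name means the same unlucky users aren't sitting in every holdout you ever run, which caps the very business cost that makes these tests feel uncomfortable.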
Attribution tells you where credit was assigned. Incrementality tells you where value was created. They are not the same thing.
If you're managing seven-figure budgets and still relying solely on platform-reported attribution, you're flying with instruments you haven't calibrated. Start small — one channel, one holdout, one quarter. The result will change how you look at your whole account.
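Before acting on that first quarter, check that the readout isn't noise. Here is a minimal sketch, assuming a standard two-proportion z-test and reusing the hypothetical counts from the earlier example; it needs only the standard library.

```python
from statistics import NormalDist

def lift_significance(treated_users, treated_conversions,
                      holdout_users, holdout_conversions):
    """Two-sided two-proportion z-test on conversion rates.

    Returns the absolute lift in conversion rate and its p-value. If one
    quarter of one channel can't separate the lift from zero, that is
    itself a useful result: the test needs to run longer or wider.
    """
    p1 = treated_conversions / treated_users
    p0 = holdout_conversions / holdout_users
    pooled = ((treated_conversions + holdout_conversions)
              / (treated_users + holdout_users))
    se = (pooled * (1 - pooled)
          * (1 / treated_users + 1 / holdout_users)) ** 0.5
    z = (p1 - p0) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p1 - p0, p_value

lift, p = lift_significance(100_000, 2_400, 10_000, 180)
print(f"Lift: {lift:.2%} (p = {p:.4f})")
```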