Why Marketing Attribution Models Are Increasingly Broken


Marketing attribution—determining which marketing activities contributed to conversions—has always been imperfect. But it’s become substantially more broken over the past few years as privacy changes eliminate data that attribution models relied on.

I work with businesses trying to understand which marketing channels actually drive results, and the honest answer increasingly is: we can’t tell with any confidence.

The attribution models we used five years ago made assumptions about data availability that are no longer valid. Yet many businesses continue using these models and making budget decisions based on increasingly unreliable data.

Marketing attribution traditionally relied heavily on third-party cookies that tracked users across websites. These cookies let you see that someone viewed an ad on Site A, clicked it, visited your website, left, saw another ad on Site B, came back, and eventually converted.

Third-party cookies are being phased out: Safari and Firefox already block them by default, and Chrome has announced similar restrictions, though its timeline has shifted repeatedly. As a result, this cross-site tracking is breaking down. You can see behavior on your own website, but connecting it to off-site ad impressions and other touchpoints has become much harder.

Attribution platforms that built their entire business model on third-party cookie tracking are scrambling to adapt. Some are pivoting to first-party data and probabilistic modeling. Others are simply less accurate than they used to be while still charging the same prices.

Cross-Device Journey Blindness

Customers regularly switch between devices. They might see an ad on mobile, research on a tablet, and purchase on desktop. Or any other combination of phone, tablet, computer, smart TV, and in-store touchpoints.

Connecting these cross-device interactions into a coherent journey requires either login-based tracking (not available for prospects who aren’t customers yet) or probabilistic modeling that guesses which devices belong to the same person.

The probabilistic models used to work reasonably well when they had access to IP addresses, user agents, and various fingerprinting signals. As privacy protections strengthen, these signals become unavailable or less reliable, and the probabilistic models become less accurate.

The result is that attribution sees disconnected fragments of customer journeys rather than complete paths. Attribution models try to assign credit based on incomplete information and often get it wrong.

The View-Through Attribution Question

View-through attribution credits ad impressions that someone saw but never clicked when that person later converts. The logic is that seeing the ad influenced the decision even without a direct click.

This has always been somewhat questionable—maybe they would have converted anyway—but it’s become even more dubious as measurement degrades. If you can’t reliably track that the person who saw the ad is the same person who later converted, view-through attribution becomes largely fictional.

I’ve seen attribution platforms report tens of thousands of “view-through conversions” that generate impressive ROI numbers in reports but have questionable connection to reality. The businesses make budget decisions based on these reports, potentially overfunding channels that aren’t actually driving results.

Last-Click Attribution’s Persistence

Despite its obvious flaws, last-click attribution—crediting the final touchpoint before conversion—remains widely used because it’s simple and doesn’t require complex tracking.

But last-click systematically undervalues top-of-funnel and mid-funnel touchpoints. Someone might discover your brand through content marketing, research through organic search, get retargeted by ads, and finally convert through a branded search ad. Last-click gives all credit to the branded search ad and none to the earlier touchpoints.

This leads to budget shifts toward bottom-funnel tactics that capture existing demand rather than creating new demand. Over time, this starves the top of the funnel, and conversions decline as the pool of people who discovered your brand through those undercredited channels dries up.

Multi-Touch Attribution’s Data Hunger

Multi-touch attribution models try to assign partial credit to multiple touchpoints in the customer journey. This is conceptually superior to last-click, but it requires even more complete tracking data to work properly.
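To make the difference concrete, here is a minimal sketch (channel names and weights are hypothetical, not from any particular platform) of how a last-click model and a position-based multi-touch model split credit over the same four-touch journey described earlier:

```python
# Hypothetical journey: content -> organic search -> retargeting -> branded search
journey = ["content", "organic_search", "retargeting", "branded_search"]

def last_click(touchpoints):
    """All credit goes to the final touchpoint before conversion."""
    return {t: (1.0 if i == len(touchpoints) - 1 else 0.0)
            for i, t in enumerate(touchpoints)}

def position_based(touchpoints, first=0.4, last=0.4):
    """U-shaped model: 40% each to first and last touches,
    remaining 20% split evenly across the middle touches."""
    n = len(touchpoints)
    middle = (1.0 - first - last) / max(n - 2, 1)
    credit = {}
    for i, t in enumerate(touchpoints):
        if i == 0:
            credit[t] = first
        elif i == n - 1:
            credit[t] = last
        else:
            credit[t] = middle
    return credit

print(last_click(journey))      # branded_search gets all the credit
print(position_based(journey))  # content and branded_search get 0.4 each
```

Notice that both models output clean, authoritative-looking numbers; neither knows whether the journey it was fed is complete, which is exactly the problem as tracking data fragments.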

As tracking capabilities degrade, multi-touch attribution models are working with increasingly incomplete journey data. They’re still producing credit allocation numbers, but those numbers are based on fragmentary information and substantial assumptions.

The sophisticated algorithmic attribution models that use machine learning to assign credit are particularly opaque. They produce outputs that seem authoritative but are difficult to verify and may be quite inaccurate when working with degraded input data.

The Incrementality Testing Alternative

Rather than trying to track every touchpoint, incrementality testing asks a simpler question: if we increase or decrease spend in a channel, what happens to total conversions?

This approach uses randomized experiments—geo splits, audience holdouts, or time-based tests—to measure causal impact rather than correlation. It’s more work than running attribution reports, but it produces more reliable answers about which channels actually drive incremental business.
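The core arithmetic of a geo split is simple; all numbers below are hypothetical, and a real test would need enough randomized geos, matched baselines, and a significance check before acting on the result:

```python
# Hypothetical geo-split test: "test" regions get increased spend in a
# channel, "control" regions don't. Comparing conversions estimates the
# channel's incremental impact rather than its attributed credit.

test_geos    = {"geo_a": 1200, "geo_b": 1150}  # conversions with extra spend
control_geos = {"geo_c": 1100, "geo_d": 1080}  # conversions without

test_avg = sum(test_geos.values()) / len(test_geos)
control_avg = sum(control_geos.values()) / len(control_geos)

incremental = test_avg - control_avg
lift_pct = incremental / control_avg * 100

print(f"Incremental conversions per geo: {incremental:.0f}")
print(f"Lift: {lift_pct:.1f}%")
```

The output is a causal estimate of what the extra spend bought, which is a different (and more decision-relevant) quantity than the credit an attribution model would have assigned.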

A company I consulted with in Sydney was allocating budget based on multi-touch attribution that credited retargeting very heavily. Incrementality testing revealed that retargeting was mostly capturing conversions that would have happened anyway, and reducing retargeting spend by 60% had minimal impact on total conversions.

Incrementality tests require larger sample sizes and longer time periods than attribution reports, and you can't get daily results. But the results you do get measure causation rather than correlation.

First-Party Data Emphasis

As third-party tracking degrades, first-party data—information you collect directly from customers—becomes more valuable. This includes email addresses, account information, purchase history, and on-site behavior.

But first-party data only works for logged-in users or people who’ve provided contact information. For the majority of website visitors who are anonymous prospects, you have very limited tracking capability.

Some businesses are adapting by building content and value propositions that encourage email signup earlier in the customer journey. Others are accepting that they simply won’t have attribution data for large portions of the customer journey.

Platform-Reported Metrics’ Unreliability

Facebook, Google, and other ad platforms report their own attribution data showing how many conversions they drove. These numbers are increasingly diverging from reality because the platforms still use tracking methods that users are blocking.

The platforms have every incentive to overreport their effectiveness. They control both the measurement methodology and the reporting. Businesses that make budget decisions based on platform-reported numbers are relying on potentially biased data sources.

Third-party attribution platforms were supposed to provide neutral measurement, but with their tracking capabilities degrading, they’re often less reliable than they used to be while still being positioned as authoritative.

The Holdout Group Reality Check

One approach to cut through attribution uncertainty is simply running periodic holdout tests—turn off specific channels or tactics completely and measure the impact on overall business results.

If pausing a channel that attribution reports as driving 20% of conversions has minimal impact on total revenue, the attribution was likely overcrediting that channel. If pausing it crushes revenue, the attribution was probably directionally correct even if the exact numbers are questionable.

Holdout tests are blunt instruments and don’t provide the granular channel-by-channel insights that attribution promises. But they provide ground truth about what matters, which has more value than precise-seeming but inaccurate attribution percentages.
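The reality check described above reduces to very simple arithmetic. A hypothetical example (all numbers invented) comparing a channel's attributed share against the revenue drop observed when it was paused:

```python
# Hypothetical holdout check: attribution credits a channel with 20% of
# conversions; pausing it reveals how much was actually incremental.

attributed_share = 0.20          # share of conversions the report credits
baseline_conversions = 10_000    # weekly conversions before the pause
paused_conversions = 9_600       # weekly conversions during the pause

observed_drop = (baseline_conversions - paused_conversions) / baseline_conversions

print(f"Attributed share: {attributed_share:.0%}")
print(f"Observed drop when paused: {observed_drop:.0%}")

# A drop far below the attributed share suggests overcrediting.
if observed_drop < attributed_share / 2:
    print("Attribution likely overcredits this channel")
```

In this invented scenario, a 4% drop against a 20% attributed share suggests most of the credited conversions would have happened anyway.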

What to Do Instead

Accept that precise attribution is increasingly impossible. Make peace with more uncertainty about exactly which touchpoints drive conversions. Focus measurement on questions you can actually answer reliably.

Use holdout tests and incrementality experiments to measure causal impact of major channels. Track obvious signals like branded search volume, direct traffic, and customer survey responses about how they heard about you. Pay attention to overall business metrics rather than obsessing over attribution splits.

Diversify marketing rather than over-optimizing to whatever attribution models currently favor. If attribution can’t be trusted, putting all budget into the channel it credits most heavily is risky.

Question platform-reported attribution metrics. Cross-check with independent tracking where possible. Understand the incentive structures that might bias platform reporting.

The era of comprehensive, accurate marketing attribution is ending. The sooner businesses accept this and adapt their measurement approaches accordingly, the better decisions they’ll make. Clinging to attribution models that worked when tracking was comprehensive but don’t work anymore leads to misallocated budgets and declining returns.