Marketing Attribution Models Are Broken (And What to Measure Instead)
Your analytics dashboard says Facebook ads drove 40% of conversions last month. Your marketing director is ready to triple the Facebook budget based on this data.
But here’s what actually happened: Someone saw your Facebook ad, ignored it, then three weeks later remembered your brand name, googled it, and bought. Google Search got credit as “last click” even though Facebook created the awareness.
Or maybe the opposite: Someone clicked your Facebook ad, didn’t convert, then came back directly a week later and purchased. Facebook got credit even though the purchase decision happened independently.
Attribution models try to solve this problem by assigning credit to marketing touchpoints. But every standard model gets it wrong because they’re all based on the same flawed assumption — that you can track and measure all relevant touchpoints.
Why Standard Models Don’t Work
First-click attribution: Credits the first touchpoint that brought someone to your site. Ignores everything after that, including the channels that actually convinced them to buy.
Last-click attribution: Credits the final touchpoint before conversion. Ignores all the awareness and consideration that happened earlier.
Linear attribution: Divides credit equally across all touchpoints. Assumes every touchpoint contributes equally, which is obviously not true.
Time-decay attribution: Gives more credit to recent touchpoints. Better than linear, but still arbitrary in how it weights timing.
Position-based attribution: Gives most credit to first and last touchpoint, splits remaining credit among middle touches. Slightly less arbitrary, still not based on actual influence.
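The mechanical differences between these models are easy to see in a small sketch. Everything here is illustrative: the touchpoint path, the channel names, the 40/40/20 position-based split, and the decay factor are all hypothetical, not anyone's production methodology.

```python
# One hypothetical conversion path, ordered first touch -> last touch.
path = ["facebook_ad", "organic_search", "email", "google_search"]

def first_click(path):
    # All credit to the first touchpoint.
    return {path[0]: 1.0}

def last_click(path):
    # All credit to the final touchpoint.
    return {path[-1]: 1.0}

def linear(path):
    # Equal credit to every touchpoint.
    share = 1.0 / len(path)
    return {tp: share for tp in path}

def time_decay(path, decay=0.5):
    # Each step back from the conversion halves the weight (decay=0.5).
    weights = [decay ** (len(path) - 1 - i) for i in range(len(path))]
    total = sum(weights)
    return {tp: w / total for tp, w in zip(path, weights)}

def position_based(path, ends=0.4):
    # 40% to first, 40% to last, remaining 20% split across the middle.
    credit = {tp: 0.0 for tp in path}
    credit[path[0]] += ends
    credit[path[-1]] += ends
    middle = path[1:-1]
    for tp in middle:
        credit[tp] += (1 - 2 * ends) / len(middle)
    return credit

# Same journey, five different answers about "what drove the sale."
for model in (first_click, last_click, linear, time_decay, position_based):
    print(model.__name__, model(path))
```

Note that nothing in any of these functions looks at whether a touchpoint actually influenced the buyer; the credit split is determined entirely by position and timing.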
All of these models suffer from the same problem: They only measure what they can track.
The Tracking Blindness Problem
Your attribution model sees:
- Facebook ad clicks
- Google search visits
- Email newsletter clicks
- Display ad impressions (maybe)
- Direct traffic (with no context)
Your attribution model doesn’t see:
- Podcast ad mentions
- Billboards
- Word-of-mouth recommendations
- Offline conversations
- Social media exposure that didn’t result in clicks
- Competitor comparisons
- Review site research
- Industry publication articles
- Conference presentations
The touchpoints you can track are a fraction of the touchpoints that influence buying decisions. Attributing conversions solely to tracked touchpoints creates a systematically distorted picture of what’s actually working.
The Dark Social Problem
“Direct traffic” in your analytics isn’t people typing your URL from memory. It’s mostly people clicking links from private channels that don’t pass referrer information — messaging apps, email apps, PDF readers, native mobile apps.
These dark social channels drive huge amounts of traffic but show up as direct or unknown-source traffic in analytics. Attribution models can’t assign credit properly because they don’t know where the traffic came from.
When you optimize based on trackable attribution, you’re implicitly de-prioritizing channels that drive dark social traffic. This creates a systematic bias toward channels that happen to be easily trackable rather than channels that are actually effective.
The Multi-Device Reality
Someone sees your ad on their phone while commuting. Researches on their laptop at work. Reads reviews on their tablet at home. Purchases on their phone later.
Unless they’re logged into your site across all devices, this looks like four different people in your analytics. Attribution is impossible because you can’t connect the journey.
Cross-device tracking exists but requires pervasive tracking infrastructure that’s increasingly blocked by browsers and privacy regulations. Even sophisticated attribution tools only connect 40-60% of multi-device journeys.
What to Measure Instead
If attribution models are fundamentally broken, what should you measure?
Incremental lift testing: Run controlled experiments where you increase or decrease spend in specific channels and measure the business impact. Not attribution to specific conversions, but contribution to overall revenue.
This is harder to set up but provides actual evidence of channel effectiveness. If you increase Facebook spend by 50% and revenue doesn’t move, Facebook isn’t as effective as your attribution model suggests.
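The arithmetic behind a spend-increase test is simple; the hard part is the experimental design. A rough sketch with made-up figures (a real test needs a control group and a significance check, which this deliberately omits):

```python
# Toy readout of a spend-increase test. All figures are hypothetical.
baseline_spend, baseline_revenue = 40_000, 300_000   # pre-test month
test_spend, test_revenue = 60_000, 306_000           # month with +50% Facebook spend

extra_spend = test_spend - baseline_spend        # 20,000 of added spend
extra_revenue = test_revenue - baseline_revenue  # only 6,000 of added revenue
incremental_roas = extra_revenue / extra_spend   # return on the marginal dollar

# An attribution report might still credit Facebook with a large share of
# the 306K, but the marginal dollar of spend returned only 30 cents here.
print(f"incremental ROAS: {incremental_roas:.2f}")
```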
Brand search volume: Track branded search terms as a proxy for awareness generated by all channels. If your brand search volume increases significantly after a podcast campaign, that campaign is working even if you can’t directly attribute conversions.
Survey attribution: Ask customers how they heard about you. It’s self-reported and imperfect, but it captures dark social, offline channels, and word-of-mouth that tracking-based attribution misses.
Cohort analysis: Track behavior of users acquired in specific time periods and channels. Don’t attribute individual conversions, but measure the lifetime value and retention of cohorts. This shows which channels bring higher-quality customers over time.
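A minimal version of this needs nothing more than grouping customers by acquisition channel and averaging long-run revenue. The records and numbers below are invented to show the mechanics:

```python
from collections import defaultdict

# Hypothetical records: (customer_id, acquisition_channel, 12-month revenue).
customers = [
    ("c1", "facebook", 120), ("c2", "facebook", 80),
    ("c3", "podcast", 300), ("c4", "podcast", 260),
    ("c5", "search", 150),
]

totals, counts = defaultdict(float), defaultdict(int)
for _, channel, revenue in customers:
    totals[channel] += revenue
    counts[channel] += 1

# Average 12-month value per customer, by channel.
ltv_by_channel = {ch: totals[ch] / counts[ch] for ch in totals}
print(ltv_by_channel)
```

In this invented data, podcast-acquired customers are worth nearly three times Facebook-acquired ones, a difference that per-conversion attribution would never surface.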
Marketing mix modeling: Statistical analysis of the relationship between marketing spend across channels and business outcomes. Requires significant data history but can show channel effectiveness without needing conversion-level attribution.
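At its core, marketing mix modeling is a regression of business outcomes on channel spend. The sketch below simulates a year of weekly data with a known ground truth and recovers the per-dollar contributions; real MMM needs far more history plus adstock, saturation, and seasonality terms, none of which appear here.

```python
import numpy as np

# Simulate 52 weeks where, by construction, a search dollar returns
# 1.50 in revenue and a facebook dollar returns 0.50, plus noise.
rng = np.random.default_rng(0)
weeks = 52
facebook = rng.uniform(5_000, 15_000, weeks)
search = rng.uniform(5_000, 15_000, weeks)
revenue = 50_000 + 0.5 * facebook + 1.5 * search + rng.normal(0, 2_000, weeks)

# Ordinary least squares: revenue ~ intercept + facebook + search.
X = np.column_stack([np.ones(weeks), facebook, search])
coefs, *_ = np.linalg.lstsq(X, revenue, rcond=None)
print(f"baseline={coefs[0]:.0f}, facebook={coefs[1]:.2f}, search={coefs[2]:.2f}")
```

The regression recovers roughly the true per-dollar effects without ever seeing a single conversion path, which is exactly the appeal: it sidesteps touchpoint tracking entirely.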
The Incrementality Question
The fundamental question isn’t “which touchpoint gets credit for this conversion” but “would this conversion have happened without this marketing activity?”
Attribution models can’t answer this. Someone who clicked your Google search ad might have been going to buy anyway — they were already searching for your brand. Giving Google credit for that “conversion” overstates Google’s contribution.
The only way to measure true incrementality is through controlled experiments: A/B tests, geo holdouts, or time-based tests where you stop spending in a channel and measure the impact.
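A geo holdout readout reduces to one subtraction, which is part of its appeal. Hypothetical numbers again; a real test also needs matched-market selection and a significance test:

```python
# Toy geo holdout: ads keep running in control regions, are paused in
# holdout regions of similar size. All figures are hypothetical.
control = {"revenue": 500_000, "spend": 50_000}   # ads running
holdout = {"revenue": 480_000, "spend": 0}        # ads paused

incremental_revenue = control["revenue"] - holdout["revenue"]  # 20,000
incremental_roas = incremental_revenue / control["spend"]      # 0.40

# Attribution might credit this channel with most of the 500K in the
# control regions; the holdout says it actually moved about 20K.
print(f"incremental ROAS: {incremental_roas:.2f}")
```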
This is difficult, expensive, and requires discipline. But it produces actual evidence of what’s working rather than sophisticated-looking attribution reports that are fundamentally guessing.
Platform-Reported Attribution Is Self-Serving
When Facebook reports that it drove 100 conversions, it’s using its own attribution methodology, designed to maximize the numbers it reports.
Facebook sees more touchpoints than your analytics does, because it tracks logged-in users across devices and apps. It uses broad attribution windows (for example, 7-day view and 28-day click). It claims credit even when conversions likely would have happened anyway.
Google Ads does the same thing. So does every advertising platform. If you add up the conversions that each platform claims credit for, you’ll get 150-200% of your actual conversions because everyone’s claiming overlapping credit.
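The double counting is easy to demonstrate. With invented figures for a business that had 1,000 actual conversions:

```python
# Hypothetical platform-claimed conversions vs. reality.
actual_conversions = 1_000
claimed = {"facebook": 700, "google": 600, "tiktok": 300}

total_claimed = sum(claimed.values())               # 1,600 claimed
overlap_ratio = total_claimed / actual_conversions  # 1.6 -> 160%

print(f"platforms claim {overlap_ratio:.0%} of actual conversions")
```

The gap exists because each platform counts any conversion its own touchpoint appeared in, and most journeys touch more than one platform.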
Platform-reported metrics are useful for relative comparison within that platform (this campaign vs that campaign) but not for cross-channel budget allocation.
What Actually Drives Conversions
For most businesses, conversions come from:
Brand awareness and trust built over time through multiple exposures across channels you can’t fully track.
Word-of-mouth and social proof that happens off your website and outside your tracking capability.
Direct searches from people who already know about you and have decided to buy, even though some attribution model will give credit to whatever touchpoint happened to occur last.
Problem recognition timing — people buy when they need something, not when your retargeting ad happens to appear.
Attribution models exist to make marketing seem more controllable and measurable than it actually is. The reality is messier: Brand awareness accumulates from multiple sources, trust develops gradually, and conversions happen when timing aligns with need.
A More Honest Approach
Instead of pretending you can precisely attribute every conversion to specific touchpoints, accept the limitations and make decisions based on better questions:
Is overall revenue growing as marketing spend increases? Basic correlation, but it catches whether marketing is working in aggregate.
Which channels bring customers with better retention and LTV? Cohort analysis over 6-12 months shows this even without conversion attribution.
What happens when I stop spending in this channel? Holdout tests reveal whether a channel is driving incremental growth or just taking credit for organic activity.
What do customers say about how they found us? Post-purchase surveys and customer conversations provide qualitative insight that complements quantitative data.
Does this channel align with where our target audience actually spends time? Basic strategic thinking sometimes outperforms sophisticated analytics.
When to Ignore Attribution Entirely
For brand building and top-of-funnel activities, attribution is largely useless. Podcast sponsorships, content marketing, social media presence — these build awareness and trust that might influence conversions months later through channels you can’t connect.
Trying to attribute conversions to these activities misses the point. Evaluate them based on reach, engagement, brand lift, and incremental business impact, not conversion attribution.
For bottom-of-funnel activities targeting people with clear purchase intent, attribution is more useful but still imperfect. Someone clicking your branded search ad was probably going to find you anyway.
The Organizational Problem
Marketing teams optimize around metrics they can measure and report. If your organization demands conversion attribution numbers, marketers will produce them even if the underlying data is questionable.
This creates systematic bias toward trackable channels and short-term conversions. Brand building and word-of-mouth generation get underinvested because they’re hard to attribute precisely.
Fixing this requires leadership willing to accept less precise measurement in exchange for more effective strategy. That’s a cultural challenge more than an analytics challenge.
Practical Recommendations
Track last-click attribution as a baseline, but discount its findings by 30-50% when making decisions. It’s directionally useful but not literally true.
Invest in incrementality testing for major budget decisions. If you’re spending $50K+/month on a channel, spend $5K on properly testing whether it’s actually incremental.
Use cohort analysis to evaluate channel quality over time, not just immediate conversions.
Talk to customers regularly about how they discovered you and why they decided to buy.
Accept that some marketing effectiveness will remain unmeasurable. Make strategic decisions based on imperfect data rather than waiting for perfect attribution.
Marketing attribution is trying to solve an unsolvable problem: precisely measuring something that happens across dozens of touchpoints, multiple devices, online and offline, over weeks or months. The standard models fail because the problem is fundamentally intractable.
Better to acknowledge the limitations and make decisions based on a combination of imperfect attribution, incrementality testing, qualitative feedback, and strategic judgment. It’s less satisfying than a detailed attribution report, but it’s more honest and probably more effective.