When Budget Gets Cut, Where Does Attribution Go First?
Q1 2026 was a tough budget cycle for a lot of marketing teams. We’ve watched several enterprise attribution programs get partially or fully dismantled in the last six months as CFOs sharpened their pencils. Some of these cuts were sensible. Others were the kind that look smart in a board pack and quietly destroy attribution maturity that took years to build.
This is what we’re seeing actually happen, and what we’d recommend if you’re staring at a budget conversation right now.
What’s getting cut first
A clear pattern has emerged across the teams we’ve talked to.
Multi-touch attribution platforms are the first to go. Tools that promised to allocate credit across the funnel using algorithmic models have struggled to demonstrate clear ROI in the post-iOS-14 world. When budget season comes, these are easy targets. We’ve seen platforms that cost $200-500k annually get cut without much hesitation.
Custom data warehouse modelling teams are next. Internal teams maintaining custom attribution models in BigQuery or Snowflake are vulnerable when leadership can’t explain what they do beyond “they make charts.” Teams of 2-4 people working on attribution often get reorganised or eliminated.
Specialist agencies on retainer are third. Attribution-focused agencies that ran on $20-40k/month retainers are getting cut or reduced to project work.
What survives almost everywhere:
- Marketing mix modelling (MMM) - cheap relative to its strategic value
- Incrementality testing programs - proven and CFO-defensible
- Basic platform attribution from Google, Meta, and the major DSPs - effectively free
The pattern: anything expensive that’s hard to defend in concrete ROI terms is exposed. Anything cheap, defensible, or proven survives.
What we think about this
Mixed feelings, honestly. Some of these cuts are correct. The MTA platforms in particular have genuinely struggled to deliver on their original promise. We’ve worked with teams that spent millions over five years and couldn’t point to a clear business outcome. Cutting those isn’t a tragedy.
But the cuts to internal attribution teams concern us more. Internal modelling teams are often the source of the most defensible attribution work - they understand the business deeply, they can build models that match the actual decision-making process, and they accumulate institutional knowledge that’s hard to replace.
When you cut these teams, you don’t just save the salary cost. You lose the ability to interpret your own data. You become more dependent on platform attribution from Google and Meta, which are aggressively self-serving. That’s a worse spot than spending the modest cost of keeping the internal capability.
A taxonomy of attribution cuts that work
Some cuts genuinely improve the program. The ones we’ve seen work:
Cutting MTA tools while investing in MMM. Replacing an expensive MTA platform with a more disciplined MMM program (whether built internally, with Quantium or similar specialists, or with one of the modern open-source tools like Meridian or Robyn) almost always nets out better. You get strategic insight rather than tactical noise.
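For a sense of why MMM is cheap relative to its value: at its core it is a regression of an outcome on transformed channel spend. The sketch below is a toy illustration with synthetic weekly data, a geometric adstock transform, and ordinary least squares; production tools like Meridian or Robyn add saturation curves, seasonality, and Bayesian priors on top of this same skeleton.

```python
import numpy as np

def adstock(spend, decay=0.5):
    """Geometric adstock: each week carries over a fraction of the
    previous week's accumulated effect (decay is an assumption here)."""
    out = np.zeros(len(spend))
    carry = 0.0
    for i, s in enumerate(spend):
        carry = s + decay * carry
        out[i] = carry
    return out

# Synthetic weekly data for two channels plus a baseline (toy numbers).
rng = np.random.default_rng(0)
search = rng.uniform(10, 50, size=52)
social = rng.uniform(5, 30, size=52)
revenue = 100 + 2.0 * adstock(search) + 0.8 * adstock(social) \
          + rng.normal(0, 5, size=52)

# Fit baseline + per-channel effects by ordinary least squares.
X = np.column_stack([np.ones(52), adstock(search), adstock(social)])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)
# coef holds [baseline, search effect, social effect] estimates
```

The point of the sketch is the cost structure: this is a few hundred lines of analyst code over data you already have, not a six-figure platform contract.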
Consolidating multiple attribution sources to a single source of truth. Many large enterprises have 4-6 different attribution methods feeding different teams different numbers. Consolidating to one model that everyone uses, even if it’s imperfect, beats running multiple imperfect models that disagree.
Cutting the long tail of small-channel tracking. If you’re spending tracking budget on channels that are 2% of your spend, the marginal value isn’t there. Cut the tracking infrastructure for the small stuff and reinvest in better measurement of the channels that actually matter.
A taxonomy of cuts that destroy value
The ones that look like savings but cost you long-term:
Cutting the team that owns measurement, period. If nobody at the company is responsible for marketing measurement, the function decays fast. You’ll be relying on platform-reported numbers within 12 months and over-investing in whatever channels lie best.
Cutting incrementality testing. Holdout testing is the most defensible attribution method available. It’s cheap to run if you have the discipline. Cutting it doesn’t save much money but loses the only method that actually answers “did this work?”
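Part of why holdout testing is so defensible is that the analysis is trivially auditable. A minimal sketch, assuming a randomised audience holdout with conversion counts (the counts below are hypothetical):

```python
import math

def holdout_lift(conv_test, n_test, conv_hold, n_hold):
    """Compare conversion rates between an exposed group and a holdout
    using a two-proportion z-test. Returns (absolute lift, z-score)."""
    p_test = conv_test / n_test
    p_hold = conv_hold / n_hold
    p_pool = (conv_test + conv_hold) / (n_test + n_hold)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_test + 1 / n_hold))
    return p_test - p_hold, (p_test - p_hold) / se

# Hypothetical experiment: 50k users exposed, 50k held out.
lift, z = holdout_lift(conv_test=1200, n_test=50_000,
                       conv_hold=1000, n_hold=50_000)
# lift is the incremental conversion rate; z above ~2 suggests
# the difference is unlikely to be noise
```

That a CFO can follow every line of this is exactly what makes it the hardest measurement method to argue against in a budget meeting.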
Cutting first-party data infrastructure. Some teams have cut investment in their data infrastructure (the CDP, the warehouse, the ETL maintenance) thinking they can rebuild later. They can’t, easily. Data infrastructure debt compounds.
What we’d do if we were facing the conversation
A few practical recommendations if you’re heading into a budget defence meeting.
Bring proof of decisions changed by attribution. The strongest defence is “in the last 12 months, attribution work directly informed these N decisions, which we estimate produced these outcomes.” Even imperfect estimates are better than abstract claims about insight.
Quantify what you’d lose, not what you’d save. Don’t build the case around what attribution costs; build it around what visibility you’d lose. “If we cut this, we won’t be able to answer X” lands harder than “this costs Y.”
Propose a smaller, more defensible program rather than fighting for the full one. If a $1.5m attribution program is on the chopping block, a $400k focused program with clear scope might survive. Half a program is better than no program.
Tie attribution work to business outcomes, not marketing outcomes. Everyone is more interested in revenue and contribution than in marketing-internal metrics like CPA. Reframe attribution outputs in terms the CFO actually cares about.
The AI angle
A pattern we’re seeing more often: AI-augmented attribution capability replacing some of the previously-expensive infrastructure. Modern LLMs can do meaningful work in summarising attribution data, identifying patterns across large data sets, and producing executive-ready narratives that previously required significant analyst time.
This isn’t a substitute for the underlying attribution work - the models still need real data and real frameworks. But it does change the cost structure. A small attribution team augmented with good AI tooling can produce outputs that previously required a much larger team. We’ve seen teams use this to defend their function: “we can do the work of the prior 6-person team with our 3-person team plus AI tooling.”
This is real. The teams making it work have been deliberate about which AI tools they use, how they validate AI outputs against ground truth, and what work they keep human-led versus AI-augmented. Some have brought in AI consulting partners to design these workflows properly rather than reinventing them in-house. The teams doing it badly are shipping AI-generated nonsense to executives and damaging trust in the function.
A note on the agencies
The attribution agency landscape has compressed in 2025-2026. Several specialist firms have either pivoted into broader analytics consulting, been acquired, or quietly downsized. The agencies that have weathered the cuts well share some patterns: they brought genuine specialist capability (not just dashboards), they tied work to business outcomes, and they were honest about what attribution can and can’t do.
If you’re choosing an attribution partner in this environment, look for firms that pass those tests. The ones who promise certainty about marketing impact are the ones who’ll disappoint you. The ones who help you make better decisions under uncertainty are worth keeping.
The bottom line
Budget cuts to attribution programs in 2026 are happening across the board. Some of them are correct - the prior decade saw real overinvestment in tools that didn’t deliver. Some of them are catastrophic mistakes that organisations will regret.
The discipline is figuring out which is which for your specific situation. Cut the things that aren’t earning their keep. Defend the things that genuinely make you better at marketing. Don’t be sentimental about the former or complacent about the latter.
The marketing measurement function is too important to lose, but it doesn’t deserve protection on autopilot. Make the case for what’s working. Cut what isn’t. Survive to do the work next quarter.