Pull up any ad account in 2026 and you'll see a "Purchase ROAS" column in Ads Manager. It's a number. It's specific. It's also a model output — and most advertisers treat it like a measured fact.

This post is about the gap between attributed ROAS and true ROAS, the three measurement layers that close that gap, and how to allocate budget when your tracking can't see every conversion anymore.

Why Last-Click ROAS Lies to You

Last-click attribution made sense when every browser session left a deterministic trail. In 2026, three things have permanently broken that:

  1. Cross-device journeys. A buyer sees the ad on Instagram on their phone, googles you on their laptop, buys on their tablet. Last-click sees a "direct" or "organic" purchase and attributes nothing to the ad.
  2. Privacy-driven signal loss. ITP, ETP, ad blockers, and cookie deprecation drop a meaningful share of touchpoints from view.
  3. View-through invisibility. Meta's view-through tracking on iOS has been limited for years. A user who sees your ad without clicking and buys later is invisible to the pixel.

The result: Meta's reported conversions undercount what your ads actually drove. Brands we've seen run incrementality tests usually find their real Meta-driven ROAS is 30-100% higher than last-click ROAS suggests.

Layer 1: Modeled Conversions

Modeled conversions are Meta's own statistical estimate of conversions it couldn't directly observe. When the algorithm sees a click but no follow-up event, it uses patterns from observed conversions to estimate whether one likely happened.

In Ads Manager, modeled conversions appear as part of the standard Purchase number; they're not broken out separately by default.

Modeled conversions are a real improvement over raw observed conversions, but they're still calibrated to Meta's view of the world. They don't help with the cross-device problem and they don't tell you whether the conversions would have happened anyway.
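
To make the intuition concrete, here is a toy sketch of the idea (not Meta's actual model): total estimated conversions are the ones the pixel observed plus a probability-weighted estimate for clicks whose outcome was never seen. All numbers are illustrative.

# Toy illustration of the "modeled conversions" idea -- NOT Meta's actual model.
# Estimate = conversions observed directly
#          + sum of modeled conversion probabilities for clicks with no follow-up signal.

observed_conversions = 120

# Clicks where the pixel lost visibility, each with an (illustrative)
# modeled probability of having converted.
unresolved_click_probs = [0.08, 0.02, 0.15, 0.05, 0.11]

modeled_extra = sum(unresolved_click_probs)
estimated_total = observed_conversions + modeled_extra

print(f"Observed conversions: {observed_conversions}")
print(f"Modeled additions: {modeled_extra:.2f}")
print(f"Estimated total: {estimated_total:.2f}")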

Layer 2: Incrementality Testing

Incrementality is the question every advertiser actually wants answered: if I turned these ads off, how much revenue would I lose?

Last-click ROAS doesn't answer this. Modeled conversions don't either. The only way to measure incrementality is to run an experiment.

Geo lift tests

The cleanest design: pick matched pairs of geographic markets, run ads in one half, hold out the other. After 4-8 weeks, compare revenue between treatment and control geos. The control-adjusted difference is your incremental revenue; divide it by treatment spend to get incremental ROAS.

Treatment markets: San Diego, Phoenix, Denver, Atlanta
Control markets:    Sacramento, Tucson, Salt Lake City, Charlotte
Test duration:      6 weeks
Treatment spend:    $200K
Treatment revenue:  $1.2M
Control revenue:    $850K
Lift:               $350K (control-adjusted)
Incremental ROAS:   1.75x

Note: that 1.75x can sit well below a 4.0x last-click ROAS in Ads Manager. The gap is conversions Meta claims credit for that the ads didn't actually cause.
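
For clarity, here's the arithmetic behind that readout as a minimal Python sketch using the illustrative numbers above. It assumes the treatment and control geos are matched well enough that control revenue stands in for the treatment group's baseline; a production analysis would use synthetic control or difference-in-differences instead of a raw subtraction.

# Minimal geo-lift arithmetic using the illustrative numbers above.
# Assumes treatment and control geos are well matched (comparable baselines);
# real analyses use synthetic control or diff-in-diff to adjust for drift.

treatment_spend = 200_000
treatment_revenue = 1_200_000
control_revenue = 850_000

# Control-adjusted lift: revenue in treatment geos above what the
# matched control geos suggest would have happened anyway.
incremental_revenue = treatment_revenue - control_revenue
incremental_roas = incremental_revenue / treatment_spend

print(f"Incremental revenue: ${incremental_revenue:,.0f}")
print(f"Incremental ROAS: {incremental_roas:.2f}x")  # 1.75x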

Conversion lift studies

Meta's built-in tool. The platform randomly holds out a subset of users from seeing your ads, then measures conversion rate differences between exposed and held-out groups. Easier to set up than geo testing but has its own quirks (the holdout is random Meta users, not random buyers, so noise is higher).
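
The core readout is a comparison of conversion rates between the exposed and held-out groups, scaled back up to incremental conversions. A rough sketch with made-up numbers:

# Rough sketch of a conversion lift readout with made-up numbers.
# Exposed group could see the ads; holdout group was withheld from them.

exposed_users, exposed_conversions = 500_000, 6_500
holdout_users, holdout_conversions = 100_000, 1_100

exposed_rate = exposed_conversions / exposed_users   # 1.30%
holdout_rate = holdout_conversions / holdout_users   # 1.10%

# Incremental conversions: what the exposed group did beyond the
# baseline rate established by the holdout group.
incremental = exposed_conversions - exposed_users * holdout_rate
lift_pct = (exposed_rate - holdout_rate) / holdout_rate * 100

print(f"Incremental conversions: {incremental:,.0f}")
print(f"Relative lift: {lift_pct:.1f}%")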

Pulse tests

Cheaper and dirtier: pause Meta spend entirely for 2-4 weeks and see what happens to total revenue. Confounded by everything (seasonality, other channels, weather), but gives you a rough sanity check. Good for very low-budget operations that can't afford a real test.

Layer 3: Marketing Mix Modeling (MMM)

MMM is an econometric approach: build a model of total revenue as a function of media spend, seasonality, promotions, and exogenous factors. The model assigns credit to each channel without needing user-level tracking at all.

What used to be a quarterly enterprise consulting project is now achievable with open-source tools (Meta's own Robyn, Google's Meridian, several commercial alternatives) running on weekly data.
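
As a toy illustration of the underlying idea (nowhere near what Robyn or Meridian actually do), here's a tiny regression of weekly revenue on adstocked channel spend. Every number, decay rate, and coefficient is fabricated for the demo; real MMMs add saturation curves, seasonality, promotions, and proper uncertainty estimates.

# Toy MMM sketch: weekly revenue regressed on adstocked channel spend.
# Illustrative only -- all data below is synthetic.
import numpy as np

def adstock(spend, decay=0.5):
    # Carry a fraction of each week's spend effect into the following weeks.
    out = np.zeros(len(spend))
    carry = 0.0
    for i, s in enumerate(spend):
        carry = s + decay * carry
        out[i] = carry
    return out

rng = np.random.default_rng(0)
weeks = 52
meta_spend = rng.uniform(20_000, 60_000, weeks)
tiktok_spend = rng.uniform(5_000, 25_000, weeks)

# Synthetic "ground truth" revenue: baseline + channel effects + noise.
revenue = (150_000
           + 2.0 * adstock(meta_spend)
           + 1.2 * adstock(tiktok_spend)
           + rng.normal(0, 10_000, weeks))

# Fit: revenue ~ intercept + adstocked Meta spend + adstocked TikTok spend.
X = np.column_stack([np.ones(weeks), adstock(meta_spend), adstock(tiktok_spend)])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)

print(f"Estimated baseline weekly revenue: ${coef[0]:,.0f}")
print(f"Revenue per adstocked Meta dollar:   {coef[1]:.2f}")
print(f"Revenue per adstocked TikTok dollar: {coef[2]:.2f}")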

MMM is best suited to big, slow-moving questions: how much each channel contributes overall and how the total budget should be split.

MMM won't tell you which ad set to pause tomorrow. It will tell you whether $50K/month should go to Meta, TikTok, or YouTube — which is a more important question.

How to Layer These

You don't pick one. Different decisions need different measurement.

Decision                                   Best Measurement
Which creative variation to scale          In-platform attributed metrics (modeled conversions)
Whether Meta is profitable as a channel    Incrementality test
How to split budget across channels        MMM
Daily campaign optimization                In-platform metrics + CAPI signal quality
"Should we increase Q4 budget?"            MMM + recent incrementality results

The mistake is using one layer for all decisions. In-platform ROAS is fine for picking creative; it's misleading for choosing channels. MMM is great for channel allocation; it's useless for tomorrow's bid decisions.

What "Good" Looks Like in 2026

Most well-run advertising programs we see are doing roughly this:

  1. In-platform metrics, backed by solid CAPI signal quality, for daily creative and campaign decisions.
  2. An incrementality test (geo or conversion lift) once or twice a year to check whether each channel is actually incremental.
  3. An MMM refreshed on weekly data to guide cross-channel budget splits.
  4. A calibration factor from the most recent test applied to everyday ROAS reporting.

The key insight: you don't need perfect measurement at every layer. You need good enough measurement that gives you the right signal for the decision being made.

The Calibration Factor Trick

Once you've run an incrementality test, you have two numbers: Meta's reported ROAS (call it X) and true incremental ROAS (call it Y). Divide Y by X and you have a calibration factor.

Apply that factor to ongoing in-platform reporting and you have a much better daily ROAS estimate without re-running the test every week. Most advertisers find their factor is somewhere between 0.5 and 0.9 (Meta over-reports), but for some brands it's >1 (Meta under-reports because it doesn't see brand search lift).
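
The arithmetic is a one-liner; a minimal sketch with illustrative numbers:

# Calibration factor: true incremental ROAS divided by Meta-reported ROAS.
# Numbers are illustrative.

reported_roas = 4.0        # what Ads Manager showed during the test (X)
incremental_roas = 1.75    # from the most recent lift test (Y)

calibration_factor = incremental_roas / reported_roas   # 0.44

# Apply to today's in-platform number to estimate true ROAS.
todays_reported_roas = 3.6
calibrated_roas = todays_reported_roas * calibration_factor

print(f"Calibration factor: {calibration_factor:.2f}")
print(f"Calibrated ROAS estimate: {calibrated_roas:.2f}x")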

Re-run the calibration test every 6-12 months — the factor changes as your campaigns and tracking change.

Where Tooling Helps

Modeled conversions are baked into Meta. Incrementality tests can be set up in Ads Manager. MMM tooling has gone from enterprise-only to accessible. Where AI-driven platforms like Ads Agents add value is in the connective tissue: ensuring your CAPI signal quality is high enough that modeled conversions are reliable, surfacing when in-platform metrics start to diverge from incrementality benchmarks, and making the calibration factor part of routine reporting rather than a one-off project.

The goal is simple: stop optimizing against numbers that aren't true. The advertisers who measure right will outspend the ones who don't, even at the same headline ROAS.

Ready to automate your ads?

Let AI manage your Facebook & Instagram campaigns. Start free, upgrade when you're ready.

Get Started Free →