10 Reasons You Can't Trust Platforms to Optimize for You in 2026

A classmate in my MBA program asked a good question last week. We were discussing the intersection of automation and advertising, and he made the obvious leap: "If the channels have more data and more processing power than any marketer, why not just give them a URL and a budget and let them run?"

It makes sense on some levels. Google, Meta, and LinkedIn have access to more behavioral data than any agency or in-house team will ever accumulate. Their optimization engines keep getting better and work faster than manual controls. The case for full delegation is intellectually coherent.

But it depends on one assumption: that the platform will optimize for your business outcomes rather than its own revenue. And there is a long, well-documented history of that assumption not holding up.

Here are 10 ways this continues to play out in practice. These are not hypothetical scenarios. They come from years of managing paid media at Blackbird PPC and from patterns I have seen repeated across clients, channels, and contexts. I have grouped them by the type of misalignment rather than listing them chronologically, because the same structural problems show up in different costumes.

The Consent Problem: When "No" Does Not Mean No

1. Defaults That Favor the Platform

This one is pervasive and cuts across platforms. Advertising newcomers lean on platform "help" to launch their first campaigns, follow all the platform's directions and recommendations along the way, and end up unknowingly using default settings that almost never benefit them.

No single platform is uniquely at fault. LinkedIn automatically enables audience network placements beyond the feed. Google defaults search campaigns into display inventory, applies broad match keywords more aggressively than many advertisers expect, inflates suggested CPCs, and sets geographic targeting to "presence or interest" rather than physical location. I could keep going.

You could make a case for any of these settings as a logical starting point, but you can't ignore the aggregate effect: an initial configuration that maximizes platform revenue from day one. A new advertiser who trusts the platforms has a much harder time spending efficiently.

2. The Feature We Declined That Got Turned On Anyway

Roughly six years ago, a Google representative proposed a product designed to enable more aggressive targeting and bidding. We evaluated the proposal and declined it.

Despite our clear rejection, Google activated the feature anyway. Exactly what we expected to happen played out: campaign spend increased materially without producing incremental conversions, and we reimbursed the client for the wasted budget. When we sought reimbursement from Google, their response was that the campaigns had been assigned budgets at a given level, so they were free to spend to those limits.

That reasoning reframes a budget cap as discretionary authorization rather than an upper limit. (At Blackbird, we intentionally avoid fully exhausting budgets.) The representatives were clearly incentivized to drive adoption, and when the initiative failed, there was no corresponding accountability. I followed up repeatedly over several weeks, but the matter was never resolved.

The Marginal Return Blind Spot: Why "Spend More" Is Always the Recommendation

Three of the most common platform pitches share the same underlying flaw: they ignore basic economic principles.

3. The Deeply Flawed Profit Maximization Model

Several years ago, two senior Google representatives asked about our client's gross margin. When we answered approximately 50 percent, they sketched a simple rule on the whiteboard: continue spending as long as total revenue divided by two, minus total media cost, remained positive.

Although this appears reasonable at first glance, it is flawed in two important respects. First, it assumes that all reported conversions are incremental -- that none would have occurred without advertising. In reality, many of those conversions, particularly in brand and retargeting campaigns, would have happened regardless. Second, the model assumes a flat cost curve, implying that later conversions cost no more than earlier ones. In practice, marginal returns decline as spend increases, making the final dollars the least efficient and the primary target of such recommendations.

A more accurate formulation would define profit maximization as marginal revenue, adjusted for margin, equaling marginal cost. That distinction is critical.
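To make the distinction concrete, here is a minimal sketch in Python. The spend levels and revenue curve are invented for illustration; the 50 percent margin comes from the whiteboard example above.

```python
# Hypothetical revenue at increasing spend levels, showing diminishing
# marginal returns (all numbers invented for illustration).
margin = 0.50  # gross margin from the whiteboard example

spend_levels = [10_000, 20_000, 30_000, 40_000, 50_000]
revenue = [40_000, 70_000, 90_000, 100_000, 105_000]

prev_spend = prev_revenue = 0
for spend, rev in zip(spend_levels, revenue):
    # Platform rule: keep spending while margin-adjusted total revenue
    # exceeds total media cost.
    platform_says_go = rev * margin - spend > 0
    # Marginal rule: profit contributed by the *last* increment of spend.
    marginal_profit = (rev - prev_revenue) * margin - (spend - prev_spend)
    print(f"spend ${spend:,}: platform rule says go={platform_says_go}, "
          f"marginal profit of last increment=${marginal_profit:,.0f}")
    prev_spend, prev_revenue = spend, rev
```

On this toy curve, the platform's rule endorses every spend level, while the marginal rule shows the increments beyond $20,000 breaking even at best. Those final dollars are exactly what the "keep spending" recommendation is selling.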

4. The Mathematical Fallacy of Click Quality from Higher CPCs

I still hear this one all the time: reps claim that increasing CPCs unlocks higher-quality traffic, implying that lower bids exclude advertisers from the "best" clicks.

It is partially true. Higher bids can secure stronger ad positions, increasing impression frequency among the same users, which can raise aggregate conversion rates. That effect is real.

But the reasoning omits the cost side. Additional frequency delivers diminishing marginal value -- the third impression is worth less than the first, and the tenth far less still -- while the cost per click keeps rising. Over and over, I have seen that bidding up to pursue "quality" traffic usually brings declining return on ad spend.

This is another manifestation of the marginal return problem: the argument highlights potential upside while ignoring the cost curve. Increased spend is framed as access to better performance but typically produces similar outcomes at a higher cost.
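A toy model makes the asymmetry easy to see. All numbers here are invented: each additional click won from the same audience is assumed to be worth less, while the CPC required to win it rises.

```python
# Toy model: bidding higher wins extra clicks from the same audience,
# but each one is worth less while its cost rises (numbers invented).
click_value = [10.0, 6.0, 3.0, 1.5, 0.8]  # value of the 1st..5th extra click
click_cost = [1.0, 1.3, 1.7, 2.2, 2.8]    # rising CPC to win each one

total_value = total_cost = 0.0
for i, (value, cost) in enumerate(zip(click_value, click_cost), start=1):
    total_value += value
    total_cost += cost
    print(f"after extra click {i}: blended ROAS={total_value / total_cost:.2f}, "
          f"marginal ROAS={value / cost:.2f}")
```

Blended ROAS still looks respectable at every step, even though the marginal ROAS of the incremental clicks falls below break-even by the fourth click.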

5. Blended Averages: Clever Camouflage for Wasted Spend

Upper- and lower-funnel campaigns serve distinct objectives and operate at very different CPMs and CPAs. When platforms report a blended CPA across campaign types, an acceptable average can obscure the fact that portions of the media plan are highly inefficient.

The rationale for blending is that upper-funnel investment enables lower-funnel conversions. While plausible, this presumes direct contribution, aggregate profitability, and full incrementality -- none of which are assured.

Advertisers who accept blended averages have no visibility into where spend is effective and where it is not. The platforms that feature blended reporting so prominently know this well.
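To see how easily an acceptable average can hide a bad campaign, here is a minimal sketch with hypothetical numbers (all spend and conversion figures invented):

```python
# Hypothetical media plan: a blended CPA can look acceptable while one
# campaign type is far outside target (all numbers invented).
campaigns = {
    "branded search": {"spend": 5_000, "conversions": 250},  # $20 CPA
    "retargeting": {"spend": 10_000, "conversions": 200},    # $50 CPA
    "upper funnel": {"spend": 25_000, "conversions": 50},    # $500 CPA
}

total_spend = sum(c["spend"] for c in campaigns.values())
total_conversions = sum(c["conversions"] for c in campaigns.values())
print(f"blended CPA: ${total_spend / total_conversions:.0f}")

for name, c in campaigns.items():
    print(f"  {name}: ${c['spend'] / c['conversions']:.0f} CPA")
```

Against a $100 CPA target, the $80 blended figure looks healthy, even though the campaign receiving the most budget converts at five times that target.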

Claims and Excuses That Are Almost Impossible to Disprove

Two of the most persistent platform arguments share a structural feature: they are nearly impossible to disprove in the short run, which makes them very difficult for advertisers to push back against.

6. The Learning Phase as a Blank Check

When Meta campaigns underperform, representatives often attribute the issue to the algorithm needing more time to learn, advising against changes or budget reductions in favor of providing additional data.

This can be valid in principle. Machine learning systems do require volume, and premature adjustments can disrupt optimization. But the explanation has become a catch-all that deflects scrutiny. It excuses weak CPAs, postpones accountability, and promotes sustained spend during periods when a more experienced advertiser might otherwise reassess.

Compounding the issue, advertisers are rarely given clear expectations for when the learning phase ends. If performance fails to improve, the remedy is simply more learning, with no defined success criteria or endpoint. What is offered as a technical explanation becomes, with a little more scrutiny, an ambiguous excuse to keep the budget pipeline open.

7. The Tracking Gap as an Article of Faith

Privacy regulation and platform changes have created real limitations in conversion tracking. GDPR, Apple's App Tracking Transparency, and cookie deprecation are legitimate obstacles to accurate measurement. Platforms have responded to reduced visibility by layering probabilistic modeling and modeled conversions on top of deterministic tracking.

That is all well and good, but the tracking gap has also become a convenient refuge. Advertisers are told that conversions are occurring but delayed, partially unobservable, or still populating through models, and that longer attribution windows or proxy metrics will eventually clarify performance. Each explanation may be valid in isolation, but in combination, they serve to justify continued spend despite a lack of measurable results.

Remember: measurement limitations cut both ways. If outcomes cannot be observed, they also cannot be proven. Advertisers should challenge claims of hidden performance by asking what the strategy becomes if the modeled conversions never materialize.

Measurement Games: Making Bad Numbers Look Good

8. View-Through Conversions with No Campaign Type Nuance

A view-through conversion is credited when a user is shown an ad, does not click it, and later converts within a defined attribution window, often 24 hours or longer. By default, platforms report these conversions together with click-through results.

This is especially problematic in retargeting, where ads reach users who have already visited a site and were likely to convert anyway. View-through attribution can be meaningful for campaigns targeting new users, where ads may influence behavior without generating a click. But you rarely see that data broken out unless you are persistent about asking for it.

The impact of inflated attribution in retargeting can be significant. I have seen reported conversions decline by more than half when retargeting campaigns are removed from the equation.
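The arithmetic is easy to reproduce. In this hypothetical breakdown (all numbers invented), stripping out retargeting view-throughs cuts conversions by more than half and more than doubles the effective CPA:

```python
# Hypothetical attribution breakdown: reported conversions vs.
# conversions excluding retargeting view-throughs (numbers invented).
spend = 30_000
click_through_conversions = 180
retargeting_view_throughs = 220  # users who had already visited the site

reported = click_through_conversions + retargeting_view_throughs
print(f"reported CPA:   ${spend / reported:.0f}")                   # $75
print(f"click-only CPA: ${spend / click_through_conversions:.0f}")  # $167
```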

Credit where credit is due, though: Meta's recent shift toward incremental measurement has improved transparency on this issue.

9. The Metric Pivot: When Conversions Fail, Sell Sentiment

When YouTube and display campaigns fail to drive measurable conversions (which is more the rule than the exception), reps usually offer brand measurement data like recall rates, positive sentiment, and intent to purchase. These are real signals of brand health with long-term impact.

The issue is that the shift from conversion metrics to sentiment metrics almost always happens in response to poor conversion performance, not as a predetermined measurement strategy. Lift surveys rely on self-reported intent and do not connect to downstream revenue. Recall numbers rarely get translated into a cost-per-point-of-lift that can be compared against anything else in the media plan.
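That translation is straightforward, which makes its absence telling. Here is a minimal sketch with invented survey results and spend:

```python
# Hypothetical brand lift study (all numbers invented): converting a
# recall lift into a cost-per-point figure that can be compared.
spend = 50_000
control_recall = 0.20  # recall rate among unexposed users
exposed_recall = 0.24  # recall rate among exposed users

lift_points = (exposed_recall - control_recall) * 100  # 4 points of lift
print(f"cost per point of recall lift: ${spend / lift_points:,.0f}")
```

A cost-per-point figure like this can at least be lined up against the rest of the media plan; a raw recall percentage cannot.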

No matter how positive those numbers look, consider this: in a reactive approach, there is no framework for what sufficient lift would look like. If brand lift targets are defined before the campaign launches, the results should carry more weight. If they are not, they are just stand-ins for the performance metrics that came up short.

10. Competitor Benchmarks as Pressure

Benchmark data can provide useful context, and you should always ask your account reps to share what they can. But use it to inform your strategy, not as pressure to keep up with bigger competitor budgets.

Unless you know how your competitors are actually performing, how their product margins compare with yours, or whether bigger business goals like IPOs or acquisitions are in play, it is not good practice to let their spend levels dictate yours, no matter how much pressure you might feel from reps.

So Will Agencies Always Have a Role?

This might sound like it is building to one big argument on behalf of agencies. It is not. We used to be skeptical of things like target CPA bidding, and now that is our go-to. There is no denying that the automation of block-and-tackle work makes in-house teams an effective option for more kinds of organizations. And plenty of agencies work under incentives (like percentage of spend) that are just as easily misaligned with a client's best interests.

What I am arguing is that as an industry, we are not yet ready for platform-managed advertising.

I am not trying to disparage specific reps or even specific platforms. Reps generally believe in the products they are pushing, but they are also evaluated in a way that encourages strategic framing of performance data.

For any advertiser, no matter how inexperienced, part of the job must be to ask questions on behalf of their organization: What is our marginal return today? Can you break out view-through conversions by campaign type? Is brand search actually incremental? What do those brand lift metrics actually prove?

Those questions will change as platforms provide more data (Meta's incremental measurement is a good example). But keep asking them, and stay skeptical. We might get to a point where full automation is the best option, but you can trust that the platforms will be eager to sell that scenario whether or not their systems are ready to back it up.
