Agentic AI is having a moment. Autonomous agents that watch your Meta ads, poke around in GA, and shift budgets while you sleep. The pitch sounds great. The problem is the one most posts about it skip past entirely: marketing runs on deterministic data, and LLMs are probabilistic by design.
(Yes, we built an AI ad tool. No, this isn’t going to land where you think.)
Probabilistic vs. deterministic — the actual conflict
To see why agents struggle, you have to look at how they think.
Deterministic systems are the ones you already use. Excel. SQL. Python. Input A, output B. Sum a column of 100,000 rows, and the answer is a fact, not an opinion.
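That determinism is trivial to demonstrate. A minimal Python sketch (the spend values are invented, and tracked in integer cents to sidestep float rounding):

```python
# Deterministic: summing the same column always yields the same total.
# Hypothetical daily ad spend, in integer cents.
spend_cents = [12050, 8999, 24000, 7525]

total_a = sum(spend_cents)
total_b = sum(spend_cents)

# Input A, output B: a fact, not an opinion. Both runs agree exactly.
assert total_a == total_b == 52574
```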
Probabilistic systems — i.e., LLMs — are next-token predictors. They don’t calculate. They predict the most likely next word based on patterns in their training data. When an LLM looks at your performance data, it isn’t really seeing numbers. It’s predicting a linguistic response to numbers.
The risk: an LLM reasoning its way through a budget pacing calculation might hallucinate a decimal and turn a $14.50 CPA into $1.45, because that sequence of characters happens to feel more likely to its predictive engine. In marketing, a 10x error isn’t a creative variation. It’s a fired agency.
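One practical defense is to never let the model’s arithmetic leave the building: recompute every number it reports with a deterministic function. A sketch of that sanity gate (the function name, tolerance, and figures are ours, illustrative only):

```python
def cpa(spend_dollars: float, conversions: int) -> float:
    """Deterministic cost-per-acquisition: spend divided by conversions."""
    if conversions <= 0:
        raise ValueError("conversions must be positive")
    return round(spend_dollars / conversions, 2)

# Suppose the model reports a CPA of $1.45 on $1,450 spend and 100 conversions.
llm_reported_cpa = 1.45       # hypothetical hallucinated figure
actual = cpa(1450.00, 100)    # 14.5, computed rather than predicted

# Reject the narrative whenever it disagrees with the math.
if abs(actual - llm_reported_cpa) > 0.01:
    print(f"Mismatch: model said {llm_reported_cpa}, data says {actual}")
```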
500,000 rows don’t fit in a context window
You can’t just feed three years of performance data into an LLM. Models have a context window — a hard ceiling on how much they can hold in working memory at once.
So when you ask an agent to analyze trends across the last three years, it can’t actually read every row. It has to summarize. Chunk. Compress. And the things that get smoothed over in that compression tend to be exactly the things you wanted: the tiny pacing shifts, the early fatigue signals, the weeks where one campaign quietly started carrying the others.
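That smoothing is easy to reproduce without any AI at all. A toy sketch (all numbers invented) of how compressing daily rows into weekly averages mutes an early fatigue signal:

```python
# 14 hypothetical daily ROAS readings: a steep decline starts on day 10.
daily_roas = [3.1, 3.0, 3.2, 3.1, 3.0, 3.1, 3.2,
              3.1, 3.0, 2.6, 2.4, 2.2, 2.0, 1.8]

# Compress to weekly means, the way a summarizer might.
week_1 = sum(daily_roas[:7]) / 7
week_2 = sum(daily_roas[7:]) / 7

# The summary shows a moderate dip between weeks; the row-level data shows
# ROAS dropping over 40% in five days. The timing and severity are gone.
print(round(week_1, 2), round(week_2, 2))
```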
What you end up with is a high-level vibe of your data. Not a statistical analysis. A vibe.
ROAS went up — send more money
This is where the agentic dream tends to break.
Picture an agent that sees a campaign with high ROAS and moves $500 into it. To a machine, this is linear optimization. To anyone who’s actually run accounts, it’s at least three different ways to lose money:
- Ad saturation. The high ROAS exists because the audience is small. Pour budget in and frequency spikes, performance craters.
- Competition. A competitor paused for 48 hours. You’re seeing a temporary, non-repeatable bubble.
- Seasonality. It’s a localized holiday, or a one-off event, or a sale that’s about to end. The trend isn’t a trend.
A marketer feels these after a couple of years of doing the job. An LLM doesn’t feel anything. It pattern-matches.
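But none of these three checks actually requires intuition. Each can be encoded as a deterministic guardrail that runs before any budget moves. A sketch with made-up thresholds and field names (no real platform exposes exactly these fields):

```python
def safe_to_scale(campaign: dict) -> tuple:
    """Deterministic guardrails before shifting budget into a 'winner'.
    Thresholds and field names are illustrative, not from any real API."""
    if campaign["frequency"] > 3.5:         # saturation: audience already worn out
        return False, "frequency too high; likely ad saturation"
    if campaign["days_of_data"] < 7:        # bubble: too little history to trust
        return False, "window too short; ROAS may be a temporary bubble"
    if campaign["seasonal_event_active"]:   # seasonality: trend ends with the event
        return False, "active sale or holiday; the trend is not a trend"
    return True, "no guardrail tripped"

ok, reason = safe_to_scale({
    "frequency": 4.2, "days_of_data": 14, "seasonal_event_active": False,
})
print(ok, reason)  # the saturation guardrail trips first
```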
So where are LLMs actually useful?
Plenty of places. Just not in the seat where the budget decisions get made.
LLMs are translators and creators, not fiduciaries:
- Creative velocity. Generate 1,000 ad variations from a single winning hook.
- Sentiment synthesis. Read 5,000 customer reviews and surface the why behind the what.
- The interface layer. Let people ask their data questions in plain English — as long as a deterministic engine (SQL, an algorithm, an actual calculator) is doing the math underneath.
That last point is the one most AI-for-marketing pitches blur. The LLM should be the front desk, not the accountant.
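A minimal sketch of that split, with the LLM stubbed out (in a real system `ask_llm_for_sql` would call a model; here it is a hardcoded stand-in): the model only proposes a query as text, and SQLite does the arithmetic.

```python
import sqlite3

def ask_llm_for_sql(question: str) -> str:
    """Stand-in for an LLM call that translates plain English into SQL.
    Hardcoded here; the point is that it returns text, never numbers."""
    return "SELECT SUM(spend) FROM campaigns WHERE channel = 'meta'"

# The deterministic engine underneath, with made-up rows.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE campaigns (channel TEXT, spend REAL)")
db.executemany("INSERT INTO campaigns VALUES (?, ?)",
               [("meta", 1200.0), ("meta", 800.0), ("search", 500.0)])

sql = ask_llm_for_sql("How much did we spend on Meta?")
(total,) = db.execute(sql).fetchone()
print(total)  # 2000.0: computed by SQLite, not predicted by a model
```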
But agents are different — they can use tools
This is the version of the argument currently doing the rounds, and it’s worth taking seriously. The pitch goes: agents fix the reliability problem because they can call external tools — a SQL database, a calculator, an API. Even if the brain is probabilistic, the tools are deterministic.
True. Also, not enough.
An agent is only as stable as its controller — and the controller is still an LLM. Which means:
- Logical hallucinations. Even with a SQL database attached, the agent has to decide which query to write. That decision is probabilistic.
- Misinterpreted intent. A sudden CPA spike could be bad creative or a competitor bid war. A probabilistic brain feels which one is more likely. It doesn’t know.
- Inconsistent execution. Ask an agent to optimize spend five times, and you can get five different answers, because there’s no single deterministic path it’s locked to. It picks whichever route feels most probable in the moment.
Tools help. Tools don’t change what’s running the show.
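The distinction can be made concrete. In the sketch below, `random.choice` is a crude stand-in for an LLM controller sampling a route (all route names and the ROAS threshold are invented); the deterministic controller is locked to one decision path and returns the same answer every time:

```python
import random

ROUTES = ["scale_winner", "rebalance_evenly", "hold_and_wait"]

def probabilistic_controller(rng: random.Random) -> str:
    """Stand-in for an LLM controller: picks whichever route 'feels' likely."""
    return rng.choice(ROUTES)

def deterministic_controller(roas: float) -> str:
    """One locked decision path: same inputs, same route, every time."""
    return "scale_winner" if roas >= 3.0 else "hold_and_wait"

rng = random.Random()  # unseeded, like a fresh model call
print([probabilistic_controller(rng) for _ in range(5)])  # may vary run to run
print([deterministic_controller(2.4) for _ in range(5)])  # identical all five times
```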
The honest version
Agentic AI is genuinely useful. We build with it. We don’t pretend otherwise. But there is a real difference between an AI that drafts your copy, summarizes your reviews, and helps you ask better questions of your data — and one you’d hand the credit card to.
The second one needs deterministic systems doing the math underneath.
Otherwise, you’ve just hired an extremely confident intern with no calculator.