There are two Artificial Intelligence conversations happening in association marketing right now, and they are not really about AI.
One group is publishing results — workflows rebuilt, hours recovered, output scaled without headcount. The other group is publishing exits: “I already tried it, and it’s washed.” AI is maxed out, the promise outran the reality, time to move on.
Both groups believe they are describing the same tool. Okay. But what they’re describing has much more to do with their usage path, and their experience walking it, than with the actual technology.
I have been in both places, and I don’t think you can compare them apples-to-apples when one of them is clearly an orange.
So… let’s chat.
The debate is almost always framed as pro-AI versus skeptical-of-AI. The actual variable is how far in you went before you formed a conclusion.
The cost frustration is legitimate. We’ll start there.
The pricing squeeze is real and getting worse. The $20 flat-rate AI subscription that felt like a deal in 2024-25 is structurally unsustainable for the companies offering it, and they are saying so through their actions. In April 2026, Anthropic quietly moved Claude Code — previously included in the $20 Pro plan — toward its $100 and $200 Max tiers. The backlash was immediate enough that they reversed it within hours.
But the signal was already sent: the math on flat-rate access to compute-intensive AI does not work at $20 a month, and providers are actively searching for a pricing model that does.
Stack that against the broader reality. The average serious AI user in 2026 is paying across multiple subscriptions — Claude, ChatGPT, Gemini, and others — often with significant feature overlap, adding up to $110 a month or more before a single productive hour is logged. For an association professional working with constrained budgets and leadership that wants ROI demonstrated quickly, that is not an abstract concern.
I run into this too.
Usage limits mid-session. Models that drift off context in long conversations. Fighting to get something as basic as the correct date into an automated daily output — a problem that compounds the longer the project runs. These are not edge cases. They are the daily friction of operating at a higher level than most AI demos ever show you, and they feel about as good as a really bad case of athlete’s foot.
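As an aside, the date problem in particular has a boring, reliable fix: inject the date from the system clock instead of asking the model to know it. Here is a minimal sketch in Python, with a made-up function name and task; the actual model call would be whatever client your workflow uses.

```python
from datetime import date

def build_daily_prompt(task_instructions: str) -> str:
    """Prepend today's real date so the model never has to guess it."""
    # Models have no reliable sense of "today"; the workflow has to
    # supply the date, not request it from the model.
    today = date.today().strftime("%A, %B %d, %Y")
    return (
        f"Today's date is {today}. Use it verbatim anywhere the "
        f"output references the current date.\n\n{task_instructions}"
    )

# The scheduled job builds the prompt, then hands it to whichever
# model client the workflow actually uses.
print(build_daily_prompt("Draft today's member-news digest intro."))
```

Small fix, but it is exactly the kind of plumbing that never shows up in a demo.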
That friction points to a serious misconception, though, and I believe it’s the core one for people who think they’ve already tapped out AI.
Friction is not the same thing as ceiling.
A Salesforce study from late 2025, surveying nearly 4,500 marketing decision-makers, found that 75 percent of marketers have adopted AI and 84 percent are still running generic campaigns. Adoption is high. True transformation is not happening at the same rate. Both things can be (and are) true.
Those are not the same condition, though, and treating them as equivalent is where the “AI doesn’t work” conclusion comes from.
MarTech put it plainly this month: marketers were early adopters, but most teams are still using AI like a smarter autocomplete. The tool improved. The work around it did not change.
When you apply a fundamentally different kind of tool to the same workflow, you get modest speed gains at best. When those gains plateau, it is easy to call it the tool’s outer limit. It is usually the process’s outer limit.
That distinction matters because it explains how two people can pay the same $20 a month, use the same model, and arrive at completely different conclusions about whether any of it was worth it.
The technical barrier is fair to name directly.
Something the enthusiast camp does not say loudly enough: getting real value out of AI at a workflow level requires a degree of technical patience and systems thinking that was never part of the communications job description. Knowing how to prompt well is not a natural extension of knowing how to write. Building a workflow that consistently produces usable output — with the right context loaded, the right constraints set, guardrails that keep the model from confidently fabricating details — requires iteration and troubleshooting that feels nothing like the “just ask it anything” version sold in demos.
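To make that concrete, here is the rough shape of that scaffolding once it leaves the demo and enters a workflow. This is a sketch, not any vendor’s API, and every name in it is invented for illustration; the point is how much sits around the model call: context, constraints, and a cheap post-check for the most common fabrication failure.

```python
# A hypothetical prompt scaffold: context loaded, constraints set,
# and a guardrail check on the output. All names are illustrative.

CONTEXT = (
    "Audience: association communications directors.\n"
    "Voice: plain and direct, no hype. Style: AP."
)

CONSTRAINTS = (
    "Rules:\n"
    "- Use only facts supplied in the source block. If a fact is\n"
    "  missing, write [NEEDS SOURCE] rather than inventing one.\n"
    "- Keep the output under 150 words."
)

def build_prompt(task: str, source_facts: str) -> str:
    """Wrap the task in context, constraints, and source material."""
    return "\n\n".join(
        [CONTEXT, CONSTRAINTS, f"Source facts:\n{source_facts}", f"Task: {task}"]
    )

def guardrail_check(output: str) -> list[str]:
    """Cheap post-checks that flag output for human review."""
    problems = []
    if "[NEEDS SOURCE]" in output:
        problems.append("model flagged a missing fact; do not publish as-is")
    if len(output.split()) > 150:
        problems.append("over the 150-word length constraint")
    return problems
```

None of that is exotic. But none of it resembles “just ask it anything,” either.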
The classically trained marcomm professional who invested real time, hit real walls, and still found the output underwhelming is not wrong. They ran into a legitimate gap between what the tool promises and what it takes to actually capture that promise. What I would challenge is the next step — the conclusion that the gap is permanent rather than a function of how far into the implementation they actually got.
What walking away early actually reveals.
A 2026 workflow analysis circulating in operations and tech circles lands on a consistent observation: this year is separating organizations that use AI as a productivity shortcut from organizations that use it to redesign how work moves. The organizations that treated it as a fad — tried it for 90 days, generated some captions, got mediocre results, declared it hype — are not wrong that their experience was unsatisfying. They are wrong about what the experience means.
Shallow implementation produces shallow results.
That is not a verdict on the technology. It is a verdict on the depth of the change. And by the time the organizations that walked away decide to come back, the ones that kept building will have 18 to 24 months of workflow infrastructure — trained prompts, documented systems, accumulated institutional context — that does not replicate quickly. That compounding gap is invisible until it suddenly is not.
In other words, as I tell my daughter, who is learning competitive swimming: the only way she will hit her race time goals is to keep practicing with purpose. Skipping practices between meets will not keep her competitive. And if she skips them anyway, she’ll come back to the next meet and find that the swimmers who put in the work got faster while she didn’t.
Where I am in this, and why I keep publishing the specifics.
I started where most people start: using AI to draft faster, clean up copy, produce a first version of something I was going to write anyway. That phase produced real frustration, because the editing time often erased the drafting gains. The output needed enough work that the net savings were marginal.
The shift happened when I stopped using AI as a drafting accelerator and started treating it as operating infrastructure.
Workflows built around it. Accumulated context that carries across sessions. Research, competitive intelligence, site management, publication scheduling, analytical work that would have otherwise required contractor hours or simply gone undone. The result — documented in April — was $20,000 to $32,000 in professional services equivalent value produced by one person over a defined window, with AI functioning as a system rather than a shortcut.
I publish those specifics because the progression from pilot to implementation is the story missing from both camps. It comes back to the point from the top: the debate gets framed as pro-AI versus skeptical-of-AI, but the actual variable is how far in you went before you formed a conclusion.
Shallow adoption produces shallow results and correctly gets called out. That is a fair critique of a specific depth of implementation. It is not a fair critique of the tool’s range.
So, the question for anyone on either side of this: how far did you actually go before you set up camp?




