Earlier this year, Fonteva reached out and asked me to contribute to a report they were building on AI adoption in the association sector. The result — AI in Associations: From Exploration to Execution — pulls together perspectives from several association professionals, including Rick Bawcum of Cimatri, Dr. Catherine Lada, Lucas Braga from APTA, and Alan DeYoung of the Wisconsin EMS Association.
(If you haven’t already, I’d recommend downloading it to read about what other associations and experts are doing and accomplishing in the sector.)
But a white paper is a snapshot, and what I contributed to that document represents where my thinking was at a specific moment — not where it is now, after three more months of doing the work. So this post is the extended version: what I said, what I’d add, and what the past several months of serious AI adoption have clarified for me.
What I Said in the Report
My contribution to the Fonteva piece focused on three things SSH has actually done with AI:
- Using it as an instant tutor when hitting a software obstacle
- Deploying it as a department assessment tool to interrogate how marketing can serve the organization more effectively
- Building a custom website auditing tool that surfaces errors across an expansive site far faster than any manual process could
I also flagged hallucinations directly, and the report notes that I’m quick to call out the ways AI gets things wrong — and why domain authority is the antidote.
Associations and their leaders need to bring genuine expertise to the interaction. That’s what separates useful AI output from confidently wrong AI output, and the gap between those two things is wider than most people assume when they’re new to this.
That point still holds. It’s just not the whole picture anymore.
The Piece I Would Add: AI Is Not Just Smarter Execution
The Fonteva report does a good job capturing the responsible use framework — guardrails, verification culture, transparency, living policy documents. It also captures the strategic layer well: start with your highest-friction problems, don’t layer AI onto broken workflows, close the gap between the 85 percent exploring and the 24 percent who have an actual strategy.
What the report doesn’t fully address — because it’s hard to address in a white paper — is what happens to the organizational capacity equation when AI is genuinely integrated rather than experimentally deployed. That’s a different conversation, and it’s the one I’ve been living inside for the past several months.
Here’s the specific thing that has changed in my thinking: AI doesn’t just make existing work faster. At a certain level of integration, it changes what work is possible at all.
The brand guide that had been on the someday list for two years. The 99-page site audit that now runs automatically every Monday. The 36-article content series aligned to five strategic pathways that two people could not have produced at that speed under any prior conditions. These are not efficiency gains. They are organizational capabilities that didn’t exist before.
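Of those examples, the weekly site audit is the most automatable. The post doesn’t describe how SSH’s tool is built, so this is only a hypothetical sketch of the general idea: a script that walks page HTML and flags common issues (missing titles, images without alt text). The `PageAuditor` class and `audit_page` function are illustrative names, not the real tool.

```python
from html.parser import HTMLParser


class PageAuditor(HTMLParser):
    """Collects simple audit flags for one HTML page.

    Hypothetical checks for illustration only: a missing <title>
    and <img> tags without alt text.
    """

    def __init__(self):
        super().__init__()
        self.has_title = False
        self.images_missing_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.has_title = True
        elif tag == "img" and "alt" not in dict(attrs):
            self.images_missing_alt += 1


def audit_page(html: str) -> list[str]:
    """Return a list of human-readable issues found in one page."""
    auditor = PageAuditor()
    auditor.feed(html)
    issues = []
    if not auditor.has_title:
        issues.append("missing <title>")
    if auditor.images_missing_alt:
        issues.append(f"{auditor.images_missing_alt} <img> tag(s) without alt text")
    return issues


if __name__ == "__main__":
    sample = '<html><head></head><body><img src="logo.png"></body></html>'
    for issue in audit_page(sample):
        print(issue)
```

Scheduling a script like this to run every Monday (via cron, Task Scheduler, or a CI job) and fetch real pages is what turns a one-off check into the standing organizational capability the post describes.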
That distinction matters for how associations think about AI adoption. If the frame is “how do we do what we already do, faster,” the ROI calculation is real but bounded. If the frame is “what becomes possible for our organization that wasn’t possible before,” the calculation is different — and the answer, to me, is much more interesting.
The Hallucination Problem Is Real, But It’s Manageable
I want to stay on the hallucination point for a moment, because it’s the one I raised in the report and it’s the one I see association professionals most confused about in practice.
AI gets things wrong. Sometimes it gets them wrong confidently, which is worse. In an association context, where the stakes include member trust, credentialing accuracy, and organizational reputation, that failure mode is not theoretical. It is a real risk that requires a real response.
The response, though, is not to avoid AI.
It is to bring the right professional posture to the interaction. The competitive intelligence report that AI helps synthesize is only as accurate as the professional reviewing it who knows the landscape well enough to catch what’s wrong. The member communication AI drafts is only as accurate as the communicator who knows the policy, the nuance, and the tone the member actually needs. The site audit AI generates is only as useful as the staff member who can distinguish a real critical issue from a flag that doesn’t apply to your specific architecture.
Domain authority is the filter. It doesn’t make the hallucination problem go away. It makes the hallucination problem manageable, which is the realistic standard, not an aspirational one.
What the Report Gets Exactly Right
Lucas Braga’s framing in the report is the one that has stayed with me: “AI isn’t your transformation. Your workflows are.” That’s correct, and it’s the most useful corrective to the hype cycle that most associations are still navigating.
Buying a tool is not a strategy. Running a prompting workshop is not a strategy. The strategy is identifying where your team is spending the most time on work that shouldn’t require human judgment, and building the workflow that removes you from that relay.
Rick Bawcum’s point about starting with highest-friction problems is the tactical corollary. The flashiest use case is rarely the most valuable one. The most valuable one is wherever your team is currently spending the most repetitive hours — the report that gets rebuilt from scratch every quarter, the member inquiry that gets answered the same way 200 times a year, the content series that gets deprioritized because production time exceeds available capacity.
And Dr. Lada’s advice to treat the AI policy as a living document is something I’d underscore with specific urgency: the tools themselves are changing faster than any policy written six months ago anticipated. If your AI policy has a publication date and no review schedule, it’s already out of date.
The Practical Starting Point
If you’re an association professional who has been watching AI from a cautious distance, the Fonteva report is a genuinely useful place to start. It is practitioner-grounded, it names the responsible use framework clearly, and it includes specific examples from real association staff doing real work — not hypotheticals about what AI might eventually do for the sector.
Read the whole thing. Then, ask yourself the question that Bawcum poses and that I’ve found to be the most clarifying one in any conversation about AI adoption: where is your team currently doing the most repetitive, time-consuming work? Start there. Not with the technology. With the problem.
The technology is ready. The question is whether the organization is willing to do the workflow thinking that makes it useful.
Download AI in Associations: From Exploration to Execution from Fonteva here.




