Most association professionals aren’t asking the right question about AI.
The question most people are asking: “Will AI take my job?”
The question worth asking: “What percentage of what I do today could AI perform at 80 percent of my current quality level — and what does that mean for how my organization values me?”
Those are different questions. The first produces anxiety. The second produces a plan.
I ran this assessment recently — both on my own role and on several of the director-level roles across the departments I work closest with. The framework came from Dario Amodei’s writing on AI disruption and his Senate testimony on which white-collar roles AI will automate in the next one to three years versus which ones become more valuable because of AI.
The first-blush results felt uncomfortable, but they weren't nearly as dire as I expected. For today, anyway.
The Pattern Nobody’s Talking About
Across every department I assessed, the same structural problem surfaced.
The work that AI automates fastest is the work most visible to leadership as “what the department does.” Renewal reminder emails. Application status communications. Certificate processing. Registration confirmations. Report generation. These are legible, countable outputs — the kind that historically justified headcount.
The work that AI can’t touch — and that actually determines organizational outcomes — is nearly invisible to leadership. Relationship judgment. Governance instinct. The ability to read a board room. Knowing when a member’s silence means disengagement versus satisfaction. That’s the work that retains members, saves accreditation relationships (and revenue!), and catches problems before they compound.
The threat scenario for most association professionals isn’t “AI takes your job.” It’s “leadership looks at your department and asks whether one AI-fluent person could now do what two people do.”
That question is coming. Probably within the next budget cycle or two.
A Task-by-Task Honest Look
For any association professional doing this assessment, the framework is simple. Take every major task you do and classify it in one of three ways.
Automatable means AI can perform it at 80 percent or better of your current quality. For most association communicators and operations directors, this covers first-draft communications, routine member correspondence, report compilation, scheduling logistics, basic data analysis, and certificate or credential processing. This is the work that disappears first.
AI-Augmented means you do it better with AI than without it, but your judgment is still the operating variable. Strategic content planning, member segmentation, event programming decisions, grant narrative development — AI accelerates these, but your knowledge of the field, the membership, and the organization is what makes the output actually useful. This is where most professionals should be spending their time right now.
Human-Essential means AI cannot do it at any quality level that matters. Governance judgment. Board relationships. Crisis communication with a member who’s angry and considering lapsing. The ability to walk into a room and know whether a partnership conversation is going to close. Ethics calls. Lived professional experience that gives you instincts no training data can replicate.
Most association professionals, if they’re honest, will find their Automatable and AI-Augmented categories are larger than they expected, and their Human-Essential category is smaller — but more irreplaceable — than they realized.
The Compressed Timeline Warning
Amodei’s central thesis is that breakthroughs that used to take decades now take five to ten years. For association professionals, what that means concretely is this: the administrative work AI is absorbing now was, five years ago, considered too nuanced for automation. The relational and governance work that feels safely human today is not automatically safe in five years.
The window to reposition isn’t long. The professionals who will be indispensable in three years are the ones who, right now, are actively making the invisible work visible to leadership — and building AI fluency that multiplies their strategic output rather than just their production speed.
The Centaur Opportunity
The highest-value move available to association professionals right now is what researchers call the centaur model: human judgment combined with AI capability, producing output that neither could generate alone.
In practice this looks like: using AI to synthesize member behavior data into retention risk signals, then applying human relationship judgment to act on them before a renewal lapses. Or using AI to audit content output against strategic pillars, then applying professional knowledge to identify which gaps actually matter. Or using AI to accelerate a content strategy, then applying editorial judgment to ensure it actually reflects where the organization is going — not just where it’s been.
The centaur isn’t someone who uses AI to work faster. It’s someone who uses AI to work at a level of strategic synthesis that wasn’t previously possible for one person. That’s a different job than either “human doing everything manually” or “AI doing everything automatically.” It’s harder to replace, harder to budget away, and harder to replicate without the human half.
The Irreplaceability Audit
If you’re going to do one thing after reading this, do this: audit your own irreplaceability.
Write down specifically what you bring to your organization that no AI can replicate. Not generally — specifically. Not “I have relationships” but “I am the person who knows that the board chair’s real concern about the accreditation renewal cycle is staffing, not standards, and that knowing that changes how every conversation about that program should be framed.”
That specificity is your protection. Vague irreplaceability is easy to dismiss. Specific, demonstrated irreplaceability — combined with AI-multiplied output — is the profile that survives whatever the next budget cycle brings.
The question isn’t whether AI will change your role. It already has. The question is whether you’re the one shaping how it changes, or whether that’s being decided for you because you never weighed in.




