In late 2025, Virtuous Software surveyed 346 nonprofits for what's become the most-cited piece of research of 2026: the Nonprofit AI Adoption Report.1 The headline numbers stopped me cold the first time I read them.
92 percent of organizations are now using AI in some capacity. 78 percent report small to moderate improvements from that use. And only 7 percent report major strategic impact.
The report's authors call it the "efficiency plateau." Most associations have one person using ChatGPT to draft an appeal while the rest of the team is still buried in manual processes and disconnected systems. The Virtuous CEO put it more bluntly: "That's not a strategy. It's a workaround."1
That quote is the whole article. Most associations don't have an AI strategy. They have a handful of people using AI tools individually, getting modest individual productivity gains, and then wondering why the organization-level transformation everyone keeps promising isn't materializing.
This is the central question facing association leaders in 2026. AI adoption is no longer the issue. AI value is. And the gap between those two things isn't closing on its own. It's widening.
The plateau is structural, not cultural
The reflexive explanation for the 92/7 gap is "associations need to use AI better." That explanation is wrong, and I see it leading associations down expensive dead ends every week.
The associations stuck on the plateau aren't under-using AI. They're blocked by infrastructure that can't support the kind of AI use that produces strategic impact. No amount of better prompting fixes a foundation problem.
The Virtuous research breaks down what associations are actually doing with AI:1
- 65 percent describe their AI use as reactive and individual: one-off prompts, personal experimentation, isolated drafts.
- 18 percent report operational use across team workflows.
- Only 7 percent say AI is embedded into goals, budgets, and performance indicators.
The pattern isn't "early adopters versus laggards." It's "people using AI as a personal productivity tool versus organizations using AI as institutional capability." The first requires nothing more than a ChatGPT subscription. The second requires data infrastructure, governance, and architectural decisions most associations haven't made.
The Naylor 2025 Association Benchmarking Report tells the same story from a different angle. AI usage among associations jumped from 36 percent to 58 percent in a single year, but 40 percent of associations still lack regular member feedback loops.2 Read that twice. The infrastructure to listen to members hasn't kept pace with the technology to talk back to them. That's the entire problem in one sentence.
What the 7% are doing differently
Across the associations and nonprofits I've worked with that are actually getting strategic impact from AI, the pattern is consistent. None of it is about which model they use or how clever their prompts are. All of it is about the foundation underneath.
They have a data layer. A modern data warehouse (Snowflake, BigQuery, or Databricks) with the unified member record, full transaction history, and engagement signals from every system. AI grounded in real organizational data instead of a single tool's narrow slice. I've covered this architecture in detail in Why Your Data Warehouse Is the New System of Record. It's the prerequisite that most leaders skip and most projects discover too late.
They have identity resolution. A canonical member record that exists across systems. AI assistants that know the member completed a course, attended an event, posted in the community, and is overdue for renewal, all in one query. Without identity resolution, AI looks confidently stupid. It has access to data, but the data is fragmented across systems with no way to connect them. (I've published a detailed breakdown of why Member 360 is now an operating model question, not a data project question.)
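To make the "all in one query" point concrete, here's a minimal sketch of assembling a unified member context for an AI assistant, assuming the identity resolution layer has already mapped each system's local ID to one canonical member ID. Every system name, field name, and record here is a hypothetical illustration, not any vendor's actual schema.

```python
# Per-system records, each keyed by that system's own local ID.
# All data below is illustrative.
ams = {"A-1001": {"name": "Dana Lee", "renewal_due": "2026-03-01"}}
lms = {"lms-77": {"last_course_completed": "Ethics Update 2025"}}
community = {"u_415": {"posts_last_90d": 6}}

# Output of the identity resolution layer: canonical ID -> local IDs.
id_map = {"member-42": {"ams": "A-1001", "lms": "lms-77", "community": "u_415"}}

def member_context(member_id: str) -> dict:
    """Merge every system's view of one member into a single context dict."""
    locals_ = id_map[member_id]
    context = {"member_id": member_id}
    context.update(ams.get(locals_.get("ams"), {}))
    context.update(lms.get(locals_.get("lms"), {}))
    context.update(community.get(locals_.get("community"), {}))
    return context

ctx = member_context("member-42")
# One lookup now sees course history, community activity, and renewal
# status together -- the context an AI assistant needs to be useful.
```

Without the `id_map` layer, each system's record is an orphan, and the assistant can only answer from whichever narrow slice it happens to be pointed at.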
They have governance that doesn't slow them down. A documented data classification scheme, consent tracking that ties to specific use cases, access controls that flow through to AI systems, and an AI use policy staff actually understand. The Virtuous research found that 47 percent of nonprofits have zero AI policy in place.1 Forty-seven percent. The 7 percent doing real work have governance built into the project from week one, not bolted on after launch.
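What "governance built into the project" looks like in practice is a policy gate the AI pipeline checks before it reads a field. The sketch below assumes a documented classification scheme and per-use-case consent records; the classification labels, use case names, and consent data are all illustrative assumptions, not a standard.

```python
# Illustrative classification of member fields.
CLASSIFICATION = {
    "email": "restricted",
    "engagement_score": "internal",
    "event_history": "internal",
}

# Consent tied to specific use cases, not a blanket yes/no.
CONSENT = {"member-42": {"renewal_outreach"}}

# Which classifications each AI use case is allowed to read.
ALLOWED = {
    "renewal_outreach": {"internal", "restricted"},
    "public_chatbot": {"internal"},
}

def may_use(member_id: str, field: str, use_case: str) -> bool:
    """A field is usable only if its classification is allowed for the
    use case, and the member consented to that use case when the
    field is restricted. Unknown fields default to restricted."""
    cls = CLASSIFICATION.get(field, "restricted")
    if cls not in ALLOWED.get(use_case, set()):
        return False
    if cls == "restricted" and use_case not in CONSENT.get(member_id, set()):
        return False
    return True
```

The design choice worth noting: because access control flows through one function, every new AI initiative inherits the policy instead of becoming its own governance project.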
They have use case clarity. "Build a member-facing chatbot" is not a use case. "Reduce time-to-resolution on the top fifteen member service questions, which currently consume 60 percent of our member services team's time, by deploying an AI assistant grounded in our knowledge base" is a use case. Use case clarity drives every other readiness dimension. Without it, you're going to spend a lot of money to discover that AI works just like the demos but doesn't actually do anything for your organization.
They are not buying their AI from their AMS vendor. Nearly every AMS in the market now claims an AI feature. Most of these features are demos with a "powered by AI" label slapped on. The 7 percent are building AI on top of their data layer using whichever LLM provider serves the use case best (Anthropic, OpenAI, Google, or open models), not on whatever the AMS vendor happens to ship this quarter.
Why associations on legacy AMS systems are structurally stuck
Here's the part of this conversation nobody at vendor conferences will say out loud. Associations running on legacy monolithic AMS platforms aren't just behind on AI. They are structurally incapable of getting beyond the efficiency plateau without architectural change.
The reasons aren't philosophical. They're mechanical. And they're easy to demonstrate if you sit with the data for an afternoon.
A traditional AMS holds member data in a proprietary schema with limited programmatic access. AI grounded in this data either has to go through the vendor's API (which usually doesn't expose enough), or work around the AMS by extracting data into a separate system. Either way, the AI cannot see the member's full activity, cannot act across systems, and cannot maintain real-time context. It can give the appearance of working in a demo. It cannot actually work in production.
Cross-system data is even worse. The member exists in the AMS, the email platform, the community, the LMS, and the events tool, with a different ID in each. Without identity resolution running somewhere outside the AMS, AI agents cannot construct the member context they need to be useful. The Naylor finding that 40 percent of associations lack feedback loops is a direct consequence of this.2
Governance is also structurally harder. With data fragmented across vendor systems, access control is fragmented too. There's no single place to enforce data classification, consent, or AI use policy. Every new AI initiative becomes its own governance project. That's exhausting, and it's expensive, and it's the reason I see so many associations stall after the first AI pilot.
This is why the same associations who keep adopting more AI tools keep landing in the same spot on the plateau. Adding ChatGPT to the workflow does not solve a foundation problem. It can't.
The four prerequisites for crossing the gap
Across the work I've seen produce real AI impact, four things are non-negotiable.
A modern data warehouse. Snowflake, BigQuery, Databricks, or equivalent. Member, transaction, and engagement data flowing in. Documented data models on top. This is the spine. There is no skipping this step.
Identity resolution and a unified member record. Built in the warehouse, not in any operational system. The canonical view of every member, with documented match rules tuned to your association's specific patterns: chapter members, employer-paid memberships, lapsed-then-rejoined cases, multiple emails, family or organizational memberships. Generic match rules will give you a unified record that's wrong in subtle ways you'll only discover after deploying AI on top of it.
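A minimal sketch of what documented, deterministic match rules look like, tuned for two of the association-specific patterns named above (multiple emails, lapsed-then-rejoined records). This is a sketch only: real pipelines add fuzzy matching, survivorship rules, and a human review queue, and the field names here are hypothetical.

```python
import re

def normalize_email(email: str) -> str:
    return email.strip().lower()

def normalize_name(name: str) -> str:
    # Drop punctuation and word order so "Lee, Dana" matches "Dana Lee".
    return " ".join(sorted(re.sub(r"[^a-z ]", "", name.lower()).split()))

def same_member(a: dict, b: dict) -> bool:
    """Deterministic match rules; strongest evidence first."""
    # Rule 1: any shared email address (members often have several).
    emails_a = {normalize_email(e) for e in a.get("emails", [])}
    emails_b = {normalize_email(e) for e in b.get("emails", [])}
    if emails_a & emails_b:
        return True
    # Rule 2: same normalized name plus same postal code catches
    # lapsed-then-rejoined records created under a new email.
    if (normalize_name(a.get("name", "")) == normalize_name(b.get("name", ""))
            and a.get("postal_code")
            and a.get("postal_code") == b.get("postal_code")):
        return True
    return False
```

The point of writing rules like these down, rather than trusting a vendor's generic matcher, is that rule 2 encodes a pattern specific to your membership. Generic rules miss it, and you find out only after AI is running on the merged record.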
A governance framework that doesn't gate every initiative. Data classification, consent management, access control, and an AI use policy approved at the executive level. The goal isn't to slow AI down. The goal is to build it on a foundation that won't blow up in eighteen months when a member asks an uncomfortable question and you can't answer it.
Use case discipline. A small number of clearly scoped use cases tied to specific operational outcomes, with the data and governance prerequisites mapped for each. The 7 percent didn't get there by deploying AI everywhere. They got there by deploying it precisely.
Use cases that work even before the foundation is complete
While the deeper foundation work runs in parallel, some use cases are forgiving enough to deliver value early. These are good places to start.
Internal staff copilots over a curated knowledge base. Member services FAQ, bylaws, governance documents, board packets. The content is bounded, the audience is internal, the risk surface is small. This is the single most reliable early win, and the one I recommend almost every association start with.
Renewal propensity scoring on AMS-only data. Most renewal signal comes from the AMS itself. A model trained on the data already in your AMS can deliver real lift even before full Member 360 is in place. It's not as good as it'll be after the foundation is built, but it's good enough to be worth the investment now.
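To show the shape of such a model without pretending to a real one: the sketch below scores renewal propensity from AMS-only fields using a hand-weighted logistic form. In practice the weights come from fitting on historical renewal outcomes; the features, weights, and threshold here are illustrative assumptions.

```python
import math

# Hypothetical weights; a real model learns these from past renewals.
WEIGHTS = {
    "years_member": 0.30,      # tenure predicts renewal
    "events_last_year": 0.45,  # engagement predicts renewal
    "auto_renew": 1.50,        # auto-renew is the strongest single signal
}
BIAS = -2.0

def renewal_propensity(member: dict) -> float:
    """Logistic score in [0, 1]; higher means more likely to renew.
    Missing features default to 0."""
    z = BIAS + sum(w * member.get(f, 0) for f, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))
```

Every input is a field the AMS already holds, which is exactly why this use case doesn't have to wait for Member 360.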
Email subject line and content optimization through generative AI. Works on top of any reasonable email platform. Doesn't depend on deep data integration. Quick to deploy, easy to measure, low risk.
Document search over your library of standards, journals, or course materials. A high-value retrieval use case that depends mostly on document organization and permissions, not member data quality.
Avoid early on: any member-facing AI that needs cross-system context, any AI that touches dues or renewal pricing without governance, any AI agent that takes actions across systems without an identity resolution layer underneath. I've watched all three of these go badly. They're predictable failures.
A 90-day path off the plateau
You can't fully solve this in 90 days. But you can absolutely make 90 days of progress that converts AI from "we use ChatGPT for drafts" to "we have a real AI program."
Days 1 to 30. Pick one analytics use case (e.g., renewal propensity) and one retrieval use case (e.g., internal staff copilot for member services). Map the data and governance prerequisites for each. Document data quality, identity resolution, and policy gaps. Draft an AI use policy.
Days 31 to 60. Stand up a data warehouse if you don't have one. Build the first identity resolution layer covering AMS, email, and one other system. Approve the AI use policy. Deploy the first internal staff copilot.
Days 61 to 90. Build the renewal propensity model on warehouse data. Push the score back to the operational system where renewal staff actually work (this is reverse ETL). Measure the lift. Document what you learned.
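The reverse ETL step in days 61 to 90 can be sketched as follows. The client class and its update method are hypothetical stand-ins for a real AMS API; the point is the direction of flow, from warehouse back to the operational system.

```python
# Scores as they'd come back from the warehouse model (illustrative).
warehouse_scores = [
    {"member_id": "member-42", "renewal_propensity": 0.91},
    {"member_id": "member-77", "renewal_propensity": 0.18},
]

class AmsClient:
    """Stand-in for a real AMS API client; records writes in memory."""
    def __init__(self):
        self.records = {}

    def update_member(self, member_id: str, fields: dict) -> None:
        self.records.setdefault(member_id, {}).update(fields)

def sync_scores(client: AmsClient, rows: list[dict]) -> int:
    """Write each score onto the member record, flagging at-risk members
    so renewal staff see the signal in the system they already use."""
    for row in rows:
        client.update_member(row["member_id"], {
            "renewal_propensity": row["renewal_propensity"],
            "at_risk": row["renewal_propensity"] < 0.4,  # illustrative cutoff
        })
    return len(rows)
```

The design choice that matters: the score lives in the warehouse, but the action happens in the operational system. If the score never leaves the warehouse, renewal staff never see it, and the lift you measure is zero.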
At day 90, you've moved from individual AI use to operational AI capability. You know more about your association's AI readiness than 95 percent of the sector. You have something concrete to show the board, which is going to come up in the next quarterly meeting whether you have an answer or not.
The widening gap
Here's the part of this conversation that bothers me most. The 7 percent who are getting strategic impact today are compounding their advantage every quarter. They're building on a foundation that improves with use, deploying new capabilities faster than peers can plan for them, and developing institutional AI literacy that takes years to replicate.
The associations on the plateau are also compounding, in the other direction. Every quarter spent inside the AMS-centered architecture is a quarter of widening gap. The AI tools their vendors ship will not close it. The training programs will not close it. The board pressure will not close it. The only thing that closes it is the foundational work most associations have been deferring.
The cost to start has never been lower. The data warehouse is a managed cloud service. The transformation tools are mature. The reverse ETL platforms are battle-tested. The LLM APIs are commoditized. None of this requires a research team. It requires an architectural decision and the discipline to execute on it.
The 7 percent aren't smarter than their peers. I've worked with both groups, and the talent distribution is similar. The 7 percent made the architectural decision earlier, and they did the unglamorous foundational work that makes everything else possible. The cheapest moment to follow them is now. The next cheapest is six months from now. After that the gap is just going to keep widening.
ARYS Intelligence helps associations and nonprofits build the data foundations that real AI capability depends on. Our Association Data Sprint delivers the architecture, identity resolution, and governance layer that separate the 7 percent from the rest. To explore your association's AI readiness in confidence, connect with us for an assessment.