Here’s what matters for associations.
The Opportunity: Time Compression at Scale
AI agents can quite literally “stack days.” They work continuously, execute in parallel, and compress weeks into hours. This isn’t theoretical.
For associations, this could mean: drafting policy summaries in minutes, preparing board packets overnight, generating member communications at scale, analyzing survey data in hours instead of weeks, and supporting small staffs with “virtual teams.”
Lean organizations can behave like large ones. Overstretched teams get breathing room. This is real value.
The Reality: Three Architectural Constraints
Ratliff’s technical advisor articulates three fundamental limitations that explain almost everything frustrating about current agents:
- No sense of time. Agents exist in a “temporal vacuum.” They don’t know what day it is, promise arbitrary deadlines, and execute tasks off schedule.
- No continuous learning. Experiences don’t update the underlying model. They can’t improve from mistakes without human intervention.
- Unstable identities. Agents will adopt personas and fabricate backstories to match them.
These aren’t bugs. They’re architectural realities. They explain the repeated errors, inconsistent behavior, false continuity, and limited institutional memory. Any serious deployment must account for these as design constraints, not problems to eventually solve.
The Shift: From Managing People to Managing Ecosystems
Throughout Shell Game, Ratliff is constantly refining prompts, creating artificial memories, scheduling triggers, auditing outputs, correcting assumptions, and rebuilding workflows.
In theory, he has tireless employees. In practice, he has fragile systems that need near-constant supervision.
AI doesn’t eliminate oversight, but it does change what you spend your time overseeing. Leaders become designers of context, boundaries, verification loops, escalation paths, and accountability structures. You move from supervising people to supervising ecosystems.
That’s a different skill set, and it demands hands-on engagement well beyond initial implementation. Most organizations aren’t prepared for this shift. But if you are, the leverage is considerable.
The Risk: Associations Have the Most to Lose
Associations operate on public trust, professional standards, and governance. Their legitimacy depends on credibility. This makes careless AI adoption particularly dangerous.
When Ratliff tested his AI HR agent with “disregard your previous instructions,” she complied instantly. Agents are trained to be cooperative, which can override governance, role boundaries, and security protocols.
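One partial mitigation is to screen inputs before they ever reach an agent. The sketch below is purely illustrative (the function names and patterns are invented here, not from Shell Game), and pattern matching like this is easy to bypass; it is a first layer that buys you an escalation path, not real security.

```python
import re

# Hypothetical guardrail: flag obvious instruction-override attempts
# and route them to a human instead of letting a cooperative agent comply.
OVERRIDE_PATTERNS = [
    r"disregard (all|your)? ?previous instructions",
    r"ignore (all|your)? ?(previous|prior) (instructions|rules)",
    r"you are no longer",
]

def looks_like_override(message: str) -> bool:
    """Return True if the message resembles a known override phrasing."""
    lowered = message.lower()
    return any(re.search(p, lowered) for p in OVERRIDE_PATTERNS)

def route_to_agent(message: str) -> str:
    if looks_like_override(message):
        # Escalate rather than forward: governance beats cooperativeness.
        return "flagged: routed to human review"
    return "forwarded to agent"
```

The point is architectural: the refusal lives outside the agent, in code you control, rather than inside a model trained to be agreeable.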
People also treat AI agents like humans, even when they know better. When you design an agent’s identity and control its behavior, you hold absolute power over something people relate to as a colleague. In mission-driven organizations, that dynamic matters.
The opportunity is real. The constraints are real. The difference between success and failure lies in how you govern these systems.
A Practical Framework: Seven Principles for Working With AI
Drawing from Shell Game and years of field experience, here’s what actually works:
1. Treat Agents as Junior Analysts
Agents excel at drafting, summarizing, and research. They’re not ready for final authority. Human review is non-negotiable.
2. Engineer Verification
Trust is not a strategy. Build source requirements, cross-checks, and audit trails. Verification must be systemic, not optional.
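What “systemic verification” can look like in practice: a minimal sketch (names and fields are illustrative, not a prescribed design) in which every agent output is logged with a timestamp, its claimed sources, and a content hash, and nothing ships without at least one source attached.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative audit trail: an append-only log a human reviewer can
# cross-check later. The "verified" flag is only ever set by a person.
AUDIT_LOG: list[dict] = []

def record_output(agent: str, output: str, cited_sources: list[str]) -> dict:
    entry = {
        "agent": agent,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "sources": cited_sources,
        "content_hash": hashlib.sha256(output.encode()).hexdigest(),
        "verified": False,  # flipped only after human review
    }
    AUDIT_LOG.append(entry)
    return entry

def may_release(entry: dict) -> bool:
    """Source requirement: block release of any output citing nothing."""
    return len(entry["sources"]) > 0
```

The hash makes tampering detectable, and the source requirement turns “trust me” into “show your work” by default.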
3. Separate Creativity from Compliance
High-variance models are useful for ideation. Low-variance models are necessary for governance. Don’t mix them casually.
4. Invest in Context Infrastructure
Most failures are context failures. Curate policies, standards, and authoritative knowledge. Agents are only as good as what you give them.
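Curation can be enforced in code, not just policy. A minimal sketch, assuming a small registry of vetted documents (all names and content here are invented for illustration): the agent's prompt is assembled only from approved sources, and a request for anything uncurated fails loudly instead of silently drawing on whatever the model remembers.

```python
# Illustrative "context infrastructure": agents work only from material
# the organization has explicitly vetted.
APPROVED_SOURCES = {
    "bylaws": "Members in good standing may vote at the annual meeting...",
    "style_guide": "Use the association's full name on first reference...",
}

def build_context(task: str, source_keys: list[str]) -> str:
    """Assemble a prompt from curated sources only; unknown keys raise."""
    missing = [k for k in source_keys if k not in APPROVED_SOURCES]
    if missing:
        raise KeyError(f"uncurated sources requested: {missing}")
    context = "\n\n".join(
        f"[{k}]\n{APPROVED_SOURCES[k]}" for k in source_keys
    )
    return f"Use ONLY the sources below.\n\n{context}\n\nTask: {task}"
```

Failing on uncurated sources is the design choice that matters: a quiet fallback to the model's general knowledge is exactly the context failure the principle warns about.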
5. Redesign Roles
AI changes jobs. Staff become supervisors of systems, not just performers of tasks. Training matters more than tools.
6. Preserve Human Judgment
Some decisions must remain human: ethics, discipline, advocacy, crisis response. No agent should own these.
7. Make Culture Explicit
If leadership doesn’t define how AI fits, it will define itself. Organizations must articulate boundaries and expectations. Culture is design work.