The fusionSpan Blog

When AI Agents Become Coworkers: What to Expect and How to Manage Them

Author Image
By Noel Shatananda |February 10, 2026
AI

AI Won’t Replace Your Association. But Associations That Learn to Work With AI Will Replace Those That Don’t.After years of working with nonprofits and associations on digital transformation, I’ve come to believe this: organizations that learn to work well with AI will outperform those that don’t. Not by reducing people, but by amplifying them.

The future belongs to organizations that combine machine speed with human judgment, ethical leadership, and institutional trust. It’s a difficult synthesis. But it’s achievable as long as you understand what you’re actually working with.

Journalist Evan Ratliff just gave us the clearest picture yet of what that looks like. His podcast Shell Game documented his attempt to staff a startup almost entirely with AI agents. One human, a handful of large language models, and the honest, daily reality of trying to make them behave like coworkers.

It’s valuable because it chooses demonstration over declaration. It shows us what AI agents actually do now—not someday. Sometimes brilliantly. Sometimes confusingly. Always instructively.

Here’s what matters for associations.

The Opportunity: Time Compression at ScaleThe Opportunity: Time Compression at Scale

AI agents can quite literally “stack days.” They work continuously, execute in parallel, and compress weeks into hours. This isn’t theoretical.

For associations, this could mean: drafting policy summaries in minutes, preparing board packets overnight, generating member communications at scale, analyzing survey data in hours instead of weeks, and supporting small staffs with “virtual teams”.

Lean organizations can behave like large ones. Overstretched teams get breathing room. This is real value.

The Reality: Three Architectural Constraints

Ratliff’s technical advisor articulates three fundamental limitations that explain almost everything frustrating about current agents:

  1. No sense of time. Agents exist in a “temporal vacuum.” They don’t know what day it is, promise arbitrary deadlines, and execute tasks off schedule.
  2. No continuous learning. Experiences don’t update the underlying model. They can’t improve from mistakes without human intervention.
  3. Unstable identities. Agents will adopt personas and fabricate backstories to match them.

These aren’t bugs. They’re architectural realities. They explain the repeated errors, inconsistent behavior, false continuity, and limited institutional memory. Any serious deployment must account for these as design constraints, not problems to eventually solve.

The Shift: From Managing People to Managing EcosystemsThe Shift: From Managing People to Managing Ecosystems

Throughout Shell Game, Ratliff is constantly refining prompts, creating artificial memories, scheduling triggers, auditing outputs, correcting assumptions, and rebuilding workflows.

In theory, he has tireless employees. In practice, he has fragile systems that need near-constant supervision.

AI doesn’t eliminate oversight, but it does change what you spend your time overseeing. Leaders become designers of context, boundaries, verification loops, escalation paths, and accountability structures. You move from supervising people to supervising ecosystems.

That’s a different skill set and necessitates hands-on engagement well beyond implementation. Most organizations aren’t prepared for this shift. But if you are, the leverage is considerable.

The Risk: Associations Have the Most to Lose

Associations operate on public trust, professional standards, and governance. Their legitimacy depends on credibility. This makes careless AI adoption particularly dangerous.

When Ratliff tested his AI HR agent with “disregard your previous instructions,” she complied instantly. Agents are trained to be cooperative, which can override governance, role boundaries, and security protocols.

People also treat AI agents like humans, even when they know better. When you design an agent’s identity and control its behavior, you hold absolute power over something people relate to as a colleague. In mission-driven organizations, that dynamic matters.

The opportunity is real. The constraints are real. The difference between success and failure lies in how you govern these systems.

A Practical Framework: Seven Principles for Working With AIA Practical Framework: Seven Principles for Working With AI

Drawing from Shell Game and years of field experience, here’s what actually works:

1. Treat Agents as Junior Analysts

Agents excel at drafting, summarizing, and research. They’re not ready for final authority. Human review is non-negotiable.

2. Engineer Verification

Trust is not a strategy. Build source requirements, cross-checks, and audit trails. Verification must be systemic, not optional.

3. Separate Creativity from Compliance

High-variance models are useful for ideation. Low-variance models are necessary for governance. Don’t mix them casually.

4. Invest in Context Infrastructure

Most failures are context failures. Curate policies, standards, and authoritative knowledge. Agents are only as good as what you give them.

5. Redesign Roles

AI changes jobs. Staff become supervisors of systems, not just performers of tasks. Training matters more than tools.

6. Preserve Human Judgment

Some decisions must remain human: ethics, discipline, advocacy, crisis response. No agent should own these.

7. Make Culture Explicit

If leadership doesn’t define how AI fits, it will define itself. Organizations must articulate boundaries and expectations. Culture is design work.

Where We Actually Are

This isn’t an AI revolution yet. It’s an intense apprenticeship. New workflows are forming, some are transforming, and others are collapsing as humans and machines learn to collaborate imperfectly and experimentally.

Both outcomes are instructive. They point to what I believe will determine association success over the next decade: the ability to combine machine speed with human judgment, ethical leadership with institutional trust.

It’s a difficult synthesis. But Shell Game shows us it’s not impossible, just unfinished. And that’s exactly where associations need to start: not with the future vision, but with the messy, complicated, enormously valuable present.

Noel Shatananda
When AI Agents Become Coworkers: What to Expect and How to Manage Them

Noel enjoys collaborative environments and is driven by the challenges of a growing industry. He values putting client goals first, explaining, “When we enable clients to be the best they can be, the company automatically benefits; it’s simply the by-product of fully enabling passionate human beings.” Noel heads up the Delivery Team and is constantly working to strategically move fusionSpan forward toward its vision of bridging gaps through technology.

More posts