Most AI Content Strategies Stop at the Answer: The Deeper Question I’ve Been Exploring

The Spark

As a content engineer, I’ve been using AI models heavily — both at work and in my personal projects — to get things done faster and more effectively. I draft outlines, analyze content performance, generate diagrams, research topics, and even refine content governance processes with them almost daily.

Lately, something started shifting in the way I interact with these tools. When I moved beyond simple chat assistants and began using more advanced agentic systems, I became genuinely intrigued by how agents actually work when I give them a query.

I’d watch them break down a goal, ask clarifying questions, evaluate constraints, call on tools or APIs, reason through steps, and then take action — sometimes even looping back to verify the outcome. It was fascinating. And it made me pause.

    I’ve spent the better part of my career — first at IBM, then VMware, and now leading content strategy for AWS — making sure our knowledge base is truly useful. We’ve modularized topics, built scalable taxonomies, added rich semantic metadata, and created governance systems that support AI-assisted search and personalization. We’ve even stood up AI assistants to help our stakeholders get faster answers.

    "On paper, that’s success. The metrics look good. The feedback is positive."

    But this personal experience with agents kept nagging at me. The more I used them, the more I realized we might be optimizing our knowledge base for the wrong end state.

    Understanding the Difference Between Assistants and Agents

    Curiosity about how agents work led me into deeper independent research on my own time — thinking through the implications, sketching models, and connecting dots across everything I’ve built so far.

    Here’s what became clear to me:

    Most of what we call “AI-ready” content today is built for Assistants. These are the helpful chat interfaces and Q&A tools that respond when you ask a question. They retrieve information, summarize it, and give you an answer. They’re incredibly useful for support tickets, internal FAQs, and quick lookups.

    In simple terms, Assistants are reactive. They excel at reading paragraphs, matching intent to existing text, and generating human-like responses. That’s exactly why we’ve focused so much energy on modular content, semantic tagging, and retrieval-augmented generation (RAG).

    "But the next wave isn’t just Smarter Assistants. It’s Agents..."


    Agents are different. They don’t just answer — they act.

    An agent can interpret a high-level goal, evaluate constraints, reason through multiple steps, call tools or APIs when needed, make decisions, and execute a workflow — all while verifying the outcome.

    Agents are proactive, stateful, and goal-oriented. They operate in loops of perception → reasoning → action → verification. And they will rely on our content not just as reference material, but as executable knowledge they can reason over directly.
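The perception → reasoning → action → verification loop described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration — every function here is an invented stub standing in for real context-gathering, planning, tool calls, and outcome checks, not the API of any actual agent framework:

```python
# Minimal sketch of an agent loop: perceive -> reason -> act -> verify.
# All function names below are hypothetical stand-ins for illustration.

def agent_loop(goal, max_steps=5):
    state = {"goal": goal, "done": False, "log": []}
    for _ in range(max_steps):
        observation = perceive(state)        # gather current context
        action = reason(state, observation)  # decide the next step
        result = act(action)                 # execute (tool or API call)
        state["log"].append((action, result))
        if verify(state, result):            # check success criteria
            state["done"] = True
            break
    return state

# Stub implementations so the sketch runs end-to-end.
def perceive(state):
    return {"steps_taken": len(state["log"])}

def reason(state, obs):
    # Toy two-step plan: reset the password, then enable MFA.
    return "enable_mfa" if obs["steps_taken"] else "reset_password"

def act(action):
    return f"{action}: ok"  # a real agent would invoke a tool here

def verify(state, result):
    return result == "enable_mfa: ok"  # goal met once MFA is enabled

final = agent_loop("reset password and enable MFA")
```

The point of the sketch is the shape, not the stubs: the loop is stateful, goal-driven, and only terminates when verification — not mere answer generation — says the goal is met.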

    That distinction — one I only fully appreciated after using agents myself — hit me hard. Because almost everything we’ve built — even the best-structured, semantically enriched content — is still optimized for reading and answering, not for reasoning and acting.

    Let’s Make This Concrete: A Practical Example

    Let's take a simple query...
    “Help me reset my password and set up multi-factor authentication on my company account.”

    How a current assistant would typically respond

    “Sure! Here are the steps:

    1. Go to the login page and click ‘Forgot password’.
    2. Enter your email and follow the reset link.
    3. Once logged in, go to Security settings → Enable MFA. Here are the detailed instructions [links to 3 docs]. Let me know if you get stuck!”

    Helpful, clear — but you still have to do all the clicking yourself.

    Now imagine the same request in a more complex enterprise context (one I deal with regularly):

    “Help me securely deploy a workload in a regulated EU environment while staying compliant.”

    How a current assistant would typically respond (exactly the kind of output I see every day):

    “Sure! Here’s a step-by-step guide for securely deploying a workload in AWS European Sovereign Cloud while meeting compliance requirements:

    1. Ensure your account is enrolled in the EU Sovereign Cloud.
    2. Use the recommended service combinations (EC2, EKS, S3 with specific encryption and logging).
    3. Follow these security best practices… [links to 5–6 documentation pages]
    4. Enable CloudTrail, GuardDuty, and Config rules for auditing.
    5. Review the latest sovereignty and compliance whitepapers here.

    Would you like me to expand on any of these steps or generate a sample CloudFormation template?”

    Again, very useful — but it stops at information. You still have to execute everything manually.

    How an agent could approach the same request

    An agent wouldn’t just hand you a list. It would treat the request as a goal to achieve. It might:

    • Parse the full intent and success criteria
    • Check your current permissions, account state, and environment via live API calls
    • Pull the latest regulatory constraints (data residency, encryption, auditing rules, sovereignty commitments)
    • Break the goal into an executable workflow and either propose the exact sequence or (with approval) execute it step-by-step
    • Automatically apply configurations, security policies, and monitoring
    • Run real-time compliance checks and generate an audit trail + final verification report before declaring the task complete

    The gap is obvious: one gives you reading material. The other gets the job done.

    The Current Content Gap — And Why It Matters

    Yes, there is a real gap.

    Our current content — no matter how well modularized, semantically tagged, or governed — is still written primarily for humans and assistants to read and interpret. The meaning, constraints, decision logic, and verification steps are left implicit inside paragraphs and a human-friendly narrative. Agents don’t guess well. They need explicit, connected, verifiable, and executable knowledge.

    How big is the problem? In regulated, high-stakes environments like cloud sovereignty, compliance, security, and enterprise operations, the gap is significant. Today, every complex workflow still requires human intervention, back-and-forth clarification, and manual execution. That creates friction, delays, and risk.

    Business Impact I Keep Coming Back To

    If we don’t close this gap, we risk:

    • Slower time-to-value for new services and features (especially in regulated markets)
    • Higher support and enablement costs as teams wait for humans to interpret documentation
    • Increased compliance and security risk when agents (or humans) misinterpret implicit rules
    • Missed opportunity for true self-service and autonomous operations at scale

    Quantifying this is something I’m still thinking through, but early signals from my own work give me clues: the AI assistant we built for sales and field teams already delivered a 25% lift in enablement efficiency. Moving from answer-first to action-first knowledge could multiply that impact — potentially cutting workflow completion time dramatically, reducing human handoffs, and improving accuracy in regulated scenarios.

    The Shift I’m Exploring: Action-First Knowledge Management Systems

    In my independent research, I’ve started thinking about what I’m calling Action-First Knowledge Management Systems — content designed from the ground up not just to be read, but to be acted upon by agents.

    The practical heart of this idea is organizing knowledge around intents rather than topics or pages. For every major user or system intent, we would explicitly define:

    • What the goal actually is (and what success looks like)
    • The context and constraints that must be true
    • The executable logic — decision paths, conditional steps, linked tools or APIs
    • How to verify the outcome and know when the intent has been met

    This isn’t about throwing away the structured content, taxonomies, and governance I’ve spent years building. It feels like the natural next layer — making the meaning and actionability as explicit as the content itself.

    What I’m Still Thinking Through

    This is very much an emerging line of thought for me. I don’t have all the answers yet. I’m still reading, sketching models, and asking myself the hard questions:

    • How big could the real business impact be if we closed this gap?
    • What would success metrics look like — workflow completion rate? Reduction in human intervention? Faster compliance sign-off?
    • How do we evolve our existing content models and governance practices without adding burden to writers and reviewers?
    • How do we make this transition feel like an evolution rather than a rebuild?

    I’m excited by the possibility, but I’m also realistic about how big a shift it represents.

    Skills & Perspectives This Represents

    This line of thinking draws on everything I’ve learned across 15+ years in content strategy, information architecture, and AI-integrated workflows. It reflects my belief that the role of a senior content leader isn’t just to execute today’s best practices — it’s to help our organizations prepare for what’s coming next.

    I’d love to hear how others are thinking about this. If the shift from assistants to agents is on your mind too, I’d be grateful for the conversation.
