vff — the signal in the noise

Bedrock AgentCore adds stateful MCP for interactive agent workflows

Evandro Franco
Amazon Bedrock AgentCore Runtime now supports stateful Model Context Protocol (MCP) client capabilities, enabling AI agents to pause execution for user input, request LLM-generated content, and stream real-time progress updates. Previously, stateless MCP implementations could only execute one-way tool calls without bidirectional communication. The update introduces three new capabilities: elicitation for mid-execution user requests, sampling for dynamic LLM content generation, and progress notification for long-running tasks. Stateful mode provisions dedicated microVMs per user session with up to 8 hours of persistence, maintaining conversation continuity through session identifiers.
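The three capabilities map onto server-initiated JSON-RPC messages defined in the public MCP specification. A minimal sketch of what each message looks like on the wire (method names follow the MCP spec; the payload values, such as the region prompt and the `deploy-42` token, are illustrative, not AgentCore output):

```python
# Illustrative JSON-RPC payloads for the three stateful MCP capabilities.
# Method names follow the public MCP specification; all field values are
# hypothetical examples.

# Elicitation: the server pauses a tool call and asks the user for input,
# constraining the reply with a JSON schema.
elicitation_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "elicitation/create",
    "params": {
        "message": "Which AWS region should the deployment target?",
        "requestedSchema": {
            "type": "object",
            "properties": {"region": {"type": "string"}},
            "required": ["region"],
        },
    },
}

# Sampling: the server asks the client's LLM to generate content mid-run.
sampling_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {
                "role": "user",
                "content": {"type": "text", "text": "Summarize the deploy log."},
            }
        ],
        "maxTokens": 256,
    },
}

# Progress: the server streams status updates for a long-running task.
# Notifications carry no "id" because no response is expected.
progress_notification = {
    "jsonrpc": "2.0",
    "method": "notifications/progress",
    "params": {"progressToken": "deploy-42", "progress": 3, "total": 10},
}
```

The key structural difference from stateless MCP is the direction of these messages: all three originate from the server and flow back to the client, which is only possible when a session is held open on both ends.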

TL;DR

  • Bedrock AgentCore Runtime adds stateful MCP support, enabling interactive multi-turn agent workflows that can pause for user input or LLM sampling
  • Three new client capabilities: elicitation (request user input), sampling (request LLM content), and progress notification (stream real-time updates)
  • Stateful mode provisions isolated microVMs per session lasting up to 8 hours, replacing the stateless HTTP model that couldn't maintain conversation context
  • Completes bidirectional MCP protocol implementation, allowing servers to initiate requests back to clients rather than only responding to tool calls

Why it matters

Stateful MCP support removes a fundamental constraint in agent design: the inability to maintain conversation threads or request clarification mid-execution. This capability gap has forced developers to work around limitations in long-running workflows, interactive debugging, and real-time feedback loops. The addition brings MCP implementations closer to practical agent requirements where workflows often need human-in-the-loop validation or dynamic content generation during execution.

Business relevance

For operators building production agents, stateful MCP unlocks use cases that require interactive workflows, such as approval gates, clarification requests, or progress reporting during extended operations. This reduces the need for custom workarounds and external orchestration layers, lowering development complexity and time-to-market for agent applications. The 8-hour session persistence and isolated microVMs provide a managed alternative to self-hosted MCP infrastructure.

Key implications

  • Developers can now build agents that pause mid-execution for user input or LLM sampling, enabling interactive workflows previously impossible with stateless implementations
  • Session-based architecture with dedicated microVMs per user creates isolation and state persistence, but introduces operational considerations around resource allocation and session lifecycle management
  • Bidirectional MCP protocol support positions Bedrock as a more complete MCP runtime, potentially increasing adoption among developers already invested in the open standard
  • The 8-hour session limit and 15-minute idle timeout create practical constraints for long-running or intermittent workflows that may require external session management

What to watch

Monitor adoption patterns to see which use cases drive stateful MCP demand and whether the session timeout windows prove sufficient for real-world workflows. Watch for competing implementations from other cloud providers or open-source projects that may offer different session persistence models. Track whether the bidirectional protocol capabilities become table stakes for agent platforms or remain a differentiator.

Related stories

AWS Launches G7e GPU Instances for Cheaper Large Model Inference

AWS has launched G7e instances on Amazon SageMaker AI, powered by NVIDIA RTX PRO 6000 Blackwell GPUs with 96 GB of GDDR7 memory per GPU. The instances deliver up to 2.3x the inference performance of previous-generation G6e instances and support configurations from 1 to 8 GPUs, enabling deployment of large language models up to 300B parameters on the largest 8-GPU node. This represents a significant upgrade in memory bandwidth, networking throughput, and model capacity for generative AI inference workloads.

about 11 hours ago · AWS Machine Learning Blog

Anthropic Launches Claude Design for Non-Designers

Anthropic has launched Claude Design, a new product aimed at helping non-designers like founders and product managers create visuals quickly to communicate their ideas. The tool addresses a gap for early-stage teams and individuals who need to share concepts visually but lack design expertise or resources. Claude Design integrates with Anthropic's Claude AI platform, leveraging its capabilities to streamline the visual creation process. The launch reflects growing demand for AI-powered design tools that lower barriers to entry for non-technical users.

1 day ago · TechCrunch AI

Phononic Eyes $1.5B+ Valuation in AI Data Center Cooling Play

Phononic, a 17-year-old Durham, North Carolina semiconductor company that makes cooling components for AI data center servers, is in talks with potential buyers at a valuation of at least $1.5 billion, with some buyers expressing interest above $2 billion. The company has engaged investment bank Lazard to evaluate its options since early 2026. This valuation would more than double its last private funding round, reflecting broader investor appetite for industrial suppliers tied to AI infrastructure demand. Phononic may also choose to raise additional capital instead of pursuing a sale.

about 12 hours ago · The Information