Raindrop launches Workshop, open source debugger for AI agents

Raindrop AI launched Workshop, an open source, MIT-licensed debugging and evaluation tool for AI agents that runs locally on developers' machines. The tool captures real-time traces of agent behavior, tool calls, and decisions, stores them in a lightweight SQL database file, and displays them on a local dashboard, eliminating the need to send telemetry to external servers. A standout feature is the self-healing eval loop, which allows coding agents like Claude Code to autonomously read traces, write evaluations, identify logic errors, and fix broken code until assertions pass. The tool supports TypeScript, Python, Rust, and Go, and integrates with major frameworks including the Vercel AI SDK, OpenAI, Anthropic, LangChain, LlamaIndex, and CrewAI.
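To make the local-first pattern concrete, here is a minimal sketch of what "append agent events to a single local database file" can look like in Python. The schema, file name, and `record_event` helper are illustrative assumptions for this briefing, not Workshop's actual API.

```python
# Illustrative sketch of local-first trace capture: agent events (tool calls,
# decisions) are appended to a single local .db file and never leave the machine.
# Table layout and function names are hypothetical, not Workshop's real interface.
import json
import sqlite3
import time

DB_PATH = "agent_traces.db"  # hypothetical local trace file

def _connect() -> sqlite3.Connection:
    conn = sqlite3.connect(DB_PATH)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS traces (
               id INTEGER PRIMARY KEY AUTOINCREMENT,
               ts REAL NOT NULL,
               kind TEXT NOT NULL,          -- e.g. 'tool_call', 'decision'
               name TEXT NOT NULL,          -- tool or step name
               payload TEXT NOT NULL        -- JSON-encoded arguments/output
           )"""
    )
    return conn

def record_event(kind: str, name: str, payload: dict) -> None:
    """Append one agent event to the local trace database."""
    with _connect() as conn:
        conn.execute(
            "INSERT INTO traces (ts, kind, name, payload) VALUES (?, ?, ?, ?)",
            (time.time(), kind, name, json.dumps(payload)),
        )

# Example: log a tool call made by an agent; a local dashboard can read this file.
record_event("tool_call", "web_search", {"query": "open source agent debuggers"})
```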
TL;DR
- Workshop provides local debugging and evaluation specifically designed for AI agents, storing all traces in a single .db file accessible via localhost dashboard
- Self-healing eval loop enables coding agents to autonomously read traces, write evals, identify errors, and fix code without manual intervention
- Supports multiple languages and integrates with popular SDKs, frameworks, and coding agents including Claude Code, Cursor, and Devin
- MIT-licensed open source tool addresses developer concerns about data privacy and latency by keeping all telemetry local rather than sending to external servers
Why it matters
As AI agents become more complex and autonomous, developers lack purpose-built tools to understand what agents are actually doing and why they fail. Workshop fills this gap by providing real-time visibility into agent behavior locally, eliminating privacy concerns and latency issues associated with cloud-based observability. The self-healing eval loop represents a meaningful step toward autonomous debugging, where agents can identify and fix their own errors without human intervention.
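The eval loop itself is a simple control structure: read traces, run assertion-style evals, hand failures to a coding agent to patch, and repeat until everything passes. The sketch below captures that shape; `load_traces`, `run_evals`, and `ask_agent_to_fix` are hypothetical stand-ins, not Workshop's or Claude Code's actual interfaces.

```python
# Conceptual sketch of a self-healing eval loop, in the spirit described above.
# All callables are placeholders supplied by the caller; nothing here is
# Workshop's real API.
from typing import Callable

def self_healing_loop(
    load_traces: Callable[[], list[dict]],
    run_evals: Callable[[list[dict]], list[str]],   # returns failure messages
    ask_agent_to_fix: Callable[[list[str]], None],  # agent patches code from failures
    max_iterations: int = 5,
) -> bool:
    """Repeat read-traces -> eval -> fix until evals pass or attempts run out."""
    for attempt in range(1, max_iterations + 1):
        traces = load_traces()
        failures = run_evals(traces)
        if not failures:
            print(f"All evals passed after {attempt - 1} fix attempt(s).")
            return True
        print(f"Attempt {attempt}: {len(failures)} failing eval(s); requesting a fix.")
        ask_agent_to_fix(failures)
    return False
```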
Business relevance
For teams building agentic systems, Workshop reduces debugging time and improves iteration speed by providing immediate visibility into agent failures and decision-making. The local-first approach addresses enterprise data sovereignty concerns, making it viable for organizations that cannot send execution traces to external services. As a free, open source tool with broad framework compatibility, it lowers the barrier to entry for agent development and could accelerate adoption of agentic AI across teams.
Key implications
- Local-first observability for AI agents may become table stakes as enterprises prioritize data sovereignty and privacy over cloud-based monitoring solutions
- Self-healing eval loops could shift debugging from manual, reactive processes to autonomous, continuous improvement cycles, fundamentally changing how teams iterate on agent behavior
- Open source tooling in the agent debugging space may commoditize observability, pushing commercial vendors toward higher-level services like prompt optimization or agent orchestration
What to watch
Monitor whether Workshop gains adoption among developers building with Claude Code and other coding agents, which would validate demand for local agent debugging. Watch for competing tools or features from established observability vendors like Datadog or New Relic, which may add agent-specific debugging to their platforms. Track whether the self-healing eval loop becomes a standard pattern in agent development or remains a niche feature.