Workers
Workers are agents designed for task-based execution. Unlike interactive agents that handle multi-turn conversations, workers execute a sequence of steps and return an output value.
When to Use Workers
Workers are ideal for:
- Background processing — Long-running tasks that don't need conversation
- Composable tasks — Reusable units of work called by other agents
- Pipelines — Multi-step processing with structured output
- Parallel execution — Tasks that can run independently
Use interactive agents instead when:
- Conversation is needed — Multi-turn dialogue with users
- Persistence matters — State should survive across interactions
- Session context — User context needs to persist
Worker vs Interactive
| Aspect | Interactive | Worker |
|---|---|---|
| Structure | triggers + handlers + agent | steps + output |
| LLM Config | Global agent: section | Per-thread via start-thread |
| Invocation | Fire a named trigger | Direct execution with input |
| Session | Persists across triggers (24h TTL) | Single execution |
| Result | Streaming chat | Streaming + output value |
Protocol Structure
Workers use a simpler protocol structure than interactive agents:
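As a sketch, the top-level shape looks like this (the key names come from this page; the values and comments are illustrative, not a confirmed schema):

```yaml
# Sketch only; consult the protocol reference for exact syntax
variables:        # variables the steps can read and write
  title: ""
steps: []         # start-thread, add-message, next-message, tool-call, ...
output: title     # optional: variable whose value is returned to the caller
```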
settings.json
Workers are identified by the format field:
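For example (the value shown for format is an assumption; use the identifier your schema defines):

```json
{
  "format": "worker"
}
```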
Key Differences
No Global Agent Config
Interactive agents have a global agent: section that configures a main thread. Workers don't have this — every thread must be explicitly created via start-thread:
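A sketch of a worker creating two threads with different configurations (model ids, filenames, and the thinking value are placeholders):

```yaml
# Illustrative only; exact block syntax may differ
steps:
  - start-thread:
      thread: draft
      model: fast-model-id       # placeholder model id
      system: draft-prompt.md
  - start-thread:
      thread: review
      model: strong-model-id     # placeholder model id
      system: review-prompt.md
      thinking: high             # assumed value for the thinking field
```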
This gives workers flexibility to use different models, tools, skills, and settings at different stages.
Steps Instead of Handlers
Workers use steps: instead of handlers:. Steps execute sequentially, like handler blocks:
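For instance (per-block options mostly omitted; the nesting shown is illustrative):

```yaml
steps:                  # run in order, top to bottom
  - start-thread:
      thread: main
      system: prompt.md
  - add-message:
      thread: main      # message content options omitted
  - next-message:
      thread: main
```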
Output Value
Workers can return an output value to the caller:
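A sketch, assuming a variable named title declared under variables::

```yaml
variables:
  title: ""
# ... steps that write to title ...
output: title    # returned to the caller when the worker finishes
```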
The output field references a variable declared in variables:. If omitted, the worker completes without returning a value.
Available Blocks
Workers support the same blocks as handlers:
| Block | Purpose |
|---|---|
| start-thread | Create a named thread with LLM configuration |
| add-message | Add a message to a thread |
| next-message | Generate LLM response |
| tool-call | Call a tool deterministically |
| set-resource | Update a resource value |
| serialize-thread | Convert thread to text |
| generate-image | Generate an image from a prompt variable |
start-thread (Required for LLM)
Every thread must be initialized with start-thread before using next-message:
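A sketch of a thread initialized before use (field names match the table below; the values are placeholders):

```yaml
steps:
  - start-thread:
      thread: main
      model: some-model-id        # placeholder model id
      system: system-prompt.md    # placeholder filename
  - next-message:
      thread: main                # safe: the thread now exists
```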
All LLM configuration goes here:
| Field | Description |
|---|---|
| thread | Thread name (defaults to block name) |
| model | LLM model to use |
| system | System prompt filename (required) |
| input | Variables for the system prompt |
| tools | Tools available in this thread |
| skills | Octavus skills available in this thread |
| imageModel | Image generation model |
| webSearch | Enable the built-in web search tool |
| thinking | Extended reasoning level |
| temperature | Model temperature |
| maxSteps | Maximum tool call cycles (values > 1 enable agentic behavior) |
Simple Example
A worker that generates a title from a summary:
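A sketch of such a protocol (the model id, filenames, and the way next-message writes its response into a variable are assumptions, marked in comments):

```yaml
# Illustrative only; check the block reference for exact field names
variables:
  summary: ""   # assumed to be supplied as the worker's input
  title: ""

steps:
  - start-thread:
      thread: main
      model: some-model-id        # placeholder model id
      system: title-prompt.md     # placeholder filename
      input:
        summary: summary
  - next-message:
      thread: main
      output: title               # assumed: binds the response to title

output: title
```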
Advanced Example
A worker with multiple threads, tools, and agentic behavior:
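A sketch along those lines, with two threads, a deterministic tool-call step, and an agentic thread using maxSteps > 1 (every name and value is a placeholder):

```yaml
# Illustrative only; exact block syntax may differ
variables:
  notes: ""
  report: ""

steps:
  - tool-call:
      tool: fetch-data            # placeholder tool, called deterministically
      output: notes               # assumed binding syntax
  - start-thread:
      thread: research
      model: fast-model-id        # placeholder model id
      system: research-prompt.md
      tools: [search]             # placeholder tool
      maxSteps: 5                 # > 1 enables agentic tool-call cycles
  - next-message:
      thread: research
      output: notes               # assumed binding syntax
  - start-thread:
      thread: write
      model: strong-model-id      # placeholder model id
      system: report-prompt.md
      input:
        notes: notes
  - next-message:
      thread: write
      output: report

output: report
```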
Skills, Image Generation, and Web Search
Workers can use Octavus skills, image generation, and web search, configured per-thread via start-thread:
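For example (the skill name, model id, and filenames are placeholders):

```yaml
- start-thread:
    thread: main
    system: prompt.md             # placeholder filename
    skills: [data-analysis]       # hypothetical skill name
    imageModel: some-image-model  # placeholder model id
    webSearch: true
```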
Workers define their own skills independently — they don't inherit skills from a parent interactive agent. Each thread gets its own sandbox scoped to only its listed skills.
See Skills for full documentation.
Tool Handling
Workers support the same tool handling as interactive agents:
- Server tools — Handled by tool handlers you provide
- Client tools — Pause execution, return tool request to caller
See Server SDK Workers for tool handling details.
Stream Events
Workers emit the same events as interactive agents, plus worker-specific events:
| Event | Description |
|---|---|
| worker-start | Worker execution begins |
| worker-result | Worker completes (includes output) |
All standard events (text-delta, tool calls, etc.) are also emitted.
Calling Workers from Interactive Agents
Interactive agents can call workers in two ways:
- Deterministically — Using the run-worker block
- Agentically — The LLM calls the worker as a tool
Worker Declaration
First, declare workers in your interactive agent's protocol:
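The declaration might look roughly like this (the workers: key and the per-worker fields shown are assumptions inferred from this page, including the display field covered in Display Modes):

```yaml
# Assumed declaration shape; consult the protocol reference
workers:
  - name: generate-title          # hypothetical worker
    description: Generate a short title from a summary
    display: name                 # one of the display modes listed on this page
```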
run-worker Block
Call a worker deterministically from a handler:
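A sketch of the block (the input mapping and output binding syntax are assumptions):

```yaml
- run-worker:
    worker: generate-title        # hypothetical worker name
    input:
      summary: summaryVariable    # hypothetical variable
    output: title                 # assumed: receives the worker's output value
```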
LLM Tool Invocation
Make workers available to the LLM:
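One plausible wiring, assuming a declared worker can be listed like a tool in the conversation thread's configuration (this shape is an assumption, not confirmed by this page):

```yaml
- start-thread:
    thread: main
    system: prompt.md
    tools: [generate-title]       # assumed: exposes the worker as a callable tool
```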
The LLM can then call workers as tools during conversation.
Display Modes
Control how worker execution appears to users:
| Mode | Behavior |
|---|---|
| hidden | Worker runs silently |
| name | Shows worker name |
| description | Shows description text |
| stream | Streams all worker events to user |
Tool Mapping
Map parent tools to worker tools when the worker needs access to your tool handlers:
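A sketch, assuming the mapping lives on the run-worker block and pairs the worker's tool name with your handler name (the tools key here is an assumption; search and web-search are the names used in the sentence that follows):

```yaml
- run-worker:
    worker: research              # hypothetical worker
    tools:
      search: web-search          # worker's search tool routed to your web-search handler
```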
When the worker calls its search tool, your web-search handler executes.
Next Steps
- Server SDK Workers — Executing workers from code
- Handlers — Block reference for steps
- Agent Config — Model and settings