Skills Advanced Guide
This guide covers advanced patterns and best practices for using Octavus skills in your agents.
When to Use Skills
Skills are ideal for:
- Code execution - Running Python/Bash scripts
- File generation - Creating images, PDFs, reports
- Data processing - Analyzing, transforming, or visualizing data
- Provider-agnostic needs - Features that should work with any LLM
Use external tools instead when:
- Simple API calls - Database queries, external services
- Authentication required - Accessing user-specific resources
- Backend integration - Tight coupling with your infrastructure
Skill Selection Strategy
Defining Available Skills
Define all skills in the `skills:` section, then reference the ones each context needs where they're used:
Interactive agents — reference in agent.skills:
Workers and named threads — reference per-thread in start-thread.skills:
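Both placements can be sketched together in one config. This is a minimal sketch only; the skill slugs are illustrative and the exact shape of each entry may differ from your protocol version:

```yaml
skills:                                    # define every skill once at the protocol level
  - data-analysis
  - visualization

agent:
  skills: [data-analysis]                  # interactive agents

start-thread:
  skills: [data-analysis, visualization]   # workers and named threads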
Match Skills to Use Cases
Different threads can have different skills. Define all skills at the protocol level, then scope them to each thread:
For a data analysis thread, you would specify [data-analysis, visualization] in agent.skills or in a start-thread block's skills field.
Display Mode Strategy
Choose display modes based on user experience:
Guidelines
- hidden: Background work that doesn't need user awareness
- description: User-facing operations (default)
- name: Quick operations where the name alone is sufficient
- stream: Long-running operations where progress matters
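One possible shape for setting these, assuming display modes are configured per skill reference (the `display` field name and its placement are assumptions):

```yaml
agent:
  skills:
    - slug: data-analysis
      display: stream        # long-running: stream progress to the user
    - slug: visualization
      display: description   # user-facing default
    - slug: cleanup
      display: hidden        # background work
```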
System Prompt Integration
Skills are automatically injected into the system prompt. The LLM learns:
- Available skills - List of enabled skills with descriptions
- How to use skills - Instructions for using skill tools
- Tool reference - Available skill tools (`octavus_skill_read`, `octavus_code_run`, etc.)
You don't need to manually document skills in your system prompt. However, you can guide the LLM:
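For example, a nudge in the system prompt might look like this (the `systemPrompt` field name is an assumption; use whatever mechanism your config provides):

```yaml
agent:
  systemPrompt: |
    When the user uploads a dataset, use the data-analysis skill to
    compute summaries before answering, and save any charts to /output/.
```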
Error Handling
Skills handle errors gracefully.
Common error scenarios:
- Invalid skill slug - Skill not found in organization
- Code execution errors - Syntax errors, runtime exceptions
- Missing dependencies - Required packages not installed
- File I/O errors - Permission issues, invalid paths
The LLM receives error messages and can:
- Retry with corrected code
- Explain errors to users
- Suggest alternatives
File Output Patterns
Single File Output
Multiple Files
Structured Output
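The three patterns above can be sketched in one script. This is a hedged example, not platform code: it assumes files written under the sandbox's `/output/` directory are collected as outputs, and the `out_dir` parameter exists only so the sketch can be tested locally:

```python
import json
from pathlib import Path

def write_outputs(records, out_dir="/output"):
    """Write a single report, one file per record, and a structured summary."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)

    # Single file output: one human-readable report.
    report = out / "report.txt"
    report.write_text(f"Processed {len(records)} records\n")

    # Multiple files: one artifact per record.
    for i, rec in enumerate(records):
        (out / f"record_{i}.txt").write_text(str(rec) + "\n")

    # Structured output: a machine-readable summary alongside the files.
    summary = {"count": len(records), "files": [report.name]}
    (out / "summary.json").write_text(json.dumps(summary, indent=2))
    return sorted(p.name for p in out.iterdir())
```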
Performance Considerations
Lazy Initialization
Sandboxes are created only when a skill tool is first called.
This means:
- No cost if skills aren't used
- Fast startup (no sandbox creation delay)
- Each `next-message` execution gets its own sandbox with only the skills it needs
Timeout Limits
Sandboxes default to a 5-minute timeout. Configure sandboxTimeout on the agent config or per thread:
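A sketch of both placements, assuming millisecond values in a YAML config:

```yaml
agent:
  sandboxTimeout: 600000       # 10 minutes, applies to the agent's threads

start-thread:
  sandboxTimeout: 1800000      # 30 minutes; the thread-level value takes priority
```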
Thread-level sandboxTimeout takes priority. Maximum: 1 hour (3,600,000 ms).
Sandbox Lifecycle
Each next-message execution gets its own sandbox:
- Scoped - Only contains the skills available to that thread
- Isolated - Interactive agents and workers don't share sandboxes
- Resilient - If a sandbox expires, it's transparently recreated
- Cleaned up - Sandbox destroyed when the LLM call completes
Combining Skills with Tools
Skills and tools can work together:
Pattern:
- Fetch data via tool (from your backend)
- LLM uses skill to analyze/process the data
- Generate outputs (files, reports)
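As a sketch, a config combining both might look like this (the tool and skill names are illustrative, and the shape of each `tools` entry depends on your setup):

```yaml
agent:
  tools:
    - fetch-sales-data         # 1. pull data from your backend
  skills:
    - data-analysis            # 2. analyze it inside the sandbox
    - visualization            # 3. generate charts to /output/
```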
Secure Skills
When a skill declares secrets and an organization configures them, the skill runs in secure mode with its own isolated sandbox.
Standard vs Secure Skills
| Aspect | Standard Skills | Secure Skills |
|---|---|---|
| Sandbox | Shared with other standard skills | Isolated (one per skill) |
| Available tools | All 6 skill tools | `skill_read`, `skill_list`, `skill_run` only |
| Script input | CLI arguments via `args` | JSON via stdin (use `input` parameter) |
| Environment | No secrets | Secrets as env vars |
| Output | Raw stdout/stderr | Redacted (secret values replaced with [REDACTED]) |
Writing Scripts for Secure Skills
Secure skill scripts receive structured input via stdin (JSON) and access secrets from environment variables:
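A minimal sketch of such a script, assuming a secret named `API_TOKEN` and an input object with a `query` field (both names are illustrative). The explicit stream parameters exist only to make the sketch testable; a real script would just use `sys.stdin`:

```python
import json
import os
import sys

def main(stdin=sys.stdin, stdout=sys.stdout, stderr=sys.stderr):
    # Read the structured input passed via octavus_skill_run's `input` parameter.
    try:
        payload = json.load(stdin)
    except json.JSONDecodeError as exc:
        print(f"Invalid JSON input: {exc}", file=stderr)
        return 1

    # Secrets are injected as environment variables.
    token = os.environ.get("API_TOKEN")
    if not token:
        print("Missing required secret: API_TOKEN", file=stderr)
        return 1

    # Do the work; anything printed to stdout is what the LLM sees (redacted).
    result = {"query": payload.get("query"), "status": "ok"}
    print(json.dumps(result), file=stdout)
    return 0

# Entry point when run as a script in the sandbox: sys.exit(main())
```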
Key patterns:
- Read stdin: `json.load(sys.stdin)` to get the `input` object from the `octavus_skill_run` call
- Access secrets: `os.environ["SECRET_NAME"]` — secrets are injected as env vars
- Print output: Write results to stdout — the LLM sees the (redacted) stdout
- Error handling: Write errors to stderr and exit with a non-zero code
Declaring Secrets in SKILL.md
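A sketch of the SKILL.md frontmatter, assuming secrets are declared as a `secrets` list; the other field names and the skill itself are illustrative:

```markdown
---
name: crm-lookup
description: Look up customer records via the CRM API
secrets:
  - API_TOKEN
---
```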
Testing Secure Skills Locally
You can test scripts locally by piping JSON to stdin:
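For example, the following simulates the platform contract locally: JSON on stdin and the secret as an environment variable. The inline `python3 -c` stands in for your actual script (e.g. `scripts/lookup.py`), and the secret name is illustrative:

```shell
printf '{"query": "hello"}' | API_TOKEN=dummy-token python3 -c '
import json, os, sys
payload = json.load(sys.stdin)
print(payload["query"], "token_present" if os.environ.get("API_TOKEN") else "no_token")
'
```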
Skill Development Tips
Writing SKILL.md
Focus on when and how to use the skill:
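A hedged sketch of a SKILL.md (the overall shape and script path are illustrative):

```markdown
---
name: data-analysis
description: Analyze tabular data and produce summaries and charts
---

# Data Analysis

Use this skill when the user provides a CSV/Excel file or asks for
statistics, trends, or charts.

- Run scripts/analyze.py <file> for summary statistics
- Write all generated charts to /output/
- Do not use this skill for questions that need no computation
```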
Script Organization
Organize scripts logically:
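One reasonable layout (the file names are illustrative):

```
my-skill/
├── SKILL.md            # when and how to use the skill
└── scripts/
    ├── analyze.py      # one task per entry-point script
    ├── report.py
    └── helpers/
        └── io_utils.py # shared code kept out of the entry points
```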
Error Messages
Provide helpful error messages:
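For example, a script can fail with a message the LLM can act on rather than a bare traceback (function and file names are illustrative):

```python
import sys

def load_dataset(path):
    """Open an input file, exiting with an actionable message on failure."""
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        # Name the missing file and suggest a fix instead of just raising.
        sys.exit(f"Error: input file '{path}' not found. "
                 f"Upload the file or pass a different path.")
```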
The LLM sees these errors and can retry or explain to users.
Security Considerations
Sandbox Isolation
- No network access (unless explicitly configured)
- No persistent storage (sandbox destroyed after each `next-message` execution)
- File output only via the `/output/` directory
- Time limits enforced (5-minute default, configurable via `sandboxTimeout`)
Secret Protection
For skills with configured secrets:
- Isolated sandbox — each secure skill gets its own sandbox, preventing cross-skill secret leakage
- No arbitrary code — `octavus_code_run`, `octavus_file_write`, and `octavus_file_read` are blocked for secure skills, so only pre-built scripts can execute
- Output redaction — all stdout and stderr are scanned for secret values before being returned to the LLM
- Encrypted at rest — secrets are encrypted using AES-256-GCM and only decrypted at execution time
Input Validation
Skills should validate inputs:
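A sketch of input validation inside a skill script; the field names and limits are illustrative:

```python
def validate_input(payload):
    """Check types and ranges before doing any work; return a list of errors."""
    errors = []
    # Require a non-empty string query.
    if not isinstance(payload.get("query"), str) or not payload["query"].strip():
        errors.append("'query' must be a non-empty string")
    # Bound numeric parameters to a sane range.
    limit = payload.get("limit", 10)
    if not isinstance(limit, int) or not 1 <= limit <= 100:
        errors.append("'limit' must be an integer between 1 and 100")
    return errors
```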
Resource Limits
Be aware of:
- File size limits - Large files may fail to upload
- Execution time - Sandbox timeout (5-minute default, 1-hour maximum)
- Memory limits - Sandbox environment constraints
Debugging Skills
Check Skill Documentation
The LLM can read skill docs with the `octavus_skill_read` tool.
Test Locally
Test skills locally before uploading them.
Monitor Execution
Check execution logs in the platform debug view:
- Tool calls and arguments
- Code execution results
- File outputs
- Error messages
Common Patterns
Pattern 1: Generate and Return
Pattern 2: Analyze and Report
Pattern 3: Transform and Save
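As a sketch of the transform-and-save pattern (field names are illustrative; `out_dir` defaults to the sandbox's `/output/` directory and is overridable only for local testing):

```python
import csv
import json
from pathlib import Path

def transform_and_save(rows, out_dir="/output"):
    """Normalize raw records, then save both CSV and JSON artifacts."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    # Transform: clean names and coerce amounts to numbers.
    cleaned = [{"name": r["name"].strip().title(), "amount": float(r["amount"])}
               for r in rows]
    # Save: a CSV for humans/spreadsheets and JSON for downstream tools.
    with open(out / "cleaned.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["name", "amount"])
        writer.writeheader()
        writer.writerows(cleaned)
    (out / "cleaned.json").write_text(json.dumps(cleaned))
    return cleaned
```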
Best Practices Summary
- Enable only needed skills — Don't overwhelm the LLM
- Choose appropriate display modes — Match user experience needs
- Write clear skill descriptions — Help the LLM understand when to use them
- Handle errors gracefully — Provide helpful error messages
- Test skills locally — Verify before uploading
- Monitor execution — Check logs for issues
- Combine with tools — Use tools for data, skills for processing
- Consider performance — Be aware of timeouts and limits
- Use secrets for credentials — Declare secrets in frontmatter instead of hardcoding tokens
- Design scripts for stdin input — Secure skills receive JSON via stdin, so plan for both input methods if the skill might be used in either mode
Next Steps
- Skills - Basic skills documentation
- Agent Config - Configuring skills
- Tools - External tools integration