
Skills Advanced Guide

This guide covers advanced patterns and best practices for using Octavus skills in your agents.

When to Use Skills

Skills are ideal for:

  • Code execution - Running Python/Bash scripts
  • File generation - Creating images, PDFs, reports
  • Data processing - Analyzing, transforming, or visualizing data
  • Provider-agnostic needs - Features that should work with any LLM

Use external tools instead when:

  • Simple API calls - Database queries, external services
  • Authentication required - Accessing user-specific resources
  • Backend integration - Tight coupling with your infrastructure

Skill Selection Strategy

Defining Available Skills

Define all skills in the skills: section, then reference them wherever they're used:

Interactive agents — reference in agent.skills:

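A minimal sketch; the list-of-slugs shape of the skills: section is an assumption, not the canonical schema:

```yaml
skills:
  - slug: data-analysis
  - slug: visualization

agent:
  skills: [data-analysis, visualization]
```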

Workers and named threads — reference per-thread in start-thread.skills:

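A sketch with a hypothetical worker thread:

```yaml
skills:
  - slug: report-generation

start-thread:
  name: nightly-report
  skills: [report-generation]
```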

Match Skills to Use Cases

Different threads can have different skills. Define all skills at the protocol level, then scope them to each thread:

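A sketch; the exact nesting of thread definitions is an assumption:

```yaml
skills:
  - slug: data-analysis
  - slug: visualization
  - slug: pdf-export

threads:
  - start-thread:
      name: analysis
      skills: [data-analysis, visualization]
  - start-thread:
      name: reporting
      skills: [pdf-export]
```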

For a data analysis thread, you would specify [data-analysis, visualization] in agent.skills or in a start-thread block's skills field.

Display Mode Strategy

Choose display modes based on user experience:

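For instance (displayMode as a per-skill field is an assumption; skill slugs are hypothetical):

```yaml
skills:
  - slug: cleanup-temp-files
    displayMode: hidden       # background housekeeping
  - slug: pdf-report
    displayMode: description  # user-facing (the default)
  - slug: unit-convert
    displayMode: name         # quick, self-explanatory
  - slug: dataset-export
    displayMode: stream       # long-running; show progress
```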

Guidelines

  • hidden: Background work that doesn't need user awareness
  • description: User-facing operations (default)
  • name: Quick operations where name is sufficient
  • stream: Long-running operations where progress matters

System Prompt Integration

Skills are automatically injected into the system prompt. The LLM learns:

  1. Available skills - List of enabled skills with descriptions
  2. How to use skills - Instructions for using skill tools
  3. Tool reference - Available skill tools (octavus_skill_read, octavus_code_run, etc.)

You don't need to manually document skills in your system prompt. However, you can guide the LLM:

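For example, a system prompt addition (skill names hypothetical):

```markdown
## Working with data

When the user uploads a CSV file, use the data-analysis skill to compute
summary statistics before answering. Prefer the visualization skill for
charts, and save every generated file under /output/.
```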

Error Handling

Skills handle errors gracefully.


Common error scenarios:

  1. Invalid skill slug - Skill not found in organization
  2. Code execution errors - Syntax errors, runtime exceptions
  3. Missing dependencies - Required packages not installed
  4. File I/O errors - Permission issues, invalid paths

The LLM receives error messages and can:

  • Retry with corrected code
  • Explain errors to users
  • Suggest alternatives

File Output Patterns

Single File Output

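A minimal sketch of a script writing one file to the /output/ directory mentioned in this guide (the helper name and report contents are illustrative):

```python
from pathlib import Path

def write_report(text: str, out_dir: str = "/output") -> Path:
    """Write a single report file into the sandbox's output directory."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)  # harmless if it already exists
    path = out / "summary.txt"
    path.write_text(text)
    print(f"Wrote {path}")  # stdout is visible to the LLM
    return path
```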

Multiple Files

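The same idea extended to several files, one per logical unit (the per-month split is illustrative):

```python
from pathlib import Path

def write_monthly_reports(csv_by_month: dict, out_dir: str = "/output") -> list:
    """Write one CSV per month into the output directory."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    paths = []
    for month, csv_text in sorted(csv_by_month.items()):
        path = out / f"report-{month}.csv"
        path.write_text(csv_text)
        paths.append(path)
    return paths
```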

Structured Output

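For machine-readable results, JSON keeps the output easy to consume downstream (file name is illustrative):

```python
import json
from pathlib import Path

def write_results(results: dict, out_dir: str = "/output") -> Path:
    """Serialize structured results as JSON for downstream consumers."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    path = out / "results.json"
    path.write_text(json.dumps(results, indent=2))
    return path
```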

Performance Considerations

Lazy Initialization

Sandboxes are created only when a skill tool is first called.


This means:

  • No cost if skills aren't used
  • Fast startup (no sandbox creation delay)
  • Each next-message execution gets its own sandbox with only the skills it needs

Timeout Limits

Sandboxes default to a 5-minute timeout. Configure sandboxTimeout on the agent config or per thread:

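A sketch of both levels (thread name is hypothetical):

```yaml
# Agent-level default (milliseconds; 600000 = 10 minutes):
agent:
  sandboxTimeout: 600000

# Per-thread override; takes priority over the agent value:
start-thread:
  name: long-job
  sandboxTimeout: 1800000   # 30 minutes; the maximum is 3600000 (1 hour)
```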

Thread-level sandboxTimeout takes priority. Maximum: 1 hour (3,600,000 ms).

Sandbox Lifecycle

Each next-message execution gets its own sandbox:

  • Scoped - Only contains the skills available to that thread
  • Isolated - Interactive agents and workers don't share sandboxes
  • Resilient - If a sandbox expires, it's transparently recreated
  • Cleaned up - Sandbox destroyed when the LLM call completes

Combining Skills with Tools

Skills and tools can work together:

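A sketch of a combined configuration; the tool and skill names, and the shape of the tools: section, are hypothetical:

```yaml
tools:
  - name: fetch_sales
    description: Fetch raw sales rows from your backend

skills:
  - slug: data-analysis

agent:
  tools: [fetch_sales]
  skills: [data-analysis]
```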

Pattern:

  1. Fetch data via tool (from your backend)
  2. LLM uses skill to analyze/process the data
  3. Generate outputs (files, reports)

Secure Skills

When a skill declares secrets and an organization configures them, the skill runs in secure mode with its own isolated sandbox.

Standard vs Secure Skills

| Aspect | Standard Skills | Secure Skills |
| --- | --- | --- |
| Sandbox | Shared with other standard skills | Isolated (one per skill) |
| Available tools | All 6 skill tools | skill_read, skill_list, skill_run only |
| Script input | CLI arguments via args | JSON via stdin (use the input parameter) |
| Environment | No secrets | Secrets as env vars |
| Output | Raw stdout/stderr | Redacted (secret values replaced with [REDACTED]) |

Writing Scripts for Secure Skills

Secure skill scripts receive structured input via stdin (JSON) and access secrets from environment variables:

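A minimal sketch of such a script; the secret name API_TOKEN and the payload fields are hypothetical:

```python
import json
import os
import sys

def run(payload: dict, env) -> dict:
    """Core logic, kept separate from I/O so it is easy to test."""
    token = env["API_TOKEN"]   # hypothetical secret, injected as an env var
    month = payload["month"]   # a field from octavus_skill_run's input object
    # ... call your API with token here ...
    return {"month": month, "status": "ok"}

if __name__ == "__main__":
    try:
        payload = json.load(sys.stdin)               # JSON input arrives on stdin
        print(json.dumps(run(payload, os.environ)))  # LLM sees the (redacted) stdout
    except Exception as exc:
        print(f"error: {exc}", file=sys.stderr)      # report errors on stderr
        sys.exit(1)                                  # non-zero exit signals failure
```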

Key patterns:

  • Read stdin: json.load(sys.stdin) to get the input object from the octavus_skill_run call
  • Access secrets: os.environ["SECRET_NAME"] — secrets are injected as env vars
  • Print output: Write results to stdout — the LLM sees the (redacted) stdout
  • Error handling: Write errors to stderr and exit with non-zero code

Declaring Secrets in SKILL.md

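A sketch of the frontmatter; the exact shape of the secrets declaration is an assumption:

```yaml
---
name: sales-report
description: Generate monthly sales reports from the CRM API.
secrets:
  - name: API_TOKEN
    description: CRM API token with read-only access
---
```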

Testing Secure Skills Locally

You can test scripts locally by piping JSON to stdin:

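For example (a stand-in script is created inline so the example runs anywhere; a real skill would ship its own scripts/ directory):

```shell
# Stand-in for a bundled skill script:
cat > /tmp/report.py <<'EOF'
import json, sys
payload = json.load(sys.stdin)
print(f"month={payload['month']}")
EOF

# Pipe the same JSON the platform would deliver on stdin:
echo '{"month": "2024-01"}' | python3 /tmp/report.py
```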

Skill Development Tips

Writing SKILL.md

Focus on when and how to use the skill:

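A sketch of a SKILL.md; the frontmatter fields and script path are illustrative:

```markdown
---
name: data-analysis
description: Analyze CSV or Excel files and produce summary statistics.
---

# Data Analysis

Use this skill when the user uploads tabular data and asks for summaries,
trends, or comparisons. Do not use it for free-form text documents.

## Usage

Run `scripts/analyze.py <input-file>` and write generated files to /output/.
```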

Script Organization

Organize scripts logically:

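A typical layout might look like this (file names illustrative):

```text
my-skill/
├── SKILL.md              # when and how to use the skill
└── scripts/
    ├── analyze.py        # one entry point per task
    ├── chart.py
    └── lib/
        └── io_utils.py   # shared helpers
```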

Error Messages

Provide helpful error messages:

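A sketch of the idea: say what went wrong and what the caller can do about it (function and column names are illustrative):

```python
def load_column(rows: list, column: str) -> list:
    """Extract one column, failing with messages that say what to fix."""
    if not rows:
        raise ValueError("Input file is empty; check that the upload succeeded.")
    if column not in rows[0]:
        available = ", ".join(rows[0])
        raise ValueError(f"Column '{column}' not found. Available columns: {available}.")
    return [row[column] for row in rows]
```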

The LLM sees these errors and can retry or explain to users.

Security Considerations

Sandbox Isolation

  • No network access (unless explicitly configured)
  • No persistent storage (sandbox destroyed after each next-message execution)
  • File output only via /output/ directory
  • Time limits enforced (5-minute default, configurable via sandboxTimeout)

Secret Protection

For skills with configured secrets:

  • Isolated sandbox — each secure skill gets its own sandbox, preventing cross-skill secret leakage
  • No arbitrary code — octavus_code_run, octavus_file_write, and octavus_file_read are blocked for secure skills, so only pre-built scripts can execute
  • Output redaction — all stdout and stderr are scanned for secret values before being returned to the LLM
  • Encrypted at rest — secrets are encrypted using AES-256-GCM and only decrypted at execution time

Input Validation

Skills should validate inputs:

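A sketch of input validation; the payload fields and limits are illustrative:

```python
def validate_input(payload: dict) -> dict:
    """Check the incoming payload and collect all problems at once."""
    errors = []
    month = payload.get("month")
    if not isinstance(month, str) or len(month) != 7 or month[4] != "-":
        errors.append("month must be a 'YYYY-MM' string")
    limit = payload.get("limit", 100)
    if not isinstance(limit, int) or not 1 <= limit <= 1000:
        errors.append("limit must be an integer between 1 and 1000")
    if errors:
        raise ValueError("; ".join(errors))
    return {"month": month, "limit": limit}
```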

Resource Limits

Be aware of:

  • File size limits - Large files may fail to upload
  • Execution time - Sandbox timeout (5-minute default, 1-hour maximum)
  • Memory limits - Sandbox environment constraints

Debugging Skills

Check Skill Documentation

The LLM can read a skill's documentation with the octavus_skill_read tool before writing code against it.

Test Locally

Test skills before uploading:

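For example (a stand-in script is created inline so the example runs anywhere; substitute your real skill scripts):

```shell
# Stand-in for a bundled skill script:
mkdir -p scripts
printf 'import sys\nprint(f"analyzing {sys.argv[1]}")\n' > scripts/analyze.py

python3 -m py_compile scripts/analyze.py   # catch syntax errors before upload
python3 scripts/analyze.py sample.csv      # exercise the entry point directly
```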

Monitor Execution

Check execution logs in the platform debug view:

  • Tool calls and arguments
  • Code execution results
  • File outputs
  • Error messages

Common Patterns

Pattern 1: Generate and Return

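One way this pattern might be wired up; the instructions field and skill slug are hypothetical:

```yaml
agent:
  skills: [pdf-report]
  instructions: |
    When the user requests a report, generate a PDF with the pdf-report
    skill, save it to /output/, and summarize the result in your reply.
```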

Pattern 2: Analyze and Report

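A sketch combining a backend tool with an analysis skill; all names and the instructions field are hypothetical:

```yaml
agent:
  tools: [fetch_metrics]
  skills: [data-analysis]
  instructions: |
    Fetch raw metrics with the fetch_metrics tool, then use the
    data-analysis skill to compute trends and explain them to the user.
```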

Pattern 3: Transform and Save

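A worker-thread variant of the pattern; the thread name, skill slug, and instructions field are hypothetical:

```yaml
start-thread:
  name: converter
  skills: [file-transform]
  instructions: |
    Convert each uploaded file to the requested format with the
    file-transform skill and save the result to /output/.
```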

Best Practices Summary

  1. Enable only needed skills — Don't overwhelm the LLM
  2. Choose appropriate display modes — Match user experience needs
  3. Write clear skill descriptions — Help LLM understand when to use
  4. Handle errors gracefully — Provide helpful error messages
  5. Test skills locally — Verify before uploading
  6. Monitor execution — Check logs for issues
  7. Combine with tools — Use tools for data, skills for processing
  8. Consider performance — Be aware of timeouts and limits
  9. Use secrets for credentials — Declare secrets in frontmatter instead of hardcoding tokens
  10. Design scripts for stdin input — Secure skills receive JSON via stdin, so plan for both input methods if the skill might be used in either mode

Next Steps