Skills Advanced Guide

This guide covers advanced patterns and best practices for using Octavus skills in your agents.

When to Use Skills

Skills are ideal for:

  • Code execution - Running Python/Bash scripts
  • File generation - Creating images, PDFs, reports
  • Data processing - Analyzing, transforming, or visualizing data
  • Provider-agnostic needs - Features that should work with any LLM

Use external tools instead when:

  • Simple API calls - Database queries, external services
  • Authentication required - Accessing user-specific resources
  • Backend integration - Tight coupling with your infrastructure

Skill Selection Strategy

Defining Available Skills

Define all skills available to this agent in the skills: section. Then specify which skills are available for the chat thread in agent.skills:

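For illustration, a config might look something like the sketch below; the exact key names (`slug`, the nesting under `agent`) are assumptions rather than the authoritative schema:

```yaml
# Illustrative sketch only - the list structure and slug key are assumptions.
skills:                       # every skill this agent can ever use
  - slug: data-analysis
  - slug: visualization
  - slug: report-generator

agent:
  skills:                     # subset enabled for this chat thread
    - data-analysis
    - visualization
```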

Match Skills to Use Cases

Keep the full set of skills defined in the skills: section, and enable only the ones relevant to each use case in agent.skills:

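A sketch of the same idea, keeping the catalog identical and varying only agent.skills per use case (key names remain illustrative):

```yaml
# Same catalog as above; only the agent.skills selection changes per use case.
skills:
  - slug: data-analysis
  - slug: visualization
  - slug: report-generator

agent:
  # Data-analysis thread: enable only what this use case needs.
  skills: [data-analysis, visualization]
```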

For a data analysis thread, you would specify [data-analysis, visualization] in agent.skills, but still define all available skills in the skills: section above.

Display Mode Strategy

Choose display modes based on user experience:

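The sketch below assumes a per-skill display setting; the display key and the skill slugs other than data-analysis are made up for illustration:

```yaml
# Illustrative only - the display key and most slugs here are assumptions.
skills:
  - slug: report-generator
    display: stream          # long-running, progress matters
  - slug: data-analysis
    display: description     # user-facing, the default
  - slug: qr-generator
    display: name            # quick, the name alone is enough
  - slug: cleanup
    display: hidden          # background work the user never sees
```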

Guidelines

  • hidden: Background work that doesn't need user awareness
  • description: User-facing operations (default)
  • name: Quick operations where name is sufficient
  • stream: Long-running operations where progress matters

System Prompt Integration

Skills are automatically injected into the system prompt. The LLM learns:

  1. Available skills - List of enabled skills with descriptions
  2. How to use skills - Instructions for using skill tools
  3. Tool reference - Available skill tools (octavus_skill_read, octavus_code_run, etc.)

You don't need to manually document skills in your system prompt. However, you can guide the LLM:

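For example, a short hint in your own system prompt (wording purely illustrative) can steer when the LLM reaches for a skill:

```markdown
When the user asks for a chart, report, or other file, use the available
skills to produce it rather than describing the result in text. If you are
unsure how a skill works, read its documentation first.
```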

Error Handling

Skills handle errors gracefully: when a skill script fails, the error output is passed back to the LLM as part of the tool result instead of ending the run.

Common error scenarios:

  1. Invalid skill slug - Skill not found in organization
  2. Code execution errors - Syntax errors, runtime exceptions
  3. Missing dependencies - Required packages not installed
  4. File I/O errors - Permission issues, invalid paths

The LLM receives error messages and can:

  • Retry with corrected code
  • Explain errors to users
  • Suggest alternatives

File Output Patterns

Single File Output

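A minimal sketch of a skill script that produces one file, assuming files written under /output/ are the ones collected as outputs (the report content is made up):

```python
# Write a single result file to /output/ so it is picked up as a file output.
from pathlib import Path

report = "Total rows: 1200\nMissing values: 3\n"
out = Path("/output/summary.txt")
out.write_text(report)
print(f"Wrote {out}")
```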

Multiple Files

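The same idea extends to several files; everything written under /output/ is collected (file names and data are illustrative):

```python
# Write several related outputs; each file under /output/ is collected.
import json
from pathlib import Path

out_dir = Path("/output")
results = {"region_a": 42, "region_b": 17}

(out_dir / "results.json").write_text(json.dumps(results, indent=2))
(out_dir / "results.csv").write_text(
    "region,value\n" + "\n".join(f"{k},{v}" for k, v in results.items())
)
print("Wrote results.json and results.csv")
```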

Structured Output

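If the LLM needs exact numbers back, printing structured JSON alongside the file output helps; this sketch assumes the script's stdout is returned to the LLM as part of the code execution result:

```python
# Save the full result as a file and print a machine-readable summary,
# assuming stdout is returned to the LLM with the execution result.
import json
from pathlib import Path

summary = {"rows": 1200, "mean": 3.7, "outliers": 4}
Path("/output/summary.json").write_text(json.dumps(summary, indent=2))
print(json.dumps(summary))
```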

Performance Considerations

Lazy Initialization

Sandboxes are created only when a skill tool is first called:

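In other words, simply enabling skills adds nothing at startup; the sketch below (keys illustrative) only takes effect once the LLM issues its first skill tool call:

```yaml
# Enabling skills has no startup cost - the sandbox is created lazily.
agent:
  skills: [data-analysis, visualization]   # no sandbox exists yet
# The sandbox starts on the first octavus_code_run / octavus_skill_read call
# and is reused for every skill call in the same trigger.
```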

This means:

  • No cost if skills aren't used
  • Fast startup (no sandbox creation delay)
  • Sandbox reused for all skill calls in a trigger

Timeout Limits

Sandboxes have a 5-minute default timeout:

  • Short operations: QR codes, simple calculations
  • Medium operations: Data analysis, report generation
  • Long operations: May need to split into multiple steps

Sandbox Lifecycle

Each trigger execution gets a fresh sandbox:

  • Clean state - No leftover files from previous executions
  • Isolated - No interference between sessions
  • Destroyed - Sandbox cleaned up after trigger completes

Combining Skills with Tools

Skills and tools can work together:

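A sketch of a combined setup; the shape of the tools: block is an assumption, and fetch_sales_data is a hypothetical backend tool:

```yaml
# Illustrative only - the tools: shape and fetch_sales_data are assumptions.
tools:
  - name: fetch_sales_data          # returns recent sales from your backend
    description: Fetch the last 30 days of sales as JSON

skills:
  - slug: data-analysis
  - slug: visualization

agent:
  skills: [data-analysis, visualization]
```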

Pattern:

  1. Fetch data via tool (from your backend)
  2. LLM uses skill to analyze/process the data
  3. Generate outputs (files, reports)

Skill Development Tips

Writing SKILL.md

Focus on when and how to use the skill:

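A hypothetical SKILL.md for a chart-generating skill might read as follows; the skill name, script paths, and flags are made up, the point is to lead with when and how:

```markdown
# chart-generator

Generates PNG charts from tabular data.

## When to use
Use this skill when the user asks for a chart, graph, or visual summary of
numeric data. Do not use it for purely textual summaries.

## How to use
1. Save the input data to /tmp/data.csv.
2. Run: python scripts/make_chart.py --input /tmp/data.csv --kind bar
3. The chart is written to /output/chart.png.
```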

Script Organization

Organize scripts logically:

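One reasonable layout (names are only a suggestion):

```text
chart-generator/
├── SKILL.md             # when and how to use the skill
├── scripts/
│   ├── make_chart.py    # main entry point
│   └── load_data.py     # shared helpers
└── examples/
    └── sample.csv       # small fixture for local testing
```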

Error Messages

Provide helpful error messages:

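For example, failing loudly with a message that tells the LLM what to do next (the paths here are hypothetical):

```python
# Fail with a message the LLM can act on, not a bare stack trace.
import sys
from pathlib import Path

input_path = Path("/tmp/data.csv")   # hypothetical input location
if not input_path.exists():
    sys.exit(
        f"Input file {input_path} not found. "
        "Save the CSV to /tmp/data.csv before running this script."
    )
```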

The LLM sees these errors and can retry or explain to users.

Security Considerations

Sandbox Isolation

  • No network access (unless explicitly configured)
  • No persistent storage (sandbox destroyed after execution)
  • File output only via /output/ directory
  • Time limits enforced (5-minute default)

Input Validation

Skills should validate inputs:

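For instance, a script can reject bad arguments up front (the flags shown are hypothetical):

```python
# Validate arguments before doing any work; reject anything out of range.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--kind", choices=["bar", "line", "scatter"], required=True)
parser.add_argument("--limit", type=int, default=1000)
args = parser.parse_args()

if not 1 <= args.limit <= 100_000:
    raise SystemExit("--limit must be between 1 and 100000")
```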

Resource Limits

Be aware of:

  • File size limits - Large files may fail to upload
  • Execution time - 5-minute sandbox timeout
  • Memory limits - Sandbox environment constraints

Debugging Skills

Check Skill Documentation

The LLM can read skill docs:

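The exact arguments of octavus_skill_read are not documented here; as a rough illustration only, the tool call the LLM issues might carry a payload shaped like this:

```python
# Hypothetical payload shape - field names are assumptions, not the real schema.
tool_call = {
    "tool": "octavus_skill_read",
    "arguments": {"skill": "chart-generator", "file": "SKILL.md"},
}
print(tool_call)
```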

Test Locally

Test skills before uploading:

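For example, run the skill's scripts directly on a sample input (paths and flags are hypothetical, adapt them to your skill):

```bash
# Run the entry-point script locally against a small fixture.
cd chart-generator
python scripts/make_chart.py --input examples/sample.csv --kind bar
ls /output/    # or wherever your script writes its files
```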

Monitor Execution

Check execution logs in the platform debug view:

  • Tool calls and arguments
  • Code execution results
  • File outputs
  • Error messages

Common Patterns

Pattern 1: Generate and Return

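For example, a single quick skill whose file output goes straight back to the user (config keys and the qr-generator slug are illustrative):

```yaml
# Illustrative: the user asks for a QR code, the skill generates and returns it.
skills:
  - slug: qr-generator
    display: name            # quick operation, the name alone is enough
agent:
  skills: [qr-generator]
```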

Pattern 2: Analyze and Report

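For example, an analysis step followed by a long-running report step (slugs and keys illustrative):

```yaml
# Illustrative: analyze the data, then stream progress while the report builds.
skills:
  - slug: data-analysis
    display: description     # user-facing analysis step
  - slug: report-generator
    display: stream          # long-running, progress matters
agent:
  skills: [data-analysis, report-generator]
```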

Pattern 3: Transform and Save

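For example, a background transformation whose only visible result is the saved file (again illustrative):

```yaml
# Illustrative: transform the input in the sandbox and save the result to /output/.
skills:
  - slug: data-analysis
    display: hidden          # background work, the user only sees the final file
agent:
  skills: [data-analysis]
```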

Best Practices Summary

  1. Enable only needed skills - Don't overwhelm the LLM
  2. Choose appropriate display modes - Match user experience needs
  3. Write clear skill descriptions - Help LLM understand when to use
  4. Handle errors gracefully - Provide helpful error messages
  5. Test skills locally - Verify before uploading
  6. Monitor execution - Check logs for issues
  7. Combine with tools - Use tools for data, skills for processing
  8. Consider performance - Be aware of timeouts and limits

Next Steps