
Agent Runner

This guide shows how to run a background service that continues your workflow after an Auto query triggers, with Auto acting as your agent control plane.

For keyless x402 differences, see x402 Instructions.

Why You Need an Agent Runner

Auto handles query evaluation and event emission. Your runner handles:

  1. Event ingestion (webhook/SSE)
  2. Verification + deduplication
  3. Strategy continuation (LLM calls, policy checks, execution adapters)
  4. Audit logging and retries

Reference Architecture

Auto Query
-> Auto Event Ingress (Webhook / SSE)
-> Signature Verification + Idempotency
-> Job Queue
-> Agent Decision Worker
-> Action Adapter (Telegram / Exchange / Internal API)
-> Logs + Metrics + Alerts

Core Processing Loop

  1. Receive event (webhook or stream event).
  2. Verify authenticity (for webhook signatures).
  3. Check idempotency (eventId already processed?).
  4. Enqueue job and ACK quickly.
  5. Worker resolves extra context:
    • poll query status and executions
    • fetch LLM session output when relevant
  6. Apply policy and decide next step.
  7. Execute downstream action.
  8. Record result for replay/debug.
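Steps 2 and 3 of the loop can be sketched as below. This assumes (hypothetically) that Auto signs the raw webhook body with HMAC-SHA256 using your `AUTO_SECRET` and sends the hex digest in a header; check the event documentation for the actual scheme and header name before relying on it.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Hypothetical signing scheme: HMAC-SHA256 over the raw body, hex-encoded.
export function signBody(rawBody: string, secret: string): string {
  return createHmac("sha256", secret).update(rawBody).digest("hex");
}

// Constant-time comparison of the received signature against the expected one.
export function verifySignature(rawBody: string, signatureHex: string, secret: string): boolean {
  const expected = Buffer.from(signBody(rawBody, secret), "hex");
  const received = Buffer.from(signatureHex, "hex");
  // timingSafeEqual throws on length mismatch, so guard first.
  return received.length === expected.length && timingSafeEqual(expected, received);
}

// Step 3: in-memory idempotency check. Swap the Set for Redis/DB in production.
const seen = new Set<string>();
export function markIfNew(eventId: string): boolean {
  if (seen.has(eventId)) return false; // already processed, skip
  seen.add(eventId);
  return true;
}
```

Only events that pass both checks should be enqueued; everything after the ACK belongs to the worker.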

Design intent:

  • Keep monitoring and trigger semantics inside Auto.
  • Keep business-specific post-trigger decisions inside your runner.

Best Practices: Instructing the Agent After a Trigger

Treat post-trigger execution as a controlled decision step.

Recommended rules:

  1. Build a small policy layer that transforms Auto event data into explicit agent instructions.
  2. Keep instruction format structured (for example: JSON contract) instead of free-form text.
  3. Include hard limits (allowed actions, max risk, timeout, retry policy).
  4. Require idempotency key propagation (eventId) through all downstream steps.
  5. Log agent input/output for audit and replay.

Example instruction envelope:

{
  "eventId": "evt_123",
  "queryId": "q_123",
  "objective": "Handle trigger and decide next action",
  "allowedActions": ["notify", "fetch_session", "execute_adapter"],
  "constraints": {
    "maxExecutionSeconds": 30,
    "riskMode": "conservative"
  }
}
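A small policy layer (rule 1 above) can produce that envelope from raw event data. The `buildInstruction` helper below is an illustrative sketch, not part of the Auto API; the field values mirror the example envelope.

```typescript
type AutoEvent = { eventId: string; queryId: string };

type InstructionEnvelope = {
  eventId: string;
  queryId: string;
  objective: string;
  allowedActions: string[];
  constraints: { maxExecutionSeconds: number; riskMode: "conservative" | "aggressive" };
};

// Transform Auto event data into an explicit, bounded agent instruction.
// The eventId is propagated as the idempotency key for all downstream steps.
export function buildInstruction(event: AutoEvent): InstructionEnvelope {
  return {
    eventId: event.eventId,
    queryId: event.queryId,
    objective: "Handle trigger and decide next action",
    allowedActions: ["notify", "fetch_session", "execute_adapter"],
    constraints: { maxExecutionSeconds: 30, riskMode: "conservative" },
  };
}
```

Keeping this transformation in one pure function makes the agent's input easy to log, replay, and unit-test.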

Minimal Webhook + Worker Skeleton (TypeScript)

type AutoJob = { eventId: string; queryId: string; raw: unknown };

const queue: AutoJob[] = [];
const processed = new Set<string>();

function onWebhook(eventId: string, queryId: string, raw: unknown) {
  if (processed.has(eventId)) return; // idempotent
  queue.push({ eventId, queryId, raw });
}

async function workerLoop() {
  while (true) {
    const job = queue.shift();
    if (!job) {
      await new Promise((r) => setTimeout(r, 250));
      continue;
    }

    // 1) Pull latest query/execution state if needed
    // 2) Optionally fetch session details for llm actions
    // 3) Decide next action (policy + agent logic)
    // 4) Execute action (notify, relay, order adapter, etc.)

    processed.add(job.eventId);
  }
}
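One way to fill in step 3 of the worker loop is a pure policy function that maps resolved job state to one of the allowed actions. The status value and action names below are illustrative, not part of the Auto API.

```typescript
type Decision = "notify" | "fetch_session" | "skip";

// Pure policy step: given the state resolved in steps 1-2, pick the
// next action. Keeping this side-effect-free makes it trivial to
// unit-test and to replay against logged inputs.
export function decideNextAction(input: { queryStatus: string; hasSession: boolean }): Decision {
  if (input.queryStatus !== "triggered") return "skip"; // stale or retracted event
  if (input.hasSession) return "fetch_session";         // enrich with LLM session output first
  return "notify";                                      // default conservative action
}
```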

Deployment Patterns

Pattern                               Best For
Single process (API + worker)         Early-stage prototypes
API + queue + workers                 Production reliability at scale
Serverless consumer + queue worker    Spiky workloads with managed ops

Local and Cloud Recommendations

  • Local: run one ingestion process + one worker, use SSE for fast iteration.
  • Cloud: use webhook ingress + queue + stateless workers with autoscaling.
  • In both setups, keep the same instruction contract so behavior is consistent across environments.

Reference Implementations (Local and Cloud)

Local Development Stack (Docker Compose)

Use this when you want a reproducible local setup with queue-backed processing.

Components:

  • ingress: receives webhook/SSE events and enqueues jobs
  • worker: processes jobs and executes agent policy
  • redis: lightweight queue/cache for local development

version: "3.9"
services:
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

  ingress:
    build: ./ingress
    environment:
      AUTO_SECRET: ${AUTO_SECRET}
      REDIS_URL: redis://redis:6379
    depends_on:
      - redis
    ports:
      - "3000:3000"

  worker:
    build: ./worker
    environment:
      AUTO_SECRET: ${AUTO_SECRET}
      REDIS_URL: redis://redis:6379
      ELFA_BASE_URL: https://api.elfa.ai/v2/auto
    depends_on:
      - redis

Suggested local flow:

  1. Start stack: docker compose up --build
  2. Connect ingress to your Auto webhook target (or run SSE consumer in ingress)
  3. Trigger test query
  4. Inspect worker logs for decision/output trace

Cloud Production Stack

Use managed services and autoscaling for reliability.

Recommended components:

  • API ingress service (webhook endpoint or SSE adapter)
  • Managed queue (SQS/PubSub/Redis Streams/Kafka)
  • Stateless worker service (autoscaled)
  • Durable datastore (idempotency keys, run logs, audit)
  • Observability stack (metrics, logs, alerting, traces)

Production flow:

  1. Ingress verifies signature and writes idempotent event record.
  2. Ingress pushes job to queue and immediately ACKs request.
  3. Worker pulls job, fetches any additional context (poll, sessions).
  4. Worker applies policy and executes downstream action.
  5. Worker persists decision + outcome with event ID.
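Steps 3-5 need a retry story to be reliable. The wrapper below is a minimal sketch of retry-with-backoff plus dead-lettering; the handler and dead-letter callbacks are assumptions standing in for your queue client's real API.

```typescript
type Job = { eventId: string };

// Run the handler with exponential backoff; after maxAttempts failures,
// hand the job (and last error) to the dead-letter sink instead of
// retrying forever.
export async function processWithRetry(
  job: Job,
  handler: (job: Job) => Promise<void>,
  deadLetter: (job: Job, err: unknown) => Promise<void>,
  maxAttempts = 3,
): Promise<"done" | "dead-lettered"> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await handler(job);
      return "done";
    } catch (err) {
      if (attempt === maxAttempts) {
        await deadLetter(job, err);
        return "dead-lettered";
      }
      // Backoff between attempts: 250ms, 500ms, 1000ms, ...
      await new Promise((r) => setTimeout(r, 250 * 2 ** (attempt - 1)));
    }
  }
  return "dead-lettered"; // unreachable; satisfies the type checker
}
```

Persist the job's eventId alongside the outcome in both branches so every run remains replayable.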

Minimal Environment Contract

Keep config consistent across local and cloud:

AUTO_SECRET=<event-signing secret>
ELFA_BASE_URL=https://api.elfa.ai/v2/auto
QUEUE_URL=<redis/sqs/pubsub endpoint>
RUNNER_MODE=local|cloud
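A small loader can enforce that contract at startup so both environments fail fast on misconfiguration. The `loadConfig` helper below is an illustrative sketch around the four variables above.

```typescript
type RunnerConfig = {
  autoSecret: string;
  elfaBaseUrl: string;
  queueUrl: string;
  runnerMode: "local" | "cloud";
};

// Read the environment contract, throwing on anything missing or invalid
// so a misconfigured runner never starts consuming events.
export function loadConfig(env: Record<string, string | undefined> = process.env): RunnerConfig {
  const get = (key: string): string => {
    const value = env[key];
    if (!value) throw new Error(`Missing required env var: ${key}`);
    return value;
  };
  const mode = get("RUNNER_MODE");
  if (mode === "local" || mode === "cloud") {
    return {
      autoSecret: get("AUTO_SECRET"),
      elfaBaseUrl: get("ELFA_BASE_URL"),
      queueUrl: get("QUEUE_URL"),
      runnerMode: mode,
    };
  }
  throw new Error(`RUNNER_MODE must be "local" or "cloud", got "${mode}"`);
}
```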

Reliability Checklist

  • Store dedupe keys by eventId
  • Use retry policy with dead-letter queue
  • Keep webhook handler fast and non-blocking
  • Log decision input/output for every run
  • Add health checks and alerting on worker lag