     _____ __    ____  ____
    / ___// /   / __ \/ __ \
    \__ \/ /   / / / / /_/ /
   ___/ / /___/ /_/ / ____/
  /____/_____/\____/_/

 S T A T E L E S S · L I G H T W E I G H T
 O P E R A T I N G · P R I M I T I V E S

        slopshop.gg  v4.0 · AI Agent OS

Getting Started

Slopshop is a self-hostable MCP agent runtime OS for the computer-use era, with 600+ real tools across 82 categories. The north-star features are Dream Engine (REM-style memory consolidation: 5 strategies, schedule-based, restart-persistent) and Multiplayer Memory (shared spaces, collaborator invites, retention tiers). It also includes free evolving persistent memory (GraphRAG + episodic), zero-trust agent identity (NIST-aligned SPIFFE/SVID), a computer-use backend, an MCP gateway with a policy engine, a visual DAG workflow builder, a template marketplace, and Army-scale orchestration. Every response includes _engine: "real". Slopshop is self-hostable, runs air-gapped, and is designed for production agents.

Quick Install

bash
npm install -g slopshop

Set Your API Key

bash
export SLOPSHOP_KEY=sk-slop-your-key-here

Make Your First Calls

bash
# Sign up for free credits
slop signup

# Natural language (auto-routes to the right tool)
slop "hash hello world"

# Direct tool call
slop call crypto-hash-sha256 --data "hello world"

# Start MCP server for Claude Code / Cursor
slop mcp serve

# Persistent memory (free, 0 credits)
slop memory set mykey "some value"
slop memory get mykey

You will get back a JSON response with real computed results, along with metadata including _engine: "real" confirming it was actually processed.
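Because the results are genuinely computed, you can recompute them yourself. A minimal sketch, assuming the documented {data, meta} envelope and a hash field inside data (the field name is an assumption):

```python
import hashlib

# Recompute a crypto-hash-sha256 result locally. The {data, meta} envelope
# shape follows the docs; the "hash" field name inside data is an assumption.
def verify_sha256(envelope: dict, original: str) -> bool:
    expected = hashlib.sha256(original.encode()).hexdigest()
    return envelope["data"]["hash"] == expected

sample = {"data": {"hash": hashlib.sha256(b"hello world").hexdigest()},
          "meta": {"engine": "real", "credits_used": 1}}
print(verify_sha256(sample, "hello world"))  # True
```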

Why use Slopshop instead of raw APIs?

Raw APIs | Slopshop
Manage 10+ API keys | One API key
Build retry logic yourself | Built-in retry + circuit breakers
Parse different response formats | Consistent {data, meta, guarantees} envelope
No execution proof | _engine: "real" + X-Output-Hash on every response (see trust model)
Pay per provider subscription | Pay per credit, memory is free
Build observability yourself | X-Credits-Used, X-Latency-Ms, X-RateLimit-* headers
No task abstraction | POST /v1/tasks/run with confidence + guarantees

What _engine: "real" Means

Every Slopshop response includes "_engine": "real" to signal that the result was genuinely computed, not cached from a template or mocked. This is Slopshop's core guarantee: when you call an API, your data is actually processed. Hash APIs run real cryptographic functions. Network APIs make real DNS/HTTP calls. LLM APIs send real prompts to Claude, GPT, Grok, or DeepSeek. There are no fake responses.

Trust & Verification Model

What _engine: "real" means

Every Slopshop API response includes _engine: "real", signaling that the result was genuinely computed rather than templated or mocked, as described in the section above.

What "execution proof" means today

Every response includes an X-Output-Hash header, a SHA-256 hash of the output. You can recompute the hash over the payload you received and compare it to the header, which detects tampering in transit and confirms the payload is consistent with what the server produced.
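A minimal client-side check, assuming the header is the hex SHA-256 of the raw response body bytes (exactly which bytes the server hashes is an assumption):

```python
import hashlib

# Verify an X-Output-Hash header against the body you received. Assumes the
# header is the hex SHA-256 of the raw response body bytes (exactly which
# bytes the server hashes is an assumption).
def check_output_hash(body: bytes, header_value: str) -> bool:
    return hashlib.sha256(body).hexdigest() == header_value

body = b'{"data":{"uuid":"1234"},"meta":{"engine":"real"}}'
print(check_output_hash(body, hashlib.sha256(body).hexdigest()))  # True
```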

Honest limitations

The trust source is Slopshop's server: today's verification model is server-attested rather than cryptographically proven.

We want to be upfront about this: _engine: "real" and X-Output-Hash are claims made by our server. They prove consistency and detect tampering in transit, but they do not cryptographically prove that a specific piece of code executed on our infrastructure. The source of trust is still Slopshop. This is a real limitation.

Roadmap: Stronger verification

We're exploring paths toward genuinely trustless verification.

Self-hosting as verification

The strongest trust model available today: npm install slopshop && node server-v2.js. Run the exact same code locally. Compare outputs. If our cloud returns the same result as your local instance for the same input, you have reproducible verification without trusting our server.
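That comparison can be as simple as diffing the data payloads while ignoring per-call metadata; which fields vary between runs (latency, request id) is an assumption here:

```python
# Compare a cloud result against your self-hosted instance. The data
# payload should be reproducible; the meta block is assumed to vary
# between runs (latency, request id) and is ignored.
def outputs_match(cloud_envelope: dict, local_envelope: dict) -> bool:
    return cloud_envelope.get("data") == local_envelope.get("data")

cloud = {"data": {"hash": "b94d27b9"}, "meta": {"latency_ms": 12}}
local = {"data": {"hash": "b94d27b9"}, "meta": {"latency_ms": 3}}
print(outputs_match(cloud, local))  # True
```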

What You Can Build (Combo Features)

The real power of Slopshop is not individual tools -- it is what you build by combining them. Here are 6 production combos that chain multiple features together.

Non-Stop Research Agent

Chain Claude, Grok, and Claude in an infinite loop with free memory storage. The research agent critiques itself and improves continuously.

bash
# 1. Create the chain
curl -X POST https://slopshop.gg/v1/chain/create \
  -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
  -d '{"name":"research-loop","loop":true,"steps":[
    {"agent":"claude","prompt":"Research latest AI infrastructure trends","pass_context":true},
    {"agent":"grok","prompt":"Critique the research, find gaps and contradictions","pass_context":true},
    {"agent":"claude","prompt":"Improve the research based on critique, find new angles","pass_context":true}
  ]}'

# 2. Queue for overnight
curl -X POST https://slopshop.gg/v1/chain/queue \
  -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
  -d '{"frequency":"daily"}'

# 3. Results auto-stored in free memory
curl -X POST https://slopshop.gg/v1/memory-get \
  -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
  -d '{"key":"chain_results"}'

Agent Organization (5-Agent Team)

Create a full product team with workspace, roles, standups, knowledge sharing, and democratic decision-making.

Endpoint Chain
1. POST /v1/hive/create       {"name":"product-team","channels":["research","writing","review"]}
2. POST /v1/copilot/scale      {"main_session_id":"product-team","count":5,"roles":["researcher","writer","critic","editor","publisher"]}
3. POST /v1/standup/submit     {"yesterday":"Wrote 3 articles","today":"Review and publish"}
4. POST /v1/knowledge/add      {"subject":"article-1","predicate":"reviewed_by","object":"critic-agent"}
5. POST /v1/governance/propose {"title":"Should we pivot topics?"}

1,000 Parallel Analysts

Route to the cheapest LLM, deploy a massive army of agents, save the replay for auditing, and verify results with Merkle proofs.

Endpoint Chain
1. POST /v1/router/smart   {"task":"analyze financial data","optimize_for":"cost"}
2. POST /v1/army/deploy    {"task":"analyze","count":1000,"api":"text-word-count"}
3. POST /v1/replay/save    {"name":"big-analysis","events":[...]}
4. POST /v1/proof/merkle   {"data":"...results..."}

Self-Improving Agent

Run evals, store lessons in memory, compete in tournaments, and loop to auto-improve over time.

Endpoint Chain
1. POST /v1/eval/run             {"agent_slug":"text-word-count","test_cases":[...]}
2. POST /v1/memory-set           {"key":"eval_lessons","value":"accuracy was 95%"}
3. POST /v1/tournament/create    {"name":"best-analyzer","participants":["agent-v1","agent-v2"]}
4. Use chain to loop: eval -> learn -> re-eval

Credits & Usage

Check your balance, buy credits, and track per-agent usage across your account.

Endpoint Chain
1. GET  /v1/credits/balance
2. POST /v1/credits/buy         {"amount":1000}
3. GET  /v1/analytics/usage

Enterprise Fleet

Set up teams with budgets, monitor usage, forecast costs, and get Slack alerts when budgets run low.

Endpoint Chain
1. POST /v1/teams/create       {"name":"Engineering"}
2. POST /v1/keys/set-budget    {"monthly_budget":50000}
3. GET  /v1/analytics/usage
4. GET  /v1/analytics/cost-forecast
5. POST /v1/webhooks/create    {"url":"https://slack.com/webhook","events":["budget_alert"]}

Authentication

All API calls require a Bearer token in the Authorization header.

Using Your Key

HTTP Header
Authorization: Bearer sk-slop-your-key-here

Getting Your API Key

Sign up to get your API key with 500 free credits:

curl
curl -X POST https://slopshop.gg/v1/auth/signup \
  -H "Content-Type: application/json" \
  -d '{"email": "you@example.com", "password": "yourpassword"}'

Returns {"api_key": "sk-slop-...", "balance": 500}. Store your key securely -- it cannot be retrieved after creation.

To generate additional keys programmatically (requires existing auth), use POST /v1/keys with your existing API key in the Authorization header. This is not the signup endpoint.

Demo Key

For testing, you can use the demo key sk-slop-demo-key-12345678 which comes with 200 credits and is rate-limited to 10 requests per minute. The demo key works for all compute-tier APIs.

Credits & Billing

Slopshop uses a credit system. Every API call costs a specific number of credits based on its complexity.

Credit Cost Tiers

Cost Level | Credits | Examples
Free | 0 | memory-set, memory-get, memory-search, memory-list, memory-delete, memory-stats, counter-get
Trivial | 1 | UUID generation, base64 encode, hashing, date parse
Simple | 1 | Text processing, validation, regex, password hash
Medium | 3 | CSV parse, diff, statistics, JSON schema generation
Complex | 5 | Network calls (DNS, HTTP, SSL), multi-step compute
LLM Small | 10 | Short LLM calls: sentiment, summarize, classify
LLM Medium | 10 | Medium LLM calls: blog outline, code review, translate
LLM Large | 20 | Long LLM calls: blog draft, code generation, proposals
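With the tiers above you can estimate a workflow's cost before running it. The per-step tier assignments in the example are illustrative:

```python
# Estimate a workflow's credit cost from the tier table above.
# Per-tool tier assignments below are illustrative, not authoritative.
TIER_COST = {"free": 0, "trivial": 1, "simple": 1, "medium": 3,
             "complex": 5, "llm_small": 10, "llm_medium": 10, "llm_large": 20}

def estimate_cost(steps):
    """steps: list of tier names, one per tool call in the workflow."""
    return sum(TIER_COST[tier] for tier in steps)

# e.g. hash (trivial) -> CSV parse (medium) -> summarize (llm_small)
print(estimate_cost(["trivial", "medium", "llm_small"]))  # 14
```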

Checking Balance

curl
curl https://slopshop.gg/v1/credits/balance \
  -H "Authorization: Bearer sk-slop-your-key-here"

Buying Credits

curl
curl -X POST https://slopshop.gg/v1/credits/buy \
  -H "Authorization: Bearer sk-slop-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"amount": 1000}'

Auto-Reload

Configure auto-reload to automatically purchase more credits when your balance drops below a threshold. Set auto_reload_threshold and auto_reload_amount on your account to enable this.

Rate Limits

Endpoint | Limit | Window
POST /v1/auth/signup | 5 requests | per hour per IP
POST /v1/auth/login | 10 requests | per minute per IP
POST /v1/:slug (any API) | 100 requests | per minute per key
POST /v1/agent/run | 10 requests | per minute per key
POST /v1/batch | 20 requests | per minute per key

Rate limits are per-IP for auth endpoints and per-API-key for tool calls. Exceeding limits returns HTTP 429. Contact dev@slopshop.gg for higher limits on enterprise plans.
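A minimal way to honor a 429, using the X-RateLimit-Reset header described under Response Headers (documented as seconds until the window resets):

```python
# Minimal 429 handling: X-RateLimit-Reset is documented as seconds until
# the window resets, so wait that long before retrying.
def retry_delay(status_code: int, headers: dict) -> float:
    if status_code != 429:
        return 0.0
    return float(headers.get("X-RateLimit-Reset", 1))

print(retry_delay(429, {"X-RateLimit-Reset": "30"}))  # 30.0
print(retry_delay(200, {}))                           # 0.0
```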

Agent-to-Agent Credit Transfer

Transfer credits between agent keys with POST /v1/credits/transfer. Pass to_key and amount. Transfers are atomic and logged to the audit trail. Use this to enable agent-to-agent commerce — one agent paying another for work completed.

curl
curl -X POST https://slopshop.gg/v1/credits/transfer \
  -H "Authorization: Bearer sk-slop-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"to_key": "sk-slop-other-key", "amount": 50}'

Response Format

Every tool call returns a consistent envelope:

JSON
{
  "data": { ... tool-specific result ... },
  "meta": {
    "api": "tool-slug",
    "credits_used": 3,
    "balance": 97,
    "latency_ms": 12,
    "engine": "real"
  }
}
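A small helper that unwraps this envelope, raising when the error shape (see Error Responses) comes back instead; SlopError is a hypothetical exception type, not part of the SDK:

```python
# Unwrap the {data, meta} envelope, raising on the error shape from the
# Error Responses section. SlopError is a hypothetical exception type.
class SlopError(Exception):
    def __init__(self, code: str, message: str):
        super().__init__(f"{code}: {message}")
        self.code = code

def unwrap(envelope: dict):
    if "error" in envelope:
        err = envelope["error"]
        raise SlopError(err["code"], err["message"])
    return envelope["data"]

print(unwrap({"data": {"uuid": "1234"}, "meta": {"engine": "real"}}))
```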

Response Headers

Every API call response includes these headers:

Header | Description
X-Credits-Used | Credits deducted for this call
X-Credits-Remaining | Balance after this call
X-Latency-Ms | Server processing time in milliseconds
X-Request-Id | Unique identifier for this request
X-Engine | real, llm, or needs_key
X-RateLimit-Limit | Max requests per minute (default: 60)
X-RateLimit-Remaining | Requests remaining in current window
X-RateLimit-Reset | Seconds until rate limit window resets

Error Responses

JSON
{
  "error": {
    "code": "error_code",
    "message": "Human-readable description"
  }
}

HTTP Status Codes

Code | Meaning
200 | Success
400 | Bad request (missing or invalid parameters)
401 | Unauthorized (missing or invalid API key)
402 | Insufficient credits
404 | API not found
429 | Rate limited
500 | Server error

API Reference

All 82 categories and 600+ endpoints documented below. Each API accepts a POST request with a JSON body and returns JSON. The base URL is https://slopshop.gg/v1/{slug}.

Machine-Readable Specs

GET /v1/openapi.json | Full OpenAPI 3.0 specification; import into Postman, Insomnia, or any OpenAPI-compatible tool
GET /v1/tools | JSON list of all tools with slugs, descriptions, categories, and credit costs
GET /v1/tools?format=mcp | MCP-formatted tool list for Claude Code and other MCP clients
GET /v1/tools?category=crypto | Filter tools by category

Integration Guides

Slopshop integrates with the tools your agents already use:

Ecosystem Integrations | MCP Server, Goose Recipes, Aider, OpenCode, Cline Skills, LangChain/LangGraph (v4.0)
MCP for Claude Code | Native MCP server; all tools available as first-class Claude tools
For AI Agents | LangChain, CrewAI, AutoGPT integration guides
CLI quickstart | Terminal interface to all 82 tool categories
Zapier integration | No-code access via Zapier actions
MCP deep dive | Advanced MCP configuration and usage

Browse the full API catalog at /tools or use the CLI: slop list

CLI Reference

The Slopshop CLI lets you call any API from the terminal. Install globally with npm.

slop call <slug>

Call any API by slug. Pass data with --data or -d, or pipe JSON via stdin.

bash
# Simple call with inline data
slop call crypto-hash-sha256 --data "hello world"

# Pass JSON body
slop call text-word-count -d '{"text": "Count these words please"}'

# Pipe from file
cat document.txt | slop call llm-summarize

slop pipe <slug>

Run a pre-built pipe (multi-step workflow) by slug.

bash
slop pipe content-machine -d '{"topic": "AI agents", "keywords": "automation"}'
slop pipe security-audit -d '{"data": "{}", "domain": "example.com"}'

slop search <query>

Search available APIs by name or description.

bash
slop search hash
slop search "json convert"
slop search dns

slop list

List all available APIs grouped by category.

bash
slop list
slop list --category "Crypto & Security"

slop balance

Check your current credit balance.

bash
slop balance

slop buy <amount>

Purchase credits.

bash
slop buy 1000

slop health

Check API server health and status.

bash
slop health

slop help

Show help and list all available commands.

bash
slop help
slop call --help

SDK Reference

Python SDK

Install

bash
pip install slopshop

Initialize

python
from slopshop import Slop

client = Slop(api_key="sk-slop-your-key-here")
# Or rely on the SLOPSHOP_KEY environment variable:
# client = Slop()

Call an API

python
result = client.call("crypto-hash-sha256", data="hello world")
print(result["hash"])

Batch Calls

python
results = client.batch([
    ("crypto-hash-sha256", {"data": "hello"}),
    ("crypto-hash-md5", {"data": "hello"}),
    ("crypto-uuid", {}),
])
for r in results:
    print(r)

Run a Pipe

python
result = client.pipe("content-machine", topic="AI agents", keywords="automation")
print(result["steps"])

Resolve (call with auto-retry)

python
result = client.resolve("llm-summarize", text="Long document...", retries=3)
print(result["summary"])

Node.js SDK

Install

bash
npm install slopshop

Initialize

javascript
const { Slop } = require('slopshop');

const client = new Slop({ apiKey: 'sk-slop-your-key-here' });
// Or rely on the SLOPSHOP_KEY environment variable:
// const client = new Slop();

Call an API

javascript
const result = await client.call('crypto-hash-sha256', { data: 'hello world' });
console.log(result.hash);

Batch Calls

javascript
const results = await client.batch([
  ['crypto-hash-sha256', { data: 'hello' }],
  ['crypto-hash-md5', { data: 'hello' }],
  ['crypto-uuid', {}],
]);
results.forEach(r => console.log(r));

Run a Pipe

javascript
const result = await client.pipe('content-machine', {
  topic: 'AI agents',
  keywords: 'automation',
});
console.log(result.steps);

Resolve (call with auto-retry)

javascript
const result = await client.resolve('llm-summarize', {
  text: 'Long document...',
}, { retries: 3 });
console.log(result.summary);

Pipes

Slopshop has two distinct pipe systems: pre-built pipes, invoked as POST /v1/pipes/{slug}, and custom pipes, defined inline and invoked as POST /v1/pipe with a steps array.

Both pipe endpoints execute all steps in sequence, passing data between them automatically. You pay the total credit cost of all steps combined.

Running a Pre-Built Pipe

curl
curl -X POST https://slopshop.gg/v1/pipes/content-machine \
  -H "Authorization: Bearer sk-slop-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"topic": "How AI agents use APIs", "keywords": "slopshop, automation"}'

Running a Custom Pipe

curl
curl -X POST https://slopshop.gg/v1/pipe \
  -H "Authorization: Bearer sk-slop-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"steps": [{"api": "crypto-hash-sha256"}, {"api": "text-base64-encode"}], "input": "hello"}'
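Locally, this custom pipe behaves like function composition: each step's output becomes the next step's input. The exact step semantics (hex digest, then base64 of that hex string) are assumptions about the tools' behavior:

```python
import base64
import hashlib

# The example custom pipe, emulated locally: each step's output feeds the
# next step's input. Step semantics (hex digest, then base64 of that hex
# string) are assumptions about the tools' exact behavior.
def run_pipe(text: str) -> str:
    digest = hashlib.sha256(text.encode()).hexdigest()  # crypto-hash-sha256
    return base64.b64encode(digest.encode()).decode()   # text-base64-encode

print(run_pipe("hello"))
```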

Listing All Pre-Built Pipes

curl
curl https://slopshop.gg/v1/pipes

All 14 Pipes

Lead from Text (7 credits)

Extract emails from text, validate they exist, generate prospect profiles.

text-extract-emails → net-email-validate → gen-fake-name
Slug: lead-from-text | Category: Sales

Content Machine (35 credits)

Generate blog outline, draft the post, score readability.

llm-blog-outline → llm-blog-draft → text-readability-score
Slug: content-machine | Category: Content

Security Audit (7 credits)

Checksum content, validate JSON, check SSL certificate.

crypto-checksum → text-json-validate → net-ssl-check
Slug: security-audit | Category: Security

Code Ship (35 credits)

Review code, generate tests, get diff stats.

llm-code-review → llm-code-test-generate → code-diff-stats
Slug: code-ship | Category: Dev

Data Clean (5 credits)

Parse CSV to JSON, deduplicate, validate output.

text-csv-to-json → text-deduplicate-lines → text-json-validate
Slug: data-clean | Category: Data

Email Intelligence (4 credits)

Extract emails from text, extract URLs, extract phone numbers, get word stats.

text-extract-emails → text-extract-urls → text-extract-phones → text-word-count
Slug: email-intel | Category: Analysis

Hash Everything (4 credits)

Compute MD5, SHA256, SHA512, and full checksum of input data.

crypto-hash-md5 → crypto-hash-sha256 → crypto-hash-sha512 → crypto-checksum
Slug: hash-everything | Category: Security

Text Analyzer (4 credits)

Word count, readability score, keyword extraction, language detection.

text-word-count → text-readability-score → text-keyword-extract → text-language-detect
Slug: text-analyze | Category: Analysis

JSON Pipeline (5 credits)

Validate JSON, format it, generate schema, flatten to dot-notation.

text-json-validate → text-json-format → text-json-schema-generate → text-json-flatten
Slug: json-pipeline | Category: Data

Meeting to Actions (20 credits)

Summarize meeting notes, extract action items, draft follow-up email.

llm-summarize → llm-extract-action-items → llm-email-draft
Slug: meeting-to-actions | Category: Business

Code Explainer (30 credits)

Explain code, document it, generate tests.

llm-explain-code → llm-code-document → llm-code-test-generate
Slug: code-explain | Category: Dev

Crypto Toolkit (4 credits)

Generate UUID, password, OTP, and a random encryption key.

crypto-uuid → crypto-password-generate → crypto-otp-generate → crypto-random-bytes
Slug: crypto-toolkit | Category: Security

Domain Recon (20 credits)

DNS lookup, SSL check, HTTP status, email validation for a domain.

net-dns-a → net-ssl-check → net-http-status → net-email-validate
Slug: domain-recon | Category: Network

Onboarding Pack (3 credits)

Generate fake test user, create a JWT for them, hash their password.

gen-fake-name → crypto-jwt-sign → crypto-password-hash
Slug: onboarding-pack | Category: Dev

Agent & Templates

The agent endpoint is Slopshop's killer feature. Describe what you want in plain English — the system picks tools, chains them, executes, and returns a summarized answer. Every run is automatically stored in memory for future reference.

POST /v1/agent/run

Natural language task execution. The agent plans which tools to call, executes them in sequence, and summarizes the results.

bash
curl https://slopshop.gg/v1/agent/run \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"task": "What tech stack does stripe.com use?"}'

Response includes answer, steps (which tools were called and why), run_id (for retrieving from history), total_credits, and time_ms. Each run costs 20 credits overhead + the cost of each tool used.

POST /v1/ask

Simplified version — same engine, cleaner response. Just pass a question field.

bash
curl https://slopshop.gg/v1/ask \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"question": "Hash the word slopshop with SHA256"}'

Agent Templates

Pre-built agent workflows that handle common tasks. Templates automatically store results in memory.

bash
# List available templates
curl https://slopshop.gg/v1/agent/templates

# Run a template
curl https://slopshop.gg/v1/agent/template/security-audit \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com"}'

Template | Description | Input
security-audit | Full security audit: tech stack, SSL, headers, response time | {"url": "..."}
content-analyzer | Fetch, analyze, and summarize any URL | {"url": "..."}
data-processor | Transform, filter, and analyze JSON/CSV data | {"data": [...], "instructions": "..."}
domain-recon | Full domain recon: DNS, tech, SSL, sitemap | {"domain": "..."}
hash-verify | Hash with SHA-256, SHA-512, and MD5 | {"text": "..."}

Agent History

Every agent run is auto-stored in memory. Retrieve past runs:

bash
curl https://slopshop.gg/v1/agent/history \
  -H "Authorization: Bearer YOUR_KEY"

Returns all past runs with task, answer, tools used, timestamps, and success status.

Scoped API Keys

Create keys with restricted access using POST /v1/auth/create-scoped-key. Scoped keys limit which tool tiers a key can call and optionally cap total credit spend. Use these to safely share keys with untrusted agents or external services.

Scope | Description
compute | Pure compute APIs only (hash, parse, math, encoding)
network | Network APIs only (DNS, HTTP, SSL)
llm | AI/LLM APIs only
memory | Memory APIs only
execute | Code execution APIs only
* | All tiers (default, same as a normal key)

curl
# Create a compute-only key with a 100-credit cap
curl -X POST https://slopshop.gg/v1/auth/create-scoped-key \
  -H "Authorization: Bearer sk-slop-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"scope": "compute", "credit_cap": 100}'

# List all your keys
curl https://slopshop.gg/v1/auth/keys \
  -H "Authorization: Bearer sk-slop-your-key-here"

# Revoke a key
curl -X DELETE https://slopshop.gg/v1/auth/keys/sk-slop-scoped-key \
  -H "Authorization: Bearer sk-slop-your-key-here"
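The enforcement a scoped key implies can be sketched as a scope-plus-cap check; tier names follow the table above, and the enforcement details are assumptions about the gateway:

```python
from typing import Optional

# Scope + credit-cap check implied by the scope table above. Tier names
# follow the table; enforcement details are assumptions about the gateway.
def key_allows(scope: str, tier: str, spent: int,
               credit_cap: Optional[int], cost: int) -> bool:
    if credit_cap is not None and spent + cost > credit_cap:
        return False  # cap would be exceeded
    return scope == "*" or scope == tier

print(key_allows("compute", "compute", spent=90, credit_cap=100, cost=5))  # True
print(key_allows("compute", "llm", spent=0, credit_cap=100, cost=1))       # False
```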

Python Code Execution

Run Python code in a sandboxed subprocess with POST /v1/exec-python. Supports the standard library (json, math, datetime, re, hashlib, base64, urllib). Returns stdout, stderr, and execution status. Costs 5 credits. Timeout: 30 seconds.

curl
curl -X POST https://slopshop.gg/v1/exec-python \
  -H "Authorization: Bearer sk-slop-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"code": "import math\nprint(math.factorial(10))"}'

Vector Memory Search

Search stored memories by semantic similarity with POST /v1/memory-vector-search. It scores memories by text similarity against keys, values, and tags and returns results ranked by relevance. Like all core memory APIs, it is free (0 credits).

curl
curl -X POST https://slopshop.gg/v1/memory-vector-search \
  -H "Authorization: Bearer sk-slop-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"namespace": "my-agent", "query": "authentication errors"}'

File Storage

Persistent blob storage for agents. Upload files as base64, download by ID, or list all stored files. Useful for passing large payloads between agent steps or storing agent-generated artifacts.

curl
# Upload a file (base64-encoded content)
curl -X POST https://slopshop.gg/v1/files/upload \
  -H "Authorization: Bearer sk-slop-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"filename": "report.txt", "content": "SGVsbG8gd29ybGQ="}'

# Download a file
curl https://slopshop.gg/v1/files/{id} \
  -H "Authorization: Bearer sk-slop-your-key-here"

# List all files
curl https://slopshop.gg/v1/files \
  -H "Authorization: Bearer sk-slop-your-key-here"
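The content field is base64 text; the example payload above encodes "Hello world". A helper to build that body from raw bytes:

```python
import base64

# Build the upload body from raw bytes; the example above encodes
# "Hello world" as SGVsbG8gd29ybGQ=.
def upload_body(filename: str, raw: bytes) -> dict:
    return {"filename": filename,
            "content": base64.b64encode(raw).decode("ascii")}

print(upload_body("report.txt", b"Hello world"))
```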

SSE Streaming for Agent Runs

Stream agent execution progress in real time by adding "stream": true to POST /v1/agent/run. Returns Server-Sent Events with planning, step, and complete events so your UI can show live progress as the agent works.

curl
curl -X POST https://slopshop.gg/v1/agent/run \
  -H "Authorization: Bearer sk-slop-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"task": "Analyze slopshop.gg SSL and DNS", "stream": true}'

# Emits SSE events:
# event: planning  data: {"tools":["net-ssl-check","net-dns-lookup"]}
# event: step      data: {"tool":"net-ssl-check","result":{...}}
# event: complete  data: {"answer":"...","total_credits":12}
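Consuming that stream amounts to pairing event: lines with their data: payloads. A minimal parser, assuming one data line per event:

```python
import json

# Parse the SSE stream shape shown above into (event, payload) pairs.
# Assumes each event is one "event:" line followed by one "data:" line.
def parse_sse(stream: str):
    events, current = [], None
    for line in stream.splitlines():
        line = line.strip()
        if line.startswith("event:"):
            current = line.split(":", 1)[1].strip()
        elif line.startswith("data:") and current:
            events.append((current, json.loads(line.split(":", 1)[1])))
            current = None
    return events

raw = 'event: step\ndata: {"tool": "net-ssl-check"}\n\nevent: complete\ndata: {"total_credits": 12}\n'
print(parse_sse(raw))
```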

Auto-Fallback Routing

Agent execution automatically retries failed tool calls with alternative tools. For example, if crypto-hash-sha256 fails, the agent falls back to sha512 or md5. If sense-url-content fails, it falls back to meta or links extraction. Fallback results include "fallback_used": true in the response so you know a substitute was used.
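The fallback behavior can be sketched as a try-in-order loop; the FALLBACKS table and call_tool helper here are illustrative, not the service's actual routing logic:

```python
# Try-in-order fallback sketch of the behavior described above. FALLBACKS
# mirrors the sha256 example in the text; call_tool is a hypothetical
# callable standing in for the HTTP request.
FALLBACKS = {"crypto-hash-sha256": ["crypto-hash-sha512", "crypto-hash-md5"]}

def call_with_fallback(call_tool, slug, payload):
    for i, candidate in enumerate([slug] + FALLBACKS.get(slug, [])):
        try:
            result = call_tool(candidate, payload)
            return {**result, "fallback_used": i > 0}
        except Exception:
            continue  # try the next candidate
    raise RuntimeError(f"all tools failed for {slug}")

def flaky(slug, payload):  # pretend the primary tool is down
    if slug == "crypto-hash-sha256":
        raise RuntimeError("tool down")
    return {"data": {"tool": slug}}

result = call_with_fallback(flaky, "crypto-hash-sha256", {"data": "hi"})
print(result["fallback_used"])  # True
```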

Cost Preview (Dry-Run Mode)

Add "preview": true or "dry_run": true to any POST body to get an estimated cost without executing. Returns estimated credits, estimated USD cost, and whether your current balance can afford it. Works on both POST /v1/{slug} and POST /v1/tasks/run.

curl
curl -X POST https://slopshop.gg/v1/llm-blog-draft \
  -H "Authorization: Bearer sk-slop-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"topic": "AI agents", "preview": true}'

# Response (no execution, no credit charge):
# {"preview": true, "estimated_credits": 20, "estimated_usd": 0.02, "can_afford": true}

Free Memory Tier

Core memory operations are completely free (0 credits). This lets agents accumulate state on Slopshop with zero friction — the more your agent remembers, the more valuable the platform becomes.

Free APIs (0 credits)

API | Description
memory-set | Store a key-value pair with optional tags and namespace
memory-get | Retrieve a value by key
memory-search | Search by tag, key substring, or value substring
memory-list | List all keys in a namespace
memory-delete | Delete a key
memory-stats | Namespace statistics
memory-namespace-list | List all namespaces
counter-get | Read a counter value

Paid Memory APIs (1 credit each)

Advanced operations: memory-expire, memory-increment, memory-append, memory-history, memory-export, memory-import, memory-namespace-clear, queue-push, queue-pop, queue-peek, queue-size, counter-increment.

Example: Agent with Persistent Memory

bash
# Store something (FREE)
curl https://slopshop.gg/v1/memory-set \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"namespace": "my-agent", "key": "last-run", "value": "analyzed stripe.com", "tags": "audit,web"}'

# Retrieve it later (FREE)
curl https://slopshop.gg/v1/memory-get \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"namespace": "my-agent", "key": "last-run"}'

# Search across all memories (FREE)
curl https://slopshop.gg/v1/memory-search \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"namespace": "my-agent", "tag": "audit"}'
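The semantics of these three calls can be mirrored with a local store, which is handy for offline tests. This is a sketch of the behavior, not the service implementation:

```python
# Local stand-in mirroring the namespace / key / value / tags fields used
# in the calls above. A sketch of the semantics, not the service itself.
class MemoryStore:
    def __init__(self):
        self._data = {}

    def set(self, namespace, key, value, tags=""):
        self._data[(namespace, key)] = {
            "value": value,
            "tags": tags.split(",") if tags else [],
        }

    def get(self, namespace, key):
        entry = self._data.get((namespace, key))
        return entry["value"] if entry else None

    def search(self, namespace, tag):
        return [k for (ns, k), e in self._data.items()
                if ns == namespace and tag in e["tags"]]

store = MemoryStore()
store.set("my-agent", "last-run", "analyzed stripe.com", tags="audit,web")
print(store.get("my-agent", "last-run"))   # analyzed stripe.com
print(store.search("my-agent", "audit"))   # ['last-run']
```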

Agent Chaining (Infinite Consciousness)

Create infinite agent-to-agent chains where the output of one agent becomes the input for the next. Supports loop mode for continuous research cycles, pause/resume for long-running chains, and context passing between steps. Build research loops, review pipelines, and multi-stage reasoning workflows.

Create a Chain

Use POST /v1/chain/create to define a multi-step chain with named agents and prompts. Enable loop: true for continuous cycling.

curl
curl -X POST https://slopshop.gg/v1/chain/create \
  -H "Authorization: Bearer sk-slop-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"name":"research-loop","loop":true,"steps":[{"agent":"claude","prompt":"Research AI news"},{"agent":"grok","prompt":"Critique and improve"}]}'

Advance a Chain Step

Use POST /v1/chain/advance to manually advance to the next step, passing context from the previous step.

curl
curl -X POST https://slopshop.gg/v1/chain/advance \
  -H "Authorization: Bearer sk-slop-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"chain_id":"CHAIN_ID","context":{"previous_output":"AI news summary from step 1"}}'

Check Chain Status

curl
curl https://slopshop.gg/v1/chain/status/CHAIN_ID \
  -H "Authorization: Bearer sk-slop-your-key-here"

Pause & Resume a Chain

curl
# Pause a running chain
curl -X POST https://slopshop.gg/v1/chain/pause/CHAIN_ID \
  -H "Authorization: Bearer sk-slop-your-key-here"

# Resume a paused chain
curl -X POST https://slopshop.gg/v1/chain/resume/CHAIN_ID \
  -H "Authorization: Bearer sk-slop-your-key-here"

List Your Chains

curl
curl https://slopshop.gg/v1/chain/list \
  -H "Authorization: Bearer sk-slop-your-key-here"

Python SDK

python
chain = client.chain_create(
    name="research-loop",
    loop=True,
    steps=[
        {"agent": "claude", "prompt": "Research AI news"},
        {"agent": "grok", "prompt": "Critique and improve"},
    ]
)
print(chain["chain_id"])

# Advance step
result = client.chain_advance(chain_id=chain["chain_id"], context={"data": "step 1 output"})

# Pause / resume
client.chain_pause(chain_id=chain["chain_id"])
client.chain_resume(chain_id=chain["chain_id"])

Endpoint | Method | Description
/v1/chain/create | POST | Create a new agent chain with steps and optional loop mode
/v1/chain/advance | POST | Advance chain to next step with context passing
/v1/chain/status/:id | GET | Check chain execution status
/v1/chain/pause/:id | POST | Pause a running chain
/v1/chain/resume/:id | POST | Resume a paused chain
/v1/chain/list | GET | List all chains for the authenticated user

Prompt Queue

Queue prompts for deferred or overnight batch execution. Set a schedule and frequency to run batch jobs automatically. Ideal for nightly analysis, daily report generation, or periodic data processing tasks that do not need real-time results.

Queue Prompts for Batch Execution

curl
curl -X POST https://slopshop.gg/v1/chain/queue \
  -H "Authorization: Bearer sk-slop-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"prompts":[{"prompt":"Analyze competitors","agent":"claude"},{"prompt":"Summarize findings","agent":"grok"}],"frequency":"daily","schedule":"2026-03-28T09:00:00Z"}'

One-Time Deferred Job

curl
curl -X POST https://slopshop.gg/v1/chain/queue \
  -H "Authorization: Bearer sk-slop-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"prompts":[{"prompt":"Generate weekly analytics report","agent":"claude"}],"frequency":"once","schedule":"2026-03-28T03:00:00Z"}'
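Computing a job's next run from its schedule and frequency fields can be sketched as below; the field names mirror the requests above, and the recurrence rules are assumptions (the docs only show "daily" and "once"):

```python
from datetime import datetime, timedelta

# Next-run computation for a queued job, per the schedule/frequency fields
# shown above. Recurrence rules are assumptions (docs show "daily"/"once").
def next_run(schedule_iso: str, frequency: str) -> str:
    t = datetime.fromisoformat(schedule_iso.replace("Z", "+00:00"))
    if frequency == "daily":
        t += timedelta(days=1)
    return t.isoformat().replace("+00:00", "Z")

print(next_run("2026-03-28T09:00:00Z", "daily"))  # 2026-03-29T09:00:00Z
```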

Node.js SDK

javascript
const job = await client.chainQueue({
  prompts: [
    { prompt: 'Analyze competitors', agent: 'claude' },
    { prompt: 'Summarize findings', agent: 'grok' },
  ],
  frequency: 'daily',
  schedule: '2026-03-28T09:00:00Z',
});
console.log(job.queue_id);

Template Marketplace

Publish, browse, fork, and rate agent templates. Share your best workflows with the community, discover templates from other builders, and fork them to customize. Rating helps surface the most useful templates.

Publish a Template

curl
curl -X POST https://slopshop.gg/v1/templates/publish \
  -H "Authorization: Bearer sk-slop-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"name":"SEO Content Pipeline","description":"Generate SEO-optimized blog posts with keyword research","steps":[{"api":"llm-blog-outline","input":{"topic":"{{topic}}"}},{"api":"llm-blog-draft"},{"api":"text-readability-score"}],"tags":["seo","content","blog"]}'

Browse the Marketplace

curl
# Browse all templates
curl https://slopshop.gg/v1/templates/browse \
  -H "Authorization: Bearer sk-slop-your-key-here"

# Filter by tag
curl "https://slopshop.gg/v1/templates/browse?tag=security" \
  -H "Authorization: Bearer sk-slop-your-key-here"

Fork a Template

curl
curl -X POST https://slopshop.gg/v1/templates/fork/TEMPLATE_ID \
  -H "Authorization: Bearer sk-slop-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"name":"My Custom SEO Pipeline"}'

Rate a Template (1-5)

curl
curl -X POST https://slopshop.gg/v1/templates/rate/TEMPLATE_ID \
  -H "Authorization: Bearer sk-slop-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"rating":5,"review":"Excellent pipeline, saved me hours of setup"}'

Python SDK

python
# Publish
tmpl = client.templates_publish(name="SEO Pipeline", steps=[...], tags=["seo"])

# Browse
templates = client.templates_browse(tag="security")

# Fork
forked = client.templates_fork(template_id="TMPL_ID", name="My Version")

# Rate
client.templates_rate(template_id="TMPL_ID", rating=5)

Endpoint | Method | Description
/v1/templates/publish | POST | Publish a template to the marketplace
/v1/templates/browse | GET | Browse and search marketplace templates
/v1/templates/fork/:id | POST | Fork a template into your account
/v1/templates/rate/:id | POST | Rate a template 1-5 with optional review

Agent Evaluations

Run structured evaluations against agents using test cases, compare agents head-to-head, and view the public leaderboard. Use evals to benchmark your agent pipelines and track quality over time.

Run an Evaluation

curl
curl -X POST https://slopshop.gg/v1/eval/run \
  -H "Authorization: Bearer sk-slop-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"agent_id":"my-research-agent","test_cases":[{"input":"Summarize quantum computing","expected_keywords":["qubit","superposition","entanglement"]},{"input":"Explain RSA encryption","expected_keywords":["prime","modular","public key"]}]}'
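The keyword-based scoring can be approximated locally. This is a hypothetical sketch; the actual server-side scorer behind /v1/eval/run is not documented, and the averaging scheme here is an assumption:

```python
# Hypothetical sketch of expected_keywords scoring; the actual
# server-side scorer used by /v1/eval/run is not documented.

def score_case(output: str, expected_keywords: list) -> float:
    """Fraction of expected keywords found in the agent's output."""
    text = output.lower()
    hits = sum(1 for kw in expected_keywords if kw.lower() in text)
    return hits / len(expected_keywords) if expected_keywords else 1.0

def score_eval(results: list) -> float:
    """Average per-case score across all test cases."""
    scores = [score_case(out, kws) for out, kws in results]
    return sum(scores) / len(scores)

run = [
    ("Qubits exploit superposition and entanglement.",
     ["qubit", "superposition", "entanglement"]),
    ("RSA relies on prime factorization and a public key.",
     ["prime", "modular", "public key"]),
]
print(round(score_eval(run), 3))  # 0.833: case 1 scores 3/3, case 2 scores 2/3
```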

Compare Two Agents

curl
curl -X POST https://slopshop.gg/v1/eval/compare \
  -H "Authorization: Bearer sk-slop-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"agent_a":"research-v1","agent_b":"research-v2","test_cases":[{"input":"What is CRISPR?"}]}'

View Leaderboard

curl
curl https://slopshop.gg/v1/eval/leaderboard \
  -H "Authorization: Bearer sk-slop-your-key-here"

Get Detailed Report

curl
curl https://slopshop.gg/v1/eval/report/EVAL_ID \
  -H "Authorization: Bearer sk-slop-your-key-here"

Node.js SDK

javascript
const evalResult = await client.evalRun({
  agent_id: 'my-agent',
  test_cases: [
    { input: 'Summarize quantum computing', expected_keywords: ['qubit'] },
  ],
});
console.log(evalResult.score);

const comparison = await client.evalCompare({
  agent_a: 'v1', agent_b: 'v2',
  test_cases: [{ input: 'What is CRISPR?' }],
});
console.log(comparison.winner);
| Endpoint | Method | Description |
| --- | --- | --- |
| /v1/eval/run | POST | Run eval with test cases against an agent |
| /v1/eval/compare | POST | Compare two agents on the same test cases |
| /v1/eval/leaderboard | GET | Public agent leaderboard by eval score |
| /v1/eval/report/:id | GET | Detailed evaluation report with per-case results |

Replay System

Save and replay entire swarm runs. Capture every step, every agent decision, and every output so you can replay them later for debugging, auditing, or demonstration purposes.

Save a Swarm Run

curl
curl -X POST https://slopshop.gg/v1/replay/save \
  -H "Authorization: Bearer sk-slop-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"run_id":"RUN_ID","name":"Q1 research swarm","tags":["research","quarterly"]}'

Retrieve Replay Data

curl
curl https://slopshop.gg/v1/replay/REPLAY_ID \
  -H "Authorization: Bearer sk-slop-your-key-here"

Python SDK

python
# Save a replay
replay = client.replay_save(run_id="RUN_ID", name="Q1 research swarm")

# Retrieve replay
data = client.replay_get(replay_id=replay["replay_id"])
for step in data["steps"]:
    print(f"{step['agent']}: {step['action']}")
| Endpoint | Method | Description |
| --- | --- | --- |
| /v1/replay/save | POST | Save a swarm run for later replay |
| /v1/replay/:id | GET | Retrieve full replay data with all steps |

Credits & Usage

Check your balance, purchase credits, and monitor usage across your account. Every API call deducts credits based on the tool's cost, and core memory APIs are always free (0 credits).

Check Balance

curl
curl https://slopshop.gg/v1/credits/balance \
  -H "Authorization: Bearer sk-slop-your-key-here"

Buy Credits

curl
curl -X POST https://slopshop.gg/v1/credits/buy \
  -H "Authorization: Bearer sk-slop-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"amount":1000}'

Usage Analytics

curl
curl https://slopshop.gg/v1/analytics/usage \
  -H "Authorization: Bearer sk-slop-your-key-here"

Node.js SDK

javascript
// Check balance
const balance = await client.creditsBalance();

// Buy credits
await client.creditsBuy({ amount: 1000 });

// View usage
const usage = await client.analyticsUsage();
| Endpoint | Method | Description |
| --- | --- | --- |
| /v1/credits/balance | GET | Check current credit balance |
| /v1/credits/buy | POST | Purchase credits |
| /v1/analytics/usage | GET | View usage analytics and cost breakdown |

Agent Reputation

Upvote or downvote agents based on the quality of their work. View the reputation leaderboard to find the best-performing agents in the ecosystem. Reputation scores are public and help build trust in multi-agent systems.

Vote on an Agent

curl
curl -X POST https://slopshop.gg/v1/reputation/vote \
  -H "Authorization: Bearer sk-slop-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"agent_id":"research-bot-v2","vote":"up","reason":"Consistently high-quality research output"}'

View Reputation Leaderboard

curl
curl https://slopshop.gg/v1/reputation/leaderboard \
  -H "Authorization: Bearer sk-slop-your-key-here"

Node.js SDK

javascript
// Upvote an agent
await client.reputationVote({ agent_id: 'research-bot-v2', vote: 'up' });

// View leaderboard
const leaders = await client.reputationLeaderboard();
leaders.forEach(a => console.log(`${a.agent_id}: ${a.score}`));
| Endpoint | Method | Description |
| --- | --- | --- |
| /v1/reputation/vote | POST | Upvote or downvote an agent |
| /v1/reputation/leaderboard | GET | View top-rated agents |

Local Compute Enhancement

Run computations locally for speed and privacy, then enhance results with Slopshop cloud tools. Combine local hash computation with cloud-based analysis, or run local text processing and enhance with LLM-powered insights. Best of both worlds: local speed + cloud power.

Enhance Local Results

curl
# Step 1: Run locally (e.g., hash a file)
LOCAL_HASH=$(sha256sum myfile.txt | cut -d' ' -f1)

# Step 2: Enhance with cloud tools
curl -X POST https://slopshop.gg/v1/agent/run \
  -H "Authorization: Bearer sk-slop-your-key-here" \
  -H "Content-Type: application/json" \
  -d "{\"task\":\"I computed SHA256 hash $LOCAL_HASH for myfile.txt. Verify this is a valid SHA256 format and store in memory for audit trail.\",\"store_result\":true}"

Local + Cloud Pipeline

python
import hashlib
from slopshop import Slop

# Local computation (free, instant)
with open("data.csv", "r") as f:
    content = f.read()
local_hash = hashlib.sha256(content.encode()).hexdigest()

# Cloud enhancement (Slopshop)
client = Slop()
result = client.call("llm-summarize", text=content[:5000])
client.call("memory-set", namespace="audit", key=f"file-{local_hash}", value=result["summary"])

Swarm Visualizer

Visualize your agent swarms, chains, and workflows in real time with the interactive Swarm Visualizer. See agent connections, data flow between steps, and execution status at a glance. Access it at /visualizer.

Access the Visualizer

URL
https://slopshop.gg/visualizer

The visualizer reads from your active chains, hive workspaces, and army deployments. Pass your API key as a query parameter or log in via the web UI. The visualizer updates in real time via SSE, streaming agent spawns, task completions, and hive messages as they happen.

Ecosystem Integrations v4.0

Slopshop plugs into every major agent IDE, framework, and workflow tool. One install, universal access to all 82 categories of tools.

MCP Server — slop mcp serve

Start a local MCP server that exposes the full Slopshop catalog to any MCP-compatible client. One command, works everywhere.

bash
# Start the MCP server (default port 3001)
slop mcp serve

# Or with a custom port
slop mcp serve --port 8484

Supported clients and their config:

| Client | Config location | Setup |
| --- | --- | --- |
| Claude Desktop | claude_desktop_config.json | "command": "slop", "args": ["mcp", "serve"] |
| Cursor | .cursor/mcp.json | "command": "slop", "args": ["mcp", "serve"] |
| Goose | ~/.config/goose/config.yaml | command: slop mcp serve |
| Cline | Cline MCP settings panel | Add server with command slop mcp serve |
| OpenCode | opencode.json | "command": "slop", "args": ["mcp", "serve"] |

Goose Recipes

Pre-built Goose recipes that wire Slopshop tools into common workflows. Drop them into your Goose config and go.

bash
# Browse available recipes
ls integrations/goose-recipes/

# Example: security audit recipe
goose run integrations/goose-recipes/security-audit.yaml

See the full collection at integrations/goose-recipes/.

Aider Custom Commands

Aider custom commands let you invoke Slopshop tools inline while pair-programming. Add the commands to your .aider.conf.yml and call them with /slop-<tool>.

yaml — .aider.conf.yml
custom-commands:
  slop-hash:
    command: slop call crypto-hash-sha256 --data "$input"
    description: Hash a string with SHA-256
  slop-resolve:
    command: slop call resolve --data '{"query": "$input"}'
    description: Find the right Slopshop tool for a task

OpenCode Plugin

The OpenCode plugin registers all Slopshop tools as native OpenCode actions. Install once, use everywhere.

bash
# Install the plugin
opencode plugin add slopshop

# Tools are now available in OpenCode sessions
opencode > /tools  # lists all Slopshop tools

Cline Skills

Register Slopshop as a Cline skill set so Cline can autonomously discover and call tools during coding sessions. Add the MCP server via the Cline settings panel, or use the config file:

json — cline_mcp_settings.json
{
  "mcpServers": {
    "slopshop": {
      "command": "slop",
      "args": ["mcp", "serve"],
      "env": { "SLOPSHOP_KEY": "sk-slop-your-key-here" }
    }
  }
}

LangChain / LangGraph Adapters

Use the Slopshop tool catalog as native LangChain tools or LangGraph nodes. Fetch tool definitions in OpenAI format and convert automatically.

python
from langchain.tools import Tool
import requests, os

BASE = "https://slopshop.gg/v1"
KEY = os.environ["SLOPSHOP_KEY"]
headers = {"Authorization": f"Bearer {KEY}"}

# Fetch all tools in OpenAI format
openai_tools = requests.get(f"{BASE}/tools?format=openai", headers=headers).json()["tools"]

# Convert to LangChain tools
def call(slug, **kw):
    return requests.post(f"{BASE}/{slug}", headers=headers, json=kw).json()["data"]

lc_tools = [
    Tool(name=t["function"]["name"], description=t["function"]["description"],
         func=lambda x, s=t["function"]["name"]: call(s, input=x))
    for t in openai_tools
]

# Use with LangGraph
from langgraph.prebuilt import create_react_agent
agent = create_react_agent(model, lc_tools)

Project Scaffolding — slop init

Bootstrap a new project with Slopshop pre-wired. The --full-stack flag includes a frontend, backend, MCP config, and example agent.

bash
# Scaffold a full-stack project
slop init --full-stack my-agent-app

# Minimal API-only scaffold
slop init my-tool-script

Local Agent Pool — slop agents

Spin up a pool of local agents that share a Slopshop key and can be dispatched tasks. Useful for batch processing, overnight workloads, and multi-agent orchestration.

bash
# Start a local agent pool (default: 4 workers)
slop agents

# Custom pool size
slop agents --workers 8

# With a task file
slop agents --tasks tasks.json

Browser/Desktop Primitives v4.0

Slopshop exposes primitives for browser-based and desktop agent workflows. The MCP server bridges directly into Claude Desktop, Cursor, and other IDE clients. The Swarm Visualizer provides real-time SSE streams for monitoring agent activity.

Swarm Visualizer (SSE)

Stream real-time agent activity, army deployments, and hive workspace updates over Server-Sent Events. Use it in your own dashboard or connect via slop tui.

bash
# Stream live swarm events
curl -N https://slopshop.gg/v1/visualizer/stream \
  -H "Authorization: Bearer $KEY"

# Returns SSE events: agent_spawn, task_complete, merkle_update, hive_message
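The stream emits standard Server-Sent Events frames. A minimal consumer can be sketched as follows; the field syntax follows the SSE spec, while the JSON payload shape in `sample` is an assumption rather than documented output:

```python
import json

# Minimal SSE frame parser for the visualizer stream. Field syntax
# follows the Server-Sent Events spec; the JSON payload shape shown
# in `sample` is an assumption, not documented output.

def parse_sse(raw: str):
    """Yield (event, data) pairs from a raw SSE text stream."""
    for frame in raw.strip().split("\n\n"):
        event, data_lines = "message", []
        for line in frame.splitlines():
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data_lines.append(line[len("data:"):].strip())
        yield event, json.loads("\n".join(data_lines))

sample = (
    "event: agent_spawn\n"
    'data: {"agent_id": "worker-1"}\n'
    "\n"
    "event: task_complete\n"
    'data: {"agent_id": "worker-1", "task": "summarize"}\n'
)
for event, data in parse_sse(sample):
    print(event, data["agent_id"])
```

In production you would feed this parser chunks from the `curl -N` stream above (or any HTTP client that keeps the connection open) instead of a canned sample.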

MCP Desktop Bridge

Bootstrap the MCP server for Claude Desktop or Cursor with full tool catalog exposure. All 82 categories are discoverable as native MCP tools.

bash
# Bootstrap MCP for Claude Desktop (auto-configures claude_desktop_config.json)
slop mcp bootstrap --ide=claude-desktop

# Bootstrap for Cursor
slop mcp bootstrap --ide=cursor

Memory 2.0 v4.0

Memory 2.0 extends the free-forever memory tier with graph queries, auto-summarization, and snapshot/pin capabilities. All Memory 2.0 features are free (0 credits).

Graph Query (GraphRAG)

Query your memory as a knowledge graph. Memory keys, namespaces, and knowledge triples are linked into a queryable graph with semantic scoring.

curl
# Query the memory graph
curl -X POST https://slopshop.gg/v1/memory/graph-query \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{"query": "agent orchestration", "depth": 3, "vector_boost": true}'

Returns linked entities, semantic scores, and traversal paths across all your stored memory and knowledge triples.

Auto-Summarize

Automatically summarize large memory namespaces or key groups. Useful for compressing research output, swarm results, or long-running chain data.

curl
curl -X POST https://slopshop.gg/v1/memory/auto-summarize \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{"namespace": "research-swarm-42", "max_tokens": 500}'

Snapshot & Pin

Take a point-in-time snapshot of your entire memory namespace. Snapshots can be pinned for permanent storage or exported for backup.

curl
# Snapshot a namespace
curl -X POST https://slopshop.gg/v1/memory/snapshot \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{"namespace": "research-swarm-42", "pin": true}'

# List snapshots
curl https://slopshop.gg/v1/memory/snapshots \
  -H "Authorization: Bearer $KEY"

Memory Evolution v4.0

Autonomous memory evolution lets agents continuously improve their stored knowledge without manual intervention. Four strategies run on configurable intervals to keep memory namespaces clean, enriched, and relevant.


Start Evolution

curl
curl -X POST https://slopshop.gg/v1/memory/evolve/start \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{"namespace": "research", "strategy": "consolidate", "budget_per_cycle": 5, "interval_minutes": 10}'

Returns an evolution_id that you can use to monitor or stop the evolution process.

Monitor & Stop

Endpoints
GET  /v1/memory/evolve/status    — Check active evolutions
GET  /v1/memory/evolve/log       — View evolution history
POST /v1/memory/evolve/stop      — Stop an active evolution by ID

Memory Decay

Standalone decay endpoint for one-off cleanup. Scores memories by age using a configurable decay factor.

curl
curl -X POST https://slopshop.gg/v1/memory/decay \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{"namespace": "research", "factor": 0.95}'
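One plausible reading of "scores memories by age using a configurable decay factor" is simple exponential decay. The formula and cutoff below are assumptions for illustration, not the documented server behavior:

```python
# Hypothetical exponential-decay scoring; the exact formula used by
# /v1/memory/decay is not documented. With factor 0.95, a memory
# loses about 5% of its score per day of age.

def decay_score(age_days: float, factor: float = 0.95) -> float:
    return factor ** age_days

def prune(memories: dict, factor: float = 0.95, cutoff: float = 0.5):
    """Keep keys whose decayed score stays above the cutoff."""
    return {k: decay_score(age, factor) for k, age in memories.items()
            if decay_score(age, factor) >= cutoff}

ages = {"fresh_note": 1, "old_report": 30}
kept = prune(ages)
print(sorted(kept))  # ['fresh_note']: 0.95**30 is roughly 0.21, below the cutoff
```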

Chain Branching v4.0

Chain branching adds conditional step execution to agent chains. Steps can include if conditions that evaluate against the chain context, enabling dynamic workflows that branch based on intermediate results.

Conditional Steps

Each step in a branching chain can include a condition or if field. If the condition evaluates to false, the step is skipped. Use else_step to jump to an alternative step index.

curl
curl -X POST https://slopshop.gg/v1/chain/run/branching \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "chain_id": "your-chain-id",
    "max_steps": 10,
    "max_iterations": 100
  }'

Step Definition with Conditions

When creating a chain, define steps with conditions:

json
{
  "steps": [
    { "prompt": "Analyze the data", "agent": "claude" },
    { "condition": "step_0_result.confidence > 0.8", "prompt": "Deep dive", "agent": "grok" },
    { "condition": "step_0_result.confidence <= 0.8", "prompt": "Gather more data", "agent": "claude", "else_step": 0 }
  ]
}

Conditions are evaluated safely against the chain context. Supported operators: comparisons, logical operators, and property access on previous step results.
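A safe evaluator for conditions like the ones above can be sketched without `eval`. The server's actual evaluator is not documented; this sketch handles only the comparison-on-a-property-path shape shown in the example:

```python
import operator
import re

# Hedged sketch of a safe condition evaluator for expressions like
# "step_0_result.confidence > 0.8". The server's actual evaluator
# is not documented; this handles only <path> <comparison> <number>.

OPS = {">": operator.gt, ">=": operator.ge, "<": operator.lt,
       "<=": operator.le, "==": operator.eq}

def eval_condition(cond: str, context: dict) -> bool:
    m = re.fullmatch(r"\s*([\w.]+)\s*(>=|<=|==|>|<)\s*([\d.]+)\s*", cond)
    if not m:
        raise ValueError(f"unsupported condition: {cond!r}")
    path, op, number = m.groups()
    value = context
    for part in path.split("."):      # property access on prior step results
        value = value[part]
    return OPS[op](value, float(number))

ctx = {"step_0_result": {"confidence": 0.91}}
print(eval_condition("step_0_result.confidence > 0.8", ctx))   # True
print(eval_condition("step_0_result.confidence <= 0.8", ctx))  # False
```

Restricting evaluation to a whitelisted grammar like this is what makes conditions safe to run against untrusted chain context.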

Grok-Specific Features v4.0

Slopshop includes dedicated Grok integration endpoints that leverage xAI's Grok models for optimization, critique, and autonomous orchestration. Requires XAI_API_KEY.

Grok Optimize

Pass any prompt, code, or workflow to Grok for optimization suggestions. Grok analyzes for efficiency, cost, and performance improvements.

curl
curl -X POST https://slopshop.gg/v1/grok/optimize \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{"input": "Deploy 1k army for text analysis", "optimize_for": "cost"}'

Grok Critique

Submit agent output, research results, or any content for Grok's critical analysis. Returns structured feedback with gaps, contradictions, and improvement suggestions.

curl
curl -X POST https://slopshop.gg/v1/grok/critique \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{"content": "Research report on agent infrastructure...", "depth": "thorough"}'

Grok Overlord Mode

Let Grok autonomously drive multi-LLM chains. In overlord mode, Grok decides which models to call, what tasks to split, and when to loop. Pairs with chain/start and army/deploy.

curl
curl -X POST https://slopshop.gg/v1/grok/overlord \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{"goal": "Deep tech audit on Stripe", "budget_credits": 500, "models": ["claude", "gpt", "deepseek"]}'

Grok selects the optimal model per subtask, manages context passing, and stores all results in free memory automatically.

Safety & Guardrails v4.0

Production safety primitives for agent deployments. Circuit breakers, sandbox isolation, reputation slashing, and chaos testing protect your workloads.

Sandbox Execution

Run untrusted code in an isolated vm.createContext sandbox with strict timeout enforcement. Used internally by exec-javascript and available directly.

curl
curl -X POST https://slopshop.gg/v1/sandbox/execute \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{"code": "return 2 + 2", "timeout_ms": 5000}'

Circuit Breakers

Automatic circuit breakers protect downstream services. When an endpoint fails repeatedly, the breaker trips and returns a cached fallback. Use orch-circuit-breaker-check and orch-circuit-breaker-record to integrate into your own workflows.
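The check/record pattern can be mirrored client-side. The failure threshold and reset timeout below are illustrative assumptions, not the platform's actual values:

```python
import time

# Minimal client-side circuit breaker mirroring the documented
# check/record pattern. The failure threshold and reset timeout
# here are illustrative assumptions.

class CircuitBreaker:
    def __init__(self, threshold: int = 3, reset_after: float = 30.0):
        self.threshold, self.reset_after = threshold, reset_after
        self.failures, self.opened_at = 0, None

    def allow(self) -> bool:
        """Check: is the call permitted, or has the breaker tripped?"""
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.reset_after:
            self.failures, self.opened_at = 0, None   # half-open: try again
            return True
        return False

    def record(self, success: bool) -> None:
        """Record the outcome; trip the breaker on repeated failures."""
        if success:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()

breaker = CircuitBreaker(threshold=2)
for ok in (False, False):
    breaker.record(ok)
print(breaker.allow())  # False: breaker tripped, serve the cached fallback
```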

Reputation & Slashing

Agents in army deployments and tournaments build reputation scores based on eval performance. Underperforming agents can be slashed (reputation reduced) and pruned from future swarms. The reputation ledger is Merkle-verified.

Endpoint Chain
1. POST /v1/eval/run          — Run evaluations to score agent performance
2. POST /v1/reputation/slash   — Slash underperforming agents
3. GET  /v1/reputation/ledger  — View the global reputation leaderboard

Chaos Testing

Inject faults into army deployments to stress-test reliability. Simulate network failures, latency spikes, and agent drops to validate self-healing behavior.

curl
curl -X POST https://slopshop.gg/v1/chaos/test \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{"army_id": "EXT-47", "faults": ["network", "latency", "agent-drop"]}'

Agent Runtime OS v4.0

Slopshop v4.0 ships seven new platform layers that turn the tool catalog into a full agent runtime OS for the Computer-Use Era. Every layer is self-hostable and NIST-aligned.

Agent Identity (SPIFFE/SVID)

Issue verifiable agent identities (HMAC-SHA256 signed JWTs), register in the Agent Name Service (ANS), track reputation scores (0–10 rolling weighted average), and pass typed A2A messages between agents.

curl
# Issue identity token
curl -X POST https://slopshop.gg/v1/identity/issue \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{"agent_id":"research-bot","name":"Research Bot","capabilities":["llm","tools"]}'

# Register in Agent Name Service
curl -X POST https://slopshop.gg/v1/ans/register \
  -H "Authorization: Bearer $KEY" \
  -d '{"name":"research-bot","agent_id":"research-bot","endpoint":"https://myapp.com/agent","capabilities":["research","memory"]}'

# Submit reputation signal
curl -X POST https://slopshop.gg/v1/reputation/signal \
  -H "Authorization: Bearer $KEY" \
  -d '{"agent_id":"research-bot","signal":"success","task":"daily-research"}'

Computer Use Backend

Record Claude's Computer Use sessions — log actions, store screenshots with OCR, run pixel-diff verification, export replay scripts, and gate on human approval. While Claude clicks and types, Slopshop persists state and verifies correctness.

curl
# Start a session
curl -X POST https://slopshop.gg/v1/computer-use/session/start \
  -H "Authorization: Bearer $KEY" \
  -d '{"name":"organize-downloads","task":"Batch resize images and organize by date"}'

# Log a screenshot + run OCR
curl -X POST https://slopshop.gg/v1/computer-use/screenshot \
  -H "Authorization: Bearer $KEY" \
  -d '{"session_id":"SESSION_ID","image_base64":"...","label":"before-resize"}'

# Export replay as Python (pyautogui)
curl -X POST https://slopshop.gg/v1/computer-use/replay \
  -H "Authorization: Bearer $KEY" \
  -d '{"session_id":"SESSION_ID","format":"python"}'

MCP Gateway + Policy Engine

Proxy all MCP tool calls through a policy engine. Create rules (deny, require_approval, rate_limit) based on tool slug, credit spend, agent identity, time range, or tier. Export audit logs as ECS-compatible NDJSON for SIEM integration.

curl
# Get signed MCP manifest
curl https://slopshop.gg/v1/gateway/manifest \
  -H "Authorization: Bearer $KEY"

# Create a policy
curl -X POST https://slopshop.gg/v1/policy/create \
  -H "Authorization: Bearer $KEY" \
  -d '{"name":"block-heavy","rules":[{"condition":"credit_over","value":500,"action":"require_approval"}]}'

# Export audit log (ECS NDJSON)
curl -X POST https://slopshop.gg/v1/gateway/audit/export \
  -H "Authorization: Bearer $KEY" \
  -d '{"format":"ecs","limit":1000}'
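Rule evaluation against a proxied call can be sketched from the example policy above. Only the documented "credit_over" condition is implemented here; the real engine's condition vocabulary and precedence rules are assumptions:

```python
# Sketch of policy-rule evaluation for the MCP gateway. Only the
# "credit_over" condition from the example above is implemented;
# any richer vocabulary is an assumption about the real engine.

def evaluate_policy(rules: list, call: dict) -> str:
    """Return the first matching rule's action, else 'allow'."""
    for rule in rules:
        if rule["condition"] == "credit_over" and call["credits"] > rule["value"]:
            return rule["action"]
    return "allow"

policy = [{"condition": "credit_over", "value": 500,
           "action": "require_approval"}]
print(evaluate_policy(policy, {"tool": "research", "credits": 750}))
# require_approval
print(evaluate_policy(policy, {"tool": "crypto-hash-sha256", "credits": 1}))
# allow
```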

Observability Dashboard

Full distributed tracing, p95/p99 latency, cost attribution per tool and agent, ROI calculator, budget alerts, 4-component health scores, and a public status page with incident management.

curl
# Get full dashboard
curl https://slopshop.gg/v1/observe/dashboard \
  -H "Authorization: Bearer $KEY"

# Set credit budget
curl -X POST https://slopshop.gg/v1/observe/budget/set \
  -H "Authorization: Bearer $KEY" \
  -d '{"budget_credits":10000,"alert_threshold":0.8}'

# Record ROI event
curl -X POST https://slopshop.gg/v1/observe/roi/record \
  -H "Authorization: Bearer $KEY" \
  -d '{"event_type":"pr_merged","value_usd":50,"tool_slug":"devops-semver-bump"}'

Visual DAG Workflow Builder

Create workflow graphs with nodes (tool calls) and edges (data flow). Kahn's topological sort ensures correct execution order. DFS cycle detection prevents infinite loops. Human gates pause execution until approval. 10 pre-built templates included.

curl
# List templates
curl https://slopshop.gg/v1/workflow/templates \
  -H "Authorization: Bearer $KEY"

# Create a workflow
curl -X POST https://slopshop.gg/v1/workflow/create \
  -H "Authorization: Bearer $KEY" \
  -d '{"name":"hash-and-store","nodes":[{"id":"n1","type":"tool","slug":"crypto-hash-sha256"},{"id":"n2","type":"tool","slug":"memory-set"}],"edges":[{"from":"n1","to":"n2","output":"hash"}]}'
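The ordering guarantee can be sketched with Kahn's algorithm over the nodes/edges shape above. The platform pairs Kahn's sort with DFS cycle detection; in this sketch, Kahn's leftover-node check serves as the cycle detector in the same pass:

```python
from collections import defaultdict, deque

# Kahn's topological sort over workflow nodes/edges as described
# above. If the sort consumes fewer nodes than the graph has, a
# cycle exists, so one pass gives both order and cycle detection.

def topo_order(nodes: list, edges: list) -> list:
    indegree = {n: 0 for n in nodes}
    out = defaultdict(list)
    for src, dst in edges:
        out[src].append(dst)
        indegree[dst] += 1
    queue = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in out[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                queue.append(m)
    if len(order) != len(nodes):
        raise ValueError("cycle detected in workflow graph")
    return order

# The hash-and-store workflow from the curl example
print(topo_order(["n1", "n2"], [("n1", "n2")]))  # ['n1', 'n2']
```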

Eval Suite + Model Routing

Create test suites with expected outputs, run benchmarks (5/5 core tests · score 100 · grade A), and configure model routing rules (cost-optimized, performance, round-robin, balanced).

curl
# Run standard benchmark
curl -X POST https://slopshop.gg/v1/eval/benchmark \
  -H "Authorization: Bearer $KEY"
# → {"score":100,"grade":"A","passed":5,"failed":0}

Marketplace v4.0

The Slopshop Marketplace lets agents and developers publish, discover, install, and monetize tools. 15 seed listings included. Publishers earn 70% of every purchase. Handler code is scanned for 16 dangerous patterns before listing.

Template Marketplace

Browse and invoke pre-built agent templates. Fork templates for customization or publish your own.

curl
# Browse templates
curl https://slopshop.gg/v1/marketplace/templates \
  -H "Authorization: Bearer $KEY"

# Invoke a template
curl -X POST https://slopshop.gg/v1/marketplace/invoke \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{"slug": "research-swarm", "params": {"topic": "AI infrastructure"}}'

Plugin Forge

Build custom plugins that extend the Slopshop tool catalog. Plugins can be published to the marketplace and installed by other users. TEE-verified plugins get a trust badge.

bash
# Build and publish a plugin
slop forge build --name="custom-analyzer" --publish=marketplace

# Install a plugin from marketplace
slop forge install custom-analyzer

Advanced Research v4.0

Multi-tier, multi-provider research engine. Five providers cover the global internet: Claude for synthesis, Grok for real-time EN+JA web, DeepSeek for Chinese platforms, OpenAI for academic sources, Yandex for Russian internet. Results are automatically stored in memory with a 30-minute cache. See Research Engine for the full provider and tier reference.

Research Tiers

| Tier | Depth | Cross-Synthesis | Use Case |
| --- | --- | --- | --- |
| basic | Single-pass summary | No | Quick fact checks, simple lookups |
| standard | Multi-source synthesis | No | Market research, competitive analysis |
| advanced | Deep multi-model loop | No | Technical deep-dives, architecture reviews |
| deep | Exhaustive with critique loops | Yes | Comprehensive reports, due diligence |
| planet | All providers + full cross-synthesis | Yes | Global intelligence across all 5 provider networks |
curl
curl
curl -X POST https://slopshop.gg/v1/research \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{"query": "Compare agent infrastructure platforms 2026", "tier": "planet", "provider": "all", "namespace": "my-research"}'

Response includes structured findings per provider, cross-synthesis (deep/planet tiers), sources, confidence scores, and a memory key for retrieval. Results are cached for 30 minutes per query+tier+provider combination.
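A cache keyed on that query+tier+provider triple with a 30-minute TTL can be sketched as follows; the server's actual key derivation is an assumption, since any stable hash of the triple works:

```python
import hashlib
import time

# Sketch of the documented 30-minute cache keyed on
# query + tier + provider. The server's exact key derivation is
# an assumption; any stable hash of the triple behaves the same.

TTL_SECONDS = 30 * 60

def cache_key(query: str, tier: str, provider: str) -> str:
    raw = f"{query}\x00{tier}\x00{provider}".encode()
    return hashlib.sha256(raw).hexdigest()[:16]

_cache = {}

def cached_research(query, tier, provider, fetch):
    key = cache_key(query, tier, provider)
    hit = _cache.get(key)
    if hit and time.monotonic() - hit[0] < TTL_SECONDS:
        return hit[1]                       # fresh: reuse cached findings
    result = fetch(query, tier, provider)   # miss or stale: recompute
    _cache[key] = (time.monotonic(), result)
    return result

calls = []
fake_fetch = lambda q, t, p: calls.append(q) or {"query": q, "tier": t}
cached_research("agent platforms", "deep", "all", fake_fetch)
cached_research("agent platforms", "deep", "all", fake_fetch)
print(len(calls))  # 1: the second call is served from the cache
```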

Memory Upload v4.0

Import files directly into your agent's memory via drag-and-drop or API. Supports Markdown, plain text, JSON, CSV, and code files. Content is chunked, indexed, and made searchable instantly.

curl
curl -X POST https://slopshop.gg/v1/memory/upload \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{"content": "# My Project\n\n## Goals\nBuild the best agent backend...", "namespace": "my-project", "filename": "project-notes.md"}'

Uploaded content is automatically chunked for vector search and tagged with the filename. Use memory-search to find relevant chunks later.
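A naive overlapping chunker illustrates the idea. The real chunk size and overlap used by /v1/memory/upload are not documented; the 200/40 character values below are assumptions:

```python
# Naive chunker illustrating how uploaded content might be split
# for vector search. The real chunk size and overlap used by
# /v1/memory/upload are not documented; 200/40 chars are assumptions.

def chunk(text: str, size: int = 200, overlap: int = 40) -> list:
    step = size - overlap
    return [text[i:i + size]
            for i in range(0, max(len(text) - overlap, 1), step)]

doc = "# My Project\n\n## Goals\n" + "Build the best agent backend. " * 20
pieces = chunk(doc)
print(len(pieces), all(len(p) <= 200 for p in pieces))
```

Overlap between adjacent chunks keeps a sentence that straddles a boundary retrievable from either side.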

Multiplayer Memory v4.0

Shared memory spaces for multi-agent and multi-user teams. Create a shared space, set a retention tier, invite collaborators, and every agent with access reads and writes to the same namespace in real time.

Retention Tiers

| Tier | Lifetime | Use Case |
| --- | --- | --- |
| session | 24 hours | Ephemeral collaboration; context for a single work session |
| daily | 7 days | Short-term projects, sprint-scoped context |
| weekly | 30 days | Team knowledge base with rolling window |
| permanent | Forever | Persistent shared intelligence, long-running agent teams |

Create a Shared Space

curl
curl -X POST https://slopshop.gg/v1/memory/share/create \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "team-research",
    "retention": "weekly",
    "description": "Shared research context for product team"
  }'

# Returns: {"space_id": "space_abc123", "namespace": "team-research", "retention": "weekly", "expires_at": "2026-04-30T00:00:00Z"}
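The expires_at in the response follows directly from the retention tier. A small helper mapping the documented tiers to lifetimes; how the server represents "permanent" internally is an assumption (None here):

```python
from datetime import datetime, timedelta, timezone

# Retention tiers mapped to their documented lifetimes. How the
# server represents "permanent" is an assumption (None here).

LIFETIMES = {
    "session": timedelta(hours=24),
    "daily": timedelta(days=7),
    "weekly": timedelta(days=30),
    "permanent": None,  # never expires
}

def expires_at(tier: str, created: datetime):
    life = LIFETIMES[tier]
    return created + life if life else None

created = datetime(2026, 3, 31, tzinfo=timezone.utc)
print(expires_at("weekly", created).isoformat())  # 2026-04-30T00:00:00+00:00
print(expires_at("permanent", created))           # None
```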

Invite a Collaborator

curl
curl -X POST https://slopshop.gg/v1/memory/collaborator/invite \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{"namespace": "team-research", "invitee_key": "sk_other_agent", "permissions": "read-write"}'

# permissions: "read" | "read-write" | "admin"

Accept an Invitation

curl
curl -X POST https://slopshop.gg/v1/memory/collaborator/accept \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{"invite_id": "INV-abc123"}'

List & Revoke

curl
# List collaborators on a namespace
curl https://slopshop.gg/v1/memory/collaborator/list \
  -H "Authorization: Bearer $KEY" \
  -G -d "namespace=team-research"

# Revoke access
curl -X POST https://slopshop.gg/v1/memory/collaborator/revoke \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{"namespace": "team-research", "collaborator_key": "sk_other_agent"}'

Reading and Writing Shared Memory

Once a collaborator has access, they use the standard memory APIs with the shared namespace. No special syntax needed.

curl
# Write (any collaborator with write permission)
curl -X POST https://slopshop.gg/v1/memory-set \
  -H "Authorization: Bearer $OTHER_AGENT_KEY" \
  -H "Content-Type: application/json" \
  -d '{"key": "latest_findings", "value": "...", "namespace": "team-research"}'

# Read (any collaborator)
curl -X POST https://slopshop.gg/v1/memory-get \
  -H "Authorization: Bearer $MY_KEY" \
  -H "Content-Type: application/json" \
  -d '{"key": "latest_findings", "namespace": "team-research"}'

Memory Compression v4.0

Memory values larger than 512 bytes are automatically compressed with zlib deflateRaw before storage. Compressed values are stored with a ~z~ prefix and transparently decompressed on read. Zero configuration required — it just works.

| Detail | Value |
| --- | --- |
| Algorithm | zlib deflateRaw |
| Threshold | 512 bytes (values below this are stored as-is) |
| Prefix | ~z~ (internal marker, never visible to callers) |
| Decompression | Automatic on every read; callers always receive plain text/JSON |
| Configuration | None; fully automatic |

Compression is applied by the Dream Engine's compress strategy and also triggered automatically during writes when values exceed the threshold. Combined with the Dream Engine, large research outputs can be compressed overnight to free up space while remaining fully retrievable.
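The write/read path can be reproduced with the stdlib: raw deflate (zlib deflateRaw) corresponds to wbits=-15 in Python's zlib. The hex wrapper around the compressed bytes is an assumption for this sketch; the threshold and prefix are the documented values:

```python
import zlib

# Reproduces the documented write path: values over 512 bytes are
# raw-deflate compressed (zlib deflateRaw, i.e. wbits=-15) and
# stored with the "~z~" prefix; reads transparently decompress.

THRESHOLD, PREFIX = 512, "~z~"

def store(value: str) -> str:
    if len(value.encode()) <= THRESHOLD:
        return value                          # small values stored as-is
    c = zlib.compressobj(wbits=-15)           # raw deflate, no zlib header
    blob = c.compress(value.encode()) + c.flush()
    return PREFIX + blob.hex()                # hex wrapper is an assumption

def load(stored: str) -> str:
    if not stored.startswith(PREFIX):
        return stored
    d = zlib.decompressobj(wbits=-15)
    raw = d.decompress(bytes.fromhex(stored[len(PREFIX):])) + d.flush()
    return raw.decode()

big = "research finding: agents need memory. " * 50   # well over 512 bytes
stored = store(big)
print(stored.startswith("~z~"), load(stored) == big)  # True True
```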

Dream Tiers v4.0

Memory dreaming runs in the background, consolidating, linking, and evolving your agent's memories. Dream frequency scales with research tier:

| Tier | Dream Interval | What Happens |
| --- | --- | --- |
| basic | Every 2 hours | Tag consolidation, duplicate removal |
| standard | Every 1 hour | Cross-reference linking, gap detection |
| advanced | Every 30 minutes | Insight synthesis, contradiction flagging |
| deep | Every 15 minutes | Full knowledge graph evolution, priority re-ranking |
curl
curl
curl -X POST https://slopshop.gg/v1/memory/evolve/start \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{"namespace": "my-research", "tier": "advanced"}'

Dream Engine ✨ v5.1

The Dream Engine is Slopshop's REM-cycle memory consolidation system. It runs in the background, synthesizing, compressing, and evolving your agent's memory on a schedule — like a brain consolidating the day's learning overnight. Schedules are persisted to a dream_schedules SQLite table and survive server restarts. v5.1 adds four new strategies (validate, evolve, forecast, reflect), two new start parameters (adversarial mode and salience threshold), a full Intelligence Score report endpoint, and Collective Dream across shared memory spaces.
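Restart persistence comes from keeping schedules in SQLite. The column names below are assumptions about the dream_schedules table; the point of the sketch is that a schedule written before a restart is a plain SELECT away afterward:

```python
import sqlite3

# Hypothetical shape of the dream_schedules table described above.
# Column names are assumptions; the point is that schedules live in
# SQLite, so they survive a server restart.

conn = sqlite3.connect(":memory:")  # the real server would use a file on disk
conn.execute("""
    CREATE TABLE IF NOT EXISTS dream_schedules (
        id INTEGER PRIMARY KEY,
        namespace TEXT NOT NULL,
        strategy TEXT NOT NULL,
        interval_minutes INTEGER NOT NULL,
        next_run_at TEXT
    )
""")
conn.execute(
    "INSERT INTO dream_schedules (namespace, strategy, interval_minutes)"
    " VALUES (?, ?, ?)",
    ("my-agent", "synthesize", 60),
)
conn.commit()

# After a "restart", reloading schedules is a simple SELECT
rows = conn.execute(
    "SELECT namespace, strategy FROM dream_schedules").fetchall()
print(rows)  # [('my-agent', 'synthesize')]
```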

Neuroscience Grounding

Each Dream Engine stage maps to peer-reviewed neuroscience: synthesize mirrors slow-wave sleep memory replay; pattern_extract mirrors hippocampal theta rhythms; insight_generate mirrors REM dreaming and cross-domain binding; compress mirrors synaptic downscaling; associate mirrors neocortical consolidation; validate mirrors prefrontal error detection; evolve mirrors Bayesian neural belief revision; forecast mirrors prospective memory simulation; reflect mirrors metacognitive default-mode activity. Full references at /dream.

All 9 Consolidation Strategies

| Strategy | What It Does | Credits | Key Output |
| --- | --- | --- | --- |
| synthesize | Theme consolidation: merges semantically related memories into unified summaries, eliminates redundancy, surfaces cross-entry threads | 25 | Merged summaries, theme clusters |
| pattern_extract | Recurring pattern mining: identifies patterns, contradictions, and knowledge gaps across the full memory namespace | 20 | Pattern list, contradiction map, gap report |
| insight_generate | Novel cross-domain connection synthesis: derives higher-order insights from raw memory; enable adversarial: true for counterfactual generation that challenges stored assumptions | 30 | Insight list, adversarial challenges (if enabled) |
| compress | Redundancy elimination: applies zlib compression to bulky values, preserving signal while reducing storage; auto-decompresses on every read | 15 | Compressed entries, bytes saved |
| associate | GraphRAG knowledge graph linking: builds weighted associative edges between semantically related memories; results queryable via the graph API | 20 | Edge list, updated graph node count |
| validate (v5.1) | Consistency checking and contradiction detection: scans memory for logical conflicts, stale beliefs, and internal contradictions; flags entries that need reconciliation | 20 | Contradiction pairs, stale keys, consistency score |
| evolve (v5.1) | Bayesian belief updating: strengthens, weakens, or revises beliefs based on accumulated evidence; confidence scores are recalibrated on each run | 30 | Updated posteriors, revised belief map |
| forecast (v5.1) | Monte Carlo probabilistic forecasting: projects future states and outcomes from observed memory patterns; returns confidence intervals and probability distributions | 35 | Forecasts, confidence intervals, scenario tree |
| reflect (v5.1) | Metacognitive self-analysis: examines knowledge quality, growth trajectory, blind spots, and next learning steps; also extracts reusable procedural skill definitions from episodic chains | 25 | Meta-report, extracted procedural skills |

Start a Dream Run (One-Shot)

Trigger an immediate consolidation pass on a namespace. Returns a run ID for status polling. v5.1 adds adversarial and salience_threshold parameters.

curl
curl -X POST https://slopshop.gg/v1/memory/dream/start \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "namespace": "my-agent",
    "strategy": "insight_generate",
    "adversarial": true,
    "salience_threshold": 0.4
  }'

# Returns: {"run_id": "dream_run_abc123", "status": "started", "strategy": "insight_generate"}

POST /v1/memory/dream/start — Full Parameter Reference

namespace — string, required — Memory namespace to consolidate
strategy — string, required — One of the 9 strategies above
adversarial — boolean, optional — Enable counterfactual generation in insight_generate; produces challenges that stress-test stored assumptions. Default: false
salience_threshold — float 0–1, optional — Minimum salience score for a memory entry to be included in the consolidation pass. Lower values include more entries; higher values focus on high-signal memories only. Default: 0.0 (all entries)
model — string, optional — Override the LLM used for this run (claude, grok, deepseek). Defaults to account model preference.
budget — integer, optional — Hard credit cap for this run. Dream stops early if the cap is reached.

Check Dream Status

curl
GET /v1/memory/dream/status/:run_id
Authorization: Bearer $KEY

# Returns: {"run_id": "...", "status": "complete", "memories_processed": 142, "insights_generated": 7}

Schedule Recurring Dreams (Restart-Persistent)

Schedules are written to the dream_schedules SQLite table. The server re-registers all active schedules on startup — a restart never loses your schedule.

curl
curl -X POST https://slopshop.gg/v1/memory/dream/schedule \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "namespace": "my-agent",
    "cron": "0 2 * * *",
    "strategy": "insight_generate",
    "adversarial": true,
    "salience_threshold": 0.3,
    "active": true
  }'

# cron field accepts standard 5-part cron expressions
# active: true/false — pause/resume without deleting
# Returns: {"schedule_id": "sched_xyz", "next_run": "2026-04-02T02:00:00Z"}
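The next_run value for a fixed-time daily pattern like 0 2 * * * can be sanity-checked client-side. A minimal sketch (not part of any Slopshop client; it covers only the daily "M H * * *" case, not full cron syntax):

```python
from datetime import datetime, timedelta

def next_daily_run(now: datetime, hour: int, minute: int) -> datetime:
    """Next occurrence of a daily 'M H * * *' cron pattern strictly after `now`."""
    candidate = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if candidate <= now:
        candidate += timedelta(days=1)  # today's slot already passed: roll to tomorrow
    return candidate

# A schedule created at 22:17 UTC with cron "0 2 * * *" next fires at 02:00 the following day
print(next_daily_run(datetime(2026, 4, 1, 22, 17), 2, 0))  # 2026-04-02 02:00:00
```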

Manage Schedules

Endpoints
GET    /v1/memory/dream/schedules          — List all schedules for your key
GET    /v1/memory/dream/schedules/:id      — Get a single schedule
PATCH  /v1/memory/dream/schedules/:id      — Update cron, strategy, or active status
DELETE /v1/memory/dream/schedules/:id      — Remove a schedule

Legacy Topic Subscriptions

The older topic-subscription interface is still supported for backward compatibility.

Endpoints
POST   /v1/dream/subscribe      — Subscribe to a research topic on a schedule
GET    /v1/dream/insights        — List generated insights
POST   /v1/dream/run             — Trigger immediate topic research
POST   /v1/dream/deploy          — Deploy insight into agent memory
POST   /v1/dream/dismiss         — Dismiss insight
GET    /v1/dream/subscriptions   — List subscriptions

Intelligence Score & Dream Reports v5.1

After a dream run completes, retrieve a full Intelligence Brief: insights count, strategy depth score, procedural skills extracted, duration, and the two composite KPI metrics — Intelligence Score and Dream Efficiency Score.

Intelligence Score — Formula

Intelligence Score = (insights × strategy_depth × 100) / duration_sec

Dream Efficiency Score = (insights × strategy_depth × 100 + skills × 150) / duration_sec

Both scores are capped at 100. Higher is better. A score of 80+ in under 60 seconds is considered excellent. View all historical sessions at /dream-reports.

Strategy Depth Multipliers

synthesize — 1.0 — Single-pass merge: baseline depth
pattern_extract — 1.2 — Multi-pass scan across all entries
compress — 0.5 — Structural, not semantic: lower depth
associate — 1.3 — Graph construction requires cross-entry linking
insight_generate — 1.5 — Higher-order reasoning, cross-domain binding
validate — 1.2 — Contradiction detection across namespace
evolve — 1.5 — Bayesian update requires prior + likelihood computation
forecast — 2.0 — Monte Carlo simulation: highest compute depth
reflect — 1.8 — Metacognitive analysis + skill extraction

Example Calculation

Dream run: strategy=forecast, insights=12, skills=3, duration=45s

Intelligence Score = (12 × 2.0 × 100) / 45 = 2400 / 45 = 53.3

Dream Efficiency Score = (12 × 2.0 × 100 + 3 × 150) / 45 = 2850 / 45 = 63.3
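The calculation above can be reproduced with a small local helper. This is a sketch using the depth multipliers from the table and the constants implied by the worked example and the report payload, not an official client function:

```python
# Depth multipliers per consolidation strategy, as listed in the docs.
DEPTH = {
    "synthesize": 1.0, "pattern_extract": 1.2, "compress": 0.5,
    "associate": 1.3, "insight_generate": 1.5, "validate": 1.2,
    "evolve": 1.5, "forecast": 2.0, "reflect": 1.8,
}

def intelligence_score(insights: int, strategy: str, duration_sec: float) -> float:
    """(insights × strategy_depth × 100) / duration_sec, capped at 100."""
    return min(100.0, insights * DEPTH[strategy] * 100 / duration_sec)

def dream_efficiency_score(insights: int, strategy: str, skills: int,
                           duration_sec: float) -> float:
    """Same formula plus 150 points per extracted procedural skill, same cap."""
    raw = (insights * DEPTH[strategy] * 100 + skills * 150) / duration_sec
    return min(100.0, raw)

print(round(intelligence_score(12, "forecast", 45), 1))         # 53.3
print(round(dream_efficiency_score(12, "forecast", 3, 45), 1))  # 63.3
```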

GET /v1/memory/dream/report/:dream_id

curl
curl https://slopshop.gg/v1/memory/dream/report/dream_run_abc123 \
  -H "Authorization: Bearer $KEY"

# Returns:
{
  "dream_id": "dream_run_abc123",
  "namespace": "my-agent",
  "strategy": "forecast",
  "status": "complete",
  "duration_sec": 45,
  "memories_processed": 142,
  "insights_generated": 12,
  "procedural_skills_extracted": 3,
  "intelligence_score": 53.3,
  "dream_efficiency_score": 63.3,
  "entries": [...],
  "meta": {
    "model": "claude-3-5-sonnet",
    "adversarial": false,
    "salience_threshold": 0.0,
    "credits_used": 35
  }
}

TMR — Targeted Memory Reactivation v5.1

Targeted Memory Reactivation (TMR) is a neuroscience-grounded technique for priming specific memories before a Dream Engine run or agent session. Queue high-priority memory keys, set a reactivation mode, and retrieve a combined reactivation prompt that your agent injects at session start. TMR increases the probability those memories become part of the next consolidation cycle.

POST /v1/memory/tmr/queue

Queue one or more memory keys for targeted reactivation.

curl
curl -X POST https://slopshop.gg/v1/memory/tmr/queue \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "namespace": "my-agent",
    "target_keys": ["project-brief", "competitor-analysis", "q1-goals"],
    "priority": 8,
    "mode": "consolidate",
    "personalization": "Focus on strategic risks and opportunity gaps"
  }'

# Returns:
{
  "cue_id": "tmr_cue_xyz789",
  "queued_keys": 3,
  "combined_reactivation_prompt": "Before this session, recall: [project-brief]... [competitor-analysis]... [q1-goals]... Focus on strategic risks and opportunity gaps.",
  "expires_at": "2026-04-02T02:00:00Z"
}

Parameters

namespace — string, required — Memory namespace containing the target keys
target_keys — string[], required — Array of memory keys to reactivate
priority — integer 1–10, optional — Reactivation weight; higher-priority keys appear first in the combined prompt and receive higher salience during the next dream run. Default: 5
mode — string, optional — consolidate (default) | recall | challenge; controls how memories are framed in the reactivation prompt
personalization — string, optional — Free-text instruction appended to the combined reactivation prompt
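The combined_reactivation_prompt shape can be approximated offline, for example when testing prompt injection without calling the API. A sketch, assuming higher-priority cues simply sort first (the server's exact framing per mode is not documented):

```python
def build_reactivation_prompt(cues, personalization=None):
    """Order cue keys by priority (high first) and frame them as a recall preamble."""
    ordered = sorted(cues, key=lambda c: -c.get("priority", 5))
    keys = [k for cue in ordered for k in cue["target_keys"]]
    prompt = "Before this session, recall: " + " ".join(f"[{k}]..." for k in keys)
    if personalization:
        prompt += " " + personalization
    return prompt

cues = [{"target_keys": ["q1-goals"], "priority": 3},
        {"target_keys": ["project-brief", "competitor-analysis"], "priority": 8}]
print(build_reactivation_prompt(cues, "Focus on strategic risks"))
# Before this session, recall: [project-brief]... [competitor-analysis]... [q1-goals]... Focus on strategic risks
```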

GET /v1/memory/tmr/cues

Retrieve pending TMR cues for a namespace. Use this at agent session start to inject the reactivation prompt into the system message.

curl
curl "https://slopshop.gg/v1/memory/tmr/cues?namespace=my-agent&limit=10&mode=consolidate" \
  -H "Authorization: Bearer $KEY"

# Returns:
{
  "cues": [
    {
      "cue_id": "tmr_cue_xyz789",
      "target_keys": ["project-brief", "competitor-analysis", "q1-goals"],
      "priority": 8,
      "mode": "consolidate",
      "created_at": "2026-04-01T22:00:00Z"
    }
  ],
  "combined_reactivation_prompt": "Before this session, recall: [project-brief summary]... [competitor-analysis summary]... [q1-goals summary]...",
  "total": 1
}

Query Parameters

namespace — string, required — Namespace to fetch cues for
limit — integer, optional — Max cues to return. Default: 20
mode — string, optional — Filter by mode (consolidate, recall, challenge). Omit for all.

Collective Dream v5.1

Run the Dream Engine across a shared Multiplayer Memory space — consolidating the collective knowledge of an entire team or agent swarm in a single overnight pass. Collective Dream reads from a shared space created via POST /v1/memory/share/create and writes insights back to the same space, visible to all collaborators.

POST /v1/memory/dream/collective

curl
curl -X POST https://slopshop.gg/v1/memory/dream/collective \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "space_id": "space_team_abc123",
    "strategy": "synthesize",
    "budget": 500,
    "model": "claude"
  }'

# Returns:
{
  "dream_id": "collective_dream_xyz",
  "space_id": "space_team_abc123",
  "status": "started",
  "poll_endpoint": "/v1/memory/dream/status/collective_dream_xyz",
  "report_endpoint": "/v1/memory/dream/report/collective_dream_xyz"
}

Parameters

space_id — string, required — Shared memory space ID from POST /v1/memory/share/create
strategy — string, required — Any of the 9 Dream Engine strategies
budget — integer, optional — Hard credit cap for this collective run. Cost is debited from the space owner's account. Default: 1000
model — string, optional — LLM to use: claude | grok | deepseek. Defaults to account preference.
adversarial — boolean, optional — Enable counterfactual mode (applies to the insight_generate strategy)
salience_threshold — float 0–1, optional — Minimum salience to include in the collective consolidation pass

Prerequisite

Collective Dream requires a shared memory space. Create one first with POST /v1/memory/share/create, then invite collaborators via POST /v1/memory/collaborator/invite. See the Multiplayer Memory section.

Procedural Skills v5.1

Procedural skills are reusable, parameterized action patterns extracted by the Dream Engine's reflect, forecast, and evolve strategies. Each skill encodes a learned workflow — a sequence of steps the agent has successfully executed — along with a confidence score derived from repeated application. Skills can be injected directly into agent system prompts via the /skills-forge interface.

GET /v1/memory/skills

List procedural skills extracted for a namespace. Filter by confidence threshold, extracting strategy, or limit.

curl
curl "https://slopshop.gg/v1/memory/skills?namespace=my-agent&min_confidence=0.7&strategy=reflect&limit=20" \
  -H "Authorization: Bearer $KEY"

# Returns:
{
  "skills": [
    {
      "skill_id": "skill_abc123",
      "name": "competitive-intelligence-loop",
      "description": "Search competitors, extract signals, synthesize into structured report",
      "tool_chain": ["search-web", "extract-text", "llm-think", "memory-set"],
      "confidence": 0.91,
      "applications": 14,
      "extracted_by": "reflect",
      "dream_id": "dream_run_abc123",
      "created_at": "2026-04-01T03:00:00Z"
    }
  ],
  "total": 1,
  "namespace": "my-agent"
}

Query Parameters

namespace — string, required — Namespace to query skills for
min_confidence — float 0–1, optional — Only return skills at or above this confidence score. Default: 0.0
strategy — string, optional — Filter by extracting strategy: reflect, forecast, or evolve. Omit for all.
limit — integer, optional — Max skills to return. Default: 50

Procedural skills feed directly into the Skills Forge page, where you can browse extracted skills, test them, and generate agent system prompts that include your highest-confidence learned workflows.

Snapshot Branching v5.0

Versioned, Merkle-rooted checkpoints of your entire memory namespace. Fork your agent's memory state, explore alternatives, restore any prior point, or merge two diverged branches back together — like git, but for agent knowledge.

Create a Branch

curl
curl -X POST https://slopshop.gg/v1/memory/branch \
  -H "Authorization: Bearer $SLOP_KEY" \
  -d '{"namespace": "my-project", "label": "before-experiment-7"}'

# Returns: snapshot_id, merkle_root, key_count
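"Merkle-rooted" means the snapshot's integrity hash is derived from per-key leaf hashes, so changing any single value produces a different root. A sketch of one way such a root could be computed (the server's actual hashing scheme is not documented):

```python
import hashlib

def merkle_root(entries: dict) -> str:
    """Pairwise-hash sorted key/value leaves up to a single root digest."""
    level = [hashlib.sha256(f"{k}={v}".encode()).digest()
             for k, v in sorted(entries.items())]
    if not level:
        return hashlib.sha256(b"").hexdigest()
    while len(level) > 1:
        if len(level) % 2:                  # odd count: carry the last node up
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0].hex()

a = merkle_root({"plan": "v1", "owner": "alice"})
b = merkle_root({"plan": "v2", "owner": "alice"})
print(a != b)  # True: any value change produces a different root
```

Because leaves are sorted by key, the root is independent of insertion order, which is what makes it usable for branch comparison.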

Restore from Branch

curl
curl -X POST https://slopshop.gg/v1/memory/restore/mbranch-abc123 \
  -H "Authorization: Bearer $SLOP_KEY"

Merge Two Branches

curl
curl -X POST https://slopshop.gg/v1/memory/merge \
  -H "Authorization: Bearer $SLOP_KEY" \
  -d '{"source_id": "mbranch-aaa", "target_id": "mbranch-bbb", "policy": "auto"}'

# policy: auto | llm-smart | human-in-loop | agent-debate

Endpoints
POST   /v1/memory/branch                — Create Merkle-rooted branch snapshot
POST   /v1/memory/restore/:id           — Restore namespace from branch
GET    /v1/memory/branches              — List all branches
POST   /v1/memory/branch/compare        — Diff two branches
POST   /v1/memory/merge                 — Merge two branches
GET    /v1/memory/conflicts/:merge_id   — View unresolved merge conflicts
POST   /v1/memory/conflicts/:merge_id/resolve — Resolve conflicts manually

Bayesian Memory Calibration v5.0

Apply Bayesian inference to calibrate confidence in stored beliefs. Given a prior probability and observed likelihood, the system computes a posterior and stores the calibrated belief. Use this to build agents with calibrated uncertainty rather than binary true/false memory.

curl
curl -X POST https://slopshop.gg/v1/memory/bayesian/update \
  -H "Authorization: Bearer $SLOP_KEY" \
  -d '{
    "key": "user_will_churn",
    "prior": 0.2,
    "likelihood": 0.8,
    "evidence": "user_opened_cancellation_page",
    "namespace": "crm"
  }'

# Returns: { prior: 0.2, likelihood: 0.8, posterior: 0.667, confidence_delta: 0.467 }
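Under the hood this is Bayes' rule. The sketch below makes the evidence probability under the negated belief explicit, since the endpoint does not document how that term is derived; the 0.1 used here is an illustrative assumption that happens to reproduce the documented posterior:

```python
def bayes_update(prior: float, likelihood: float, p_evidence_given_not: float) -> dict:
    """posterior = P(E|H)·P(H) / (P(E|H)·P(H) + P(E|¬H)·P(¬H))."""
    numerator = likelihood * prior
    posterior = numerator / (numerator + p_evidence_given_not * (1 - prior))
    return {"prior": prior, "likelihood": likelihood,
            "posterior": round(posterior, 3),
            "confidence_delta": round(posterior - prior, 3)}

print(bayes_update(0.2, 0.8, 0.1))
# {'prior': 0.2, 'likelihood': 0.8, 'posterior': 0.667, 'confidence_delta': 0.467}
```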

Episodic Memory Chains v5.0

Link memory entries into temporally ordered chains — like a timeline of agent experiences. Episodes are linked by prev_id/next_id pointers, enabling forward/backward traversal. Supports event, decision, observation, and custom episode types.

curl
# Add an episode
curl -X POST https://slopshop.gg/v1/memory/episode \
  -H "Authorization: Bearer $SLOP_KEY" \
  -d '{"content": "User rejected proposal B", "episode_type": "decision", "prev_id": "ep-xyz123"}'

# Traverse the chain
GET /v1/memory/chain?namespace=project&start=ep-abc&direction=forward&limit=20
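The prev_id/next_id pointers form a doubly linked list over episodes. A local sketch of the traversal semantics (illustrative, not the server implementation):

```python
def traverse(episodes: dict, start: str, direction: str = "forward", limit: int = 20):
    """Walk prev_id/next_id pointers from `start`, collecting episode contents."""
    link = "next_id" if direction == "forward" else "prev_id"
    out, cur = [], start
    while cur is not None and len(out) < limit:
        episode = episodes[cur]
        out.append(episode["content"])
        cur = episode.get(link)   # None terminates the chain
    return out

chain = {
    "ep-1": {"content": "Proposed plan A", "prev_id": None, "next_id": "ep-2"},
    "ep-2": {"content": "User rejected proposal B", "prev_id": "ep-1", "next_id": "ep-3"},
    "ep-3": {"content": "Pivoted to plan C", "prev_id": "ep-2", "next_id": None},
}
print(traverse(chain, "ep-1"))
# ['Proposed plan A', 'User rejected proposal B', 'Pivoted to plan C']
```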

Memory Condition Triggers v5.0

Register event-driven triggers that fire when memory conditions are met — e.g., when key count exceeds a threshold, when a specific key is written, or on a time pattern. Triggers automatically execute a configured action (dream, webhook, chain, etc.).

curl
curl -X POST https://slopshop.gg/v1/memory/trigger \
  -H "Authorization: Bearer $SLOP_KEY" \
  -d '{
    "condition_type": "key_count_exceeds",
    "condition_value": "100",
    "action_type": "dream",
    "action_config": {"strategy": "compress"},
    "namespace": "research"
  }'
Endpoints
POST   /v1/memory/trigger        — Register memory-condition trigger
GET    /v1/memory/triggers       — List all triggers
DELETE /v1/memory/trigger/:id    — Remove a trigger

Procedural Memory v5.0

Store learned, repeatable tool chains as named procedures. Agents can learn what sequence of tools solved a problem, store it as a procedure, and recall it later. Procedures accumulate a success_count as they are executed, enabling agents to prioritize their most reliable strategies.

curl
curl -X POST https://slopshop.gg/v1/memory/procedure/learn \
  -H "Authorization: Bearer $SLOP_KEY" \
  -d '{
    "name": "competitive-analysis",
    "description": "Research competitors, extract signals, synthesize report",
    "tool_chain": ["search-web", "extract-text", "llm-think", "memory-set"],
    "trigger_pattern": "analyze competitors"
  }'

Swarm Orchestration v5.0

Coordinate multiple agents on a single task with automatic context passing and 6-axis quality scoring. Each agent receives the outputs of all previous agents as context, building towards a final synthesized result. Results are stored in memory for future retrieval and dream consolidation.

6-Axis Scoring

success_probability — Fraction of agents that produced valid output
robustness — Success rate weighted by agent reliability
foresight — Estimated long-term value of the output
goal_alignment — How well the result matches the original task
efficiency — Output quality per credit spent
cost_credits — Total credits consumed

curl
curl -X POST https://slopshop.gg/v1/swarm/orchestrate \
  -H "Authorization: Bearer $SLOP_KEY" \
  -d '{
    "task": "Research the current state of multi-agent AI frameworks",
    "agents": [
      {"role": "researcher", "prompt": "Find and summarize recent developments"},
      {"role": "analyst", "prompt": "Extract key trends and gaps"},
      {"role": "synthesizer", "prompt": "Write an actionable executive summary"}
    ],
    "namespace": "research",
    "dream_after": true
  }'
Endpoints
POST   /v1/swarm/orchestrate     — Run structured orchestration with 6-axis scoring
POST   /v1/swarm/create          — Create a persistent swarm configuration
GET    /v1/swarm/:id/status      — Check swarm status
GET    /v1/swarms                — List all swarms
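The context-passing model described above (each agent receives the task plus all prior outputs) can be sketched with plain callables standing in for agents. This illustrates only the sequencing, not the 6-axis scoring:

```python
def orchestrate(task: str, agents: list) -> list:
    """Run agents in order; each sees the accumulated outputs of all previous agents."""
    outputs = []
    for agent in agents:
        context = "\n".join(outputs)      # upstream results, oldest first
        outputs.append(agent(task, context))
    return outputs

# Stub "agents": real ones would call an LLM with role-specific prompts.
researcher = lambda task, ctx: f"findings for: {task}"
analyst    = lambda task, ctx: f"trends from ({ctx})"
results = orchestrate("multi-agent AI frameworks", [researcher, analyst])
print(results[1])  # trends from (findings for: multi-agent AI frameworks)
```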

Research Engine v4.0

Run deep, multi-provider research on any topic. Results are automatically persisted to memory for future retrieval. A 30-minute cache prevents redundant calls for the same query. Combine with the Dream Engine for scheduled overnight research workflows.

Providers

claude (Claude) — Long-form synthesis, reasoning, structured analysis (ANTHROPIC_API_KEY)
grok (Grok / X) — Real-time web + X (Twitter), English and Japanese (GROK_API_KEY)
deepseek (DeepSeek) — Chinese internet: Xiaohongshu (小红书), Zhihu (知乎), WeChat (微信), Bilibili (B站) (DEEPSEEK_API_KEY)
openai (OpenAI) — Academic, scientific, and technical literature (OPENAI_API_KEY)
yandex (Yandex) — Russian internet: Yandex (Яндекс), Telegram channels, VK (YANDEX_API_KEY)
all — Runs all available providers and cross-synthesizes results; requires deep or planet tier

Tiers

basic — Single-pass summary; no cross-synthesis — Quick fact checks
standard — Multi-source synthesis; no cross-synthesis — Market research, competitive analysis
advanced — Deep multi-model loop; no cross-synthesis — Technical deep-dives
deep — Exhaustive with critique loops; cross-synthesis — Comprehensive reports, due diligence
planet — All providers + full cross-synthesis — Global intelligence: Western + Chinese + Russian + Japanese sources

Run Research

curl
# Single provider
curl -X POST https://slopshop.gg/v1/research \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{"query": "latest MCP agent frameworks 2026", "tier": "advanced", "provider": "claude"}'

# Planet tier — all providers, cross-synthesized
curl -X POST https://slopshop.gg/v1/research \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "AI infrastructure trends 2026",
    "tier": "planet",
    "provider": "all",
    "namespace": "my-research"
  }'

# Results are cached 30 minutes per query+tier+provider combination
# Results are automatically stored in memory namespace 'research' (or custom namespace)

Response Format

json
{
  "query": "AI infrastructure trends 2026",
  "tier": "planet",
  "provider": "all",
  "findings": { ... },       // structured findings per provider
  "synthesis": "...",        // cross-provider synthesis (deep/planet only)
  "sources": [...],
  "confidence": 0.92,
  "cached": false,
  "memory_key": "research_a1b2c3",
  "_engine": "real"
}

Research History

Retrieve your past research results. Returns the last 20 by default, sorted by recency.

curl
GET /v1/research/history?limit=20
Authorization: Bearer $KEY

# Returns stored research results from memory namespace 'research'

State Management v4.0

Persistent key-value state for your agents, isolated per API key. Supports atomic increment for counters and rate tracking.

Set / Get State

curl
# Set state
curl -X POST https://slopshop.gg/v1/state/my-counter \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{"value": 0}'

# Get state
GET /v1/state/my-counter

List All State Keys

curl
GET /v1/state
Authorization: Bearer $KEY

# Returns all state key-value pairs for your API key

Atomic Increment

Increment a numeric state value atomically. Safe for concurrent agent use.

curl
curl -X POST https://slopshop.gg/v1/state/page-views/increment \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{"by": 1}'

# by is optional, defaults to 1
# Returns: {"key": "page-views", "value": 42, "previous": 41}
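A server-side atomic increment avoids the read-modify-write race that occurs when two agents GET a value, add locally, and POST it back. The same guarantee, sketched locally with a lock (illustration of the semantics, not the server code):

```python
import threading

class AtomicCounter:
    """Lock-protected counter mirroring the increment endpoint's response shape."""
    def __init__(self, value: int = 0):
        self._value = value
        self._lock = threading.Lock()

    def increment(self, by: int = 1) -> dict:
        with self._lock:                  # serialize concurrent writers
            previous = self._value
            self._value += by
            return {"value": self._value, "previous": previous}

counter = AtomicCounter()
threads = [threading.Thread(target=counter.increment) for _ in range(100)]
for t in threads: t.start()
for t in threads: t.join()
print(counter.increment(0)["value"])  # 100: no lost updates
```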

Files API v4.0

Upload, download, manage, and search files within Slopshop. Full CRUD including rename, copy, and full-text search.

Upload a File

curl
curl -X POST https://slopshop.gg/v1/files/upload \
  -H "Authorization: Bearer $KEY" \
  -F "file=@report.pdf"

List & Search Files

Endpoints
GET /v1/files                    — List all files
GET /v1/files?search=report      — Search by filename
GET /v1/files?tag=research       — Filter by tag
GET /v1/files/search?q=report    — Full-text search endpoint

File Operations

Endpoints
# Delete
DELETE /v1/files/:id

# Rename
POST /v1/files/:id/rename
{"filename": "new-report.pdf"}

# Copy
POST /v1/files/:id/copy
{"filename": "report-copy.pdf"}   # optional new filename

Schedule Management v4.0

Create, manage, and monitor scheduled agent tasks with full run history, pause/resume controls, and manual triggers. Credits are deducted per run and persisted to your balance.

Create a Schedule

curl
curl -X POST https://slopshop.gg/v1/schedules \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Daily report",
    "cron": "0 9 * * *",
    "task": "crypto-hash-sha256",
    "input": {"data": "daily-seed"}
  }'

Get a Single Schedule

curl
GET /v1/schedules/:id
Authorization: Bearer $KEY

Run History

Every schedule run is logged to the schedule_runs table with timestamp, status, error (if any), and credits used.

curl
GET /v1/schedules/:id/history?limit=50
Authorization: Bearer $KEY

# Returns: [{ran_at, status, credits_used, error}, ...]

Pause / Resume / Trigger

Endpoints
POST /v1/schedules/:id/pause    — Disable schedule (sets enabled=false)
POST /v1/schedules/:id/resume   — Re-enable schedule
POST /v1/schedules/:id/trigger  — Run immediately (sets next_run to now)

Agent Templates v4.0

Pre-built agent configurations for common workflows. Each template wires together chains, memory, army deployments, and tool calls into a single invocable unit.

Research Swarm

Multi-model research loop: Claude researches, Grok critiques, Claude improves. Results stored in free memory. Configurable loop count and budget.

bash
slop template run research-swarm \
  --topic="AI agent infrastructure 2026" \
  --loops=10 \
  --budget=200

Content Machine

End-to-end content pipeline: research, outline, draft, edit, publish. Uses hive workspaces for multi-agent collaboration.

Security Audit

Automated security sweep: DNS enumeration, SSL checks, header analysis, broken link detection. Exports a structured report to memory.

Data Pipeline

ETL template: fetch data (sense-url-content), transform (text-csv-to-json, text-json-flatten), analyze (math-statistics), store (memory-set). Fully configurable via pipes.

Custom Templates

Create your own templates by defining a chain of steps with model preferences, memory namespaces, and tool calls. Publish to the marketplace or keep private.

curl
curl -X POST https://slopshop.gg/v1/templates/create \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{"name": "my-pipeline", "steps": [
    {"tool": "sense-url-content", "params": {"url": "$input_url"}},
    {"tool": "text-summarize", "params": {"text": "$prev.content"}},
    {"tool": "memory-set", "params": {"key": "summary_$input_url", "value": "$prev.summary"}}
  ]}'
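The $input_* and $prev.* placeholders suggest plain string substitution between steps. A hedged sketch of how such a resolver might behave (the actual template engine's rules, e.g. for missing variables, are not documented):

```python
def resolve(params: dict, inputs: dict, prev: dict) -> dict:
    """Replace $input_<name> and $prev.<field> placeholders inside string params."""
    out = {}
    for key, value in params.items():
        if isinstance(value, str):
            for name, v in inputs.items():
                value = value.replace(f"$input_{name}", str(v))
            for field, v in prev.items():
                value = value.replace(f"$prev.{field}", str(v))
        out[key] = value
    return out

step = {"text": "$prev.content", "tag": "summary_$input_url"}
print(resolve(step, {"url": "https://example.com"}, {"content": "Page body"}))
# {'text': 'Page body', 'tag': 'summary_https://example.com'}
```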

OAuth Connectors v4.0

Connect external services to your agent workflows via OAuth. Configure once, then agents can authenticate and act on your behalf in GitHub, Slack, Linear, Notion, and more. All tokens are encrypted with AES-256-GCM and auto-rotated nightly.

Configure a Connector

curl
curl -X POST https://slopshop.gg/v1/connectors/config \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{"toolkit":"github","client_id":"YOUR_GITHUB_APP_ID","client_secret":"YOUR_SECRET","scopes":["repo"],"auth_url":"https://github.com/login/oauth/authorize","token_url":"https://github.com/login/oauth/access_token"}'

List & Connect

Endpoints
GET  /v1/connectors/list              — List all configured connectors
GET  /v1/connectors/connect/:toolkit  — Start OAuth flow for a toolkit (returns redirect URL)
DELETE /v1/connectors/:id             — Remove a connector and revoke tokens

Vault Security

All OAuth tokens are stored in an encrypted vault using AES-256-GCM. Tokens are auto-rotated nightly via refresh token exchange. Revocation is immediate and propagates to the external provider.

Webhook Triggers v4.0

Define triggers that fire agent workflows when external events arrive via webhook. Connect GitHub push events, Slack messages, Linear issue updates, or any service that sends webhooks.

Create a Trigger

curl
curl -X POST https://slopshop.gg/v1/triggers/create \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{"name":"on-push","source":"github","event":"push","action":{"chain_id":"deploy-pipeline"}}'

Receive Webhook & List

Endpoints
POST /v1/triggers/webhook/:id  — Webhook receiver (give this URL to external services)
GET  /v1/triggers/list          — List all configured triggers

When a webhook hits the trigger URL, Slopshop validates the payload, matches it to the trigger config, and executes the associated action (chain, pipe, or single tool call).

Audit Export v4.0

Export your full audit trail for compliance, debugging, or SOC2 readiness. Returns every API call, credit transaction, and agent action associated with your key.

curl
curl https://slopshop.gg/v1/audit/export \
  -H "Authorization: Bearer $KEY"

Returns a JSON array of timestamped audit entries with endpoint, credits used, latency, and response hashes. Suitable for SOC2 evidence collection and incident forensics.

Schema Import v4.0

Bulk-import tool schemas into your Slopshop instance. Useful for self-hosted deployments where you want to register custom tools or sync schemas from a remote catalog.

curl
curl -X POST https://slopshop.gg/v1/import/schemas \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{"schemas":[{"slug":"custom-tool","name":"Custom Tool","description":"My custom handler","input":{"type":"object","properties":{"text":{"type":"string"}}},"output":{"type":"object","properties":{"result":{"type":"string"}}}}]}'

Imported schemas are immediately available in tool discovery (GET /v1/tools) and introspection (GET /v1/introspect?slug=custom-tool).

Self-Hosting

Slopshop is fully self-hostable. Run your own instance with zero external dependencies for compute-tier APIs.

Quick Start

bash
git clone https://github.com/slopshop/slopshop.git
cd slopshop
npm install
node server-v2.js

The server starts on port 3000 by default.

Docker

bash
docker build -t slopshop .
docker run -p 3000:3000 \
  -e ANTHROPIC_API_KEY=sk-ant-... \
  -e OPENAI_API_KEY=sk-... \
  slopshop

Environment Variables

PORT — optional — Server port (default: 3000)
ANTHROPIC_API_KEY — required for LLM APIs — Anthropic API key for Claude-powered APIs
OPENAI_API_KEY — required for LLM APIs — OpenAI API key (fallback if Anthropic is not set)
XAI_API_KEY — required for LLM APIs — xAI API key for Grok-powered APIs
DEEPSEEK_API_KEY — required for LLM APIs — DeepSeek API key for DeepSeek-powered APIs
SLOPSHOP_SECRET — optional — Secret for JWT signing and key generation
DEMO_CREDITS — optional — Credits for the demo key (default: 100)
RATE_LIMIT — optional — Max requests per minute per key (default: 60)

North Star

The North Star API lets your agent define a single guiding goal. Setting it triggers a research swarm, stores findings in memory, and returns a summary.

Set North Star

curl
curl -X POST https://slopshop.gg/v1/northstar/set \
  -H "Authorization: Bearer sk-slop-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"goal": "building an open-source AI messaging platform"}'

Response: Triggers a research swarm, stores results in memory, and returns a findings summary with sources and next steps.

Get Current North Star

curl
curl https://slopshop.gg/v1/northstar \
  -H "Authorization: Bearer sk-slop-your-key-here"

Returns the current North Star goal for your account.

Daily Hive Intelligence

Run automated research on your North Star goal. Results are posted to your Hive workspace and stored in memory.

curl
curl -X POST https://slopshop.gg/v1/hive/daily-intelligence \
  -H "Authorization: Bearer sk-slop-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"mode": "medium", "hive_id": "optional-hive-id"}'

Modes

light — 20 credits — Quick scan: top headlines and basic research on your North Star
medium — 35 credits — Deeper analysis with multiple sources, competitor tracking, trend detection
deep — 75 credits — Full research report with citations, market analysis, and actionable recommendations

Results are automatically posted to your Hive channel and stored in memory for later retrieval.

Error Codes & Refunds

Slopshop uses consistent error codes across all 600+ endpoints. Credits are automatically refunded when a handler errors — you only pay for successful calls.

Auto-Refund Policy

If an API handler throws an error after credits are deducted, those credits are automatically refunded to your account. You will never be charged for a failed call. The refund is instant and reflected in your balance immediately.

Rate Limits

All API keys are rate-limited to 120 requests per minute. If you hit the limit, wait 30 seconds before retrying. The rate_limited error includes a retry_after field in seconds.
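Clients should honor retry_after rather than retrying immediately. A minimal sketch of a retry wrapper around any callable that surfaces a rate-limit error (RateLimited here is a hypothetical local exception type, not an SDK class):

```python
import time

class RateLimited(Exception):
    """Raised by a client call when the API responds with rate_limited (429)."""
    def __init__(self, retry_after: int):
        self.retry_after = retry_after

def with_retry(call, max_attempts: int = 3, sleep=time.sleep):
    """Retry `call` on RateLimited, sleeping for the server-provided retry_after."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RateLimited as err:
            if attempt == max_attempts - 1:
                raise                      # out of attempts: propagate
            sleep(err.retry_after)         # respect the server's backoff hint

attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RateLimited(retry_after=30)
    return "ok"

print(with_retry(flaky, sleep=lambda s: None))  # ok
```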

Common Error Codes

missing_fields — 400 — Required fields are missing from the request body. Check the API schema for required parameters.
insufficient_credits — 402 — Your account does not have enough credits for this call. Purchase more at /pricing.
rate_limited — 429 — Too many requests. Wait for retry_after seconds (typically 30s) before retrying.
api_not_found — 404 — The requested API slug does not exist. Check /tools for the full catalog.
unauthorized — 401 — Missing or invalid API key. Include Authorization: Bearer YOUR_KEY in headers.
handler_error — 500 — Internal handler failure. Credits are auto-refunded. Report persistent errors to dev@slopshop.gg.

Slopshop is a self-hostable backend for AI agents: real tools, persistent memory, one execution layer.

82 categories · 600+ handlers · Dream Engine + Multiplayer Memory · Planet Research · 8 core memory APIs free forever · 500 free credits on signup

memory-set · memory-get · memory-search · memory-list · memory-delete · memory-stats · memory-namespace-list · counter-get — always 0 credits