____ __ ____ ____
/ ___\/ / / __ \/ __ \
\__ \/ / / / / / /_/ /
___/ / /___/ /_/ / ____/
/____/_____/\____/_/
S T A T E L E S S · L I G H T W E I G H T
O P E R A T I N G · P R I M I T I V E S
slopshop.gg v4.0 · AI Agent OS
Getting Started
Slopshop is a self-hostable MCP agent runtime OS for the computer-use era, with 600+ real tools across 82 categories. Its north-star features are Dream Engine (REM-style memory consolidation: 5 strategies, schedule-based, restart-persistent) and Multiplayer Memory (shared spaces, collaborator invites, retention tiers). It also ships free evolving persistent memory (GraphRAG + episodic), zero-trust agent identity (NIST-aligned SPIFFE/SVID), a computer-use backend, an MCP gateway with a policy engine, a visual DAG workflow builder, a marketplace, and Army-scale orchestration. Every response includes _engine: "real". Self-hostable, air-gap capable, and designed for production agents.
Quick Install
npm install -g slopshop
Set Your API Key
export SLOPSHOP_KEY=sk-slop-your-key-here
Make Your First Calls
# Sign up for free credits
slop signup
# Natural language (auto-routes to the right tool)
slop "hash hello world"
# Direct tool call
slop call crypto-hash-sha256 --data "hello world"
# Start MCP server for Claude Code / Cursor
slop mcp serve
# Persistent memory (free, 0 credits)
slop memory set mykey "some value"
slop memory get mykey
You will get back a JSON response with real computed results, along with metadata including _engine: "real" confirming it was actually processed.
Why use Slopshop instead of raw APIs?
| Raw APIs | Slopshop |
|---|---|
| Manage 10+ API keys | One API key |
| Build retry logic yourself | Built-in retry + circuit breakers |
| Parse different response formats | Consistent {data, meta, guarantees} envelope |
| No execution proof | _engine: "real" + X-Output-Hash on every response (see trust model) |
| Pay per provider subscription | Pay per credit, memory is free |
| Build observability yourself | X-Credits-Used, X-Latency-Ms, X-RateLimit-* headers |
| No task abstraction | POST /v1/tasks/run with confidence + guarantees |
What _engine: "real" Means
Every Slopshop response includes "_engine": "real" to signal that the result was genuinely computed, not cached from a template or mocked. This is Slopshop's core guarantee: when you call an API, your data is actually processed. Hash APIs run real cryptographic functions. Network APIs make real DNS/HTTP calls. LLM APIs send real prompts to Claude, GPT, Grok, or DeepSeek. There are no fake responses.
Trust & Verification Model
What _engine: "real" means
Every Slopshop API response includes _engine: "real". This means:
- The computation actually ran — your input was processed by real Node.js code, not proxied to an LLM or returned from a template
- Deterministic outputs — the same input always produces the same output for compute-tier APIs
- Sub-millisecond execution — 600+ compute handlers benchmarked at <1ms average
What "execution proof" means today
Every response includes an X-Output-Hash header — a SHA-256 hash of the output. This provides:
- Tamper detection — you can verify the response wasn't modified in transit
- Audit trail — hashes are logged and can be replayed
- Merkle proofs — Army deployments aggregate result hashes into a Merkle tree for batch verification
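The Merkle aggregation mentioned above can be sketched in a few lines. This is an illustration of the general technique, not Slopshop's exact scheme: we assume leaves are hex SHA-256 digests and that sibling pairs are combined by hashing their concatenation, with the last node duplicated on odd-sized levels.

```python
# Illustrative Merkle-root sketch (assumed construction, not Slopshop's
# documented internals): combine hex SHA-256 leaves pairwise up to a root.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def merkle_root(leaf_hashes: list[str]) -> str:
    """Reduce a list of hex digests to a single root digest."""
    if not leaf_hashes:
        return sha256_hex(b"")
    level = list(leaf_hashes)
    while len(level) > 1:
        if len(level) % 2 == 1:      # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256_hex((level[i] + level[i + 1]).encode())
                 for i in range(0, len(level), 2)]
    return level[0]

# Hash three tool outputs, then aggregate into one verifiable root:
leaves = [sha256_hex(out.encode()) for out in ["result-1", "result-2", "result-3"]]
print(merkle_root(leaves))  # one 64-char hex digest covering all three results
```

With a root like this, a verifier only needs log(n) sibling hashes to check that any single result was included in the batch.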
Honest limitations
The trust source is Slopshop's server. Today's verification model is:
- You trust that our server ran the code it claims to
- Output hashes prove consistency but not that specific code executed
- Self-hosting lets you verify by running the same code locally
We want to be upfront about this: _engine: "real" and X-Output-Hash are claims made by our server. They prove consistency and detect tampering in transit, but they do not cryptographically prove that a specific piece of code executed on our infrastructure. The source of trust is still Slopshop. This is a real limitation.
Roadmap: Stronger verification
We're exploring paths toward genuinely trustless verification:
- Trusted Execution Environments (TEE) — Run compute handlers inside Intel SGX or AWS Nitro Enclaves so the hardware attests that specific code ran. This is the most practical path to trustless execution without the latency cost of ZK proofs.
- Reproducible builds — Publish handler source code so self-hosters can verify exact match with cloud deployment
- ZK proofs — For specific high-value operations, ZK-SNARKs could prove computation without revealing inputs. Currently too slow (~100x overhead) for general compute.
Self-hosting as verification
The strongest trust model available today: npm install slopshop && node server-v2.js. Run the exact same code locally. Compare outputs. If our cloud returns the same result as your local instance for the same input, you have reproducible verification without trusting our server.
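Whether you are comparing cloud output against a local run or checking a response against its X-Output-Hash header, the comparison itself is small. A minimal sketch, assuming the header is the hex SHA-256 of the raw response body (confirm this against your own deployment before relying on it):

```python
# Sketch of client-side output verification. Assumes X-Output-Hash is the
# hex SHA-256 of the raw response body; verify that against your deployment.
import hashlib

def matches_output_hash(body: bytes, header_value: str) -> bool:
    """True when the response body hashes to the advertised X-Output-Hash."""
    return hashlib.sha256(body).hexdigest() == header_value

body = b'{"data":{"hash":"..."},"meta":{"engine":"real"}}'
claimed = hashlib.sha256(body).hexdigest()       # stand-in for the real header
print(matches_output_hash(body, claimed))        # True
print(matches_output_hash(b"tampered", claimed)) # False
```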
What You Can Build (Combo Features)
The real power of Slopshop is not individual tools -- it is what you build by combining them. Here are 6 production combos that chain multiple features together.
Non-Stop Research Agent
Chain Claude, Grok, and Claude in an infinite loop with free memory storage. The research agent critiques itself and improves continuously.
# 1. Create the chain
curl -X POST https://slopshop.gg/v1/chain/create \
-H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
-d '{"name":"research-loop","loop":true,"steps":[
{"agent":"claude","prompt":"Research latest AI infrastructure trends","pass_context":true},
{"agent":"grok","prompt":"Critique the research, find gaps and contradictions","pass_context":true},
{"agent":"claude","prompt":"Improve the research based on critique, find new angles","pass_context":true}
]}'
# 2. Queue for overnight
POST /v1/chain/queue {"frequency":"daily"}
# 3. Results auto-stored in free memory
POST /v1/memory-get {"key":"chain_results"}
Agent Organization (5-Agent Team)
Create a full product team with workspace, roles, standups, knowledge sharing, and democratic decision-making.
1. POST /v1/hive/create {"name":"product-team","channels":["research","writing","review"]}
2. POST /v1/copilot/scale {"main_session_id":"product-team","count":5,"roles":["researcher","writer","critic","editor","publisher"]}
3. POST /v1/standup/submit {"yesterday":"Wrote 3 articles","today":"Review and publish"}
4. POST /v1/knowledge/add {"subject":"article-1","predicate":"reviewed_by","object":"critic-agent"}
5. POST /v1/governance/propose {"title":"Should we pivot topics?"}
1,000 Parallel Analysts
Route to the cheapest LLM, deploy a massive army of agents, save the replay for auditing, and verify results with Merkle proofs.
1. POST /v1/router/smart {"task":"analyze financial data","optimize_for":"cost"}
2. POST /v1/army/deploy {"task":"analyze","count":10000,"api":"text-word-count"}
3. POST /v1/replay/save {"name":"big-analysis","events":[...]}
4. POST /v1/proof/merkle {"data":"...results..."}
Self-Improving Agent
Run evals, store lessons in memory, compete in tournaments, and loop to auto-improve over time.
1. POST /v1/eval/run {"agent_slug":"text-word-count","test_cases":[...]}
2. POST /v1/memory-set {"key":"eval_lessons","value":"accuracy was 95%"}
3. POST /v1/tournament/create {"name":"best-analyzer","participants":["agent-v1","agent-v2"]}
4. Use chain to loop: eval -> learn -> re-eval
Credits & Usage
Check your balance, buy credits, and track per-agent usage across your account.
1. GET /v1/credits/balance
2. POST /v1/credits/buy {"amount":1000}
3. GET /v1/analytics/usage
Enterprise Fleet
Set up teams with budgets, monitor usage, forecast costs, and get Slack alerts when budgets run low.
1. POST /v1/teams/create {"name":"Engineering"}
2. POST /v1/keys/set-budget {"monthly_budget":50000}
3. GET /v1/analytics/usage
4. GET /v1/analytics/cost-forecast
5. POST /v1/webhooks/create {"url":"https://slack.com/webhook","events":["budget_alert"]}
Authentication
All API calls require a Bearer token in the Authorization header.
Using Your Key
Authorization: Bearer sk-slop-your-key-here
Getting Your API Key
Sign up to get your API key with 500 free credits:
curl -X POST https://slopshop.gg/v1/auth/signup \
-H "Content-Type: application/json" \
-d '{"email": "you@example.com", "password": "yourpassword"}'Returns {"api_key": "sk-slop-...", "balance": 500}. Store your key securely -- it cannot be retrieved after creation.
To generate additional keys programmatically (requires existing auth), use POST /v1/keys with your existing API key in the Authorization header. This is not the signup endpoint.
Demo Key
For testing, you can use the demo key sk-slop-demo-key-12345678 which comes with 200 credits and is rate-limited to 10 requests per minute. The demo key works for all compute-tier APIs.
Credits & Billing
Slopshop uses a credit system. Every API call costs a specific number of credits based on its complexity.
Credit Cost Tiers
| Cost Level | Credits | Examples |
|---|---|---|
| Free | 0 | memory-set, memory-get, memory-search, memory-list, memory-delete, memory-stats, counter-get |
| Trivial | 1 | UUID generation, base64 encode, hashing, date parse |
| Simple | 1 | Text processing, validation, regex, password hash |
| Medium | 3 | CSV parse, diff, statistics, JSON schema generation |
| Complex | 5 | Network calls (DNS, HTTP, SSL), multi-step compute |
| LLM Small | 10 | Short LLM calls: sentiment, summarize, classify |
| LLM Medium | 10 | Medium LLM calls: blog outline, code review, translate |
| LLM Large | 20 | Long LLM calls: blog draft, code generation, proposals |
Checking Balance
curl https://slopshop.gg/v1/credits/balance \
-H "Authorization: Bearer sk-slop-your-key-here"
Buying Credits
curl -X POST https://slopshop.gg/v1/credits/buy \
-H "Authorization: Bearer sk-slop-your-key-here" \
-H "Content-Type: application/json" \
-d '{"amount": 1000}'
Auto-Reload
Configure auto-reload to automatically purchase more credits when your balance drops below a threshold. Set auto_reload_threshold and auto_reload_amount on your account to enable this.
Rate Limits
| Endpoint | Limit | Window |
|---|---|---|
POST /v1/auth/signup | 5 requests | per hour per IP |
POST /v1/auth/login | 10 requests | per minute per IP |
POST /v1/:slug (any API) | 100 requests | per minute per key |
POST /v1/agent/run | 10 requests | per minute per key |
POST /v1/batch | 20 requests | per minute per key |
Rate limits are per-IP for auth endpoints and per-API-key for tool calls. Exceeding limits returns HTTP 429. Contact dev@slopshop.gg for higher limits on enterprise plans.
Agent-to-Agent Credit Transfer
Transfer credits between agent keys with POST /v1/credits/transfer. Pass to_key and amount. Transfers are atomic and logged to the audit trail. Use this to enable agent-to-agent commerce — one agent paying another for work completed.
curl -X POST https://slopshop.gg/v1/credits/transfer \
-H "Authorization: Bearer sk-slop-your-key-here" \
-H "Content-Type: application/json" \
-d '{"to_key": "sk-slop-other-key", "amount": 50}'
Response Format
Every tool call returns a consistent envelope:
{
"data": { ... tool-specific result ... },
"meta": {
"api": "tool-slug",
"credits_used": 3,
"balance": 97,
"latency_ms": 12,
"engine": "real"
}
}
Response Headers
Every API call response includes these headers:
| Header | Description |
|---|---|
X-Credits-Used | Credits deducted for this call |
X-Credits-Remaining | Balance after this call |
X-Latency-Ms | Server processing time in milliseconds |
X-Request-Id | Unique identifier for this request |
X-Engine | real, llm, or needs_key |
X-RateLimit-Limit | Max requests per minute (default: 60) |
X-RateLimit-Remaining | Requests remaining in current window |
X-RateLimit-Reset | Seconds until rate limit window resets |
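The X-RateLimit-* headers above are enough to implement polite backoff. A minimal sketch (the helper name and header parsing are ours; real responses may vary in casing or types):

```python
# Illustrative backoff helper built on the X-RateLimit-* headers.
def seconds_to_wait(headers: dict) -> int:
    """Return 0 while requests remain in the window, else the reset delay."""
    remaining = int(headers.get("X-RateLimit-Remaining", "1"))
    if remaining > 0:
        return 0
    return int(headers.get("X-RateLimit-Reset", "1"))

print(seconds_to_wait({"X-RateLimit-Remaining": "12", "X-RateLimit-Reset": "30"}))  # 0
print(seconds_to_wait({"X-RateLimit-Remaining": "0", "X-RateLimit-Reset": "30"}))   # 30
```

Sleeping for the returned number of seconds before retrying avoids the HTTP 429 responses described in the Rate Limits section.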
Error Responses
{
"error": {
"code": "error_code",
"message": "Human-readable description"
}
}
HTTP Status Codes
| Code | Meaning |
|---|---|
200 | Success |
400 | Bad request (missing or invalid parameters) |
401 | Unauthorized (missing or invalid API key) |
402 | Insufficient credits |
404 | API not found |
429 | Rate limited |
500 | Server error |
API Reference
All 82 categories and 600+ endpoints are documented below. Each API accepts a POST request with a JSON body and returns JSON. The base URL is https://slopshop.gg/v1/{slug}.
Machine-Readable Specs
| Endpoint | Description |
|---|---|
| GET /v1/openapi.json | Full OpenAPI 3.0 specification — import into Postman, Insomnia, or any OpenAPI-compatible tool |
| GET /v1/tools | JSON list of all tools with slugs, descriptions, categories, and credit costs |
| GET /v1/tools?format=mcp | MCP-formatted tool list for Claude Code and other MCP clients |
| GET /v1/tools?category=crypto | Filter tools by category |
Integration Guides
Slopshop integrates with the tools your agents already use:
| Guide | Description |
|---|---|
| Ecosystem Integrations | MCP Server, Goose Recipes, Aider, OpenCode, Cline Skills, LangChain/LangGraph — v4.0 |
| MCP for Claude Code | Native MCP server — all tools available as first-class Claude tools |
| For AI Agents | LangChain, CrewAI, AutoGPT integration guides |
| CLI quickstart | Terminal interface to all 82 tool categories |
| Zapier integration | No-code access via Zapier actions |
| MCP deep dive | Advanced MCP configuration and usage |
Browse the full API catalog at /tools or use the CLI: slop list
CLI Reference
The Slopshop CLI lets you call any API from the terminal. Install globally with npm.
slop call <slug>
Call any API by slug. Pass data with --data or -d, or pipe JSON via stdin.
# Simple call with inline data
slop call crypto-hash-sha256 --data "hello world"
# Pass JSON body
slop call text-word-count -d '{"text": "Count these words please"}'
# Pipe from file
cat document.txt | slop call llm-summarize
slop pipe <slug>
Run a pre-built pipe (multi-step workflow) by slug.
slop pipe content-machine -d '{"topic": "AI agents", "keywords": "automation"}'
slop pipe security-audit -d '{"data": "{}", "domain": "example.com"}'
slop search <query>
Search available APIs by name or description.
slop search hash
slop search "json convert"
slop search dns
slop list
List all available APIs grouped by category.
slop list
slop list --category "Crypto & Security"
slop balance
Check your current credit balance.
slop balance
slop buy <amount>
Purchase credits.
slop buy 1000
slop health
Check API server health and status.
slop health
slop help
Show help and list all available commands.
slop help
slop call --help
SDK Reference
Python SDK
Install
pip install slopshop
Initialize
from slopshop import Slop
client = Slop(api_key="sk-slop-your-key-here")
# Or use SLOPSHOP_KEY environment variable
client = Slop()
Call an API
result = client.call("crypto-hash-sha256", data="hello world")
print(result["hash"])
Batch Calls
results = client.batch([
("crypto-hash-sha256", {"data": "hello"}),
("crypto-hash-md5", {"data": "hello"}),
("crypto-uuid", {}),
])
for r in results:
    print(r)
Run a Pipe
result = client.pipe("content-machine", topic="AI agents", keywords="automation")
print(result["steps"])
Resolve (call with auto-retry)
result = client.resolve("llm-summarize", text="Long document...", retries=3)
print(result["summary"])
Node.js SDK
Install
npm install slopshop
Initialize
const { Slop } = require('slopshop');
const client = new Slop({ apiKey: 'sk-slop-your-key-here' });
// Or use SLOPSHOP_KEY environment variable
const client = new Slop();
Call an API
const result = await client.call('crypto-hash-sha256', { data: 'hello world' });
console.log(result.hash);
Batch Calls
const results = await client.batch([
['crypto-hash-sha256', { data: 'hello' }],
['crypto-hash-md5', { data: 'hello' }],
['crypto-uuid', {}],
]);
results.forEach(r => console.log(r));
Run a Pipe
const result = await client.pipe('content-machine', {
topic: 'AI agents',
keywords: 'automation',
});
console.log(result.steps);
Resolve (call with auto-retry)
const result = await client.resolve('llm-summarize', {
text: 'Long document...',
}, { retries: 3 });
console.log(result.summary);
Pipes
Slopshop has two distinct pipe systems:
- POST /v1/pipes/:slug — Runs a pre-built pipe. These are named, fixed workflows (e.g. content-machine, security-audit). List all available pre-built pipes with GET /v1/pipes.
- POST /v1/pipe — Runs a custom pipe. You provide a steps array defining which APIs to chain and how to pass data between them. This endpoint gives you full control to build your own multi-step workflows.
Both pipe endpoints execute all steps in sequence, passing data between them automatically. You pay the total credit cost of all steps combined.
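Because a pipe's cost is just the sum of its steps, you can rough out an estimate client-side before running it. The per-step costs below are assumptions taken from the credit tier table (compute-tier calls are 1 credit each); the dry-run preview described later in this document is the authoritative source.

```python
# Rough cost estimate for a custom pipe. Per-step costs are assumptions
# from the credit tier table, not a live lookup; use dry-run for real numbers.
ASSUMED_COSTS = {"crypto-hash-sha256": 1, "text-base64-encode": 1}

def estimate_pipe_credits(steps: list[dict]) -> int:
    """Sum the assumed per-step credit costs of a custom pipe definition."""
    return sum(ASSUMED_COSTS.get(step["api"], 0) for step in steps)

steps = [{"api": "crypto-hash-sha256"}, {"api": "text-base64-encode"}]
print(estimate_pipe_credits(steps))  # 2
```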
Running a Pre-Built Pipe
curl -X POST https://slopshop.gg/v1/pipes/content-machine \
-H "Authorization: Bearer sk-slop-your-key-here" \
-H "Content-Type: application/json" \
-d '{"topic": "How AI agents use APIs", "keywords": "slopshop, automation"}'
Running a Custom Pipe
curl -X POST https://slopshop.gg/v1/pipe \
-H "Authorization: Bearer sk-slop-your-key-here" \
-H "Content-Type: application/json" \
-d '{"steps": [{"api": "crypto-hash-sha256"}, {"api": "text-base64-encode"}], "input": "hello"}'
Listing All Pre-Built Pipes
curl https://slopshop.gg/v1/pipes
All 14 Pipes
| Pipe | Slug | Credits | Category | Description |
|---|---|---|---|---|
| Lead from Text | lead-from-text | 7 | Sales | Extract emails from text, validate they exist, generate prospect profiles. |
| Content Machine | content-machine | 35 | Content | Generate blog outline, draft the post, score readability. |
| Security Audit | security-audit | 7 | Security | Checksum content, validate JSON, check SSL certificate. |
| Code Ship | code-ship | 35 | Dev | Review code, generate tests, get diff stats. |
| Data Clean | data-clean | 5 | Data | Parse CSV to JSON, deduplicate, validate output. |
| Email Intelligence | email-intel | 4 | Analysis | Extract emails, URLs, and phone numbers from text; get word stats. |
| Hash Everything | hash-everything | 4 | Security | Compute MD5, SHA256, SHA512, and full checksum of input data. |
| Text Analyzer | text-analyze | 4 | Analysis | Word count, readability score, keyword extraction, language detection. |
| JSON Pipeline | json-pipeline | 5 | Data | Validate JSON, format it, generate schema, flatten to dot-notation. |
| Meeting to Actions | meeting-to-actions | 20 | Business | Summarize meeting notes, extract action items, draft follow-up email. |
| Code Explainer | code-explain | 30 | Dev | Explain code, document it, generate tests. |
| Crypto Toolkit | crypto-toolkit | 4 | Security | Generate UUID, password, OTP, and a random encryption key. |
| Domain Recon | domain-recon | 20 | Network | DNS lookup, SSL check, HTTP status, email validation for a domain. |
| Onboarding Pack | onboarding-pack | 3 | Dev | Generate fake test user, create a JWT for them, hash their password. |
Agent & Templates
The agent endpoint is Slopshop's killer feature. Describe what you want in plain English — the system picks tools, chains them, executes, and returns a summarized answer. Every run is automatically stored in memory for future reference.
POST /v1/agent/run
Natural language task execution. The agent plans which tools to call, executes them in sequence, and summarizes the results.
curl https://slopshop.gg/v1/agent/run \
-H "Authorization: Bearer YOUR_KEY" \
-H "Content-Type: application/json" \
-d '{"task": "What tech stack does stripe.com use?"}'
Response includes answer, steps (which tools were called and why), run_id (for retrieving from history), total_credits, and time_ms. Each run costs 20 credits overhead + the cost of each tool used.
POST /v1/ask
Simplified version — same engine, cleaner response. Just pass a question field.
curl https://slopshop.gg/v1/ask \
-H "Authorization: Bearer YOUR_KEY" \
-H "Content-Type: application/json" \
-d '{"question": "Hash the word slopshop with SHA256"}'
Agent Templates
Pre-built agent workflows that handle common tasks. Templates automatically store results in memory.
# List available templates
curl https://slopshop.gg/v1/agent/templates
# Run a template
curl https://slopshop.gg/v1/agent/template/security-audit \
-H "Authorization: Bearer YOUR_KEY" \
-H "Content-Type: application/json" \
-d '{"url": "https://example.com"}'
| Template | Description | Input |
|---|---|---|
security-audit | Full security audit — tech stack, SSL, headers, response time | {"url": "..."} |
content-analyzer | Fetch, analyze, and summarize any URL | {"url": "..."} |
data-processor | Transform, filter, and analyze JSON/CSV data | {"data": [...], "instructions": "..."} |
domain-recon | Full domain recon — DNS, tech, SSL, sitemap | {"domain": "..."} |
hash-verify | Hash with SHA-256, SHA-512, and MD5 | {"text": "..."} |
Agent History
Every agent run is auto-stored in memory. Retrieve past runs:
curl https://slopshop.gg/v1/agent/history \
-H "Authorization: Bearer YOUR_KEY"
Returns all past runs with task, answer, tools used, timestamps, and success status.
Scoped API Keys
Create keys with restricted access using POST /v1/auth/create-scoped-key. Scoped keys limit which tool tiers a key can call and optionally cap total credit spend. Use these to safely share keys with untrusted agents or external services.
| Scope | Description |
|---|---|
compute | Pure compute APIs only (hash, parse, math, encoding) |
network | Network APIs only (DNS, HTTP, SSL) |
llm | AI/LLM APIs only |
memory | Memory APIs only |
execute | Code execution APIs only |
* | All tiers (default, same as a normal key) |
# Create a compute-only key with a 100-credit cap
curl -X POST https://slopshop.gg/v1/auth/create-scoped-key \
-H "Authorization: Bearer sk-slop-your-key-here" \
-H "Content-Type: application/json" \
-d '{"scope": "compute", "credit_cap": 100}'
# List all your keys
curl https://slopshop.gg/v1/auth/keys \
-H "Authorization: Bearer sk-slop-your-key-here"
# Revoke a key
curl -X DELETE https://slopshop.gg/v1/auth/keys/sk-slop-scoped-key \
-H "Authorization: Bearer sk-slop-your-key-here"
Python Code Execution
Run Python code in a sandboxed subprocess with POST /v1/exec-python. Supports the standard library (json, math, datetime, re, hashlib, base64, urllib). Returns stdout, stderr, and execution status. Costs 5 credits. Timeout: 30 seconds.
curl -X POST https://slopshop.gg/v1/exec-python \
-H "Authorization: Bearer sk-slop-your-key-here" \
-H "Content-Type: application/json" \
-d '{"code": "import math\nprint(math.factorial(10))"}'
Semantic Memory Search
Search stored memories by semantic similarity with POST /v1/memory-vector-search. Scores memories by text similarity against keys, values, and tags. Returns ranked results by relevance. This is free at 0 credits like all core memory APIs.
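To make "scores memories by text similarity" concrete, here is a deliberately naive sketch of that kind of ranking using token-set (Jaccard) overlap. The server's actual scoring is internal and almost certainly different; this only shows the shape of the computation.

```python
# Naive illustration of ranking memories by text similarity (Jaccard overlap
# of lowercase tokens). Not Slopshop's actual scoring function.
def jaccard(a: str, b: str) -> float:
    """Token-set overlap between two strings, from 0.0 to 1.0."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

memories = {
    "login-bug": "authentication errors when the JWT expires",
    "deploy-note": "deployed v4 to staging",
}
query = "authentication errors"
ranked = sorted(memories, key=lambda k: jaccard(query, memories[k]), reverse=True)
print(ranked[0])  # login-bug
```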
curl -X POST https://slopshop.gg/v1/memory-vector-search \
-H "Authorization: Bearer sk-slop-your-key-here" \
-H "Content-Type: application/json" \
-d '{"namespace": "my-agent", "query": "authentication errors"}'
File Storage
Persistent blob storage for agents. Upload files as base64, download by ID, or list all stored files. Useful for passing large payloads between agent steps or storing agent-generated artifacts.
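The content field is plain base64 of the file bytes. A minimal sketch of building the upload body (the helper name is ours, not part of any SDK):

```python
# Build the JSON body for a file upload: base64-encode the raw bytes.
import base64

def to_upload_body(filename: str, raw: bytes) -> dict:
    """Return {"filename": ..., "content": <base64>} for the upload call."""
    return {"filename": filename, "content": base64.b64encode(raw).decode("ascii")}

body = to_upload_body("report.txt", b"Hello world")
print(body["content"])  # SGVsbG8gd29ybGQ=
```

Decoding with base64.b64decode on download reverses the transformation.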
# Upload a file (base64-encoded content)
curl -X POST https://slopshop.gg/v1/files/upload \
-H "Authorization: Bearer sk-slop-your-key-here" \
-H "Content-Type: application/json" \
-d '{"filename": "report.txt", "content": "SGVsbG8gd29ybGQ="}'
# Download a file
curl https://slopshop.gg/v1/files/{id} \
-H "Authorization: Bearer sk-slop-your-key-here"
# List all files
curl https://slopshop.gg/v1/files \
-H "Authorization: Bearer sk-slop-your-key-here"
SSE Streaming for Agent Runs
Stream agent execution progress in real time by adding "stream": true to POST /v1/agent/run. Returns Server-Sent Events with planning, step, and complete events so your UI can show live progress as the agent works.
curl -X POST https://slopshop.gg/v1/agent/run \
-H "Authorization: Bearer sk-slop-your-key-here" \
-H "Content-Type: application/json" \
-d '{"task": "Analyze slopshop.gg SSL and DNS", "stream": true}'
# Emits SSE events:
# event: planning data: {"tools":["net-ssl-check","net-dns-lookup"]}
# event: step data: {"tool":"net-ssl-check","result":{...}}
# event: complete data: {"answer":"...","total_credits":12}
Auto-Fallback Routing
Agent execution automatically retries failed tool calls with alternative tools. For example, if crypto-hash-sha256 fails, the agent falls back to sha512 or md5. If sense-url-content fails, it falls back to meta or links extraction. Fallback results include "fallback_used": true in the response so you know a substitute was used.
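The server does this for you, but the pattern is easy to picture client-side. A sketch under stated assumptions: the fallback chains below mirror the examples in the paragraph above, and the slugs for the URL fallbacks are illustrative, not confirmed endpoint names.

```python
# Client-side sketch of the fallback pattern (the server applies this
# automatically; chains and the sense-url-* slugs here are illustrative).
FALLBACKS = {
    "crypto-hash-sha256": ["crypto-hash-sha512", "crypto-hash-md5"],
    "sense-url-content": ["sense-url-meta", "sense-url-links"],  # assumed slugs
}

def call_with_fallback(tool: str, call) -> dict:
    """Try the primary tool, then each fallback; tag substituted results."""
    for candidate in [tool] + FALLBACKS.get(tool, []):
        try:
            result = call(candidate)
            if candidate != tool:
                result["fallback_used"] = True
            return result
        except Exception:
            continue
    raise RuntimeError(f"{tool}: all fallbacks exhausted")

# Simulate the primary failing and the first fallback succeeding:
def fake_call(slug):
    if slug == "crypto-hash-sha256":
        raise RuntimeError("primary down")
    return {"tool": slug}

print(call_with_fallback("crypto-hash-sha256", fake_call))
# {'tool': 'crypto-hash-sha512', 'fallback_used': True}
```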
Cost Preview (Dry-Run Mode)
Add "preview": true or "dry_run": true to any POST body to get an estimated cost without executing. Returns estimated credits, estimated USD cost, and whether your current balance can afford it. Works on both POST /v1/{slug} and POST /v1/tasks/run.
curl -X POST https://slopshop.gg/v1/llm-blog-draft \
-H "Authorization: Bearer sk-slop-your-key-here" \
-H "Content-Type: application/json" \
-d '{"topic": "AI agents", "preview": true}'
# Response (no execution, no credit charge):
# {"preview": true, "estimated_credits": 20, "estimated_usd": 0.02, "can_afford": true}
Free Memory Tier
Core memory operations are completely free (0 credits). This lets agents accumulate state on Slopshop with zero friction — the more your agent remembers, the more valuable the platform becomes.
Free APIs (0 credits)
| API | Description |
|---|---|
memory-set | Store a key-value pair with optional tags and namespace |
memory-get | Retrieve a value by key |
memory-search | Search by tag, key substring, or value substring |
memory-list | List all keys in a namespace |
memory-delete | Delete a key |
memory-stats | Namespace statistics |
memory-namespace-list | List all namespaces |
counter-get | Read a counter value |
Paid Memory APIs (1 credit each)
Advanced operations: memory-expire, memory-increment, memory-append, memory-history, memory-export, memory-import, memory-namespace-clear, queue-push, queue-pop, queue-peek, queue-size, counter-increment.
Example: Agent with Persistent Memory
# Store something (FREE)
curl https://slopshop.gg/v1/memory-set \
-H "Authorization: Bearer YOUR_KEY" \
-H "Content-Type: application/json" \
-d '{"namespace": "my-agent", "key": "last-run", "value": "analyzed stripe.com", "tags": "audit,web"}'
# Retrieve it later (FREE)
curl https://slopshop.gg/v1/memory-get \
-H "Authorization: Bearer YOUR_KEY" \
-H "Content-Type: application/json" \
-d '{"namespace": "my-agent", "key": "last-run"}'
# Search across all memories (FREE)
curl https://slopshop.gg/v1/memory-search \
-H "Authorization: Bearer YOUR_KEY" \
-H "Content-Type: application/json" \
-d '{"namespace": "my-agent", "tag": "audit"}'
Agent Chaining (Infinite Consciousness)
Create infinite agent-to-agent chains where the output of one agent becomes the input for the next. Supports loop mode for continuous research cycles, pause/resume for long-running chains, and context passing between steps. Build research loops, review pipelines, and multi-stage reasoning workflows.
Create a Chain
Use POST /v1/chain/create to define a multi-step chain with named agents and prompts. Enable loop: true for continuous cycling.
curl -X POST https://slopshop.gg/v1/chain/create \
-H "Authorization: Bearer sk-slop-your-key-here" \
-H "Content-Type: application/json" \
-d '{"name":"research-loop","loop":true,"steps":[{"agent":"claude","prompt":"Research AI news"},{"agent":"grok","prompt":"Critique and improve"}]}'
Advance a Chain Step
Use POST /v1/chain/advance to manually advance to the next step, passing context from the previous step.
curl -X POST https://slopshop.gg/v1/chain/advance \
-H "Authorization: Bearer sk-slop-your-key-here" \
-H "Content-Type: application/json" \
-d '{"chain_id":"CHAIN_ID","context":{"previous_output":"AI news summary from step 1"}}'
Check Chain Status
curl https://slopshop.gg/v1/chain/status/CHAIN_ID \
-H "Authorization: Bearer sk-slop-your-key-here"
Pause & Resume a Chain
# Pause a running chain
curl -X POST https://slopshop.gg/v1/chain/pause/CHAIN_ID \
-H "Authorization: Bearer sk-slop-your-key-here"
# Resume a paused chain
curl -X POST https://slopshop.gg/v1/chain/resume/CHAIN_ID \
-H "Authorization: Bearer sk-slop-your-key-here"
List Your Chains
curl https://slopshop.gg/v1/chain/list \
-H "Authorization: Bearer sk-slop-your-key-here"
Python SDK
chain = client.chain_create(
name="research-loop",
loop=True,
steps=[
{"agent": "claude", "prompt": "Research AI news"},
{"agent": "grok", "prompt": "Critique and improve"},
]
)
print(chain["chain_id"])
# Advance step
result = client.chain_advance(chain_id=chain["chain_id"], context={"data": "step 1 output"})
# Pause / resume
client.chain_pause(chain_id=chain["chain_id"])
client.chain_resume(chain_id=chain["chain_id"])
| Endpoint | Method | Description |
|---|---|---|
/v1/chain/create | POST | Create a new agent chain with steps and optional loop mode |
/v1/chain/advance | POST | Advance chain to next step with context passing |
/v1/chain/status/:id | GET | Check chain execution status |
/v1/chain/pause/:id | POST | Pause a running chain |
/v1/chain/resume/:id | POST | Resume a paused chain |
/v1/chain/list | GET | List all chains for the authenticated user |
Prompt Queue
Queue prompts for deferred or overnight batch execution. Set a schedule and frequency to run batch jobs automatically. Ideal for nightly analysis, daily report generation, or periodic data processing tasks that do not need real-time results.
Queue Prompts for Batch Execution
curl -X POST https://slopshop.gg/v1/chain/queue \
-H "Authorization: Bearer sk-slop-your-key-here" \
-H "Content-Type: application/json" \
-d '{"prompts":[{"prompt":"Analyze competitors","agent":"claude"},{"prompt":"Summarize findings","agent":"grok"}],"frequency":"daily","schedule":"2026-03-28T09:00:00Z"}'
One-Time Deferred Job
curl -X POST https://slopshop.gg/v1/chain/queue \
-H "Authorization: Bearer sk-slop-your-key-here" \
-H "Content-Type: application/json" \
-d '{"prompts":[{"prompt":"Generate weekly analytics report","agent":"claude"}],"frequency":"once","schedule":"2026-03-28T03:00:00Z"}'
Node.js SDK
const job = await client.chainQueue({
prompts: [
{ prompt: 'Analyze competitors', agent: 'claude' },
{ prompt: 'Summarize findings', agent: 'grok' },
],
frequency: 'daily',
schedule: '2026-03-28T09:00:00Z',
});
console.log(job.queue_id);
Template Marketplace
Publish, browse, fork, and rate agent templates. Share your best workflows with the community, discover templates from other builders, and fork them to customize. Rating helps surface the most useful templates.
Publish a Template
curl -X POST https://slopshop.gg/v1/templates/publish \
-H "Authorization: Bearer sk-slop-your-key-here" \
-H "Content-Type: application/json" \
-d '{"name":"SEO Content Pipeline","description":"Generate SEO-optimized blog posts with keyword research","steps":[{"api":"llm-blog-outline","input":{"topic":"{{topic}}"}},{"api":"llm-blog-draft"},{"api":"text-readability-score"}],"tags":["seo","content","blog"]}'
Browse the Marketplace
# Browse all templates
curl https://slopshop.gg/v1/templates/browse \
-H "Authorization: Bearer sk-slop-your-key-here"
# Filter by tag
curl "https://slopshop.gg/v1/templates/browse?tag=security" \
-H "Authorization: Bearer sk-slop-your-key-here"
Fork a Template
curl -X POST https://slopshop.gg/v1/templates/fork/TEMPLATE_ID \
-H "Authorization: Bearer sk-slop-your-key-here" \
-H "Content-Type: application/json" \
-d '{"name":"My Custom SEO Pipeline"}'
Rate a Template (1-5)
curl -X POST https://slopshop.gg/v1/templates/rate/TEMPLATE_ID \
-H "Authorization: Bearer sk-slop-your-key-here" \
-H "Content-Type: application/json" \
-d '{"rating":5,"review":"Excellent pipeline, saved me hours of setup"}'
Python SDK
# Publish
tmpl = client.templates_publish(name="SEO Pipeline", steps=[...], tags=["seo"])
# Browse
templates = client.templates_browse(tag="security")
# Fork
forked = client.templates_fork(template_id="TMPL_ID", name="My Version")
# Rate
client.templates_rate(template_id="TMPL_ID", rating=5)
| Endpoint | Method | Description |
|---|---|---|
/v1/templates/publish | POST | Publish a template to the marketplace |
/v1/templates/browse | GET | Browse and search marketplace templates |
/v1/templates/fork/:id | POST | Fork a template into your account |
/v1/templates/rate/:id | POST | Rate a template 1-5 with optional review |
Agent Evaluations
Run structured evaluations against agents using test cases, compare agents head-to-head, and view the public leaderboard. Use evals to benchmark your agent pipelines and track quality over time.
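The expected_keywords field suggests coverage-style scoring. As a hedged illustration only — the actual server-side scoring formula is not documented here — a minimal local sketch of per-case keyword scoring:

```python
# Hypothetical sketch of keyword-coverage scoring for eval test cases.
# The real /v1/eval/run scoring formula is an assumption for illustration.
def score_case(output: str, expected_keywords: list[str]) -> float:
    """Fraction of expected keywords found in the agent's output."""
    text = output.lower()
    hits = sum(1 for kw in expected_keywords if kw.lower() in text)
    return hits / len(expected_keywords) if expected_keywords else 1.0

cases = [
    ("Qubits exploit superposition and entanglement.",
     ["qubit", "superposition", "entanglement"]),
    ("RSA relies on prime factorization.",
     ["prime", "modular", "public key"]),
]
scores = [score_case(out, kws) for out, kws in cases]
print(scores)  # first case matches all three keywords, second only one
```

A local scorer like this is also handy for sanity-checking your test cases before submitting them to the eval endpoint.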
Run an Evaluation
curl -X POST https://slopshop.gg/v1/eval/run \
-H "Authorization: Bearer sk-slop-your-key-here" \
-H "Content-Type: application/json" \
-d '{"agent_id":"my-research-agent","test_cases":[{"input":"Summarize quantum computing","expected_keywords":["qubit","superposition","entanglement"]},{"input":"Explain RSA encryption","expected_keywords":["prime","modular","public key"]}]}'
Compare Two Agents
curl -X POST https://slopshop.gg/v1/eval/compare \
-H "Authorization: Bearer sk-slop-your-key-here" \
-H "Content-Type: application/json" \
-d '{"agent_a":"research-v1","agent_b":"research-v2","test_cases":[{"input":"What is CRISPR?"}]}'
View Leaderboard
curl https://slopshop.gg/v1/eval/leaderboard \
-H "Authorization: Bearer sk-slop-your-key-here"
Get Detailed Report
curl https://slopshop.gg/v1/eval/report/EVAL_ID \
-H "Authorization: Bearer sk-slop-your-key-here"
Node.js SDK
const evalResult = await client.evalRun({
agent_id: 'my-agent',
test_cases: [
{ input: 'Summarize quantum computing', expected_keywords: ['qubit'] },
],
});
console.log(evalResult.score);
const comparison = await client.evalCompare({
agent_a: 'v1', agent_b: 'v2',
test_cases: [{ input: 'What is CRISPR?' }],
});
console.log(comparison.winner);
| Endpoint | Method | Description |
|---|---|---|
/v1/eval/run | POST | Run eval with test cases against an agent |
/v1/eval/compare | POST | Compare two agents on the same test cases |
/v1/eval/leaderboard | GET | Public agent leaderboard by eval score |
/v1/eval/report/:id | GET | Detailed evaluation report with per-case results |
Replay System
Save and replay entire swarm runs. Capture every step, every agent decision, and every output so you can replay them later for debugging, auditing, or demonstration purposes.
Save a Swarm Run
curl -X POST https://slopshop.gg/v1/replay/save \
-H "Authorization: Bearer sk-slop-your-key-here" \
-H "Content-Type: application/json" \
-d '{"run_id":"RUN_ID","name":"Q1 research swarm","tags":["research","quarterly"]}'
Retrieve Replay Data
curl https://slopshop.gg/v1/replay/REPLAY_ID \
-H "Authorization: Bearer sk-slop-your-key-here"
Python SDK
# Save a replay
replay = client.replay_save(run_id="RUN_ID", name="Q1 research swarm")
# Retrieve replay
data = client.replay_get(replay_id=replay["replay_id"])
for step in data["steps"]:
    print(f"{step['agent']}: {step['action']}")
| Endpoint | Method | Description |
|---|---|---|
/v1/replay/save | POST | Save a swarm run for later replay |
/v1/replay/:id | GET | Retrieve full replay data with all steps |
Credits & Usage
Check your balance, purchase credits, and monitor usage across your account. Every API call deducts credits based on the tool's cost, and core memory APIs are always free (0 credits).
Check Balance
curl https://slopshop.gg/v1/credits/balance \
-H "Authorization: Bearer sk-slop-your-key-here"
Buy Credits
curl -X POST https://slopshop.gg/v1/credits/buy \
-H "Authorization: Bearer sk-slop-your-key-here" \
-H "Content-Type: application/json" \
-d '{"amount":1000}'
Usage Analytics
curl https://slopshop.gg/v1/analytics/usage \
-H "Authorization: Bearer sk-slop-your-key-here"
Node.js SDK
// Check balance
const balance = await client.creditsBalance();
// Buy credits
await client.creditsBuy({ amount: 1000 });
// View usage
const usage = await client.analyticsUsage();
| Endpoint | Method | Description |
|---|---|---|
/v1/credits/balance | GET | Check current credit balance |
/v1/credits/buy | POST | Purchase credits |
/v1/analytics/usage | GET | View usage analytics and cost breakdown |
Agent Reputation
Upvote or downvote agents based on the quality of their work. View the reputation leaderboard to find the best-performing agents in the ecosystem. Reputation scores are public and help build trust in multi-agent systems.
Vote on an Agent
curl -X POST https://slopshop.gg/v1/reputation/vote \
-H "Authorization: Bearer sk-slop-your-key-here" \
-H "Content-Type: application/json" \
-d '{"agent_id":"research-bot-v2","vote":"up","reason":"Consistently high-quality research output"}'
View Reputation Leaderboard
curl https://slopshop.gg/v1/reputation/leaderboard \
-H "Authorization: Bearer sk-slop-your-key-here"
Node.js SDK
// Upvote an agent
await client.reputationVote({ agent_id: 'research-bot-v2', vote: 'up' });
// View leaderboard
const leaders = await client.reputationLeaderboard();
leaders.forEach(a => console.log(`${a.agent_id}: ${a.score}`));
| Endpoint | Method | Description |
|---|---|---|
/v1/reputation/vote | POST | Upvote or downvote an agent |
/v1/reputation/leaderboard | GET | View top-rated agents |
Local Compute Enhancement
Run computations locally for speed and privacy, then enhance results with Slopshop cloud tools. Combine local hash computation with cloud-based analysis, or run local text processing and enhance with LLM-powered insights. Best of both worlds: local speed + cloud power.
Enhance Local Results
# Step 1: Run locally (e.g., hash a file)
LOCAL_HASH=$(sha256sum myfile.txt | cut -d' ' -f1)
# Step 2: Enhance with cloud tools
curl -X POST https://slopshop.gg/v1/agent/run \
-H "Authorization: Bearer sk-slop-your-key-here" \
-H "Content-Type: application/json" \
-d "{\"task\":\"I computed SHA256 hash $LOCAL_HASH for myfile.txt. Verify this is a valid SHA256 format and store in memory for audit trail.\",\"store_result\":true}"
Local + Cloud Pipeline
import hashlib
from slopshop import Slop
# Local computation (free, instant)
with open("data.csv", "r") as f:
    content = f.read()
local_hash = hashlib.sha256(content.encode()).hexdigest()
# Cloud enhancement (Slopshop)
client = Slop()
result = client.call("llm-summarize", text=content[:5000])
client.call("memory-set", namespace="audit", key=f"file-{local_hash}", value=result["summary"])
Swarm Visualizer
Visualize your agent swarms, chains, and workflows in real time with the interactive Swarm Visualizer. See agent connections, data flow between steps, and execution status at a glance. Access it at /visualizer.
Access the Visualizer
https://slopshop.gg/visualizer
The visualizer reads from your active chains, hive workspaces, and army deployments. Pass your API key as a query parameter or log in via the web UI. The visualizer updates in real time via SSE, showing:
- Agent nodes with status indicators (active, paused, completed)
- Data flow arrows showing context passing between chain steps
- Credit usage per agent and per step
- Timeline view of execution history
Ecosystem Integrations v4.0
Slopshop plugs into every major agent IDE, framework, and workflow tool. One install, universal access to all 82 categories of tools.
MCP Server — slop mcp serve
Start a local MCP server that exposes the full Slopshop catalog to any MCP-compatible client. One command, works everywhere.
# Start the MCP server (default port 3001)
slop mcp serve
# Or with a custom port
slop mcp serve --port 8484
Supported clients and their config:
| Client | Config location | Setup |
|---|---|---|
| Claude Desktop | claude_desktop_config.json | "command": "slop", "args": ["mcp", "serve"] |
| Cursor | .cursor/mcp.json | "command": "slop", "args": ["mcp", "serve"] |
| Goose | ~/.config/goose/config.yaml | command: slop mcp serve |
| Cline | Cline MCP settings panel | Add server with command slop mcp serve |
| OpenCode | opencode.json | "command": "slop", "args": ["mcp", "serve"] |
Goose Recipes
Pre-built Goose recipes that wire Slopshop tools into common workflows. Drop them into your Goose config and go.
# Browse available recipes
ls integrations/goose-recipes/
# Example: security audit recipe
goose run integrations/goose-recipes/security-audit.yaml
See the full collection at integrations/goose-recipes/.
Aider Custom Commands
Aider custom commands let you invoke Slopshop tools inline while pair-programming. Add the commands to your .aider.conf.yml and call them with /slop-<tool>.
custom-commands:
  slop-hash:
    command: slop call crypto-hash-sha256 --data "$input"
    description: Hash a string with SHA-256
  slop-resolve:
    command: slop call resolve --data '{"query": "$input"}'
    description: Find the right Slopshop tool for a task
OpenCode Plugin
The OpenCode plugin registers all Slopshop tools as native OpenCode actions. Install once, use everywhere.
# Install the plugin
opencode plugin add slopshop
# Tools are now available in OpenCode sessions
opencode > /tools # lists all Slopshop tools
Cline Skills
Register Slopshop as a Cline skill set so Cline can autonomously discover and call tools during coding sessions. Add the MCP server via the Cline settings panel, or use the config file:
{
"mcpServers": {
"slopshop": {
"command": "slop",
"args": ["mcp", "serve"],
"env": { "SLOPSHOP_KEY": "sk-slop-your-key-here" }
}
}
}
LangChain / LangGraph Adapters
Use the Slopshop tool catalog as native LangChain tools or LangGraph nodes. Fetch tool definitions in OpenAI format and convert automatically.
from langchain.tools import Tool
import requests, os
BASE = "https://slopshop.gg/v1"
KEY = os.environ["SLOPSHOP_KEY"]
headers = {"Authorization": f"Bearer {KEY}"}
# Fetch all tools in OpenAI format
openai_tools = requests.get(f"{BASE}/tools?format=openai", headers=headers).json()["tools"]
# Convert to LangChain tools
def call(slug, **kw):
    return requests.post(f"{BASE}/{slug}", headers=headers, json=kw).json()["data"]

lc_tools = [
    Tool(name=t["function"]["name"], description=t["function"]["description"],
         func=lambda x, s=t["function"]["name"]: call(s, input=x))
    for t in openai_tools
]
# Use with LangGraph
from langgraph.prebuilt import create_react_agent
agent = create_react_agent(model, lc_tools)
Project Scaffolding — slop init
Bootstrap a new project with Slopshop pre-wired. The --full-stack flag includes a frontend, backend, MCP config, and example agent.
# Scaffold a full-stack project
slop init --full-stack my-agent-app
# Minimal API-only scaffold
slop init my-tool-script
Local Agent Pool — slop agents
Spin up a pool of local agents that share a Slopshop key and can be dispatched tasks. Useful for batch processing, overnight workloads, and multi-agent orchestration.
# Start a local agent pool (default: 4 workers)
slop agents
# Custom pool size
slop agents --workers 8
# With a task file
slop agents --tasks tasks.json
Browser/Desktop Primitives v4.0
Slopshop exposes primitives for browser-based and desktop agent workflows. The MCP server bridges directly into Claude Desktop, Cursor, and other IDE clients. The Swarm Visualizer provides real-time SSE streams for monitoring agent activity.
Swarm Visualizer (SSE)
Stream real-time agent activity, army deployments, and hive workspace updates over Server-Sent Events. Use it in your own dashboard or connect via slop tui.
# Stream live swarm events
curl -N https://slopshop.gg/v1/visualizer/stream \
-H "Authorization: Bearer $KEY"
# Returns SSE events: agent_spawn, task_complete, merkle_update, hive_message
MCP Desktop Bridge
Bootstrap the MCP server for Claude Desktop or Cursor with full tool catalog exposure. All 82 categories are discoverable as native MCP tools.
# Bootstrap MCP for Claude Desktop (auto-configures claude_desktop_config.json)
slop mcp bootstrap --ide=claude-desktop
# Bootstrap for Cursor
slop mcp bootstrap --ide=cursor
Memory 2.0 v4.0
Memory 2.0 extends the free-forever memory tier with graph queries, auto-summarization, and snapshot/pin capabilities. All Memory 2.0 features are free (0 credits).
Graph Query (GraphRAG)
Query your memory as a knowledge graph. Memory keys, namespaces, and knowledge triples are linked into a queryable graph with semantic scoring.
# Query the memory graph
curl -X POST https://slopshop.gg/v1/memory/graph-query \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{"query": "agent orchestration", "depth": 3, "vector_boost": true}'
Returns linked entities, semantic scores, and traversal paths across all your stored memory and knowledge triples.
Auto-Summarize
Automatically summarize large memory namespaces or key groups. Useful for compressing research output, swarm results, or long-running chain data.
curl -X POST https://slopshop.gg/v1/memory/auto-summarize \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{"namespace": "research-swarm-42", "max_tokens": 500}'
Snapshot & Pin
Take a point-in-time snapshot of your entire memory namespace. Snapshots can be pinned for permanent storage or exported for backup.
# Snapshot a namespace
curl -X POST https://slopshop.gg/v1/memory/snapshot \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{"namespace": "research-swarm-42", "pin": true}'
# List snapshots
curl https://slopshop.gg/v1/memory/snapshots \
-H "Authorization: Bearer $KEY"
Memory Evolution v4.0
Autonomous memory evolution lets agents continuously improve their stored knowledge without manual intervention. Four strategies run on configurable intervals to keep memory namespaces clean, enriched, and relevant.
Strategies
- consolidate — Groups related memory keys by prefix and merges them into consolidated entries, reducing clutter
- enrich — Adds metadata (char count, age, timestamps) to existing memories for richer context
- decay — Archives stale memories (older than 7 days) to a separate namespace, keeping active memory lean
- summarize — Groups keys by prefix and produces summaries, compressing large namespaces into digestible overviews
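The decay strategy's selection rule (archive anything older than 7 days) can be sketched locally. This is an illustrative model only — the in-memory dict and field names stand in for the real memory store:

```python
# Sketch of the decay strategy's selection logic: entries older than
# 7 days move to an archive namespace. Field names are illustrative.
from datetime import datetime, timedelta, timezone

def split_stale(memories: dict, now: datetime, max_age_days: int = 7):
    """Partition memories into (active, archived) by age."""
    cutoff = now - timedelta(days=max_age_days)
    active, archived = {}, {}
    for key, entry in memories.items():
        target = archived if entry["updated_at"] < cutoff else active
        target[key] = entry
    return active, archived

now = datetime(2026, 3, 28, tzinfo=timezone.utc)
store = {
    "fresh": {"value": "a", "updated_at": now - timedelta(days=1)},
    "stale": {"value": "b", "updated_at": now - timedelta(days=30)},
}
active, archived = split_stale(store, now)
print(sorted(active), sorted(archived))  # ['fresh'] ['stale']
```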
Start Evolution
curl -X POST https://slopshop.gg/v1/memory/evolve/start \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{"namespace": "research", "strategy": "consolidate", "budget_per_cycle": 5, "interval_minutes": 10}'
Returns an evolution_id that you can use to monitor or stop the evolution process.
Monitor & Stop
GET /v1/memory/evolve/status — Check active evolutions
GET /v1/memory/evolve/log — View evolution history
POST /v1/memory/evolve/stop — Stop an active evolution by ID
Memory Decay
Standalone decay endpoint for one-off cleanup. Scores memories by age using a configurable decay factor.
curl -X POST https://slopshop.gg/v1/memory/decay \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{"namespace": "research", "factor": 0.95}'
Chain Branching v4.0
Chain branching adds conditional step execution to agent chains. Steps can include if conditions that evaluate against the chain context, enabling dynamic workflows that branch based on intermediate results.
Conditional Steps
Each step in a branching chain can include a condition or if field. If the condition evaluates to false, the step is skipped. Use else_step to jump to an alternative step index.
curl -X POST https://slopshop.gg/v1/chain/run/branching \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{
"chain_id": "your-chain-id",
"max_steps": 10,
"max_iterations": 100
}'
Step Definition with Conditions
When creating a chain, define steps with conditions:
{
"steps": [
{ "prompt": "Analyze the data", "agent": "claude" },
{ "condition": "step_0_result.confidence > 0.8", "prompt": "Deep dive", "agent": "grok" },
{ "condition": "step_0_result.confidence <= 0.8", "prompt": "Gather more data", "agent": "claude", "else_step": 0 }
]
}
Conditions are evaluated safely against the chain context. Supported operators: comparisons, logical operators, and property access on previous step results.
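The evaluation model can be sketched without using eval. This is a deliberately tiny illustration — the platform's real condition grammar is richer; the sketch handles only a single "path operator number" comparison:

```python
# Minimal sketch of safe condition evaluation against a chain context.
# Handles only "<dotted-path> <op> <number>"; the real engine's grammar
# (logical operators, etc.) is broader.
import operator

OPS = {">": operator.gt, ">=": operator.ge, "<": operator.lt,
       "<=": operator.le, "==": operator.eq}

def eval_condition(condition: str, context: dict) -> bool:
    path, op, literal = condition.split()
    value = context
    for part in path.split("."):  # property access, e.g. step_0_result.confidence
        value = value[part]
    return OPS[op](value, float(literal))

ctx = {"step_0_result": {"confidence": 0.92}}
print(eval_condition("step_0_result.confidence > 0.8", ctx))   # True
print(eval_condition("step_0_result.confidence <= 0.8", ctx))  # False
```

With a context like this, the two conditional steps in the example above are mutually exclusive: exactly one branch runs.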
Grok-Specific Features v4.0
Slopshop includes dedicated Grok integration endpoints that leverage xAI's Grok models for optimization, critique, and autonomous orchestration. Requires XAI_API_KEY.
Grok Optimize
Pass any prompt, code, or workflow to Grok for optimization suggestions. Grok analyzes for efficiency, cost, and performance improvements.
curl -X POST https://slopshop.gg/v1/grok/optimize \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{"input": "Deploy 1k army for text analysis", "optimize_for": "cost"}'
Grok Critique
Submit agent output, research results, or any content for Grok's critical analysis. Returns structured feedback with gaps, contradictions, and improvement suggestions.
curl -X POST https://slopshop.gg/v1/grok/critique \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{"content": "Research report on agent infrastructure...", "depth": "thorough"}'
Grok Overlord Mode
Let Grok autonomously drive multi-LLM chains. In overlord mode, Grok decides which models to call, what tasks to split, and when to loop. Pairs with chain/start and army/deploy.
curl -X POST https://slopshop.gg/v1/grok/overlord \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{"goal": "Deep tech audit on Stripe", "budget_credits": 500, "models": ["claude", "gpt", "deepseek"]}'
Grok selects the optimal model per subtask, manages context passing, and stores all results in free memory automatically.
Safety & Guardrails v4.0
Production safety primitives for agent deployments. Circuit breakers, sandbox isolation, reputation slashing, and chaos testing protect your workloads.
Sandbox Execution
Run untrusted code in an isolated vm.createContext sandbox with strict timeout enforcement. Used internally by exec-javascript and available directly.
curl -X POST https://slopshop.gg/v1/sandbox/execute \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{"code": "return 2 + 2", "timeout_ms": 5000}'
Circuit Breakers
Automatic circuit breakers protect downstream services. When an endpoint fails repeatedly, the breaker trips and returns a cached fallback. Use orch-circuit-breaker-check and orch-circuit-breaker-record to integrate into your own workflows.
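The check/record pattern can be sketched as a small state machine. This is an illustrative model in the spirit of orch-circuit-breaker-check and orch-circuit-breaker-record — the threshold, state names, and reset behavior are assumptions, not the platform's actual implementation:

```python
# Illustrative circuit breaker: after N consecutive failures the
# breaker opens and traffic should fall back to a cached response.
class CircuitBreaker:
    def __init__(self, failure_threshold: int = 3):
        self.failure_threshold = failure_threshold
        self.failures = 0
        self.state = "closed"  # closed = traffic flows

    def record(self, success: bool):
        if success:
            self.failures = 0
            self.state = "closed"
        else:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.state = "open"  # open = serve cached fallback

    def allow(self) -> bool:
        return self.state == "closed"

cb = CircuitBreaker()
for _ in range(3):
    cb.record(success=False)
print(cb.state, cb.allow())  # open False
```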
Reputation & Slashing
Agents in army deployments and tournaments build reputation scores based on eval performance. Underperforming agents can be slashed (reputation reduced) and pruned from future swarms. The reputation ledger is Merkle-verified.
1. POST /v1/eval/run — Run evaluations to score agent performance
2. POST /v1/reputation/slash — Slash underperforming agents
3. GET /v1/reputation/ledger — View the global reputation leaderboard
Chaos Testing
Inject faults into army deployments to stress-test reliability. Simulate network failures, latency spikes, and agent drops to validate self-healing behavior.
curl -X POST https://slopshop.gg/v1/chaos/test \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{"army_id": "EXT-47", "faults": ["network", "latency", "agent-drop"]}'
Agent Runtime OS v4.0
Slopshop v4.0 ships seven new platform layers that turn the tool catalog into a full agent runtime OS for the Computer-Use Era. Every layer is self-hostable and NIST-aligned.
Agent Identity (SPIFFE/SVID)
Issue verifiable agent identities (HMAC-SHA256 signed JWTs), register in the Agent Name Service (ANS), track reputation scores (0–10 rolling weighted average), and pass typed A2A messages between agents.
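An HMAC-SHA256 (HS256) signed token can be built from the standard library alone, which clarifies what "HMAC-SHA256 signed JWTs" means mechanically. The header and claim fields below are illustrative, not Slopshop's exact token layout:

```python
# Sketch of an HS256-signed JWT: base64url(header).base64url(payload).base64url(sig)
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_identity(claims: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

token = sign_identity({"agent_id": "research-bot", "capabilities": ["llm", "tools"]},
                      b"demo-secret")
print(token.count("."))  # 2 — three dot-separated JWT segments
```

Anyone holding the shared secret can verify the token by recomputing the signature over the first two segments.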
# Issue identity token
curl -X POST https://slopshop.gg/v1/identity/issue \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{"agent_id":"research-bot","name":"Research Bot","capabilities":["llm","tools"]}'
# Register in Agent Name Service
curl -X POST https://slopshop.gg/v1/ans/register \
-H "Authorization: Bearer $KEY" \
-d '{"name":"research-bot","agent_id":"research-bot","endpoint":"https://myapp.com/agent","capabilities":["research","memory"]}'
# Submit reputation signal
curl -X POST https://slopshop.gg/v1/reputation/signal \
-H "Authorization: Bearer $KEY" \
-d '{"agent_id":"research-bot","signal":"success","task":"daily-research"}'
Computer Use Backend
Record Claude's Computer Use sessions — log actions, store screenshots with OCR, run pixel-diff verification, export replay scripts, and gate on human approval. While Claude clicks and types, Slopshop persists state and verifies correctness.
# Start a session
curl -X POST https://slopshop.gg/v1/computer-use/session/start \
-H "Authorization: Bearer $KEY" \
-d '{"name":"organize-downloads","task":"Batch resize images and organize by date"}'
# Log a screenshot + run OCR
curl -X POST https://slopshop.gg/v1/computer-use/screenshot \
-H "Authorization: Bearer $KEY" \
-d '{"session_id":"SESSION_ID","image_base64":"...","label":"before-resize"}'
# Export replay as Python (pyautogui)
curl -X POST https://slopshop.gg/v1/computer-use/replay \
-H "Authorization: Bearer $KEY" \
-d '{"session_id":"SESSION_ID","format":"python"}'
MCP Gateway + Policy Engine
Proxy all MCP tool calls through a policy engine. Create rules (deny, require_approval, rate_limit) based on tool slug, credit spend, agent identity, time range, or tier. Export audit logs as ECS-compatible NDJSON for SIEM integration.
# Get signed MCP manifest
curl https://slopshop.gg/v1/gateway/manifest \
-H "Authorization: Bearer $KEY"
# Create a policy
curl -X POST https://slopshop.gg/v1/policy/create \
-H "Authorization: Bearer $KEY" \
-d '{"name":"block-heavy","rules":[{"condition":"credit_over","value":500,"action":"require_approval"}]}'
# Export audit log (ECS NDJSON)
curl -X POST https://slopshop.gg/v1/gateway/audit/export \
-H "Authorization: Bearer $KEY" \
-d '{"format":"ecs","limit":1000}'
Observability Dashboard
Full distributed tracing, p95/p99 latency, cost attribution per tool and agent, ROI calculator, budget alerts, 4-component health scores, and a public status page with incident management.
# Get full dashboard
curl https://slopshop.gg/v1/observe/dashboard \
-H "Authorization: Bearer $KEY"
# Set credit budget
curl -X POST https://slopshop.gg/v1/observe/budget/set \
-H "Authorization: Bearer $KEY" \
-d '{"budget_credits":10000,"alert_threshold":0.8}'
# Record ROI event
curl -X POST https://slopshop.gg/v1/observe/roi/record \
-H "Authorization: Bearer $KEY" \
-d '{"event_type":"pr_merged","value_usd":50,"tool_slug":"devops-semver-bump"}'
Visual DAG Workflow Builder
Create workflow graphs with nodes (tool calls) and edges (data flow). Kahn's topological sort ensures correct execution order. DFS cycle detection prevents infinite loops. Human gates pause execution until approval. 10 pre-built templates included.
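Kahn's algorithm over the same nodes/edges shape used by /v1/workflow/create can be sketched in a few lines. As a hedged illustration (the service additionally runs DFS cycle detection; in this sketch a leftover node signals the same condition):

```python
# Kahn's topological sort: repeatedly execute nodes with no unmet
# dependencies. If nodes remain unprocessed, the graph has a cycle.
from collections import deque

def execution_order(nodes, edges):
    ids = [n["id"] for n in nodes]
    indegree = {i: 0 for i in ids}
    adjacent = {i: [] for i in ids}
    for e in edges:
        adjacent[e["from"]].append(e["to"])
        indegree[e["to"]] += 1
    queue = deque(i for i in ids if indegree[i] == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in adjacent[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    if len(order) != len(ids):
        raise ValueError("cycle detected in workflow graph")
    return order

nodes = [{"id": "n1", "type": "tool", "slug": "crypto-hash-sha256"},
         {"id": "n2", "type": "tool", "slug": "memory-set"}]
edges = [{"from": "n1", "to": "n2"}]
print(execution_order(nodes, edges))  # ['n1', 'n2']
```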
# List templates
curl https://slopshop.gg/v1/workflow/templates \
-H "Authorization: Bearer $KEY"
# Create a workflow
curl -X POST https://slopshop.gg/v1/workflow/create \
-H "Authorization: Bearer $KEY" \
-d '{"name":"hash-and-store","nodes":[{"id":"n1","type":"tool","slug":"crypto-hash-sha256"},{"id":"n2","type":"tool","slug":"memory-set"}],"edges":[{"from":"n1","to":"n2","output":"hash"}]}'
Eval Suite + Model Routing
Create test suites with expected outputs, run benchmarks (5/5 core tests · score 100 · grade A), and configure model routing rules (cost-optimized, performance, round-robin, balanced).
# Run standard benchmark
curl -X POST https://slopshop.gg/v1/eval/benchmark \
-H "Authorization: Bearer $KEY"
# → {"score":100,"grade":"A","passed":5,"failed":0}
Marketplace v4.0
The Slopshop Marketplace lets agents and developers publish, discover, install, and monetize tools. 15 seed listings included. Publishers earn 70% of every purchase. Handler code is scanned for 16 dangerous patterns before listing.
Template Marketplace
Browse and invoke pre-built agent templates. Fork templates for customization or publish your own.
# Browse templates
curl https://slopshop.gg/v1/marketplace/templates \
-H "Authorization: Bearer $KEY"
# Invoke a template
curl -X POST https://slopshop.gg/v1/marketplace/invoke \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{"slug": "research-swarm", "params": {"topic": "AI infrastructure"}}'
Plugin Forge
Build custom plugins that extend the Slopshop tool catalog. Plugins can be published to the marketplace and installed by other users. TEE-verified plugins get a trust badge.
# Build and publish a plugin
slop forge build --name="custom-analyzer" --publish=marketplace
# Install a plugin from marketplace
slop forge install custom-analyzer
Advanced Research v4.0
Multi-tier, multi-provider research engine. Five providers cover the global internet: Claude for synthesis, Grok for real-time EN+JA web, DeepSeek for Chinese platforms, OpenAI for academic sources, Yandex for Russian internet. Results are automatically stored in memory with a 30-minute cache. See Research Engine for the full provider and tier reference.
Research Tiers
| Tier | Depth | Cross-Synthesis | Use Case |
|---|---|---|---|
basic | Single-pass summary | No | Quick fact checks, simple lookups |
standard | Multi-source synthesis | No | Market research, competitive analysis |
advanced | Deep multi-model loop | No | Technical deep-dives, architecture reviews |
deep | Exhaustive with critique loops | Yes | Comprehensive reports, due diligence |
planet | All providers + full cross-synthesis | Yes | Global intelligence across all 5 provider networks |
curl -X POST https://slopshop.gg/v1/research \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{"query": "Compare agent infrastructure platforms 2026", "tier": "planet", "provider": "all", "namespace": "my-research"}'
Response includes structured findings per provider, cross-synthesis (deep/planet tiers), sources, confidence scores, and a memory key for retrieval. Results are cached 30 minutes per query+tier+provider combination.
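The cache behavior — one 30-minute entry per query+tier+provider combination — can be sketched locally. The key format and storage are assumptions for illustration; only the TTL and the key's three components come from the text above:

```python
# Sketch of a 30-minute TTL cache keyed on query+tier+provider.
import hashlib
import time

CACHE_TTL_SECONDS = 30 * 60
_cache = {}

def cache_key(query: str, tier: str, provider: str) -> str:
    return hashlib.sha256(f"{query}|{tier}|{provider}".encode()).hexdigest()

def get_or_run(query, tier, provider, run, now=None):
    now = time.time() if now is None else now
    key = cache_key(query, tier, provider)
    hit = _cache.get(key)
    if hit and now - hit["at"] < CACHE_TTL_SECONDS:
        return hit["result"]  # fresh cache hit, no recompute
    result = run(query)
    _cache[key] = {"result": result, "at": now}
    return result

calls = []
run = lambda q: calls.append(q) or f"report:{q}"
get_or_run("agent platforms", "deep", "all", run, now=0)
get_or_run("agent platforms", "deep", "all", run, now=60)    # cache hit
get_or_run("agent platforms", "planet", "all", run, now=60)  # different tier → miss
print(len(calls))  # 2
```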
Memory Upload v4.0
Import files directly into your agent's memory via drag-and-drop or API. Supports Markdown, plain text, JSON, CSV, and code files. Content is chunked, indexed, and made searchable instantly.
curl -X POST https://slopshop.gg/v1/memory/upload \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{"content": "# My Project\n\n## Goals\nBuild the best agent backend...", "namespace": "my-project", "filename": "project-notes.md"}'
Uploaded content is automatically chunked for vector search and tagged with the filename. Use memory-search to find relevant chunks later.
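The chunk-and-tag step can be sketched as follows. The chunk size and key scheme here are pure assumptions for illustration — the documented behavior is only that content is chunked, indexed, and tagged with the filename:

```python
# Illustrative chunker: split content into fixed-size pieces, each
# keyed by filename + chunk index (hypothetical key scheme).
def chunk_for_index(content: str, filename: str, chunk_size: int = 800):
    chunks = []
    for i in range(0, len(content), chunk_size):
        chunks.append({
            "key": f"{filename}#chunk-{len(chunks)}",
            "text": content[i:i + chunk_size],
        })
    return chunks

doc = "# My Project\n\n## Goals\n" + "Build the best agent backend. " * 60
chunks = chunk_for_index(doc, "project-notes.md")
print(len(chunks), chunks[0]["key"])
```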
Multiplayer Memory v4.0
Shared memory spaces for multi-agent and multi-user teams. Create a shared space, set a retention tier, invite collaborators, and every agent with access reads and writes to the same namespace in real time.
Retention Tiers
| Tier | Lifetime | Use Case |
|---|---|---|
session | 24 hours | Ephemeral collaboration — context for a single work session |
daily | 7 days | Short-term projects, sprint-scoped context |
weekly | 30 days | Team knowledge base with rolling window |
permanent | Forever | Persistent shared intelligence, long-running agent teams |
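The tier lifetimes in the table map directly to expiry timestamps; "permanent" simply yields no expires_at. A minimal sketch (the lifetimes are taken from the table; the function itself is illustrative):

```python
# Map retention tier → expires_at; None means the space never expires.
from datetime import datetime, timedelta, timezone

TIER_LIFETIMES = {
    "session": timedelta(hours=24),
    "daily": timedelta(days=7),
    "weekly": timedelta(days=30),
    "permanent": None,
}

def expires_at(tier: str, created: datetime):
    lifetime = TIER_LIFETIMES[tier]
    return None if lifetime is None else created + lifetime

created = datetime(2026, 3, 31, tzinfo=timezone.utc)
print(expires_at("weekly", created).date())  # 2026-04-30
print(expires_at("permanent", created))      # None
```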
Create a Shared Space
curl -X POST https://slopshop.gg/v1/memory/share/create \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{
"name": "team-research",
"retention": "weekly",
"description": "Shared research context for product team"
}'
# Returns: {"space_id": "space_abc123", "namespace": "team-research", "retention": "weekly", "expires_at": "2026-04-30T00:00:00Z"}
Invite a Collaborator
curl -X POST https://slopshop.gg/v1/memory/collaborator/invite \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{"namespace": "team-research", "invitee_key": "sk_other_agent", "permissions": "read-write"}'
# permissions: "read" | "read-write" | "admin"
Accept an Invitation
curl -X POST https://slopshop.gg/v1/memory/collaborator/accept \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{"invite_id": "INV-abc123"}'
List & Revoke
# List collaborators on a namespace
curl https://slopshop.gg/v1/memory/collaborator/list \
-H "Authorization: Bearer $KEY" \
-G -d "namespace=team-research"
# Revoke access
curl -X POST https://slopshop.gg/v1/memory/collaborator/revoke \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{"namespace": "team-research", "collaborator_key": "sk_other_agent"}'
Reading and Writing Shared Memory
Once a collaborator has access, they use the standard memory APIs with the shared namespace. No special syntax needed.
# Write (any collaborator with write permission)
curl -X POST https://slopshop.gg/v1/memory-set \
-H "Authorization: Bearer $OTHER_AGENT_KEY" \
-H "Content-Type: application/json" \
-d '{"key": "latest_findings", "value": "...", "namespace": "team-research"}'
# Read (any collaborator)
curl -X POST https://slopshop.gg/v1/memory-get \
-H "Authorization: Bearer $MY_KEY" \
-H "Content-Type: application/json" \
-d '{"key": "latest_findings", "namespace": "team-research"}'
Memory Compression v4.0
Memory values larger than 512 bytes are automatically compressed with zlib deflateRaw before storage. Compressed values are stored with a ~z~ prefix and transparently decompressed on read. Zero configuration required — it just works.
| Detail | Value |
|---|---|
| Algorithm | zlib deflateRaw |
| Threshold | 512 bytes (values below this are stored as-is) |
| Prefix | ~z~ (internal marker, never visible to callers) |
| Decompression | Automatic on every read — callers always receive plain text/JSON |
| Configuration | None — fully automatic |
Compression is applied by the Dream Engine's compress strategy and also triggered automatically during writes when values exceed the threshold. Combined with the Dream Engine, large research outputs can be compressed overnight to free up space while remaining fully retrievable.
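The write-path behavior above can be sketched with Python's standard zlib, where wbits=-15 matches deflateRaw. The 512-byte threshold and the "~z~" marker come from the table; using the marker as a byte prefix in a flat store is an assumption for illustration:

```python
# Sketch: values over 512 bytes are deflateRaw-compressed and stored
# behind a "~z~" marker; reads transparently decompress.
import zlib

THRESHOLD = 512
PREFIX = b"~z~"

def deflate_raw(data: bytes) -> bytes:
    co = zlib.compressobj(level=9, wbits=-15)  # -15 = raw deflate, no header
    return co.compress(data) + co.flush()

def inflate_raw(data: bytes) -> bytes:
    return zlib.decompress(data, wbits=-15)

def store_value(value: str) -> bytes:
    raw = value.encode()
    return PREFIX + deflate_raw(raw) if len(raw) > THRESHOLD else raw

def read_value(stored: bytes) -> str:
    if stored.startswith(PREFIX):
        return inflate_raw(stored[len(PREFIX):]).decode()
    return stored.decode()

big = "research finding " * 100  # well over 512 bytes
stored = store_value(big)
print(stored.startswith(PREFIX), read_value(stored) == big)  # True True
print(store_value("hi"))  # b'hi' — below threshold, stored as-is
```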
Dream Tiers v4.0
Memory dreaming runs in the background, consolidating, linking, and evolving your agent's memories. Dream frequency scales with research tier:
| Tier | Dream Interval | What Happens |
|---|---|---|
basic | Every 2 hours | Tag consolidation, duplicate removal |
standard | Every 1 hour | Cross-reference linking, gap detection |
advanced | Every 30 minutes | Insight synthesis, contradiction flagging |
deep | Every 15 minutes | Full knowledge graph evolution, priority re-ranking |
curl -X POST https://slopshop.gg/v1/memory/evolve/start \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{"namespace": "my-research", "tier": "advanced"}'
Dream Engine ✨ v5.1
The Dream Engine is Slopshop's REM-cycle memory consolidation system. It runs in the background, synthesizing, compressing, and evolving your agent's memory on a schedule — like a brain consolidating the day's learning overnight. Schedules are persisted to a dream_schedules SQLite table and survive server restarts. v5.1 adds four new strategies (validate, evolve, forecast, reflect), two new start parameters (adversarial mode and salience threshold), a full Intelligence Score report endpoint, and Collective Dream across shared memory spaces.
Neuroscience Grounding
Each Dream Engine stage maps to peer-reviewed neuroscience: synthesize mirrors slow-wave sleep memory replay; pattern_extract mirrors hippocampal theta rhythms; insight_generate mirrors REM dreaming and cross-domain binding; compress mirrors synaptic downscaling; associate mirrors neocortical consolidation; validate mirrors prefrontal error detection; evolve mirrors Bayesian neural belief revision; forecast mirrors prospective memory simulation; reflect mirrors metacognitive default-mode activity. Full references at /dream.
All 9 Consolidation Strategies
| Strategy | What It Does | Credits | Key Output |
|---|---|---|---|
synthesize | Theme consolidation — merges semantically related memories into unified summaries, eliminates redundancy, surfaces cross-entry threads | 25 | Merged summaries, theme clusters |
pattern_extract | Recurring pattern mining — identifies patterns, contradictions, and knowledge gaps across the full memory namespace | 20 | Pattern list, contradiction map, gap report |
insight_generate | Novel cross-domain connection synthesis — derives higher-order insights from raw memory; enable adversarial: true for counterfactual generation that challenges stored assumptions | 30 | Insight list, adversarial challenges (if enabled) |
compress | Redundancy elimination — applies zlib compression to bulky values, preserving signal while reducing storage; auto-decompresses on every read | 15 | Compressed entries, bytes saved |
associate | GraphRAG knowledge graph linking — builds weighted associative edges between semantically related memories; results queryable via the graph API | 20 | Edge list, updated graph node count |
validate v5.1 | Consistency checking and contradiction detection — scans memory for logical conflicts, stale beliefs, and internal contradictions; flags entries that need reconciliation | 20 | Contradiction pairs, stale keys, consistency score |
evolve v5.1 | Bayesian belief updating — strengthens, weakens, or revises beliefs based on accumulated evidence; confidence scores are recalibrated on each run | 30 | Updated posteriors, revised belief map |
forecast v5.1 | Monte Carlo probabilistic forecasting — projects future states and outcomes from observed memory patterns; returns confidence intervals and probability distributions | 35 | Forecasts, confidence intervals, scenario tree |
reflect v5.1 | Metacognitive self-analysis — examines knowledge quality, growth trajectory, blind spots, and next learning steps; also extracts reusable procedural skill definitions from episodic chains | 25 | Meta-report, extracted procedural skills |
Start a Dream Run (One-Shot)
Trigger an immediate consolidation pass on a namespace. Returns a run ID for status polling. v5.1 adds adversarial and salience_threshold parameters.
curl -X POST https://slopshop.gg/v1/memory/dream/start \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{
"namespace": "my-agent",
"strategy": "insight_generate",
"adversarial": true,
"salience_threshold": 0.4
}'
# Returns: {"run_id": "dream_run_abc123", "status": "started", "strategy": "insight_generate"}
POST /v1/memory/dream/start — Full Parameter Reference
| Parameter | Type | Required | Description |
|---|---|---|---|
namespace | string | Yes | Memory namespace to consolidate |
strategy | string | Yes | One of the 9 strategies above |
adversarial | boolean | No | Enable counterfactual generation in insight_generate — produces challenges that stress-test stored assumptions. Default: false |
salience_threshold | float 0–1 | No | Minimum salience score for a memory entry to be included in the consolidation pass. Lower values include more entries; higher values focus on high-signal memories only. Default: 0.0 (all entries) |
model | string | No | Override the LLM used for this run (claude, grok, deepseek). Defaults to account model preference. |
budget | integer | No | Hard credit cap for this run. Dream stops early if the cap is reached. |
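The salience_threshold gate can be sketched locally. The entry shape and salience field below are illustrative assumptions, not the service's actual storage format:

```python
# Sketch of how salience_threshold gates a consolidation pass.
# Entry shape and field names are assumptions for illustration.

def select_for_dream(entries, salience_threshold=0.0):
    """Keep only entries whose salience meets the threshold."""
    return [e for e in entries if e["salience"] >= salience_threshold]

entries = [
    {"key": "q1-goals", "salience": 0.9},
    {"key": "lunch-order", "salience": 0.1},
    {"key": "competitor-notes", "salience": 0.45},
]

# Default (0.0) includes everything; 0.4 keeps only high-signal entries.
assert len(select_for_dream(entries)) == 3
assert [e["key"] for e in select_for_dream(entries, 0.4)] == ["q1-goals", "competitor-notes"]
```

Lower thresholds make dreams broader and more expensive; higher thresholds focus credits on what the agent already marked as important.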
Check Dream Status
GET /v1/memory/dream/status/:run_id
Authorization: Bearer $KEY
# Returns: {"run_id": "...", "status": "complete", "memories_processed": 142, "insights_generated": 7}
Schedule Recurring Dreams (Restart-Persistent)
Schedules are written to the dream_schedules SQLite table. The server re-registers all active schedules on startup — a restart never loses your schedule.
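A minimal sketch of that persistence pattern, using an assumed simplified schema (the real dream_schedules columns aren't documented here):

```python
import sqlite3

# Minimal sketch of restart persistence: schedules live in SQLite, so
# re-reading the table on boot recovers them. Column names here are
# illustrative assumptions, not the server's actual schema.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE IF NOT EXISTS dream_schedules (
    id TEXT PRIMARY KEY, namespace TEXT, cron TEXT,
    strategy TEXT, active INTEGER)""")
db.execute("INSERT INTO dream_schedules VALUES (?, ?, ?, ?, ?)",
           ("sched_xyz", "my-agent", "0 2 * * *", "insight_generate", 1))
db.commit()

# "Restart": a fresh process would run this query to re-register jobs.
active = db.execute(
    "SELECT id, cron, strategy FROM dream_schedules WHERE active = 1"
).fetchall()
```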
curl -X POST https://slopshop.gg/v1/memory/dream/schedule \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{
"namespace": "my-agent",
"cron": "0 2 * * *",
"strategy": "insight_generate",
"adversarial": true,
"salience_threshold": 0.3,
"active": true
}'
# cron field accepts standard 5-part cron expressions
# active: true/false — pause/resume without deleting
# Returns: {"schedule_id": "sched_xyz", "next_run": "2026-04-02T02:00:00Z"}
Manage Schedules
GET /v1/memory/dream/schedules — List all schedules for your key
GET /v1/memory/dream/schedules/:id — Get a single schedule
PATCH /v1/memory/dream/schedules/:id — Update cron, strategy, or active status
DELETE /v1/memory/dream/schedules/:id — Remove a schedule
Legacy Topic Subscriptions
The older topic-subscription interface is still supported for backward compatibility.
POST /v1/dream/subscribe — Subscribe to a research topic on a schedule
GET /v1/dream/insights — List generated insights
POST /v1/dream/run — Trigger immediate topic research
POST /v1/dream/deploy — Deploy insight into agent memory
POST /v1/dream/dismiss — Dismiss insight
GET /v1/dream/subscriptions — List subscriptions
Intelligence Score & Dream Reports v5.1
After a dream run completes, retrieve a full Intelligence Brief: insights count, strategy depth score, procedural skills extracted, duration, and the two composite KPI metrics — Intelligence Score and Dream Efficiency Score.
Intelligence Score — Formula
Intelligence Score = (insights × strategy_depth × 100) / duration_sec
Dream Efficiency Score = (insights × strategy_depth × 100 + skills × 150) / duration_sec
Both scores are capped at 100. Higher is better. A score of 80+ in under 60 seconds is considered excellent. View all historical sessions at /dream-reports.
Strategy Depth Multipliers
| Strategy | Depth Multiplier | Rationale |
|---|---|---|
synthesize | 1.0 | Single-pass merge — baseline depth |
pattern_extract | 1.2 | Multi-pass scan across all entries |
compress | 0.5 | Structural, not semantic — lower depth |
associate | 1.3 | Graph construction requires cross-entry linking |
insight_generate | 1.5 | Higher-order reasoning, cross-domain binding |
validate | 1.2 | Contradiction detection across namespace |
evolve | 1.5 | Bayesian update requires prior + likelihood computation |
forecast | 2.0 | Monte Carlo simulation — highest compute depth |
reflect | 1.8 | Metacognitive analysis + skill extraction |
Example Calculation
Dream run: strategy=forecast, insights=12, skills=3, duration=45s
Intelligence Score = (12 × 2.0 × 100) / 45 = 2400 / 45 = 53.3
Dream Efficiency Score = (12 × 2.0 × 100 + 3 × 150) / 45 = 2850 / 45 = 63.3
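The same arithmetic as a small Python helper; the multipliers are taken from the worked example above, and both scores are capped at 100 as described:

```python
def intelligence_score(insights, strategy_depth, duration_sec):
    """Composite KPI: insight volume weighted by strategy depth, per second."""
    return min(100.0, insights * strategy_depth * 100 / duration_sec)

def dream_efficiency_score(insights, strategy_depth, skills, duration_sec):
    """Adds a bonus term for extracted procedural skills."""
    return min(100.0, (insights * strategy_depth * 100 + skills * 150) / duration_sec)

# forecast run: 12 insights, depth 2.0, 3 skills, 45 seconds
print(round(intelligence_score(12, 2.0, 45), 1))         # 53.3
print(round(dream_efficiency_score(12, 2.0, 3, 45), 1))  # 63.3
```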
GET /v1/memory/dream/report/:dream_id
curl https://slopshop.gg/v1/memory/dream/report/dream_run_abc123 \
-H "Authorization: Bearer $KEY"
# Returns:
{
"dream_id": "dream_run_abc123",
"namespace": "my-agent",
"strategy": "forecast",
"status": "complete",
"duration_sec": 45,
"memories_processed": 142,
"insights_generated": 12,
"procedural_skills_extracted": 3,
"intelligence_score": 53.3,
"dream_efficiency_score": 63.3,
"entries": [...],
"meta": {
"model": "claude-3-5-sonnet",
"adversarial": false,
"salience_threshold": 0.0,
"credits_used": 35
}
}
TMR — Targeted Memory Reactivation v5.1
Targeted Memory Reactivation (TMR) is a neuroscience-grounded technique for priming specific memories before a Dream Engine run or agent session. Queue high-priority memory keys, set a reactivation mode, and retrieve a combined reactivation prompt that your agent injects at session start. TMR increases the probability those memories become part of the next consolidation cycle.
POST /v1/memory/tmr/queue
Queue one or more memory keys for targeted reactivation.
curl -X POST https://slopshop.gg/v1/memory/tmr/queue \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{
"namespace": "my-agent",
"target_keys": ["project-brief", "competitor-analysis", "q1-goals"],
"priority": 8,
"mode": "consolidate",
"personalization": "Focus on strategic risks and opportunity gaps"
}'
# Returns:
{
"cue_id": "tmr_cue_xyz789",
"queued_keys": 3,
"combined_reactivation_prompt": "Before this session, recall: [project-brief]... [competitor-analysis]... [q1-goals]... Focus on strategic risks and opportunity gaps.",
"expires_at": "2026-04-02T02:00:00Z"
}
| Parameter | Type | Required | Description |
|---|---|---|---|
namespace | string | Yes | Memory namespace containing the target keys |
target_keys | string[] | Yes | Array of memory keys to reactivate |
priority | integer 1–10 | No | Reactivation weight — higher priority keys appear first in the combined prompt and receive higher salience during the next dream run. Default: 5 |
mode | string | No | consolidate (default), recall, or challenge — how memories are framed in the reactivation prompt |
personalization | string | No | Free-text instruction appended to the combined reactivation prompt |
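How the combined reactivation prompt might be assembled from queued cues; the exact template wording and ordering logic below are assumptions for illustration:

```python
# Sketch of combined-prompt assembly from queued TMR cues.
# Template wording is an assumption; only the priority semantics
# (higher priority appears first) come from the parameter table.

def build_reactivation_prompt(cues, personalization=None):
    ordered = sorted(cues, key=lambda c: -c.get("priority", 5))  # high priority first
    keys = [k for cue in ordered for k in cue["target_keys"]]
    prompt = "Before this session, recall: " + " ".join(f"[{k}]" for k in keys)
    if personalization:
        prompt += " " + personalization
    return prompt

cues = [
    {"target_keys": ["q1-goals"], "priority": 3},
    {"target_keys": ["project-brief", "competitor-analysis"], "priority": 8},
]
print(build_reactivation_prompt(cues, "Focus on strategic risks"))
# Before this session, recall: [project-brief] [competitor-analysis] [q1-goals] Focus on strategic risks
```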
GET /v1/memory/tmr/cues
Retrieve pending TMR cues for a namespace. Use this at agent session start to inject the reactivation prompt into the system message.
curl "https://slopshop.gg/v1/memory/tmr/cues?namespace=my-agent&limit=10&mode=consolidate" \
-H "Authorization: Bearer $KEY"
# Returns:
{
"cues": [
{
"cue_id": "tmr_cue_xyz789",
"target_keys": ["project-brief", "competitor-analysis", "q1-goals"],
"priority": 8,
"mode": "consolidate",
"created_at": "2026-04-01T22:00:00Z"
}
],
"combined_reactivation_prompt": "Before this session, recall: [project-brief summary]... [competitor-analysis summary]... [q1-goals summary]...",
"total": 1
}
| Query Param | Type | Description |
|---|---|---|
namespace | string | Required. Namespace to fetch cues for. |
limit | integer | Max cues to return. Default: 20 |
mode | string | Filter by mode (consolidate, recall, challenge). Omit for all. |
Collective Dream v5.1
Run the Dream Engine across a shared Multiplayer Memory space — consolidating the collective knowledge of an entire team or agent swarm in a single overnight pass. Collective Dream reads from a shared space created via POST /v1/memory/share/create and writes insights back to the same space, visible to all collaborators.
POST /v1/memory/dream/collective
curl -X POST https://slopshop.gg/v1/memory/dream/collective \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{
"space_id": "space_team_abc123",
"strategy": "synthesize",
"budget": 500,
"model": "claude"
}'
# Returns:
{
"dream_id": "collective_dream_xyz",
"space_id": "space_team_abc123",
"status": "started",
"poll_endpoint": "/v1/memory/dream/status/collective_dream_xyz",
"report_endpoint": "/v1/memory/dream/report/collective_dream_xyz"
}
| Parameter | Type | Required | Description |
|---|---|---|---|
space_id | string | Yes | Shared memory space ID from POST /v1/memory/share/create |
strategy | string | Yes | Any of the 9 Dream Engine strategies |
budget | integer | No | Hard credit cap for this collective run. Cost is debited from the space owner's account. Default: 1000 |
model | string | No | LLM to use: claude, grok, or deepseek. Defaults to account preference. |
adversarial | boolean | No | Enable counterfactual mode (applies to insight_generate strategy). |
salience_threshold | float 0–1 | No | Minimum salience to include in the collective consolidation pass. |
Prerequisite
Collective Dream requires a shared memory space. Create one first with POST /v1/memory/share/create, then invite collaborators via POST /v1/memory/collaborator/invite. See the Multiplayer Memory section.
Procedural Skills v5.1
Procedural skills are reusable, parameterized action patterns extracted by the Dream Engine's reflect, forecast, and evolve strategies. Each skill encodes a learned workflow — a sequence of steps the agent has successfully executed — along with a confidence score derived from repeated application. Skills can be injected directly into agent system prompts via the /skills-forge interface.
GET /v1/memory/skills
List procedural skills extracted for a namespace. Filter by confidence threshold, extracting strategy, or limit.
curl "https://slopshop.gg/v1/memory/skills?namespace=my-agent&min_confidence=0.7&strategy=reflect&limit=20" \
-H "Authorization: Bearer $KEY"
# Returns:
{
"skills": [
{
"skill_id": "skill_abc123",
"name": "competitive-intelligence-loop",
"description": "Search competitors, extract signals, synthesize into structured report",
"tool_chain": ["search-web", "extract-text", "llm-think", "memory-set"],
"confidence": 0.91,
"applications": 14,
"extracted_by": "reflect",
"dream_id": "dream_run_abc123",
"created_at": "2026-04-01T03:00:00Z"
}
],
"total": 1,
"namespace": "my-agent"
}
| Query Param | Type | Description |
|---|---|---|
namespace | string | Required. Namespace to query skills for. |
min_confidence | float 0–1 | Only return skills at or above this confidence score. Default: 0.0 |
strategy | string | Filter by extracting strategy: reflect, forecast, or evolve. Omit for all. |
limit | integer | Max skills to return. Default: 50 |
Procedural skills feed directly into the Skills Forge page, where you can browse extracted skills, test them, and generate agent system prompts that include your highest-confidence learned workflows.
Snapshot Branching v5.0
Versioned, Merkle-rooted checkpoints of your entire memory namespace. Fork your agent's memory state, explore alternatives, restore any prior point, or merge two diverged branches back together — like git, but for agent knowledge.
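The Merkle-root idea can be sketched as follows. The server's actual leaf encoding and tree shape are internal, so this is only a model of why equal roots imply identical memory state:

```python
import hashlib

# Sketch of a Merkle root over a namespace's key-value pairs.
# Leaf encoding ("key=value", sorted by key) is an assumption.

def merkle_root(kv):
    leaves = [hashlib.sha256(f"{k}={v}".encode()).digest()
              for k, v in sorted(kv.items())]
    if not leaves:
        return hashlib.sha256(b"").hexdigest()
    while len(leaves) > 1:
        if len(leaves) % 2:            # duplicate last leaf on odd levels
            leaves.append(leaves[-1])
        leaves = [hashlib.sha256(leaves[i] + leaves[i + 1]).digest()
                  for i in range(0, len(leaves), 2)]
    return leaves[0].hex()

a = {"plan": "v1", "owner": "alice"}
assert merkle_root(a) == merkle_root(dict(reversed(list(a.items()))))  # order-independent
assert merkle_root(a) != merkle_root({**a, "plan": "v2"})              # any change moves the root
```

Because the root commits to every key and value, comparing two branches' roots is enough to know whether their memory states diverged.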
Create a Branch
curl -X POST https://slopshop.gg/v1/memory/branch \
-H "Authorization: Bearer $SLOP_KEY" \
-d '{"namespace": "my-project", "label": "before-experiment-7"}'
# Returns: snapshot_id, merkle_root, key_count
Restore from Branch
curl -X POST https://slopshop.gg/v1/memory/restore/mbranch-abc123 \
-H "Authorization: Bearer $SLOP_KEY"
Merge Two Branches
curl -X POST https://slopshop.gg/v1/memory/merge \
-H "Authorization: Bearer $SLOP_KEY" \
-d '{"source_id": "mbranch-aaa", "target_id": "mbranch-bbb", "policy": "auto"}'
# policy: auto | llm-smart | human-in-loop | agent-debate
Endpoints
POST /v1/memory/branch — Create Merkle-rooted branch snapshot
POST /v1/memory/restore/:id — Restore namespace from branch
GET /v1/memory/branches — List all branches
POST /v1/memory/branch/compare — Diff two branches
POST /v1/memory/merge — Merge two branches
GET /v1/memory/conflicts/:merge_id — View unresolved merge conflicts
POST /v1/memory/conflicts/:merge_id/resolve — Resolve conflicts manually
Bayesian Memory Calibration v5.0
Apply Bayesian inference to calibrate confidence in stored beliefs. Given a prior probability and observed likelihood, the system computes a posterior and stores the calibrated belief. Use this to build agents with calibrated uncertainty rather than binary true/false memory.
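A sketch of the textbook two-hypothesis update, assuming the likelihood is P(evidence | belief) and that P(evidence | not belief) = 1 - likelihood. The service's exact likelihood model isn't documented, so its returned posterior may differ from this sketch:

```python
# Textbook two-hypothesis Bayes rule. Assumption: likelihood is
# P(evidence | belief) and P(evidence | not belief) = 1 - likelihood.

def bayesian_update(prior, likelihood):
    numerator = prior * likelihood
    posterior = numerator / (numerator + (1 - prior) * (1 - likelihood))
    return {"prior": prior, "likelihood": likelihood,
            "posterior": round(posterior, 3),
            "confidence_delta": round(posterior - prior, 3)}

print(bayesian_update(0.5, 0.8))
# {'prior': 0.5, 'likelihood': 0.8, 'posterior': 0.8, 'confidence_delta': 0.3}
```

Neutral evidence (likelihood 0.5) leaves the prior unchanged, which is the calibration property this API is built around.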
curl -X POST https://slopshop.gg/v1/memory/bayesian/update \
-H "Authorization: Bearer $SLOP_KEY" \
-d '{
"key": "user_will_churn",
"prior": 0.2,
"likelihood": 0.8,
"evidence": "user_opened_cancellation_page",
"namespace": "crm"
}'
# Returns: { prior: 0.2, likelihood: 0.8, posterior: 0.667, confidence_delta: 0.467 }
Episodic Memory Chains v5.0
Link memory entries into temporally ordered chains — like a timeline of agent experiences. Episodes are linked by prev_id/next_id pointers, enabling forward/backward traversal. Supports event, decision, observation, and custom episode types.
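Traversal over those pointers is a plain linked-list walk; the episode shape here is an assumed illustration:

```python
# Sketch of forward/backward traversal over prev_id/next_id pointers.
# Episode shape is an assumption for illustration.

episodes = {
    "ep-a": {"content": "Proposed plan A", "prev_id": None, "next_id": "ep-b"},
    "ep-b": {"content": "User rejected proposal B", "prev_id": "ep-a", "next_id": "ep-c"},
    "ep-c": {"content": "Pivoted to plan C", "prev_id": "ep-b", "next_id": None},
}

def traverse(start, direction="forward", limit=20):
    pointer = "next_id" if direction == "forward" else "prev_id"
    chain, cur = [], start
    while cur is not None and len(chain) < limit:
        chain.append(cur)
        cur = episodes[cur][pointer]
    return chain

assert traverse("ep-a") == ["ep-a", "ep-b", "ep-c"]
assert traverse("ep-c", direction="backward") == ["ep-c", "ep-b", "ep-a"]
```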
# Add an episode
curl -X POST https://slopshop.gg/v1/memory/episode \
-H "Authorization: Bearer $SLOP_KEY" \
-d '{"content": "User rejected proposal B", "episode_type": "decision", "prev_id": "ep-xyz123"}'
# Traverse the chain
GET /v1/memory/chain?namespace=project&start=ep-abc&direction=forward&limit=20Memory Condition Triggers v5.0
Register event-driven triggers that fire when memory conditions are met — e.g., when key count exceeds a threshold, when a specific key is written, or on a time pattern. Triggers automatically execute a configured action (dream, webhook, chain, etc.).
curl -X POST https://slopshop.gg/v1/memory/trigger \
-H "Authorization: Bearer $SLOP_KEY" \
-d '{
"condition_type": "key_count_exceeds",
"condition_value": "100",
"action_type": "dream",
"action_config": {"strategy": "compress"},
"namespace": "research"
}'
POST /v1/memory/trigger — Register memory-condition trigger
GET /v1/memory/triggers — List all triggers
DELETE /v1/memory/trigger/:id — Remove a trigger
Procedural Memory v5.0
Store learned, repeatable tool chains as named procedures. Agents can learn what sequence of tools solved a problem, store it as a procedure, and recall it later. Procedures accumulate a success_count as they are executed, enabling agents to prioritize their most reliable strategies.
curl -X POST https://slopshop.gg/v1/memory/procedure/learn \
-H "Authorization: Bearer $SLOP_KEY" \
-d '{
"name": "competitive-analysis",
"description": "Research competitors, extract signals, synthesize report",
"tool_chain": ["search-web", "extract-text", "llm-think", "memory-set"],
"trigger_pattern": "analyze competitors"
}'
Swarm Orchestration v5.0
Coordinate multiple agents on a single task with automatic context passing and 6-axis quality scoring. Each agent receives the outputs of all previous agents as context, building towards a final synthesized result. Results are stored in memory for future retrieval and dream consolidation.
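The context-passing loop can be sketched like this; run_agent is a stand-in for a real model call:

```python
# Sketch of sequential context passing: each agent receives the
# outputs of all previous agents. run_agent is a stub, not the API.

def run_agent(role, prompt, context):
    return f"[{role}] answered '{prompt}' given {len(context)} prior outputs"

def orchestrate(task, agents):
    outputs = []
    for agent in agents:
        outputs.append(run_agent(agent["role"], agent["prompt"], list(outputs)))
    return outputs[-1]  # final agent produces the synthesized result

agents = [
    {"role": "researcher", "prompt": "summarize developments"},
    {"role": "analyst", "prompt": "extract trends"},
    {"role": "synthesizer", "prompt": "write summary"},
]
print(orchestrate("multi-agent AI frameworks", agents))
# [synthesizer] answered 'write summary' given 2 prior outputs
```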
6-Axis Scoring
| Axis | Meaning |
|---|---|
success_probability | Fraction of agents that produced valid output |
robustness | Success rate weighted by agent reliability |
foresight | Estimated long-term value of the output |
goal_alignment | How well the result matches the original task |
efficiency | Output quality per credit spent |
cost_credits | Total credits consumed |
curl -X POST https://slopshop.gg/v1/swarm/orchestrate \
-H "Authorization: Bearer $SLOP_KEY" \
-d '{
"task": "Research the current state of multi-agent AI frameworks",
"agents": [
{"role": "researcher", "prompt": "Find and summarize recent developments"},
{"role": "analyst", "prompt": "Extract key trends and gaps"},
{"role": "synthesizer", "prompt": "Write an actionable executive summary"}
],
"namespace": "research",
"dream_after": true
}'
POST /v1/swarm/orchestrate — Run structured orchestration with 6-axis scoring
POST /v1/swarm/create — Create a persistent swarm configuration
GET /v1/swarm/:id/status — Check swarm status
GET /v1/swarms — List all swarms
Research Engine v4.0
Run deep, multi-provider research on any topic. Results are automatically persisted to memory for future retrieval. A 30-minute cache prevents redundant calls for the same query. Combine with the Dream Engine for scheduled overnight research workflows.
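The caching behavior can be modeled as a TTL cache keyed by (query, tier, provider). The real cache is server-side; this only illustrates the keying and expiry logic:

```python
import time

# Sketch of the 30-minute research cache keyed by query+tier+provider.
# `run` stands in for the actual provider call.

CACHE, TTL = {}, 30 * 60  # seconds

def cached_research(query, tier, provider, run, now=time.time):
    key = (query, tier, provider)
    hit = CACHE.get(key)
    if hit and now() - hit["at"] < TTL:
        return {**hit["result"], "cached": True}
    result = run(query, tier, provider)
    CACHE[key] = {"at": now(), "result": result}
    return {**result, "cached": False}

calls = []
def fake_run(q, t, p):
    calls.append(q)
    return {"findings": "..."}

assert cached_research("x", "basic", "claude", fake_run)["cached"] is False
assert cached_research("x", "basic", "claude", fake_run)["cached"] is True
assert len(calls) == 1  # second call served from cache
```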
Providers
| Provider | Key | Specialization |
|---|---|---|
| Claude | claude | Long-form synthesis, reasoning, structured analysis (ANTHROPIC_API_KEY) |
| Grok / X | grok | Real-time web + X (Twitter) — English and Japanese 日本語 (GROK_API_KEY) |
| DeepSeek | deepseek | Chinese internet: Xiaohongshu 小红书 / Zhihu 知乎 / WeChat 微信 / Bilibili B站 (DEEPSEEK_API_KEY) |
| OpenAI | openai | Academic, scientific, and technical literature (OPENAI_API_KEY) |
| Yandex | yandex | Russian internet: Яндекс / Telegram channels / VK (YANDEX_API_KEY) |
| All (cross-synthesis) | all | Runs all available providers and cross-synthesizes results — requires deep or planet tier |
Tiers
| Tier | Depth | Cross-Synthesis | Use Case |
|---|---|---|---|
basic | Single-pass summary | No | Quick fact checks |
standard | Multi-source synthesis | No | Market research, competitive analysis |
advanced | Deep multi-model loop | No | Technical deep-dives |
deep | Exhaustive with critique loops | Yes | Comprehensive reports, due diligence |
planet | All providers + full cross-synthesis | Yes | Global intelligence: Western + Chinese + Russian + Japanese sources |
Run Research
# Single provider
curl -X POST https://slopshop.gg/v1/research \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{"query": "latest MCP agent frameworks 2026", "tier": "advanced", "provider": "claude"}'
# Planet tier — all providers, cross-synthesized
curl -X POST https://slopshop.gg/v1/research \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{
"query": "AI infrastructure trends 2026",
"tier": "planet",
"provider": "all",
"namespace": "my-research"
}'
# Results are cached 30 minutes per query+tier+provider combination
# Results are automatically stored in memory namespace 'research' (or custom namespace)
Response Format
{
"query": "AI infrastructure trends 2026",
"tier": "planet",
"provider": "all",
"findings": { ... }, // structured findings per provider
"synthesis": "...", // cross-provider synthesis (deep/planet only)
"sources": [...],
"confidence": 0.92,
"cached": false,
"memory_key": "research_a1b2c3",
"_engine": "real"
}
Research History
Retrieve your past research results. Returns the last 20 by default, sorted by recency.
GET /v1/research/history?limit=20
Authorization: Bearer $KEY
# Returns stored research results from memory namespace 'research'
State Management v4.0
Persistent key-value state for your agents, isolated per API key. Supports atomic increment for counters and rate tracking.
Set / Get State
# Set state
curl -X POST https://slopshop.gg/v1/state/my-counter \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{"value": 0}'
# Get state
GET /v1/state/my-counter
List All State Keys
GET /v1/state
Authorization: Bearer $KEY
# Returns all state key-value pairs for your API key
Atomic Increment
Increment a numeric state value atomically. Safe for concurrent agent use.
curl -X POST https://slopshop.gg/v1/state/page-views/increment \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{"by": 1}'
# by is optional, defaults to 1
# Returns: {"key": "page-views", "value": 42, "previous": 41}
Files API v4.0
Upload, download, manage, and search files within Slopshop. Full CRUD including rename, copy, and full-text search.
Upload a File
curl -X POST https://slopshop.gg/v1/files/upload \
-H "Authorization: Bearer $KEY" \
-F "file=@report.pdf"
List & Search Files
GET /v1/files — List all files
GET /v1/files?search=report — Search by filename
GET /v1/files?tag=research — Filter by tag
GET /v1/files/search?q=report — Full-text search endpoint
File Operations
# Delete
DELETE /v1/files/:id
# Rename
POST /v1/files/:id/rename
{"filename": "new-report.pdf"}
# Copy
POST /v1/files/:id/copy
{"filename": "report-copy.pdf"} # optional new filename
Schedule Management v4.0
Create, manage, and monitor scheduled agent tasks with full run history, pause/resume controls, and manual triggers. Credits are deducted per run and persisted to your balance.
Create a Schedule
curl -X POST https://slopshop.gg/v1/schedules \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{
"name": "Daily report",
"cron": "0 9 * * *",
"task": "crypto-hash-sha256",
"input": {"data": "daily-seed"}
}'
Get a Single Schedule
GET /v1/schedules/:id
Authorization: Bearer $KEY
Run History
Every schedule run is logged to the schedule_runs table with timestamp, status, error (if any), and credits used.
GET /v1/schedules/:id/history?limit=50
Authorization: Bearer $KEY
# Returns: [{ran_at, status, credits_used, error}, ...]
Pause / Resume / Trigger
POST /v1/schedules/:id/pause — Disable schedule (sets enabled=false)
POST /v1/schedules/:id/resume — Re-enable schedule
POST /v1/schedules/:id/trigger — Run immediately (sets next_run to now)
Agent Templates v4.0
Pre-built agent configurations for common workflows. Each template wires together chains, memory, army deployments, and tool calls into a single invocable unit.
Research Swarm
Multi-model research loop: Claude researches, Grok critiques, Claude improves. Results stored in free memory. Configurable loop count and budget.
slop template run research-swarm \
--topic="AI agent infrastructure 2026" \
--loops=10 \
--budget=200
Content Machine
End-to-end content pipeline: research, outline, draft, edit, publish. Uses hive workspaces for multi-agent collaboration.
Security Audit
Automated security sweep: DNS enumeration, SSL checks, header analysis, broken link detection. Exports a structured report to memory.
Data Pipeline
ETL template: fetch data (sense-url-content), transform (text-csv-to-json, text-json-flatten), analyze (math-statistics), store (memory-set). Fully configurable via pipes.
Custom Templates
Create your own templates by defining a chain of steps with model preferences, memory namespaces, and tool calls. Publish to the marketplace or keep private.
curl -X POST https://slopshop.gg/v1/templates/create \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{"name": "my-pipeline", "steps": [
{"tool": "sense-url-content", "params": {"url": "$input_url"}},
{"tool": "text-summarize", "params": {"text": "$prev.content"}},
{"tool": "memory-set", "params": {"key": "summary_$input_url", "value": "$prev.summary"}}
]}'
OAuth Connectors v4.0
Connect external services to your agent workflows via OAuth. Configure once, then agents can authenticate and act on your behalf in GitHub, Slack, Linear, Notion, and more. All tokens are encrypted with AES-256-GCM and auto-rotated nightly.
Configure a Connector
curl -X POST https://slopshop.gg/v1/connectors/config \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{"toolkit":"github","client_id":"YOUR_GITHUB_APP_ID","client_secret":"YOUR_SECRET","scopes":["repo"],"auth_url":"https://github.com/login/oauth/authorize","token_url":"https://github.com/login/oauth/access_token"}'
List & Connect
GET /v1/connectors/list — List all configured connectors
GET /v1/connectors/connect/:toolkit — Start OAuth flow for a toolkit (returns redirect URL)
DELETE /v1/connectors/:id — Remove a connector and revoke tokens
Vault Security
All OAuth tokens are stored in an encrypted vault using AES-256-GCM. Tokens are auto-rotated nightly via refresh token exchange. Revocation is immediate and propagates to the external provider.
Webhook Triggers v4.0
Define triggers that fire agent workflows when external events arrive via webhook. Connect GitHub push events, Slack messages, Linear issue updates, or any service that sends webhooks.
Create a Trigger
curl -X POST https://slopshop.gg/v1/triggers/create \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{"name":"on-push","source":"github","event":"push","action":{"chain_id":"deploy-pipeline"}}'
Receive Webhook & List
POST /v1/triggers/webhook/:id — Webhook receiver (give this URL to external services)
GET /v1/triggers/list — List all configured triggers
When a webhook hits the trigger URL, Slopshop validates the payload, matches it to the trigger config, and executes the associated action (chain, pipe, or single tool call).
Audit Export v4.0
Export your full audit trail for compliance, debugging, or SOC2 readiness. Returns every API call, credit transaction, and agent action associated with your key.
curl https://slopshop.gg/v1/audit/export \
-H "Authorization: Bearer $KEY"
Returns a JSON array of timestamped audit entries with endpoint, credits used, latency, and response hashes. Suitable for SOC2 evidence collection and incident forensics.
Schema Import v4.0
Bulk-import tool schemas into your Slopshop instance. Useful for self-hosted deployments where you want to register custom tools or sync schemas from a remote catalog.
curl -X POST https://slopshop.gg/v1/import/schemas \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d '{"schemas":[{"slug":"custom-tool","name":"Custom Tool","description":"My custom handler","input":{"type":"object","properties":{"text":{"type":"string"}}},"output":{"type":"object","properties":{"result":{"type":"string"}}}}]}'
Imported schemas are immediately available in tool discovery (GET /v1/tools) and introspection (GET /v1/introspect?slug=custom-tool).
Self-Hosting
Slopshop is fully self-hostable. Run your own instance with zero external dependencies for compute-tier APIs.
Quick Start
git clone https://github.com/slopshop/slopshop.git
cd slopshop
npm install
node server-v2.js
The server starts on port 3000 by default.
Docker
docker build -t slopshop .
docker run -p 3000:3000 \
-e ANTHROPIC_API_KEY=sk-ant-... \
-e OPENAI_API_KEY=sk-... \
slopshop
Environment Variables
| Variable | Required | Description |
|---|---|---|
PORT | No | Server port (default: 3000) |
ANTHROPIC_API_KEY | For LLM APIs | Anthropic API key for Claude-powered APIs |
OPENAI_API_KEY | For LLM APIs | OpenAI API key (fallback if Anthropic not set) |
XAI_API_KEY | For LLM APIs | xAI API key for Grok-powered APIs |
DEEPSEEK_API_KEY | For LLM APIs | DeepSeek API key for DeepSeek-powered APIs |
SLOPSHOP_SECRET | No | Secret for JWT signing and key generation |
DEMO_CREDITS | No | Credits for demo key (default: 100) |
RATE_LIMIT | No | Max requests per minute per key (default: 60) |
North Star
The North Star API lets your agent define a single guiding goal. Setting it triggers a research swarm, stores findings in memory, and returns a summary.
Set North Star
curl -X POST https://slopshop.gg/v1/northstar/set \
-H "Authorization: Bearer sk-slop-your-key-here" \
-H "Content-Type: application/json" \
-d '{"goal": "building an open-source AI messaging platform"}'
Response: Triggers a research swarm, stores results in memory, and returns a findings summary with sources and next steps.
Get Current North Star
curl https://slopshop.gg/v1/northstar \
-H "Authorization: Bearer sk-slop-your-key-here"
Returns the current North Star goal for your account.
Daily Hive Intelligence
Run automated research on your North Star goal. Results are posted to your Hive workspace and stored in memory.
curl -X POST https://slopshop.gg/v1/hive/daily-intelligence \
-H "Authorization: Bearer sk-slop-your-key-here" \
-H "Content-Type: application/json" \
-d '{"mode": "medium", "hive_id": "optional-hive-id"}'
| Mode | Credits | Description |
|---|---|---|
light | 20 | Quick scan — top headlines and basic research on your North Star |
medium | 35 | Deeper analysis with multiple sources, competitor tracking, trend detection |
deep | 75 | Full research report with citations, market analysis, and actionable recommendations |
Results are automatically posted to your Hive channel and stored in memory for later retrieval.
Error Codes & Refunds
Slopshop uses consistent error codes across all 600+ endpoints. Credits are automatically refunded when a handler errors — you only pay for successful calls.
Auto-Refund Policy
If an API handler throws an error after credits are deducted, those credits are automatically refunded to your account. You will never be charged for a failed call. The refund is instant and reflected in your balance immediately.
Rate Limits
All API keys are rate-limited to 120 requests per minute. If you hit the limit, wait 30 seconds before retrying. The rate_limited error includes a retry_after field in seconds.
Common Error Codes
| Code | HTTP Status | Description |
|---|---|---|
missing_fields | 400 | Required fields are missing from the request body. Check the API schema for required parameters. |
insufficient_credits | 402 | Your account does not have enough credits for this call. Purchase more at /pricing. |
rate_limited | 429 | Too many requests. Wait for retry_after seconds (typically 30s) before retrying. |
api_not_found | 404 | The requested API slug does not exist. Check /tools for the full catalog. |
unauthorized | 401 | Missing or invalid API key. Include Authorization: Bearer YOUR_KEY in headers. |
handler_error | 500 | Internal handler failure. Credits are auto-refunded. Report persistent errors to dev@slopshop.gg. |