Execute untrusted code safely in your AI product
Add code execution to your app in minutes. Run user scripts, LLM-generated code, and dynamic workflows in isolated sandboxes—without managing infrastructure.
Scale to 1,000+ concurrent sandboxes without enterprise commitments.
import { Sandbox } from '@simplesandbox/sdk'
const client = Sandbox()
const sandbox = await client.sandboxes.create({ image: 'node:lts' })
const result = await sandbox.exec("echo 'Hello, world!'")
console.log(result.stdout) // Hello, world!
await sandbox.kill()

Integrate in 6 lines of code. No setup calls, no complex configuration.
When you need SimpleSandbox
If your app needs to execute code dynamically, you're in the right place.
You need SimpleSandbox if you're building:
- ✓ Code interpreter features (like ChatGPT's Code Interpreter)
- ✓ LLM-generated code execution (Python scripts, shell commands, data analysis)
- ✓ User-submitted code runners (plugins, automations, custom formulas)
- ✓ Agent workflows that execute code (data processing, web scraping, testing)
- ✓ Isolated test environments (CI/CD, code validation, security scanning)
You don't need SimpleSandbox if:
- ✗ You just want to deploy a standard web app → Try Vercel, Railway, or Fly.io instead
- ✗ Your AI features don't execute code dynamically → You don't need sandboxing
- ✗ You're looking for a serverless functions platform → Check out AWS Lambda or Cloudflare Workers
- ✗ You need managed Kubernetes or container orchestration → This is specifically for code execution
Building secure code execution shouldn't be this hard
You need code execution, not a PhD in infrastructure management.
Building from scratch takes months
Setting up secure sandboxes, isolation, resource limits, network policies, and monitoring from scratch can take 3-6 months of engineering time.
Enterprise solutions force you to migrate your whole stack
Most platforms require you to adopt their entire ecosystem. You just need sandboxes, but they want you to move everything.
Scaling hits enterprise pricing walls
Start on a free tier, then suddenly face $150/month base fees or $100k/year commitments. There's no gradual path for growing startups.
How it works
npm install @simplesandbox/sdk
await client.sandboxes.create()
That's it. Add code execution without managing infrastructure.
No Kubernetes, no VMs, no security policies to configure
We handle isolation, resource limits, network policies, and monitoring. You just create a sandbox and execute code. That's it.
Integrate with your existing app in minutes
Standard REST API that works with Next.js, Express, FastAPI, or any HTTP client. No need to migrate your entire stack to a new platform.
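Because the API is plain JSON over HTTPS, you can also call it without the SDK. The sketch below shows the general shape with fetch; the base URL, endpoint path, and payload fields are illustrative assumptions here, not the documented contract (the OpenAPI docs define the real one).

```typescript
// Hypothetical base URL -- substitute the one from the API docs.
const API_BASE = 'https://api.simplesandbox.example/v1'

// Build the request pieces separately so any HTTP client can send them.
function createSandboxRequest(image: string, apiKey: string) {
  return {
    url: `${API_BASE}/sandboxes`,
    method: 'POST' as const,
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ image }),
  }
}

// With fetch (Node 18+, Deno, browsers):
async function createSandbox(image: string, apiKey: string) {
  const { url, method, headers, body } = createSandboxRequest(image, apiKey)
  const res = await fetch(url, { method, headers, body })
  if (!res.ok) throw new Error(`create failed: ${res.status}`)
  return res.json()
}
```

The same pattern works from Next.js route handlers, Express middleware, or a FastAPI backend via any HTTP library.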
Scale from 3 to 1,000+ concurrent sandboxes without sales calls
Start free, then upgrade to $10/mo (10 concurrent) or $50/mo (1,000+ concurrent). No enterprise commitments. 50% cheaper than comparable services at $0.0252/vCPU-hour.
Real-world code execution examples
See how developers integrate SimpleSandbox into their apps.
import { Sandbox } from '@simplesandbox/sdk'
const client = Sandbox()
const sandbox = await client.sandboxes.create({ image: 'node:lts-alpine3.22' })
const shell = await sandbox.exec("echo 'Hello from shell!' && pwd")
console.log(shell.stdout.trim())
await sandbox.kill()

import { Sandbox } from '@simplesandbox/sdk'
const client = Sandbox()
const sandbox = await client.sandboxes.create({ image: 'python:3.12-slim' })
await sandbox.exec('pip install --quiet pandas', { timeoutMs: 60_000 })
const program = `
import pandas as pd
data = {'product': ['A', 'B', 'C'], 'sales': [100, 200, 150]}
df = pd.DataFrame(data)
print(df['sales'].sum())
`
await sandbox.files.write('script.py', program)
const result = await sandbox.exec('python script.py', { timeoutMs: 30_000 })
console.log(result.stdout.trim())
await sandbox.kill()

import { Sandbox } from '@simplesandbox/sdk'
const client = Sandbox()
const sandbox = await client.sandboxes.create({
image: 'node:lts-alpine3.22',
timeoutMs: 60_000,
})
const script = `
const fs = require('fs')
fs.writeFileSync('/tmp/output.json', JSON.stringify({ generated: Date.now() }))
console.log('wrote output.json')
`
await sandbox.files.write('/tmp/script.js', script)
const result = await sandbox.exec('node /tmp/script.js')
const output = await sandbox.files.read('/tmp/output.json')
console.log(result.stdout)
console.log('File contents:', output)
await sandbox.kill()

import { Sandbox } from '@simplesandbox/sdk'
const client = Sandbox()
const sandbox = await client.sandboxes.create({ image: 'python:3.12-slim' })
const flaskApp = `
from flask import Flask
app = Flask(__name__)
@app.get('/')
def hello():
return 'Hello from Sandbox!'
if __name__ == '__main__':
app.run(host='::', port=5000)
`
await sandbox.files.write('server.py', flaskApp)
await sandbox.exec('pip install flask')
await sandbox.exec('python server.py >/tmp/server.log 2>&1', { background: true })
const host = sandbox.expose(5000)
console.log(`Preview URL: https://${host}`)
// Clean sandbox later
// await sandbox.kill()

import { Sandbox } from '@simplesandbox/sdk'
const client = Sandbox()
const sandbox = await client.sandboxes.create({ image: 'node:lts-alpine3.22' })
const expressApp = `
const express = require('express')
const app = express()
app.get('/', (req, res) => {
res.json({ message: 'Hello from Sandbox!' })
})
app.listen(3000, '::', () => console.log('server listening on 3000'))
`
await sandbox.exec('mkdir app')
await sandbox.files.write('app/app.js', expressApp)
await sandbox.exec('cd app && npm init -y')
await sandbox.exec('cd app && npm install express')
await sandbox.exec('cd app && node app.js >/tmp/server.log 2>&1', { background: true })
const host = sandbox.expose(3000)
console.log(`API available at https://${host}`)

Production-ready code execution infrastructure
Everything you need to run untrusted code safely, without managing servers.
Firecracker microVM cold start in ~1s
MicroVMs start in 800-1200ms using AWS Lambda-grade Firecracker technology. Warm pools (coming Q1 2026) start in under 100ms.
REST API + Native SDKs
OpenAPI-documented REST API with official JavaScript/TypeScript SDK. Python and Go SDKs coming soon. Works with any HTTP client. 5-minute integration time.
Per-second billing at $0.0252/vCPU-hour
1M credits = $1. Pay only for what you use, billed per-second. 50% cheaper than comparable services. No hidden costs or hourly minimums.
Firecracker microVM isolation
AWS Lambda-grade isolation technology. Network-isolated by default with optional internet access. Run untrusted LLM-generated code safely in production.
Any Docker image supported
Use official images like node:lts, python:3.12, or bring your own custom Docker images. Full control over the runtime environment and dependencies.
Standard REST, no lock-in
Works with your existing infrastructure on any cloud. Standard JSON over HTTPS. No proprietary protocols or vendor lock-in. Migrate anytime.
No 15-minute Lambda limits
Run training jobs for hours or keep agents alive indefinitely. No maximum duration. Billed per-second with no minimum. Perfect for long-running AI workflows.
Scale to 1,000+ concurrent (Pro tier)
From 3 concurrent on free tier to 1,000+ on Pro ($50/mo). Scale instantly without enterprise commitments. No rate limit negotiations required.
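When fanning a batch of jobs out across sandboxes, it helps to cap how many are live at once so you stay inside your plan's concurrency limit. The helper below is a generic client-side TypeScript pattern, not part of the SDK.

```typescript
// Run a list of async jobs with at most `limit` in flight at once, so the
// number of live sandboxes never exceeds your plan's concurrency cap
// (3 free, 10 hobby, 1,000+ pro).
async function runWithConcurrencyLimit<T>(
  jobs: Array<() => Promise<T>>,
  limit: number,
): Promise<T[]> {
  const results: T[] = new Array(jobs.length)
  let next = 0 // index of the next job to claim
  async function worker(): Promise<void> {
    while (next < jobs.length) {
      const i = next++ // claimed synchronously; JS is single-threaded, so no race
      results[i] = await jobs[i]()
    }
  }
  // Spawn `limit` workers that drain the job queue cooperatively.
  const workers = Array.from({ length: Math.min(limit, jobs.length) }, worker)
  await Promise.all(workers)
  return results
}
```

Each job would typically create a sandbox, run its exec calls, and kill it; with `limit` set to your tier's cap, bursts of work queue up instead of failing.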
AWS Lambda-grade security without AWS complexity
Same Firecracker isolation technology, simpler pricing and deployment
Firecracker Isolation
AWS Lambda-grade microVM technology. Complete network isolation by default with optional internet access. Each sandbox runs in its own encrypted environment with kernel-level separation.
Fly.io Infrastructure
Multi-region deployment on Fly.io with 99.9% uptime SLA. SOC 2 Type II certification in progress. Automated failover and health monitoring across all regions.
Data Privacy
Your code and data never leave the sandbox. No logging of execution content or file contents. GDPR and CCPA compliant. Full data encryption at rest and in transit.
50% cheaper than E2B, no enterprise commitments
1M credits = $1. Billed per-second. Scale without talking to sales.
Free
- Up to 3 concurrent sandboxes
- 17 hours of compute time
- No credit card required
- Community support
Hobby ($10/mo)
- Up to 10 concurrent sandboxes
- 171 hours of compute time
- Priority email support
- Usage dashboard
Pro ($50/mo)
- Up to 1,000+ concurrent sandboxes
- 855 hours of compute time
- Priority support + Slack
- Pre-warmed pools (coming soon)
Enterprise
- Unlimited concurrent sandboxes
- Custom credit packages
- Dedicated support + SLA
- Volume discounts
Auto top-up: When you run low on credits, we automatically add 1M credits ($1) to keep your sandboxes running.
30-day money-back guarantee. Cancel anytime, no questions asked.
Built for scaling without enterprise fees
How we compare to alternatives when you need to scale
* Pay only for what you use. Plans start at $0 (free tier), $10/mo (hobby), or $50/mo (pro) with included credits. Auto top-up when needed.
Questions from developers like you
Is this for deploying my app, or for executing code within my app?
SimpleSandbox is for executing code WITHIN your app, not for deploying your app itself. Use it when your app needs to run user code, LLM-generated scripts, or untrusted plugins. For example: building a code interpreter feature, running automation scripts, or executing data analysis code from AI agents. If you just want to deploy a web app, use Vercel, Railway, or Fly.io instead. If your app executes code dynamically, you need SimpleSandbox.
How long does integration actually take?
Most developers integrate in under 5 minutes. Install the SDK, create a sandbox, execute code. That's it. No complex setup, no sales calls, no onboarding sessions.
Is 1-second cold start fast enough for production?
For most agent workloads—data processing, code execution, API calls—a one-second start time won't be noticeable to users. If you need faster starts for real-time use cases, we're working on pre-warmed pools that start in under 500ms.
How does per-second billing work?
You're billed only for the time your sandboxes are running, calculated to the second. If you run a sandbox for 30 seconds, you pay for 30 seconds. No hourly minimums, no idle charges. 1M credits = $1, so costs are completely transparent.
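To make the arithmetic concrete: $0.0252 per vCPU-hour with 1M credits = $1 works out to 25,200 credits per vCPU-hour, which divides evenly into 7 credits per vCPU-second. The sketch below assumes a run's cost scales linearly with its vCPU count.

```typescript
// $0.0252 per vCPU-hour at 1M credits = $1 is 25,200 credits per vCPU-hour,
// i.e. 7 credits per vCPU-second.
const CREDITS_PER_VCPU_SECOND = 7

// Credits consumed by a run, billed per-second with no minimum.
function executionCostCredits(seconds: number, vcpus: number): number {
  return seconds * vcpus * CREDITS_PER_VCPU_SECOND
}

// 1M credits = $1
function creditsToDollars(credits: number): number {
  return credits / 1_000_000
}

// A 30-second run on 1 vCPU costs 210 credits, i.e. $0.00021.
```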
What isolation technology do you use?
We use Firecracker microVM isolation, the same technology that powers AWS Lambda, running on Fly.io infrastructure with a 99.9% uptime SLA. Pricing is 50% lower than comparable alternatives.
What languages and runtimes are supported?
Any language with an official Docker image: Node.js, Python, Go, Ruby, Java, Rust, PHP, and more. Use official images like node:lts, python:3.12, or bring your own custom Docker images with pre-installed dependencies.
How do I debug when something goes wrong?
All stdout/stderr is captured and returned in the exec response. For real-time output, use streaming callbacks (onStdout/onStderr) or connect to interactive terminals via WebSocket. See the docs for examples.
Can I use this for production workloads?
Yes. Built on Fly.io infrastructure with 99.9% uptime. Firecracker provides AWS Lambda-grade isolation. Start with the free tier to validate your use case.
How does this compare to AWS Lambda or Cloud Run?
Unlike Lambda's 15-minute limit, sandboxes run indefinitely. Unlike Cloud Run's complexity, we handle all orchestration. Better for dynamic AI workloads needing full file system access and long-running processes. 50% cheaper per vCPU-hour than comparable services.
Do you have persistent storage/volumes?
Yes. Persistent volumes are in beta and currently free. Mount volumes to preserve your workspace, dependencies, and build artifacts across sandbox sessions. See the volumes documentation for details.
What happens if I exceed my plan's credits?
Your sandboxes won't stop. Auto top-up adds 1M credits ($1) to keep running. No surprise shutdowns mid-execution. You can also manually add credits anytime via the dashboard.
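As an illustration of how top-up keeps balances from hitting zero, here is a toy model: when the balance dips below a threshold, 1M credits ($1) are added. The 100k-credit threshold is a made-up example for this sketch; the billing docs define when top-up actually triggers.

```typescript
// Illustrative model of auto top-up (the threshold value is hypothetical).
const TOP_UP_CREDITS = 1_000_000 // 1M credits = $1

function applyAutoTopUp(balance: number, threshold = 100_000): number {
  // Below the threshold, one top-up increment is added; otherwise unchanged.
  return balance < threshold ? balance + TOP_UP_CREDITS : balance
}
```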
Add code execution to your app today
Start free with 1M credits (17 hours of compute).
No credit card required. Integrate in minutes.