# Fast launching infrastructure for user apps

This is a tool you can use to rapidly deploy and manage isolated, single-purpose applications within sandboxes. It is especially good for LLM-generated code. Sandboxes include persistent state, a Supabase database, and secret management for apps that need to connect to external APIs.

When you launch a new app, it comes up on an https:// URL within about a second. Users (or robots) can use it like any web app. When it stops seeing HTTP traffic, it goes to sleep. The next time someone uses the app, it generally wakes back up within 100ms.

After extended periods of inactivity, apps can be hibernated. We keep all their on-disk state and tear down the associated compute. The next time they're accessed, we restore from slow storage and they become available within 10-15s (depending on the size of their state).

You pay only for the compute, storage, and cold storage you use. When an app is running, you pay for CPU, memory, local NVMe storage, and object storage. When an app is sleeping, you pay only for storage. For hibernated apps, you pay only for object storage.

## Deploy User App Code

- Deploy via:
  - `git push` to `https://<app-name>.platform.bot`.
  - `curl PUT` a zip file to `https://<app-name>.platform.bot`.
  - REST API for lifecycle operations (create, suspend, archive, delete, fork).

A command sketch of the `git push` and `curl` paths appears at the end of this document.

## VM Resources

- Up to 8 shared CPUs and 16 GB RAM.
- Single-process model for small, individual workloads.
- Automatic geographic placement:
  - Human-interactive apps near users.
  - API-driven apps near API data centers.

## Storage

- Persistent `/data` filesystem (up to 1 PB) via JuiceFS.
- JuiceFS data is persisted to object storage.
- Fast local 25 GB NVMe cache.
- Runtime storage (Docker images, application code).

## App Lifecycle

- Automatic suspension during inactivity; resumes within 100ms.
- Apps idle for over one month are archived to cold storage; they resume within about a minute.
- Instant snapshot and fork capabilities for rollback or branching.

## Runtime Environment

- Docker-based environment, defaulting to a user-specified application process on port `8000`.
- Supabase integration:
  - PostgREST (`localhost:3000`).
  - Direct PostgreSQL (`localhost:5432`).
  - LLM data context endpoint.
- Secure API gateway (via [Superfly ssokenizer](https://github.com/superfly/ssokenizer)) at `localhost:4000` for managing third-party API credentials.
- Embedded MCP server for datastore management, log viewing, code updates, and API auth.

## Accessing User Apps

- Running apps listening on port `8000` are available at `https://<app-name>.platform.bot`.

## Billing

- Per-second billing for running apps.
- Flat-rate billing for suspended apps (storage and memory allocation).
- Separate billing for runtime and cache storage, as detailed in the pricing section below.

## Sandboxing

- User apps run in isolated virtual machines.
- Managed secure credentials; opaque secrets are restricted to the runtime environment.

## Typical Use Cases

- Rapid development and prototyping.
- Hosting LLM-generated apps.
- Running specialized MCP servers and AI workloads.
- Deploying custom, task-specific tools.

## Scaling and Advanced Usage

The MCP server (`pbot`) includes a 'get help' function for submitting support requests. When an app needs fully scalable hosting infrastructure, submit a request through this function to migrate to a full Supabase + Fly.io setup.

## Installation and Setup

The MCP server `pbot` drives platform management and runs as a stdio server.
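Once `pbot` is installed (see the commands below), one quick way to exercise it before wiring it into an agent is the MCP Inspector, a generic MCP debugging tool that launches a stdio server by command. This is only a sketch: it assumes Node.js is available and that `pbot` needs no extra arguments or environment.

```bash
# Launch the MCP Inspector against the local pbot binary over stdio.
# Requires Node.js/npx; pbot must already be installed and on your PATH.
npx @modelcontextprotocol/inspector pbot
```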
Point your tool of choice at the stdio MCP server by simply running `pbot`, then use the `platform-bot-setup` tool to initialize your environment. Install `pbot` locally with the following commands:

### macOS/Linux

Run in your terminal:

```bash
curl -L https://platform.bot/bin/install.sh | sh
```

### Windows

Run this in PowerShell:

```powershell
iwr https://platform.bot/bin/install.ps1 -useb | iex
```

Alternatively, use our REST API, available at `https://api.platform.bot`. Detailed documentation can be found at [`https://platform.bot/llm-api.txt`](https://platform.bot/llm-api.txt).

## Pricing

### Machine Pricing (when running)

| Configuration           | Price per second |
|-------------------------|------------------|
| shared-cpu-1x, 1 GB RAM | $0.0000050       |
| shared-cpu-2x, 2 GB RAM | $0.0000134       |
| shared-cpu-4x, 4 GB RAM | $0.0000268       |
| shared-cpu-8x, 8 GB RAM | $0.0000536       |

### Local Storage Pricing

- NVMe cache, user code, and image size: $0.15 per GB/month

### Object Storage Pricing

- Storage: $0.021 per GB/month
- Requests: included in pricing
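As a rough illustration of the per-second rates: a shared-cpu-1x app that runs continuously for a 30-day month accrues about 2,592,000 seconds × $0.0000050 ≈ $12.96 in machine time, plus storage at the rates above; an app that spends most of its time sleeping pays only the storage portion.

Finally, here is a minimal sketch of the deploy paths described under "Deploy User App Code". The subdomain `my-app`, the remote and branch names, and the absence of any authentication flags are illustrative assumptions; see the REST API documentation linked above for the authoritative interface.

```bash
# Deploy by pushing a git repository to the app's endpoint
# (remote and branch names are illustrative).
git remote add platform https://my-app.platform.bot
git push platform main

# Or upload a prebuilt zip archive of the app with an HTTP PUT.
curl -X PUT --data-binary @app.zip https://my-app.platform.bot

# Once deployed, the app (listening on port 8000 inside its VM) is served at:
curl https://my-app.platform.bot/
```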