Apex is a self-hosted ML platform for small AI teams. One pip install, full job queue, browser-native VS Code, real-time GPU monitoring — all running on the workstation you already own.
Compare the cost of a full month of training compute, running 24/7.
No Kubernetes, no cloud bill, no DevOps engineer required. Just the things that matter.

Jobs today, GPU hours used, queue depth, success rate — at a glance. Updates in real time via server-sent events.
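Under the hood, those dashboard numbers arrive as a plain `text/event-stream`. A minimal sketch of parsing that wire format on the client side (the event name and JSON payload below are illustrative, not Apex's actual schema):

```python
def parse_sse(lines):
    """Parse an iterable of text lines from an SSE stream into
    (event, data) tuples. Per the SSE wire format, 'data:' lines
    accumulate until a blank line terminates the event."""
    event, data = "message", []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and data:
            yield event, "\n".join(data)
            event, data = "message", []

# Example: the kind of payload a stats dashboard might push
stream = [
    "event: stats",
    'data: {"jobs_today": 4, "queue_depth": 1}',
    "",
]
for ev, payload in parse_sse(stream):
    print(ev, payload)  # -> stats {"jobs_today": 4, "queue_depth": 1}
```

The same parser works whether the stream carries stats ticks or job-state changes; only the JSON payload differs.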

Click any running job to attach a WebSocket log stream. See loss curves, step counts, and OOM warnings as they happen.
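A log stream like that is just lines of text; the useful part is picking out metrics and failures as they scroll by. A hedged sketch of one way to classify streamed lines (the regex and the OOM markers are assumptions for illustration, not Apex's actual parser):

```python
import re

# Matches lines like "step 1200 | loss=0.4312" (illustrative format)
LOSS_RE = re.compile(r"step\s+(\d+).*?loss[=:\s]+([0-9.]+)", re.IGNORECASE)

def scan_log_line(line):
    """Classify one streamed log line: ('oom', line) for
    out-of-memory warnings, ('metric', (step, loss)) for training
    progress, ('other', line) for everything else."""
    if "CUDA out of memory" in line or "OOM" in line:
        return ("oom", line)
    m = LOSS_RE.search(line)
    if m:
        return ("metric", (int(m.group(1)), float(m.group(2))))
    return ("other", line)

print(scan_log_line("step 1200 | loss=0.4312"))
# -> ('metric', (1200, 0.4312))
```

Feeding each WebSocket frame through a classifier like this is enough to drive a live loss curve and surface OOM warnings the moment they appear.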

Launch a code-server container on a free port with your workspace pre-mounted. Full VS Code, no install, GPU attached.

pynvml under the hood. GPU util, VRAM used/total, temperature, power draw, CPU, RAM — every 2 seconds, in the topbar and on the metrics page.
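As a sketch of what a 2-second sampler like this can look like (field names are illustrative; the pynvml calls are the library's standard ones, and the GPU half degrades to `None` on machines without an NVIDIA driver):

```python
import psutil

def sample_host():
    """CPU and RAM: the host half of the metrics tick."""
    mem = psutil.virtual_memory()
    return {
        "cpu_pct": psutil.cpu_percent(interval=None),
        "ram_used_gb": round(mem.used / 2**30, 1),
        "ram_total_gb": round(mem.total / 2**30, 1),
    }

def sample_gpu(index=0):
    """GPU util, VRAM, temperature, power via pynvml.
    Returns None when no NVIDIA GPU or driver is present."""
    try:
        import pynvml
        pynvml.nvmlInit()
        h = pynvml.nvmlDeviceGetHandleByIndex(index)
        util = pynvml.nvmlDeviceGetUtilizationRates(h)
        mem = pynvml.nvmlDeviceGetMemoryInfo(h)
        return {
            "gpu_pct": util.gpu,
            "vram_used_gb": round(mem.used / 2**30, 1),
            "vram_total_gb": round(mem.total / 2**30, 1),
            "temp_c": pynvml.nvmlDeviceGetTemperature(
                h, pynvml.NVML_TEMPERATURE_GPU),
            "power_w": pynvml.nvmlDeviceGetPowerUsage(h) / 1000,  # mW -> W
        }
    except Exception:
        return None

print(sample_host())
```

Polling both samplers every 2 seconds and pushing the merged dict over the stats stream is all the topbar needs.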

Sortable table of every job you've ever run. Filter by status, tail logs for any of them, cancel running jobs, remove old ones.

Reads directly from your Docker daemon — no registry push required. Pre-built apex/code-server images for Python and PyTorch-CUDA included.
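A sketch of how launching such a container might look with the Docker SDK for Python (the image tag, mount path, and internal port are illustrative assumptions, not Apex's actual values):

```python
import socket

def free_port():
    """Ask the OS for an unused TCP port, one way a free host
    port for the container could be picked."""
    with socket.socket() as s:
        s.bind(("", 0))
        return s.getsockname()[1]

def launch_workspace(image="apex/code-server:pytorch-cuda",
                     workdir="/srv/workspace"):
    """Run a code-server container from an image already present
    on the local Docker daemon (no registry pull needed), with the
    workspace mounted and all GPUs attached."""
    import docker  # docker-py; requires a running Docker daemon
    port = free_port()
    client = docker.from_env()
    container = client.containers.run(
        image,
        detach=True,
        ports={"8080/tcp": port},  # assumed code-server port inside
        volumes={workdir: {"bind": "/workspace", "mode": "rw"}},
        device_requests=[
            docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])
        ],
    )
    return container, port
```

Because the client talks straight to the daemon socket, any image you've built or pulled locally is immediately launchable.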
No YAML. No helm charts. No IAM roles.
pip install apex — registers the apex CLI. Python 3.10+ and a running Docker daemon are the only prerequisites.
apex start — boots the platform on port 7000, starts the GPU monitor, opens your browser automatically.
Paste a Docker image, an entry script, click Queue job →. Apex pulls, runs, streams logs, and reports success — all on your GPU.
Python 3.10+ · pynvml · psutil · vanilla HTML/CSS/JS
React · Redis · Postgres · RabbitMQ · Kubernetes · Helm · Node.js · webpack · Terraform
One Python process. One pip install. One GPU machine.
All tiers run on your hardware. You're not renting compute, you're not renting seats — you're renting the multi-user bits.