Vercel vs Cloudflare vs Fly.io: Pricing, Performance, and Developer Experience Compared
Jump to: Pricing · Performance Benchmarks · Developer Experience
Pricing Sections: Cloudflare · Vercel · Fly.io
See sandbox creation in action (2x speed)
Introduction
This year, I went deep into sandbox providers: environments where you can run untrusted code safely, spin up VMs in seconds, and tear them down just as fast.
How did I get here?
It started with my obsession with agentic coding. After building a handful of startups, I wanted to push the limits and see how far automation could go.
But being bootstrapped changes how you think. Every token, every VM second matters. I wasn't just testing infrastructure; I was hunting for the setup that could power my vision without burning cash.
And here's the truth I ran into: The hardest problem isn't tech. It's cost.
How much value can I pack into $20/month and still make it sustainable?
That's the question that has kept me up for months, and it's the only thing standing between me and launch. Maybe all roads really lead to Mr. Wonderful.
Article challenge: Each platform prices on a different set of factors (active CPU time tracking, disk, instance size, network, etc.). I've done my best to digest these differences and create the closest thing to an apples-to-apples comparison. Note: Vercel's pricing listed assumes the CPU is active for the entire lifespan of the sandbox. Always reference and understand your selected provider's pricing model, as pricing can change over time.
Pricing
Quick comparison:
| Provider | vCPU-hr | Memory-hr | Disk | Network | Notes |
|---|---|---|---|---|---|
| Vercel | $0.128 | $0.0106/GB | — | $0.15/GB | Active CPU tracking |
| Cloudflare | $0.072 | $0.009/GiB | $0.000252/GB | $0.025–0.05/GB | — |
| Fly.io | — | — | — | $0.02–0.12/GB | Instance-based pricing |
Note: Fly.io uses an instance-based pricing model, where RAM and CPU are bundled together. See GPU Pricing
Apples-to-apples comparison: Similar 4-vCPU instances
| Platform | Instance Type | vCPU | RAM | Disk | Estimated Cost/Hour | 
|---|---|---|---|---|---|
| Vercel | 4cpu-8gb | 4 | 8 GB | — | ≈ $0.597/hr (assumes active CPU for entire lifespan) |
| Cloudflare | standard-4 (4cpu/12gb*) | 4 | 12 GB | 20 GB | ≈ $0.401/hr |
| Fly.io (iad) | performance-4x | 4 | 8 GB | — | $0.172/hr |
Cloudflare
To get started you will need a Workers Paid plan at $5/month.
This is the container pricing; note that you are also billed for Worker requests, Durable Objects, and Worker logs, which I'm treating as insignificant to the core focus of this article.
Cloudflare's instance types are the outlier here: the other platforms use roughly a 2:1 RAM-to-vCPU ratio, so unfortunately we can't compare exactly apples to apples.
Pricing breakdown:
| CPU (vCPU-hour) | Memory (GiB-hour) | Disk (GB-hour) | 
|---|---|---|
| $0.072 | $0.009 | $0.000252 | 
Note: Cloudflare lists pricing in per-second rates. I converted them to hourly using the formula: (per-second rate × 3,600)
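That conversion is easy to sanity-check in code. The per-second rates below are my reading of Cloudflare's pricing page at the time of writing; verify them against the current docs before relying on them.

```javascript
// Cloudflare lists per-second rates; the hourly figures in this section
// are derived by multiplying by 3,600. Rates below are my reading of the
// pricing page at the time of writing -- double-check before relying on them.
const perSecond = {
  cpu: 0.000020,      // $ per vCPU-second
  memory: 0.0000025,  // $ per GiB-second
  disk: 0.00000007,   // $ per GB-second
};

const toHourly = (rate) => rate * 3600;

console.log(toHourly(perSecond.cpu));    // ≈ 0.072
console.log(toHourly(perSecond.memory)); // ≈ 0.009
console.log(toHourly(perSecond.disk));   // ≈ 0.000252
```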
Included with Workers Paid ($5/month):
- Memory: 25 GiB-hours/month
- CPU: 375 vCPU-minutes/month
- Disk: 200 GB-hours/month
| Instance Type | vCPU | Memory | Disk | Estimated Cost / Hour | 
|---|---|---|---|---|
| lite | 1/16 | 0.25 GiB | 2 GB | ≈ $0.007/hr |
| basic | 1/4 | 1 GiB | 4 GB | ≈ $0.028/hr |
| standard-1 | 1/2 | 4 GiB | 8 GB | ≈ $0.074/hr |
| standard-2 | 1 | 6 GiB | 12 GB | ≈ $0.129/hr |
| standard-3 | 2 | 8 GiB | 16 GB | ≈ $0.220/hr |
| standard-4 | 4 | 12 GiB | 20 GB | ≈ $0.401/hr |
```javascript
// This is just so I can paste it into Chrome to verify the math (standard-1):
0.072 * 0.5 + 0.009 * 4 + 0.000252 * 8
```
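Generalizing that arithmetic: an instance's hourly cost is each resource multiplied by its hourly rate, summed. A small sketch using the rates from the table above (this ignores Worker requests, Durable Objects, logs, and the included allotments):

```javascript
// Estimated hourly cost of a Cloudflare container instance.
// Rates are the hourly figures from the pricing table above; this ignores
// Worker requests, Durable Objects, logs, and the plan's included allotments.
function cloudflareHourly(vcpu, memGiB, diskGB) {
  return 0.072 * vcpu + 0.009 * memGiB + 0.000252 * diskGB;
}

console.log(cloudflareHourly(0.5, 4, 8).toFixed(3)); // "0.074" (standard-1)
console.log(cloudflareHourly(4, 12, 20).toFixed(3)); // "0.401" (standard-4)
```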
Network pricing:
- North America & Europe: $0.025/GB (1 TB included)
- Oceania, Korea, Taiwan: $0.05/GB (500 GB included)
- Elsewhere: $0.04/GB (500 GB included)
Vercel
You'll need a Pro or Enterprise plan to scale; Pro starts at $20/month. You can test drive for free on the Hobby plan with its included allotment.
Each sandbox can use a maximum of 8 vCPUs, with 2 GB of memory allocated per vCPU.
Sandboxes can have up to 4 open ports.
💡 Important: Active CPU Tracking
Vercel tracks sandbox usage by Active CPU: the actual CPU time your code consumes, measured in milliseconds.
Waiting for I/O operations (e.g., calling AI models, database queries, external APIs) does not count towards Active CPU. This means you're only billed for compute time, not idle waiting time.
Hot take: This can be hard to predict.
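To see why it's hard to predict, here's a rough cost model. I'm assuming, per my reading of the pricing, that memory is billed for the sandbox's full lifespan while CPU is billed only for active time; the `activeFraction` input is exactly the part you can't easily know in advance.

```javascript
// Rough Vercel sandbox cost model (my reading of the pricing):
// memory is billed for the full lifespan, CPU only for *active* time.
// `activeFraction` -- how busy the CPU actually is -- is the hard-to-predict part.
function vercelCost(vcpu, memGB, wallHours, activeFraction) {
  const cpu = 0.128 * vcpu * wallHours * activeFraction; // active vCPU-hours
  const mem = 0.0106 * memGB * wallHours;                // GB-hours
  return cpu + mem;
}

// 4cpu-8gb for 1 hour, 100% active (matches the ~$0.597/hr table figure):
console.log(vercelCost(4, 8, 1, 1.0).toFixed(3)); // "0.597"
// Same sandbox mostly waiting on I/O (10% active CPU):
console.log(vercelCost(4, 8, 1, 0.1).toFixed(3)); // "0.136"
```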
Pricing breakdown:
| vCPU-hr | GB-hr | Network | Sandbox creations | 
|---|---|---|---|
| $0.128 | $0.0106 | $0.15/GB | $0.60 per 1M | 
| Instance Type | vCPU | Memory | Estimated Cost / Hour | 
|---|---|---|---|
| 2cpu-4gb | 2 | 4 GB | ≈ $0.298/hr |
| 4cpu-8gb | 4 | 8 GB | ≈ $0.597/hr |
| 8cpu-16gb | 8 | 16 GB | ≈ $1.194/hr |
```javascript
// This is just so I can paste it into Chrome to verify the math (2cpu-4gb):
0.128 * 2 + 0.0106 * 4
```
Network: $0.15/GB
Included allotment:
- CPU: 5 hours
- Memory: 420 GB-hours
- Sandbox creations: 5,000
Additional costs:
- $0.60 per 1M sandbox creations
⚠️ Limitation: Availability zone is limited to iad1.
Fly.io
Fly.io bills by the second with no minimum commitment.
📍 Note: Pricing shown is for Ashburn, Virginia (US), the iad region. Other regions may vary.
| Instance Type | vCPU (Performance) | Memory | Cost / Second | Cost / Hour | Cost / Month* | 
|---|---|---|---|---|---|
| performance-2x | 2 | 4 GB | $0.00002392 | $0.0861 | $62.00 | 
| performance-4x | 4 | 8 GB | $0.00004784 | $0.1722 | $124.00 | 
| performance-8x | 8 | 16 GB | $0.00009568 | $0.3444 | $248.00 | 
| performance-16x | 16 | 32 GB | $0.00019136 | $0.6889 | $496.01 | 
*Based on 720 hours (30 days)
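The hourly and monthly columns are straightforward to derive from the per-second rate:

```javascript
// Fly.io bills per second; the hourly and monthly figures are derived.
const perSecond = 0.00004784;    // performance-4x in iad, $/second
const hourly = perSecond * 3600; // ≈ $0.172/hr
const monthly = hourly * 720;    // ≈ $124/mo (720 hours = 30 days)

console.log(hourly.toFixed(4));  // "0.1722"
console.log(monthly.toFixed(2)); // "124.00"
```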
💰 Savings Tip: Fly.io offers reservation pricing: pay for one year of usage upfront and get a 40% discount on that usage over the year. This can significantly reduce costs for predictable, long-running workloads. Fly.io also offers shared-resource VMs for cost-sensitive workloads; in my experience, performance VMs are the way to go.
Related pricing mentions:
- Wildcard certificates: $1/month
- Dedicated IPv4 addresses are $2/mo.
Performance Benchmarks
To provide a real-world comparison, I measured the time to start a Next.js application on each platform. This includes spinning up the sandbox and getting the Next.js dev server ready to accept requests.
Test Configuration
- Application: Next.js 16 development server
- Test Method: Time from sandbox creation until the dev command is started.
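For transparency, the measurement was essentially this shape. The two callbacks are hypothetical placeholders; each provider's SDK has its own create and run calls.

```javascript
// Shape of the benchmark: seconds from "create sandbox" until the dev
// command has been started. `createSandbox` and `startDevServer` are
// hypothetical placeholders -- each provider's SDK differs.
async function benchmark(createSandbox, startDevServer) {
  const t0 = performance.now();
  const sandbox = await createSandbox();
  await startDevServer(sandbox);
  return (performance.now() - t0) / 1000; // seconds
}
```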
Results
| Platform | Startup Time | vCPU | RAM | Instance Type | 
|---|---|---|---|---|
| Cloudflare | 14.085s | 4 | 12 GB | standard-4 | 
| Vercel | 14.244s | 4 | 8 GB | 4cpu-8gb | 
| Fly.io | 27.124s | 4 (performance) | 8 GB | performance-4x | 
Key Observations
Cloudflare & Vercel: impressed with startup performance; there's virtually no noticeable difference between the two.
Fly.io: takes approximately 2x longer to start. We create an app for isolation, wait for the machine to become available, then set up the project, install dependencies, and start the dev server. This could be sped up by pre-warming the machine, but in my case I wanted full spin-up and complete tear-down, so the comparison isn't entirely fair to Fly. I haven't built out a pre-warmed machine yet; if I do, I'll update this article.
Developer Experience
Please note this is my take; I'm a fan of all three platforms, and each is great in its own way.
Vercel
DX Rating: 5/5
First time setup took the least amount of time. Easy to use SDK and great documentation.
Cons: Uses an sb-******.vercel.run domain; no option for secure Preview URLs.
Pros: Handles up to 4 ports dynamically out of the box.
Fly.io
DX Rating: 4/5
First-time setup took the longest, approximately 3 days, though much of that was me learning the platform. You create a router/proxy app that routes to the correct machine, then deploy a base image for your sandbox machines. By my third round of setup it took 3 hours, with on-demand spin-up and tear-down like Cloudflare and Vercel.
What I like about Fly.io:
I do like the predictable billing model here.
Fly.io also provides a lot of flexibility around auto-stopping/suspending and auto-starting machines, including wake-on-demand via the proxy.
Where I see improvements:
Fly.io feels ahead of its time in terms of infra capabilities. If there were a simple abstraction/SDK around their infra that handled the proxy, an API for running commands, auto wake-up and suspend, pre-warm controls (keep x warm but suspended), secure Preview URLs, and automatic API connect-token handling, the DX would be 5/5. Perhaps I'll end up building an SDK.
My config looks like this:
- Router app: an nginx reverse proxy (port 80) that routes to the correct machine.
- Machine app: an nginx proxy (port 80), created dynamically from the user-specified ports, that internally proxies each request to the user's desired port (e.g., 3000, 3001, 3002).
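The dynamic part of the machine app can be as simple as templating one nginx `server` block per requested port. A sketch; the `${p}.*` server_name convention is purely illustrative, not my exact config:

```javascript
// Sketch: generate one nginx `server` block per user-requested port.
// The hostname convention (`3000.*` etc.) is illustrative only.
function nginxServers(ports) {
  return ports
    .map(
      (p) => `server {
  listen 80;
  server_name ${p}.*;
  location / { proxy_pass http://127.0.0.1:${p}; }
}`
    )
    .join("\n");
}

console.log(nginxServers([3000, 3001]));
```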
Cons: Handling multiple ports dynamically requires more complex configuration.
Cloudflare
DX Rating: 4/5
The Cloudflare docs are heavily focused on one container instance per Worker. In my case I needed multiple instances per Worker. Cloudflare caps containers per Worker based on resources, so you have to build out multiple Workers and route based on available capacity rather than Cloudflare handling this internally.
First time setup took about 6 hours. Note: I'm relatively new to the Cloudflare & Worker ecosystem.
Mental model of the architecture:
- Worker = Backend API/Controller (handles requests and orchestrates containers & proxy)
- Container Image = Docker Image (the blueprint/template)
- Container Instance = Running sandbox (spawned from the image)
Noted from CF: While in open beta, the following limits are currently in effect:
Per worker limit table:
| Instance Type | Specs (vCPU / RAM) | Max Concurrent | Limit Hit First | 
|---|---|---|---|
| lite | 1/16 vCPU / 0.25 GiB | 1600 | Memory (400 GiB) | 
| basic | 1/4 vCPU / 1 GiB | 400 | Memory (400 GiB) | 
| standard-1 | 1/2 vCPU / 4 GiB | 100 | Memory (400 GiB) | 
| standard-2 | 1 vCPU / 6 GiB | 66 | Memory (400 GiB) | 
| standard-3 | 2 vCPU / 8 GiB | 50 | Memory = CPU (400 GiB = 100 vCPU) | 
| standard-4 | 4 vCPU / 12 GiB | 25 | CPU (100 vCPU) | 
Important consideration:
You have to create a Worker: you can't call getSandbox() from your app if it's hosted outside of Cloudflare Workers. So creating a sandbox goes from my backend (auth, rate-limit checks, usage tracking, etc.) to the Worker API. You'll have to build out your own auth and communication layer between your backend and the Worker when spawning a sandbox. For production Preview URLs, you'll need a wildcard DNS record that points at your Worker, and you'll route requests via proxyToSandbox() inside that Worker to reach the correct container.
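That backend-to-Worker auth layer can start as simply as a shared-secret header check. A minimal sketch; the header name and scheme here are hypothetical placeholders, so use whatever fits your stack:

```javascript
// Minimal shared-secret check between your backend and the Worker.
// The header name ("x-sandbox-token") and scheme are hypothetical
// placeholders -- swap in whatever auth fits your stack.
function isAuthorized(headers, secret) {
  // `headers` is anything Map-like with a .get(name) method,
  // e.g. a fetch Headers object inside the Worker.
  return headers.get("x-sandbox-token") === secret;
}
```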
Pro – Built-in Security:
One standout feature is Cloudflare's built-in security for exposed ports. When you expose a port, Cloudflare automatically generates a unique access token embedded in the URL (e.g., https://8080-sandbox-abc123token.example.com). This provides:
- Token-based access control: each port gets its own unique, randomly generated token
- Automatic HTTPS/TLS: all traffic is encrypted by default in production
- Unpredictable URLs: tokens are difficult to guess, adding a layer of security
- Request routing via proxyToSandbox(): clean routing in your Worker's fetch handler
This is particularly valuable for sandboxes since you don't need to build your own authentication layer for Preview URLs - it's handled out of the box. You can still add application-level auth for additional security, but the baseline protection is already there.
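Since the port and token are encoded in the hostname, you can pull them back out if you need to display or log them. A small helper, purely illustrative and based only on the URL shape shown above:

```javascript
// Illustrative only: split a Cloudflare preview hostname like
// "8080-sandbox-abc123token.example.com" into its port and sandbox id,
// based on the URL shape shown in this article.
function parsePreviewUrl(url) {
  const [firstLabel] = new URL(url).hostname.split(".");
  const [port, ...rest] = firstLabel.split("-");
  return { port: Number(port), sandboxId: rest.join("-") };
}

console.log(parsePreviewUrl("https://8080-sandbox-abc123token.example.com"));
// → { port: 8080, sandboxId: 'sandbox-abc123token' }
```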
Cons – Port 3000 Limitation: Port 3000 is reserved for Cloudflare's internal Bun service. Since 3000 is a popular port for local development, supporting it is a key part of good customer DX. I understand Bun uses it internally, but it feels like Cloudflare is prioritizing internal DX over customer DX. Perhaps Cloudflare could have chosen another port.
Note: This perspective comes from an outside developer who isn't aware of Cloudflare's internal systems or the complexity that choosing a different port might introduce.
Pros: Can open multiple ports dynamically out of the box. Cloudflare's docs demonstrate exposing multiple ports without publishing a numeric maximum; in my testing, I was able to open five ports versus Vercel's four-port limit.
Reference: Cloudflare Containers Limits
Unexplored Options
Other sandbox providers worth exploring:
- E2B – code interpreter sandbox environments
- CodeSandbox – browser-based development environments
What I Plan on Rolling With for Now
All three providers are excellent working solutions. Due to other pressing priorities, building out the availability layer right now isn't where I should be focusing my time.
Have you tried any of these platforms? What has your experience been? Feel free to reach out with questions or share your story and thoughts!
What's Next for Me?
Well, once I get my hands on a bare metal machine, I want to experiment with my own Firecracker setup, primarily due to cost savings. I'm still faced with the same challenge: how do I provide sandboxes and AI credits in a $20/month subscription while offering attractive pricing? Maybe I'll apply to the E2B startup program.
Thanks and a shout-out to Vercel, Cloudflare, and Fly.io for their excellent work! I'm blessed to be able to work with these platforms, learn from them, and share my thoughts and experiences with you.
Thanks for reading! DMs open and open to work. @jerrickhakim.
