AWS · Free tool
AWS Lambda cost calculator
The line item that hides in plain sight. Plug in invocations, duration, memory, and architecture, and see the real monthly bill plus the three knobs that actually move it.
Estimated monthly Lambda cost
$0
$0 per year
Invocation cost
$0
50,000,000 calls × $0.20 / 1M
GB-second cost
$0
5,000,000 GB-s × $0.0000133/GB-s
Comparison
If your monthly Lambda bill looks high, check these first
- Functions on x86 that should be on ARM64. A free 20 percent saving for flipping one Terraform flag.
- Functions over-provisioned at 1024 MB or higher when 256 MB would do, especially I/O-bound handlers.
- Functions under-provisioned at 128 MB doing CPU-bound work, paying for 5x more wall-clock time.
- Provisioned concurrency on functions that do not need it. The bill is 24/7 even when traffic is zero.
- Recursive triggers, the classic Lambda → S3 → Lambda loop that quietly bills millions of invocations a day.
Why Lambda surprises teams in audits
Lambda looks free. Invocations are priced at $0.20 per million. The GB-second rate hides behind four leading zeros. Then a single chatty function at 1 GB memory and 200 ms duration, called 200 times per second, runs roughly $1,800 a month and nobody on the team knew.
Three knobs do most of the work: memory tuning, ARM64, and duration. Provisioned Concurrency is a tax most teams pay without needing it. We wrote up the full pattern in the three knobs that matter.
Run this on your real account
Free 14-day audit, read-only IAM role, one-page CFO summary.
We pull your actual Lambda spend, identify functions still on x86, memory mismatches, and recursive trigger patterns. The audit is free, the report is yours.
Frequently asked
How is Lambda priced exactly?
Two components. First, $0.20 per 1 million invocations on x86. Second, GB-seconds: memory in GB multiplied by execution time in seconds, billed at $0.0000166667 per GB-second on x86. ARM64 (Graviton) is 20 percent cheaper on the GB-second rate. The free tier of 1M invocations and 400,000 GB-seconds per month still applies, indefinitely.
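The two components above can be sketched as a small Python function. The rates are the published x86 and ARM64 list prices quoted in this answer; the example invocation count, duration, and memory are illustrative, so verify against current AWS pricing before relying on the output.

```python
# Sketch of the two-part Lambda pricing formula described above.
RATE_PER_INVOCATION = 0.20 / 1_000_000               # $ per call, both architectures
GB_SECOND_RATE = {"x86": 0.0000166667, "arm64": 0.0000133334}  # ARM64 is ~20% cheaper
FREE_INVOCATIONS = 1_000_000                         # perpetual monthly free tier
FREE_GB_SECONDS = 400_000

def monthly_cost(invocations, duration_s, memory_mb, arch="x86", free_tier=True):
    # GB-seconds = memory in GB x execution time in seconds, per invocation
    gb_seconds = invocations * duration_s * (memory_mb / 1024)
    if free_tier:
        invocations = max(0, invocations - FREE_INVOCATIONS)
        gb_seconds = max(0, gb_seconds - FREE_GB_SECONDS)
    return invocations * RATE_PER_INVOCATION + gb_seconds * GB_SECOND_RATE[arch]

# Hypothetical function: 50M calls/month, 200 ms, 512 MB, x86
print(round(monthly_cost(50_000_000, 0.2, 512), 2))  # → 86.47
```

Swapping `arch="arm64"` into the same call shows the 20 percent GB-second discount directly.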
Why does memory tuning matter so much?
Lambda allocates CPU proportional to memory. A function at 128 MB gets a fraction of a vCPU, a function at 1,769 MB gets a full vCPU, and anything above that gets multiple cores. Doubling memory often halves duration on CPU-bound work, leaving total cost unchanged or lower while latency drops by half. The AWS Lambda Power Tuning state machine finds the sweet spot in a few minutes per function.
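The memory-doubling argument is easy to check numerically. A minimal sketch with hypothetical numbers, assuming a CPU-bound handler whose duration really does halve when memory doubles:

```python
# GB-seconds = memory_gb x duration_s, so if doubling memory halves duration,
# the product (and the cost) is unchanged while latency drops in half.
X86_GB_SECOND = 0.0000166667  # published x86 list price

def gb_second_cost(invocations, memory_mb, duration_s):
    return invocations * (memory_mb / 1024) * duration_s * X86_GB_SECOND

before = gb_second_cost(10_000_000, 256, 0.8)  # 256 MB at 800 ms
after = gb_second_cost(10_000_000, 512, 0.4)   # 512 MB at 400 ms
print(before == after)  # same cost, half the wall-clock latency
```

The same arithmetic shows why the opposite case, an under-provisioned 128 MB function paying for stretched wall-clock time, costs more rather than less.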
Should I use ARM64 (Graviton)?
Almost always yes for Python, Node.js, Go, and Rust functions. ARM64 is 20 percent cheaper per GB-second, and performance is equal or better on most workloads. Switch the architecture flag in your Terraform or SAM template, redeploy, and you are done. The only blockers are functions that bundle x86-only native binaries, rare in 2026 but worth checking.
When is Provisioned Concurrency worth it?
Two use cases. One, latency-critical user-facing endpoints where cold starts hurt. Two, very high steady-state traffic where the GB-second savings on warm invocations exceed the provisioned cost. For everything else, on-demand is cheaper. Calculate the break-even: provisioned concurrency cost per hour divided by the GB-second saving per invocation gives you the invocation rate where it pays off.
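That break-even division can be sketched in a few lines. The rates below are the widely published us-east-1 x86 list prices for Provisioned Concurrency (capacity billed around the clock, plus a reduced duration rate on warm invocations); the memory, duration, and unit count are hypothetical, so check current AWS pricing before acting on the result.

```python
# On-demand vs Provisioned Concurrency rates, x86, us-east-1 list prices
ON_DEMAND_GB_S = 0.0000166667    # normal duration rate
PC_DURATION_GB_S = 0.0000097222  # duration rate with PC enabled
PC_CAPACITY_GB_S = 0.0000041667  # charged for provisioned capacity, 24/7

def breakeven_rate_per_hour(memory_gb, duration_s, provisioned_units):
    # Fixed cost of keeping the capacity warm for one hour
    capacity_per_hour = provisioned_units * memory_gb * 3600 * PC_CAPACITY_GB_S
    # Per-invocation saving from the cheaper warm duration rate
    saving_per_invocation = memory_gb * duration_s * (ON_DEMAND_GB_S - PC_DURATION_GB_S)
    # Invocations per hour needed before PC pays for itself
    return capacity_per_hour / saving_per_invocation

# 10 provisioned units, 1 GB, 200 ms average duration
print(round(breakeven_rate_per_hour(1.0, 0.2, 10)))  # → 108000
```

That works out to about 30 invocations per second, sustained, before ten provisioned units of a 1 GB function stop costing you money. Below that rate, on-demand wins.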
Related free tools
Keep going. No email.
AWS · Networking
NAT Gateway cost calculator
The line item nobody budgets for. Hourly cost, data-processing cost, monthly total, and a checklist of the most common NAT cost drivers we catch in audits.
AWS · Commitments
Reserved Instance break-even calculator
Standard or Convertible, 1 or 3 year, every payment option. Monthly savings, break-even month, net return over the term. Defaults you can override with your real EDP rate.
AWS · Commitments
AWS Savings Plan ROI calculator
Plug in your on-demand spend, commit term, and payment option. Get monthly savings, break-even month, and net return over the term. Honest about the assumptions, no email gate.