AWS Compute Optimizer is not a bad tool. For engineering teams with a single AWS account, fewer than 200 EC2 instances, and a FinOps practice centered on reading Trusted Advisor recommendations and acting on them, Compute Optimizer is genuinely useful. It requires no setup beyond opt-in, it covers EC2, ECS on Fargate, Lambda, and EBS volumes, and it is free.
The problems emerge at scale and in multi-account environments — which is precisely where the costs are large enough to justify a dedicated cost optimization practice. Understanding Compute Optimizer's specific limitations is useful both for teams evaluating whether they need additional tooling and for teams building on top of Compute Optimizer's recommendations API.
The 14-Day Analysis Window Is Often Too Short
Compute Optimizer analyzes the last 14 days of CloudWatch metrics when generating recommendations. This is the most frequently cited limitation, and it matters for two reasons.
First, 14 days may not capture a workload's full utilization cycle. A month-end batch workload that runs hot for 3 days per month will look idle in any 14-day window that misses those days, generating a downsize recommendation that is incorrect. The analysis window needs to span at least one full utilization cycle for the recommendation to be valid, which for many workloads means 30-90 days, not 14.
Second, 14 days captures recent utilization patterns that may be atypical. A team that just concluded a major load test will have elevated metrics for the past 14 days. A team whose traffic dropped due to a seasonal pattern will have depressed metrics. Both scenarios produce recommendations that are accurate for the observed window but do not reflect the workload's normal operating range.
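A quick sanity check against both failure modes is to compare the trailing 14-day average against a longer baseline before acting on a recommendation. A minimal sketch, assuming CPU datapoints already pulled from CloudWatch; the helper names and the 20% tolerance are illustrative, not part of Compute Optimizer:

```python
from datetime import datetime, timedelta, timezone

def window_average(datapoints, days, now=None):
    """Average of CloudWatch (timestamp, value) datapoints over the
    trailing `days` days."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days)
    values = [v for ts, v in datapoints if ts >= cutoff]
    return sum(values) / len(values) if values else None

def short_window_is_representative(datapoints, tolerance_pct=20.0, now=None):
    """True when the 14-day average sits within `tolerance_pct` percent of
    the 90-day average; a larger gap suggests the window is atypical."""
    short = window_average(datapoints, 14, now)
    baseline = window_average(datapoints, 90, now)
    if short is None or not baseline:
        return None  # not enough history to judge
    return abs(short - baseline) / baseline * 100 <= tolerance_pct
```

The datapoints themselves would come from `cloudwatch.get_metric_statistics(Namespace="AWS/EC2", MetricName="CPUUtilization", ...)` with hourly periods; a `False` result is a signal to defer the recommendation rather than apply it.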
Compute Optimizer's opt-in enhanced infrastructure metrics feature extends the lookback window to 93 days, but it is a paid feature that must be explicitly enabled per resource, account, or organization. Memory metrics additionally require the CloudWatch Agent on every instance. In practice, agent coverage and feature enrollment are inconsistent across large account fleets.
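Enrollment, at least, can be automated per account. A sketch of the request shape for boto3's `compute-optimizer` `put_recommendation_preferences` call; the helper names are ours:

```python
def enhanced_metrics_request(account_id):
    """Kwargs for put_recommendation_preferences that opt one account's EC2
    recommendations into the extended (93-day) lookback. Note this is a
    paid feature, billed per resource."""
    return {
        "resourceType": "Ec2Instance",
        "scope": {"name": "AccountId", "value": account_id},
        "enhancedInfrastructureMetrics": "Active",
    }

def enable_enhanced_metrics(account_id, region="us-east-1"):
    """Apply the preference; run from the management account."""
    import boto3
    client = boto3.client("compute-optimizer", region_name=region)
    client.put_recommendation_preferences(**enhanced_metrics_request(account_id))
```

Looping this over every member account closes the enrollment gap, though the agent-coverage gap for memory remains.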
Memory Utilization: The Missing Dimension
Without the CloudWatch Agent providing memory metrics, Compute Optimizer makes right-sizing recommendations based solely on CPU utilization. For compute-bound workloads, this is reasonable. For memory-bound workloads — JVM applications, in-memory databases, analytics engines, Elasticsearch clusters — CPU-only recommendations are routinely incorrect.
The engineering reality is that getting the CloudWatch Agent deployed uniformly across a large EC2 fleet is a significant project. It requires configuration management tooling, IAM policy updates for each instance profile, and ongoing maintenance as instances are replaced. Many organizations have partial agent coverage — certain environments have it, others do not — and this inconsistency means Compute Optimizer produces different quality recommendations across different parts of the fleet without making this quality difference visible to the user.
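Auditing agent coverage is at least cheap, because instances running the agent publish `mem_used_percent` under the `CWAgent` CloudWatch namespace. A sketch with hypothetical helper names:

```python
def agent_coverage(all_instance_ids, instances_with_memory_metric):
    """Coverage percentage plus the sorted list of instances missing
    memory metrics."""
    fleet = set(all_instance_ids)
    covered = fleet & set(instances_with_memory_metric)
    missing = sorted(fleet - covered)
    pct = 100.0 * len(covered) / len(fleet) if fleet else 0.0
    return pct, missing

def instances_reporting_memory(region="us-east-1"):
    """Instance IDs that currently publish mem_used_percent via the agent."""
    import boto3
    cw = boto3.client("cloudwatch", region_name=region)
    ids = set()
    for page in cw.get_paginator("list_metrics").paginate(
        Namespace="CWAgent", MetricName="mem_used_percent"
    ):
        for metric in page["Metrics"]:
            for dim in metric["Dimensions"]:
                if dim["Name"] == "InstanceId":
                    ids.add(dim["Value"])
    return ids
```

The fleet list would come from `ec2.describe_instances`; the `missing` output is the work queue for the configuration-management project described above.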
When you run Compute Optimizer recommendations without memory data on a fleet that includes r-series instances (memory-optimized), you will see frequent recommendations to downsize r5 instances to m5 equivalents. The CPU utilization data supports the recommendation because memory-intensive workloads often run at low CPU utilization. The recommendation is wrong for the workload, and applying it will cause performance issues that the engineer will correctly identify as a reason not to trust the tool.
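This failure mode is detectable programmatically: the `utilizationMetrics` list in a `get_ec2_instance_recommendations` response only contains a `MEMORY` entry when the agent is reporting. A sketch that flags cross-family downsizes made without memory data; the r-family heuristic and helper name are ours:

```python
def risky_without_memory(recommendations):
    """Flag recommendations that move a memory-optimized (r-family) instance
    to a different family when no MEMORY utilization metric was observed.
    `recommendations` uses the shape of
    get_ec2_instance_recommendations()['instanceRecommendations']."""
    risky = []
    for rec in recommendations:
        current = rec["currentInstanceType"]
        has_memory = any(
            m["name"] == "MEMORY" for m in rec.get("utilizationMetrics", [])
        )
        if has_memory or not current.startswith("r"):
            continue
        top = min(rec["recommendationOptions"], key=lambda o: o["rank"])
        if not top["instanceType"].split(".")[0].startswith("r"):
            risky.append((rec["instanceArn"], current, top["instanceType"]))
    return risky
```

Anything this returns should be held for manual review (or agent deployment) rather than applied.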
Multi-Account Attribution: The Organizational Problem
Compute Optimizer's recommendations are generated per account. In an AWS Organization with 40 accounts, that means 40 separate sets of recommendations, one per account console. Aggregating them into a coherent list ranked by dollar impact requires custom tooling or manual work.
AWS Organizations integration in Compute Optimizer allows a management account to view recommendations for member accounts in a single console, but the cost attribution context is still missing. A recommendation to downsize an r5.4xlarge in account 123456789012 to an r5.2xlarge is worth approximately $400/month. But which team owns that account? Which product does that instance serve? What is the approval process for changes in that account?
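Building that ranked cross-account list from the management account is a modest amount of boto3 code, assuming Organizations integration is enabled; the helper names are ours:

```python
def fetch_org_recommendations(account_ids, region="us-east-1"):
    """Pull EC2 recommendations for member accounts from the management
    account, following nextToken pagination."""
    import boto3
    client = boto3.client("compute-optimizer", region_name=region)
    recs, token = [], None
    while True:
        kwargs = {"accountIds": account_ids}
        if token:
            kwargs["nextToken"] = token
        page = client.get_ec2_instance_recommendations(**kwargs)
        recs.extend(page["instanceRecommendations"])
        token = page.get("nextToken")
        if not token:
            return recs

def rank_by_savings(recommendations):
    """(account, instance ARN, target type, $/month) rows for each
    recommendation's top-ranked option, highest savings first."""
    rows = []
    for rec in recommendations:
        if not rec.get("recommendationOptions"):
            continue
        top = min(rec["recommendationOptions"], key=lambda o: o["rank"])
        savings = (top.get("savingsOpportunity") or {}).get(
            "estimatedMonthlySavings") or {}
        rows.append((rec["accountId"], rec["instanceArn"],
                     top["instanceType"], savings.get("value", 0.0)))
    rows.sort(key=lambda r: r[3], reverse=True)
    return rows
```

What this list still lacks is exactly the attribution context described above: which team, which product, which approval process.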
The gap between "here is a list of right-sizing recommendations sorted by projected savings" and "here is how those recommendations get reviewed, approved, and implemented by the right people" is not a technical gap — it is an organizational workflow gap that Compute Optimizer does not address. Engineering organizations typically need a layer of tooling that maps AWS accounts and resources to teams, projects, and approval owners before right-sizing recommendations can actually be acted on at scale.
The Acceptance Rate Problem
The metric that matters for a cost optimization program is not the dollar value of recommendations generated — it is the dollar value of recommendations implemented. A tool that generates $2M in annual savings recommendations that the engineering team acts on 20% of the time delivers $400K. A tool that generates $1M in recommendations that the team acts on 70% of the time delivers $700K.
Compute Optimizer recommendations are presented with a finding classification, a performance-risk rating per option, and the underlying utilization data, but they do not include workload context, change history, or an integrated approval workflow. The engineer receiving the recommendation must independently evaluate whether it is safe, identify who needs to approve the change, execute the change, and verify the result. This process is high-friction enough that most recommendations sit unactioned for weeks or months.
In our experience, the acceptance gap is the largest single source of unrealized savings in FinOps programs. Teams that integrate right-sizing recommendations directly into their Slack workflows — with approval buttons, projected savings, and one-click rollback — act on recommendations 2-3x more frequently than teams reviewing recommendations in a separate console.
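The Slack side of such a workflow is largely message construction. A sketch of a Block Kit payload carrying the proposal and an approve button; the action ID and helper name are hypothetical, and the payload would be posted via an incoming webhook or `chat.postMessage`, with the button handled by your Slack app's interactivity endpoint:

```python
import json

def rightsizing_slack_message(instance_id, current_type, target_type,
                              monthly_savings,
                              approve_action_id="approve_rightsize"):
    """Block Kit payload: proposal summary plus a primary Approve button
    whose value carries the change details back to the handler."""
    return {
        "blocks": [
            {"type": "section",
             "text": {"type": "mrkdwn",
                      "text": (f"*Right-sizing proposal:* `{instance_id}`\n"
                               f"{current_type} -> {target_type} "
                               f"(~${monthly_savings:,.0f}/month)")}},
            {"type": "actions",
             "elements": [
                 {"type": "button",
                  "action_id": approve_action_id,
                  "style": "primary",
                  "text": {"type": "plain_text", "text": "Approve"},
                  "value": json.dumps({"instance": instance_id,
                                       "target": target_type})},
             ]},
        ]
    }
```

Keeping projected savings and the utilization evidence in the same message as the approval control is what removes the console-switching friction.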
What Compute Optimizer Does Well
It is worth being precise about where Compute Optimizer is the right tool. For Lambda function memory sizing, Compute Optimizer's analysis is excellent. Lambda functions have consistent execution patterns that are well-captured by 14 days of data, memory is the primary cost dimension, and the recommendations are generally actionable without additional context.
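Extracting actionable memory changes from `get_lambda_function_recommendations` is correspondingly simple. A sketch with a hypothetical helper name, operating on the response's `lambdaFunctionRecommendations` list:

```python
def lambda_memory_changes(recommendations):
    """(function ARN, current MB, recommended MB) for each function whose
    top-ranked memory option differs from its current allocation."""
    changes = []
    for rec in recommendations:
        options = rec.get("memorySizeRecommendationOptions", [])
        if not options:
            continue
        best = min(options, key=lambda o: o["rank"])
        if best["memorySize"] != rec["currentMemorySize"]:
            changes.append((rec["functionArn"],
                            rec["currentMemorySize"],
                            best["memorySize"]))
    return changes
```

Each change can then be applied with a single `lambda.update_function_configuration` call per function.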
For EBS volume type optimization — identifying gp2 volumes that should be migrated to gp3, or identifying volumes with consistently low IOPS that are over-provisioned — Compute Optimizer's recommendations are reliable and the implementation is low-risk. Volume type changes can be applied with zero downtime and the cost impact is immediate.
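A gp2-to-gp3 sweep can also be scripted directly against the EC2 API, since `modify_volume` applies online. A sketch; the per-GiB prices are illustrative us-east-1 figures, not authoritative, and the helper names are ours:

```python
def gp3_monthly_savings(size_gib, gp2_price=0.10, gp3_price=0.08):
    """Approximate monthly savings from migrating one volume, using
    illustrative per-GiB-month prices."""
    return size_gib * (gp2_price - gp3_price)

def migrate_gp2_volumes(region="us-east-1", dry_run=True):
    """List gp2 volumes and, when dry_run is False, request an online
    modification to gp3 for each."""
    import boto3
    ec2 = boto3.client("ec2", region_name=region)
    pages = ec2.get_paginator("describe_volumes").paginate(
        Filters=[{"Name": "volume-type", "Values": ["gp2"]}]
    )
    for page in pages:
        for vol in page["Volumes"]:
            print(f"{vol['VolumeId']}: {vol['Size']} GiB, "
                  f"~${gp3_monthly_savings(vol['Size']):.2f}/month")
            if not dry_run:
                ec2.modify_volume(VolumeId=vol["VolumeId"], VolumeType="gp3")
```

One caveat worth checking per volume: gp3's baseline 3,000 IOPS exceeds small gp2 volumes' burst profile, but very large gp2 volumes can carry higher baseline IOPS that gp3 must be explicitly provisioned to match.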
For ECS on Fargate task sizing, Compute Optimizer provides useful CPU and memory utilization analysis. Fargate task definitions often inherit memory allocations from EC2-era configurations that are substantially higher than actual usage, and Compute Optimizer identifies these reliably.
Building on Top of Compute Optimizer
For teams building internal FinOps tooling, the Compute Optimizer API provides programmatic access to recommendations. The most useful approach is to pull recommendations nightly, enrich them with internal cost attribution data (account-to-team mapping, tagging inference for untagged resources), calculate the fully-loaded dollar impact per recommendation, and route them to the appropriate engineering team's Slack channel with the relevant context already attached.
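The enrichment step reduces to a join between each recommendation's `accountId` and an internal ownership map. A sketch with hypothetical data shapes; the ownership map is your own attribution data, not anything AWS provides:

```python
def enrich_and_route(recommendations, account_owners, default_channel="#finops"):
    """Group recommendations by the Slack channel of the owning team.
    `account_owners` maps account ID -> {"team": ..., "channel": ...};
    unmapped accounts fall back to a default triage channel."""
    routed = {}
    for rec in recommendations:
        owner = account_owners.get(rec["accountId"], {})
        channel = owner.get("channel", default_channel)
        routed.setdefault(channel, []).append({
            "instanceArn": rec["instanceArn"],
            "team": owner.get("team", "unattributed"),
            "finding": rec.get("finding"),
        })
    return routed
```

The size of the `unattributed` bucket is itself a useful metric: it measures how far the account-to-team map is from supporting routing at all.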
This approach inherits Compute Optimizer's analytical capabilities while addressing the organizational workflow gap. The 14-day analysis window limitation and the memory data gap remain, but for a large fraction of the recommendation set — compute-bound workloads with consistent utilization patterns — the underlying analysis is accurate enough to act on.
As we describe in our article on why CloudWatch metrics alone don't solve right-sizing, the multi-dimensional analysis gap is real and affects recommendation quality for memory-intensive and network-intensive workloads. For those workload categories, a supplementary analysis layer is necessary to avoid high false-positive rates.
The Practical Takeaway
Compute Optimizer is a starting point for EC2 right-sizing in accounts up to moderate scale with consistent workload patterns and good CloudWatch Agent coverage. It is not a complete FinOps platform and was not designed to be. The gap it leaves — multi-account attribution, organizational workflow, multi-dimensional analysis, approval automation — is the gap that purpose-built cloud cost optimization platforms fill.
Teams evaluating whether to invest beyond Compute Optimizer should ask three questions: What fraction of the recommendations are we actually implementing? Do we have full CloudWatch Agent coverage for memory data on our critical instances? Do we have an account-to-team attribution map that allows us to route recommendations to the right owner? If the answer to any of the three is "no" or "partial," the recommendations are leaving savings on the table regardless of what tool generates them.
Close the gap between recommendations and implementation
KernelRun routes right-sizing proposals to the right team, with full utilization evidence and one-click approval via Slack. Connect your first account in 4 minutes.
Request a Demo