Akamai CFO Edward McGowan detailed the company's "Act Three" transformation, highlighting the acceleration of its Compute business to a $400 million run rate and the launch of the Akamai Inference Cloud. A significant catalyst is a new $200 million, four-year contract with a major tech client for GPU capacity. While this buildout and the broader AI strategy are creating near-term margin pressure due to upfront capital expenditures and colocation costs, McGowan outlined attractive unit economics for GPU deployments. Additionally, Akamai is implementing price increases in its traditional delivery business to offset rising infrastructure costs while seeing triple-digit growth in API security.
Key Takeaways
- Compute Momentum: Akamai’s Cloud Infrastructure Services (CIS) business has reached a $400 million annualized run rate, driven by broad adoption across hundreds of customers rather than reliance on a small number of large accounts.
- Major AI Contract: Akamai secured a $200 million, four-year contract with a major technology customer for its Inference Cloud. Revenue is expected to ramp in the second half of the year after data center preparation is completed.
- GPU Unit Economics: Management highlighted attractive returns on GPU deployments. On the high end, 1,000 GPUs require about $20 million in CapEx, while potential rental revenue could reach $22 million annually at list prices, implying a payback period of 1 to 2 years.
- Margin Compression: Akamai guided 2026 margins to 23%–26%, down from roughly 29% in 2025, reflecting the upfront impact of colocation costs and depreciation tied to its AI infrastructure buildout.
- Security Growth: API security is exiting the year at an over $100 million annualized run rate, growing more than 100% year over year, while penetration remains below 10% of the installed base.
- Delivery Pricing Power: For the first time in its history, Akamai is raising prices on delivery renewals to offset $200 million of additional CapEx driven by higher memory and colocation costs.
- Infrastructure Expansion: Akamai’s Inference Cloud is currently live in 20 locations and is expected to expand to 20–40 locations depending on customer demand.
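The GPU unit economics above reduce to a simple payback calculation. The sketch below uses the figures cited on the call ($20 million CapEx per 1,000 GPUs, roughly $22 million in annual rental revenue at list prices); the utilization and operating-cost parameters are illustrative assumptions, not disclosed figures, included to show how the quoted 1-to-2-year payback range can emerge even though list-price revenue alone would imply payback in under a year.

```python
# Sketch of the GPU payback math; utilization and opex_share are
# illustrative assumptions, not figures from the call.

def payback_years(capex, annual_list_revenue, utilization=1.0, opex_share=0.0):
    """Years to recoup CapEx from net annual rental revenue."""
    net_annual = annual_list_revenue * utilization * (1 - opex_share)
    return capex / net_annual

capex = 20_000_000          # ~$20M for 1,000 GPUs (per the call)
list_revenue = 22_000_000   # ~$22M/yr at list prices (per the call)

# At full utilization and list prices, payback is under one year.
print(round(payback_years(capex, list_revenue), 2))             # ~0.91

# With hypothetical 60% utilization and a 15% operating-cost share,
# payback stretches toward the upper end of the quoted 1-2 year range.
print(round(payback_years(capex, list_revenue, 0.6, 0.15), 2))  # ~1.78
```

Varying the assumed utilization between these bounds spans the 1-to-2-year payback period management cited.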
Q&A
Sanjit K. Singh (Morgan Stanley): Can you lay out the vision for Akamai's "Act Three" in public cloud and edge AI inferencing and why the team is excited about growth in the AI era?
- Edward J. McGowan: The strategy is a natural extension of Akamai’s distributed platform; having acquired Linode, Akamai can now offer enterprise-grade managed container services for customers who need to run proprietary code. The move into Inference Cloud addresses demand for distributed compute that public clouds struggle to serve due to performance issues and high costs, specifically leveraging Akamai’s network to minimize egress fees.
Sanjit K. Singh (Morgan Stanley): Is the acceleration in the Cloud Infrastructure Services business fueled by a handful of customers or by broad-based adoption?
- Edward J. McGowan: The "$400 million run rate" is driven by "hundreds and hundreds of customers," ranging from small accounts to customers spending millions per month, rather than a few mega-deals. Demand is robust across use cases like observability and media workflow, with a pipeline filled by both partners and direct inbound interest.
Sanjit K. Singh (Morgan Stanley): Why do customers choose Linode over hyperscalers, given that the major hyperscalers are also Akamai customers?
- Edward J. McGowan: It is not a zero-sum game; customers choose Akamai for specific performance needs, diversity, and economics—specifically the elimination of egress fees. Because Akamai operates one of the world's largest backbones, the cost of data transfer is "virtually nothing," allowing them to pass significant savings to customers compared to public cloud providers.
Sanjit K. Singh (Morgan Stanley): Regarding the "$200 million four-year deal" with a major tech customer, why did they choose Akamai and what is the timeline for revenue realization?
- Edward J. McGowan: The customer required specific low-latency performance for a GPU cluster that Akamai validated through a proof of concept. While the contract is signed, revenue realization will lag by "three to six months" as Akamai lights up new data center space to support the massive deployment.