Akamai Launches Cloud Inference: Revolutionizing AI Performance with Edge Computing

Akamai Technologies introduces Cloud Inference, a distributed AI inference platform promising improved performance, lower latency, and cost savings compared to traditional cloud infrastructures.

Akamai Unveils Cloud Inference: A Game-Changer for AI Performance

Akamai Technologies, a leader in cybersecurity and cloud computing, has launched Akamai Cloud Inference, a groundbreaking service designed to revolutionize AI inference capabilities [1][2]. This new offering leverages Akamai's globally distributed network to address the limitations of centralized cloud models, promising significant improvements in AI application performance and efficiency.

The Power of Distributed AI Inference

Akamai Cloud Inference runs on Akamai Cloud, touted as the world's most distributed cloud infrastructure platform. With over 4,100 points of presence across more than 1,200 networks in over 130 countries, Akamai's platform is uniquely positioned to bring AI inference closer to end users and data sources [1][4].

Adam Karon, Chief Operating Officer and General Manager of Akamai's Cloud Technology Group, explains the significance: "Getting AI data closer to users and devices is hard and it's where legacy clouds struggle. While the heavy lifting of training LLMs will continue to happen in big hyperscale datacenters, the actionable work of inference will take place at the edge" [3].

Key Features and Benefits

Akamai Cloud Inference offers a comprehensive suite of tools for developers and platform engineers:

  1. Compute Resources: A range of options, from CPUs for fine-tuned inference to powerful GPUs and ASICs, integrated with Nvidia's AI Enterprise ecosystem [4].
  2. Data Management: A partnership with VAST Data for a cutting-edge data fabric optimized for AI workloads, complemented by scalable object storage and integration with vector database vendors [4].
  3. Containerization: Leveraging Kubernetes for improved scalability, resilience, and portability of AI workloads [1][4].
  4. Edge Compute: WebAssembly capabilities for executing LLM inference directly from serverless apps at the network edge [4] (a minimal illustration of this pattern follows the list).
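
The Edge Compute and Containerization items above describe a familiar pattern: small, latency-sensitive inference requests served from whichever point of presence sits closest to the caller. The sketch below illustrates that pattern from the client side only; the endpoint URLs, model name, and routing logic are illustrative assumptions, not Akamai's actual API.

```python
# Minimal client-side sketch of edge-based inference with regional fallback.
# All endpoint URLs and the model name are hypothetical placeholders.
import time

import requests

# Hypothetical regional inference endpoints, ordered by expected proximity.
EDGE_ENDPOINTS = [
    "https://inference.eu-west.example.com/v1/generate",
    "https://inference.us-east.example.com/v1/generate",
]


def infer(prompt: str, timeout_s: float = 2.0) -> str:
    """Send the prompt to the first responsive endpoint in the proximity-ordered
    list, falling back to the next one if a region is slow or unavailable."""
    for url in EDGE_ENDPOINTS:
        try:
            start = time.perf_counter()
            resp = requests.post(
                url,
                json={"model": "small-domain-llm", "prompt": prompt},
                timeout=timeout_s,
            )
            resp.raise_for_status()
            latency_ms = (time.perf_counter() - start) * 1000
            print(f"served by {url} in {latency_ms:.0f} ms")
            return resp.json()["text"]
        except requests.RequestException:
            continue  # region unreachable or too slow; try the next one
    raise RuntimeError("no inference endpoint reachable")


if __name__ == "__main__":
    print(infer("Summarize today's anomaly report in one sentence."))
```

Trying a proximity-ordered list is the simplest possible routing policy; a production deployment on a distributed platform would more likely rely on DNS or anycast routing to steer each request to the nearest point of presence automatically.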

Performance and Cost Advantages

The distributed nature of Akamai Cloud Inference translates into tangible benefits:

  • 3x improvement in throughput
  • Up to 2.5x reduction in latency
  • Potential cost savings of up to 86% on AI inference workloads compared to traditional hyperscaler infrastructure [2][4] (see the back-of-the-envelope sketch below)
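
To make these headline figures concrete, the following calculation applies them to an assumed baseline; the baseline latency, cost, and throughput numbers are invented for illustration and do not come from Akamai.

```python
# Back-of-the-envelope arithmetic for the figures quoted above.
# Baseline values are assumptions chosen purely for illustration.
baseline_latency_ms = 250.0      # assumed round trip to a distant cloud region
baseline_cost_per_1k = 0.40      # assumed cost per 1,000 inference requests (USD)
baseline_throughput_rps = 100.0  # assumed requests served per second

edge_latency_ms = baseline_latency_ms / 2.5            # "up to 2.5x reduction in latency"
edge_cost_per_1k = baseline_cost_per_1k * (1 - 0.86)   # "up to 86% cost savings"
edge_throughput_rps = baseline_throughput_rps * 3      # "3x improvement in throughput"

print(f"latency:    {baseline_latency_ms:.0f} ms  -> {edge_latency_ms:.0f} ms")
print(f"cost/1k:    ${baseline_cost_per_1k:.2f}   -> ${edge_cost_per_1k:.3f}")
print(f"throughput: {baseline_throughput_rps:.0f} rps -> {edge_throughput_rps:.0f} rps")
```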

Shifting Focus from Training to Inference

As AI adoption matures, there is a growing recognition that the emphasis on large language models (LLMs) may have overshadowed more practical AI solutions. Akamai's platform caters to this shift, enabling enterprises to leverage lightweight AI models optimized for specific business problems [4].

The Future of Distributed AI

Gartner predicts that by 2025, 75% of data will be generated outside of centralized data centers or cloud regions. This trend underscores the importance of Akamai's approach, which processes data closer to its point of origin [4].

Akamai Cloud Inference represents a significant step towards more efficient, responsive, and cost-effective AI applications. By bringing inference capabilities to the edge of the network, Akamai is positioning itself at the forefront of the next wave of AI innovation, promising to deliver faster, smarter, and more personalized experiences for users across the globe.
