IBM's Co-Packaged Optics Breakthrough Promises to Revolutionize AI Computing

IBM Research unveils co-packaged optics technology that could dramatically improve data center efficiency for AI workloads, potentially reducing energy consumption and accelerating model training times.

IBM's Breakthrough in Co-Packaged Optics

IBM Research has announced a significant advancement in chip assembly and packaging technology called co-packaged optics (CPO). This innovation promises to revolutionize data center efficiency, particularly for AI workloads, by bringing optical link connections inside devices and within data center walls [1].

The Technology Behind Co-Packaged Optics

At the heart of this breakthrough is the polymer optical waveguide (PWG), which IBM claims is the world's first successful implementation for chip-level connectivity. The technology allows a sixfold increase in the number of optical fibers that can be connected at a chip's edge, a measure known as "beachfront density" [1].

The CPO modules use PWGs for high-speed data transmission, replacing traditional copper-based electrical interconnects. IBM has demonstrated optical channels on a 50-micrometer pitch, coupled to silicon photonics waveguides, with connectors that plug into single-mode glass fiber arrays [2].
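
To put the pitch and "beachfront density" figures in perspective, the back-of-envelope sketch below counts how many optical channels fit along one chip edge at a given pitch. Only the 50-micrometer pitch comes from the reported PWG demonstration; the 250-micrometer baseline and the 20 mm edge length are assumptions made purely for illustration, so the resulting ratio is indicative rather than IBM's exact sixfold figure.

```python
# Back-of-envelope "beachfront density" estimate: optical channels per chip edge.
# Assumed figures: 20 mm edge length and a 250-micron conventional fiber-attach
# pitch; only the 50-micron pitch comes from the reported PWG demonstration.

def channels_per_edge(edge_mm: float, pitch_um: float) -> int:
    """Number of optical channels that fit along a chip edge of edge_mm millimeters."""
    return int(edge_mm * 1000 // pitch_um)

edge_mm = 20.0  # assumed chip edge length

conventional = channels_per_edge(edge_mm, pitch_um=250.0)  # assumed conventional pitch
pwg = channels_per_edge(edge_mm, pitch_um=50.0)            # reported PWG pitch

print(f"Conventional attach: {conventional} channels per {edge_mm:.0f} mm edge")
print(f"PWG attach:          {pwg} channels per {edge_mm:.0f} mm edge")
print(f"Improvement:         {pwg / conventional:.1f}x")
```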

Implications for AI and Data Centers

The potential impact of this technology on AI computing is substantial:

  1. Energy Efficiency: CPO reduces energy consumption in data centers by over 80% compared to mid-range electrical interconnects [3].

  2. Faster Model Training: IBM claims this technology could speed up AI model training, potentially cutting training times for large language models (LLMs) from months to weeks [2]; a rough illustration follows this list.

  3. Increased Bandwidth: The technology promises to boost bandwidth between chips by up to 80 times compared to current electrical connections [3].

  4. Extended Reach: CPO extends the reach of interconnect cables from about one meter to hundreds of meters, allowing greater flexibility in data center layouts [2].
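
As a rough illustration of how faster chip-to-chip links can translate into shorter training runs, the toy model below treats one training step as compute time plus non-overlapped communication time and applies the "up to 80 times" bandwidth claim to the communication term. Every number here (per-step compute time, traffic per step, baseline link speed, step count) is an assumption for illustration, not an IBM measurement.

```python
# Toy model: step_time = compute_time + communication_time, with communication
# gated by interconnect bandwidth. All inputs are assumed illustrative values.

def step_time_s(compute_s: float, exchanged_gb: float, bandwidth_gbps: float) -> float:
    """Per-step time when communication is not overlapped with compute (worst case)."""
    return compute_s + exchanged_gb * 8 / bandwidth_gbps  # GB -> Gb, then Gb / (Gb/s)

compute_s = 0.25      # assumed per-step compute time (seconds)
exchanged_gb = 40.0   # assumed gradient/activation traffic per step (GB)
steps = 5_000_000     # assumed total training steps

electrical = step_time_s(compute_s, exchanged_gb, bandwidth_gbps=400.0)     # assumed electrical link
optical = step_time_s(compute_s, exchanged_gb, bandwidth_gbps=400.0 * 80)   # "up to 80x" applied

print(f"Electrical links: ~{electrical * steps / 86400:.0f} days of training")
print(f"Optical links:    ~{optical * steps / 86400:.0f} days of training")
```

Under these assumed numbers the run shrinks from roughly two months to roughly two weeks, which is the shape of the months-to-weeks claim; real workloads overlap communication with compute, so the actual benefit depends heavily on the training setup.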

Real-World Testing and Results

IBM has conducted tests using Meta's Llama 3 70-billion-parameter model in a distributed setup. The results showed that increasing the degree of tensor parallelism can shift throughput by as much as a factor of five [2].
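
Tensor parallelism is demanding on the interconnect because every tensor-parallel transformer layer performs collective operations (all-reduces) whose traffic grows with model width and batch size. The sketch below estimates that traffic for a model shaped roughly like Llama 3 70B; the workload sizes, the fp16 assumption, and the Megatron-style count of two all-reduces per layer are simplifying assumptions for illustration, not details of IBM's test setup.

```python
# Rough estimate of all-reduce traffic per device under tensor parallelism.
# A ring all-reduce moves roughly 2*(tp-1)/tp times the tensor size per device.
# Model shape approximates Llama 3 70B; batch, sequence length, and the
# two-all-reduces-per-layer count are assumptions for illustration.

def allreduce_gb(batch: int, seq_len: int, hidden: int, tp: int) -> float:
    """Approximate all-reduce traffic (GB) per device for one fp16 activation tensor."""
    activation_bytes = batch * seq_len * hidden * 2  # fp16 = 2 bytes per element
    return 2 * (tp - 1) / tp * activation_bytes / 1e9

hidden, layers = 8192, 80   # approximate Llama 3 70B width and depth
batch, seq_len = 8, 4096    # assumed per-step workload

for tp in (2, 4, 8):
    per_layer = 2 * allreduce_gb(batch, seq_len, hidden, tp)  # attention + MLP blocks
    total = per_layer * layers
    print(f"TP={tp}: ~{total:.0f} GB of all-reduce traffic per device per forward pass")
```

The point of the sketch is that per-device traffic rises with the tensor-parallel degree, so the bandwidth of the links between chips increasingly determines the throughput a cluster can sustain.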

The prototypes have demonstrated durability under extreme conditions, including high humidity and temperatures ranging from -40°C to 125°C, passing all stress tests required for manufacturing [3].

Industry Impact and Future Prospects

This development is particularly significant given the exponential growth in demand for AI computing power. Mukesh Khare, GM of IBM's semiconductors division, noted that since the advent of LLMs, compute demand has been doubling every six months, compared with every 20 months previously [4].
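
A quick calculation shows what those doubling times imply when compounded; the two-year horizon below is an arbitrary window chosen only to make the gap concrete.

```python
# Compounded compute demand implied by the quoted doubling times.
# The 24-month horizon is an arbitrary window chosen for illustration.

horizon_months = 24

llm_era = 2 ** (horizon_months / 6)    # doubling every 6 months
pre_llm = 2 ** (horizon_months / 20)   # doubling every 20 months

print(f"Doubling every 6 months:  ~{llm_era:.0f}x growth over {horizon_months} months")
print(f"Doubling every 20 months: ~{pre_llm:.1f}x growth over {horizon_months} months")
```

Over that window, a six-month doubling time compounds to roughly 16x demand versus about 2.3x at the earlier pace, which is why interconnect efficiency has become a pressing concern.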

IBM plans to work with component suppliers to support production quantities of this technology, potentially reshaping the landscape of data center infrastructure and AI computing capabilities [5].
