d-Matrix Unveils JetStream: A Game-Changing AI Network Accelerator for Ultra-Low-Latency Inference

d-Matrix introduces JetStream, a custom network card designed for high-speed, ultra-low-latency AI inference in data centers, aiming to revolutionize rack-scale AI computing.

d-Matrix Introduces JetStream: A New Era in AI Network Acceleration

d-Matrix, an AI chip startup, has unveiled its latest innovation in artificial intelligence computing: the JetStream I/O card. This custom network card is designed to revolutionize rack-scale AI by enabling larger models to be distributed across multiple servers or racks while minimizing performance bottlenecks.

Source: The Register

Technical Specifications and Performance

JetStream is a PCIe 5.0 network interface card (NIC) with impressive capabilities:

  • Supports two ports at 200 Gb/s or a single port at 400 Gb/s
  • Operates over standard Ethernet
  • Consumes about 150W
  • Achieves network latency of just two microseconds

The JetStream card is designed to work in tandem with d-Matrix's Corsair AI accelerators. Each Corsair card boasts:

  • 2.4 petaFLOPS of compute using the MXINT8 data type
  • 9.6 petaFLOPS when using the lower-precision MXINT4 type
  • 2GB of SRAM with 150 TB/s bandwidth
  • 256GB of LPDDR5 memory with 400 GB/s bandwidth
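The gap between the two memory tiers is easier to appreciate with a back-of-envelope calculation. The sketch below uses only the bandwidth figures quoted above (it is illustrative arithmetic, not vendor-published timing data) to compare how long each tier takes to stream the same 2 GB:

```python
# Back-of-envelope: time for each Corsair memory tier to stream a given
# amount of data, using the bandwidth figures quoted in the article.

def stream_time_us(bytes_moved, bandwidth_bytes_per_s):
    """Time in microseconds to move `bytes_moved` at the given bandwidth."""
    return bytes_moved * 1e6 / bandwidth_bytes_per_s

GB = 1e9
print(stream_time_us(2 * GB, 150e12))  # full 2 GB SRAM sweep: ~13.3 us
print(stream_time_us(2 * GB, 400e9))   # same 2 GB from LPDDR5: 5000 us
```

At these figures the SRAM tier is roughly 375 times faster than the LPDDR5 tier, which is why keeping hot data on-chip matters so much for inference latency.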

Innovative Memory Hierarchy

d-Matrix's approach to memory management is unique, pairing fast SRAM with higher-capacity LPDDR5 memory. This strategy allows flexibility in balancing speed and cost, depending on customer requirements.
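The general idea behind such a two-tier hierarchy can be sketched as a placement policy: hot tensors go into the small fast tier, and everything else spills to the large slow tier. The greedy scheme below is a minimal illustration of that idea, assuming the caller orders tensors hottest-first; it is not d-Matrix's actual allocator.

```python
# Illustrative two-tier placement (not d-Matrix's real scheduler):
# fill the small fast SRAM tier first, spill the rest to LPDDR5,
# mirroring the Corsair card's 2 GB / 256 GB split.

SRAM_BYTES = 2 * 1024**3       # 2 GB on-chip SRAM per Corsair card
LPDDR5_BYTES = 256 * 1024**3   # 256 GB LPDDR5 per Corsair card

def place_tensors(tensor_sizes):
    """Assign each tensor to 'sram' or 'lpddr5'.

    tensor_sizes: dict of name -> size in bytes, assumed to be ordered
    by access frequency (hottest first) by the caller.
    """
    placement, sram_used, dram_used = {}, 0, 0
    for name, size in tensor_sizes.items():
        if sram_used + size <= SRAM_BYTES:
            placement[name] = "sram"
            sram_used += size
        elif dram_used + size <= LPDDR5_BYTES:
            placement[name] = "lpddr5"
            dram_used += size
        else:
            raise MemoryError(f"{name} does not fit on one card")
    return placement
```

A frequently reused buffer such as a KV cache would land in SRAM under this policy, while bulk model weights fall through to LPDDR5.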

Scaling AI Models

The JetStream and Corsair combination enables:

  • Running multi-trillion parameter models at 4-bit precision
  • Models up to about 200 billion parameters at MXINT4 precision in a single rack
  • Larger models when scaling across multiple racks
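The figures above are easy to sanity-check with raw-weight arithmetic. The helper below counts only weight storage at a given bit width, ignoring the per-block scale factors that MX formats also store, as well as KV cache and activations:

```python
# Sanity check of the article's capacity claims: raw weight storage
# at a given precision (ignores MX scale factors, KV cache, activations).

def weight_footprint_gb(num_params, bits_per_weight):
    """Raw weight storage in GB (1 GB = 1e9 bytes)."""
    return num_params * bits_per_weight / 8 / 1e9

print(weight_footprint_gb(200e9, 4))  # 200B params at 4-bit: 100.0 GB
print(weight_footprint_gb(2e12, 4))   # 2T params at 4-bit: 1000.0 GB
```

A 200-billion-parameter model at 4 bits per weight needs about 100 GB for weights alone, which fits comfortably within one rack's memory; multi-trillion-parameter models push into the terabyte range and motivate scaling across racks.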

Parallelism and Performance

d-Matrix utilizes a combination of tensor, expert, data, and pipeline parallelism to maximize performance in rack-scale compute clusters. This approach creates an "inference assembly line," in which the model is processed in sequence across multiple nodes.
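The assembly-line idea can be sketched as a chain of stage functions, each standing in for one node running its shard of the model's layers. The names and stage logic below are illustrative only, not d-Matrix APIs:

```python
# Toy model of pipeline parallelism: a request flows through stages
# in sequence, each stage standing in for one node's slice of layers.

def make_stage(offset):
    # Stand-in for one node's shard of transformer layers.
    return lambda x: x + offset

def pipeline_run(stages, request):
    """Push one request through every stage in order."""
    for stage in stages:
        request = stage(request)
    return request

stages = [make_stage(i) for i in range(4)]  # a 4-node pipeline
print(pipeline_run(stages, 0))  # 0 + 0 + 1 + 2 + 3 = 6
```

In a real deployment many requests are in flight at once, so every stage stays busy, much like stations on a factory line, which is what makes the pipeline arrangement efficient despite the sequential flow.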

Competitive Advantage

According to d-Matrix, the JetStream-Corsair combination offers significant advantages over GPU-based solutions:

  • Up to 10 times the speed
  • Triple the cost-performance
  • Triple the energy efficiency

Market Position and Future Developments

Source: SiliconANGLE

While d-Matrix's current offering positions it well in the AI acceleration market, the company is already looking ahead:

  • The next-gen Raptor family of accelerators will feature 3D stacked SRAM and an integrated electrical I/O chiplet
  • Future plans include transitioning to an optical I/O chiplet for even greater scalability

Availability and Industry Impact

JetStream is currently sampling to customers, with production expected to ramp up before the end of the year. As AI models continue to grow in size and complexity, solutions like JetStream could play a crucial role in enabling more efficient and powerful AI infrastructure.

The introduction of JetStream comes at a critical time, as the AI industry moves toward multimodal capabilities and demands faster interactivity. d-Matrix's innovation addresses key challenges in AI scalability and performance, potentially reshaping the landscape of AI computing infrastructure.
