Curated by THEOUTPOST
On Tue, 1 Apr, 8:02 AM UTC
5 Sources
[1]
Lightmatter unveils high-performance photonic 'superchip', claims world's fastest AI interconnect
The main challenges facing multi-chiplet processors are the performance and power consumption of inter-chiplet interconnects, which may eventually limit overall performance. Lightmatter, a startup that has worked on optical interconnects for several years, this week introduced a possible solution: its Passage M1000, a high-performance photonic interconnect platform that promises to enable large multi-chiplet processors with optical interconnects supporting bandwidth of up to 114 Tbps (14.25 TB/s).

The Passage M1000 is a multi-reticle, eight-tile active 3D interposer enabling die complexes of 4,000 mm^2, far exceeding the sizes of today's multi-chiplet solutions. The device includes eight connected chip sections in one package and integrates 1,024 serial data channels, each supporting 56 Gbps transmission using a straightforward modulation method. For external connections, the M1000 incorporates 256 fiber-optic lines with eight wavelengths per signal line, each fiber offering 448 Gbps. The Passage M1000 comes in a 7,735 mm^2 package that can deliver up to 1,500 W of power to its chiplets, which is more or less in line with expectations for next-generation AI processors.

In addition to the Passage M1000 -- which can serve as the base for ultra-high-performance multi-chiplet AI processors -- Lightmatter also unveiled its Passage L200, a 3D optical chiplet that replaces traditional copper interconnects with ultra-high-speed photonic links. It offers 32 Tbps (L200) or 64 Tbps (L200X) of total bandwidth, supporting over 200 Tbps per chip package when integrated. Unlike conventional designs limited to edge I/O, the L200 enables edgeless connectivity, allowing data channels to be placed anywhere on the die surface for better performance.
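The headline figures above are simple products of the quoted per-channel rates. A quick back-of-envelope check in Python, using only the numbers from the article:

```python
# Back-of-envelope check of the Passage M1000 figures quoted in the article.
# All inputs are the article's numbers; the totals are simple products.

GBPS_PER_CHANNEL = 56        # per-channel rate, Gb/s
NUM_CHANNELS = 1024          # serial die-to-die data channels
WAVELENGTHS_PER_FIBER = 8    # WDM wavelengths per external fiber
NUM_FIBERS = 256             # external fiber-optic lines

# Internal die-to-die bandwidth: 1,024 channels x 56 Gb/s
internal_tbps = NUM_CHANNELS * GBPS_PER_CHANNEL / 1000        # 57.344 Tb/s

# Each external fiber carries 8 wavelengths x 56 Gb/s = 448 Gb/s, as quoted
per_fiber_gbps = WAVELENGTHS_PER_FIBER * GBPS_PER_CHANNEL     # 448 Gb/s

# 256 fibers x 448 Gb/s = ~114.7 Tb/s, i.e. ~14.3 TB/s -- consistent with
# the article's rounded 114 Tbps / 14.25 TB/s figure
total_tbps = NUM_FIBERS * per_fiber_gbps / 1000
total_tBps = total_tbps / 8

print(internal_tbps, per_fiber_gbps, total_tbps, total_tBps)
```

The internal and external totals differ because the 1,024 on-interposer channels and the 256 external fibers are separate paths; the 114 Tbps headline corresponds to the fiber side.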
The L200 uses chiplet IP from Alphawave with a UCIe die-to-die interface, alongside Lightmatter's photonic circuits, supporting 320 multi-protocol SerDes lanes and 16-wavelength WDM for up to 1.6 Tbps per fiber. "Passage M1000 is a breakthrough achievement in photonics and semiconductor packaging for AI infrastructure," said Nick Harris, founder and CEO of Lightmatter. "We are delivering a cutting-edge photonics roadmap years ahead of industry projections. Shoreline is no longer a limitation for I/O. This is all made possible by our close co-engineering with leading foundry and assembly partners and our supply chain ecosystem." As a fabless chip designer, Lightmatter orders silicon from GlobalFoundries (which uses its Fotonix silicon photonics platform to integrate photonics with CMOS logic) and packaging services from Amkor and ASE. The M1000 will be available in summer 2025, and the L200 in 2026.
[2]
Lightmatter's photonic interposers set to ship this summer
Lightmatter this week unveiled a pair of silicon photonic interconnects designed to satiate the growing demand for chip-to-chip bandwidth associated with ever-denser AI deployments. The first of these is an optical interposer called the Passage M1000, which the California biz expects to begin shipping later this summer, and which targets XPUs -- think GPUs or AI accelerators -- or extremely high-bandwidth multi-die switches on the order of petabits per second of capacity. The tech, which pipes data directly in and out of chips using light, was talked up by Lightmatter at the Optical Fiber Conference, running this week in San Francisco. If any of this sounds familiar, Lightmatter is one of many looking to photonics to overcome power and bandwidth limitations. At Nvidia's GPU Technology Conference, aka GTC, last month, Nv revealed a pair of photonic switches designed to cut down on the number of transceivers necessary to build out large AI clusters. Intel, Broadcom, and Ayar Labs have also demonstrated co-packaged optical I/O functionality with a variety of CPUs and XPUs. What sets Lightmatter's M1000 apart from the rest of the pack is that it's designed to function as an interposer that sits between the compute logic and the substrate. Multiple ASICs or GPU dies can be stacked on top of the Passage tile. All of these layers communicate electrically, and from the interposer, traffic destined for other connected chip packages is transmitted optically over a dense network of waveguides. Traffic headed off the package is routed over any number of the 256 fiber optic attach points which line its edge. One of the biggest advantages of this approach, Lightmatter says, is that communications between chips aren't limited to a so-called beachhead within the processor package. Instead, with interposer designs, data can move vertically over the entire surface area of the chip, resulting in greater aggregate bandwidth.
For its first interposer, Lightmatter is sticking with a combination of 56 Gb/s NRZ modulation and wavelength-division multiplexing with support for eight wavelengths per fiber, which conveniently works out to 448 Gb/s, or 56 GB/s, per fiber. In total, Lightmatter claims each M1000 tile can support up to 14.25 TB/s of aggregate bandwidth. Following the debut of the M1000 interposer later this year, Lightmatter plans to bring a pair of smaller co-packaged optical designs to market in 2026. The Passage L200 and L200X are designed to fill the role of more traditional co-packaged optics and promise 32 Tb/s or 64 Tb/s of bidirectional bandwidth, respectively. For comparison, Ayar Labs' next-gen photonics chips, which we looked at last year, boast up to 8 Tb/s. From what we gather, the main difference between the L200 and L200X is that the former uses 56 Gb/s NRZ while the latter uses 112 Gb/s PAM4 SerDes. Like the M1000, Lightmatter's L200-series parts use the same bandwidth-boosting 3D-packaging approach, and multiple chiplet stacks can be used to support off-package communications at speeds of more than 200 Tb/s. According to Lightmatter, these chips incorporate a variety of technologies from Alphawave Semi, including "low-power and low-latency UCIe and optics ready SerDes." If you're not familiar, UCIe is an emerging interconnect standard, not unlike PCIe or CXL, designed to enable chiplets from multiple vendors to communicate with one another using a common language. ® Speaking of photonic interconnects: The US Defense Advanced Research Projects Agency, aka DARPA, on Tuesday awarded AI chip startup Cerebras Systems and Canadian co-packaged optics vendor Ranovus a $45 million contract. Under the contract, Cerebras will integrate Ranovus' CPO tech into its wafer-scale compute platform in order to support "real-time, high-fidelity simulations" and "large-scale AI workloads," Cerebras CEO Andrew Feldman said in a statement.
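The per-fiber arithmetic and the L200-to-L200X doubling described above reduce to a few lines. A minimal sketch in Python; note the PAM4 explanation for the doubling is our inference from the stated SerDes rates, not a vendor statement:

```python
# Per-fiber figure from the article: 8 WDM wavelengths x 56 Gb/s NRZ.
nrz_gbps = 56
wavelengths = 8
per_fiber_gbps = wavelengths * nrz_gbps   # 448 Gb/s
per_fiber_gBps = per_fiber_gbps / 8       # 56 GB/s, the quoted figure

# PAM4 encodes 2 bits per symbol where NRZ encodes 1, so at the same symbol
# rate the lane speed doubles: 56 Gb/s NRZ -> 112 Gb/s PAM4. That doubling
# matches the step from the L200's 32 Tb/s to the L200X's 64 Tb/s.
pam4_gbps = 2 * nrz_gbps                  # 112 Gb/s
l200_tbps = 32
l200x_tbps = l200_tbps * pam4_gbps // nrz_gbps   # 64 Tb/s

print(per_fiber_gbps, per_fiber_gBps, l200x_tbps)
```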
[3]
Lightmatter releases new photonics technology for AI chips
SAN FRANCISCO, March 31 (Reuters) - Lightmatter, a startup valued at $4.4 billion, on Monday released two pieces of technology aimed at speeding up the connections between artificial intelligence chips. Instead of moving information between computer chips as electrical signals, Lightmatter's technology uses optical connections and what are known as silicon photonics to move the information using light. Mountain View, California-based Lightmatter has raised $850 million in venture funding to date as such optical technologies have kicked off a wave of investments in Silicon Valley amid a search for better ways to string together chips to power chatbots, image generators and other AI applications. AI chip firms like Advanced Micro Devices (AMD.O) have demonstrated the use of optical technologies packaged together with their chips. Nvidia (NVDA.O) earlier this month introduced optical technology in some of its networking chips, though its CEO said the technology is not yet mature enough to use in all of its chips. Lightmatter on Monday introduced two new products that are designed to be packaged together with AI chips. One is called an interposer, a layer of material that the AI chip sits atop to connect to neighboring chips that also sit atop the interposer. The other is a small tile called a "chiplet" that can be placed on top of an AI chip. Lightmatter said its interposer will be released in 2025 and the chiplet in 2026. The interposer is manufactured by GlobalFoundries (GFS.O). Reporting by Stephen Nellis in San Francisco; Editing by Leslie Adler
[4]
Lightmatter turbocharges GPU connectivity with its first photonics-based networking interconnects - SiliconANGLE
The silicon photonics pioneer Lightmatter Inc. says it's ready to spearhead a revolution in data center connectivity with the coming launch of its first 3D co-packaged optics product, Passage L200. The Passage L200, which will be available in 32 Tb/s (L200) and 64 Tb/s (L200X) flavors, is designed to integrate with the latest graphics processing units and networking switch designs to speed up chip-to-chip communications and eliminate the bottlenecks created by today's chip interconnects. It can support massive clusters of thousands of GPUs with unprecedented bandwidth. In addition to its first CPO, Lightmatter also unveiled the Passage M1000 reference platform for a 3D Photonic Superchip, designed for customers to create their own customized GPU interconnects that utilize silicon photonics. It's a "multi-reticle active photonic interposer" that makes it possible for customers to create larger die complexes on silicon wafers to enhance connectivity for large-scale GPU clusters. Lightmatter, which came to prominence last year after raising $400 million in Series D funding, is one of the leading players in the nascent silicon photonics industry, seeking to transform how GPUs and other chips communicate and exchange data with each other. It's aiming to provide much higher bandwidth and lower-latency connections using optical fiber connections, so enterprises can scale their data centers to support more powerful artificial intelligence applications and high-performance computing workloads. In an interview with SiliconANGLE last year, Lightmatter Chief Executive Nick Harris explained that GPUs have increased their processing power in terms of operations per second by more than 1,000 times. Those gains, which took place in less than a decade, mean that even the fastest network interconnects, used to link clusters of GPUs, cannot hope to keep up with the number of computations they can perform.
In the AI industry, it has become essential for companies to link thousands or even tens of thousands of GPUs together, so they can work in concert to power the most advanced large language models. However, with the improvements in GPU processing, the interconnects that support these clusters have become the weakest link. GPUs are constantly having to wait for data to arrive, which means they spend most of their time sitting idle instead of processing information. "If you're one of these GPUs, your life is like: OK, I'm waiting for data from memory, crunch, crunch, crunch, OK, waiting for data from another GPU, crunch, crunch, crunch, and just sitting there," Harris told SiliconANGLE last year. "Only 30% of the time you're doing calculations. So, you've got this Ferrari engine, it's in Manhattan, and there's stoplights everywhere." As a result, the main bottleneck for AI these days is the ability of GPUs to communicate with one another. Solve this and it should be possible to accelerate AI to unprecedented speeds, and that's the promise Lightmatter is making with its new Passage L200 and Passage M1000 3D co-packaged optics. Lightmatter's Passage solves the problem of cumbersome networking interconnects by interposing its ultra-dense optical fiber technology with data center chips to improve bandwidth by as much as 100 times compared to the best solutions in use today. Essentially, it combines its fiber optic interconnects directly into a package of silicon chips. The Passage L200 CPO chips, which are expected to launch next year, and the Passage M1000 reference design, which will be made available this summer, are said to shatter existing interconnect bottlenecks. The more powerful 64 Tb/s L200X enables multiple GPUs to be packaged together on a single chip to provide more than 200 terabits per second of input/output bandwidth, speeding up AI training and inference by more than eight times, the company says.
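Harris's "30% compute, 70% waiting" picture maps onto an Amdahl's-law style estimate. A minimal sketch, where the 30% figure comes from the quote but the speedup factors are illustrative (real gains can differ, since extra bandwidth also changes how work is partitioned across GPUs):

```python
def overall_speedup(compute_frac: float, interconnect_speedup: float) -> float:
    """Amdahl-style estimate: only the waiting share shrinks when the
    interconnect gets faster; the compute share is unchanged."""
    wait_frac = 1.0 - compute_frac
    return 1.0 / (compute_frac + wait_frac / interconnect_speedup)

# With 30% compute time (Harris's figure), a 10x faster interconnect yields
# roughly a 2.7x overall speedup; even an infinitely fast one tops out at
# 1 / 0.3 ~= 3.3x unless the workload itself is repartitioned.
print(overall_speedup(0.30, 10))
print(overall_speedup(0.30, 1e9))
```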
In traditional data centers, GPUs are interconnected using an array of networked switches that form a kind of layered hierarchy. But this architecture creates too much latency, because for one GPU to talk to another, it must go through multiple switches to reach it. Harris said Lightmatter flattens that hierarchy. "So instead of six or seven layers of switches, you've got two, and each GPU can connect to thousands of others," he explained. Lightmatter refers to its novel interconnect architecture as "edgeless I/O," and says it can scale bandwidth across the entire die area of any GPU. In the context of integrated circuits, a die refers to an individual block of circuitry cut from a silicon wafer. Large AI systems contain thousands of these dies, all working in unison to crunch and process data. Traditional dies can only connect to other dies at the "shoreline," essentially at the edge of each die. Lightmatter, on the other hand, allows I/O connectivity anywhere on the surface of the die, vastly increasing the bandwidth it supports. The company said the upcoming L200 CPO is engineered for high-volume manufacturing, and it's working closely with semiconductor fabrication partners like Global Foundries Inc. to facilitate production-readiness. "Shoreline is no longer a limitation for I/O," Harris said in a statement. "This is all made possible by our close co-engineering with leading foundry and assembly partners and our supply chain ecosystem."
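The flattening Harris describes is easy to quantify in hop counts. A toy comparison, assuming a worst-case path that climbs the switch hierarchy and descends again; only the layer counts (six or seven versus two) come from the quote, and the per-hop latency is a hypothetical placeholder:

```python
def worst_case_hops(layers: int) -> int:
    """A packet climbing to the top of a 'layers'-deep switch hierarchy
    and descending again traverses roughly 2*layers - 1 switches."""
    return 2 * layers - 1

PER_HOP_NS = 500  # hypothetical per-switch latency, for illustration only

for layers in (7, 2):
    hops = worst_case_hops(layers)
    print(f"{layers} layers -> {hops} switch hops -> ~{hops * PER_HOP_NS} ns")
```

Fewer layers shorten the worst-case path and remove switch traversals, which is where the latency win of a flattened fabric comes from.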
[5]
Lightmatter releases new photonics technology for AI chips
Lightmatter, a startup valued at $4.4 billion, on Monday released two pieces of technology aimed at speeding up the connections between artificial intelligence chips. Instead of moving information between computer chips as electrical signals, Lightmatter's technology uses optical connections and what are known as silicon photonics to move the information using light. Mountain View, California-based Lightmatter has raised $850 million in venture funding to date as such optical technologies have kicked off a wave of investments in Silicon Valley amid a search for better ways to string together chips to power chatbots, image generators and other AI applications. AI chip firms like Advanced Micro Devices have demonstrated the use of optical technologies packaged together with their chips. Nvidia earlier this month introduced optical technology in some of its networking chips, though its CEO said the technology is not yet mature enough to use in all of its chips. Lightmatter on Monday introduced two new products that are designed to be packaged together with AI chips. One is called an interposer, a layer of material that the AI chip sits atop to connect to neighboring chips that also sit atop the interposer. The other is a small tile called a "chiplet" that can be placed on top of an AI chip. Lightmatter said its interposer will be released in 2025 and the chiplet in 2026. The interposer is manufactured by GlobalFoundries.
Lightmatter introduces two new photonic interconnect technologies, the Passage M1000 and L200 series, promising to revolutionize AI chip connectivity with unprecedented bandwidth and efficiency.
Lightmatter, a Silicon Valley startup valued at $4.4 billion, has unveiled two groundbreaking photonic interconnect technologies aimed at revolutionizing AI chip connectivity. The company's innovations, the Passage M1000 and L200 series, promise to address the growing demand for chip-to-chip bandwidth in AI deployments by leveraging silicon photonics [1].
The Passage M1000 is a multi-reticle eight-tile active 3D interposer that enables die complexes of 4,000 mm^2. This high-performance photonic interconnect platform boasts:
- Up to 114 Tbps (14.25 TB/s) of total optical bandwidth
- 1,024 serial data channels, each running at 56 Gbps
- 256 external fiber-optic lines with eight wavelengths each, for 448 Gbps per fiber
- A 7,735 mm^2 package that can deliver up to 1,500 W to its chiplets [1]
The M1000 is designed to function as an interposer between compute logic and the substrate, allowing multiple ASICs or GPU dies to be stacked on top. This design enables data to move vertically over the entire surface area of the chip, resulting in greater aggregate bandwidth [2].
Following the M1000, Lightmatter plans to introduce the Passage L200 and L200X in 2026:
- 32 Tbps (L200) and 64 Tbps (L200X) of total bidirectional bandwidth
- 56 Gbps NRZ SerDes on the L200 versus 112 Gbps PAM4 on the L200X
- Chiplet IP from Alphawave with a UCIe die-to-die interface
- Support for more than 200 Tbps of off-package I/O when multiple chiplet stacks are combined [1][2]
Lightmatter's approach offers several key benefits:
Edgeless connectivity: Data channels can be placed anywhere on the die surface, overcoming the limitations of traditional edge I/O designs [4].
Increased efficiency: By using light instead of electrical signals, the technology promises to reduce power consumption and increase speed [5].
Scalability: The technology can support massive clusters of thousands of GPUs with unprecedented bandwidth [4].
Lightmatter's innovations come at a crucial time for the AI industry, where the need for faster and more efficient chip-to-chip communication is paramount. The company says its technology can accelerate AI training and inference by more than eight times [4].
With $850 million in venture funding and partnerships with leading foundry and assembly partners such as GlobalFoundries, Lightmatter is well-positioned to bring its photonic interconnect technology to market [3][5].
As the AI industry continues to grow and demand ever-increasing computational power, Lightmatter's photonic interconnect technology could play a crucial role in overcoming current bottlenecks and enabling the next generation of AI applications.
Lightmatter raises $400 million in Series D funding, while other photonic startups like Oriole Networks and Xscape Photonics also secure significant investments. The surge in funding highlights the growing importance of photonics in addressing AI data center challenges.
5 Sources
Ayar Labs secures $155 million in Series D funding from major chipmakers and investors to scale up its light-based chip-to-chip communication technology, promising to revolutionize AI infrastructure.
6 Sources
Nvidia introduces new Spectrum-X and Quantum-X switch platforms using co-packaged optics technology, promising significant improvements in bandwidth, power efficiency, and scalability for AI data centers.
5 Sources
Swiss startup Lightium secures $7 million in funding to mass-produce thin-film lithium niobate photonic chips, aiming to reduce data center energy consumption and improve interconnect performance.
2 Sources
MIT researchers have created a new photonic chip that can perform all key computations of a deep neural network optically, achieving ultrafast speeds and high energy efficiency. This breakthrough could revolutionize AI applications in various fields.
4 Sources
© 2025 TheOutpost.AI All rights reserved