Curated by THEOUTPOST
On Tue, 3 Dec, 12:06 AM UTC
4 Sources
[1]
Photonic processor could enable ultrafast AI computations with extreme energy efficiency
The deep neural network models that power today's most demanding machine-learning applications have grown so large and complex that they are pushing the limits of traditional electronic computing hardware. Photonic hardware, which can perform machine-learning computations with light, offers a faster and more energy-efficient alternative. However, there are some types of neural network computations that a photonic device can't perform, requiring the use of off-chip electronics or other techniques that hamper speed and efficiency. Building on a decade of research, scientists from MIT and elsewhere have developed a new photonic chip that overcomes these roadblocks. They demonstrated a fully integrated photonic processor that can perform all the key computations of a deep neural network optically on the chip. The optical device was able to complete the key computations for a machine-learning classification task in less than half a nanosecond while achieving more than 92 percent accuracy -- performance that is on par with traditional hardware. The chip, composed of interconnected modules that form an optical neural network, is fabricated using commercial foundry processes, which could enable the scaling of the technology and its integration into electronics. In the long run, the photonic processor could lead to faster and more energy-efficient deep learning for computationally demanding applications like lidar, scientific research in astronomy and particle physics, or high-speed telecommunications. "There are a lot of cases where how well the model performs isn't the only thing that matters, but also how fast you can get an answer. 
Now that we have an end-to-end system that can run a neural network in optics, at a nanosecond time scale, we can start thinking at a higher level about applications and algorithms," says Saumil Bandyopadhyay '17, MEng '18, PhD '23, a visiting scientist in the Quantum Photonics and AI Group within the Research Laboratory of Electronics (RLE) and a postdoc at NTT Research, Inc., who is the lead author of a paper on the new chip. Bandyopadhyay is joined on the paper by Alexander Sludds '18, MEng '19, PhD '23; Nicholas Harris PhD '17; Darius Bunandar PhD '19; Stefan Krastanov, a former RLE research scientist who is now an assistant professor at the University of Massachusetts at Amherst; Ryan Hamerly, a visiting scientist at RLE and senior scientist at NTT Research; Matthew Streshinsky, a former silicon photonics lead at Nokia who is now co-founder and CEO of Enosemi; Michael Hochberg, president of Periplous, LLC; and Dirk Englund, a professor in the Department of Electrical Engineering and Computer Science, principal investigator of the Quantum Photonics and Artificial Intelligence Group and of RLE, and senior author of the paper. The research appears today in Nature Photonics.

Machine learning with light

Deep neural networks are composed of many interconnected layers of nodes, or neurons, that operate on input data to produce an output. One key operation in a deep neural network involves the use of linear algebra to perform matrix multiplication, which transforms data as it is passed from layer to layer. But in addition to these linear operations, deep neural networks perform nonlinear operations that help the model learn more intricate patterns. Nonlinear operations, like activation functions, give deep neural networks the power to solve complex problems.
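As a point of reference, the layer structure described above, a linear matrix multiplication followed by a nonlinear activation, can be sketched in a few lines of NumPy. The weights, layer sizes, and ReLU activation here are illustrative placeholders, not the network run on the chip:

```python
import numpy as np

def dense_layer(x, W, activation):
    """One deep-network layer: a linear matrix multiplication
    followed by a nonlinear activation function."""
    return activation(W @ x)

relu = lambda z: np.maximum(z, 0.0)  # a common nonlinear activation

rng = np.random.default_rng(0)
x = rng.normal(size=4)           # input vector
W1 = rng.normal(size=(4, 4))     # layer weights (placeholder values)
W2 = rng.normal(size=(3, 4))

# Each layer transforms the data linearly, then applies the nonlinearity.
h = dense_layer(x, W1, relu)
y = dense_layer(h, W2, relu)     # y.shape == (3,)
```

Every layer repeats this pair of steps; it is the nonlinear half that a purely optical chip historically could not perform.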
In 2017, Englund's group, along with researchers in the lab of Marin Soljačić, the Cecil and Ida Green Professor of Physics, demonstrated an optical neural network on a single photonic chip that could perform matrix multiplication with light. But at the time, the device couldn't perform nonlinear operations on the chip. Optical data had to be converted into electrical signals and sent to a digital processor to perform nonlinear operations. "Nonlinearity in optics is quite challenging because photons don't interact with each other very easily. That makes it very power consuming to trigger optical nonlinearities, so it becomes challenging to build a system that can do it in a scalable way," Bandyopadhyay explains. They overcame that challenge by designing devices called nonlinear optical function units (NOFUs), which combine electronics and optics to implement nonlinear operations on the chip. The researchers built an optical deep neural network on a photonic chip using three layers of devices that perform linear and nonlinear operations.

A fully integrated network

At the outset, their system encodes the parameters of a deep neural network into light. Then, an array of programmable beamsplitters, which was demonstrated in the 2017 paper, performs matrix multiplication on those inputs. The data then pass to programmable NOFUs, which implement nonlinear functions by siphoning off a small amount of light to photodiodes that convert optical signals to electric current. This process, which eliminates the need for an external amplifier, consumes very little energy. "We stay in the optical domain the whole time, until the end when we want to read out the answer. This enables us to achieve ultra-low latency," Bandyopadhyay says. Achieving such low latency enabled them to efficiently train a deep neural network on the chip, a process known as in situ training that typically consumes a huge amount of energy in digital hardware.
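A toy numerical sketch of this pipeline, with inputs encoded as optical amplitudes, a mesh of 2×2 beamsplitter-style rotations implementing the linear transform, and a NOFU-like stage that taps a small fraction of the light to a photodiode whose photocurrent modulates the remaining signal, might look like the following. The tap ratio, mesh layout, and response curve are assumptions for illustration, not the published device parameters:

```python
import numpy as np

def beamsplitter_mesh(thetas, n):
    """Compose 2x2 rotations between adjacent modes, mimicking a mesh of
    programmable beamsplitters that implements a unitary matrix."""
    U = np.eye(n, dtype=complex)
    k = 0
    for layer in range(n):
        for i in range(layer % 2, n - 1, 2):
            t = thetas[k % len(thetas)]
            k += 1
            R = np.eye(n, dtype=complex)
            R[i, i] = R[i + 1, i + 1] = np.cos(t)
            R[i, i + 1] = -np.sin(t)
            R[i + 1, i] = np.sin(t)
            U = R @ U
    return U

def nofu(field, tap=0.1, gain=5.0):
    """NOFU-like stage (illustrative): tap off a fraction of the optical
    power to a photodiode, and let the resulting photocurrent modulate the
    transmission of the remaining light -- an intensity-dependent
    nonlinearity, with no external amplifier in the loop."""
    photocurrent = tap * np.abs(field) ** 2                      # detected power
    transmission = 1.0 / (1.0 + np.exp(-gain * (photocurrent - 0.5)))
    return np.sqrt(1.0 - tap) * field * transmission

rng = np.random.default_rng(1)
n = 4
x = rng.normal(size=n).astype(complex)   # data encoded in optical amplitudes
for _ in range(3):                       # three linear + nonlinear layers
    U = beamsplitter_mesh(rng.uniform(0, np.pi, size=8), n)
    x = nofu(U @ x)
out = np.abs(x) ** 2                     # read out intensities only at the end
```

The key property the sketch mirrors is that the signal stays "optical" (a complex field) through all three layers, with detection only at the final readout.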
"This is especially useful for systems where you are doing in-domain processing of optical signals, like navigation or telecommunications, but also in systems that you want to learn in real time," he says. The photonic system achieved more than 96 percent accuracy during training tests and more than 92 percent accuracy during inference, which is comparable to traditional hardware. In addition, the chip performs key computations in less than half a nanosecond. "This work demonstrates that computing -- at its essence, the mapping of inputs to outputs -- can be compiled onto new architectures of linear and nonlinear physics that enable a fundamentally different scaling law of computation versus effort needed," says Englund. The entire circuit was fabricated using the same infrastructure and foundry processes that produce CMOS computer chips. This could enable the chip to be manufactured at scale, using tried-and-true techniques that introduce very little error into the fabrication process. Scaling up their device and integrating it with real-world electronics like cameras or telecommunications systems will be a major focus of future work, Bandyopadhyay says. In addition, the researchers want to explore algorithms that can leverage the advantages of optics to train systems faster and with better energy efficiency. This research was funded, in part, by the U.S. National Science Foundation, the U.S. Air Force Office of Scientific Research, and NTT Research.
[2]
Photonic processor could enable ultrafast AI computations with extreme energy efficiency
[3]
Photonic processor could enable ultrafast AI computations with extreme energy efficiency
[4]
MIT's new chip performs AI-powering computations under a nanosecond
Capable of completing key computations in less than half a nanosecond, this chip could power ultrafast artificial intelligence (AI) applications in the near future, a university press release said. Deep neural networks, which are being deployed to build cutting-edge AI applications, are pushing the limits of computational hardware. The fast-growing field of AI already has a reputation for energy-hungry infrastructure, and with electronic hardware reaching its limits, researchers are keen to develop advances that can handle computing demands while also being energy efficient. Photonic hardware can deliver on both fronts, since it processes information using light rather than electrons. However, the technology is still developing and relies on electronic hardware in some areas, which slows processing and holds back progress. This is where a research team led by Dirk Englund, principal investigator of the Quantum Photonics and Artificial Intelligence Group at MIT, has made a major breakthrough. Much like neurons in the brain, deep neural networks (DNNs) use interconnected layers of nodes to process information and produce output. DNNs rely on two types of operations: linear operations, where matrix multiplication transforms data as it passes between nodes, and nonlinear operations, such as activation functions, which let the network learn more intricate patterns.
MIT researchers have created a new photonic chip that can perform all key computations of a deep neural network optically, achieving ultrafast speeds and high energy efficiency. This breakthrough could revolutionize AI applications in various fields.
MIT researchers, along with collaborators from other institutions, have developed a groundbreaking photonic chip that could revolutionize artificial intelligence (AI) computations. This fully integrated photonic processor can perform all key computations of a deep neural network optically on the chip, offering unprecedented speed and energy efficiency [1][2][3].
Deep neural network models, which power today's most demanding machine-learning applications, have grown increasingly complex, pushing the limits of traditional electronic computing hardware. While photonic hardware offers a faster and more energy-efficient alternative for machine-learning computations, it has been limited by its inability to perform certain types of neural network computations on-chip [1][2][3].
The new photonic chip overcomes these limitations by incorporating nonlinear optical function units (NOFUs), devices that combine electronics and optics to implement nonlinear operations directly on the chip. This design allows the chip to perform both linear and nonlinear operations entirely in the optical domain, eliminating the need for off-chip electronics that previously hampered speed and efficiency [1][2][3].
The optical device demonstrated remarkable capabilities:
- Completing the key computations for a machine-learning classification task in less than half a nanosecond
- Achieving more than 96 percent accuracy during training tests
- Achieving more than 92 percent accuracy during inference, on par with traditional hardware
The chip is composed of interconnected modules forming an optical neural network and is fabricated using commercial foundry processes. This approach could enable scaling of the technology and its integration into electronics [1][2][3].
The photonic processor's ultrafast and energy-efficient deep learning capabilities could benefit various computationally demanding applications, including:
- Lidar
- Scientific research in astronomy and particle physics
- High-speed telecommunications
As Dirk Englund, a senior author of the study, notes, "This work demonstrates that computing -- at its essence, the mapping of inputs to outputs -- can be compiled onto new architectures of linear and nonlinear physics that enable a fundamentally different scaling law of computation versus effort needed" [2].
The development of this photonic chip marks a significant step forward in the field of AI hardware, potentially paving the way for more efficient and powerful AI systems in the future.
Reference
[1] Massachusetts Institute of Technology | Photonic processor could enable ultrafast AI computations with extreme energy efficiency
[2] Photonic processor could enable ultrafast AI computations with extreme energy efficiency
[3] Photonic processor could enable ultrafast AI computations with extreme energy efficiency
[4] MIT's new chip performs AI-powering computations under a nanosecond