Curated by THEOUTPOST
On Fri, 14 Feb, 12:11 AM UTC
4 Sources
[1]
Startup gets $100M funding for low-power analog AI chips
Interview AI chip startup EnCharge claims its analog artificial intelligence accelerators could rival desktop GPUs while using just a fraction of the power. Impressive -- on paper, at least. Now comes the hard part: proving it in the real world.

The outfit says it has developed a novel in-memory compute architecture for AI inferencing that replaces traditional transistors with analog capacitors to achieve a 20x performance-per-watt advantage over digital accelerators such as GPUs. According to CEO Naveen Verma, EnCharge's inference chip delivers 150 TOPS of AI compute at 8-bit precision on just one watt of power. Scale it up to 4.5 watts, and Verma claims it could match desktop GPUs -- but with 1/100th the power draw. That's the pitch, at least.

This isn't all theoretical, however. EnCharge's chips were spun out of Verma's lab at Princeton, where they were developed with support from the United States Defense Advanced Research Projects Agency, aka DARPA, and Taiwanese foundry giant TSMC. Verma told us the biz has now taped out several test chips to prove the architecture works. "The products we're building are actually based on a fundamental technology that came out of my research lab," he said. "We've really had an opportunity here to look at, fundamentally, what are the challenges with AI compute."

With $100 million in new Series B funding from Tiger Global, RTX, and others, EnCharge plans to tape out its first production chips for mobile, PCs, and workstations later this year.

Verma claims the real difference is in how and where the chip handles computation. The vast majority of genAI compute today is done using many, many multiply-accumulate units, or MACs for short. In traditional architectures, these are built from billions of transistor gates, which ultimately operate on discrete values because the numbers are represented as binary ones and zeroes.
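Verma's headline figure is easy to sanity-check with plain arithmetic: 150 TOPS on one watt works out to 150 tera-operations per joule, or roughly 6.7 femtojoules per 8-bit operation. The snippet below is just that back-of-the-envelope calculation, not a claim about the silicon itself:

```python
# Claimed efficiency: 150 TOPS of 8-bit compute on 1 watt.
tops = 150
watts = 1.0

ops_per_second = tops * 1e12
ops_per_joule = ops_per_second / watts          # at 1 W, joules == seconds
femtojoules_per_op = 1e15 / ops_per_joule       # energy per 8-bit operation

print(f"{femtojoules_per_op:.2f} fJ per operation")  # ≈ 6.67 fJ
```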
Verma argues this approach can be improved upon -- made more efficient and more precise -- by using continuous values rather than discrete ones. Thus, EnCharge's MACs are built from analog capacitors, which can represent arbitrary continuous signal values via their charge level. And because capacitors are basically just two conductors separated by a dielectric material, they can easily be etched into silicon using existing CMOS technologies, Verma said.

The second element of EnCharge's design is that the analog computation is handled in memory. In-memory compute is by no means a new concept; several companies have been working for years to commercialize AI accelerators based on it. The idea is that by embedding compute -- often a bunch of math circuits -- into the memory itself, matrix operations can be performed in place rather than having to move data around. In EnCharge's design, the analog capacitors carry out this calculation by adding up charges. "When you drive any one of these capacitors, the output of the capacitive line that's coupled basically goes to the average value of the signal," he said. "An average is an accumulation. It should normalize to the number of terms you're averaging."

Achieving this took eight years of research and development, involving not only an in-memory analog multiply-accumulate unit but also all the other machinery necessary to make it programmable. "We recognized that what you have to do when you have these fundamental technology breakthroughs, is also build a full architecture, and build all of the software," Verma said.

And speaking of programmability, EnCharge's chip supports a variety of AI workloads, ranging from convolutional neural networks to the transformer architectures behind large language and diffusion models. As an inference chip, the design will vary depending on the target workload.
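Verma's averaging description can be illustrated numerically. The sketch below is a toy model, not EnCharge's actual circuit: each capacitor on a coupled line is driven with one product term, the shared line settles to the average of the driven values, and scaling by the number of terms undoes the normalization to recover the accumulation, i.e. the dot product.

```python
import numpy as np

def analog_mac(inputs, weights):
    """Toy model of a capacitive in-memory multiply-accumulate.

    Each product w_i * x_i is the signal driven onto one capacitor of
    a coupled capacitor line. With equal capacitances, the shared line
    settles to the AVERAGE of the driven values; multiplying by the
    number of terms recovers the accumulation (the dot product).
    """
    driven = weights * inputs          # per-capacitor signal values
    line_voltage = driven.mean()       # coupled line settles to the average
    return line_voltage * driven.size  # undo the 1/N normalization

rng = np.random.default_rng(0)
x = rng.normal(size=64)
w = rng.normal(size=64)

# The averaged-and-rescaled result matches a conventional digital MAC.
assert np.isclose(analog_mac(x, w), np.dot(x, w))
```

In real hardware the interesting part is that the averaging happens physically, via charge sharing, rather than through a chain of digital adders; this model only shows why an average plus a rescale is mathematically equivalent to a sum.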
For some workloads, factors such as memory capacity and bandwidth have a bigger impact on performance than raw compute. Large language models, for example, tend to be heavily memory bound, with capacity and bandwidth often mattering more to perceived performance than the number of TOPS a chip can churn out. So, Verma says, an EnCharge chip targeting those kinds of workloads might dedicate less die area to compute to make room for a wider memory bus. On the flip side, for something like diffusion models, which aren't nearly as memory bound, you might want more compute in order to generate images faster.

For now, EnCharge is sticking to M.2 and PCIe add-in cards for ease of adoption. We've previously seen lower-power accelerators packaged in this form factor, such as Google's Coral TPU and Hailo's NPUs. In the long run, the technology could be adapted for larger, higher-wattage applications, Verma said. "Fundamentally, the ability to grow to 75 watt PCIe cards and so on is all there."

The initial batch of production EnCharge chips is expected to tape out later this year, though Verma notes it'll take a while longer before they see widespread adoption, as the startup works to integrate the chips into customers' designs and build out its software pipeline. ®
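The memory-bound point can be made concrete with a rough roofline-style estimate. Generating one token with a large language model streams every weight from memory once and performs about two operations per weight, so per-token latency is bounded by the slower of compute and memory traffic. All the numbers below are illustrative assumptions, not EnCharge specifications:

```python
def token_time_s(params_b, bytes_per_param, tops, mem_gbps):
    """Rough per-token latency bound for single-stream LLM decoding.

    Decoding one token performs ~2 ops per parameter (one multiply,
    one add) and must read every weight from memory once, so token
    time is bounded by the slower of compute and memory streaming.
    """
    params = params_b * 1e9
    compute_s = 2 * params / (tops * 1e12)
    memory_s = params * bytes_per_param / (mem_gbps * 1e9)
    return max(compute_s, memory_s), memory_s > compute_s

# A hypothetical 7B-parameter model with 8-bit weights on a 150-TOPS
# part fed by 100 GB/s of memory bandwidth:
t, memory_bound = token_time_s(params_b=7, bytes_per_param=1,
                               tops=150, mem_gbps=100)
print(f"{t:.3f} s/token, memory bound: {memory_bound}")
# ≈ 0.07 s per token (~14 tokens/s): memory, not TOPS, sets the pace.
```

Under these assumptions the compute side finishes in well under a millisecond while streaming the weights takes tens of milliseconds, which is exactly why a chip aimed at LLMs might trade compute area for a wider memory bus.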
[2]
This DARPA-backed startup banked $100 million for its energy-slashing analog chips
A young DARPA-backed startup with a fresh spin on low-power computer chips has raised over $100 million in a Series B funding round, a sign of the wild appetite for more energy-efficient ways to build bigger and better AI. The company, EnCharge AI, aims to move AI's heaviest workloads from big, power-hungry data centers to devices at the edge, including laptops and mobile devices, where energy, size, and cost constraints are tighter. Its approach, known as analog in-memory computing, comes from research that CEO Naveen Verma spun out of his lab at Princeton University, where he's still a professor of electrical and computer engineering.

Verma wouldn't say who the company's customers are. But in addition to the U.S. Defense Advanced Research Projects Agency (DARPA), which gave it $18.6 million last year, a who's who of industrial, electronics, and defense players is interested in EnCharge's chips. The oversubscribed funding round, led by Tiger Global, includes the intelligence community's investment unit In-Q-Tel, alongside the venture arms of defense giant RTX, power producer Constellation Energy, South Korea's Samsung, and Taiwan's Hon Hai (Foxconn). The Santa Clara, California-based startup is also working with semiconductor giant Taiwan Semiconductor (TSMC) to produce its first-generation chips. The new investment brings EnCharge's total funding to more than $144 million and will help the 60-person company commercialize its technology, which isn't cheap in the world of semiconductors.
[3]
EnCharge raises $100M to accelerate the rollout of its energy-efficient edge AI chips - SiliconANGLE
A specialist artificial intelligence chipmaker called EnCharge AI Inc. said today it has closed a more than $100 million Series B round of funding, bringing its total raised to date to more than $144 million. The company says it will use the funding to accelerate the commercialization of its first client computing AI accelerators, which are expected to launch later this year.

The latest round was backed by a host of investors, with Tiger Global leading the way and new investors such as Maverick Silicon, Capital TEN, SIP Global Partners, Zero Infinity Partners, CTBC VC, Vanderbilt University and Morgan Creek Digital also participating. Previous investors such as RTX Ventures, Anzu Partners, Scout Ventures, AlleyCorp, ACVC and S5V joined the round too, as did the likes of Samsung Ventures and HH-CTBC, which focus specifically on the semiconductor industry. Another investor was In-Q-Tel, which specializes in backing startups that develop technologies for the U.S. national security community.

EnCharge is looking to change the reality that today, the vast majority of AI inference computation is performed by enormous clusters of extremely powerful and energy-intensive graphics processing units in cloud data centers. The startup believes this is unsustainable, both environmentally and economically, and that it can provide significant advantages by moving many of these cloud-based workloads onto local devices, where they will benefit from superior security and lower latency.

Founded by a team of engineering Ph.D.s and incubated at Princeton University, the startup has created powerful analog in-memory computing AI chips that it says will dramatically reduce the energy requirements of many AI workloads. Its technology has been in development for more than eight years.
It's essentially a highly programmable application-specific integrated circuit built around a novel approach to memory. In a 2022 interview with SiliconANGLE, EnCharge co-founder and Chief Executive Naveen Verma explained that the chips use "charge-based memory." It differs from traditional memory design in that data is read from the electrical signal on a shared memory plane rather than from individual bit cells. That approach enables the use of precise capacitors in place of less precise semiconductor devices, and it's what lets EnCharge's chips deliver large efficiency gains in the data-reduction operations at the heart of matrix multiplication, Verma explained.

"Instead of communicating individual bits, you communicate the result," Verma said. "You can do that by adding up the currents of all the bit cells, but that's noisy and messy. Or you can do that accumulation using the charge. That lets you move away from semiconductors to very robust and scalable capacitors. That operation can now be done very precisely."

The company says this increased efficiency means its chips require 20 times less energy than typical GPUs. They're also versatile: EnCharge has built an entire suite of software tools that lets developers optimize the chips for efficiency, performance and fidelity. The chips are mounted on cards that plug directly into a PCIe interface, so they can be fitted to a wide range of devices and machines, and can even be used in tandem with GPUs, the company says.

Holger Mueller of Constellation Research Inc. said it's good to see EnCharge targeting the runtime cost of AI, since most of the industry's cost-cutting efforts thus far have focused on the cost of training AI models. There's also a need to address the power constraints facing AI at the edge, he said.
"EnCharge is doing things differently from traditional AI runtime architectures, and its approach looks to be promising," Mueller noted. "The question will be how well can existing models run on its silicon? If they don't run so well, then retraining efforts will push up the total cost of ownership and potentially negate any gains it delivers." EnCharge's efficiency gains have gotten a lot of attention from companies in the defense and aerospace industries, said RTX Ventures Managing Director Dan Ateya. "EnCharge's analog in-memory architecture can be transformative for defense and aerospace use cases, where size, weight and power constraints limit how AI is deployed today," he said. In addition, its technology should appeal to the broader AI industry at a time when customers are becoming increasingly concerned about the enormous energy demands required by the most powerful generative AI applications. EnCharge's roadmap calls for the company to move to advanced technology nodes as it prepares to start shipping a portfolio of its analog in-memory chips that will cater to different AI workloads spanning the data center to the edge.
[4]
EnCharge raises $100M+ to accelerate AI using analog chips | TechCrunch
EnCharge AI, a semiconductor startup developing analog in-memory computing chips for AI applications, has raised more than $100 million in a Series B round led by Tiger Global to spur its next stage of growth. The funding is significant partly because interest in AI is at an all-time high, yet the high price of building and operating AI services remains a red flag. EnCharge, spun out from Princeton University, believes its analog chips -- envisioned to be embedded in devices such as laptops, desktops, handsets and wearables -- will not only speed up AI processing, they'll help bring the cost down as well. Santa Clara-based EnCharge claims its AI accelerators use 20 times less energy to run workloads than other chips on the market, and expects to have the first of those chips available later this year.

EnCharge's fundraise is notable because it comes at a time when the U.S. government has identified hardware and infrastructure (including chips) as two key areas where it wants to boost domestic innovation and products. If it's successful in its execution, EnCharge could become a key part of that strategy. This Series B is a fresh round of funding, the company has confirmed to me. Of note: a tranche of funding we reported in December 2023 was not part of this Series B. There was a hint of this round last May, when Bloomberg reported that EnCharge wanted to raise at least $70 million more to expand its business.

In an interview with TechCrunch, EnCharge's CEO and co-founder Naveen Verma would not disclose the company's valuation. PitchBook data indicating EnCharge raised money in October at a $438 million post-money valuation is incorrect, the company told TechCrunch. Verma also would not disclose who its customers are, but the funding comes from a long and interesting list of strategic and financial investors that hints at who is likely working with the startup.
In addition to Tiger Global, others in the round include Maverick Silicon, Capital TEN (from Taiwan), SIP Global Partners, Zero Infinity Partners, CTBC VC, Vanderbilt University and Morgan Creek Digital, along with returning investors RTX Ventures, Anzu Partners, Scout Ventures, AlleyCorp, ACVC and S5V. Corporations that invested in the round include Samsung Ventures and HH-CTBC -- a partnership between Hon Hai Technology Group (Foxconn) and CTBC VC. Previously, the VentureTech Alliance also backed EnCharge. Others include In-Q-Tel (the government-backed investor associated with the CIA), RTX Ventures (the VC arm of the aerospace and defense contractor), and Constellation Technology (a clean-energy company). The startup has also received grants from U.S. organizations such as DARPA and the Department of Defense.

Verma said EnCharge is working closely with TSMC, which he has previously said would manufacture the company's first chips. "TSMC has been following my research for many, many years," he said in an interview, adding that the involvement dates back to the early stages of EnCharge's R&D. "They've given us access to very advanced silicon. That's a very rare thing for them to do."

With its focus on analog, EnCharge is taking a different approach from its competitors. So far, all eyes have been on the processing chips used for AI training and inference at the server end, which has translated into a major surge of business for GPU makers like Nvidia and AMD. The difference in EnCharge's approach is laid out in a recent paper on analog chips from IBM's research team. As IBM's researchers explain it, there is "no separation between compute and memory, making these processors exceptionally economical compared to traditional designs." IBM, like EnCharge, concludes that so far, the physical properties of these chips make them well suited to inference, but less so to training.
EnCharge's chips are not used for training; they run existing AI models at "the edge." But the startup (and others, like IBM) continues to work on new algorithms that could expand the use cases.

IBM and EnCharge are not the only companies working on analog approaches. But as Verma explains it, one of EnCharge's breakthroughs has been in the design of its chips -- specifically, making them noise-resilient. "If you have 100 billion transistors on a chip, they can all have noise, and you need them all to work, so you want to have that signal separation. But you're also leaving a lot of efficiency on the table because you're not representing all these signals in between. Analog attempts to do that," Verma explained. "The big breakthrough we had is figuring out how to make analog not sensitive to noise." The company uses "a very precise device that you get for free in the standard supply chain," he said -- a set of geometry-dependent metal wires that "you can control very, very well."

The company, Verma says, is full-stack: it has also developed software around its hardware. It helps EnCharge's case that Verma and his co-founders, COO Echere Iroaga and CTO Kailash Gopalakrishnan -- who previously worked at semiconductor company Macom and at IBM, respectively -- bring a lot of expertise to the table. But it remains to be seen whether this will be enough to keep EnCharge competitive in an extremely crowded market. Other startups in the analog chip race include Mythic and Sagence.

"We at Anzu have looked at probably 50-plus companies in this space -- at least 50 between 2017 and 2021, and probably more than 50 since then," said Jimmy Kan, an investment partner focused on semiconductors at Anzu Partners, who previously worked on chips at Qualcomm. "One out of every five of those was some sort of new novel architecture like analog or spiking neural network computation chips.
We really had it in our mind to find an AI compute technology that was really, really differentiated -- versus incremental, versus something that Nvidia might just develop next quarter or next year," he added. "So we're really, really excited to see the progress that EnCharge has made."

EnCharge's rise stands in contrast to how a lot of deep tech startups have developed over the last several years. One knock-on effect of the technology boom of the last 25 years has been ample venture funding ready to back startups building what could be the next Google, Microsoft, Apple, Meta or Amazon. That, in turn, has produced a much bigger pool of startups, including a growing number of deep tech efforts: smart founders raising money not for finished products, but for interesting ideas that are not yet market-ready yet could be a big deal if brought into the world. Quantum computing is a classic "deep tech" category, for example.

EnCharge could easily have been part of that wave if it had spun out of Princeton earlier and worked quietly with venture and other funding to build the next innovation in chips. But the startup waited years to venture out on its own. It was in 2022, nearly a decade after Verma and his team first started their research at Princeton, that the company emerged from stealth and began securing commercial partners while continuing to develop its technology.

"There's certain kinds of innovations where you can jump to venture backing very early on. But if what you're doing is developing a fundamentally new technology, there's a lot of aspects of that that need to be understood to de-risk it -- and a lot of them fail," Verma said. "The day you take venture funding, your agenda changes... It's no longer about understanding the technology. You have to be customer-focused."
EnCharge AI, a startup developing energy-efficient analog AI chips, has raised over $100 million in Series B funding. The company claims its technology can rival desktop GPUs while using a fraction of the power, potentially revolutionizing AI processing at the edge.
EnCharge AI, a startup spun out of Princeton University, has secured over $100 million in Series B funding to accelerate the development and commercialization of its groundbreaking analog AI chips [1][2]. The company's technology promises to deliver significant improvements in energy efficiency for AI processing, potentially revolutionizing the industry.
At the core of EnCharge AI's innovation is its analog in-memory computing architecture. Unlike traditional digital chips that use transistors, EnCharge's design employs analog capacitors to perform AI computations [1]. This approach allows for the representation of continuous values rather than discrete binary ones and zeros, resulting in improved efficiency and precision [1].
The company claims its chips can achieve a 20x performance-per-watt advantage over digital accelerators like GPUs [1]. According to CEO Naveen Verma, an EnCharge chip can deliver 150 TOPS (trillion operations per second) of AI compute at 8-bit precision while consuming just one watt of power [1].
EnCharge AI's technology is designed to move AI workloads from power-hungry data centers to edge devices such as laptops, mobile phones, and wearables [2]. This shift could lead to significant improvements in energy efficiency, cost reduction, and enhanced security for AI applications [3].
The company's chips are versatile, supporting various AI workloads including convolutional neural networks and transformer architectures used in large language and diffusion models [1]. EnCharge is developing a portfolio of chips catering to different AI workloads, from data centers to edge devices [3].
The $100 million Series B round was led by Tiger Global, with participation from numerous strategic investors including In-Q-Tel, RTX Ventures, Samsung Ventures, and HH-CTBC [2][3]. This brings EnCharge AI's total funding to over $144 million [2].
The company has also received support from DARPA and is working closely with TSMC for chip manufacturing [1][4]. This backing from both government and industry leaders underscores the potential impact of EnCharge's technology.
While EnCharge AI's approach shows promise, the company faces challenges in integrating its chips into customer designs and building out its software pipeline [1]. Additionally, the startup operates in a competitive field, with other companies like IBM also exploring analog chip designs for AI applications [4].
EnCharge AI plans to tape out its first production chips later this year, targeting mobile devices, PCs, and workstations [1]. The company's roadmap includes moving to advanced technology nodes and expanding its chip portfolio to address various AI workloads [3].
As the AI industry grapples with the enormous energy demands of powerful generative AI applications, EnCharge AI's energy-efficient solution could play a crucial role in shaping the future of AI processing, particularly at the edge [3].
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved