The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved
Curated by THEOUTPOST
On Thu, 19 Sept, 12:05 AM UTC
3 Sources
[1]
SiFive offers drop-in AI accelerator driven by RISC-V CPUs
SiFive, having designed RISC-V CPU cores for various AI chips, is now offering to license the blueprints for its own homegrown full-blown machine-learning accelerator.

Announced this week, SiFive's Intelligence XM series clusters promise a scalable building block for developing AI chips large and small. The idea is that others can license the RISC-V-based designs to integrate into processors and systems-on-chips - to be placed in products from edge and IoT gear to datacenter servers - and hopefully foster more competition between architectures.

Fabless SiFive is no stranger to the AI arena. As we've previously reported, at least some of Google's tensor processing units are already using SiFive's X280 RISC-V CPU cores to manage the machine-learning accelerators and keep their matrix multiplication units (MXUs) fed with work and data. Likewise, John Ronco, SVP and GM of SiFive UK, told The Register that SiFive's RISC-V designs also underpin the CPU cores found in Tenstorrent's newly disclosed Blackhole accelerator, which we looked at in detail at Hot Chips last month.

And in a canned statement, SiFive CEO Patrick Little claimed the US-based outfit is now supplying RISC-V-based chip designs to five of the "Magnificent 7" companies - Microsoft, Apple, Nvidia, Alphabet, Amazon, Meta, and Tesla - though we suspect not all that silicon necessarily involves AI.

What sets SiFive's Intelligence XM series apart from previous engagements with the likes of Google or Tenstorrent is that rather than having its CPU cores attached to a third-party matrix math engine, all packaged up in the same chip, SiFive is instead bringing out its own complete AI accelerator design for customers to license and put into silicon. This isn't aimed at semiconductor players capable of crafting their own accelerators, such as Google and Tenstorrent - it's aimed at organizations that want to take an off-the-shelf design, customize it, and send it to the fab.
"For some customers, it's still going to be right for them to do their own hardware," Ronco said. "But, for some customers, they wanted more of a one-stop shop from SiFive."

In this sense, these XM clusters are a bit like Arm's Compute Subsystem (CSS) designs in that they offer customers a more comprehensive building block for designing custom silicon. But instead of general application processors, SiFive is targeting those who want to make their own AI accelerators.

SiFive's base XM cluster is built around four of SiFive's Intelligence X RISC-V CPU cores, which are connected to an in-house matrix math engine specifically for powering through neural network calculations in hardware. If you're not familiar, we've previously explored SiFive's X280 and newer X390 X-series core designs, the latter of which can be configured with a pair of 1,024-bit vector arithmetic logic units.

Each of these clusters boasts support for up to 1TB/sec of memory bandwidth via a coherent hub interface, and is expected to deliver up to 16 TOPS (tera-operations per second) of INT8 or 8 teraFLOPS of BF16 performance per gigahertz.

TeraFLOPS per gigahertz might seem like an odd metric, but it's important to remember this isn't a complete chip: performance is going to be determined in large part by how many clusters the customer places in their component, how it's all wired up internally, what else is on the die, what the power and cooling situation is, and how fast it ends up clocked.

At face value, these XM clusters may not sound that powerful - especially when you consider SiFive expects most chips based on the design to operate at around 1GHz. However, stick a few together and the performance potential adds up quickly.
Ronco expects most chips based on the design will use somewhere between four and eight XM clusters, which in theory would allow for between 4-8TB/sec of peak memory bandwidth and 32-64 teraFLOPS of BF16 performance - and that's assuming a 1GHz operating clock.

That's still far slower than something like an Nvidia H100, which can churn out nearly a petaFLOPS of dense BF16 performance. But as we mentioned earlier, FLOPS aren't everything - especially when it comes to bandwidth-constrained workloads like AI inferencing. There are considerations like price, power, process node, and everything else. For this reason, Ronco expects SiFive's XM clusters probably won't be used as widely for AI training.

That said, the design isn't limited to eight clusters. Ronco was hesitant to say how far the design can scale - some of this is probably down to process tech and die area. However, the company's product slide deck suggests 512 XM clusters is within the realm of possibility. Again, it will be up to the customer to decide what's appropriate for their specific application.

Assuming the end customer can actually maintain a 1GHz clock speed without running into thermal or power limitations, a chip with 512 XM clusters would rival Nvidia's upcoming Blackwell accelerators, boasting roughly four petaFLOPS of BF16 matrix compute. For comparison, Nvidia's top-specced Blackwell GPUs boast 2.5 petaFLOPS of BF16 performance.

Along with its new XM clusters, SiFive says it will also offer an open source reference implementation of its SiFive Kernel Library to reduce barriers to adoption for RISC-V architectures. ®
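The cluster-count arithmetic above is easy to check against the article's per-cluster figures (8 BF16 teraFLOPS and 16 INT8 TOPS per GHz, 1TB/sec of bandwidth). The sketch below is purely illustrative - the function name and the cluster/clock configurations are ours, not SiFive's, and these are peak paper numbers, not measured performance:

```python
# Back-of-envelope peak figures for a hypothetical chip built from XM
# clusters, using the per-cluster numbers quoted in the article:
#   8 BF16 TFLOPS and 16 INT8 TOPS per GHz, 1 TB/s memory bandwidth.
# Cluster counts and the 1GHz clock are assumptions from the article,
# not confirmed product configurations.

def xm_estimate(clusters: int, clock_ghz: float = 1.0) -> dict:
    """Scale the article's per-cluster figures to a whole chip."""
    return {
        "bf16_tflops": 8 * clusters * clock_ghz,
        "int8_tops": 16 * clusters * clock_ghz,
        # Bandwidth is quoted per cluster, independent of clock speed.
        "mem_bw_tbs": 1 * clusters,
    }

for n in (4, 8, 512):
    est = xm_estimate(n)
    print(f"{n:3d} clusters: {est['bf16_tflops']:.0f} BF16 TFLOPS, "
          f"{est['int8_tops']:.0f} INT8 TOPS, {est['mem_bw_tbs']} TB/s")
```

At 1GHz this reproduces the article's ranges: 32-64 BF16 teraFLOPS for the expected four-to-eight-cluster chips, and roughly four petaFLOPS (4,096 teraFLOPS) for the 512-cluster ceiling.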
[2]
SiFive unveils RISC-V chip design for high-performance AI workloads
SiFive, a designer of chips based on the RISC-V computing platform, announced a series of new AI chip designs for high-performance AI workloads.

The SiFive Intelligence XM Series is designed for accelerating high-performance AI workloads. This is the first intellectual property from SiFive to include a highly scalable AI matrix engine, which accelerates time to market for semiconductor companies building system-on-chip solutions for edge IoT, consumer devices, next-generation electric and/or autonomous vehicles, data centers, and beyond. As part of SiFive's plan to support customers and the broader RISC-V ecosystem, SiFive also announced its intention to open source a reference implementation of its SiFive Kernel Library (SKL).

The announcement was made at a SiFive press event, Tuesday, in Santa Clara, where executives discussed the leadership role the RISC-V architecture is playing at the core of AI solutions across a diverse range of market leaders, and provided an update on SiFive's strategy, roadmap and business momentum.

The open solution

Patrick Little, CEO of SiFive, said in an interview with VentureBeat that customers in the semiconductor, systems and consumer markets have come to appreciate the software strategy behind SiFive and RISC-V. He noted that products with more than 10 billion SiFive cores have shipped to date, that SiFive has invested more than $500 million in R&D, and that it is selling to the top semiconductor leaders and hyperscalers. The company has more than 400 design wins.

The RISC-V architecture has an open-standard software interface, meaning any kind of core can connect to it. That means customers who use SiFive designs can choose their own accelerators for AI and other applications without having to worry about breaking software compatibility, Little said.
While big leaders in AI like Nvidia can use their own proprietary graphics processing unit (GPU) architectures, smaller companies use their own breed of accelerators, he said. But software programmers don't want to learn a new language every time a new accelerator comes along, Little said. So the hyperscalers and chip companies want to use RISC-V solutions like SiFive's so they don't have to keep rewriting their software, he said. The RISC-V open standard software interface allows for the graceful evolution of the RISC-V standard over time, and it de-risks the solution beyond a single proprietary vendor.

SiFive has been steadily moving up the food chain, starting with embedded cores and adding its first vector processor in 2021. Now it is adding AI solutions. Customers can use its design as a data flow processor at the front end of their processor, paired with their changing backend AI accelerators.

"They don't want to keep rewriting the AI software. So we put a RISC-V vector processor in front of that. The AI processors keep changing fast. The models keep changing. Software writers want to write to something that will be around in 15 years," he said. "We are one of the few companies that can fill that gap. And today we announced our own accelerator, and we are doing the XM product line to complement what we did in vector processing. It's a matrix multiplication engine."

Customers who want an alternative to Nvidia can turn to another source, but they don't want that rival to be another proprietary solution. Rather, they like RISC-V because it has many companies behind it, Little said.

"We believe our solution can scale to Nvidia-level performance," he said. "Many companies are seeing the benefits of an open processor standard while they race to keep up with the rapid pace of change with AI. AI plays to SiFive's strengths with performance per watt and our unique ability to help customers customize their solutions," said Little.
"We're already supplying our RISC-V solutions to five of the Magnificent 7 companies, and as companies pivot to a 'software first' design strategy we are working on new AI solutions with a wide variety of companies from automotive to datacenter and the intelligent edge and IoT."

SiFive's new XM Series offers an extremely scalable and efficient AI compute engine. By integrating scalar, vector, and matrix engines, XM Series customers can take advantage of very efficient memory bandwidth. The XM Series also continues SiFive's legacy of offering extremely high performance per watt for compute-intensive applications.

"RISC-V was originally developed to efficiently support specialized computing engines, including mixed-precision operations," said Krste Asanovic, SiFive chief architect, in a statement. "This, coupled with the inclusion of efficient vector instructions and the support of specialized AI extensions, are the reasons why many of the largest datacenter companies have already adopted RISC-V AI accelerators."

As part of his presentation, Asanovic introduced more details on the new XM Series, which broadens SiFive's Intelligence product family. Featuring four X-cores per cluster, a cluster can deliver 16 TOPS (INT8) or 8 TFLOPS (BF16) per GHz. Each XM Series cluster has 1TB/s of sustained memory bandwidth, with the clusters able to access memory via a high-bandwidth port or via a CHI port for coherent memory access. SiFive envisions the creation of systems incorporating no host CPU, or ones based on RISC-V, x86 or Arm.

The company is sampling its solutions now. SiFive will be at the RISC-V Summit North America, taking place Oct. 22-23, 2024 in Santa Clara, California. The company has 500 people.

"We've become the gold standard of RISC-V," Little said.
[3]
RISC-V guardian SiFive unveils new chip designs for low-powered AI at the edge - SiliconANGLE
Open-source semiconductor design company SiFive Inc. today unveiled its latest chip design blueprints, the SiFive Intelligence XM Series, saying these are the first based on its RISC-V architecture to include a highly scalable artificial intelligence matrix engine.

With the new blueprints, SiFive says, companies will be able to accelerate the development of RISC-V-based chips that are customized for AI workloads in data centers, autonomous machines and at the network edge.

SiFive was founded by the inventors of the RISC-V instruction set architecture in 2016, with the goal of commercializing and popularizing the alternative chip format. Instruction sets are a collection of technologies that can be used to build central processing units. They describe the computing operations that the millions of transistors on a chip should carry out.

The company is a rival to the better-known Arm Holdings Plc, which also builds designs for CPUs, mostly focused on mobile devices such as smartphones. In addition to its chip designs, SiFive also sells software for customers to design CPUs that implement them. The biggest difference between RISC-V and Arm is that the former's instruction set is entirely open-source, which means companies don't have to pay licensing fees to use it.

SiFive's architecture is used by numerous companies as the foundation of chips for AI workloads, internet of things gadgets and data center servers. The company cites a number of advantages for companies using its instruction sets, aside from the fact they're free. For instance, they provide greater flexibility in terms of being able to customize the designs for different workloads, it says.
The new Intelligence XM Series designs provide what SiFive says is an extremely scalable and efficient AI compute architecture that integrates scalar, vector and matrix engines, which are necessary to perform the enormous volume of calculations AI workloads demand. They also provide extremely high bandwidth, the company says, while maintaining the high performance its customers are used to.

For instance, the new designs contain four X-cores per cluster, with a single cluster able to deliver 16 tera operations per second of processing power per gigahertz. They also incorporate one terabyte per second of sustained memory bandwidth per cluster, accessible via a high-bandwidth port or via a CHI port for coherent memory access.

According to SiFive, the Intelligence XM chip blueprints are ideal for organizations that need to run AI on low-powered devices, such as IoT sensors, autonomous vehicles, robots and drones. Chief Executive Patrick Little said the new designs will enable companies to keep pace with the rapid evolution of AI while maintaining the unique benefits its open processor standard provides.

"We're already supplying our RISC-V solutions to five of the 'Magnificent 7' companies, and as companies pivot to a software-first design strategy, we are working on new AI solutions with companies from automotive to datacenter and the intelligent edge and IoT," he said.
SiFive, a leading RISC-V chip designer, has introduced a new AI accelerator chip design aimed at high-performance computing and edge AI applications. The new architecture promises improved efficiency and performance for AI workloads.
SiFive, a prominent player in the RISC-V chip design space, has unveiled a new AI accelerator chip design that promises to reshape the landscape of high-performance computing and edge AI applications [1]. This new architecture, built on the open-source RISC-V instruction set, aims to deliver superior efficiency and performance for increasingly demanding AI workloads.
The newly introduced design boasts specifications tailored for AI-intensive tasks. Each XM cluster combines four X-cores with a matrix engine and is expected to deliver 16 TOPS of INT8 or 8 teraFLOPS of BF16 compute per GHz, alongside 1TB/s of sustained memory bandwidth per cluster [2]. This performance per watt positions the design as a formidable contender in the rapidly evolving AI hardware market.
One of the key features of the new design is its scalability. Most chips based on it are expected to combine four to eight clusters, but the architecture can scale to as many as 512 XM clusters, catering to a wide spectrum of AI applications and device requirements [1].
SiFive's AI accelerator is poised to make significant inroads in various sectors. The chip design is particularly well-suited for edge AI applications, where power efficiency and performance are critical. Potential use cases include autonomous vehicles, robotics, smart cities, and industrial IoT devices [3].
The flexibility of the design also makes it attractive for data center applications, potentially challenging established players in the high-performance computing space. SiFive's solution offers a compelling alternative to proprietary architectures, leveraging the open-source nature of RISC-V to provide greater customization options for clients [1].
This announcement from SiFive is seen as a significant milestone for the RISC-V ecosystem. As the demand for specialized AI hardware continues to grow, SiFive's new chip design demonstrates the capability of the RISC-V architecture to compete with established players in the AI chip market [3].
The introduction of this high-performance AI accelerator is expected to further accelerate the adoption of RISC-V in various computing domains. It showcases the potential of open-source hardware design in driving innovation and addressing the evolving needs of the AI industry [1].
While SiFive says it is sampling its solutions now, the company has not announced specific timelines for broad commercial availability [2]. However, industry observers anticipate that this development will spur increased interest and investment in RISC-V-based AI solutions.
As the AI hardware landscape continues to evolve, SiFive's new chip design represents a significant step forward in the quest for more efficient and powerful AI computing solutions. It underscores the growing importance of specialized hardware in meeting the demands of next-generation AI applications across various industries.