2 Sources
[1]
Raspberry Pi AI HAT+ 2 Review: The brains and the brawn
Raspberry Pi's first product of 2026 is an update of the 2024 AI HAT+, but this newer version, another collaboration with Hailo, now sees the Hailo 10H AI chip running the show, along with 8GB of onboard RAM. The new AI HAT+ 2 takes the strain of AI workloads away from the Raspberry Pi 5's Arm CPU, but this all comes at a price of $130. With your Raspberry Pi already costing much more than the original $35 -- of course, the spec has vastly improved over the years -- you could already be hitting the $200 mark for just a Pi and an AI HAT+ 2. Does the performance warrant the price? There's only one way to find out!

The retail box follows the same design language as the many other Raspberry Pi product boxes that I have opened. At a glance, you'd be forgiven for thinking that this was the same Raspberry Pi AI HAT+ as released previously, and opening the box doesn't help, as the boards are very similar. The new AI HAT+ 2 requires the included heatsink. Yes, this heatsink is for the HAT, not the Raspberry Pi 5. Your Pi 5 will also need cooling, and the official Raspberry Pi and Argon low-profile coolers will fit under the HAT. The included plastic standoffs and GPIO header extension work, but the GPIO connection is a little too loose for my liking. The resulting GPIO connections, using DuPont-style connectors, also feel a little loose.

Connecting the board to the Raspberry Pi 5 is simple. Just unlock the PCIe connection on the Pi 5, push in the ribbon cable, lock it down, and then secure the board to the standoffs and GPIO. There is a cut-out for connecting the official Raspberry Pi Camera and an official display. Connect up your keyboard, mouse, HDMI, Ethernet, and finally power, then boot to the Raspberry Pi desktop, remembering of course to enable PCIe Gen 3 via "raspi-config."
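Enabling PCIe Gen 3 can be done from raspi-config's Advanced Options, or by adding a single line to the boot configuration (on current Raspberry Pi OS images the file lives at /boot/firmware/config.txt); a reboot is required either way:

```
# /boot/firmware/config.txt -- force the PCIe link to Gen 3 speeds
dtparam=pciex1_gen=3
```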
We're using the latest Debian "Trixie"-based image and have a custom installation process, as our review unit predates the official software repositories. The end-user software experience will be streamlined for release. Using the provided installation instructions, we ran hailo-ollama and then queried the available and compatible models for the Hailo 10H powering the kit. In our pre-release software, models are loaded using hailo-ollama via a carefully crafted curl command; just change the "model" to one of the five available.

The 8GB of onboard DDR4X RAM means that larger models will generally work better, as the Raspberry Pi's own RAM is untouched. Models up to 8GB should load without incident, even on Raspberry Pi 5s with less than 8GB. Technically, this opens up cheaper AI projects. You still need to pay $130 for the AI HAT+ 2, but a $50 1GB, $55 2GB, or $77 4GB Raspberry Pi 5 is now a viable AI platform, negating the need to buy a $105 8GB or the frankly frightening $160 16GB Raspberry Pi 5.

So why the new board? That is down to Large Language Models (LLMs): AI models trained on huge amounts of text data and used to understand, process, and respond to human language. The AI HAT+ 2 is mainly aimed at LLMs, whereas the older AI HAT+ is for image-based AI projects. The AI HAT+ 2 demo code supplied by Raspberry Pi leans heavily on creating our own local LLM using qwen2:1.5b, but you can also use DeepSeek or Qwen models distilled via DeepSeek. The onboard 8GB of RAM and powerful AI processing chip take the strain off the Raspberry Pi 5's CPU and RAM. We can also use that power for image processing. If you've not got the original AI HAT+, then having good image processing and a viable LLM platform makes the $130 price tag easier to swallow. The two boards may look similar, but they don't work in the same way.
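The carefully crafted curl command follows the shape of the standard ollama REST API. A minimal Python sketch of the same request is below; note that the port (11434, ollama's default) and the /api/generate route are assumptions for illustration, and may differ on hailo-ollama's shipping release:

```python
# Sketch of the model query step. The route and default port follow the
# standard ollama REST API; whether hailo-ollama exposes exactly this
# endpoint on this port is an assumption for illustration.
import json
import urllib.request

def build_generate_request(model: str, prompt: str,
                           host: str = "http://localhost:11434") -> urllib.request.Request:
    """Build a POST request for an ollama-compatible /api/generate endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        f"{host}/api/generate",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Swap "qwen2:1.5b" for any of the five Hailo-compatible models.
req = build_generate_request("qwen2:1.5b", "What is Tom's Hardware?")
# urllib.request.urlopen(req) would send it once the server is running.
```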
The AI HAT+ was all about image-based AI processing, and the 26 TOPS of its onboard Hailo-8 (13 TOPS for the cheaper Hailo-8L model) is very similar in performance to the AI HAT+ 2. Using the rpicam-hello suite of examples, we tested compatibility with the AI HAT+ 2, and we are pleased to say that object identification and pose detection worked as expected. While we have no metrics to show the difference, the performance of the AI HAT+ 2 was smooth, and there were no camera issues. Where there were issues, they were largely software-based. "HailoRT not ready!" errors show that Hailo's software is playing catch-up to the hardware, which has happened before. Hailo, Raspberry Pi, and the community will ultimately close this gap, but early adopters need to be aware.

Because we cannot directly compare the AI HAT+ 2 against the AI HAT+, we fall back to testing the AI HAT+ 2 versus the Arm CPU of the Raspberry Pi 5. For this test, we used the qwen2:1.5b language model and posed some general knowledge questions, then specific programming tasks using Python. Installation of qwen2:1.5b on the CPU was made via ollama.

The first question was "What is Tom's Hardware?" One response read: "Tom's Hardware is a hardware retailer with stores in the United States and Canada. The company sells various types of tools, equipment, and supplies for home repairs or construction projects." The other: "Tom's Guide is a popular technology website founded in 2004. It was originally focused on computer hardware and reviews, but has since expanded its coverage to include consumer electronics, software, and other topics related to technology. The site is known for publishing detailed product reviews that are valuable resources for consumers looking to make informed purchasing decisions." The next request: "Write a Python script to display an image on the screen. The image file is supplied by asking the user for input and they will specify the absolute path to the image."
The AI HAT+ 2 did a decent job of writing quick and concise code, but it was doomed to failure: it never called an application to open the image, instead reading the image file and then closing it.

AI HAT+ 2 Response

The Arm CPU response looked decent, but on closer inspection it was full of errors around the imported tkinter module, including calling "Image.open" when no Image module had been imported. This vibe coding experience produced a long-winded response, but ultimately it was way off the mark and not something that we would rely on as a coding co-pilot or a sounding board for a project.

In our tests, the AI HAT+ 2 was faster than the Raspberry Pi 5's Arm CPU, but more importantly, it ran the model without hogging the CPU. This is great for those who want to integrate AI into a GPIO-based project, like robotics. That said, the model produced inaccurate results. In the case of the coding exercise, the code would appear valid to a layman, but it was completely incorrect. If you are looking to run an LLM on a Pi, then try the Hailo-compatible models and see which one meets your needs. But be warned: the knowledge on which these models have been trained is now outdated, and in our limited testing time, we only saw incorrect responses.

So who is the AI HAT+ 2 for? Obviously, someone who wants to use AI on a Raspberry Pi, but what type of AI? Offloading the workload from the Arm CPU to the Hailo 10H frees up the CPU for other tasks, such as running a chat server, controlling a robot, or reacting to sensors. So those of us who like to build smart GPIO projects will have a field day with the AI HAT+ 2. If you are just interested in image or vision-based AI projects, the older AI HAT+, Raspberry Pi AI Camera, or the original M.2 AI Kit are all cheaper, viable options. If you already have any of these products, stick with them, as right now the AI HAT+ 2 is more money for little to no performance boost.
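For reference, the prompt both models fumbled needs only a few lines of standard-library Python. This is a sketch of ours, not the output of either model, using tkinter's PhotoImage (which handles PNG and GIF files):

```python
# Ask the user for an absolute path to an image and display it on screen.
# Sketch only -- not either model's response -- using the standard library.
import os
import sys

def validate_image_path(path: str) -> str:
    """Return the path if it is absolute and points to an existing file."""
    if not os.path.isabs(path):
        raise ValueError(f"not an absolute path: {path!r}")
    if not os.path.isfile(path):
        raise FileNotFoundError(path)
    return path

def show_image(path: str) -> None:
    """Open a window displaying the image at the given path."""
    import tkinter as tk  # imported here so the path helper works headless
    root = tk.Tk()
    root.title(os.path.basename(path))
    img = tk.PhotoImage(file=path)
    label = tk.Label(root, image=img)
    label.image = img  # keep a reference so the image isn't garbage-collected
    label.pack()
    root.mainloop()

if __name__ == "__main__" and sys.stdin.isatty():
    show_image(validate_image_path(input("Absolute path to the image: ").strip()))
```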
If you haven't got any AI HATs or want to dabble with LLMs, then the AI HAT+ 2 is a viable, if currently flawed, option. Personally, we would run LLMs on the Raspberry Pi 5's Arm CPU until we have the knowledge and use case to warrant purchasing the AI HAT+ 2. AI is the buzzword that isn't going away, and Raspberry Pi's adoption of AI into its product range is an interesting, if polarizing, decision. The AI HAT+ 2 continues the progression of more powerful AI platforms, and for the right type of maker, it will be a considered choice. One day. Right now, this is a solution looking for a problem. We're sure that the bugs will be worked out, but early adopters will be left wanting more.

For many who just want to dabble with AI on their Raspberry Pi 5, smaller models that your RAM can accommodate, or an online service, will do the job. For computer vision and image inference projects, you will get similar performance from a cheaper product with the older AI HAT+ or the Raspberry Pi AI Camera. The AI Camera is a cheap entry point for learners. For those who want a local LLM in a compact and power-efficient package, the Raspberry Pi AI HAT+ 2 is something that you should research after learning the skills and developing the project that it can support. That will also give the software time to mature, and make sure that your wallet is ready.
[2]
Raspberry Pi 5 gets LLM smarts with AI HAT+ 2
40 TOPS of inference grunt, 8 GB onboard memory, and the nagging question: who exactly needs this?

Raspberry Pi has launched the AI HAT+ 2 with 8 GB of onboard RAM and the Hailo-10H neural network accelerator, aimed at local AI computing. On paper, the specifications look great. The AI HAT+ 2 delivers 40 TOPS (INT4) of inference performance. The Hailo-10H silicon is designed to accelerate large language models (LLMs), vision language models (VLMs), and "other generative AI applications." Computer vision performance is roughly on a par with the 26 TOPS (INT4) of the previous AI HAT+ model. These components and 8 GB of onboard RAM should take a load off the hosting Pi, so if you need an AI coprocessor, you don't need to blow through the Pi's memory (although more on that later).

The hardware plugs into the Pi's GPIO connector (we used an 8 GB Pi 5 to try it out) and communicates via the computer's PCIe interface, just like its predecessor. It comes with an "optional" passive heatsink - you'll certainly need some cooling solution, since the chips run hot. There are also spacers and screws to fit the board to a Raspberry Pi 5 with the company's active cooler installed.

Running it is a simple case of grabbing a fresh copy of Raspberry Pi OS and installing the necessary software components, after which the AI hardware is natively supported by applications. In use, it worked well. We used a combination of Docker and the hailo-ollama server, running the Qwen2 model, and encountered no issues running locally on the Pi. However, while 8 GB of onboard RAM makes for a nice headline feature, it seems a little weedy considering the voracious appetite AI applications have for memory. In addition, it is possible to specify a Pi 5 with 16 GB RAM, for a price. And then there's the computer vision, which is broadly the same 26 TOPS (INT4) as the earlier AI HAT+.
For users with vision processing use cases, it's hard to recommend the $130 AI HAT+ 2 over the existing AI HAT+ or even the $70 AI camera. Where LLM workloads are needed, the RAM on the AI HAT+ 2 board will ease the load (although simply buying a Pi with more memory is an option worth exploring).

According to Raspberry Pi, DeepSeek-R1-Distill, Llama3.2, Qwen2.5-Coder, Qwen2.5-Instruct, and Qwen2 will be available at launch. All (except Llama3.2) are 1.5-billion-parameter models, and the company said there will be larger models in future updates. The size compares poorly with what the cloud giants are running (Raspberry Pi admits "cloud-based LLMs from OpenAI, Meta, and Anthropic range from 500 billion to 2 trillion parameters"). Still, given the device's edge-based ambitions, the models work well within the hardware constraints.

This brings us to the question of who this hardware is for. Industry use cases that require only computer vision can get by with the previous 26 TOPS AI HAT+. However, for tasks that require an LLM or other generative AI functionality but need to keep processing local, the AI HAT+ 2 may be worth considering. ®
Raspberry Pi unveils the AI HAT+ 2, a $130 add-on board featuring the Hailo-10H neural network accelerator with 8GB onboard RAM and 40 TOPS of inference performance. Designed for Large Language Models and generative AI applications, it transforms the Raspberry Pi 5 into a local AI platform, but questions remain about its value proposition for edge computing projects.

Raspberry Pi has released the AI HAT+ 2, marking a shift toward local Large Language Model processing on single-board computers. This $130 add-on board, developed in collaboration with Hailo, features the Hailo-10H AI chip and 8GB of onboard DDR4X RAM, delivering 40 TOPS of inference performance for INT4 operations [1][2]. The Raspberry Pi AI HAT+ 2 connects to the Raspberry Pi 5 via the PCIe interface and GPIO connector, offloading AI workloads from the host computer's Arm CPU and memory resources.

The hardware arrives with an included passive heatsink specifically for the HAT itself, though users will need separate cooling for their Raspberry Pi 5. Compatible low-profile coolers from Raspberry Pi and Argon fit beneath the board, though some reviewers noted that the included GPIO header extension feels somewhat loose during connections [1]. Setup requires enabling PCIe Gen 3 through raspi-config and installing supporting software, with the board running on the latest Debian Trixie-based image.

The Hailo-10H neural network accelerator at the heart of the AI HAT+ 2 is specifically engineered to accelerate generative AI workloads, including Large Language Models and vision language models (VLMs) [2]. This represents a strategic pivot from the original AI HAT+, which focused primarily on image-based AI processing with its Hailo-8 chip delivering 26 TOPS. The new board maintains similar computer vision performance at 26 TOPS for INT4 operations, meaning it can handle both LLM tasks and traditional object identification and pose detection [1].

The 8GB of onboard memory proves critical for running larger models without tapping into the host Raspberry Pi's RAM. This architecture allows models up to 8GB to load smoothly, even on lower-spec Raspberry Pi 5 units. A $50 1GB, $55 2GB, or $77 4GB Raspberry Pi 5 can now function as a viable AI platform when paired with the AI HAT+ 2, avoiding the need for the $105 8GB or $160 16GB Raspberry Pi 5 models [1]. This opens possibilities for more cost-effective edge AI processing projects, though the combined cost still approaches $200.

The AI HAT+ 2 works with hailo-ollama and supports several language models at launch, including DeepSeek-R1-Distill, Llama3.2, Qwen2.5-Coder, Qwen2.5-Instruct, and Qwen2 [2]. Most of these are 1.5-billion-parameter models, with Raspberry Pi promising larger models in future updates. The demo code leans heavily on creating local LLMs using qwen2:1.5b, with compatibility for DeepSeek and Qwen models distilled via DeepSeek [1].

Testing with Docker and the hailo-ollama server running the Qwen2 model showed smooth performance with no issues during local operation. The board functioned as an effective AI coprocessor, handling general knowledge queries and specific programming tasks using Python [2]. However, early adopters encountered some software limitations, with "HailoRT not ready!" errors indicating that Hailo's software is still catching up to the hardware capabilities [1].

The parameter count of available models compares poorly with cloud-based LLMs from OpenAI, Meta, and Anthropic, which range from 500 billion to 2 trillion parameters [2]. Yet for edge-based applications requiring local processing, these models work effectively within hardware constraints. The real question centers on who needs this specific configuration. Users focused solely on computer vision might find better value in the previous 26 TOPS AI HAT+ or even the $70 AI camera [2].

The 8GB onboard RAM, while impressive as a headline feature, may prove limiting given AI applications' appetite for memory. Simply purchasing a Raspberry Pi 5 with 16GB RAM presents an alternative worth exploring for some use cases [2]. The AI HAT+ 2 makes the most sense for industry applications requiring both LLM functionality and local processing for privacy, security, or connectivity reasons. As software support matures and larger models become available, the board's capabilities should expand, making it a platform to watch for developers building private, local AI solutions using ollama and similar frameworks.

Summarized by Navi