Product shot showing the AI HAT+ 2 on a Pi 5.

A new version of the Raspberry Pi AI HAT+ has been released with enough power to run popular generative AI (GenAI) models, including Qwen and DeepSeek.

The first-generation Raspberry Pi AI HAT+ add-ons were designed for on-device acceleration of vision-based neural network models, but weren’t powerful enough to run more generalised large language models (LLMs) – something the new AI HAT+ 2 solves.

Described as the company’s “first AI product designed to fill the generative AI gap”, the Raspberry Pi AI HAT+ 2 packs in a Hailo-10H NPU with 40 TOPS (INT4) of inferencing performance and is paired with 8GB of dedicated on-board RAM.
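As a quick back-of-envelope check (my own arithmetic, not a figure from the announcement), that 8GB of dedicated RAM makes sense for models of this size – the weights of a 1.5-billion-parameter model quantised to INT4 take up well under a gigabyte:

```python
# Back-of-envelope arithmetic (mine, not from the announcement): how much
# memory do the weights of a 1.5B-parameter model need at INT4 precision?
params = 1.5e9                  # 1.5 billion parameters
bits_per_weight = 4             # INT4 quantisation
weight_bytes = params * bits_per_weight / 8
print(f"weights: {weight_bytes / 1e9:.2f} GB")  # ~0.75 GB

# That leaves most of the 8GB free for activations and the KV cache,
# which is why models in the ~1-2 billion parameter range are the
# launch targets.
```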

“Performing all AI processing locally and without a network connection, the AI HAT+ 2 operates reliably and with low latency, maintaining the privacy, security, and cost-efficiency of cloud-free AI computing that we introduced with the original AI HAT+”, says the company.

Vision-based models can be run on the new board too, with AI HAT+ 2’s computer vision performance described as “broadly equivalent to that of its 26-TOPS predecessor”. It retains tight integration with the native Pi camera software stack.
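For a rough idea of what that camera integration looks like from Python, here’s a minimal sketch using Picamera2 (the Pi’s standard camera library) to grab frames ready for an accelerator; the actual Hailo inference step is deliberately omitted, as that side of the API isn’t covered here:

```python
# Minimal sketch, assuming the Picamera2 library as the frame source.
# The NPU inference step is left out; this only shows the capture side.
from picamera2 import Picamera2

picam2 = Picamera2()
config = picam2.create_video_configuration(
    main={"size": (640, 640), "format": "RGB888"}
)
picam2.configure(config)
picam2.start()

frame = picam2.capture_array()  # numpy array, ready to hand to an accelerator
print(frame.shape)
picam2.stop()
```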

Below are the LLMs a Raspberry Pi 5 with the AI HAT+ 2 will be able to run at launch:

LLM                  Parameters
DeepSeek-R1-Distill  1.5 billion
Llama 3.2            1 billion
Qwen2.5-Coder        1.5 billion
Qwen2.5-Instruct     1.5 billion
Qwen2                1.5 billion

Newer and in some cases larger models will arrive soon, and the Pi community will almost certainly experiment with tuning and optimising variants of open source models to wring every bit of performance from the hardware for specific tasks.

In a video demo shared by Raspberry Pi, the Qwen2 model predicts cogent-sounding answers to a few simple questions. That is what LLMs do: they statistically guesstimate the most applicable response from training data (they’re not thinking, they’re calculating):

Video demo from Raspberry Pi
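To make that “calculating, not thinking” point concrete, here’s a toy sketch of the final step of next-token prediction – raw scores (logits) for a handful of candidate tokens, softmaxed into probabilities. Every number here is invented for illustration:

```python
# Toy illustration: an LLM's sampler converts raw token scores into a
# probability distribution and picks one. All numbers are made up.
import math

logits = {"Paris": 5.1, "London": 2.3, "banana": -1.0}

# softmax: turn raw scores into probabilities that sum to 1
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{tok}: {p:.3f}")
# The most probable token wins (or is sampled); no understanding involved.
```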

Other demos show coding tasks and language translation and, marginally more interesting as a visual demo, a vision language model (VLM) describing what it sees from a camera stream:

Another demo

But is it actually going to prove useful?

Set expectations to: purpose

To state the obvious: the Raspberry Pi AI HAT+ 2 will not rival the dominant cloud-based AI models. Some of those models boast as many as 2 trillion parameters, and harness colossal amounts of compute – not to mention electricity – at scale.

A rinky-dinky Pi won’t be a pocket-sized ChatGPT. That’s sort of the point.

Whether a smaller 1.5-billion-parameter model running on a Pi can be useful will, like a lot of Pi tasks, depend on the use case. Slop merchants looking to churn out ‘content’ for a quick buck have no need to rush out and buy one.

Real-world use for this is going to be super specialised, or “for fun” – and for the latter, it may not even be needed if benchmarking by ‘content creator’ Jeff Geerling is correct, as it shows LLMs running faster on the Pi 5’s CPU than on the Hailo-10H NPU.
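For anyone wanting to reproduce that kind of CPU-side number themselves, here’s a minimal sketch using llama-cpp-python, a common way to run quantised GGUF models on the Pi 5’s CPU. The model filename is an assumption – substitute whichever GGUF file you actually have:

```python
# Rough tokens-per-second measurement on the Pi 5's CPU using
# llama-cpp-python. The model path is a placeholder, not a real file.
import time
from llama_cpp import Llama

llm = Llama(model_path="qwen2-1.5b-instruct-q4_k_m.gguf",
            n_ctx=512, verbose=False)

prompt = "Explain what a Raspberry Pi is in one sentence."
start = time.time()
out = llm(prompt, max_tokens=64)
elapsed = time.time() - start

n_tokens = out["usage"]["completion_tokens"]
print(f"{n_tokens} tokens in {elapsed:.1f}s -> {n_tokens / elapsed:.1f} tok/s")
```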

Model limitations bring focus, and there will be tasks a Pi with this HAT can handle well enough to decouple from the cloud, APIs and monthly subscriptions – processing on-device offers privacy and security benefits that don’t show up in TOPS benchmarks.

How much does it cost?

The Raspberry Pi AI HAT+ 2 costs $130/£125 and is available to buy from approved Pi resellers from January 15, 2026. It’s not exactly impulse buy territory, costing more than twice the price of a mid-tier 4GB Raspberry Pi 5.

For vision-related tasks, the first-gen AI HAT+ models remain available and cost less than the new one: around £68 for the 13 TOPS version and around £105 for the 26 TOPS model. The original HAT is not designed for running GenAI tasks, mind.

Still, if you’re interested in running AI locally, for whatever reason, then this is the most capable option the Pi has had to date, even if more on paper than in practice. Try and find something useful to do with it – I can’t think of much…