AI features are coming to Ubuntu in 2026, though Canonical has made clear that the distro is not becoming an AI product.

In a community post, Jon Seager, VP of engineering at Canonical, says the company is “ramping up its use of AI tools in a focused and principled manner” this year, with a bias toward local inference and open-weight models whose licence terms match Canonical’s values.

AI features in Ubuntu will take one of two forms.

Implicit features improve existing capabilities using on-device AI models, such as text-to-speech and speech-to-text to bolster accessibility.


Explicit features are new AI-powered additions: generative text when writing documents, agents for automated file management, and so on.

The features will rely on local models, for which Canonical has been laying groundwork via its inference snaps, which offer optimised/quantised models including Qwen and DeepSeek.
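Mechanically, these models ship like any other snap. A minimal sketch of what that workflow could look like, using a placeholder snap name (some-model) since final package names aren’t confirmed:

```bash
# Placeholder name for illustration; the real inference snaps will have their own names.
sudo snap install some-model

# Standard snap tooling still applies: see which interfaces the snap can use...
snap connections some-model

# ...and confirm the installed revision and channel.
snap list some-model
```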

Licence terms will decide which models end up being used in Ubuntu, not just whether the weights are open. Local inference requires moderately capable hardware, and smaller models are less capable than their frontier counterparts, but Seager expects the gap to close.

“What today seems like it’s only possible with access to a frontier AI factory will become significantly more accessible in the coming months and years,” he says.

Don’t expect a global “AI kill-switch” if you don’t want these features, as Seager says one would be ‘complex’ to implement ‘honestly’. But as Ubuntu’s AI features will be powered by local models installed as snaps, removing the relevant snaps would remove (or break) said AI features, as sketched below.
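In practice that opt-out looks like ordinary snap housekeeping; a quick sketch, again using a placeholder name:

```bash
# Remove the model snap (and with it any feature that depends on the model).
sudo snap remove some-model

# Or remove it and skip the data snapshot snapd would otherwise keep.
sudo snap remove --purge some-model
```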

Ubuntu to become agent-friendly

[Image: Ubuntu AI agent illustration. Caption: “A different kind of Ubuntu user agent”]

Canonical also plans to mould Ubuntu into a context-aware OS, integrating agentic workflows securely using Snap confinement guardrails.

“My aim is for Ubuntu to expose the primitives needed for agents to operate within existing boundaries, whether that be read-only analysis, tightly scoped permissions for any actions, and full auditability of decisions and outcomes”, Seager says.
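Snap’s existing interface tooling already offers levers along these lines. A hedged sketch, using a hypothetical agent snap called file-agent, of how permissions could be audited and scoped today:

```bash
# Hypothetical agent snap name, purely for illustration.
snap connections file-agent        # audit which interfaces are connected

# Revoke a capability the agent shouldn't hold...
sudo snap disconnect file-agent:removable-media

# ...or grant a tightly scoped one only when a task requires it.
sudo snap connect file-agent:home
```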

Internally, Canonical’s engineering teams will be incentivised to “understand where AI tools add value” rather than judged (cough, Mozilla) on how much AI they use and add, thus swerving the efficiency drop that comes with workslop.

“Using AI for its own sake is not a constructive goal for anything but increasing exposure, and it rarely yields good results in production code”, he writes, but notes that when used “where it’s well-optimised, and in ways that can be controlled and reviewed, it can be highly effective”.

Will AI be taking jobs from people at Canonical? No, but Seager says that an engineer better skilled at using AI tools ‘certainly could’, a sign that AI integration is not a casual flirtation but a more serious long-term commitment.

Cautious & considered is commendable

AI is not universally popular, and “AI or die” is not inevitable, either. The constant din about how ‘AI will end the world and you’re stupid if you’re worried’ coming from the mouths of dystopian-fantasist tech bros has a lot to do with the former, while the latter ignores utility.

Take the endless push of AI features and capabilities into all places, regardless of fit. AI is positioned as akin to the Industrial Revolution, where to complain is to be a luddite1 – but there weren’t steam-powered hairdryers, and looms didn’t replace hobbyist knitting needles in homes.

Souring attitudes to AI are multifaceted, and there’s no doubt that the barrage of banal Clippy-esque “hey, it looks like you’re having fun – do you want me to have fun for you instead?” nags suffused through modern tech is fuelling resentment.

To ask “Why can’t I just do this myself?” doesn’t make you a luddite; it makes you human.

Ordering a birthday card online this week, I was immediately prompted to use the on-site AI to write the message inside. It would ‘save me time’, I was told – but for whom, and to do what with? Reinvest into further cognitive dependency on a machine, I suppose.

So the (inevitable) trepidation that Ubuntu might adopt a similar AI-all-the-things approach2 is, alas, understandable given the context of where we’re all sat. One need only look at Windows 11 for proof of how well AI-stuffing goes with users.

The same is true of developers using AI to produce more code, faster, as a solution to… well, to making some CEO’s bar charts look healthy. Andrew Murphy mused recently that those who view the speed of producing code as a bottleneck have bigger problems to face up to.

Canonical’s stance is (if light on specifics) reassuring: measured and grounded, with no hint the distro will use background LLMs to power gimmicky engagement-goosing features under the upsell of ‘optimising’ us as if we’re payroll’d software processes.

We’ll begin to learn more of the hard detail over the coming months, and be able to go hands-on with a preview of these bold new intentions in October’s release of Ubuntu 26.10 ‘Stonking Stingray’.

  1. Historical trivia: luddites weren’t anti-technology, they were anti-not-having-a-job. ↩︎
  2. AI is, ultimately, a hammer and not every problem is a nail. The AI industry is in the business of renting hammers so would have you believe the pain you feel when hitting your own thumb with their hammer makes your thumb a luddite resisting change… ↩︎