Apple’s in-house large language model (LLM), which will likely power a future version of the virtual assistant Siri and make it sound more life-like, will reportedly debut in 2026. According to a Bloomberg report, Apple’s Siri will not get advanced conversational features like those of Google’s Gemini and OpenAI’s ChatGPT until spring 2026. However, the company might preview it at the Worldwide Developers Conference (WWDC) next year. Additionally, many new features that were initially planned for the next iOS version have also reportedly been postponed.
While Siri is said to be getting advanced features such as more control over third-party apps and contextual awareness with access to a user’s data, it will still run on the same model that powers the current version. However, that could change with Apple’s own AI model, which is reportedly under development and known internally as “LLM Siri”.
The report stated that, while the integration of OpenAI’s ChatGPT into iOS and Siri will add more capabilities, Apple does not wish to rely on partners for a “fundamental piece of technology” in the long run. It also said that Apple has already made progress on a revamped Siri and is “actively running and testing” it internally. However, it is not scheduled to be released until spring 2026. The report also stated that a “larger-than-usual” number of new features expected to debut with iOS 19 have been postponed until spring, likely for the iOS 19.4 update.
If true, this would be similar to this year’s rollout plan, in which highlight features such as advanced Apple Intelligence tools were incorporated into the system through subsequent iOS 18 updates. Some of these features, such as image generation tools, ChatGPT integration and on-screen awareness for Siri, are yet to arrive.