
Apple researchers show how the company plans on running AI models on-device

From large language models running entirely on device to a next-generation Siri, here is how Apple plans to take on the likes of Microsoft and Google in the AI space

Harsh Shivam, New Delhi
Last Updated: May 06 2024 | 3:27 PM IST

Apple will be a late entrant into the artificial intelligence space when it reveals its next-generation operating systems for iPhones, iPads and Macs at its Worldwide Developers Conference (WWDC) on June 10. Bloomberg has reported that Apple is developing its own large language model (LLM) to power on-device generative AI features. But is it possible to run an entire AI model without any cloud-based processing? Apple researchers think it is.

Apple’s on-device AI

In a research paper titled “LLM in a flash: Efficient Large Language Model Inference with Limited Memory”, researchers from Apple detailed how the company plans to run large AI models on iPhones, iPads, MacBooks, and more. The paper describes storing the model’s parameters in the device’s flash storage, instead of the DRAM where they are usually held, and loading them into memory only as needed. According to a report by The Verge, researchers at Apple were able to run AI models faster and more efficiently using this method. It also allowed them to run LLMs up to twice the size of the available DRAM.
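The approach is easiest to picture as memory-mapping: the weights live in flash, and only the slices needed for the current computation are paged into RAM. Below is a minimal Python sketch of that pattern, not Apple’s implementation; the file name, layer dimensions and the choice of which rows count as “active” are all placeholders.

    import numpy as np

    ROWS, COLS = 8_192, 1_024  # hypothetical layer dimensions

    # One-time demo setup: write a zeroed weight matrix to "flash"
    # (here, an ordinary file on disk).
    np.memmap("layer0.bin", dtype=np.float16, mode="w+",
              shape=(ROWS, COLS)).flush()

    # Memory-map the weights read-only: the data stays on disk and is
    # paged into RAM only when touched, so DRAM never holds the full layer.
    weights = np.memmap("layer0.bin", dtype=np.float16, mode="r",
                        shape=(ROWS, COLS))

    def forward(x, active_rows):
        # Pull only the rows predicted to be active into RAM and compute
        # a partial matrix-vector product for those output neurons.
        w = np.asarray(weights[active_rows])  # shape (k, COLS)
        return w @ x                          # shape (k,)

    y = forward(np.ones(COLS, dtype=np.float16), np.array([0, 17, 4_095]))
    print(y.shape)  # (3,)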

Last month, researchers at Apple released the OpenELM AI models on the Hugging Face model library. OpenELM, which stands for “Open-source Efficient Language Models”, is a series of four small language models capable of running on devices such as phones and PCs. These models can handle text-related tasks such as text generation, summary writing, email writing, and more.
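Since the checkpoints are public, they can be tried directly. The following is a rough sketch of loading one with the Hugging Face transformers library; the model ID apple/OpenELM-270M-Instruct matches one of the published checkpoints, and the model card points to a Llama-2 tokenizer (which is gated and needs separate access), so treat the exact identifiers as subject to change.

    # Sketch: running an OpenELM checkpoint via Hugging Face transformers.
    # trust_remote_code=True is needed because the repo ships custom
    # modeling code; the Llama-2 tokenizer below is gated on the Hub.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model = AutoModelForCausalLM.from_pretrained(
        "apple/OpenELM-270M-Instruct", trust_remote_code=True)
    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

    inputs = tokenizer("Draft a short email declining a meeting:",
                       return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=60)
    print(tokenizer.decode(output[0], skip_special_tokens=True))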

Siri with AI

Apple is also reportedly planning to improve its virtual assistant, Siri, with AI backing. According to The Verge, Apple researchers have been working on a way to use Siri without a wake word. Instead of responding only to voice commands that start with “Hey Siri” or “Siri,” the virtual assistant could gain the ability to work out whether the user is talking to it or not.

In a research paper titled “Multichannel Voice Trigger Detection Based on Transform-Average-Concatenate”, researchers at Apple found that if the device does not discard ambient audio but instead feeds it to the AI model for processing, the model can learn to distinguish speech that matters from sound that does not.
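Transform-average-concatenate (TAC) is a known technique for fusing audio from several microphones without fixing the channel count in advance: each channel is transformed with shared weights, the results are averaged into a global summary, and that summary is concatenated back onto each channel. The sketch below illustrates only that general pattern, with placeholder dimensions and random weights; Apple’s actual trigger detector is far more elaborate.

    import numpy as np

    rng = np.random.default_rng(0)
    C, F, H = 3, 40, 64  # mic channels, input features, hidden size (placeholders)

    W_transform = rng.standard_normal((F, H)) * 0.01   # shared per-channel transform
    W_concat = rng.standard_normal((2 * H, H)) * 0.01  # fuses channel + average

    def tac_layer(x):
        """Transform-average-concatenate over one (channels, features) frame."""
        h = np.maximum(x @ W_transform, 0)        # transform each channel (shared weights)
        avg = np.repeat(h.mean(axis=0, keepdims=True), C, axis=0)  # average across channels
        fused = np.concatenate([h, avg], axis=1)  # concatenate per-channel + global view
        return np.maximum(fused @ W_concat, 0)    # output is channel-count agnostic

    frame = rng.standard_normal((C, F))  # one multichannel audio frame
    print(tac_layer(frame).shape)        # (3, 64)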

Once Siri is activated, Apple is working on ways to make the conversation more interactive. In another research paper, Apple researchers describe a model called STEER, which stands for Semantic Turn Extension-Expansion Recognition. The system uses an LLM to understand ambiguous queries, asking the user follow-up questions to get a better sense of their requirements. It is also said to be much better at recognising whether a query is a follow-up to a previous question or an entirely new prompt.
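Apple has published few implementation details for STEER, so the sketch below only illustrates the general routing pattern the paper describes: an LLM labels each new utterance as a follow-up or a fresh query, and follow-ups are expanded with earlier context. The classify argument is a hypothetical stand-in for any LLM call.

    # Hedged sketch of follow-up-vs-new-query routing; not Apple's STEER.
    PROMPT = """Previous user turn: "{prev}"
    New user turn: "{new}"
    Answer FOLLOW_UP if the new turn continues the previous request,
    or NEW_QUERY if it starts an unrelated one."""

    def route(prev, new, classify):
        label = classify(PROMPT.format(prev=prev, new=new)).strip()
        if label == "FOLLOW_UP":
            # Expand the turn with earlier context before answering.
            return f"{prev} -- follow-up: {new}"
        return new  # treat as a standalone query

    # Example with a trivial stand-in classifier:
    print(route("What's the weather in Delhi?",
                "And tomorrow?",
                lambda _: "FOLLOW_UP"))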


Topics: Apple, Apple iOS, artificial intelligence, Siri

First Published: May 06 2024 | 3:27 PM IST
