Apple Intelligence, the iPhone maker’s new set of AI capabilities arriving in iOS 18, is laying the groundwork for a new way to use apps.
Today, the dated App Store model is under constant regulatory attack. Meanwhile, users can accomplish a lot of tasks with fairly simple questions to an AI assistant like ChatGPT. Proponents believe AI could become the preferred way we’ll search for answers, be productive at work, and experiment with creativity.
Where does that leave the world of apps, and the growing services revenue (more than $6 billion last quarter) they generate for Apple?
The answer cuts to the core of Apple’s AI strategy.
Out of the box, Apple Intelligence itself offers only a small set of capabilities, like writing helpers, summarization tools, generative art, and other baseline features.
But at its Worldwide Developers Conference (WWDC) in June, Apple presented new features that will allow developers’ apps to connect more deeply with both Siri and Apple Intelligence.
Improvements to the smart assistant will allow Siri to invoke any item from an app’s menu without additional work on a developer’s part. That means users could ask Siri to “show me my presenter notes” in a slide deck, for instance, and Siri would know what to do. Siri will also be able to access any text displayed onscreen, allowing users to reference and act on what’s in front of them.
So, if you were looking at your reminder to wish a family member a “happy birthday,” you could say something like “FaceTime him” and Siri would know what action to take.
That’s already an upgrade from the basic functionality today’s Siri offers, but it doesn’t end there. Apple is also providing developers with tools to use Apple Intelligence in their own apps. At WWDC, the company indicated that Apple Intelligence would first be made available to certain categories of apps, including Books, Browsers, Cameras, Document Readers, File Management, Journals, Mail, Photos, Presentations, Spreadsheets, Whiteboards, and Word Processors. Over time, Apple is likely to open up these capabilities to all developers across the App Store.
The AI functionality will be built on top of the App Intents framework, which is being expanded with new intents for developers. The eventual goal is to allow users to interact with Siri not just to open their apps, but to take actions from within them.
That means a user wouldn’t have to dig around in an app’s menus to find the feature they needed to perform a task. They could just ask Siri.
Users could also make these requests while speaking naturally — conversationally — and could reference things that relate to their personal context.
So, for instance, you could ask a photo-editing app like Darkroom to “apply a cinematic preset to the photo I took of Ian yesterday.” Today’s version of Siri would balk at this sort of request, but the AI-powered Siri would instead know to leverage the app’s Apply Filter intent, as well as which photo you’re asking to use it on.
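To get a sense of what developers would actually ship, here is a minimal sketch of an intent like that, built on Apple’s real App Intents framework. The intent, entity, and helper names (`ApplyFilterIntent`, `FilterEntity`, `PhotoEntity`, `PhotoEditor`) are hypothetical stand-ins, not Darkroom’s actual API:

```swift
import AppIntents

// Hypothetical "Apply Filter" intent a photo-editing app might expose.
// Siri resolves the spoken filter and photo into the two parameters
// below, then calls perform().
struct ApplyFilterIntent: AppIntent {
    static var title: LocalizedStringResource = "Apply Filter"

    @Parameter(title: "Filter")
    var filter: FilterEntity   // hypothetical app entity for a preset

    @Parameter(title: "Photo")
    var photo: PhotoEntity     // hypothetical app entity for a photo

    func perform() async throws -> some IntentResult {
        // The app's own editing pipeline would run here.
        try await PhotoEditor.shared.apply(filter, to: photo)
        return .result()
    }
}
```

Because the intent declares typed parameters rather than parsing free text itself, it’s Siri and Apple Intelligence that map a messy spoken request onto the right filter and photo before the app’s code ever runs.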
Siri will be able to take action even if you stumble over your words or reference an earlier part of the conversation in your instructions, Apple has said.
You could also take action across apps. For example, after editing your photo, you could ask Siri to move it into another app, like Notes, without having to tap on anything.
In addition, the iPhone’s search feature, Spotlight, will be able to search data from apps by incorporating app entities into its index. App entities are the structured representations Apple Intelligence uses to understand things like photos, messages, files, calendar events, and more.
This subtler use case for AI, of course, requires developer adoption. Apple has over the years alienated some of its larger developers and even some of its indies with its revenue-sharing rules, which generally allow the company to keep 30% of revenues for products and services sold through any app. But developers could be drawn back in as Siri takes apps that were previously hidden in a back-of-the-phone App Library and makes them easily accessible through voice commands.
Instead of boring onboarding screens to train users on how to navigate and use their app, developers could instead focus on making sure Siri understands how their app works, and how users might ask for the things they want to do in it. That way, users could engage with the app via Siri, either by speaking or typing commands, much as they engage with an AI chatbot like ChatGPT today.
Third-party developers will gain other benefits from Apple’s new AI architecture, too.
With its OpenAI partnership, Siri will be able to hand off queries to ChatGPT when it doesn’t have the answer. With its visual search feature on the iPhone 16 lineup, Apple will also allow users to access OpenAI’s chatbot or Google Search just by tapping on the new Camera Control button on the side, turning what they’re seeing through the camera’s viewfinder into an actionable query.
These developments won’t feel as immediately revolutionary as the introduction of something like ChatGPT did because the rate of developer adoption will likely vary.
Moreover, these future promises seem like they’re still a ways out. In the latest iOS 18 betas, the functionality feels incomplete. As often as I was surprised by what the new Siri can do, I was just as often confused by those things it can’t. That includes within Apple’s own apps. For instance, you can ask Siri in the Photos app to send a photo you’re viewing to someone, but you can’t ask it to do something more complex, like turn the photo into a sticker. Until Siri stops hitting these kinds of roadblocks, the functionality may end up feeling frustrating to use.