Apple's WWDC showcases AI to make daily tasks easier
So, I guess we’ve been getting it all wrong. It’s not Artificial Intelligence. AI really means Apple Intelligence. Or, at least, that’s what a lot of people will be saying after Apple announced its own generative AI features for upcoming versions of the iPhone’s iOS operating system and MacOS at the company’s annual Worldwide Developers Conference (WWDC).
Apple Intelligence consists of a whole range of new GenAI-powered “intelligent” features such as a more powerful and accurate version of Siri that integrates support for OpenAI’s ChatGPT. It also offers text creation and summarization, “smart” photo editing, and other changes that make your devices feel more intuitive. More importantly, these features are going to make common tasks that we all do multiple times every day easier and more efficient.
Starting in the fall with the release of iOS and iPadOS 18 and MacOS 15 (also called MacOS Sequoia), you’ll have the option to do all those cool new things. One important gotcha, however, is that you’ll need an iPhone 15 Pro or later model, or an M-series processor-equipped Mac or iPad, to use these new capabilities. If not, you’re out of luck – until you upgrade to a new device.
Among the new Apple Intelligence features are text-based functions that quickly summarize websites, documents, emails and text threads into small, easy-to-read blocks of text. You can compose or revise text too.
Photo cleanup
In addition, with the new Clean Up feature in Photos, you’ll find it significantly easier to do things like remove extraneous people or objects from crowded vacation photos. Some of these functions have been available in professional photo editing software for a while, but building them straight into the Photos app and making the process much more fluid is indicative of what Apple Intelligence is going to enable.
These capabilities and more are being powered by a series of sophisticated new algorithms – technically called large language models, or LLMs – that can learn from existing data, such as text or images. These LLMs then apply that learning to create new content – hence the name generative AI.
Even better, these algorithms can learn from you – such as the way you write – and then use that to personalize the content they generate. Some of these algorithms reside in and do their work within your iPhone or Mac – which is one of the benefits of what’s called on-device AI – while others require the computing power of the cloud.
Beyond just creating new content, these new algorithms provide a level of intelligence that can make your device feel “smarter” and more personal. Practically speaking, this means that the dramatically updated Siri should finally understand what you mean when you make a request and then respond accurately, instead of just hearing the words you say and either responding incorrectly, incompletely, or not at all.
In addition, these new Apple Intelligence features for Siri extend into many apps on the phone and basically give you voice control over your apps. Want to set a timer in your Camera app and put it in Portrait Mode without diving through menu settings? Just ask the new Siri to do it.
Siri, meet ChatGPT
The most surprising addition to Siri was the integration of OpenAI’s ChatGPT. While it does offer important new capabilities, it’s an unusual move for a company like Apple, which has historically wanted to own and completely control the applications and experiences on its devices.
Because of the computing power required to enable some of these experiences, Apple is also using cloud-based computing resources, which it’s calling Private Cloud Compute, to bring some of them to life.
The concern with doing this – as Apple has noted in the past – is the need for personal data to be sent to the cloud, which some view as a potential privacy issue. Apple made it clear, however, that it is using its own data centers for these Private Cloud Compute efforts and will not look at any of this data or allow it to be associated with any individual.
Apple lags behind Android
In truth, Apple is late to the GenAI game, as many similar capabilities have been available on Android phones powered by Qualcomm Snapdragon 8 Gen 3 chips for some time now. Samsung, for example, debuted text summarization and Live Translate features back in January at the launch of its Galaxy S24 smartphone. The latter automatically translates either spoken or written text from one language to another. Similarly, Google has offered the kinds of smart image editing features Apple just announced on its Pixel phones for over a year now.
Still, in the U.S. market, iPhones are the No. 1 choice, so whenever Apple brings new capabilities to that device, it ends up being the first exposure many have to a particular feature or technology. That’s why Apple’s plans to bring GenAI features to iPhones and Macs are so important – finally, average consumers and a majority of the market will start to get a feel for how amazing generative AI can be.
Features beyond AI
But not everything new from Apple is GenAI-related. The company also took the wraps off new abilities to customize your home screen, rearrange settings, organize your passwords and more.
Most importantly, Apple announced support for something called RCS (Rich Communication Services), which will finally make texting photos and videos between iPhones and Android phones much better (though the green bubbles will stay…sorry). Plus, unlike the new GenAI features, these new capabilities will work on virtually all existing iPhones – not just the latest models – when iOS 18 becomes available.
Still, the main news at WWDC was Apple's recognition of the power of GenAI. Not everyone will buy into it initially, but there is little doubt that there’s still much more of it to come.
USA TODAY columnist Bob O'Donnell is the president and chief analyst of TECHnalysis Research, a market research and consulting firm. You can follow him on Twitter @bobodtech.