You decide – is ‘Apple Intelligence’ safe and secure?

Apple Intelligence is billed as an AI assistant designed to make daily admin effortless. That means it can sift through our private messages and emails. Apple claims the system is completely secure, but are you confident enough to run with it?

Having bided its time, Apple is finally throwing its hat into the commercial AI ring.

Being fashionably late (as much as two years) is undoubtedly a risk, but there are significant upsides – chiefly, capitalising on the perceived shortcomings of rivals.

One of the chief concerns surrounding tech's newest obsession is privacy and security. How can you guarantee people their sensitive information is safe, when the processes of AI are ambiguous at best and its capabilities are constantly changing?

Only now are laws coming into effect about what companies are allowed to scrape from the web in order to train their deep learning models, for instance.

The industry’s safeguards, which are still generally perceived as lax, are hard to enforce effectively due to the breakneck pace at which AI is developing and the sheer number of companies building their own platforms.

In typically grandiose fashion, however, Apple claimed at last week’s WWDC that it has cracked the issue of privacy – despite offering arguably the most invasive AI application yet.


What is Apple Intelligence?

Apple’s vision for generative AI is to create an omnipresent assistant that streamlines the daily admin of anyone who owns its devices. Siri, its ubiquitous voice assistant, has also been supercharged.

Its showcase highlighted ‘email assistance’ as a key feature, in which AI provided smart suggestions, automated sorting, and personalised responses based on the context and history of an email recipient. It was undeniably impressive.

If someone sent an email with a specific query, the AI would search through previous emails and instantly pick out the answer, all while allowing for the seamless creation of reminders and meetings as follow-up actions.

The actual content of emails, meanwhile, could be entirely generated or amended for tone and grammar in a matter of seconds.

Underpinning the entire product, ‘contextual awareness’ is supposedly able to understand who our contacts are across all of our apps.

That means, say, should you want to know what time your mother’s plane is due to land, Siri would search your messages, emails, WhatsApp, Insta DMs and other correspondence, and quickly surface the answer in a custom notification.

On a dull bus journey, you could say or type ‘Play that podcast Jess was talking about’ and it would perform the necessary steps to check your messages and play the media from a third-party app.

You can also generate custom emojis and graphics that capture the likeness of the person you’re messaging in sketch or cartoon form.

If, like me, your brain isn’t naturally geared for organisation, Apple Intelligence sounds dreamy in a practical sense. But, given the security concerns surrounding AI, is there a major drawback or privacy shit storm in the offing?


Inevitable public concern

Apple would have you believe not.

In fact, a point constantly reiterated during the showcase is that Apple is keeping the flow of user data strictly ‘on-device.’

We’ve become accustomed to our user data being a commodity for companies, but Apple insists that no trace of Apple Intelligence activity will end up on third-party servers. ‘It [Apple] is aware of your data without collecting your personal data,’ said Craig Federighi, the company’s senior VP of software engineering.

When complicated user requests do leave the confines of the device to make use of large language models, they are handled by Apple’s Private Cloud Compute service, which reportedly doesn’t store data or make it accessible to Apple.
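The routing idea is easy to picture in code. Below is a minimal, purely illustrative sketch – not Apple’s actual implementation or API, and every type and name in it is invented – showing the decision described above: keep a request on-device when the local model can handle it, and escalate it to a stateless cloud service only when it can’t.

```swift
import Foundation

// Hypothetical sketch only: none of these types exist in Apple's frameworks.
// It illustrates the routing principle, not Private Cloud Compute itself.

enum Destination {
    case onDevice      // processed entirely on the user's hardware
    case privateCloud  // escalated to a stateless cloud compute service
}

struct AssistantRequest {
    let text: String
    let needsLargeModel: Bool  // e.g. long-form generation beyond the on-device model
}

func route(_ request: AssistantRequest) -> Destination {
    // Prefer local processing; escalate only when the on-device model cannot cope.
    request.needsLargeModel ? .privateCloud : .onDevice
}

// Example: a quick rewrite stays local, a complex drafting task is escalated.
let quickFix = AssistantRequest(text: "Fix the tone of this reply", needsLargeModel: false)
let longDraft = AssistantRequest(text: "Draft a 2,000-word report", needsLargeModel: true)
print(route(quickFix))   // onDevice
print(route(longDraft))  // privateCloud
```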

The company claims that it will make every production build of its Private Cloud Compute program available to security researchers for vulnerability checks and plans to put the program on its bug bounty list.

Apple’s efforts to protect its users are emphatic, commendable, and go beyond what rival companies have promised to date. Nonetheless, it would be overzealous to suggest that personal risk isn’t involved at all.

While Apple has gone above and beyond to eliminate risks we know about, the reality remains that we don’t yet know enough about the pitfalls of AI – and LLMs in particular.

Speaking of which, Apple has outsourced its language generation needs to OpenAI, which sceptics say nullifies Apple’s guarantee of user privacy. Elon Musk has been particularly vocal about the prospect of complete safety being impossible with this framework.

LLMs like ChatGPT are new enough that threats remain largely undiscovered. ‘I really worry that LLMs are a very, very different beast, and traditional security engineers just don’t have the experience with these AI techniques yet,’ says Steve Wilson, product officer at leading cyber-attack prevention firm Exabeam.

In the 24 months since generative AI fully sprang to life, there have been numerous instances of unforeseen technical and security problems coming to light. Apple has a fantastic track record with user security, but could leaning on others create a chink in the armour?

At present, the signs are certainly more positive than negative for Apple Intelligence, but only time in the wild will prove whether the fruit is ripe or poisoned.
