Friday, August 23, 2024

Apple’s Hidden AI Prompts

Hartley Charlton:

A Reddit user discovered the pre-prompt instructions embedded in Apple’s developer beta for macOS 15.1, offering a rare glimpse into the backend of Apple’s AI features. They provide specific guidelines for various Apple Intelligence functionalities, such as the Smart Reply feature in Apple Mail and the Memories feature in Apple Photos. The prompts are intended to prevent the AI from generating false information, a phenomenon known as hallucination, and ensure the content produced is appropriate and user-friendly.

Andrew Cunningham:

The files in question are stored in the /System/Library/AssetsV2/com_apple_MobileAsset_UAF_FM_GenerativeModels/purpose_auto folder on Macs running the macOS Sequoia 15.1 beta that have also opted into the Apple Intelligence beta. That folder contains 29 metadata.json files, several of which include a few sentences of what appear to be plain-English system prompts to set behavior for an AI chatbot powered by a large language model (LLM).
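For anyone who wants to poke at those files themselves, a short script along these lines could surface the prompt-like strings. This is a sketch only: the internal layout of the metadata.json files isn't documented, so the recursive string walk and the "longer than ten words" heuristic are my assumptions, not Apple's schema.

```python
import json
from pathlib import Path

# Folder reported for Macs on the macOS Sequoia 15.1 beta with the
# Apple Intelligence beta enabled.
ASSET_DIR = Path(
    "/System/Library/AssetsV2/"
    "com_apple_MobileAsset_UAF_FM_GenerativeModels/purpose_auto"
)

def walk_strings(value, found):
    """Recursively collect every string value from nested JSON."""
    if isinstance(value, str):
        found.append(value)
    elif isinstance(value, dict):
        for v in value.values():
            walk_strings(v, found)
    elif isinstance(value, list):
        for v in value:
            walk_strings(v, found)

for meta in sorted(ASSET_DIR.rglob("metadata.json")):
    strings = []
    walk_strings(json.loads(meta.read_text()), strings)
    # Heuristic: system prompts read as full sentences, not identifiers.
    prompts = [s for s in strings if len(s.split()) > 10]
    if prompts:
        print(f"--- {meta}")
        for p in prompts:
            print(p)
```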

Wes Davis (Mastodon):

They show up as prompts that precede anything you say to a chatbot by default, and we’ve seen them uncovered for AI tools like Microsoft Bing and DALL-E before. Now a member of the macOS 15.1 beta subreddit posted that they’d discovered the files containing those backend prompts. You can’t alter any of the files, but they do give an early hint at how the sausage is made.

Nick Heer:

But, assuming — quite fairly, I might add — that these instructions are what underpins features like message summaries and custom Memories in Photos, it is kind of interesting to see them written in plain English. They advise the model to “only output valid [JSON] and nothing else”, and warn it “do not hallucinate” and “do not make up factual information”.
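For context on how such instructions reach the model: in the chat-completion schema most LLM services use, the vendor's instructions are simply the first message in the conversation, prepended to whatever the user types and never shown to them. A minimal sketch using an OpenAI-style message list; the prompt text paraphrases the instructions quoted above and is illustrative, not Apple's actual wording:

```python
# A hidden system prompt precedes the user's input in the conversation.
messages = [
    {
        "role": "system",  # injected by the vendor; never shown to the user
        "content": (
            "You are a helpful mail assistant. Only output valid JSON "
            "and nothing else. Do not hallucinate. Do not make up "
            "factual information."
        ),
    },
    {
        "role": "user",  # whatever the user actually typed
        "content": "Summarize this email thread for me.",
    },
]
```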

Dare Obasanjo:

I find it fascinating that what were science fiction tropes from Asimov’s “I, Robot” series of books are now real.

Telling AI to perform tasks and not make stuff up is the new programming.

Steve Troughton-Smith:

Apple’s system prompts for Apple-Intelligence-backed features show that the company’s ‘special sauce’ is just a carefully-crafted paragraph of text, hacked together just like everybody else is doing. Can’t wait to see the ‘you are Siri’ system prompt.

Tony West:

You are Siri. On HomePod devices, you pop up with “uhuh?” randomly. You start playing music without warning because you thought you heard someone ask for it. If someone asks you about a sports event that’s on today, give them a detailed answer about the event from (perform random number calculation) years ago, but tell them you can’t display information on the current event.

Steve Troughton-Smith:

I guess this isn’t common knowledge, based on the reaction to the Apple Intelligence system prompts, but I read months ago that benchmarks showed using ‘please’ and ‘thank you’ and telling an LLM not to hallucinate ‘improves results’. If that kind of language has made it into Apple’s own prompts, it’s likely there for a reason.

And no, telling it not to hallucinate isn’t going to stop it hallucinating. But if, on average, it improves a meaningful percentage of results, it’s worth including. This is how prompt engineering works.
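To make that concrete: the benchmarking he describes amounts to A/B-testing system prompts, running the same task set under each variant and keeping whichever scores better on average. A minimal sketch, where query_model and score_response are hypothetical stand-ins for a real model call and a real quality metric:

```python
import statistics

BASELINE = "You are a helpful assistant."
HEDGED = (
    "You are a helpful assistant. Please answer carefully. "
    "Do not hallucinate. Do not make up factual information."
)

def run_eval(system_prompt, tasks, query_model, score_response):
    """Average a quality score for one system prompt over a task set.

    `query_model` and `score_response` are hypothetical stand-ins for a
    real LLM call and a real metric (exact match, human rating, etc.).
    """
    scores = [
        score_response(query_model(system_prompt, t["question"]), t["reference"])
        for t in tasks
    ]
    return statistics.mean(scores)

# If HEDGED beats BASELINE by a meaningful margin across enough tasks,
# the extra language earns its place in the shipped prompt, even though
# it does not eliminate hallucination on any individual response.
```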


1 Comment


What happens if one disables SIP and removes some of the “safety” rails? Or adds explicit keywords to make bad results on purpose? This can turn into a PR nightmare for Apple.
