Tahoe Mac app icons are supposed to have margins/padding so that there’s empty space around the edge of the squircle. The opaque pixels don’t touch the edge of the canvas. Icon Composer and Xcode handle this detail for you. You design your icon without having to worry about the margin, and Xcode automatically adds it when compiling your asset catalog.
The issue is, how do you get the proper margin on icon images used in other contexts (documentation, marketing, etc.)? If you export from Icon Composer, it generates PNG files with no margin, so the icon appears too large, even though the outer pixel dimensions are the same. For example, on the SpamSieve screenshots page, the Light icon is generated by Xcode, and the Dark/Clear/Tinted variants are exported from Icon Composer. The difference is substantial.
I posted about this on Mastodon and also found an old post in the Apple Developer Forums, but no one seemed to know the answer.
Back in June, I learned from John Brayton that you don’t have to manually export from the Icon Composer app. It has a command-line tool called ictool (formerly icontool) that can convert .icon files to .png. So I have a Makefile that generates all the different sizes and variants for all my apps using commands like this:
"/Applications/Xcode.app/Contents/Applications/Icon Composer.app/Contents/Executables/ictool" AppIcon.icon --export-preview macOS Light 128 128 1 AppIconLight.iconset/icon_128x128.png
Unfortunately, ictool, like Icon Composer itself, does not add the transparent margin.
I checked Xcode’s build log to see if there were other options it was passing but didn’t find anything. It looks like it’s not using ictool and instead compiles the icon directly into the asset catalog.
So far, the only solution I’ve found is to actually build my app with Xcode and then extract the icons it generated. You can view the contents of an asset catalog using Samra, but it doesn’t export. Asset Catalog Tinkerer has some export problems—it shows constituents of the .icon as having the same name and only exports one of them—but it’s fine for the purposes of extracting the fully rendered icons.
The method I prefer is to take the .icns file in the .app package and use iconutil to convert it to a folder of PNGs:
iconutil -c iconset -o AppIcon.iconset AppIcon.icns
Note that, by default, Xcode only generates the .icns file with sizes up to 256px. To get the larger sizes, you need to add this to your .xcconfig file:
ASSETCATALOG_COMPILER_STANDALONE_ICON_BEHAVIOR = all
Unfortunately, relying on Xcode to apply the margins only works for the standard (Light) icon. This is the only one that it pre-generates. If you know how to get it to compile Dark/Clear/Tinted variants—or how to properly export those directly without using Xcode—please let me know.
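In the meantime, a crude approximation is to add the margin by hand: scale the Icon Composer export down and pad it back out to the original canvas with transparent pixels. Apple’s macOS icon grid has traditionally placed the artwork at about 824×824 within a 1024×1024 canvas, so something like the following ImageMagick command (file names are placeholders) gets close, though I haven’t verified that this inset matches what Xcode actually produces for Tahoe:

# Assumes the ~824/1024 inset from Apple’s icon grid; adjust if Tahoe differs.
magick AppIconDark-1024.png -resize 824x824 -background none -gravity center -extent 1024x1024 AppIconDark-padded.png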
OpenAI:
Today we’re releasing Sora 2, our flagship video and audio generation model.
The original Sora model from February 2024 was in many ways the GPT‑1 moment for video—the first time video generation started to seem like it was working, and simple behaviors like object permanence emerged from scaling up pre-training compute. Since then, the Sora team has been focused on training models with more advanced world simulation capabilities. We believe such systems will be critical for training AI models that deeply understand the physical world. A major milestone for this is mastering pre-training and post-training on large-scale video data, which are in their infancy compared to language.
With Sora 2, we are jumping straight to what we think may be the GPT‑3.5 moment for video. Sora 2 can do things that are exceptionally difficult—and in some instances outright impossible—for prior video generation models: Olympic gymnastics routines, backflips on a paddleboard that accurately model the dynamics of buoyancy and rigidity, and triple axels while a cat holds on for dear life.
Prior video models are overoptimistic—they will morph objects and deform reality to successfully execute upon a text prompt. For example, if a basketball player misses a shot, the ball may spontaneously teleport to the hoop. In Sora 2, if a basketball player misses a shot, it will rebound off the backboard.
Juli Clover:
The Sora app for iOS is available to download now, and it can be used in the United States and Canada. Those invited to the app will be able to use Sora 2 on the Sora website.
Amanda Silberling:
People on Sora who generate videos of Altman are especially getting a kick out of how blatantly OpenAI appears to be violating copyright laws. (Sora will reportedly require copyright holders to opt out of their content’s use — reversing the typical approach where creators must explicitly agree to such use — the legality of which is debatable.)
[…]
Aside from its algorithmic feed and profiles, Sora’s defining feature is that it is basically a deepfake generator — that’s how we got so many videos of Altman. In the app, you can create what OpenAI calls a “cameo” of yourself by uploading biometric data. When you first join the app, you’re immediately prompted to create your optional cameo through a quick process where you record yourself reading off some numbers, then turning your head from side to side.
Each Sora user can control who is allowed to generate videos using their cameo. You can adjust this setting between four options: “only me,” “people I approve,” “mutuals,” and “everyone.”
M.G. Siegler:
It’s been a long time since I’ve been this sucked into an app. Such was the situation I found myself in last night with OpenAI’s new version of Sora. Once I got access, I found it nearly impossible to put it down. I just kept wanting to remix everything I scrolled past. Yes, it was incredibly dumb. Yet highly amusing! And technically, very interesting, albeit in mildly troubling ways. But everyone else will write about that aspect, and rightfully so. My angle here is simply that OpenAI remains so good at creating these types of viral products. Underlying tech aside, that team continues to seem to know how to productize better than anyone else in the space.
Case in point: Meta launched a similar foray just days before in the form of “Vibes”. Now, did they rush it out the door to get ahead of this Sora 2 launch? Hard to say for sure, but it sure feels that way. The product, if you even want to call it that, is so half-baked and obtuse to use that it’s more like an employment quiz.
Dare Obasanjo:
Started using the Sora app and it’s like TikTok for AI generated videos.
I used to think it would take a year or two for AI videos to become as popular as influencer content on social media but I can see this app causing that to happen by the end of the year.
John Gruber:
Sora, though invitation-only at the moment, is currently #3 in the U.S. App Store.
[…]
Also, I’m sure Sora will eventually come to Android. But, to play with it now, you need an iPhone.
Wikipedia:
Nano Banana (officially Gemini 2.5 Flash Image) is an artificial intelligence image generating and editing tool created by Google. “Nano Banana” was the codename used on LMArena while the model was undergoing pre-release testing, allowing the community to evaluate its performance on real-world prompts without knowing its identity. When the company publicly released it in August 2025, it was part of their Gemini line of AI products. The model became known for its editing skills and for starting a social media trend of styled “3D figurine” photos.
Google:
Imagine yourself in any world you can dream up. Our latest AI image generation update, Nano Banana, lets you turn a single photo into countless new creations. You can even upload multiple images to blend scenes or combine ideas. And with an improved understanding of your instructions, it's easier than ever to bring your ideas to life.
There are tons of apps called Nano Banana in the App Store, some of them with Google-style icons, but none seems to be an official Google app. The nanobanana.ai Web front-end isn’t from Google, either.
PicoTrex (via Hacker News):
We present Nano-consistent-150k — the first dataset constructed using Nano-Banana that exceeds 150k high-quality samples, uniquely designed to preserve consistent human identity across diverse and complex editing scenarios. A key feature is its remarkable identity consistency: for a single portrait, more than 35 distinct editing outputs are provided across diverse tasks and instructions. By anchoring on consistent human identities, the dataset enables the construction of interleaved data that seamlessly link multiple editing tasks, instructions, and modalities around the same individual.
Fstoppers (via Hacker News):
Google Just Made Photography Obsolete
Google (tweet):
Our state-of-the-art image generation and editing model which has captured the imagination of the world, Gemini 2.5 Flash Image 🍌, is now generally available, ready for production environments, and comes with new features like a wider range of aspect ratios in addition to being able to specify image-only output.
Gemini 2.5 Flash Image empowers users to seamlessly blend multiple images, maintain consistent characters for richer storytelling, perform targeted edits with natural language, and leverage Gemini’s extensive world knowledge for image generation and modification. The model is accessible through the Gemini API on Google AI Studio and on Vertex AI for enterprise use.
Meta (Hacker News):
The wait is over. Meta Ray-Ban Display hits shelves in the US today! Priced at $799 USD, which includes the Meta Neural Band, these breakthrough AI glasses let you interact with digital content while staying fully present in the physical world.
[…]
Today at Connect, Mark Zuckerberg debuted the next exciting evolution of AI glasses: the all-new Meta Ray-Ban Display and Meta Neural Band.
Meta Ray-Ban Display glasses are designed to help you look up and stay present. With a quick glance at the in-lens display, you can accomplish everyday tasks—like checking messages, previewing photos, and collaborating with visual Meta AI prompts — all without needing to pull out your phone. It’s technology that keeps you tuned in to the world around you, not distracted from it.
Juli Clover:
Meta placed the display off to the side to prevent it from obstructing the view through the glasses, and the display is also not designed to be on constantly. It is meant for short interactions.
The AI glasses are meant to be used with the Meta Neural Band, a wristband that interprets signals created by muscle activity to navigate the features of the glasses. With the band, you can control the glasses with subtle hand movements, similar to how Apple Vision Pro control works.
[…]
The AI glasses have a six hour battery life, but that can be extended to up to 30 hours with an included charging case. The Neural Band has an 18-hour battery life.
Juli Clover (Hacker News, Slashdot):
Apple has decided to stop work on a cheaper, lighter version of the $3,499 Vision Pro to instead focus its resources on smart glasses, reports Bloomberg. Apple wants to speed up development on a glasses product to better compete with Meta.
There were rumors that Apple was developing a much lighter, more affordable “Vision Air” for launch in 2027, but Apple is now transitioning engineers from that project to its smart glasses project.
[…]
While work on a lighter version of the Vision Pro has been paused for now, Apple still plans to refresh the current model with an M5 chip later this year.
M.G. Siegler:
Anyway, admitting – again, even if implicitly – that the Vision Pro strategy to date has been a mistake is a good first step here. It’s too bad because they were starting to see some success, making the device actually start to make some sense. But the hardware reality remains what it is. And what it is, remains far away.
[…]
With Meta now seemingly continuing to back off their VR strategy as well in favor of smart glasses, they’re sort of forcing Apple’s hand here. And Meta remains dangerous because they need this market to happen, whereas Apple does not.