Archive for October 2025

Thursday, October 2, 2025

How to Export a Mac .icon File With the Proper Margins

Tahoe Mac app icons are supposed to have margins/padding so that there’s empty space around the edge of the squircle. The opaque pixels don’t touch the edge of the canvas. Icon Composer and Xcode handle this detail for you. You design your icon without having to worry about the margin, and Xcode automatically adds it when compiling your asset catalog.

The issue is how to get the proper margin on icon images used in other contexts (documentation, marketing, etc.). If you export from Icon Composer, it generates PNG files with no margin, so the icon appears too large, even though the outer pixel dimensions are the same. For example, on the SpamSieve screenshots page, the Light icon is generated by Xcode, and the Dark/Clear/Tinted variants are exported from Icon Composer. The difference is substantial.

I posted about this on Mastodon and also found an old post in the Apple Developer Forums, but no one seemed to know the answer.

Back in June, I learned from John Brayton that you don’t have to manually export from the Icon Composer app. It has a command-line tool called ictool (formerly icontool) that can convert .icon files to .png. So I have a Makefile that generates all the different sizes and variants for all my apps using commands like this:

"/Applications/Xcode.app/Contents/Applications/Icon Composer.app/Contents/Executables/ictool" AppIcon.icon --export-preview macOS Light 128 128 1 AppIconLight.iconset/icon_128x128.png
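Expanded into a plain shell loop, such a Makefile rule might look like the sketch below. This is illustrative, not my actual Makefile: the variant names, size list, and `.iconset` naming are assumptions based on the command above, and the script only prints the commands, so you can inspect them or pipe the output to `sh` (after creating the `.iconset` folders) to run them:

```shell
# Dry-run sketch: prints one ictool invocation per variant/size/scale.
# The variant names, sizes, and .iconset naming here are assumptions.
ICTOOL="/Applications/Xcode.app/Contents/Applications/Icon Composer.app/Contents/Executables/ictool"

gen_commands() {
    icon="AppIcon.icon"
    for variant in Light Dark Clear Tinted; do
        outdir="AppIcon${variant}.iconset"
        for size in 16 32 128 256 512; do
            for scale in 1 2; do
                suffix=""
                if [ "$scale" -eq 2 ]; then suffix="@2x"; fi
                echo "\"$ICTOOL\" $icon --export-preview macOS $variant" \
                     "$size $size $scale $outdir/icon_${size}x${size}${suffix}.png"
            done
        done
    done
}

gen_commands    # pipe to sh (with the .iconset folders created) to execute
```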

Unfortunately, ictool, like Icon Composer itself, does not add the transparent margin.

I checked Xcode’s build log to see if there were other options it was passing but didn’t find anything. It looks like it’s not using ictool and instead compiles the icon directly into the asset catalog.

So far, the only solution I’ve found is to actually build my app with Xcode and then extract the icons it generated. You can view the contents of an asset catalog using Samra, but it doesn’t export. Asset Catalog Tinkerer has some export problems—it shows constituents of the .icon as having the same name and only exports one of them—but it’s fine for the purposes of extracting the fully rendered icons.

The method I prefer is to take the .icns file in the .app package and use iconutil to convert it to a folder of PNGs:

iconutil -c iconset -o AppIcon.iconset AppIcon.icns

Note that, by default, Xcode only generates the .icns file with sizes up to 256px. To get the larger sizes, you need to add this to your .xcconfig file:

ASSETCATALOG_COMPILER_STANDALONE_ICON_BEHAVIOR = all

Unfortunately, relying on Xcode to apply the margins only works for the standard (Light) icon. This is the only one that it pre-generates. If you know how to get it to compile Dark/Clear/Tinted variants—or how to properly export those directly without using Xcode—please let me know.

Previously:

Sora App

OpenAI:

Today we’re releasing Sora 2, our flagship video and audio generation model.

The original Sora model from February 2024 was in many ways the GPT‑1 moment for video—the first time video generation started to seem like it was working, and simple behaviors like object permanence emerged from scaling up pre-training compute. Since then, the Sora team has been focused on training models with more advanced world simulation capabilities. We believe such systems will be critical for training AI models that deeply understand the physical world. A major milestone for this is mastering pre-training and post-training on large-scale video data, which are in their infancy compared to language.

With Sora 2, we are jumping straight to what we think may be the GPT‑3.5 moment for video. Sora 2 can do things that are exceptionally difficult—and in some instances outright impossible—for prior video generation models: Olympic gymnastics routines, backflips on a paddleboard that accurately model the dynamics of buoyancy and rigidity, and triple axels while a cat holds on for dear life.

Prior video models are overoptimistic—they will morph objects and deform reality to successfully execute upon a text prompt. For example, if a basketball player misses a shot, the ball may spontaneously teleport to the hoop. In Sora 2, if a basketball player misses a shot, it will rebound off the backboard.

Juli Clover:

The Sora app for iOS is available to download now, and it can be used in the United States and Canada. Those invited to the app will be able to use Sora 2 on the Sora website.

Amanda Silberling:

People on Sora who generate videos of Altman are especially getting a kick out of how blatantly OpenAI appears to be violating copyright laws. (Sora will reportedly require copyright holders to opt out of their content’s use — reversing the typical approach where creators must explicitly agree to such use — the legality of which is debatable.)

[…]

Aside from its algorithmic feed and profiles, Sora’s defining feature is that it is basically a deepfake generator — that’s how we got so many videos of Altman. In the app, you can create what OpenAI calls a “cameo” of yourself by uploading biometric data. When you first join the app, you’re immediately prompted to create your optional cameo through a quick process where you record yourself reading off some numbers, then turning your head from side to side.

Each Sora user can control who is allowed to generate videos using their cameo. You can adjust this setting between four options: “only me,” “people I approve,” “mutuals,” and “everyone.”

M.G. Siegler:

It’s been a long time since I’ve been this sucked into an app. Such was the situation I found myself in last night with OpenAI’s new version of Sora. Once I got access, I found it nearly impossible to put it down. I just kept wanting to remix everything I scrolled past. Yes, it was incredibly dumb. Yet highly amusing! And technically, very interesting, albeit in mildly troubling ways. But everyone else will write about that aspect, and rightfully so. My angle here is simply that OpenAI remains so good at creating these types of viral products. Underlying tech aside, that team continues to seem to know how to productize better than anyone else in the space.

Case in point: Meta launched a similar foray just days before in the form of “Vibes”. Now, did they rush it out the door to get ahead of this Sora 2 launch? Hard to say for sure, but it sure feels that way. The product, if you even want to call it that, is so half-baked and obtuse to use that it’s more like an employment quiz.

Dare Obasanjo:

Started using the Sora app and it’s like TikTok for AI generated videos.

I used to think it would take a year or two for AI videos to become as popular as influencer content on social media but I can see this app causing that to happen by the end of the year.

John Gruber:

Sora, though invitation-only at the moment, is currently #3 in the U.S. App Store.

[…]

Also, I’m sure Sora will eventually come to Android. But, to play with it now, you need an iPhone.

Previously:

Nano Banana

Wikipedia:

Nano Banana (officially Gemini 2.5 Flash Image) is an artificial intelligence image generating and editing tool created by Google. “Nano Banana” was the codename used on LMArena while the model was undergoing pre-release testing, allowing the community to evaluate its performance on real-world prompts without knowing its identity. When the company publicly released it in August 2025, it was part of their Gemini line of AI products. The model became known for its editing skills and for starting a social media trend of styled “3D figurine” photos.

Google:

Imagine yourself in any world you can dream up. Our latest AI image generation update, Nano Banana, lets you turn a single photo into countless new creations. You can even upload multiple images to blend scenes or combine ideas. And with an improved understanding of your instructions, it's easier than ever to bring your ideas to life.

There are tons of apps called Nano Banana in the App Store, some of them with Google-style icons, but none seems to be an official Google app. Nor is the nanobanana.ai Web front-end an official Google product.

PicoTrex (via Hacker News):

We present Nano-consistent-150k — the first dataset constructed using Nano-Banana that exceeds 150k high-quality samples, uniquely designed to preserve consistent human identity across diverse and complex editing scenarios. A key feature is its remarkable identity consistency: for a single portrait, more than 35 distinct editing outputs are provided across diverse tasks and instructions. By anchoring on consistent human identities, the dataset enables the construction of interleaved data that seamlessly link multiple editing tasks, instructions, and modalities around the same individual.

Fstoppers (via Hacker News):

Google Just Made Photography Obsolete

Google (tweet):

Our state-of-the-art image generation and editing model which has captured the imagination of the world, Gemini 2.5 Flash Image 🍌, is now generally available, ready for production environments, and comes with new features like a wider range of aspect ratios in addition to being able to specify image-only output.

Gemini 2.5 Flash Image empowers users to seamlessly blend multiple images, maintain consistent characters for richer storytelling, perform targeted edits with natural language, and leverage Gemini’s extensive world knowledge for image generation and modification. The model is accessible through the Gemini API on Google AI Studio and on Vertex AI for enterprise use.

Previously:

Meta Ray-Ban Display

Meta (Hacker News):

The wait is over. Meta Ray-Ban Display hits shelves in the US today! Priced at $799 USD, which includes the Meta Neural Band, these breakthrough AI glasses let you interact with digital content while staying fully present in the physical world.

[…]

Today at Connect, Mark Zuckerberg debuted the next exciting evolution of AI glasses: the all-new Meta Ray-Ban Display and Meta Neural Band.

Meta Ray-Ban Display glasses are designed to help you look up and stay present. With a quick glance at the in-lens display, you can accomplish everyday tasks—like checking messages, previewing photos, and collaborating with visual Meta AI prompts — all without needing to pull out your phone. It’s technology that keeps you tuned in to the world around you, not distracted from it.

Juli Clover:

Meta placed the display off to the side to prevent it from obstructing the view through the glasses, and the display is also not designed to be on constantly. It is meant for short interactions.

The AI glasses are meant to be used with the Meta Neural Band, a wristband that interprets signals created by muscle activity to navigate the features of the glasses. With the band, you can control the glasses with subtle hand movements, similar to how Apple Vision Pro control works.

[…]

The AI glasses have a six hour battery life, but that can be extended to up to 30 hours with an included charging case. The Neural Band has an 18-hour battery life.

Juli Clover (Hacker News, Slashdot):

Apple has decided to stop work on a cheaper, lighter version of the $3,499 Vision Pro to instead focus its resources on smart glasses, reports Bloomberg. Apple wants to speed up development on a glasses product to better compete with Meta.

There were rumors that Apple was developing a much lighter, more affordable “Vision Air” for launch in 2027, but Apple is now transitioning engineers from that project to its smart glasses project.

[…]

While work on a lighter version of the Vision Pro has been paused for now, Apple still plans to refresh the current model with an M5 chip later this year.

M.G. Siegler:

Anyway, admitting – again, even if implicitly – that the Vision Pro strategy to date has been a mistake is a good first step here. It’s too bad because they were starting to see some success, making the device actually start to make some sense. But the hardware reality remains what it is. And what it is, remains far away.

[…]

With Meta now seemingly continuing to back off their VR strategy as well in favor of smart glasses, they’re sort of forcing Apple’s hand here. And Meta remains dangerous because they need this market to happen, whereas Apple does not.

Previously:

Wednesday, October 1, 2025

UK Again Wants iCloud Backdoor

Jess Weatherbed (Hacker News, Reddit, MacRumors, 9to5Mac):

The UK government is reportedly once again demanding that Apple provide it with backdoor access to encrypted iCloud user data, following claims that the effort had been abandoned in August. The Financial Times reports that a new technical capability notice (TCN) was issued by the UK Home Office in early September, this time specifically targeting access to British citizens’ iCloud backups.

[…]

While US officials raised concerns about the order during President Trump’s state visit to the UK last month, according to The Financial Times, the publication reports that two senior British government figures said the UK was no longer facing US pressure to drop its demands.

Matt Henderson:

Just returned from the UK, where a digital ID is about to be enforced on all adults. Soon, my Signal messages may be scanned. Financial policing co-opted to the institutions with KYC and draconian source-of-funds investigation.

Previously:

Adobe Premiere for iOS and iPadOS

Adobe:

Adobe announced that the company is bringing its industry leading Adobe Premiere video editor to mobile in a powerful new iPhone app that empowers creators to make pro-quality video on the go. The Adobe Premiere mobile app makes it fast, free and intuitive for creators to edit their videos with precision editing on a lightning-fast multi-track timeline, produce studio-quality audio with crystal clear voiceovers and perfectly timed AI sound effects, generate unique content and access millions of free multimedia assets, and send work directly to Premiere desktop for fine tuning further on a larger screen. The new mobile app offers all the video editing essentials for free, with upgrades available for additional generative credits and storage.

This makes it sound like the upgrades are à la carte, but in the App Store listing there seems to just be a generic subscription available for different terms ($7.99/month or $69.99/year).

It’s unclear whether it has the same limitation as Final Cut Pro, where you can bring files from mobile to desktop but not back to mobile.

Hartley Charlton:

Adobe has also built in a speech enhancement tool that removes background noise to isolate voices, as well as automatic captioning with stylized subtitles. The app supports 4K HDR export and allows direct one-tap publishing to platforms such as TikTok, YouTube, and Instagram. Users can also generate sound effects and other creative assets using Adobe Firefly AI, the company's generative AI platform, which is fully integrated into the app.

[…]

The app is positioned as a replacement for Premiere Rush, the company's previous lightweight mobile editor. Existing Rush users will retain access only on devices where it is already installed until the service is fully discontinued on September 30, 2026.

Previously:

Electronic Arts Acquired by Private Equity

Juli Clover (2022):

Apple is one of several companies that have held talks with Electronic Arts (EA) about a potential purchase, according to a new report from Puck.

EA has spoken to several “potential suitors,” including Apple, Amazon, and Disney as it looks for a merger arrangement.

Nicholas G. Miller and Lauren Thomas (Hacker News, MacRumors):

Videogame maker Electronic Arts said it would go private in a $55 billion deal with a group of investors including Saudi Arabia’s Public Investment Fund, private-equity firm Silver Lake and Jared Kushner’s investment firm Affinity Partners.

[…]

Electronic Arts publishes The Sims, football game Madden NFL and FIFA, the soccer videogame now known as FC. It has been boosted by sales of its marquee sports games and is expected to release “Battlefield 6,” the latest edition of its popular shooting game.

Electronic Arts (Hacker News):

PIF, Silver Lake, and Affinity Partners bring deep sector experience, committed capital, and global portfolios with networks across gaming, entertainment, and sports that offer unique possibilities for EA to blend physical and digital experiences, enhance fan engagement, and create new growth opportunities. The transaction represents the largest all-cash sponsor take-private investment in history, with the Consortium partnering closely with EA to enable the Company to move faster and unlock new opportunities on a global stage.

Previously:

Kindle Scribe 2025

Andrew Liszewski (Hacker News):

Amazon announced new versions of the Kindle Scribe today, including the Kindle Scribe Colorsoft, which features a larger version of the customized E Ink screen technology that Amazon uses in its color e-reader. The new Scribes feature a major redesign that does away with the asymmetrical chin on one side, making the devices look sleeker and more like a tablet.

The new Scribes feature larger 11-inch, glare-free E Ink screens — up from 10.2 inches previously — but Amazon has managed to make the new versions lighter than the first two. They now weigh just 400 grams compared to 433 grams for last year’s version, and at 5.4mm thick, they’re thinner than the iPhone Air.

[…]

A new quad-core processor and additional memory improve the performance of the new Kindle Scribes, which now offer a writing experience and page turns that feel 40 percent faster than previous versions.

[…]

All three of the new Kindle Scribes come with steeper price tags. Last year’s Kindle Scribe started at $399.99, but the cheapest of the new additions is the Scribe without a front light, which will start at $429.99 when available early next year. If you plan to write or read at night, then you’ll want the standard Kindle Scribe, which starts at $499.99, and if you want a splash of color, the Kindle Scribe Colorsoft starts at $629.99, with both arriving later this year.

Aisha Malik:

The devices feature a new Home experience that lets users jot something down, and open recently opened and added books, documents, and notebooks. Amazon anticipates both devices being used for handwritten notes, and the devices include significant product integrations to enhance that experience. One feature will let users search their notes across their notebooks and get simple AI summaries. Next year, users will be able to send their notes and documents to Alexa+ and have a more involved conversation about them.

[…]

The devices will also feature new AI reading features. A new “Story so Far” feature will let users catch up on the book they’re currently reading up until where they have read. An “Ask this Book” feature will let users highlight any passage of text while reading a book and get spoiler-free answers to questions about things like a character’s motive or the significance of a scene.

These features will be available on books users have purchased or borrowed on the Kindle iOS app later this year and on Kindle devices early next year.

Previously: