Tuesday, June 17, 2025

Chrome Doesn’t Support JPEG XL

Jim Bankoski (2022):

Helping the web to evolve is challenging, and it requires us to make difficult choices. We’ve also heard from our browser and device partners that every additional format adds costs (monetary or hardware), and we’re very much aware that these costs are borne by those outside of Google. When we evaluate new media formats, the first question we have to ask is whether the format works best for the web. With respect to new image formats such as JPEG XL, that means we have to look comprehensively at many factors: compression performance across a broad range of images; is the decoder fast, allowing for speedy rendering of smaller images; are there fast encoders, ideally with hardware support, that keep encoding costs reasonable for large users; can we optimize existing formats to meet any new use-cases, rather than adding support for an additional format; do other browsers and OSes support it?

After weighing the data, we’ve decided to stop Chrome’s JPEG XL experiment and remove the code associated with the experiment. We’ll work to publish data in the next couple of weeks.

For those who want to use JPEG XL in Chrome, we believe a WebAssembly (Wasm) implementation is both performant and a great path forward.

Jon Sneyers (2023, via Hacker News):

In early April 2021, the Chrome browser added experimental support (behind a flag), even before the JPEG XL standard was officially published. (The final draft had been submitted to ISO, but it would still take until March 2022 before it was approved and published as the international standard ISO/IEC 18181.) Firefox followed suit quickly and added experimental support. Things were looking good.

Then, on Halloween 2022, Chrome developers suddenly announced that they would be removing JPEG XL support. This decision was quite unexpected and controversial. In my blog The Case for JPEG XL, I argued why this decision should be reversed. In December, Chrome developers provided test results that were used to justify the decision and invited feedback. I analyzed the results and pointed out several methodological flaws and oversights. So far, my feedback has been ignored.

Beyond browsers, adoption of JPEG XL continued, in particular in image authoring software like Serif Affinity, Adobe Camera Raw, GIMP, Krita, etc. Unfortunately, Chrome’s decision has slowed wider adoption of JPEG XL in web browsers.

Ernie Smith (via Hacker News):

The JPEG file format has served us well. It’s been difficult to remove the format from its perch. The JPEG 2000 format, for example, was intended to supplant it by offering more lossless options and better performance. The format is widely used by the Library of Congress and specialized sites like the Internet Archive; however, it is less popular as an end-user format.

Other image technologies have had somewhat more luck getting past the JPEG format. The Google-supported WebP is popular with website developers (and controversial with end users). Meanwhile, the formats AVIF and HEIC, each developed by standards bodies, have largely outpaced both JPEG and JPEG 2000.

JPEG XL seems better, but even Apple’s not supporting it everywhere yet.

Psylo Web Browser 1.0

Mysk:

We’re super excited to finally launch Psylo, a new kind of private web browser for iOS and iPadOS. In Psylo, each tab is its own silo with isolated storage, cookies, and even its own IP address. Psylo introduces advanced anti-tracking and anti-fingerprinting features that go beyond what a VPN can offer, thanks to the deep integration between Psylo and our own Mysk Private Proxy Network.

[…]

Currently consisting of 40+ proxy servers worldwide, it’s designed to scale as we grow our service. […] The system must operate exclusively on software that we either developed ourselves or maintain complete control over. […] The system must never log or otherwise store any personally identifiable information or browsing data, including but not limited to IP addresses, DNS requests, and any other data that could potentially identify a user. […] The system must be accessible without requiring users to create an account—in fact, the system must have no notion of user accounts whatsoever.

It’s $9.99/month or $99.99/year.

Safari Audio Fingerprinting Protection

Sergey Mostsevenko (via Hacker News):

Apple introduced advanced fingerprinting protection in Safari 17. Advanced fingerprinting protection aims to reduce fingerprinting accuracy by limiting available information or adding randomness.

By default, the advanced protection is enabled in private (incognito) mode and disabled in normal mode. It affects both desktop and mobile platforms. Advanced fingerprinting protection also affects Screen API and Canvas API, but we’ll focus only on Audio API in this article.

[…]

The technique is called audio fingerprinting, and you can learn how it works in our previous article. In a nutshell, audio fingerprinting uses the browser’s Audio API to render an audio signal with OfflineAudioContext interface, which then transforms into a single number by adding all audio signal samples together. The number is the fingerprint, also called “identifier”.
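
To make the quoted description concrete: if $s_1, \dots, s_n$ are the samples of the rendered audio signal, the fingerprint is just their sum, and (as a simplified model of the protection described above, not Safari’s exact implementation) the randomization perturbs each sample before summing:

$$F = \sum_{i=1}^{n} s_i \qquad\text{vs.}\qquad F' = \sum_{i=1}^{n} (s_i + \varepsilon_i)$$

where the $\varepsilon_i$ are small random values, so $F'$ changes from visit to visit even on the same device.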

But he says the protection measures “don’t fully work.”

Saagar Jha:

I feel like these days (especially given the recent focus on side channel attacks) it is basically a given that adding uniform noise to something that leaks data does not work, because you can always take more samples and remove the noise. Why did Safari add this? I understand that needing more samples is definitely an annoyance to fingerprinting efforts, but as this post shows it’s basically always surmountable in some form or the other.
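
The statistics behind that objection are worth spelling out. Under an idealized model where each measured fingerprint is the true value plus independent zero-mean noise with variance $\sigma^2$:

$$F_j = F + \eta_j, \qquad \bar{F} = \frac{1}{k}\sum_{j=1}^{k} F_j, \qquad \operatorname{Var}(\bar{F}) = \frac{\sigma^2}{k}$$

so averaging $k$ runs shrinks the noise by a factor of $k$, and a tracker willing to spend more samples (and time) can recover the underlying fingerprint to whatever precision it needs. This is a sketch of the general argument, not the specific attack in the linked article.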

Jeff Johnson:

I’m not going to wait another seven years to opt out of the advanced tracking and fingerprinting protection warnings, so I’m opting out of advanced tracking and fingerprinting protection entirely.

Foundation Models Framework

Apple (MacRumors, 9to5Mac, Hacker News, Slashdot):

Apple is opening up access for any app to tap directly into the on-device foundation model at the core of Apple Intelligence.

With the Foundation Models framework, app developers will be able to build on Apple Intelligence to bring users new experiences that are intelligent, available when they’re offline, and that protect their privacy, using AI inference that is free of cost. For example, an education app can use the on-device model to generate a personalized quiz from a user’s notes, without any cloud API costs, or an outdoors app can add natural language search capabilities that work even when the user is offline.

The framework has native support for Swift, so app developers can easily access the Apple Intelligence model with as few as three lines of code. Guided generation, tool calling, and more are all built into the framework, making it easier than ever to implement generative capabilities right into a developer’s existing app.

There are two WWDC sessions and documentation.
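
For a sense of what the “three lines of code” look like, here is a minimal sketch. Treat it as an assumption on my part: the LanguageModelSession type, its respond(to:) method, and the content property are my reading of the WWDC material, not something Apple’s announcement spells out.

    import FoundationModels

    // Sketch only; run inside an async context (e.g. a Task or async function).
    // LanguageModelSession, respond(to:), and .content are assumed names.
    let session = LanguageModelSession()
    let response = try await session.respond(to: "Suggest three quiz questions about photosynthesis.")
    print(response.content)  // plain-text reply from the on-device model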

Daniel Jalkut:

This is EXACTLY (the bare minimum) what developers have been asking for!

mxdalloway:

In my opinion this is revolutionary.

It was obvious that we would get framework access to models eventually, but I’m a little shocked that it’s already here.

I was skeptical of the performance in the demos, but running on M1 MBP I’m happy with the performance.

@Generable macro is intuitive to use and so far I’m impressed with the quality of the structured results that the model generates (admittedly, I need to do more extensive testing here but first impressions are promising).
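
For readers who haven’t tried it, a @Generable type is essentially a plain struct whose fields the model fills in with typed values instead of raw text. A rough sketch, with the caveat that the @Guide attribute and the respond(to:generating:) overload are my assumptions based on the WWDC sessions rather than anything quoted here:

    import FoundationModels

    // Sketch: @Guide(description:) and respond(to:generating:) are assumed
    // from the WWDC sessions, not confirmed by the posts quoted above.
    @Generable
    struct QuizQuestion {
        @Guide(description: "A short question based on the user's notes")
        var prompt: String

        @Guide(description: "Four possible answers, exactly one of them correct")
        var choices: [String]
    }

    func makeQuestion(from notes: String) async throws -> QuizQuestion {
        let session = LanguageModelSession()
        let response = try await session.respond(
            to: "Write one multiple-choice question covering: \(notes)",
            generating: QuizQuestion.self
        )
        return response.content  // a typed QuizQuestion, not free-form text
    }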

Steve Troughton-Smith:

The wider tech press seems to think that Apple failed to show anything meaningful to do with AI at WWDC, without understanding that access to the Foundation Models is bigger than anything Apple announced at last year’s WWDC with Apple Intelligence. It’s what will give a million apps new AI features, and it’s built-in, and free.

As much as I want Siri to not suck, I have ChatGPT on all my devices, and that solves 95% of the use-cases I have.

Drew McCormack:

Having played with the new Foundation Models framework and thought about ways we can use it in our apps, I think it could be Apple’s Trojan horse for AI. It barely gets a mention in mainstream media, understandably, but it leverages Apple’s developer base. I think we are going to see very creative uses in apps, and Apple just have to iterate year on year (eg add private cloud compute next year).

Drew McCormack:

Was optimistic about Foundation Models yesterday, and today I think I know why they didn’t ship the improved Siri. The local model really is pretty thick. I thought it would be capable of stringing together tool calls in a logical way, and sometimes it is, but other times it fails to understand. Exactly the same prompt will work one time, and fail the next. Sounds like what Apple was saying about the new Siri.

Kuba Suder:

Ahh wait, so the Foundation Models thing will only work on the latest and greatest phones, right? 🫤

It doesn’t fall back on Private Cloud Compute.

Steve Troughton-Smith:

Oof, the FoundationModels framework is not exported to Mac Catalyst in Xcode 26 seed 1 😫 That puts a damper on prototyping

Matt Gallagher:

The Foundation Models in macOS 26 are quantized to 2 bits? I’m amazed anything coherent comes out.

Peter Steinberger (Reddit):

Apple’s on-device AI has a brutally small 4,096-token window.

Jordan Morgan:

In this post, I’ll show you the basics of how to get started with the Foundation Models framework, and we’ll even make a few calls to its API to generate some responses.
