Archive for August 11, 2023

Friday, August 11, 2023

AppKit vs. SwiftUI: Stable vs. Shiny

Milen Dzhumerov:

Mitchell Hashimoto has been working on a new cross-platform terminal written in Zig and posted an update on the project’s progress. […] So, usage of SwiftUI constrained the product to have bugs and missing features.

[…]

Because of its maturity, AppKit does not change often nor significantly: it provides a stable foundation to build upon. Desktop OS innovation is quite slow as resources are focused on mobile and spatial. In turn, this means lower likelihood of breaking changes on each major release and more time to focus on your product.

[…]

SwiftUI is tackling a much harder problem along multiple dimensions[…]

[…]

SwiftUI can be thought of as a unifying rewrite of AppKit and UIKit, so the usual rewriting caveats, risks and benefits apply.


Unraveling the Digital Markets Act

iA:

When Facebook introduced Threads on July 5th, they excluded Europe due to non-compliance with the Digital Markets Act (DMA), an EU regulation effective since May 2, 2023. The question arises: Did the DMA function as intended, or were Europeans penalized by flawed legislation?

To comprehend the DMA’s relevance to us as an independent software company, we read and analyzed it from beginning to end. Our investigation aimed to determine if the criticisms, portraying EU laws as inefficient and uninformed, were justified.

[…]

To prevent gatekeepers from unfairly benefitting from their dual role, it is necessary to ensure that they do not use any aggregated or non-aggregated data, which could include anonymised and personal data that is not publicly available, to provide similar services to those of their business users. That obligation should apply to the gatekeeper as a whole, including but not limited to its business unit that competes with the business users of a core platform service.

[…]

To ensure contestability, the gatekeeper should furthermore allow the third-party software applications or software application stores to prompt the end user to decide whether that service should become the default and enable that change to be carried out easily.

[…]

The gatekeepers should, therefore, be required to ensure, free of charge, effective interoperability with, and access for the purposes of interoperability to, the same operating system, hardware or software features that are available or used in the provision of its own complementary and supporting services and hardware.

[…]

The gatekeeper shall not require end users to use, or business users to use, to offer, or to interoperate with, an identification service, a web browser engine or a payment service, or technical services that support the provision of payment services, such as payment systems for in-app purchases, of that gatekeeper in the context of services provided by the business users using that gatekeeper’s core platform services.


Update (2023-08-15): Jesper:

To the extent that is realistically possible, this is a piece of legislation that plucks the power bestowed upon a few actors from their hands and puts it back into the hands of the citizens, the customers, the owners.

The world is complicated and there are a number of points where the law will force one trade-off to turn into another trade-off.

[…]

I view this as a cornerstone of civil rights and customer rights in the same vein as the GDPR. The EU does not get everything right and is not the foremost authority on how this all should work. But it is in the same place as the United States Government was before passing the Clean Air Act and Clean Water Act. When the corporations involved have decided that they don’t feel like doing anything, what else is left to do?

Nick Heer:

There remain lingering concerns, like the requirement for interoperability among messaging platforms, which may impact privacy protections. Many E.U. member states have expressed interest in weakening end-to-end encryption. That is not part of this Act but is, I think, contextually relevant.

I am also worried that the tech companies affected by this Act will treat it with contempt and make users’ experiences worse instead of adapting in a favourable way. After GDPR was passed, owners of web properties did their best to avoid compliance. They could choose to collect less information and avoid nagging visitors with repeated confirmation of privacy violations. Instead, cookie consent sheets are simply added to the long list of things users need to deal with[…]

CNET Deletes Thousands of Old Articles to Game Google Search

Thomas Germain (via Slashdot, Hacker News):

Archived copies of CNET’s author pages show the company deleted small batches of articles prior to the second half of July, but then the pace increased. Thousands of articles disappeared in recent weeks. A CNET representative confirmed that the company was culling stories but declined to share exactly how many it has taken down. The move adds to recent controversies over CNET’s editorial strategy, which has included layoffs and experiments with error-riddled articles written by AI chatbots.

“Removing content from our site is not a decision we take lightly. Our teams analyze many data points to determine whether there are pages on CNET that are not currently serving a meaningful audience. This is an industry-wide best practice for large sites like ours that are primarily driven by SEO traffic,” said Taylor Canada, CNET’s senior director of marketing and communications. “In an ideal world, we would leave all of our content on our site in perpetuity. Unfortunately, we are penalized by the modern internet for leaving all previously published content live on our site.”

[…]

Removing, redirecting, or refreshing irrelevant or unhelpful URLs “sends a signal to Google that says CNET is fresh, relevant and worthy of being placed higher than our competitors in search results,” the document reads.

Danny Sullivan:

Are you deleting content from your site because you somehow believe Google doesn’t like “old” content? That’s not a thing! Our guidance doesn’t encourage this.

Nick Heer:

A bunch of SEO types Germain interviewed swear by it, but they believe in a lot of really bizarre stuff. It sounds like nonsense to me. After all, Google also prioritizes authority, and a well-known website which has chronicled the history of an industry for decades is pretty damn impressive. Why would “a 1996 article about available AOL service tiers” — per the internal memo — cause a negative effect on the site’s rankings, anyhow? I cannot think of a good reason why a news site purging its archives makes any sense whatsoever.

It’s quite possible the consultants were taking them for a ride or are just wrong. But it’s also possible that the SEO people who follow this stuff really closely for a living have figured out something non-intuitive and unexpected. Google obviously doesn’t want to say that it incentivizes sites to delete content, and the algorithms are probably not intentionally designed to do that, but that doesn’t mean this result isn’t an emergent property of complex algorithms and models that no one fully understands.

Danny Sullivan:

Indexing and ranking are two different things.

Indexing is about gathering content. The internet is big, so we don’t index all the pages on it. We try, but there’s a lot. If you have a huge site, similarly, we might not get all your pages. Potentially, if you remove some, we might get more to index. Or maybe not, because we also try to index pages as they seem to need to be indexed. If you have an old page that doesn’t seem to change much, we probably aren’t running back to it every hour to index it again.

[…]

People who believe in removing “old” content aren’t generally thinking that’s going to make the “new” pages get indexed faster. They might think that maybe it means more of their pages overall could get indexed, but that can include “old” pages they’re successful with, too.

fshbbdssbbgdd:

Suppose CNET published an article about LK99 a week ago, then they published another article an hour ago. If Google hasn’t indexed the new article yet, won’t CNET rank lower on a search for “LK99” because the only matching page is a week old?

If by pruning old content, CNET can get its new articles in the results faster, it seems this would get CNET higher rankings and more traffic. Google doesn’t need to have a ranking system directly measuring the average age of content on the site for the net effect of Google’s systems to produce that effect. “Indexing and ranking are two different things” is an important implementation detail, but CNET cares about the outcome, which is whether they can show up at the top of the results page.

It would be nice to look at concrete data. Google knows how the CNET pages rank in its index, and CNET knows how its traffic changed (or didn’t) after the deletions. But so far neither is sharing.


Update (2023-08-15): Nick Heer:

The entire point of a publisher like CNet is to chronicle an industry. It is too bad its new owners do not see that in either its history or its future.

Adam Engst:

Though I’m dubious of most SEO claims based on my experience with the TidBITS and Take Control sites over decades, it’s conceivable that SEO experts have discovered a hack that works—until Google tweaks its algorithms in response. Regardless, I disapprove of deleting legitimate content because there’s no predicting what utility it could provide to the future; at least CNET says it’s sending deleted stories to the Internet Archive.

Update (2023-08-16): Chris Morrell:

I will say that Google has a history of publicly stating things about rankings that were measurably untrue. I would not at all be surprised to find out that “content pruning” is actually effective and is just another way Google’s search algos incentivize bad content decisions.

[…]

Google has claimed for years that they crawl client-side JS just fine, but almost everyone knows that’s not true. They’ve also said very clearly that Core Web Vitals are important but experimentation shows they have minimal impact.

I’m not advocating for deleting content on the web, but I do think that Google has put a lot of publishers in a position to second-guess everything because what they say often doesn’t match the evidence.

Update (2023-08-22): Nik Friedman TeBockhorst:

So speaking as someone who’s adjacent to the SEO industry (not my job, but I’ve spent a couple of decades in publishing, digital media, and analytics), I can share a little detail about what I suspect is going on here.

“Content pruning” is a common practice, and largely involves taking down out-of-date content so that readers can focus on more current and/or profitable content. This is routine for large sites, and usually includes updating out-of-date but popular articles. It also has the benefit of trimming the amount of content to manage - spring cleaning, if you will.

From an SEO perspective, Google will dedicate limited resources to indexing any given site (its so-called “crawl budget”). If you take down the pages that aren’t doing you any good because they’re unprofitable, Google stops spending resources on those pages, and stops sending traffic to pages that don’t make money. If you’re lucky and have better pages with relevant content, Google will hopefully send those people to those better pages instead.

[…]

As for why Google says this isn’t necessary, well, CNET and Google have different objectives.

Overlaying Text on Images

Eric D. Kennedy (previous version):

If you hop into Dev Tools and remove the overlay, you’ll see that the original image was too bright and had too much contrast for the text to be legible. But with a dark overlay, no problem!

[…]

Whip up a mildly-transparent black rectangle and lather on some white text. If the overlay is opaque enough, you can have just about any image underneath and the text will still be totally legible.
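
Kennedy’s examples are CSS, but the same idea translates readily to SwiftUI. A minimal sketch, assuming a placeholder asset named “hero” and a guessed 0.4 overlay opacity (neither comes from the article):

```swift
import SwiftUI

// Photo, semi-transparent black rectangle, white text on top.
// "hero" is a placeholder asset name; tune the 0.4 opacity by eye.
struct OverlayCard: View {
    var body: some View {
        ZStack {
            Image("hero")
                .resizable()
                .scaledToFill()
            Color.black.opacity(0.4)          // the dark overlay
            Text("Legible over almost anything")
                .font(.title2.bold())
                .foregroundColor(.white)
        }
        .frame(height: 240)
        .clipped()
    }
}
```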

[…]

A surprisingly good way for making overlaid text legible is to blur part of the underlying image.
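
One loose SwiftUI approximation is to set the caption’s background to a system blur material rather than hand-blurring a copy of the image; again, “hero” is a placeholder asset name and this is a sketch, not the article’s code:

```swift
import SwiftUI

// Blur only the strip behind the caption (materials require iOS 15+).
struct BlurredCaption: View {
    var body: some View {
        Image("hero")
            .resizable()
            .scaledToFill()
            .frame(height: 240)
            .clipped()
            .overlay(alignment: .bottom) {
                Text("Caption over a blurred strip")
                    .foregroundColor(.white)
                    .padding()
                    .frame(maxWidth: .infinity)
                    .background(.ultraThinMaterial) // stands in for blurring the image itself
            }
    }
}
```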

[…]

The floor fade is when you have an image that subtly fades towards black at the bottom, and then there’s white text written over it.
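
A rough SwiftUI version of the floor fade, with an assumed asset name and an assumed 0.7 end opacity:

```swift
import SwiftUI

// Fade the bottom of the image toward black, then place white text in the faded region.
struct FloorFade: View {
    var body: some View {
        Image("hero")
            .resizable()
            .scaledToFill()
            .frame(height: 240)
            .clipped()
            .overlay {
                LinearGradient(
                    colors: [.clear, .black.opacity(0.7)],
                    startPoint: .center,
                    endPoint: .bottom
                )
            }
            .overlay(alignment: .bottomLeading) {
                Text("Floor fade")
                    .font(.headline)
                    .foregroundColor(.white)
                    .padding()
            }
    }
}
```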

[…]

A scrim is a piece of photography equipment that makes light softer. Now it’s also a visual design technique for softening an image so overlaid text is more legible.
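
A scrim can be sketched the same way as the floor fade, just softer and localized to the area under the text; the asset name and the 0.35 opacity below are assumptions:

```swift
import SwiftUI

// A soft scrim: gentle darkening under the title rather than a hard overlay.
struct ScrimmedHeader: View {
    var body: some View {
        Image("hero")
            .resizable()
            .scaledToFill()
            .frame(height: 240)
            .clipped()
            .overlay(alignment: .top) {
                LinearGradient(
                    colors: [.black.opacity(0.35), .clear],
                    startPoint: .top,
                    endPoint: .center
                )
            }
            .overlay(alignment: .topLeading) {
                Text("Soft scrim under the title")
                    .font(.headline)
                    .foregroundColor(.white)
                    .padding()
            }
    }
}
```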

Via Shannon Hughes:

Just set the background color of the UIVisualEffectView (the view itself, not the contentView) to a partially opaque white. And, crucially, skip the vibrancy effect for the text. (As an extra flourish, make the text color black with 70% opacity so the background can show through just a little. And we made the border color black at 40% opacity so it doesn’t compete with the text, which is what you’ve seen in all these examples, but wasn’t something we hit upon until the end.)

[…]

In sum, be cautious when using UIVisualEffectViews over backgrounds you don’t control, but don’t despair. Adding a semi-opaque background color to the effect view might be all you need to get legible text you can count on.
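
Here’s a minimal UIKit sketch of what Hughes describes. The 70% black text and 40% black border come from the quote; the 0.6 white background alpha and the blur style are guesses, since the quote only says “partially opaque”:

```swift
import UIKit

// A blur view with a translucent white background and a plain (non-vibrant) label.
func makeLegibleOverlay(text: String) -> UIVisualEffectView {
    let effectView = UIVisualEffectView(effect: UIBlurEffect(style: .light))
    // Set the background on the effect view itself, not its contentView.
    effectView.backgroundColor = UIColor.white.withAlphaComponent(0.6)
    effectView.layer.borderColor = UIColor.black.withAlphaComponent(0.4).cgColor
    effectView.layer.borderWidth = 1
    effectView.layer.cornerRadius = 8
    effectView.clipsToBounds = true

    // Skip UIVibrancyEffect: add the label directly to the contentView.
    let label = UILabel()
    label.text = text
    label.textColor = UIColor.black.withAlphaComponent(0.7)
    label.translatesAutoresizingMaskIntoConstraints = false
    effectView.contentView.addSubview(label)
    NSLayoutConstraint.activate([
        label.topAnchor.constraint(equalTo: effectView.contentView.topAnchor, constant: 8),
        label.bottomAnchor.constraint(equalTo: effectView.contentView.bottomAnchor, constant: -8),
        label.leadingAnchor.constraint(equalTo: effectView.contentView.leadingAnchor, constant: 12),
        label.trailingAnchor.constraint(equalTo: effectView.contentView.trailingAnchor, constant: -12),
    ])
    return effectView
}
```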
