Wednesday, January 25, 2023

ChatGPT vs. Google

Dave Winer:

I went to ChatGPT and entered “Simple instructions about how to send email from a Node.js app?” What came back was absolutely perfect, none of the confusing crap and business models you see in online instructions in Google. I see why Google is worried. ;-)
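
The full answer to that question really is short. For reference, here’s a minimal sketch of the kind of script such instructions produce, using the popular nodemailer package (one plausible answer, not necessarily what ChatGPT returned; the SMTP host, credentials, and addresses are placeholders):

    // Minimal sketch: send email from Node.js using the nodemailer package.
    // Install it first with: npm install nodemailer
    // The SMTP host, credentials, and addresses below are placeholders.
    const nodemailer = require("nodemailer");

    async function main() {
      // Reusable transport that talks to an SMTP server.
      const transporter = nodemailer.createTransport({
        host: "smtp.example.com", // placeholder SMTP server
        port: 587,
        secure: false, // STARTTLS is negotiated on port 587
        auth: { user: "user@example.com", pass: "app-password" },
      });

      // Send a plain-text message and log the server-assigned ID.
      const info = await transporter.sendMail({
        from: '"Example App" <user@example.com>',
        to: "recipient@example.com",
        subject: "Hello from Node.js",
        text: "This message was sent from a Node.js app.",
      });
      console.log("Message sent:", info.messageId);
    }

    main().catch(console.error);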

John Gruber:

The threat to Google is real. That type of search for a clearly-written one-line programming question used to produce excellent results from Google Search. For a number of years, though, search results for queries like that — both at Google and competing search engines — have been littered with junk generated by content farms.

[…]

The problem with Google Search today isn’t specific to programming questions; it’s the general problem of answering how-to questions in any subject.

The ranking problem is real—these days it’s common for Google search results to be filled with junk. But I think the bigger problem is that Google no longer feels complete. I used to be able to weed out the junk by writing more specific queries. Now, such queries—as well as searches for phrases that I know exist on the Web—commonly turn up nothing.

Update (2023-01-27): John Gordon:

Google can’t find things I’ve written on their blogging platform. (Yeah, Blogger still works.) It’s a husk now.

Update (2023-01-31): Ameya Paleja (via Hacker News):

The popularity of ChatGPT, the online chatbot built by OpenAI, has led many to question the survival of search engines such as Google. Paul Buchheit, the creator of Gmail, has weighed in on the matter as well, tweeting that he thinks Google’s business will last a maximum of two years.

[…]

Google could quickly be pushed into irrelevancy as users flock to simpler answers instead of indexed pages. Even if Google were able to push A.I. products developed in-house into the market almost immediately, Buchheit does not see a way it could do so without destroying the most valuable part of its business.

Twitter to Revert Hostile “For You” Switch

Kyle Barr:

Twitter has reversed course on its extremely unpopular decision to make an algorithmically generated timeline the default for all Twitter users.

[…]

The change will start on the web version of Twitter before “soon” coming to the iOS and Android versions of the app. The move comes just two weeks after the company made the much-maligned decision to force-feed users content based on stuff “You might like.”

It’s not just that they made “For You” the default, but also that it kept switching you away from “Following” after you had selected it, either when you returned to the app or when you were scrolling down the timeline and moved your finger slightly to one side.

I still don’t think this makes the official Twitter app usable, as it does not do a good job of loading all the tweets if you’ve been away from the app for a while.

It also wastes a chunk of valuable screen space showing the tab titles even though I never want to use a different tab.

LaunchBar Actions for Mastodon

Christian Bender:

Search Mastodon accounts and hashtags easily with LaunchBar.

[…]

This action opens the current post or profile in Safari on your home instance.

[…]

This is a simple action to post a status (toot) on Mastodon.

I had forgotten that LaunchBar now has a JavaScript API that can make these sorts of custom actions feel like built-in parts of the app.
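
For a sense of scale, here’s a minimal sketch of what such an action’s default.js could look like, assuming LaunchBar’s run(argument) entry point and its HTTP.getJSON helper (this is not Bender’s actual code; the instance URL is a placeholder, and the instance must allow unauthenticated calls to Mastodon’s /api/v2/search endpoint):

    // Minimal sketch of a LaunchBar action script (default.js) that
    // searches Mastodon accounts. Assumes LaunchBar's run(argument)
    // entry point and HTTP.getJSON helper; the instance URL is a
    // placeholder, and the instance must allow unauthenticated use
    // of /api/v2/search. Not Christian Bender's actual code.
    var instance = "https://mastodon.example"; // placeholder home instance

    function run(argument) {
      if (!argument) return [];

      var url = instance + "/api/v2/search?type=accounts&q=" +
        encodeURIComponent(argument);
      var result = HTTP.getJSON(url);

      if (result.error !== undefined) {
        LaunchBar.alert("Search failed: " + result.error);
        return [];
      }

      // Map each account to a LaunchBar result item; selecting an
      // item opens the profile URL in the default browser.
      return result.data.accounts.map(function (account) {
        return {
          title: account.display_name || account.acct,
          subtitle: "@" + account.acct,
          url: account.url
        };
      });
    }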

Network Connections From mediaanalysisd

Jeffrey Paul (Hacker News):

Imagine my surprise when, while browsing these images in the Finder, Little Snitch told me that macOS is now connecting to Apple APIs via a program named mediaanalysisd (Media Analysis Daemon, a background process for analyzing media files).

[…]

Apple has repeatedly declared in their marketing materials that “privacy is a human right”, yet they offered no explanation whatsoever as to why those of us who do not traffic in child pornography might wish to have such privacy-violating software running on our devices.

[…]

Integrate this data and remember it: macOS now contains network-based spyware even with all Apple services disabled. It cannot be disabled via controls within the OS: you must use third-party network filtering software (or external devices) to prevent it.

Contrary to this post, I think Apple did clearly state that it has abandoned local CSAM detection. And he doesn’t seem to have evidence that his data is being improperly sent to Apple. Still, it’s not clear exactly what mediaanalysisd is doing with the network.

Howard Oakley:

There is no evidence that local images on a Mac have identifiers computed and uploaded to Apple’s servers when viewed in Finder windows.

[…]

Images viewed in apps supporting VLU have neural hashes computed, and those are uploaded to Apple’s servers to perform look up and return its results to the user, as previously detailed.

VLU can be disabled by disabling Siri Suggestions in System Settings > Siri & Spotlight, as previously explained.

Mysk:

No, macOS doesn’t send info about your local photos to Apple. We analyzed mediaanalysisd after an extraordinary claim by Jeffrey Paul that it scans local photos and secretly sends the results to an Apple server.

[…]

We analyzed the network traffic sent and received by mediaanalysisd. Well, the call is literally empty. We decrypted it. No headers, no IDs, nothing. Just a simple GET request to this endpoint that returns nothing. Honestly, it looks like it is a bug.

Mysk:

The issue was indeed a bug and it has been fixed in macOS 13.2. The process no longer makes calls to Apple servers.

Was it also a bug that this happened even though Paul had opted out of everything? Or is there no setting for this? Or did he miss a setting?

Jamie Zawinski:

Ok, that may well be. But when my OS was phoning home on my photos yesterday and happens to not be phoning home on them today… that doesn’t really build trust. Intent matters, and we know what Apple’s intent is because they told us. Code matters, and we are not allowed to see Apple’s code.

Maybe the fact that it phoned home with a null response is only because the test photos didn’t match some magic neural net -- congratulations, Apple didn’t report your test images to the FBI.

We cannot know. But suspicion and mistrust are absolutely justified. Apple is examining your photos and then phoning home. The onus is on them to explain -- and prove -- what they are doing and why.

Update (2023-01-27): Howard Oakley:

Just checked this evening: this hasn’t changed in 13.2.

Nick Heer:

This bug violated users’ trust. The last time something like this happened was with the OCSP fiasco, when Apple promised a way to opt out of Gatekeeper checks by the end of 2021. As of writing, any such option remains unavailable.

[…]

At the same time, it is unwise to trust alarmist reports like these, either. These are extraordinary claims made without evidence, and they can be dismissed unless proven.

Howard Oakley:

Live Text analysis doesn’t generate neural hashes or other identifiers for an image, in the way that Visual Look Up does.

Any connection to Apple’s servers during Live Text analysis is performed before the image has been analysed, and before the extraction of any text. It cannot, therefore, send Apple any image identifiers or extracted text.

Live Text relies on language asset files, which may need to be augmented or updated over a network connection during text recognition.

macOS 13.1 and 13.2 perform Live Text essentially the same, and will both attempt to connect to Apple’s servers in the event that they need to update language asset files.