Saturday, August 1, 2015

Safari vs. Chrome: Power Consumption

John Gruber:

This exemplifies what the “Safari Is the New IE” crowd doesn’t get — Apple’s priorities for Safari/WebKit are very different from Google’s for Chrome/Blink. Innovation and progress aren’t necessarily only about adding new features. 24 percent better battery life is huge.

Safari is not IE because it’s not stagnant. But the original post resonated with me because Safari is no longer bringing me the whole Web. I spend much of my day in FogBugz and Google Docs, and these sites work better in Chrome, or even Firefox, than in Safari. I’ve also had problems with Apple’s own discussion forums in Safari, and there are apparently issues with Apple’s developer site, as well.

I love Safari as an app. Neither Chrome nor Firefox has ever felt very Mac-like to me. Yet I’m increasingly using Chrome because Safari can’t get the job done. I don’t know whose fault it is that these sites don’t work with Safari, but to me as a user it doesn’t really matter. Improving power efficiency is great, but it would probably save even more battery life if Safari were compatible enough that I didn’t have to keep Chrome running.

I’m not writing this as a Web developer who wants to deploy a native app experience via HTML and JavaScript. I’m just a Mac user who prefers Safari and wants to be able to use it.

The Appsmiths

Hansen Hsu (via Wil Shipley):

This dissertation is an ethnographic study, accomplished through semi-structured interviews and participant observation, of the cultural world of third party Apple software developers who use Apple’s Cocoa libraries to create apps. It answers the questions: what motivates Apple developers’ devotion to Cocoa technology, and why do they believe it is a superior programming environment? What does it mean to be a “good” Cocoa programmer, technically and morally, in the Cocoa community of practice, and how do people become one?

A free account is required to download the paper, but it looks like it’s well worth reading—or at least skimming, since it’s over 500 pages.

A Catalog of Functional Refactorings

Simon Thompson and Claus Reinke (PDF) (via Jeremy W. Sherman):

This document is the first draft of a catalogue of refactorings for functional programs. Most are applicable to a variety of modern functional programming languages – with the example code being written in Haskell – but some relate specifically to Haskell and are marked as such.

Swift Array Performance

Airspeed Velocity:

Your reminder that building arrays with reduce, while fun, is accidentally quadratic.

I would have thought that extending an array would be optimized to not make a copy.
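The quadratic behavior is easy to reproduce. A minimal sketch (my own illustration, not Airspeed Velocity's benchmark code): the `reduce` version builds a fresh array with `+` on every step, copying all prior elements each time, while plain `append` mutates in place when the buffer is uniquely referenced.

```swift
// Quadratic: each `+` allocates a new array and copies every element
// accumulated so far, so n steps copy 0 + 1 + … + (n - 1) elements.
let quadratic = (0..<1000).reduce([Int]()) { $0 + [$1 * 2] }

// Amortized linear: `append` mutates the buffer in place when it is
// uniquely referenced, growing the storage geometrically.
var linear = [Int]()
for i in 0..<1000 {
    linear.append(i * 2)
}
```

(For this particular shape, `(0..<1000).map { $0 * 2 }` is the idiomatic linear-time version anyway.)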

Update (2015-08-01): Joe Groff:

I don’t think + is implemented to check for unique referencing at all.

Update (2015-08-03): Airspeed Velocity (tweet):

I was surprised at how surprising some found this. Quite a few people suggested the reduce version could be changed to not do the array copy (I don’t think it can). Some suggested maybe + could be optimized so it doesn’t perform a copy (I don’t think that it could easily, as we’ll see shortly).

[…]

Assuming combine here is { $0 + [transform($1)] }, you can see that + similarly has no knowledge of the fact that we’re actually going to assign the outcome directly to the result variable. We know, on inspecting the code, that it’ll just be fine to add the right-hand side onto the left-hand side, if that were even possible (in theory it is, since even though the array is passed immutably by value, the buffer is a class and so could be mutated since it has reference semantics). But + can’t know that from where it sits. It definitely knows its copy of the left-hand side isn’t the only owner of the buffer. There is another owner too: reduce holds a copy of result – and is about to throw it away and replace it with a new result, but that’s coming after + has run.

Joe Groff:

Swift’s parameter convention is callee-release, so isUniquelyRefd can work inside +. It’d only succeed if + is the last use.

That optimization could lead to optimizer-dependent algorithmic complexity though, sorta like TCO.

“TCO” refers to tail call optimization.
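For context, the uniqueness check Groff mentions is the same one that drives copy-on-write for value types. A minimal sketch of the idea (my own illustration, not Swift’s actual `Array` implementation; the public spelling in current Swift is `isKnownUniquelyReferenced`, while the standard-library-internal check in 2015 was named differently):

```swift
// A toy copy-on-write container: the struct is a value type, but it
// shares a class-backed buffer until a mutation finds the buffer is
// also held by someone else.
final class Buffer {
    var elements: [Int] = []
}

struct COWArray {
    private var buffer = Buffer()

    mutating func append(_ x: Int) {
        // Copy the storage only if another owner also holds it.
        if !isKnownUniquelyReferenced(&buffer) {
            let copy = Buffer()
            copy.elements = buffer.elements
            buffer = copy
        }
        buffer.elements.append(x)
    }

    var count: Int {
        return buffer.elements.count
    }
}
```

In the `reduce` case above, `reduce` itself still owns `result` while `+` runs, so a check like this inside `+` would see a shared buffer and copy anyway, which is exactly the problem Airspeed Velocity describes.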