Thursday, December 3, 2015

Apple Open Sources Swift

Eric Slivka:

As promised, Apple has officially made its Swift programming language open source, making the project available through Swift.org.

Most surprising, to me, is that Apple is reimplementing Foundation on top of cross-platform libraries without using the Objective-C runtime.

Curiously, Apple plans to remove the “NS” prefix from the classes, which would seem to hamper compatibility. [Update (2015-12-03): I guess the idea is that the prefix will also be removed when calling the Objective-C Foundation APIs, so the names would then be consistent. I’m not sure what this means for code that intends to distinguish between NSString and Swift’s String.]
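For concreteness, here’s a minimal sketch (mine, not from the announcement) of the distinction as it stands today; the file name is arbitrary:

    import Foundation

    let native: String = "photo.jpg"    // Swift's value-type String
    let legacy: NSString = "photo.jpg"  // Foundation's reference type
    print(legacy.length)                // NSString counts UTF-16 code units

    // Bridging between the two is explicit when it matters:
    let bridged = native as NSString
    print(bridged.pathExtension)        // "jpg": an NSString-only API

If both types end up being spelled “String”, code like this that deliberately reaches for NSString becomes harder to write.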

The Swift Programming Language Evolution repository is particularly interesting. The current goal is to stabilize the ABI by fall 2016.

Javier Soto:

I think it’s safe to say Apple greatly exceeded all of our expectations with their work open sourcing Swift. Well done!

Henri Watson:

Shout out to Apple for not just having an “initial commit” but actually including the entire project history.

Update (2015-12-03): Russ Bishop has a good summary.

Apple’s announcement:

Swift.org is where the daily engineering work for Swift will take place, as the community of developers work together to bring Swift to new platforms, add new features, and continually evolve our favorite language.

Andrew Cunningham:

Most of that is covered under the standard Apache license, but Federighi tells us that Apple has also included a more permissive runtime exception, “so that if you build code in Swift and parts of the Swift library are generated in your own code, you don’t have to provide attribution in that case.”

Apple engineers working on Swift will start using the GitHub repos, developing the language out in the open.

“The Swift team will be developing completely in the open on GitHub,” Federighi told Ars. “As they’re working day-to-day and making modifications to the language, including their work on Swift 3.0, all of that is going to be happening out in the open on GitHub.”

[…]

“We think [Swift] is how really everyone should be programming for the next 20 years,” Federighi told Ars. “We think it’s the next major programming language.”

Bravo for having real mailing lists.

There are lots of comments on Hacker News and Reddit.

Craig Federighi:

Objective-C is forever. I don’t think anyone should fear for the future of Objective-C. We’re going to continue to support Objective-C for ourselves and the developer community.

We think Objective-C is still a great language, and Apple has an investment in many, many millions of lines of Objective-C, and that’s not going to change.

Variable Capture and Loops

Tim Ekl:

The difference, it turns out, has to do with how variables are bound in loops, and how values are captured in anonymous functions. The Swift (and Objective-C) behavior – which I was most used to at the time of writing – was to bind i as a different immutable value in each loop iteration, then capture a reference to that value each time through.

Go, on the other hand, binds a single mutable value for the entire loop, then captures a reference to that single variable instead, only getting the value in question at the time the function is executed.

[…]

Interestingly enough, we can even “introduce” this bug in Swift code by using a C-like loop instead of the nicer for-in syntax[…] Since this style explicitly uses a single mutable i for the entire loop, rather than binding a new i for each iteration, the “buggy” behavior – printing five sixes – occurs. Swift is even kind enough to make the mutability of i here more explicit, by requiring it be annotated var in the loop declaration.
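To make the difference concrete, here’s a minimal Swift sketch of all three cases (a while loop stands in for the C-style loop, since it shares the single-mutable-variable binding):

    var closures: [() -> Void] = []

    // for-in binds a fresh immutable i each iteration, and each closure
    // captures that iteration's value: prints 0 1 2 3 4.
    for i in 0..<5 {
        closures.append { print(i) }
    }
    closures.forEach { $0() }

    // One mutable variable shared across iterations reproduces the "bug":
    // every closure sees the final value, so this prints 5 5 5 5 5.
    closures.removeAll()
    var j = 0
    while j < 5 {
        closures.append { print(j) }
        j += 1
    }
    closures.forEach { $0() }

    // A capture list copies the current value into the closure instead,
    // restoring the per-iteration behavior: prints 0 1 2 3 4.
    closures.removeAll()
    var k = 0
    while k < 5 {
        closures.append { [k] in print(k) }
        k += 1
    }
    closures.forEach { $0() }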

See also: Capturing references in closures, Capture Lists, The __block Storage Type.

Put Save As Back on the File Menu

Adam C. Engst:

Regardless, the more important question is how you can bring Save As back, if that’s what you’d prefer. I fall into that category — the make-a-duplicate-and-then-save model doesn’t fit with the way I work.

You could remember to press Option when the File menu is showing to reveal Save As or invoke Save As from the keyboard with Command-Shift-Option-S. But that’s fussy, and there’s a way to put Save As back on the File menu permanently, and even replace Duplicate with it, if that’s what you want.

If you assign a keyboard shortcut, the system will show the Save As menu item all the time, instead of treating it as an alternate of Duplicate.
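Engst’s article works through the GUI, but the same trick can be applied from the command line via the standard NSUserKeyEquivalents user default (my sketch, not from the article); the menu title must match exactly, ellipsis included, and apps pick up the change on relaunch:

    # Assign Command-Shift-S to "Save As…" in every app (-g = global domain).
    # In the shortcut string, @ means Command and $ means Shift; the single
    # quotes keep the shell from expanding "$s" as a variable.
    defaults write -g NSUserKeyEquivalents -dict-add 'Save As…' '@$s'

Because Command-Shift-S is Duplicate’s shortcut, this assignment also makes Save As take Duplicate’s place, per the behavior described above.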

The Secret Power of “Read It Later” Apps

Tiago Forte (via Hacker News):

Let’s look at the 4 main barriers to consuming long-form content, and the affordances that Read It Later apps use to overcome them[…]

[…]

Bringing this back to filtering, not only am I saving time and preserving focus by batch processing both the collection and the consumption of new content, I’m time-shifting the curation process to a time better suited for reading, and (most critically) removed from the temptations, stresses, and biopsychosocial hooks that first lured me in.

I am always amazed by what happens: no matter how stringent I was in the original collecting, no matter how certain I was that this thing was worthwhile, I regularly eliminate 1/3 of my list before reading. The post that looked SO INTERESTING when compared to that one task I’d been procrastinating on, in retrospect isn’t even something I care about.

pdkl95:

The “secret power” of Pocket is that someone is making money off selling detailed information on what people read, probably including when, where, and how long a given document is read. They aren’t offering their bandwidth and storage as some sort of charity; those server costs are obviously being covered by surveillance-as-a-business-model.

The database Pocket is building is an incredibly tempting target for many different groups (governments, insurance companies, etc). Even if Pocket isn’t using that data (unlikely), the probability of leaks/theft is high.

The End of Dynamic Languages

Elben Shira (via Michael Feathers):

There is a frantic rush to bolt on a type system to every dynamic language out there. Typed Racket, Typed Clojure, TypeScript, Typed Lua. Even Python has “type hints”. […] The fundamental problem, you see, is that a programming language is not just about code. Implied in the community is a school of thought, a philosophy. And that is difficult to change.

[…]

This is my bet: the age of dynamic languages is over. There will be no new successful ones. Indeed we have learned a lot from them. We’ve learned that library code should be extendable by the programmer (mixins and meta-programming), that we want to control the structure (macros), that we disdain verbosity. And above all, we’ve learned that we want our languages to be enjoyable.

Marcel Weiher:

Looks like someone else confused hash-languages with dynamic languages. If you pass hashes around, the type checker won’t save you.
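Weiher’s point translates directly into Swift (the names here are hypothetical, for illustration): a program that passes dictionaries around gets no help from the type checker, while a plain struct turns the same mistake into a compile error.

    // "Hash-shaped" code: compiles fine, but a wrong key fails only at runtime.
    func sendEmail(to user: [String: Any]) {
        let address = user["email"] as? String ?? "<missing>"
        print("Sending to \(address)")
    }
    sendEmail(to: ["mail": "me@example.com"])  // wrong key, no compiler warning

    // The same data as a struct: misspelling a field name won't compile.
    struct User {
        let name: String
        let email: String
    }
    func sendEmail(to user: User) {
        print("Sending to \(user.email)")
    }
    sendEmail(to: User(name: "Me", email: "me@example.com"))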

Dan Luu:

There are some pretty strong statements about types floating around out there; for the most part, those claims haven’t matched my personal experience. The claims range from the oft-repeated phrase that when you get the types to line up, everything just works, to “not relying on type safety is unethical (if you have an SLA)”, “I think programmers who doubt that type systems help are basically the tech equivalent of an anti-vaxxer”, and “It boils down to cost vs benefit, actual studies, and mathematical axioms, not aesthetics or feelings”. There are probably plenty of strong claims about dynamic languages that I’d disagree with if I heard them, but I’m not in the right communities to hear the more outlandish claims about dynamically typed languages. Either way, it’s rare to see people cite actual evidence.

This is a review of the empirical evidence.

Via Lambda the Ultimate:

Part of the benefits of types allegedly surround documentation to help refactoring without violating invariants. So another future study I’d like to see is one where participants develop a program meeting certain requirements in their language of choice. They will have as much time as needed to satisfy a correctness test suite. They should then be asked many months later to add a new feature to the program they developed. I expect that the maintenance effort required of a language is more important than the effort required of initial development, because programs change more often than they are written from scratch.

This could be a good thread on how to test the various beliefs surrounding statically typed and dynamically typed languages. If you have any studies that aren’t mentioned above, or some ideas on what would make a good study, let’s hear it!

Maxime Chevalier-Boisvert:

That there are fewer dynamic programming languages coming out is an undeniable fact. I’ve written code in statically typed languages such as C, C++, D, and OCaml, and I agree that their type systems help catch certain classes of bugs more easily and rapidly. When writing code in JavaScript, you can run into nasty surprises. Latent, trivial bugs that remain hidden in your code, sometimes for months, until some specific input causes them to manifest themselves.

[…]

Dynamic languages are at a disadvantage. Most of the mainstream ones out there today were designed by amateurs, people with no formal CS background, or no adequate background in compiler construction. They were designed with no regard for performance, and an impractical mash of features that often work poorly together. Most of the dynamic languages you know are simply poorly crafted. This has resulted in some backlash.

[…]

Have static languages won? It seems to me that what people really like about static languages is IDE support for things like simple refactorings and autocompletion, and program analysis that can provide some guarantees and find certain classes of bugs without having to run programs with every possible combination of inputs. It’s perfectly legitimate for programmers to want these things. They help alleviate the cognitive burden of working with large (and small) codebases. But these advantages aren’t inherently advantages of statically typed programming languages. I would argue that Smalltalk had (has) some amazingly powerful tools that go way beyond what the Eclipse IDE could ever give you.

Update (2015-12-04): Michael R. Bernstein:

IMO you’re wrong if you think the “next big language” will be “either” “static” or “dynamic.” It’ll be both, if we’re making progress.

The Success of ARM

ChuckMcM:

What is fascinating is that Intel got into that position by being open: there were no fewer than 12 licensees for its 8086 design, and people had supplanted “expensive, proprietary lock-in” type architectures with more open and cheaper chips. It was the emergence of the PC market, and the great Chip Recession of 1984, when Intel decided that if it was going to stay a chip maker, it had to be the best source of its dominant computer chips. I was at Intel at the time, and it shifted from partnering to competing with the same people who had licensed its chips, with the intent of “reclaiming” the market for CPU chips for itself.

[…]

The relentless pace of putting more transistors into less space drove an interesting problem for ARM. When you get a process shrink, you can do one of two things: you can cut your costs (more die per wafer), or you can keep your costs about the same and increase features (more transistors per die). And the truth is you always did a bit of both. But the challenge with chips is that their macro-scale parts (the pin pads, for example) really couldn’t shrink. So you became “pad limited”. The ratio of the area dedicated to the pads (which you connected external wires to) and the transistors could not drop below the point where most of your wafer was “pad”. If it did, then your costs flipped, and your expensive manufacturing process was producing wafers of mostly pads, not utilizing its capabilities.

[…]

So we had an explosion of “system on chip” products with all sorts of peripherals that continues to this day. And the process feature size keeps getting smaller, and the stuff added keeps growing. The ARM core was so small that it could accommodate more peripherals on the same die, which made it cost effective, and that made it a good choice for phones, which needed long battery life at low cost.

Update (2015-12-03): mrpippy (via Twitter):

The advantages that Apple derives from the A-series SoCs are not due to any inherent advantage of ARM vs. x86, but to the fact that Apple has full control over the design and manufacturing.

The Apple Pencil

Gus Mueller:

It feels absolutely right. Super low latency, palm rejection, and … it just works.

Is it the same as drawing in my sketchbook? No. Of course not. I’m rubbing a plastic tip across a glass screen.

It’s still God Damn Amazing though.

Serenity Caldwell:

But it does something heretofore unexpected: It beats Wacom at its own game. The Pencil is just as good a sketching tool as any Wacom pen. I don’t care that we don’t know its official pressure rating. It’s right. Apple got it right. The pressure, the accuracy, the lag, the palm rejection. My brain is fully and thoroughly tricked into believing it’s drawing on paper, and even the pen on glass sensation can’t convince me otherwise.

Myke Hurley:

When this happens, the weights actually seem to give it momentum, and will propel it forward further and faster than it would have otherwise. Each time the Pencil turns, it acts against itself, as it is moving too quickly to balance, and on it goes, off the table.

In all honesty, I’m not sure that putting weights in the Pencil is the right way to solve this issue. Even Apple’s own Marc Newson put a clip on his recent Montblanc pen, which would stop this from happening. I wish Apple had considered this when designing the Pencil.

Jean-Louis Gassée:

As I write this, the Smart Keyboard and the Apple Pencil are still at least three weeks away. One assumes that the DRI (Directly Responsible Individual) in charge of the iPad Pro Supply Chain now works at a procurement office for rare minerals in Mongolia’s Ulaanbaatar. (An Apple Store employee took pity on me and sold me a Pencil from a freshly arrived shipment. I may need to get another…I’ve left mine at home more than once.)

Liz Marley:

It is a small, expensive device, which I basically expect to lose. And I don’t see a good way to attach a TrackR to its sleek exterior. But! It is Bluetooth-paired to a rather large iPad. There’s no microphone, either in the official description or in the teardown, so it probably can’t be pinged. But maybe an iPad app could still give you a vague hotter/colder indicator? Or, if it went out of range, the GPS location of the last time they were together?

Paul Haddad:

I know the Apple Pencil charges super fast, but is it me or is standby life pretty crappy? Charged it yesterday, haven’t used it, 75% now.

I recently played around with an iPad Pro. The Pencil really is amazing. It is much more comfortable to hold than I would have expected from the photos. Yes, it does not feel like drawing on paper, but there is a bit of texture from the Pencil tip, so it doesn’t really feel like glass, either. It’s faster than other styluses I’ve used, although there was more lag in Adobe Sketch than I was expecting based on the reviews. It took a long time to get it working in the Notes app because of a bug that hid the button for entering drawing mode. (I never had that problem on my iPhone.) I found that I really liked using the Pencil for non-drawing purposes, controlling the interface with a more precise instrument than my finger.

More generally, the iPad Pro seems like a good product, but I, personally, have no use for it. I was not impressed by the keyboard and found the multitasking confusing.

Update (2015-12-04): Christopher Phin:

Let me be completely clear: this is the best digital drawing tool there has ever been. Better than a Wacom Intuos, better than a Wacom Cintiq, and better, by a margin so wide it’s downright comical, than any other stylus for iOS or Android.