On My Misalignment With Apple’s Love Affair With Swift
Dominik Wagner (tweet, Hacker News, Reddit):
On top of all of this, there is that great tension with the existing Apple framework ecosystem. While Apple did as great a job as they could exposing Cocoa/Foundation graspably to Swift, there is still great tension between the way Swift wants to see the world and the design paradigms that created the existing frameworks. That tension is not resolved yet, and since it is a design conflict, it essentially can’t be resolved. Just mitigated.
[…]
If you work in that world you are constantly torn between doing things the Swift/standard-library way or the Cocoa way and bridging in between. To make matters worse, there are a lot of concepts that don’t even have a good equivalent. This, for me at least, generates an almost unbearable mental load. It leads to writer’s block and/or running around in cognitive circles trying to answer questions about how best to express the problem, or how to make Swift happy with my code.
[…]
Yes, Swift code might end up being more correct in the end. It also might alert you to edge cases early on. However, the flip side is it inhibits your creativity while writing.
[…]
In my opinion, a lot of the “lofty goals” haven’t been achieved, and as discussed, should even be non-goals. Just imagine a world where Objective‑C had gotten the same amount of drive and attention that Swift got from Apple. It is not a big leap to see that everyone would be better off right now. Swift just ended up being a jack of all trades, master of none.
I agree with most of what he says, except that overall I do like programming in Swift, and I prefer it to Objective-C most of the time. I suspect I would also like an enhanced Objective-C much more than the Objective-C that we actually have. I remain unconvinced that Swift was the right strategy for Apple to take vs. putting the same amount of effort into improving Objective-C. But that decision was made long ago. I do not, as he suggests, want to see a pivot back to try that path now. I think Swift (and the frameworks and tools) can get to where they need to go, but a long way remains, and it will require continued sustained focus to get there.
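To make the bridging tension he describes concrete, here is a minimal, hypothetical sketch (the custom attribute key is invented for illustration): Cocoa’s NSAttributedString speaks NSRange and UTF-16 offsets, while Swift strings speak String.Index, so code crossing the boundary is forever converting between the two:

import Foundation

let text = "Héllo, Cocoa"
let storage = NSMutableAttributedString(string: text)
let highlight = NSAttributedString.Key("highlight") // hypothetical custom attribute

// Cocoa thinks in NSRange (UTF-16 offsets)...
storage.addAttribute(highlight, value: true, range: NSRange(location: 0, length: 5))

// ...while Swift thinks in String.Index, so each round trip is an explicit conversion.
storage.enumerateAttribute(highlight, in: NSRange(location: 0, length: storage.length), options: []) { value, nsRange, _ in
    guard value != nil, let swiftRange = Range(nsRange, in: text) else { return }
    print(text[swiftRange]) // "Héllo"
}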
Update (2018-06-10): See also these Twitter threads from: Daniel Pasco, Dominik Wagner, Nick Lockwood, Marcel Weiher, Ilja A. Iwas.
Update (2018-06-11): See also: Steven Sinofsky, Alexis Gallagher, Dominik Wagner, Lukas Spieß, Colin Wheeler, Kyle Bashour, Gopal Sharma, Daniel Pasco, Marco Scheurer, Helge Heß, Kristof.
Update (2018-06-12): See also: Stan Chang Khin Boon, Jonathan Gerlach, Marcin Krzyzanowski, Joe Groff, Steven Sinofsky (tweet).
Update (2018-06-21): Stefan Lesser:
I wanted to write a thoughtful article in response to Dominik Wagner’s “On my misalignment with Apple’s love affair with Swift”. During my research I realized that Chris Lattner had already done this. Sort of. 17 months before Dominik published his piece.
Hehe, going full circle. Part of the motivation for writing down my critique was this interview. A lot of the arguments boil down to deferring to the future, are all over the place, or (e.g. “fast”) are not really true.
Lattner’s responses here are fairly level-headed. But the goals mentioned here, from the Swift manifesto, haven’t necessarily been achieved yet. That’s why they’re stated as goals.
Doesn’t mean Swift will never achieve them, but it’s still a give and take vs Obj-C.
It’s also definitely good to have both sides of the [Swift] story told.
I’m still very much torn in between, though. I’m confident that switching to Swift at any point in the past would not have had net benefits to date. (100% ObjC here.)
But I’m also clear that this day will come…
Also… not a single one of the “core problems” of Objective-C stated in Chris’ post seriously affects us. It’s definitely not memory safety that keeps us up at night.
A devil’s advocate could say that Swift is fundamentally solving problems people (we) aren’t having.
Which brings it down to (for us): Everything Swift offers is a long list of very desirable nice-to-haves.
And on that point I’m totally siding with @monkeydom – there are no essential, fundamental issues being solved. (Again: for us.)
Nobody asked, but here I go anyway:
If I had to put my finger on the #1 pain point for us in recent years (!), it’s never been the language.
It’d be frameworks lagging behind in quality (bugs, workarounds) and UI programming paradigms not evolving with the times (e.g. reactive).
My sincere hope, though, is that Swift lays the foundation for that to be tackled one day.
And that once we’re there, it will have fundamental effects on how we do apps.
And so I’m still torn in-between.
I think the (hidden) cost of complexity is undersold, and Swift opened that bottle wide by making all the regular things very complex. And as of now it doesn’t even have the big benefits, e.g. Rust’s concurrency safety or full C++-style speed.
Update (2018-08-13): See also: Hacker News.
Comments:
“Just imagine a world where Objective‑C would have gotten the same amount of drive and attention Swift got from Apple”
It astounds me how people keep ignoring reality. What prevented Objective-C from evolving was not the amount of resources Apple spent on it; it was a simple fact: the number one requirement was that it remain compatible with C and C++.
And anybody who has worked with C++ knows how complex it is to make a language change without breaking anything. That is what the C++ committee tries to do with each release, and even they have a very hard time doing it.
"but a long way remains, and it will require continued sustained focus to get there"
Considering that Swift ABI stability may only arrive in fall 2019 on Apple's OSes, how long can the way be before it becomes way too long?
@Jean-Daniel I completely disagree. I think there’s a lot that could have been done while maintaining compatibility with C/C++. And I think they could have eventually moved to a hybrid approach where Objective-C 5.0 was the last version that retained that compatibility, but which could call back and forth with 6.0 and beyond, as is done with Swift today.
@stephane Too long for what?
@Jean-Daniel What could have been done has actually been prototyped at https://objective.st: a language that moves Objective-C closer to its Smalltalk roots and eliminates the need for a number of C-isms, like loops.
It is not over the top to say that NeXT really wanted Smalltalk but didn't think the hardware of the time could run it with adequate performance. Objective-C was an excellent compromise. The key missing capability in version 1 was blocks/closures. Those were added years ago, and it would have been easy to recast the language as closer to Smalltalk by adding the Smalltalk collection-iteration operations, auto-wrapping numbers, structs, etc. All of Swift's innovations to hide C could have been applied to Objective-C in a more cohesive fashion, with a more elegant syntax, without littering the language with compiler concerns and onerous access-control requirements.
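For reference, the Smalltalk-style collection operations meant here look like this in today's Swift; this is just a sketch of what closure-based iteration buys, not a claim about objective.st's actual syntax:

let prices = [12, 5, 30]

// Smalltalk's collect:/select:/inject:into: map directly onto closures:
let doubled = prices.map { $0 * 2 }     // collect:     -> [24, 10, 60]
let cheap   = prices.filter { $0 < 10 } // select:      -> [5]
let total   = prices.reduce(0, +)       // inject:into: -> 47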
Unfortunately, type safety is trendy now. Personally, type errors were never a big problem in my Objective-C code.
One thought I keep coming back to: As someone without a formal CS background, Objective-C’s metaphor of message-passing really helped me wrap my head around how the frameworks operate. ‘Function’ just seems so opaque and dry compared to ‘message’ (or even ‘method’).
Also, although I cheered their elimination during the original Swift announcement, I kinda miss header files now (or at least some kind of spatially-distinct @interface)... they made it easier for me to organize my thoughts and have a single place to look for a succinct overview of what a class does. Relying on synthesized headers or the jump menu just seems like a step back in retrospect.
It’s a shame, because Swift could lean on its advantages more if it didn’t have to interact with Cocoa. There are some app-oriented functional languages (Elm in particular) where lots of folks say the constraints help empower them to iterate and refactor quickly (knowing the compiler has their back). The contrast in developer attitudes toward Elm and Swift is rather stark.
@Michael Too long for people to consider it a viable solution for future development.
Swift is like Tesla: a lot of unachieved goals. People are still waiting to get their hands on the promised public version. More and more evidence of failure. But there are still a lot of fans, and it's still super cool to say that you're coding in Swift.
From where I sit, Swift's unachieved goals include:
- it's not good as a scripting language
- it's not good as a systems language. If I understood correctly, Network.framework was built in C, with Obj-C and Swift wrapping APIs. Okay…
- it's not a good language for command-line tools. Interfacing with BSD C APIs is still a pain (see the sketch after this list). Parsing arguments is still awkward.
- it's not a good language to work with Cocoa. Or Cocoa is not good at working with Swift.
- I'm under the impression that its availability on Linux is not a resounding success, making it effectively an Apple-platforms-only language.
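For a taste of the BSD-interop pain mentioned in the command-line point above, here is a small sketch of calling a plain C API (getcwd) from Swift:

import Darwin

// Calling a BSD C API from Swift: manual buffer allocation and pointer juggling.
var buffer = [CChar](repeating: 0, count: Int(PATH_MAX))
if getcwd(&buffer, buffer.count) != nil {
    print(String(cString: buffer)) // current working directory
}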
@stephane There are certainly unachieved goals, but it does keep getting better, and it still seems to be gaining mindshare. Apple itself is using it more. I just don’t see people suddenly deciding that it’s not viable. Plus, it’s harder to switch back to Objective-C because it can’t subclass Swift classes.
Jean-Daniel: "I astounding how people keep ignoring the reality. What prevented Objective-C to evolved was not the amount of resources Apple spend on it, it was a simple fact: the number one requirement was that it remains compatible with C and C++."
Nonsense. ObjC was designed to be compatible with C; it even predated C++. And the real purpose of ObjC was to be a first-class Cocoa App development language, which it was. Languages are ten-a-penny. All the real investment and asset value is in the libraries (frameworks), so it's the language's job to fit those, not vice-versa.
Nor did this prevent Apple from creating a safe modern alternative to ObjC that was a first-class Cocoa client, while eliminating all the C-isms that make it such a menace for modern app development. That solves the C compatibility problem—just write those portions in ObjC same as before, and expose to the new language as standard ObjC classes. The only thing Apple's new Cocoa development language needed to do to be a success was provide modern syntax, type inference, and memory safety, and avoid creating impedance mismatch with Cocoa.
Swift is not that language, because Swift was never designed to be that language; it was a compiler engineer's pet vanity project, designed to be a better C++, that caught the attention of managers who presumably didn't know better and engineers who should have, and got itself elected to a job it was never the right choice for. A solution in search of a problem, and the absolute worst cost-vs-value proposition they could have chosen. And once Swift was announced at WWDC, Apple was locked into it no matter how troublesome or expensive it would prove to be, because to admit the mistake would mean a loss of public face (and a good few heads too), so SOP for corporate blunders: double down and hope they get away with it.
So now Apple's five or six years into reinventing the same app-development wheel it had before, and the only reason its competitors haven't totally gotten the jump on it is that their dev platforms have their own bunch of problems too. (IMO, Google should just buy JetBrains and make Kotlin THE language for Android development going forward. What Swift did wrong, Kotlin did right.)
Now there's talk of Swift getting a Rust-style memory model and using it for systems programming as well. Why? How the hell does this contribute to Apple's bottom line? This is early 90s Apple all over again: engineering teams operating as laws unto themselves, playing with their toys for their own ego and pleasure, instead of delivering what the business needs to build marketshare and keep competitors at bay. I seriously doubt it would've gotten this far under Jobs; the man made mistakes, but he knew how to own and correct them within a year (e.g. Cube), and still come out smelling of roses at the next WWDC. Cook's Apple keeps duds like the Mac Trashcan lying around for half a decade and doesn't even notice the stink of failure.
..
TL;DR: Apple fucked up. Best thing developers and users can do is acknowledge that, cos then they can do the cost-vs-benefit calculations as to whether it's worth their own time to grab a shovel and jump down Apple's hole as well. If it is, crack on; if not, there are other platforms out there that may increasingly provide better ROIs by pushing ahead and genuinely innovating App development while Apple's still stuck reinventing the same old wheel from 20 years ago.
@Michael To your point of 'suddenly deciding [Swift] is not viable', I'm certainly leaning that way, but I don't have any shipping Swift code so it's easier for me to do that. Right now I'm experimenting with using Visual Studio and C# or F# to interact with Cocoa, just to see how it compares to ObjC and Swift. If I come across any useful insights I'll be sure to write up a post about them.
(One silly tidbit so far: A native Cocoa 'hello world' app built with F#/Xamarin.Mac — .NET bundled and all — is smaller than the same app built with Swift/Xcode! Of course that'll change once Swift gets ABI stability, but I thought it was amusing nonetheless.)
@has Do you have any thoughts on Kotlin vs Dart? Seems like Google is starting to lean on the latter, since it ties into Flutter and whatever they're planning for Fuchsia.
@remmah Yeah, I think it would take a lot for anyone who already has shipping code to drop it, especially given that Swift will (likely) only get better. I expect that (unless Apple does a major pivot) Swift will be increasingly required to do certain things on Apple’s platforms, so developers may not be able to ignore it (though they can certainly write the core of their apps in a different language). Objective-C will last longer than Carbon, but its prominence will continue to decline.
@remmah: The real value of functional languages is that they can reason independently of time, and thus determine the appropriate sequence of operations for you. Lots of pretenders, unfortunately, but when you look at specific domains such as massively parallel/distributed programming, or even just gluing Models to View, that is absolutely the right tool for the job.
An ideal ObjC replacement would be componentized in such a way that the same set of datatypes could be reused for both traditional imperative and pure functional reactive programming. Write your model in traditional imperative code, and glue it to its View—or Views (since I’m an AppleScript weenie)—using FRP. Now everything that users see is kept in perfect sync with application state, with absolute minimum effort on the developer’s part (just describe the relationships, and let the runtime worry about everything else).
Having recently spelunked the Swift + Cocoa Bindings hole for the first time, I would, on balance, have much preferred to be eaten by cave monsters. A good language could have learned from those design intentions and implementation mistakes, and done it right; a quiet but real blinder, delivering apps that do what users want, faster, cheaper, simpler, and more robust.
The worst thing about Swift is that it delivers the opposite: maximum grandstanding that demands the most work while delivering the least benefit. Too disruptive to be a good Cocoa client, yet too conservative to blow it away. When Alan Kay and VPRI cooked up Nile and Gezira, the goal was to provide a 10x savings in LOC for every DSL layer created. Gezira was built atop two layers of DSL, hence a 100x LOC saving over the equivalent C/C++ code. Swift provides, at best, a 2x saving in LOC, while adding 2x complexity (two [NS]String types, two [NS]Array types, etc, etc); frankly it needn't have bothered. What a wasted opportunity.
@Michael: "Swift will be increasingly required to do certain things on Apple’s platforms, so developers may not be able to ignore it"
Absolutely; developers cannot do otherwise.
They may, however, choose to walk away, should Apple’s platform become sufficiently uncompetitive that it is no longer worth their effort. Just good business.
When NeXTStep shipped it was 20 years ahead of its time; now it’s a decade behind. Swift’s only true contribution is to suck up another decade’s effort without actually moving anything forward. Like watching a supertanker trying to turn, right now I’d put my own money on the iceberg. A pity; but I’m not an Apple shareholder so they don’t owe me shit, and vice-versa. Just wish it didn’t screw with my own plans to make my millions before I run for the hills; I have enough headaches as it is. :p
I think part of the author's issue is that he has only ever tried imperative languages in the Modula-2 family, a family that includes C++, Java, ObjC, C#, Python, and to a lesser extent C and Lua.
Swift, like Rust and F#, is an immutable-first, functional-first language in the ML family (Rust, Haskell, F#, SML), albeit with the functional stuff considerably toned down.
Like F#, Swift also has an OOP layer added for interaction with the legacy codebase, and like F#, there are occasional impedance mismatches, because the legacy library was designed around the semantics of a different language in a different language family.
The core philosophy of the ML family has been to ask the programmer to provide more information up front so that more bugs can be detected at compile time rather than at runtime. Swift naysayers have blithely overlooked the impossibility of buffer overflows and null-pointer exceptions. In Rust's case, with no legacy, they've been able to take this further and do automatic memory management (almost) entirely at compile time, plus compile-time data-race detection. Apple hired many of the Rust devs, and they're adding this now with the "Ownership Manifesto".
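For instance, a trivial sketch of how Swift's optionals turn the null check into a compile-time obligation rather than a runtime crash:

let names = ["Alice", "Bob"]

// first(where:) returns String?, so "no value" is part of the type.
let found = names.first { $0.hasPrefix("C") }

// print(found.count)  // compile-time error: Optional<String> has no member 'count'
if let name = found {
    print(name.count)   // only reachable when a value actually exists
} else {
    print("no match")   // the nil case must be handled explicitly
}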
So Swift's origin, philosophy, and future are clear. It's badly hobbled by having been released too early (Swift only caught up to Rust 1.0's features in the last year) and by a bulky legacy interface that has made the language rather large. I suspect, as Chris Lattner mentioned on the ATP podcast, that there'll be a "strict-mode" Swift in five years that deprecates a lot of the ObjC layer. *That* language will be ideally suited to a networked-first, multi-core-first, stable-first world.
@michael "it still seems to be gaining mindshare."
That's not what the "languages popularity" polls I looked at show for 2018. But let's wait for the end of the year.
@stephane I was thinking of stuff like this and what I see people talking about. (But maybe the Objective-C developers are just busy getting work done instead of writing about their language.) We’ll see.
Bryan: "Swift, like Rust and F#, is a immutable-first, functional-first language in the ML family (Rust, Haskell, F#, SML); albeit with the functional stuff considerably toned down."
🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣
There is NOTHING functional about Swift. Functional languages are DECLARATIVE; Swift and Rust are inherently IMPERATIVE. Programming with functions != Functional Programming.
What makes functional languages special is that they are time-independent. Functional programs describe the relationships between their inputs and outputs mathematically, leaving the compiler and runtime to reason about which operations the program needs to perform, and when, in order to transform a given set of inputs into the corresponding output.
That both use functions for easy packaging and reuse of code is neither here nor there. What distinguishes functions in FPs is that they're REFERENTIALLY TRANSPARENT: for any given set of inputs, the corresponding output will ALWAYS be the same, no matter how or when they're run [1]. And this behavior is GUARANTEED by the compiler and runtime.
Essentially, where a language like Swift automates low-level memory management, saving the developer from having to specify exactly what memory to allocate/deallocate and when, a language like Haskell automates high-level decision-making, saving the developer from having to specify the exact sequence in which operations must be performed. Lifting that restriction enables a functional language to make lots of very powerful decisions at runtime: e.g. it can defer evaluating code until/unless actually needed; it can subdivide and offload work to any other cores/devices available; it can even determine when it needs to run automatically (e.g. FRP).
That kind of higher-level mathematical reasoning is IMPOSSIBLE in imperative programs, because they cannot promise or enforce the prerequisite guarantees in the first place. All they can promise to do is exactly what the developer tells them to do, in the order the developer tells them to do them, at the time the developer wrote that program.
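In Swift terms, a minimal sketch of the distinction being drawn (note that Swift itself does not enforce it, which is the point):

// Referentially transparent: the output depends only on the input.
func double(_ x: Int) -> Int { return x * 2 }

// Not referentially transparent: the output depends on when you call it.
var counter = 0
func next(_ x: Int) -> Int {
    counter += 1
    return x + counter
}

print(double(21)) // always 42; a compiler could cache, reorder, or parallelize this
print(next(21))   // 22 this time, 23 the next; call order now matters
// Swift happily compiles both; Haskell's type system would force the
// second into IO (or similar), preserving the guarantee described above.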
Imperative languages are crude, dumb, low-level tools; good for banging the rocks together, and that's it. Ditto many modern programmers, alas.
--
[1] This is why performing IO *inside* a functional language is such a pain: a file may be modified between one read and the next, yielding different outputs from the same input. Haskell gets around this by isolating these imperative behaviors inside monads, allowing their time-dependent naughtiness to do what it needs to do without infecting the rest of the program. Thus the compiler/runtime can continue to reason mathematically about the larger program, while allowing sandboxed imperative subprograms to operate inside it to perform that subset of useful tasks which pure FP cannot.
@has What you're describing with respect to referential transparency is the idea of *purity*. What you're describing in terms of not being "time-dependent" and undefined sequencing of actions is laziness, though in fact lazy operations do have an ordering; they just don't have defined execution points.
This is why Haskell is defined as a lazily-evaluated, pure, functional programming language in the ML family; Purescript is an eagerly-evaluated ("strict"), pure, functional language in the ML family; and StandardML, Ocaml and F# are eagerly-evaluated, impure functional languages in the ML family.
What makes Ocaml/F# functional is the liberal use of map/filter/apply, support for currying, and the idea of using a curried function instead of a struct for passing state. What makes it an ML, in addition to these things, is the support for both sum and product types, support for destructuring matches, lightweight syntax for creating types, newtypes, and the idiom of using sum types like Either for error handling in preference to exception throwing.
F# additionally has monad support (it calls them "computation expressions"): it built its async support around monads. It did this without higher-kinded types, in a bit of a compiler hack, but programmers can implement their own monads (e.g. for parsing) nevertheless.
Rust & Swift have most of the features of F#, except for monads and currying, which limits the number of functional programming idioms one can employ, which is why I said "the functional stuff [is] considerably toned down". Rust also doesn't have OOP support.
I would also say that modern imperative languages like Kotlin and in particular C# are quite powerful, and are by no means crude, dumb tools. However, the C# approach of attaching functional features to an imperative, Modula-2-derived language seems to be running out of steam. It would appear the future of industrial programming-language design lies in attaching imperative features to functional, ML-derived languages.
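Concretely, here is a small sketch of the ML-family features Swift does have today: sum types, destructuring matches, and Either-style error handling via the standard library's Result (Swift 5); the Shape example is invented for illustration:

// A sum type with associated values (an ML-style variant type).
enum Shape {
    case circle(radius: Double)
    case rect(width: Double, height: Double)
}

// A destructuring pattern match.
func area(_ shape: Shape) -> Double {
    switch shape {
    case .circle(let r):      return Double.pi * r * r
    case .rect(let w, let h): return w * h
    }
}

// Result in place of exception throwing.
enum ParseError: Error { case notANumber(String) }

func parseRadius(_ text: String) -> Result<Shape, ParseError> {
    guard let r = Double(text) else { return .failure(.notANumber(text)) }
    return .success(.circle(radius: r))
}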
It's kind of interesting looking at this debate as someone who has hated Swift from day 1. The perception (that Michael mentioned) is that Swift is the IN thing, and it has been since release. It seemed to me that Objective-C was dead, even if that was 5 years away. Well, 5 years later, it seems the debate has shifted; note the number of updates. I'm not sure where we will end up, but I do hope Objective-C remains at least on the same level as Swift.
Bryan:
So, if we added map/filter/apply functions (e.g. in categories on the relevant types) to ObjC, would that make it functional too?
@Mndgs
Support for map/filter/apply would enable some functional idioms, similar to Python, but really currying (aka partial application in C#) is essential to being functional. With currying you can have functions like
add :: Int -> Int -> Int
times :: Int -> Int -> Int
And then write (using Elm notation)
myfn listVar = listVar |> map (add 2 .> times 4) |> filter < 4
or, point-free:
myfn = map (add 2 .> times 4) .> filter < 4
and datatypes that take curried functions, like
data Stateful = Stateful (s -> (a, s))
where the parameter of Stateful is presumably a more complex function that's been curried.
People tend to make the presumption that Haskell = functional: this disqualifies strict languages like Elm, Purescript and Clean, and "impure" languages with opt-in mutability like F# and Ocaml.
(Aside: With the IOVar-in-Reader pattern that is presently popular in Haskell, a lot of Haskell code has non-obvious mutability too)
The reason the line seems blurred now is because all of the old imperative languages have copied as many functional idioms as they can in the last ten years. Now we're seeing things go in a different direction with Rust and Swift, where designers are starting with impure functional languages (usually MLs) and porting over some imperative idioms and familiar C-family syntax.
@Mndgs
So a stray < munged that last comment. Basically, with map/filter/apply and currying, I was trying to say that you can write:
myfn listVar = listVar |> map (add 2 .> times 4) |> filter < 4
or
myfn = map (add 2 .> times 3) .> filter < 4
and allow you to create datatypes that take curried functions like
data Stateful = Stateful (s -> (s, a))
None of this requires either purity or lazy evaluation. Absent purity, formal proofs are of course impossible, but I have a hard time believing that StandardML, Ocaml, and F# are *not* functional programming languages.
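For comparison, Swift dropped its early curried-function declaration syntax, but the same idea can be written with functions that return functions; this is a sketch mirroring the Elm example above, with add/times redefined by hand:

// Manual currying: each function takes one argument and returns a function.
func add(_ a: Int) -> (Int) -> Int { return { b in a + b } }
func times(_ a: Int) -> (Int) -> Int { return { b in a * b } }

let result = [1, 2, 3]
    .map { times(4)(add(2)($0)) } // add 2 .> times 4
    .filter { $0 < 20 }

print(result) // [12, 16]: (1+2)*4 and (2+2)*4 pass, (3+2)*4 == 20 does not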
@Michael Interesting data. I would need some time to check it against my theory (which could be totally wrong) of the reasons why people are using Swift willingly or unwillingly.
Regarding people talking more about Swift than Objective-C, it's definitely true. The problem I have with this is that the discussions are precisely about Swift (and to caricature a bit, for example about how to better add 2 integers in Swift) while, previously, the discussions were more about the Cocoa frameworks, how to achieve nice things with Cocoa in the UI area, how to work around limitations or bugs in the frameworks, etc. The discussions were basically about the platform not a programming language.
@Bryan: "@has What you're describing with respect to referential transparency is the idea of *purity*."
What I’m describing is math. A mathematical function describes a relationship. Yes, I'm aware of the religious wars, the preponderance of Scotsmen, the poorly shaved barbers, and so on involved in agreeing on the universal set of "Functional Languages". That is not interesting.
Ultimately, whether you consider "impure functional" languages to be meaningfully functional, or just imperative programming with first-class functions, depends on who you believe owns the definition of "function". Personally I'll go with the mathematicians, because A. they were here first, and B. if "FP" is merely imperative programming with one hand tied behind your back to demonstrate how ingenious you are, i.e. Pretentiously Imperative Programming, then what's the point of it?
Want to call OCaml "Functional"? Honestly, you can *call* things anything you like. Whether your naming of it is *useful* is orthogonal to that. Tell it to Turing; Alonzo Church cares not. But that is also not interesting.
Here’s what *is* interesting:
y = 2 * x
If x is a Model attribute and y is a View attribute, a declarative language can bind those two attributes together in a two-way relationship, such that when x is set to 5, the View automatically displays 10, and when the user types in "24", the Model's state is updated to 12.
Once you start thinking in models of computation where a user (developer) can describe *what she wants*, rather than how to do it, lots of really nasty, intractable procedural problems become very elegant, engrossing mathematical exercises. But a programmer can't even begin to think in those terms until and unless he relinquishes his absolute need to control everything.
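A rough Swift sketch of that idea (a toy, invented Observable type, not Cocoa Bindings or any shipping framework): declare the relationship once, and the runtime keeps it true:

// Toy one-way binding: observers re-run whenever the value changes.
final class Observable<T> {
    private var observers: [(T) -> Void] = []
    var value: T {
        didSet { observers.forEach { $0(value) } }
    }
    init(_ value: T) { self.value = value }
    func bind(_ observer: @escaping (T) -> Void) {
        observer(value) // fire once with the current value
        observers.append(observer)
    }
}

let model = Observable(5)
var viewText = ""

// Declare y = 2 * x once; the runtime maintains it thereafter.
model.bind { x in viewText = String(2 * x) }

model.value = 12
print(viewText) // "24", with no imperative glue at the update site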
..
"It would appear the future of industrial programming-language design lied in attaching imperative features to functional, ML-derived languages."
Whether the future of industrial programming-language design lies in imperative languages with fancy functions, or functional languages rendered meaningless by imperative rot, either way it's a dead end. A language's value proposition lies as much in what it *doesn't* do as what it does. It's only by setting hard boundaries that a computing system can begin to "think smart".
The real future of industrial programming-language design *should* follow the original Unix philosophy: many small, simple tools, each designed to do *one* task well, that interop seamlessly with each other. But seeing how the Unix philosophy has likewise been rendered a meaningless cargo cult by programmers' reflection-free solipsism as they lash ever more impressively unusable blinky buttons to the vast machines they already can't control, I am not going to hold my breath.
Frankly, the sooner that the task of designing languages is taken away from programmers and handed to the people who understand the real-world problems and do the real-world work that those languages need to address, the better. Because right now the worst limitation in programming language design is modern-day programmers themselves.
This is not me being ivory-tower academic either. I wrote a domain-specific declarative language that solves a problem our billion-dollar multinational competitors employing hundreds of highly-trained, highly-paid industrial programming developers have failed to crack in 15 years; and I'm an art school dropout who's never had a CS lesson in his life, so what's their excuse?
..
Humans make languages as tools for self-expression. Those who make languages of micromanagement have already failed at both.
stephane: "The discussions were basically about the platform not a programming language."
Ah, the old days. Welcome to the shiny new world of Swift, the tail that wags the dog.