Static Typing vs. Testing
My problem is that both [Swift and Kotlin] have doubled down on strong static typing. Both seem to be intent on closing every single type hole in their parent languages.
[…]
But before you run out of fingers and toes, you have created languages that contain dozens of keywords, hundreds of constraints, a tortuous syntax, and a reference manual that reads like a law book. Indeed, to become an expert in these languages, you must become a language lawyer (a term that was invented during the C++ era.)
[…]
Why are these languages adopting all these features? Because programmers are not testing their code. And because programmers are not testing their code, we now have languages that force us to put the word open in front of every class we want to derive from. We now have languages that force us to adorn every function, all the way up the calling tree, with try!. We now have languages that are so constraining, and so over-specified, that you have to design the whole system up front before you can code any of it. […]
All these constraints, that these languages are imposing, presume that the programmer has perfect knowledge of the system; before the system is written.
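For concreteness, here is a minimal Swift sketch (not from Martin’s post; the class and function names are made up) of the two constraints he is pointing at, open and try:

```swift
// Only `open` classes may be subclassed from outside their module;
// `public` alone no longer permits it.
open class OpenView {}        // clients of the module may subclass this
public class SealedView {}    // visible to clients, but subclassing it elsewhere is a compile error

// Errors must be acknowledged at every level of the call tree:
// propagate with `throws`, handle with do/catch, or assert with `try!`.
struct ParseError: Error {}

func parse(_ text: String) throws -> Int {
    guard let number = Int(text) else { throw ParseError() }
    return number
}

func readSetting(_ text: String) throws -> Int {   // callers must be marked `throws` too...
    return try parse(text)
}

let port = try! readSetting("8080")                // ...or the chain ends in try!/do-catch
```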
Not in total agreement with the post, but quite a lot of agreement.
A type system is only as static as the code it describes. If you can change code, you can change types! The machine can even help you.
I don’t think it’s an either/or situation at all: we can have a solid type system and write “manual” tests.
A type checker actually does testing for you. It’s not a replacement for TDD, but it allows you to completely get rid of a whole bunch of tests.
[…]
In Swift, once you change foo to return an Int?, the compiler will now show an error for each time you call foo. This makes it easy to make that change, because until you have reviewed every single call to foo, the program simply won’t compile. I think of the compile errors as a todo-list, not as a speed bump.
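A minimal sketch of the change being described, with a hypothetical foo:

```swift
func foo() -> Int? {          // previously declared as: func foo() -> Int
    return Int("42")
}

// let doubled = foo() * 2    // no longer compiles: value of type 'Int?' must be unwrapped

if let value = foo() {        // each call site now has to acknowledge the optional
    print(value * 2)
}
```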
I’m pro testing, but I think this matches my thoughts exactly: almost anything you can do to get errors earlier in the dev process is worth it.
I like both testing and types. The problem is that sometimes types get in the way. It would be nice to be able to do some work in a more dynamic style and then nail things down before shipping. That is the promise of type inference, but it’s not the reality of developing in Swift today.
Update (2017-01-12): Isaiah Carew:
i wish i could have both. static typing is really useful sometimes. really a burden other times.
My main problem with Swift has always been this: far too pedantic about types and complete lack of a dynamic runtime.
Update (2017-01-13): Mark Bernstein:
Reading this debate, I was startled to realize one thing I’d overlooked: I almost never make type errors. I make plenty of errors – for me, programming is always about correcting errors – but type errors are once in a blue moon affairs.
I wonder if strong typing is a solution to a problem we don’t really have, or whether I’m just Doing It Wrong.
I concur. I think Swift’s optionals solve a real problem. The stronger typing, e.g. for collections, can make calling APIs more concise and help with auto-completion, but I don’t see this preventing many errors for me.
Update (2017-01-14): Robert C. Martin:
To be clear, and at the risk of being repetitive, that blog was not an indictment of static typing. I rather enjoy static typing. I’ve spent the last 30 years working in statically typed languages and have gotten rather used to them.
My intent, with that blog, was to complain about how far the pendulum has swung. I consider the static typing of Swift and Kotlin to have swung too far in the statically type-checked direction. They have, IMHO, passed the point where the tradeoff between expressiveness and constraint is profitable.
[…]
You might therefore say that the type system is a kind of “test” that fails for all inappropriate invocations of f. I might concede that point except for one thing – the way f is called has nothing to do with the required behavior of the system. Rather it is a test of an arbitrary constraint imposed by the programmer. A constraint that was likely over-specified from the point of view of the system requirements. […]
So what the type system is checking is not the external behavior of the program. It is checking only the internal consistency of the program text.
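A minimal sketch of that distinction, using a hypothetical f: the signature rejects ill-typed calls, but it says nothing about whether the body meets the system’s actual requirements.

```swift
func f(_ celsius: Double) -> Double {
    return celsius * 9 / 5 + 32    // a wrong formula here would type-check just as happily
}

let boiling = f(100.0)             // compiles, evaluates to 212.0
// let bad = f("hot")              // rejected: cannot convert value of type 'String' to 'Double'
```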
13 Comments
I loved Modula-2 (and Pascal, to a point) over C very much back in their day, because they were more explicit and helped avoid sloppy coding errors. However, most of the "cool" devs thought that M-2 would restrict them too much (which was nonsense, other than perhaps its verbosity, which I avoided with a syntax-aware editor) and preferred C. Which in turn gave us all the bugs in key system software that we still have to suffer from, half of which could have been avoided.
Nowadays, however, I think Swift is overdoing it. Instead, I love coding in Xojo, which is a derivative of Visual BASIC. Of course, that makes me totally uncool. But my software, while often not pretty, is very stable overall, and gets high praise for being efficient and dependable.
"It would be nice to be able to do some work in a more dynamic style and then nail things down before shipping. That is the promise of type inference, but it’s not the reality of developing in Swift today."
No, type inference eliminates the need to explicitly declare the type of every single variable. You're thinking of soft typing, where the compiler warns the author but does not reject a program if it cannot confirm a variable's type. That can then be tightened to strict static typing later on.
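To illustrate that first point with a small Swift sketch (the names are just for illustration): inference removes the annotations, not the checking.

```swift
let count = 3                            // inferred as Int
let names = ["Ada", "Grace"]             // inferred as [String]
let lengths = names.map { $0.count }     // inferred as [Int]

// Still fully statically checked, annotations or not:
// let nonsense = count + names          // rejected at compile time
```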
Soft typing's a really good user-friendly feature too, because it means the language automatically adapts to allow both 1. quick-n-dirty coding as you initially explore the problem space and rapidly experiment to understand it and how best to solve it, and 2. guaranteed full type-correctness once you've picked your chosen solution and are writing the final production-quality implementation for it. In other words, the language adapts to the user and the different ways in which she codes at different stages of project development, instead of forcing the user to adapt to the language and the obtuse and artificial limitations its authors put in just because they were bone-lazy/control-freaks.
Alas, all the popular languages we have today aren't remotely that sensible—probably cos they're all designed either by 1. cowboys (the Rossums and Matzes) or 2. martinets (the Goslings and Lattners) in order to please all the other cowboys/martinets just like them (and piss off everyone else who isn't). Needless to say, I despise both groups. Kay calls them a "pop culture"; I maintain that Kay was being far too polite in leaving out the other "o".
BTW, if you want to see some decently-designed type systems, go study Haskell and ML. The C* type systems are all designed by amateur hackers, not mathematicians. Type systems are all about performing set algebra and generating mathematical proofs, not about allocating different-sized chunks of memory on the stack. I barely grok the beginnings of formal type theory, and even I can see at a glance that they're day and night in comparison.
(Never trust C* weenies' expert knowledge and experience on anything unless and until you've compared and verified it against multiple independent non-C* sources; for all their absolute confidence, half the time they're Not Even Wrong.)
@has Soft typing is also good, but I really did mean “type inference”: the idea that the compiler can figure out the types so that I don’t have to declare them all. I would then add them later for documentation purposes or to resolve ambiguities to improve performance. Swift can do this, but only to a limited extent.
Even better: if the compiler can infer types, then it should do that and, at compile time, optionally go and fill in everything for me.
@Michael: I really do not think you mean "type inference". That only distinguishes between manifest (explicitly written by programmer) and inferred (automatically calculated by compiler) typing. Either way your program is still 100% typed (i.e. "static"), so when a variable's type cannot be accurately inferred, the type inferencer has no other choice than to type it as Any. In a typed language, the compiler will then bar you from calling 99.9% of useful functions on that variable's content until you add an explicit cast to a known type to your code.
What you're asking for is the right to call ALL functions on an Any variable's contents and see what happens, just as you can in untyped ("dynamic") languages such as Python and Ruby, and receive a run-time "Can't call foo() on object of type Bar" error if the variable's current value is not of an acceptable data type.
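A small Swift sketch of that difference (the value here is hypothetical):

```swift
let mystery: Any = "hello"

// mystery.count                      // rejected: value of type 'Any' has no member 'count'

if let text = mystery as? String {    // an explicit downcast is required first
    print(text.count)                 // 5
} else {
    print("not a String")             // the wrong-type case only surfaces at runtime, after the cast
}
```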
...
Yeah, discussions of types are always confusing, as not only does a type system consist of multiple orthogonal behavioral traits (typed vs untyped, strong vs weak, inferred vs manifest, etc), but formal type theorists and ordinary jobbing C* users don't even use the same damn jargon!
e.g. What type theorists call the "typed vs untyped" axis, pop coders call "static vs dynamic" (or whatever else they might think it's called; there's a lot of confusing/wrong information and badly [mis]educated programmers polluting the latter world). And while type theorists carefully talk about "tagged values" pop coders blindly blather about "typed values", not even realizing that this term is an oxymoron since type is an attribute of variables (name binding) declared in the source code, not of values (live data) created/stored/manipulated in memory as the program runs.
Hell, a lot of folks even struggle to grasp the distinction between "variable" and "value" correctly, if at all. Much confusion. Huge mess.
@has Yes, I really want both, but in practice we have neither. One case is where the program could be 100% typed but the type inferencer can’t figure it out. There are limitations in practice due to algorithms and performance. But, as I recall, there is also a theoretical result about this, i.e. if the language has certain features it’s known that there is no algorithm that could determine the types. Another case is the dynamic one where you write a program that can’t be statically typed, but nevertheless you could prove using reasoning outside the language’s type system that the right thing happens at runtime.
"i wish i could have both. static typing is really useful sometimes. really a burden other times."
With the new stuff that was added to Objective-C to make Swift less miserable when it comes to interfacing with the existing framework, we probably could get rid of Swift and make most of the static typing worshippers happy.
"Another case is the dynamic one where you write a program that can’t be statically typed, but nevertheless you could prove using reasoning outside the language’s type system that the right thing happens at runtime."
Is that possible? Cites? The point of a type system is to generate a mathematical proof for a program before it is run. Your only other option is to run the program repeatedly for every possible set of valid and invalid inputs and confirm it always produces the correct output (in practice we just test it with a tiny representative subset until we're X% confident it's correct enough, or we'd be testing till the heat death of the universe).
A type system's job is to guarantee one aspect of program correctness: that every input value will *always* be a member of the set of all valid values for that input; which is why undisciplined behaviors like side effects and partial functions are such a giant pain, wrecking the type checker's ability to reason about the program. Which in turn leads to things like Haskell's monads, which allow those unreasonable elements to be sandboxed from the rest of the program, allowing the type checker to reason about the rest of the program and ignore the evils inside those little black boxes, as they're guaranteed never to escape and pollute the rest of the code.
ISTR seeing something about how some of the Big Branes are also now trying to figure out how to extend typing to describe the presence and action boundaries of side effects as well; that would be very interesting to me, as that's exactly the sort of system I already develop, except mine is purely ad-hoc - write, run, succeed/warn/fail, modify, run again, ... The available metadata is currently just used for runtime decision making and user documentation, but the more added value I can extract out of it, the more the system can assist the user at all stages of development, testing, and running to get it working right more quickly.
Obligatory:
@someone: You're not too far wrong about the right solution, except that there is *precisely* 0% chance of Apple abandoning Swift at this stage, because that would be a *public* admission that they've spent the last 5 years *completely* fucking up. Wanna see what happens when a company *openly* admits that? Look what happened to Nokia when Elop's private internal "platform's fucked" memo publicly announced this fact to the entire world. So even if the Apple board do realize now what a catastrophically unnecessary ball-and-chain they let themselves be shackled to, there's nothing they can do about it except use their huge war chest to power through this whole self-inflicted mess and hopefully get out the other side with a workable platform and enough remaining market inertia to stay in the game. And/or hope that Oracle gamefucks Google hard enough over Java that their Android platform suffers an equivalent 10-year setback, otherwise they'll never catch back up.
I've said from the day Swift was announced that it was the wrong solution for the right problem. *Apple's requirement* was to ADD VALUE to their Cocoa development platform, not degrade its value by forcing on it a massive number of painful changes and clumsier language support. A "safe C" dialect like Cyclone welded to the existing Objective-* runtime, with a new type inferencer and fresh coat of syntax would have delivered that by now without any of this years-long disruption, platform stagnation, and eternal Swift-vs-Cocoa bickering on the right way to do things, and all while Google's competing platform steadily powers ahead of it.
Swift was incredible career building for Mr Lattner, who has rocketed straight to Vice President(!) off the back of it; but as part of Apple's 10-year business plan it was a weak-brained decision for which they'll pay dearly over the decade to come. If you're going to cause that much disruption to your current platform, it better be because what you're replacing it with is at least a HUNDRED TIMES better - e.g. Mac -> iPhone - and if you can't justify it in terms of cold hard revenue and huge new numbers of bums on seats then you don't do it. You go for the incremental, iterative improvement on what you've already got, and anyone whose nose is put out by that decision you sack and replace with someone who understands that their job is to build the company, not their own personal empire within it (cf. Ballmer's Microsoft, where powerful unchecked departments spent all their time destroying each other instead of the company's competitors).
...
Unfortunately computer geeks know the value of everything and the cost of nothing, which means that, left to their own devices, they will always throw out and start over completely from scratch, because that's far more exciting for them and a perfect gift for them to make their own mark. Whereas modifying somebody else's work to improve it is pure scut work for which they'll never get true credit no matter how much better they make it. It's the same with Hollywood, which is why every newly hired/promoted executive's first action is to cancel all the shows and development of his predecessor, regardless of how bad/good/great they were, so he can create his own empire and reputation in their place. (Hey, why d'you think all your favorite TV shows suddenly get screwed on scheduling so no viewers can find them, then canceled abruptly because their "ratings have collapsed"?)
TL;DR: Never attribute to technology planning that which can be adequately explained by ambitious egos and corporate politics. I've never liked Objective-C (and could design a 10x better language to substitute it with my eyes closed), but after a year of Swifting I overall miss it.
Maybe if I succeed in bootstrapping entoli off the back of Swift (and make enough $M from my current project to be financially set for life), I'll get back to cracking my Dragon book and working out how to re-Swift Swift into what it could, should, and would've been had The Steve still been here to keep all the children in order. Limitations are a Good Thing, cos they FORCE you to do better with what you've got... and hey, no-one's more limited than me. ;)
"With the new stuff that was added to Objcetive-C to make Swift less miserable when it comes to interfacing with the existing framework, we probably could get rid of Swift and make most of the static typing worshippers happy."
If only it hadn't made ObjC so ugly.
@has One simple example would be a heterogeneous array of types A and B that are unrelated. You write the code such that the even elements are all As and the odd ones are all Bs, and that all the code that pulls elements out of the array respects that, but there’s no way to convince the compiler that you’re respecting that invariant.
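A small sketch of that example (A and B are placeholder types): the code upholds the even/odd invariant, but to the compiler the array is just [Any], so every read needs a cast it cannot verify.

```swift
struct A { let name: String }
struct B { let value: Int }

let items: [Any] = [A(name: "first"), B(value: 1), A(name: "second"), B(value: 2)]

for (index, element) in items.enumerated() {
    if index % 2 == 0 {
        let a = element as! A      // safe only because we maintain the invariant ourselves
        print(a.name)
    } else {
        let b = element as! B
        print(b.value)
    }
}
```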
@someone @has To me the Objective-C generics and nullability annotations make the language much less pleasant to use without really offering much value (aside from interoperability with Swift). It’s neither here nor there.
Swift is not the language that I would have designed, but neither do I think it’s a “self-inflicted mess.” I think it really does add value and has probably already been a net positive for the platforms.
@Michael: "One simple example would be a heterogenous array of types A and B that are unrelated. You write the code such that the even elements are all As and the odd ones are all Bs, and that all the code that pulls elements out of the array respects that, but there’s no way to convince the compiler that you’re respecting that invariant."
Ah, I see what you mean. When you said "prove using reasoning" I thought you meant a formal machine-generated proof. What you describe is an informal ad-hoc agreement between your code, which consumes the Array on the assumption that it will always be of the form [A,B,A,B,...], and whatever other code populates/modifies that array in the first place.
If you can visually locate all instances of Array-writing code and their algorithms are sufficiently simple and unambiguous that you can confirm they will always produce Arrays of that form, then you, the programmer, can say "I reasoned that this code is correct" and write a few corner-case tests just to double-check yourself**.
(**Cos I dunno about you, but the number of times I've "reasoned" this way that "*Of course* my code is correct! Just read it, it's obvious!", and a few much-begrudged unit tests later it's "Alright, shutup, I'm fixing it now...")
...
Spot of advice: *Never* use the word "prove" when talking about anything type-related. It's like saying "I have a theory" around scientists when what you mean is "I have a random idea I just pulled from my ass". Type theory is hard math, and "proof" in math has a very different meaning from "proof" in layman talk. A type system generates a formal mathematical proof: a logical assertion that is automatically assembled from the program's type information by the type checker. So unless your type checker has bugs, you know that proof is valid cos the machine's already checked it for you.
"Static" type checking is *Automation*. A highly repetitive, complicated, tedious, human-error-prone job that humans can do (slowly and badly) if they have to, but is also a bounded problem with completely predictable rules, which means it's possible to write a program to perform that action for you.
The only thing that continues to astonish me about Algol-y programmers is how very little of their own jobs they bother to automate, or even consider might be amenable to automation. Most have just about swallowed the concept of Automating memory management shitwork; most view typing as Religious Affiliation and would rather fight each other in Holy Wars of attrition than fight the machine until it does all that crapwork for them too. And let's not even mention Automatic decision-making on which operations a program needs to perform and in what order to calculate the result the user has asked for; a.k.a. Declarative Programming (Functional, Logic, etc). What can I say? Should've learned Lisp, cos that Automates *programming itself* (meta-programming).
But then, I've been Automating for 16 years now, and am doing more amazing things today than ever before, and anyone who scoffs at Automation or tries to take it away from me is most welcome to pry it from my cold dead fingers…just as soon as he unwraps them from his own throat. >>:]