Swift and Debuggability
Currently in Swift, those stack traces are even worse. To be fair to Apple, I won’t post one until they actually release it…but why oh why, when they were making a ‘modern’ programming language, could they not solve this? I know, the Objective-C runtime is hailed by many the world over as being fast and awesome. But it’s 2014; the things I actually care about are the problems Microsoft and Sun Microsystems solved: memory management and reliability. If it comes at the expense of a tiny amount of speed, I’ll happily take it.
[…]
As much as people hate Java, and to some extent I’m in that camp too, here’s an equivalent crash from our Android app […] Yes I know, ha ha Null Pointer, Java, LOL. But that’s an exact line number, friends. What did the user do? They tapped the subscribe button. Which page were they on? The Podcast Dialog. Zero ambiguity. Guess how many of our Android crashes we get that for? 100%. On iOS we’d be lucky if even 30% of our crashes had stack traces we could line up with actual things we can then reproduce.
Swift is going to improve things somewhat, because it is memory-safe (provided you don't twiddle with the properly labeled "Unsafe" primitives). It'll help with your own code, but of course it won't help with things written in Objective-C, like the current Apple frameworks, where it is too easy (and even a tradition) to play unsafe. For instance, Cocoa really should use auto-nulling weak pointers when maintaining a list of observers, not unsafe-unretained ones.
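For illustration, here is a minimal Swift sketch of what an auto-nulling observer list looks like. The `Observer` protocol and `ObserverList` type are hypothetical, not Cocoa API; the point is that deallocated observers become nil instead of dangling.

```swift
// Hypothetical sketch of an observer list that holds auto-nulling
// (zeroing) weak references rather than unsafe-unretained ones.
protocol Observer: AnyObject {
    func didChange()
}

final class ObserverList {
    // A wrapper struct lets the array store weak references; each
    // `value` automatically becomes nil when its observer deallocates.
    private struct WeakBox { weak var value: Observer? }
    private var boxes: [WeakBox] = []

    func add(_ observer: Observer) {
        boxes.append(WeakBox(value: observer))
    }

    func notifyAll() {
        // Prune entries whose observers are gone, then notify the rest.
        // An unsafe-unretained pointer here would dangle and crash instead.
        boxes.removeAll { $0.value == nil }
        boxes.forEach { $0.value?.didChange() }
    }
}
```

With unsafe-unretained storage, forgetting to remove an observer before it deallocates leaves a dangling pointer; with the weak version above, the stale entry simply drops out on the next notification.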
Isn't the reason they don't use auto-zeroing weak pointers for observers:
a) They have to be able to use observers everywhere, including the five or ten classes that can't be weakly referenced in Cocoa, because setting good examples is for chumps.
and also b) something something hogwash something auto-zeroing something performance. (In plain English: yes, it'd be nice if you could use something sanely and not have to worry about all these other effects, but you know what would be nicer: if it'd run 1200 times 40 microseconds faster on the iPhone 4S. Yes, this is indeed the forward-thinking, long-term decision-making we should be doing.)
I'm not saying there aren't good reasons they aren't using weak pointers; there are. But there's no question it makes code that uses notifications more fragile, in ways where the errors are non-obvious. Since they are moving toward memory safety in Swift, I expect that if there were an API designed specifically for Swift observers, it would avoid the current gotchas.
The stack trace problem happens because the error surfaces far away from where the actual broken code is. Those are the worst kind of errors to debug. APIs that produce fewer of them are definitely easier to work with.
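Here is a hedged Swift sketch of that failure mode; the `Downloader` class and the `didFinish` notification name are made up for illustration. The bug is a forgotten observer removal in one place, but the trap fires later and elsewhere, when the notification is posted.

```swift
import Foundation

extension Notification.Name {
    static let didFinish = Notification.Name("didFinish")  // hypothetical name
}

final class Downloader {
    init(center: NotificationCenter) {
        // The actual bug is here: the observation token is discarded,
        // so the block outlives self while holding an unowned
        // (non-nulling) reference to it.
        _ = center.addObserver(forName: .didFinish, object: nil, queue: nil) { [unowned self] _ in
            self.handleFinish()
        }
    }
    func handleFinish() { print("finished") }
}

let center = NotificationCenter.default
do {
    _ = Downloader(center: center)
}  // Downloader deallocates here; its observation block does not.

// The trap fires here, far from the real mistake up in init:
// reading an unowned reference after its object has deallocated.
center.post(name: .didFinish, object: nil)
```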
Oh yes, I agree with you, except for the part where I don't think the reasons they're not using weak pointers are still good after so much time.
On the desktop, code can be faster than it needs to be to stay responsive, and then it's reasonable to spend some cycles making things easier for developers.
On mobile devices this is not true. Performance translates directly into battery life, so even if code is fast enough to stay responsive at 60fps at all times, additional performance still improves the user experience.
This is a really stupid rant. That kind of stack trace is not specific to iOS, and iOS gets crashes like the Java one too. The first iOS stack trace looks like a heap corruption issue, so it's not even relevant here (even if the trace were 100% application code, those frames still wouldn't be the source of the bug). The second iOS stack trace was very deliberately chosen to be one where the source of the bug is disconnected from the point where the bug actually triggers. But that's not an issue with iOS.
Meanwhile, the Java trace shows a bug where the source of the bug and the point where it throws an error are the same. But you can get that on iOS too, just as you can write Java code that behaves like the iOS examples.
Ultimately, this has *no relation to the language at all*, and trying to position this as Objective-C vs Java is disingenuous. This is purely a result of iOS frameworks encouraging delegation/notification patterns. I don't know how the same sort of thing is handled in the Android world. But these patterns are not iOS-specific. And they're useful too.
As long as the frameworks are designed this way, it doesn't matter what language you're programming in. About the only way the language matters is that garbage-collected languages won't crash by messaging a deallocated object. But unless the author really means for the entire argument to boil down to "Swift should have GC" (and no, it shouldn't), that's not very meaningful.
@Kevin Not relevant here? Not an issue with iOS? It absolutely is, because these types of problems could have been avoided. The Java heap doesn’t get corrupted like that. It is possible to use the notification/delegation pattern without crashing when you forget to remove an observer (see the sketch below). With different optimization settings, there could be more information in iOS stack traces. Of course there are tradeoffs, but the choices could have been made differently.
The point is not that Swift should have GC. Rather, it’s that using a safe language like Swift doesn’t actually guarantee safety if the frameworks are written in an unsafe language.
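As a hedged sketch of that point (again, the `SafeDownloader` name and the notification name are illustrative, not real API), the same block-based observation becomes a harmless no-op after deallocation if the capture is weak instead of unowned:

```swift
import Foundation

extension Notification.Name {
    static let didFinish = Notification.Name("didFinish")  // hypothetical name
}

final class SafeDownloader {
    init(center: NotificationCenter) {
        // Same forgotten-removal bug as before, but the weak capture
        // degrades gracefully: after deinit the block sees nil.
        _ = center.addObserver(forName: .didFinish, object: nil, queue: nil) { [weak self] _ in
            self?.handleFinish()
        }
    }
    func handleFinish() { print("finished") }
}

let center = NotificationCenter.default
do {
    _ = SafeDownloader(center: center)
}
center.post(name: .didFinish, object: nil)  // no crash; the stale callback is a no-op
```

The forgotten observation still leaks until it is removed, but it no longer takes the app down; that graceful degradation is the tradeoff being argued for here.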