Chris Lattner (tweet):
We focus on task-based concurrency abstractions commonly encountered in client and server applications, particularly those that are highly event driven (e.g. responding to UI events or requests from clients). This does not attempt to be a comprehensive survey of all possible options, nor does it attempt to solve all possible problems in the space of concurrency. Instead, it outlines a single coherent design thread that can be built over the span of years to incrementally drive Swift to further greatness.
[…]
Modern Cocoa development involves a lot of asynchronous programming using closures and completion handlers, but these APIs are hard to use. This gets particularly problematic when many asynchronous operations are used, error handling is required, or control flow between asynchronous calls is non-trivial. […] Error handling is particularly ugly, because Swift’s natural error handling mechanism cannot be used.
[…]
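For those who haven't felt this pain, here's a minimal sketch of the completion-handler style he's describing. The helper names and signatures are my own, loosely modeled on the manifesto's example, not real Cocoa APIs:

```swift
// Hypothetical helpers in the completion-handler style (names modeled
// on the manifesto's example; bodies elided).
struct Resource {}
struct Image {}

func loadWebResource(_ path: String,
                     completion: @escaping (Resource?, Error?) -> Void) { /* ... */ }
func decodeImage(_ profile: Resource, _ data: Resource,
                 completion: @escaping (Image?, Error?) -> Void) { /* ... */ }

// Each step must check for failure and thread the error through its own
// callback by hand; throw/try/catch cannot cross the closure boundaries.
func processImageData(completion: @escaping (Image?, Error?) -> Void) {
    loadWebResource("dataprofile.txt") { profile, error in
        guard let profile = profile else { return completion(nil, error) }
        loadWebResource("imagedata.dat") { data, error in
            guard let data = data else { return completion(nil, error) }
            decodeImage(profile, data) { image, error in
                completion(image, error)
            }
        }
    }
}
```

Because the errors surface inside escaping closures, Swift's `throws` is unusable here, and every layer has to forward failures manually.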
Asynchrony is the next fundamental abstraction that must be tackled in Swift, because it is essential to programming in the real world where we are talking to other machines, to slow devices (spinning disks are still a thing!), and looking to achieve concurrency between multiple independent operations. Fortunately, Swift is not the first language to face these challenges: the industry as a whole has fought this dragon and settled on async/await as the right abstraction. We propose adopting this proven concept outright (with a Swift spin on the syntax). Adopting async/await will dramatically improve existing Swift code, dovetailing with existing and future approaches to concurrency.
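For comparison, here's the same flow in the async/await style the manifesto proposes, written with the syntax Swift eventually shipped in Swift 5.5 (the manifesto's proposed spelling differs in minor details). The stub helpers reuse the types from the sketch above:

```swift
// The same flow with async/await: the code reads top to bottom, and
// errors propagate through Swift's ordinary throw/try/catch machinery.
func loadWebResource(_ path: String) async throws -> Resource {
    fatalError("stub for illustration")
}
func decodeImage(_ profile: Resource, _ data: Resource) async throws -> Image {
    fatalError("stub for illustration")
}

func processImageData() async throws -> Image {
    let profile = try await loadWebResource("dataprofile.txt")
    let data = try await loadWebResource("imagedata.dat")
    return try await decodeImage(profile, data)
}
```

The nesting and the manual error plumbing both disappear; a caller can wrap the whole thing in a single `do`/`catch`.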
The next step is to define a programmer abstraction to define and model the independent tasks in a program, as well as the data that is owned by those tasks. We propose the introduction of a first-class actor model, which provides a way to define and reason about independent tasks that communicate between themselves with asynchronous message sending. The actor model has a deep history of strong academic work and was adopted and proven in Erlang and Akka, which successfully power a large number of highly scalable and reliable systems.
With the actor model as a baseline, we believe we can achieve data isolation by ensuring that messages sent to actors do not lead to shared mutable state.
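The manifesto proposes spelling this as `actor class` with `actor func` for messages; Swift eventually shipped a standalone `actor` keyword in Swift 5.5. A minimal sketch with the shipped syntax, showing how an actor's mutable state is reachable from outside only through asynchronous calls:

```swift
// An actor owns its mutable state; outside code can reach that state
// only through asynchronous calls, so it is never shared directly.
actor Counter {
    private var value = 0

    func increment() -> Int {
        value += 1
        return value
    }
}

// From outside the actor, each call is awaited like a message send.
func demo(_ counter: Counter) async {
    let n = await counter.increment()
    print("count is now \(n)")
}
```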
Previously: Swift: Challenges and Opportunity for Language and Compiler Research, Python 3.5: async and await, C# 5.0’s async.
Update (2017-09-11): See also: Swift Unwrapped.
Concurrency, Erlang Programming Language, Language Design, Programming, Swift Concurrency, Swift Programming Language
David Schuetz (Hacker News):
Earlier today, it was reported that a hacker/researcher called “xerub” had released the encryption key, and tools to use it, for the firmware that runs the Secure Enclave Processor (SEP) on iPhone 5S. Reporting was…breathless. Stories suggested that this move was “destroying key piece of iOS mobile security,” and that we should “be on the lookout for Touch ID hacks” and “password harvesting scams.”
Is it really that bad? No, not really.
[…]
What was released today was the key to decrypt that firmware, but not a key to decrypt the region of disk used by the SE to store data. So now we can actually reverse-engineer the SE system, and hopefully gain a much better understanding of how it works. But we can’t decrypt the data it processes.
iOS, iOS 10, iPhone, iPhone 5s, Secure Enclave, Security, Touch ID
Timothy B. Lee:
On Thursday, Gab said that Google had banned its Android app from the Google Play Store for violating Google’s ban on hate speech.
Google’s e-mail doesn’t explain how Gab violated Google’s rules, and the company’s policy on the topic isn’t very specific. It says only that “We don’t allow apps that advocate against groups of people based on their race or ethnic origin, religion, disability, gender, age, nationality, veteran status, sexual orientation, or gender identity.”
[…]
Google is following in Apple’s footsteps here. Apple has long had more restrictive app store policies, and it originally rejected the Gab app for allowing pornographic content to be posted on the service—despite the fact that hardcore pornography is readily available on Twitter. In a second rejection, Apple faulted the app for containing content that was “defamatory or mean-spirited”—Apple’s version of the hate speech rule.
My understanding is that Gab is a social network/platform, with all the content/speech/advocacy generated by users. So these rejections don’t make much sense to me. I don’t think an app developer should be held responsible for the content of messages that the app’s users send to one another.
And look at the larger context of apps that are approved. Twitter, Facebook, Reddit, etc. have policies to restrict certain kinds of content, but in practice all manner of violations and abuse still occur. Those apps are not in any danger of being rejected. How are they different from Gab, or Micro.blog, other than being too big to ban?
There are also Web browser, e-mail, RSS, and podcast apps that provide access to totally unfiltered content—including from Gab’s site. And there are secure messaging apps that only transmit opaque encrypted content.
Banning apps based on user content seems to only inconvenience users without really protecting them, anyway.
Update (2017-08-18): Google clarified:
In an email to Ars, Google explained its decision to remove Gab from the Play Store: “In order to be on the Play Store, social networking apps need to demonstrate a sufficient level of moderation, including for content that encourages violence and advocates hate against groups of people. This is a long-standing rule and clearly stated in our developer policies.”
Apple’s guidelines say:
Apps with user-generated content present particular challenges, ranging from intellectual property infringement to anonymous bullying. To prevent abuse, apps with user-generated content or social networking services must include:
- A method for filtering objectionable material from being posted to the app
- A mechanism to report offensive content and timely responses to concerns
- The ability to block abusive users from the service
- Published contact information so users can easily reach you
Apps with user-generated content or services that end up being used primarily for pornographic content, objectification of real people (e.g. “hot-or-not” voting), making physical threats, or bullying do not belong on the App Store and may be removed without notice. If your app includes user-generated content from a web-based service, it may display incidental mature “NSFW” content, provided that the content is hidden by default and only displayed when the user turns it on via your website.
App Review listed some additional requirements:
[We] require that you act on objectionable content reports within 24 hours by removing the content and ejecting the user who provided the offending content.
This is rather vague. Who determines what’s objectionable? How many reports do you need to get before ejecting the user? How does Apple determine whether you are getting reports or acting on them? Obviously, there is lots of objectionable content on other social networks that remains for more than 24 hours.
Phil Schiller says:
“The App Store does not review or reject apps for political views. The App Store will not distribute apps designed to promote racism, pornography, or hatred.”
I’m not sure what “designed to promote” means here, since those things seem to be prohibited by Gab’s guidelines, which are not as different from Twitter’s as I expected.
Gab’s mission is to put people and free speech first. We believe that the only valid form of censorship is an individual’s own choice to opt-out. Gab empowers users to filter and remove unwanted followers, words, phrases, and topics they do not want to see in their feeds. However, we do take steps to protect ourselves and our users from illegal activity, spam, and abuse.
[…]
Users are prohibited from calling for the acts of violence against others, promoting or engaging in self-harm, and/or acts of cruelty, threatening language or behaviour that clearly, directly and incontrovertibly infringes on the safety of another user or individual(s).
Gab said in January that they try to enforce the guidelines but that it’s physically impossible to act within 24 hours or to know about violations that users don’t flag.
However, a little later Andrew Torba seemed to change their stance:
We refuse to “eject users” who post “offending content.”
And that makes Gab clearly in violation of Apple’s (unwritten) rules.
Update (2017-08-28): Alex Tabarrok:
I also fear that Google and Apple haven’t thought very far down the game tree. One of the arguments for leaving the meta-platforms alone is that they are facially neutral with respect to content. But if Google and Apple are explicitly exercising their power over speech on moral and political grounds then they open themselves up to regulation. If code is law then don’t be surprised when the legislators demand to write the code.
Android, Android App, App Store, App Store Rejection, Gab, Google Play Store, iOS, iOS App, Micro.blog, Twitter
Juli Clover:
Emergency SOS is activated by pressing on the sleep/wake button of an iPhone five times in rapid succession. When the requisite number of presses is complete, it brings up a screen that offers buttons to power off the iPhone, bring up your Medical ID (if filled out) and make an emergency 911 call.
Along with these options, there’s also a cancel button. If you hit the sleep/wake button five times and then hit cancel, it disables Touch ID and requires a passcode before Touch ID can be re-enabled. Touch ID is also disabled if you actually make an emergency call.
I presume this is so that people can’t get into your phone after you call 911 and pass out. However, it’s also a way to temporarily disable Touch ID, so you can’t be forced to unlock your phone.
iOS, iOS 11, Privacy, Touch ID