Tuesday, August 29, 2017

APFS to be Mandatory for SSDs in High Sierra

Apple (via Felix Schwarz):

When you upgrade to macOS High Sierra, systems with all flash storage configurations are converted automatically. Systems with hard disk drives (HDD) and Fusion drives won’t be converted to APFS. You can’t opt out of the transition to APFS.

[…]

Boot Camp doesn’t support Read/Write to APFS-formatted Mac volumes.

[…]

Volumes formatted with APFS can’t offer share points over the network using AFP.

I assumed Apple would let you opt out, since that was possible for the betas.

Previously: Pondering the Conversion From HFS+ to APFS.

Update (2017-08-29): Steve Moser:

I wonder what this means for hackintoshes since I read somewhere that APFS requires Apple firmware.

Update (2017-08-31): Edward Marczak:

For anyone worried about being “forced” into converting to APFS, startosinstall still supports the --converttoapfs flag
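
For reference, this is roughly how that flag was used during the betas (a hedged sketch; the installer path and accepted flags can vary between builds):

sudo "/Applications/Install macOS High Sierra.app/Contents/Resources/startosinstall" --converttoapfs NO --agreetolicense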

Swift 4: Bridging Peephole for “as” Casts

John McCall (via Peter Steinberger):

Bridging conversions are not always desirable. First, they do impose some performance overhead which the user may not want. But they can also change semantics in unwanted ways. For example, in certain rare situations, the reference identity of an NSString return value is important — maybe it's actually a persistent NSMutableString which should be modified in-place, or maybe it's a subclass which carries additional information. A pair of bridging conversions from NSString to String and then back to NSString is likely to lose this reference identity. In the current representation, String can store an NSString reference, and if the String is bridged to NSString that reference will be used as the result; however, the bridging conversion from NSString does not directly store the original NSString in the String, but instead stores the result of invoking +copy on it, in an effort to protect against the original NSString being somehow mutable.

Bridging conversions arising from reasons #1 and #2 are avoidable, but bridging conversions arising from reason #3 currently cannot be eliminated without major inconvenience, such as writing a stub in Objective-C. This is unsatisfactory. At the same time, it is not valid for Swift to simply eliminate pairs of bridging conversions as a matter of course, precisely because those bridging conversions can be semantically important. We do not want optimization settings to be able to affect things as important as whether a particular NSString is mutable or not.
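
To make the reference-identity point concrete, here is a minimal sketch (mine, not from the proposal) of how a round trip through String loses the identity of a mutable NSString, because the bridge stores a +copy of the original:

import Foundation

let original = NSMutableString(string: "hello")
let bridged = original as String        // the bridge stores a copy, not `original`
let roundTripped = bridged as NSString  // returns the stored copy
print(roundTripped === original)        // false: reference identity was lost
original.append(" world")
print(roundTripped)                     // still "hello": mutations do not propagate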

He proposes eliminating pairs of bridging conversions under certain circumstances:

This would avoid the bridging conversions through [View] on the return value of the getter:

let subviews = view.subviews as NSArray

This would not:

let subviews = view.subviews
let nsSubviews = subviews as NSArray

This would avoid the bridging conversion through [CIFilter] on the argument to the setter:

view.backgroundFilters = nsFilters as [CIFilter]

This would not:

let filters = nsFilters as [CIFilter]
view.backgroundFilters = filters
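
In other words, the peephole only fires when the as cast is written directly on the property access; once the value passes through an intermediate variable, the bridging conversions are kept, since they may be semantically significant.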

YouTube Transcripts

David Pogue:

Believe it or not, YouTube creates a written transcript for every single video. Just click More and Transcript and boom!

What’s cool is that you can use this feature as a great way to create free transcripts of your own recordings.

Given its native software and focus on accessibility, this is the kind of thing I’d expect Apple to do. Indeed, Clips does use speech recognition, but it’s for adding titles on top of the video, not transcribing what’s there. And I don’t think there’s anything like this in iMovie. I don’t know what Apple uses for the WWDC videos. Meanwhile, Google has implemented what looks like an impressive interface in the browser.

Deep Learning for Siri’s Voice

Siri Team:

Recently, deep learning has gained momentum in the field of speech technology, largely surpassing conventional techniques, such as hidden Markov models (HMMs). Parametric synthesis has benefited greatly from deep learning technology. Deep learning has also enabled a completely new approach for speech synthesis called direct waveform modeling (for example using WaveNet [4]), which has the potential to provide both the high quality of unit selection synthesis and the flexibility of parametric synthesis. However, given its extremely high computational cost, it is not yet feasible for a production system.

In order to provide the best possible quality for Siri’s voices across all platforms, Apple is now taking a step forward to utilize deep learning in an on-device hybrid unit selection system.

[…]

For iOS 11, we chose a new female voice talent with the goal of improving the naturalness, personality, and expressivity of Siri’s voice. We evaluated hundreds of candidates before choosing the best one. Then, we recorded over 20 hours of speech and built a new TTS voice using the new deep learning-based TTS technology. As a result, the new US English Siri voice sounds better than ever. Table 1 contains a few examples of the deep learning-based Siri voices in iOS 11 and 10 compared to a traditional unit selection voice in iOS 9.
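
As a rough illustration of the unit-selection idea behind this hybrid system, here is a short Swift sketch (mine, not Apple’s; the toy cost functions stand in for the deep networks that predict how well a unit matches the target and how smoothly adjacent units join). It picks one candidate unit per target position by minimizing the summed costs with a Viterbi-style search:

import Foundation

struct Unit {
    let id: Int
    let features: [Double]   // acoustic parameters for this candidate unit
}

// Toy costs: in the hybrid system described above, deep networks would
// predict these instead of simple feature distances.
func targetCost(_ unit: Unit, _ target: [Double]) -> Double {
    zip(unit.features, target).map { abs($0 - $1) }.reduce(0, +)
}

func concatCost(_ a: Unit, _ b: Unit) -> Double {
    zip(a.features, b.features).map { abs($0 - $1) }.reduce(0, +)
}

// Viterbi-style search: choose one candidate per position so that the
// total of target and concatenation costs is minimal.
func selectUnits(candidates: [[Unit]], targets: [[Double]]) -> [Unit] {
    var best: [(cost: Double, path: [Unit])] = candidates[0].map {
        (cost: targetCost($0, targets[0]), path: [$0])
    }
    for t in 1..<targets.count {
        best = candidates[t].map { unit -> (cost: Double, path: [Unit]) in
            let prev = best.min {
                $0.cost + concatCost($0.path.last!, unit) <
                    $1.cost + concatCost($1.path.last!, unit)
            }!
            let cost = prev.cost + concatCost(prev.path.last!, unit)
                + targetCost(unit, targets[t])
            return (cost: cost, path: prev.path + [unit])
        }
    }
    return best.min { $0.cost < $1.cost }!.path
}

// Toy usage: two target frames, two candidate units each.
let cands = [
    [Unit(id: 0, features: [0.0]), Unit(id: 1, features: [1.0])],
    [Unit(id: 2, features: [0.9]), Unit(id: 3, features: [0.1])],
]
let chosen = selectUnits(candidates: cands, targets: [[0.0], [0.0]])
print(chosen.map { $0.id })   // [0, 3]: low target cost and a smooth join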

Update (2017-09-11): John Gruber:

It’s the voice assistant equivalent to getting a better UI font or retina graphics for a visual UI. But: if given a choice between a Siri that sounds better but works the same, or a Siri that sounds the same but works better, I don’t know anyone who wouldn’t choose the latter.