Saturday, April 30, 2016 [Tweets]
9/14/14: Sold 500 edu copies of Air Display 2.
4/25/16: All 500 refunded.
No explanation, contact, or even customer name.
I’m guessing some school district returned its trial iPads and went Android but who knows.
Basically Apple let them use my app for free for two whole school years without even asking my permission.
To appeal anything there would have to be a channel of communication with Apple.
And it’s actually worse than just letting them use it for free, because now Avatron has an unexpected revenue hole for this year.
Friday, April 29, 2016 [Tweets]
I think Dr. Drang nails it. We might be seeing “peak iPhone”. But it could just be a statistical blip, caused in large part by the iPhone 6’s exceptional popularity, along with other factors like the economy in China and currency exchange rates.
In chart form, you can see what an anomaly last year was with the iPhone 6. But given that you can almost draw a straight line connecting the other four points in the chart, I’m not willing to call it a peak yet. But even if we see a return to growth, it might take several years before we see another Q2 with over 60 million units sold.
To that end I stand by my takeaway last quarter that the iPhone is likely to return to growth with the iPhone 7, but I must admit Cook’s focus on slow upgrades is worrisome: a big reason why I argued last fall that the iPhone will be fine was Cook’s insistence that iPhone 6 upgrades were not out of the ordinary, but given that Cook now says the opposite, the question becomes just how long people will now hold onto their good-enough iPhones with their big-enough screens.
“We’re thrilled with the response that we’ve seen on [iPhone SE]. It is clear that there is a demand there even much beyond what we thought. That is really why we have the constraint that we have.”
I find it a little alarming that Apple was taken by surprise on this.
Apple’s Services revenue line increased 20 percent over the year-ago quarter, showing the power of Apple’s installed base of a billion devices to generate money for the company outside of hardware sales.
As I was typing up the many statements of Tim Cook during yesterday’s quarterly financial call, one in particular—about Apple Music—caught my attention […] Even at the time it stood out to me—hence the added emphasis. It seemed like an odd thing for the usually cautious Cook to say.
Apple is pushing the services category as a burgeoning part of its business with current and future growth potential. Focusing on and expanding services is a fascinating proposition as generally I’ve considered Apple as a company that sells hardware and bundles services for free. To me, these comments on the earnings call indicate Cook wants to develop services further in a serious way.
There’s nothing wrong with diversification per se but it is a change to how the company used to operate. Around the launch of iPad mini, there’s an obvious breakpoint in company strategy where they expanded from a couple of flagships to a myriad of variants in each category. The days of Apple’s products ‘all fitting on one table’ are long gone. If services do grow significantly, the metaphor really breaks down as its products would be intangibles.
See also: Paid App Store Search.
Creating value for shareholders by developing great products and services that enrich people’s lives will always be our top priority and the key factor driving our investment and capital allocation decisions.
I get that it’s an earnings call and extremely fragile shareholders want reassurance, but I wish this was phrased differently. What I hope Cook means by this is that Apple’s top priority is to create really great stuff and provide exemplary services that they hope people will buy and love, thereby assuaging trepidatious hedge fund billionaires.
But the way it’s phrased here is that shareholders come first, and Apple’s entire R&D strategy revolves around them. I sincerely hope this is just poorly articulated, because if it’s taken at face value, it’s deeply concerning.
Copy the above code into Xcode, press “build,” and go get a coffee. Come back in 12 hours. Yes, the above dictionary literal code takes at least 12 hours to compile.
If you suspect that something is taking too long to compile in your Swift project, you should turn on the debug-time-function-bodies option for the compiler. In your project in Xcode, go to Build Settings and set Other Swift Flags to -Xfrontend -debug-time-function-bodies.
Now that we know type inference can be a problem here, we can investigate the problem areas, specify type information and try building again. In this case, simply defining the structure to be a Dictionary<String, AnyObject> brings our compile time for that function down to 21.6ms. Even adding the rest of the employee objects back in doesn’t meaningfully change the compile time. Problem solved! Hit the rest of the potential problem areas in your code and try adding type information to speed up the compile times for the rest of your project.
This particular case has just been improved for an upcoming Swift release.
Good to see these issues getting discovered and also eventually fixed. Maybe Swift 3 will be ready for prime time?
Dunno. Scala is how many years old and the compile times are still shockingly abysmal.
Previously: Swift 1.0 Performance and Compilation Times, Slow Swift Array Type Inference, Swift Type-checking Performance Case Study.
Leo Laporte has a great two-part, three-hour interview with Bill Atkinson that covers a wide range of subjects.
Your app does not comply with:
11.13: Apps that link to external mechanisms for purchases or subscriptions to be used in the app, such as a “buy” button that goes to a web site to purchase a digital book, will be rejected
Piet Brauer (via Benjamin Mayo):
Apple dismissed my appeal because GitHub page has a sign up link. Am I stupid, because I don’t see it.
Sorry, you don’t understand. Your app was rejected 3X for the same thing, and a call was made on 3/21. They will call you again.
Yes, I then removed the sign up link (according to the calls that were made) and submitted again without the sign up link.
It was then rejected again for the same reason with no detailed explanation and a screenshot that’s not showing anything.
Here’s how he removed the links.
This kind of masterclass in PR is why Apple execs make the big $
Previously: Phil Schiller Takes Over the App Stores.
Update (2016-04-29): Piet Brauer:
Terms -> Return to GitHub -> Sign up but that’s ridiculous
Update (2016-04-30): Piet Brauer:
Had a phone call with someone I could talk to, but I am still rejected.
She said she understands and it’s ridiculous, but those are the rules.
See also: We Love This Stuff Too, and Honor What You Do.
Thursday, April 28, 2016 [Tweets]
Edward O’Connor (comments):
The current consensus among browser implementors is that, on the whole, prefixed properties have hurt more than they’ve helped. So, WebKit’s new policy is to implement experimental features unprefixed, behind a runtime flag. Runtime flags allow us to continue to get experimental features into developers’ hands while avoiding the various problems vendor prefixes had. Runtime flags also make it easier for us to have different default settings between stable builds and preview builds such as Safari Technology Preview.
Brady Dale (via Bruce Schneier):
Amazon created Kindle Unlimited, a Netflix for books, that’s delivering indie authors revenue and readers. But it turns out that the way it works may have created an opportunity for scammers to steal earnings from real writers producing genuine works.
Kindle Unlimited authors get paid out of a pool of funds set up by Amazon each month ($14.9 million in March 2016). Their cut of that pool is determined by the number of pages a reader reads in their books, not by the number of books readers check out. Top performers get an extra boost of as much as $25,000 in one month for being a top ten author.
But, what if someone finds a way to trick Amazon into believing that “readers” have read thousands of pages, when in reality they haven’t read any?
We rely on busy waiting via creating a custom runloop - the condition is checked every x milliseconds and only once it returns true, the control flow continues. This might seem crude, but it’s a much, much better solution than adding random timeouts and allows us to run tests at an insane speed while also getting reliable results, even under heavy load. It’s somewhat similar to XCTestCase’s asynchronous testing extension waitForExpectationsWithTimeout:handler:, just faster and more flexible.
Now, the final puzzle is to simply increase the speed of Core Animation itself. And the best part: This is not even private API!
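The pattern they describe—poll a condition at a short interval and proceed the moment it holds, instead of sleeping for a fixed timeout—is easy to sketch generically. A minimal version (Python here for illustration; the original is Objective-C, and the names are mine):

```python
import time

def wait_until(condition, timeout=2.0, interval=0.005):
    """Poll `condition` every `interval` seconds until it returns True
    or `timeout` elapses. Unlike a fixed sleep, this returns as soon
    as the condition holds, so tests run as fast as the app allows."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return condition()  # one last check at the deadline
```

The timeout is only a safety net: a passing test never waits the full duration, which is why this is both faster and more reliable than hard-coded sleeps.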
Shape (via Andy Bargh):
We refer to the App Store Review Guidelines all the time. It’s hard to spot the changes, so we made this site for ourselves and our clients.
They have summaries of the changes as well as diffs.
Tuesday, April 26, 2016 [Tweets]
Yesterday, I wrote about the superiority of BitTorrent Sync’s Selective Sync over Dropbox’s. Today, Dropbox gave a technology preview (via Mitchel Broussard, Hacker News):
Project Infinite will enable users to seamlessly and securely access all their Dropbox files from the desktop, regardless of how much space they have available on their hard drives. Everything in the company’s Dropbox that you’re given access to, whether it’s stored locally or in the cloud, will show up in Dropbox on your desktop. If it’s synced locally, you’ll see the familiar green checkmark, while everything else will have a new cloud icon.
It’s not clear whether this will be limited to business customers.
This feature is implemented at a low level, and works on the command line.
For example, if you have a directory that is all stored in the cloud, you can cd to it without any network delay, you can do ls -lh and see a list with real sizes without a delay (e.g., see that an ISO is 650 MB), and you can do du -sh and see that all the files are taking up zero space.
If you open a file in that directory, it will open, even from the command line. Then do du -sh and see that that file is now taking up space, while all the others in the directory are not.
Dan Luu (in 2015):
Something that’s occasionally overlooked is that hardware performance also has profound implications for system design and architecture. […] Consider the latency of a disk seek (10ms) vs. the latency of a round-trip within the same datacenter (.5ms). The round-trip latency is so much lower than the seek time of a disk that we can disaggregate storage and distribute it anywhere in the datacenter without noticeable performance degradation, giving applications the appearance of having infinite disk space without any appreciable change in performance. This fact was behind the rise of distributed filesystems within the datacenter ten years ago, and various networked attached storage schemes long before.
However, while it’s easy to say that we should use disaggregated disk because the ratio of network latency to disk latency has changed, it’s not as easy as just taking any old system and throwing it on a fast network. If we take a 2005-era distributed filesystem or distributed database and throw it on top of a fast network, it won’t really take advantage of the network. That 2005 system is going to have assumptions like the idea that it’s fine for an operation to take 500ns, because how much can 500ns matter? But it matters a lot when your round-trip network latency is only a few times more than that. The caching and other obvious wins if you have 1ms latency may not buy you much at 10us latency, and it may even cost you something.
Latency hasn’t just gone down in the datacenter. Today, I get about 2ms to 3ms latency to Youtube. Youtube, Netflix, and a lot of other services put a very large number of boxes close to consumers to provide high-bandwidth low-latency connections. A side effect of this is that any company that owns one of these services has the capability of providing consumers with infinite disk that’s only slightly slower than normal disk.
We’ll see how well the implementation works, but when Dropbox was first released people said, “who cares, it’s just rsync”.
Omar Abdelhafith (via Ole Begemann):
If you are like me, then you banged your head trying to solve “Include of non-modular header inside framework module ..”. But what is a modular header?
A modular header is a header that is included in the module map. This header can either be imported in the umbrella header, in case you are using umbrella header "header.h", or it can be explicitly imported in the module file using header "header.h".
Any header that is not included in the module map is not a modular header and hence cannot be imported in any of the modular headers (modular headers can only include modular headers).
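For concreteness, here is what a minimal framework module map showing both styles might look like (file and module names are hypothetical):

```
// Modules/module.modulemap (hypothetical example)
framework module MyFramework {
    // Every header imported by the umbrella header becomes modular.
    umbrella header "MyFramework.h"

    // A header can also be listed explicitly instead.
    header "Extra.h"

    export *
    module * { export * }
}
```

Any header that appears in neither place is non-modular, and importing it from a modular header produces exactly the “include of non-modular header inside framework module” error.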
M.G. Siegler (via John Gruber):
Last year, Rolex did $4.5 billion in sales. A solid year for the premium watchmaker. Of course, it was no Apple Watch. That business did roughly $6 billion in sales, if industry estimates are accurate.
It’s a misconception that what Apple does best is unveil mind-blowing new products. What Apple does best is iterate year after year after year — exactly what Apple Watch needs.
I sold mine after a month and have never — not for even one microsecond — reconsidered or desired the Apple Watch
I bought the Apple Watch a year ago. I stopped wearing it two months ago, and I’m not sure if I’ll ever wear it again. That’s because it doesn’t really do anything that anyone needs, and even when it does, it doesn’t always work like it’s supposed to.
Wearing the Apple Watch for nearly a year did change something in me though, but it’s the opposite effect that was probably intended: the Watch’s constant low-level notifications made me realize that there’s nothing really worth being notified about. Being able to feel every text, email, and whatever else, made me see how useless they mostly were. I used to joke that wearing a watch is handcuffing yourself to time. Wearing an Apple Watch (or any smartwatch, really), doubles down on that by locking you in a barricade of notifications too. So I’ve taken the Apple Watch off and don’t know when I’ll put it back on. The Watch isn’t at all worth it, but I’m not sure it’s even possible to make a smartwatch that I, or any reasonable non-tech nerd, would need. The more ambitious a smartwatch gets, the more complicated it is to use.
“When do I use which button and what do the buttons do?” needs to be obvious for the Apple Watch to truly feel Apple-y. And it fails. The longer I own mine the more obvious it is that Apple dropped the ball on the buttons[…] My hope is that Apple does more than just make the second generation watch faster/thinner/longer-lasting, and takes a step back and reconsiders some of the fundamental aspects to the conceptual design.
Yesterday was the one year anniversary of the launch of the Apple Watch. It is a device that has had a profound impact on my life both personally and professionally. The Apple Watch I received on launch day is still firmly on my wrist each day (notably with barely a scratch).
I remember being rather skeptical of Apple’s original marketing of the Apple Watch as “our most personal device ever”, but a year later I must say that it would be a hard case to make that something that has been physically attached to me for 83% of my life is anything other than personal.
I wouldn’t say that the Apple Watch is essential, and I think Apple greatly oversold its abilities, which has led many people to be underwhelmed. I also think it was priced too high, which has led Apple to drop the price, and led to sales in a number of retail outlets, something that isn’t common with Apple products. But I choose to put mine on every day.
Calgary is in the middle of a recession, but I have also seen a lot more Apple Watches lately while on my commute, on the pathway system, or on the streets. There’s a lot that I’m hoping for, both in software and in hardware, but as far as “failures” or “flops” are concerned, the Watch is a long way off, and the potential is huge.
Update (2016-04-28): Joe Cieplinski:
Apple obviously thought that sending little hand-drawn pictures was a big enough feature to warrant a dedicated hardware button. Perhaps now, armed with a year of data, they can reassess that decision in the next update to the OS. If you never put a product into the hands of real people, you can’t learn anything about how people will use it and what they’ll want from it.
The Watch is flawed, no doubt. Just like the original iPad, the iPhone, the iPod, and the Mac were before it. But there’s no question in my mind, one year into wearing this thing every day, that it’s a device that has a bright future. The vast majority of Apple customers will never witness these growing pains. They will buy version four or five. Don’t let your early adopter frustrations cloud your perspective on how most people view these products in the long run.
Update (2016-04-29): John Gruber:
Did you know there are games for Apple Watch? My favorite: launching any app and seeing if anything other than a spinner appears on screen.
Pye Jirsa (via Hacker News):
So, I wanted to do another Mac vs PC test to see if we could purchase pre-built Apple computers that can compete in terms of performance and price with our custom PCs which have been built by our own internal IT guru Joseph Wu.
While we really appreciate Apple’s approach to their hardware quality and design, we can’t justify the price to performance difference at this time.
The custom water-cooled PC was between 26% and 114% faster than the top-of-the-line iMac. The most obvious objection to these results is that the iMac has a Retina display and so was pushing around a lot more pixels:
We were using a 2.5K Eizo vs the 5K Mac, however, to compensate we lowered the Smart Preview resolutions to 2048px on both machines, so all tests/previews were running at the same sizes on both sides.
To me it just shows how many hoops you have to jump through to make iMac look bad.
There’s a lot of bullshitting in the parts choices, but the conclusion stands.
To me it’s a broader “Mac performance has lagged behind PC performance more than I can remember”
Of course a purpose-built PC will outperform an iMac, dollar for dollar. That’s not exactly news. The iMac also has a severe price-to-performance disadvantage here by including an expensive 5K panel, while the PC is tested with a much cheaper 2560x1440 monitor.
However, the disappointing thing to me here is that you can’t reach performance parity with the iMac by throwing more money at it. The iMac tested here is a completely maxed out machine.
The Skylake processor in the iMac is something the end user cannot overclock (without doing something extreme). And none of the Mac Pro cores are as fast. And Apple doesn’t water cool their machines. They simply don’t offer robust overclocking options to us. So, for a single-core, the 4.0 GHz Skylake in the iMac is the fastest thing Apple offers.
If there is an underlying, valid core criticism in this article, it’s the same one everybody already knows about: Apple doesn’t offer their end users a truly competitive array of system configurations. Instead, they force their users into carefully defined and limited product tiers, each one costing significantly more than the last.
The Play Store is the one part of the Google Mobile Services suite that is irreplaceable and thus the leverage enforcing the various requirements the [European] Commission objected to, like making Google Search and Chrome defaults, and forbidding AOSP forks.
This monopoly, though, is a lot different than the monopolies of yesteryear: aggregators aren’t limiting consumer choice by controlling supply (like oil) or distribution (like railroads) or infrastructure (like telephone wires); rather, consumers are self-selecting onto the Aggregator’s platform because it’s a better experience. This has completely neutered U.S. antitrust law, which is based on whether or not there has been clear harm to the consumer (primarily through higher prices, but also decreased competition), and it’s why the FTC has declined to sue Google for questionable search practices.
One more implication of aggregation-based monopolies is that once competitors die the aggregators become monopsonies — i.e. the only buyer for modularized suppliers. And this, by extension, turns the virtuous cycle on its head: instead of more consumers leading to more suppliers, a dominant hold over suppliers means that consumers can never leave, rendering a superior user experience less important than a monopoly that looks an awful lot like the ones our antitrust laws were designed to eliminate.
Monday, April 25, 2016 [Tweets]
All that to say, it’s unfortunate that you can’t separately turn on/off smart dashes and quotes in System Preferences. Fortunately, though, you still can through Terminal with:
defaults write 'Apple Global Domain' NSAutomaticDashSubstitutionEnabled 0
Update (2016-04-25): Rosyna Keller notes that it’s best to avoid “Apple Global Domain”:
defaults write -g NSAutomaticDashSubstitutionEnabled 0
I don’t claim to be an expert on ransomware, but after studying various specimens, a general (and obvious?) commonality seems to be that ransomware rapidly encrypts user files. (And after googling around, it appears that others allude to having perhaps similar ideas - at least for Windows). So to me, this rapid encryption of user files seemed like a promising heuristic that could lead to the generic detection and prevention of ransomware.
Vitaly Shmatikov (via Bruce Schneier):
Short URLs produced by bit.ly, goo.gl, and similar services are so short that they can be scanned by brute force. Our scan discovered a large number of Microsoft OneDrive accounts with private documents. Many of these accounts are unlocked and allow anyone to inject malware that will be automatically downloaded to users’ devices. We also discovered many driving directions that reveal sensitive information for identifiable individuals, including their visits to specialized medical facilities, prisons, and adult establishments.
At any rate, the owners of the services in question quickly modified their code so that short links couldn’t be brute-forced or automatically crawled, and measures were put in place to limit access rates on any particular link.
This stuff was solved years ago on services built by a single developer. This shouldn’t be an issue at large companies like Google and Microsoft.
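The scale involved is worth making concrete. A bit.ly-style token drawn from 62 characters (a–z, A–Z, 0–9) has a keyspace that is tiny by scanning standards at 5–6 characters. The numbers below are back-of-the-envelope, not figures from the paper:

```python
import string

ALPHABET = string.ascii_letters + string.digits  # 62 possible characters

def keyspace(token_length: int) -> int:
    """Number of distinct short-URL tokens of the given length."""
    return len(ALPHABET) ** token_length

def days_to_scan(token_length: int, requests_per_second: float) -> float:
    """Days needed to try every token at a sustained request rate."""
    return keyspace(token_length) / requests_per_second / 86_400

# A 5-character token space (~916 million URLs) falls in about a day at
# 10,000 requests/second; 6 characters takes on the order of two months.
```

Rate limiting raises the cost but doesn’t change the underlying math, which is why longer tokens plus aggressive rate limits were the fixes the services deployed.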
BTS can sync any folder, as long as you can make changes to that folder. They can be named and located anywhere.
Imagine you have a folder that you want to sync between your iMac and your MacBook. That folder can sync even if they are not on the same network, but if they are (i.e. both are on your home or work LAN) then transfers are fast.
You can disconnect a sync folder from a computer, in which case it is just like Dropbox’s selective sync. The difference is that you can easily look down the main BTS window and see which folders are disconnected.
But if you choose Selective Sync, what happens is so much cooler. Each file is represented by an empty “placeholder” file, which ends with some sort of “bts” suffix. For example, a video file would be “.btsv”. However, you can see all of the files that are in the selectively sync’d folder, and if you need one, just double-click it and it will sync to your computer and then be opened.
Files are kept for 30 days, and versioned, so that if you save a new version over an old version, that old version is also saved for 30 days.
Friday, April 22, 2016 [Tweets]
As you know from our announcement of Swift Open Source and our work on naming guidelines, one of our goals for Swift 3 is to “drop NS” for Foundation. We want to make the cross-platform Foundation API that is available as part of swift-corelibs feel like it is not tied to just Darwin targets. We also want to reinforce the idea that new Foundation API must fit in with the language, standard library, and the rapidly evolving design patterns we see in the community.
You challenged us on one part of this plan: some Foundation API just doesn’t “feel Swifty”, and a large part of the reason why is that it often does not have the same value type behavior as other Swift types. We took this feedback seriously, and I would like to share with you the start of an important journey for some of the most commonly used APIs on all of our platforms: adopting value semantics for key Foundation types.
The pervasive presence of struct types in the standard library, plus the aforementioned automatic bridging of all Cocoa SDK API when imported into Swift, leads to the feeling of an API impedance mismatch for key unbridged Foundation reference types.
The following value types will be added in the Swift overlay. Immutable/mutable pairs (e.g. MutableData) will become one mutable struct type[…] The overlay is deployed back to the first supported release for Swift, so the implementation of these types will use the existing reference type API.
Some of the struct types will gain mutating methods. In general, the implementation of the struct type will forward to the underlying reference type, so as to allow a subclass to customize the behavior. If the struct is not initialized with a reference type (using a cast), then it is free to implement as much or as little behavior as it chooses either by delegation to the standard Foundation reference type or via a customized Swift implementation. However, our first version will rely heavily on the existing logic in the Objective-C Foundation framework.
The most obvious drawback to using a struct is that the type can no longer be subclassed. At first glance, this would seem to prevent the customization of behavior of these types. However, by publicizing the reference type and providing a mechanism to wrap it (mySubclassInstance as ValueType), we enable subclasses to provide customized behavior.
There’s a lot to consider here, but my initial reaction is positive. I don’t fully understand the memory and CPU overhead of the wrapping and bridging yet.
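The mechanics being proposed—a value type that wraps shared reference storage and copies it only when a mutation finds the storage non-uniquely referenced—can be illustrated outside Swift. A toy copy-on-write sketch (CPython-specific, using sys.getrefcount as a crude stand-in for isUniquelyReferenced; all names are mine, not Foundation’s):

```python
import sys

class Data:
    """Toy value type over shared mutable storage, copy-on-write style."""

    def __init__(self, buf: bytes = b""):
        self._storage = bytearray(buf)

    def copied(self) -> "Data":
        """O(1) 'value copy': share the storage until someone mutates."""
        other = Data()
        other._storage = self._storage
        return other

    def _ensure_unique(self) -> None:
        # getrefcount sees our attribute reference plus its own argument,
        # so a count above 2 means another Data value shares this buffer.
        # (CPython-specific; Swift uses uniqueness checks on ref counts.)
        if sys.getrefcount(self._storage) > 2:
            self._storage = bytearray(self._storage)  # copy on first write

    def append(self, more: bytes) -> None:
        self._ensure_unique()
        self._storage += more

    def to_bytes(self) -> bytes:
        return bytes(self._storage)
```

Copies stay free until a write actually happens, and mutating one value never disturbs another—the value-semantics behavior the proposal wants for the new Foundation types.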
Update (2016-04-25): More about the bridging:
When these types are returned from Objective-C methods, they will be automatically bridged into the equivalent struct type. When these types are passed into Objective-C methods, they will be automatically bridged into the equivalent class type.
Swift has an existing mechanism to support bridging of Swift struct types to Objective-C reference types. It is used for NSArray, and more. Although it has some performance limitations (especially around eager copying of collection types), these new struct types will use the same functionality for two reasons:
- We do not want to block important improvements to our API on the invention of a new bridging system.
- If and when the existing bridging system is improved, we will also be able to take advantage of those improvements.
Bridged struct types adopt a compiler-defined protocol called _ObjectiveCBridgeable. This protocol defines methods that convert Swift to Objective-C and vice-versa.
I have written about some bridging performance issues here. The proposal claims:
In almost all API in the SDK, the returned value type is immutable. In these cases, the copy is simply a retain and this operation is cheap. If the returned type is mutable, then we must pay the full cost of the copy.
However, as with the current bridging of string and collection types, it’s not clear what you can do in the event that there is an unwanted copy that affects performance. Likewise, there may be cases where you want to pass references to mutable objects back and forth. In your own Objective-C code, you could probably defeat the bridging by typing the parameters and return values as id, but what about in OS code?
Some interesting posts from the mailing list:
In reality, the stored reference object is a Swift class with Swift-native ref counting that implements the methods of the Objective-C NSData “protocol” (the interface described by @interface NSData) by holding a “real” NSMutableData ivar and forwarding messages to it. This technique is actually already in use in the standard library.
This means that if an Objective-C client calls retain on this object (which is a _SwiftNativeNSData), it actually calls swift_retain, which mutates the Swift ref count. This means that a call to _isUniquelyReferencedNonObjC from Swift actually still works on this class type.
If the answer to this is that the MyMutableData instance is copied in the transfer from Objective-C to Swift so that mutations in Objective-C don’t affect the Swift instance, what about the case where I used let foo? Does that mean that if I modify the data in Objective-C that it won’t get modified in Swift (and vice-versa)?
Have you looked at the performance and size of your value type designs when they’re made Optional? In the general case, Optional works by adding a tag byte after the wrapped instance, but for reference types it uses the 0x0 address instead. Thus, depending on how they’re designed and where Swift is clever enough to find memory savings, these value types might end up taking more memory than the corresponding reference types when made Optional, particularly if the reference types have tagged pointer representations.
I went down this rabbit hole a few weeks ago, trying to determine if we could make an isUniquelyReferenced that works for Objective-C-defined classes. One obvious issue is that you can’t always trust retainCount to do the right thing for an arbitrary Objective-C class, because it may have been overridden. We can probably say “don’t do that” and get away with it, but that brings us to Tony’s point: Objective-C weak references are stored in a separate side table, so we can’t atomically determine whether an Objective-C class is uniquely referenced. On platforms that have a non-pointer isa we could make this work through the inline reference count (which requires changes to the Objective-C runtime and therefore wouldn’t support backward deployment), but that still doesn’t give us isUniquelyReferenced for Objective-C classes everywhere.
Interestingly, while Swift’s object layout gives us the ability to consider the weak reference count, the Swift runtime currently does not do so. IIRC, part of our reasoning was that isUniquelyReferencedNonObjC is generally there to implement copy-on-write, where weak references aren’t actually interesting. However, we weren’t comfortable enough in that logic to commit to excluding weak references from isUniquelyReferencedNonObjC “forever”, and certainly aren’t comfortable enough in that reasoning to enshrine “excluding weak references” as part of the semantics of isUniquelyReferenced for Objective-C classes.
It turns out you can’t add devices to an iCloud account without triggering an alert because that analysis happens on your device, and doesn’t rely (totally) on a push notification from the server. Apple put the security logic in each device, even though the system still needs a central authority. Basically, they designed the system to not trust them.
Once in place that will make it impossible to place a ‘tap’ using a phantom device without at least someone in the conversation receiving an alert. The way the current system works, you also cannot add a phantom recipient because your own devices keep checking for new recipients on your account.
I hope he can get Apple to talk about iMessage backups.
Craig Hockenberry (tweet):
Over the coming years, displays that only show sRGB are going to feel as antiquated as ones that can only display @1x resolution. And the only way you’re going to be able to cope with all these new kinds of viewing environments is with a thing called “color management.”
We’re quickly reaching a point where more pixels don’t make better photos. Think about how much Apple likes to tout the camera and how better saturation improves photos. These new displays are the first step in a process where wider gamuts become a part of the entire iOS photography workflow. The number of places where your code assumes everything is sRGB will be both surprising and painful.
If you have a 9.7-inch iPad Pro, you can see the difference for yourself: Craig set up a simple Web page that lets you load the images and compare against sRGB. You should also be able to see it on a 5K iMac, which uses the same expanded DCI-P3 color space, but I haven’t had a chance to view it on one.
The difference is most noticeable in the Harbor photo: Look at the orange streaky reflections in the water at the center of the image and tap the Compare sRGB button. The other images aren’t as noticeable—I have trouble spotting the differences from sRGB, probably because the gamut difference is most pronounced at the red/orange end of the spectrum.
Update (2016-04-22): Craig Hockenberry:
This is clearly a time where our tools and APIs need to evolve. Here are some things that you’ll need to watch out for as you start using color management on iOS[…]
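One concrete place the sRGB assumption creeps in is bitmap contexts. A sketch of rendering into Display P3 instead (modern Core Graphics naming; the bitmap parameters are illustrative, not required values):

```swift
import CoreGraphics

// Render into a Display P3 bitmap context instead of assuming sRGB.
let p3 = CGColorSpace(name: CGColorSpace.displayP3)!
let context = CGContext(
    data: nil, width: 256, height: 256,
    bitsPerComponent: 8, bytesPerRow: 0,
    space: p3,
    bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!

// Fully saturated P3 red sits outside the sRGB gamut entirely:
let red = CGColor(colorSpace: p3, components: [1, 0, 0, 1])!
context.setFillColor(red)
context.fill(CGRect(x: 0, y: 0, width: 256, height: 256))
```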
This recent “bot-mania” is at the confluence of two separate trends. One is agent AIs steadily getting better, as evidenced by Siri and Alexa being things people actually use rather than gimmicks. The other is that the US somehow still hasn’t got a dominant messaging app and Silicon Valley is trying to learn from the success of Asian messenger apps. This involves a peculiar fixation on how these apps, particularly WeChat, incorporate all sorts of functionality seemingly unrelated to messaging. They come away surprised by just how many differently-shaped pegs fit into this seemingly oddly-shaped hole. The thesis, then, is that users will engage more frequently, deeply, and efficiently with third-party services if they’re presented in a conversational UI instead of a separate native app.
As I’ll explain, messenger apps’ apparent success in fulfilling such a surprising array of tasks does not owe to the triumph of “conversational UI.” What they’ve achieved can be much more instructively framed as an adept exploitation of Silicon Valley phone OS makers’ growing failure to fully serve users’ needs, particularly in other parts of the world. Chat apps have responded by evolving into “meta-platforms.” Many of the platform-like aspects they’ve taken on to plaster over gaps in the OS actually have little to do with the core chat functionality. Not only is “conversational UI” a red herring, but as we look more closely, we can even see places where conversational UI has reached its limits and broken down.
I want the first tab of my OS’s home screen to be a central inbox half as good as my chat app’s inbox. I want it to incorporate all my messengers, emails, news subscriptions, and notifications and give me as great a degree of control in managing it.
In the end, I think that what the next ‘MacOS’ needs most is focus. Focus on what it has historically done best — ‘just working’. I don’t think that the current problems of OS X have much to do with its old age or its old models. It’s more a matter of identity. I feel that recent versions of OS X have tried to ‘look friendly’, as if to say Hey folks, I can be simple like iOS! Look, I too can have big app icons taking up the whole screen! I too can go full-screen with apps, and I can do split-view just as well! And I have Notification Centre like on iOS! and so on. This path of convergence with iOS hasn’t been all bad, but the process has involved an accumulation of new features which have not always brought more value or functionality, and have often introduced bugs or annoyances; all this has ultimately undermined the most important aspect of using a Mac — the ‘it just works’ aspect.
Thursday, April 21, 2016 [Tweets]
Xcode also supports Swift code generation, but I don’t think developers should use it. First off, the amount of code you have to write to use Core Data with Swift is less than with Objective-C, since there are not separate interface and implementation files and the property syntax is simpler. It’s not that hard to do by hand, as we’ll see. Second, types are so much more important in Swift, and
NSManagedObject is actually incredibly smart when it comes to types and Swift.
By default, to-many relationships will be generated as
NSSet?. But who can deal with typeless containers these days? Use the native Swift
Set type instead.
If you’re using Core Data with Swift, I hope this inspires you to revisit your models and reduce the amount of force casting and optional unwrapping in your code.
In Xcode 7.3:
The NSManaged attribute can be used with methods as well as properties, for access to Core Data’s automatically generated Key-Value-Coding-compliant to-many accessors.
@NSManaged var employees: NSSet
@NSManaged func addEmployeesObject(employee: Employee)
@NSManaged func removeEmployeesObject(employee: Employee)
@NSManaged func addEmployees(employees: NSSet)
@NSManaged func removeEmployees(employees: NSSet)
These can be declared in your NSManagedObject subclass.
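Putting the pieces together, a hand-written subclass might look like the sketch below (`Department` and `Employee` are hypothetical entities; the bodies of `@NSManaged` members are provided by Core Data at runtime, which is why this fragment only compiles against a matching model):

```swift
import CoreData

final class Department: NSManagedObject {
    @NSManaged var name: String
    // A native Swift Set instead of NSSet? — NSManagedObject bridges it.
    @NSManaged var employees: Set<Employee>

    // Xcode 7.3: @NSManaged on methods exposes Core Data's
    // auto-generated KVC-compliant to-many accessors.
    @NSManaged func addEmployeesObject(employee: Employee)
    @NSManaged func removeEmployeesObject(employee: Employee)
}

final class Employee: NSManagedObject {
    @NSManaged var name: String
    @NSManaged var department: Department?
}
```

With the typed `Set<Employee>` property, callers can iterate and filter without casting from `NSSet?`.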
Still, the speed of
objc_msgSend continues to astound me. Considering that it performs a full hash table lookup followed by an indirect jump to the result, the fact that it runs in 2.6 nanoseconds is amazing. That’s about 9 CPU cycles. In the 10.5 days it was a dozen or more, so we’ve seen a nice improvement. To turn this number upside down, if you did nothing but Objective-C message sends, you could do about 400 million of them per second on this computer.
It appears to have slowed down since 10.5, with an
NSInvocation call taking about twice as much time in this test compared to the old one, even though this test is running on faster hardware.
A retain and release pair take about 23 nanoseconds together. Modifying an object’s reference count must be thread safe, so it requires an atomic operation which is relatively expensive when we’re down at the nanosecond level counting individual CPU cycles.
In the old test, creating and destroying an autorelease pool took well over 300ns. Here, it shows up at 25ns.
Objective-C object creation also got a nice speedup, from almost 300ns to about 100ns. Obviously, the typical app creates and destroys a lot of Objective-C objects, so this is really useful. On the flip side, consider that you can send an existing object about 40 messages in the same amount of time it takes to create and destroy a new object, so it’s still a significantly more expensive operation, especially considering that most objects will take more time to create and destroy than a simple
NSObject instance does.
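Numbers like these come from dividing total elapsed time by iteration count. Below is a simplified sketch of that micro-benchmark pattern (not Mike Ash's actual harness; loop overhead is ignored here):

```swift
import Foundation

// Approximate the cost of one operation by timing a large batch.
func nanosecondsPerOp(iterations: Int, _ work: () -> Void) -> Double {
    let start = DispatchTime.now().uptimeNanoseconds
    for _ in 0..<iterations { work() }
    let end = DispatchTime.now().uptimeNanoseconds
    return Double(end - start) / Double(iterations)
}

// Object creation, which the results above put at roughly 100ns each:
let perCreate = nanosecondsPerOp(iterations: 100_000) { _ = NSObject() }
```

The “about 400 million per second” figure is just the reciprocal of the per-operation time: 1 / 2.6ns ≈ 385 million message sends per second.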
Jason Koebler (via Slashdot):
Here is the truth: Apple paid independent recyclers to recycle old electronics—which were almost never Apple products, by the way—because it’s required by law to do so. Far from banking $40 million on the prospect, Apple likely ended up taking an overall monetary loss. This is not because Apple is a bad actor or is hiding anything, it’s simply how the industry works.
In this sense, electronics recycling often works a lot like carbon offset credits, according to Kyle Wiens, CEO of iFixit, a company focused on helping people reuse and repair their electronics rather than recycle them.
“What they really do is cut recyclers a check and say ‘Can we have credit for a million of your pounds?’” Wiens told me.
“The ironic thing is most of the gold is in old PCs and servers,” Wiens said. “There’s very little gold in an iPhone.”
John McCall (via Ole Begemann):
The standard library team asked the compiler team to take a look at the performance of the type-checker. Their observation was that several in-progress additions to the standard library were causing a disproportionate increase in the time required to compile a simple-seeming expression, and they were concerned that this was going to block library progress. This is the resulting analysis.
In general, it is expected that changes to the source code can cause non-linear increases in type-checking time. For one, all type systems with some form of generic type propagation have the ability to create exponentially-large types that will generally require exponential work to manipulate. But Swift also introduces several forms of disjunctive constraint, most notably overload resolution, and solving these can require combinatorial explosions in type-checker work.

Note that there have been proposals about restricting various kinds of implicit conversions. Those proposals are not pertinent here; the implicit conversions are not a significant contributor to this particular problem. The real problem is overloading.
Wednesday, April 20, 2016 [Tweets]
Brad Larson (tweet, comments):
The rewritten Swift version of the framework, despite doing everything the Objective-C version does*, only uses 4549 lines of non-shader code vs. the 20107 lines of code before (shaders were copied straight across between the two). That’s only 22% the size. That reduction in size is due to a radical internal reorganization which makes it far easier to build and define custom filters and other processing operations. For example, take a look at the difference between the old GPUImageSoftEleganceFilter (don’t forget the interface) and the new SoftElegance operation. They do the same thing, yet one is 62 lines long and the other 20. The setup for the new one is much easier to read, as a result.
* (OK, with just a few nonfunctional parts. See the bottom of this page.)
The Swift framework has also been made easier to work with. Clear and simple platform-independent data types (Position, Size, Color, etc.) are used to interact with the framework, and you get safe arrays of values from callbacks, rather than raw pointers. Optionals are used to enable and disable overrides, and enums make values like image orientations easy to follow.
Because of open-source Swift, it now supports Linux. On the Mac and iOS side, though, it is surprising that this sort of thing is necessary when Apple provides Core Image. The original project claims:
This framework compares favorably to Core Image when handling video, taking only 2.5 ms on an iPhone 4 to upload a frame from the camera, apply a gamma filter, and display, versus 106 ms for the same operation using Core Image. CPU-based processing takes 460 ms, making GPUImage 40X faster than Core Image for this operation on this hardware, and 184X faster than CPU-bound processing. On an iPhone 4S, GPUImage is only 4X faster than Core Image for this case, and 102X faster than CPU-bound processing. However, for more complex operations like Gaussian blurs at larger radii, Core Image currently outpaces GPUImage.
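The “far easier to build” claim refers to the rewrite's pipeline style, where sources, operations, and outputs are chained with a custom operator. A sketch in the style of the project's README (class and operator names are taken from GPUImage 2 and may differ across versions; "sample.jpg" is a placeholder):

```swift
import GPUImage

// Chain an input, a filter operation, and an output with the
// framework's --> pipeline operator.
let input = PictureInput(image: UIImage(named: "sample.jpg")!)
let filter = SaturationAdjustment()
let output = RenderView(frame: CGRect(x: 0, y: 0, width: 640, height: 480))
input --> filter --> output
input.processImage()
```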
While people might argue that you can send an app to Apple for review and later change it completely from your backend, it is uncommon for people to do so. With Amazon Alexa Skills, you can update your response at any time. I am unsure whether they monitor your updates and review them later.
To install a skill, people need to have the Alexa app on their phone. This app is not a native app but some kind of web app, and it’s really hard to navigate and use. While the capabilities of the Echo are remarkable, the app experience is very poor.
One important thing missing from the skills store is the ability to sell skills; currently everything is free. You can use your own login and handle everything by yourself, but without a real business model for developers, I can’t see skills as a real game changer. Only small skills like ours and add-ons for already existing services are possible.
The Echo initially sounded redundant to me, since I have an iPhone with “Hey Siri” and HomeKit support. But Siri is slower, doesn’t work when the phone is in my pocket, and doesn’t have an API. Amazon’s digital hub for the home seems to support more devices. Apparently, it even works with music from iTunes, via an iOS app. I still don’t really know what I would use it for, but it seems promising.
Imagine what would happen if this were true for buildings… if the efficiency and style and ambience of every building in the world could be fixed, all at once, in exchange for one investment.
Alas, software tends to be mediocre.
Perhaps the biggest problem: In many markets, especially online, software is free. And free software built by corporations turns us from the user into the product. If you’re not paying for it, after all, you must be the bait for the person who is. Which means companies spend time figuring out how to extract value once we’re locked in and can’t easily switch.
Tuesday, April 19, 2016 [Tweets]
Two Factor Auth (via Adam Chandler):
List of websites and whether or not they support 2FA.
Starting in OS X Yosemite, Apple introduced a new option to log into your Mac using the password associated with an Apple ID. As of OS X 10.11.4, this option seems to have been removed from the Users & Groups preference pane in System Preferences.
Curiouser and curiouser. Has Apple removed the option for security reasons? Is there a bug? Why is Apple always so damned secretive?
When I rejoined Twitter, I already knew who to follow, because I had people I followed during my previous stint. For completely new users, however, the first hurdle is figuring out who to follow. Twitter unhelpfully suggests celebrities. These suggestions are self-defeating, because celebrities are almost guaranteed to ignore you. They have way more followers than they can respond to personally. So you can follow celebrity accounts, tweet to the celebrity accounts, and ... nothing. That gets old quickly. You can try to “personalize” your experience by telling Twitter your interests, but the categories are so broad (e.g., Music) that you end up with more celebrity accounts anyway. And you can upload your contacts to Twitter in order to discover your contacts on Twitter, but what if you don’t want to provide your address book to Twitter? And while uploading your contacts may be a good way of finding people you already know on Twitter, it’s not necessarily a good way of finding people you don’t know. What if you’re interested in, say, Mac programming, and you’re new and unknown in the field, so you don’t have any existing contacts? Indeed, what if you’re signing up for Twitter in order to meet other people in your field? Good luck with that.
The current Git version is 2.8.1. Xcode 7.3 comes with Git 2.6.4.
rachelbythebay (via Hacker News):
git 2.6.4. Is anything wrong with that? Well, yeah, actually. Say hello to CVE-2016-2324 and CVE-2016-2315, present in everything before 2.7.1 according to the report. You should check this out.
Remote. Code. Execution.
Apple is doing something new which basically keeps you from twiddling certain system-level programs without going to fantastic lengths. Not even root is enough to do it. In short, you can’t just replace /usr/bin/git.
Companies like Apple and Microsoft prevent you from modifying the software installed on your computer to improve your security.
Ironically, when they do that, they also make it difficult, impractical, or impossible for you to upgrade or disable vulnerable software (in this case, an old, insecure version of git with remote-code-execution vulnerability).
/usr/bin/git is a “toolshim” that effectively calls “xcrun git” (it actually calls xcselect_invoke_xcrun, from /usr/lib/libxcselect.dylib, if you really want the details - this can be found by inspecting the binary). xcode-select’s manpage tells you that these shims call the respective binary in the active developer directory, whereas xcrun’s manpage describes its capabilities in more detail.
Imagine that you are in corporate IT, managing a fleet of developers with Macs. You can push a newer version of git to them, and you can even change their default PATH so that the version of git you pushed comes before the git that ships with Apple. But you still cannot remove the one that comes with Apple, and you cannot prevent it from being used.
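The PATH workaround described above looks like this in practice (a sketch; `/usr/local/bin` assumes a Homebrew-installed git, and the exact paths depend on your setup):

```shell
# /usr/bin/git is Apple's protected shim; SIP prevents replacing it
# even as root. The workaround: install a current git and put its
# directory ahead of /usr/bin on PATH.
command -v git || true    # likely /usr/bin/git on a stock install

export PATH="/usr/local/bin:$PATH"
hash -r                   # drop the shell's cached command lookup
command -v git || true    # now /usr/local/bin/git, if one is installed
```

As the commenter notes, this only changes which git a shell finds first; anything that invokes `/usr/bin/git` by absolute path still gets Apple's copy.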
Saturday, April 16, 2016 [Tweets]
It’s easy to look at successful, painstakingly crafted, impeccably designed apps from well-known developers like Panic or Omni and attribute their success to their craftsmanship, design, and delightful details. Far too many developers believe that if they polish an app to a similar level, they’ll be successful, too. And then they pour months or years of effort into an app that, more often than not, never takes off and can’t sustain that level of effort.
These high-profile success stories didn’t become successful because they invested tons of time or had world-class designs — they became successful because they solved common needs that people were willing to pay good money for, in areas with relatively little competition. […] The craftsmanship and design were indulgent luxuries that their successful market fits enabled them to do, not the other way around.
Recognize that indie development is flooded with competition. This isn’t to discourage anyone from entering it, but should be considered when deciding what to do (and not do): keep your costs as low as possible, and get ideas to market quickly before assuming they’ll be successful.
Richard Eckel (via John Carmack, Hacker News):
Cutler, a Microsoft Senior Technical Fellow whose impressive body of work spans five decades and two coasts, will be honored Saturday evening as a Computer History Museum Fellow, along with Lee Felsenstein, the designer of the Osborne 1, the first mass-produced portable computer; and Philip Moorby, one of the inventors of the Verilog hardware description language.
Cutler, 74, who still comes to his office each day on Microsoft’s sprawling Redmond, Washington, campus, has shaped entire eras: from his work developing the VMS operating system for Digital Equipment Corporation in the late ‘70s, his central role in the development of Windows NT – the basis for all major versions of Windows since 1993 – to his more recent experiences in developing the Microsoft Azure cloud operating system and the hypervisor for Xbox One that allows the console to be more than just for gaming.
One of the toughest challenges was testing the system. Early on, the team decided it didn’t have the resources necessary to write a comprehensive test suite. Instead, they opted for a dynamic stress system, which put a severe load on the overall system. Every night the team ran stress tests on hundreds of machines. The next morning the team would arrive at the office, triage the failures and identify the bugs for the daily 9 a.m. bug-review meeting.
I had the privilege to work on the Windows kernel team in the NT5 and then XP days. I really wish they could share some of his code. It was the cleanest, best-segmented, and best-commented code I’ve ever seen. It made the system much more maintainable and understandable, in areas that are inherently complex. Great interfaces, with a clear understanding of what was going in and coming out. And it helped all the other devs raise their game.
I have this little saying that the successful people in the world are the people that do the things that the unsuccessful ones won’t. So I’ve always been this person that, you know…I will build the system, I will fix the bugs, I will fix other people’s bugs, I will fix build breaks. It’s all part of getting the job done.
Amazon (via Hacker News, Slashdot):
Kindle Oasis features a high-resolution 300 ppi display for crisp, laser-quality text—all on the same 6” display size as Kindle Voyage. A redesigned built-in light features 60% more LEDs than any other Kindle, increasing the consistency and range of screen brightness for improved reading in all types of lighting. Kindle Oasis guides light toward the surface of the display with its built-in front light—unlike back-lit tablets that shine in your eyes—so you can read comfortably for hours without eyestrain.
Charge the device and cover simultaneously while snapped together and plugged in. When on the go, the cover will automatically recharge the device, giving you months of combined battery life.
At $289, the Oasis is the most expensive Kindle in years, four times the price of the entry-level Kindle, which does all the same things. But damn is it tiny. The smallest Kindle yet at less than five ounces and just 3.4mm thick at its smallest point. Got two quarters? Stack them. That’s how thick the Oasis is. It makes an iPhone 6 look porcine.
Kindle is for reading. Nothing more. Everything about its performance, its design, its software, reflects that.
The major changes here are in the form factor: instead of the earlier version’s tablet shape, the Oasis is more of a wedge, with a bulge on one side intended to make it more ergonomic to hold. (You can do so with either the left or right hand, and the Kindle’s screen will rotate to accommodate.) Backward and forward page-turning is done either by the touch screen or by actual physical buttons on the side with the larger bezel.
Amazon calls the latest version “the thinnest and lightest Kindle ever”; frankly, I just got a Paperwhite last week, which already feels pretty darn light, but the Wi-Fi-only version of the Oasis is 4.6 oz, compared to the 7.2 oz of the Paperwhite, so there you go.
The iPad mini 4 is 10.4 oz. My ideal tablet would have the form factor and display of a Kindle but the speed and content availability of an iPad.
It’s got a charging cover, which doesn’t make sense, given how long the Kindle’s battery lasts.
No matter how carefully you pick the text for your screenshots @JeffBezos you can’t hide that garbage typography.
Friday, April 15, 2016 [Tweets]
If you no longer need QuickTime 7, here’s how to remove it from your PC.
They don’t say why you might want to do this, though.
The Department of Homeland Security’s U.S. Computer Emergency Readiness Team today issued an alert recommending Windows users with QuickTime installed uninstall the software as new vulnerabilities have been discovered that Apple does not plan to patch.
First, Apple is deprecating QuickTime for Microsoft Windows. They will no longer be issuing security updates for the product on the Windows Platform and recommend users uninstall it. Note that this does not apply to QuickTime on Mac OS X.
Second, our Zero Day Initiative has just released two advisories ZDI-16-241 and ZDI-16-242 detailing two new, critical vulnerabilities affecting QuickTime for Windows. These advisories are being released in accordance with the Zero Day Initiative’s Disclosure Policy for when a vendor does not issue a security patch for a disclosed vulnerability. And because Apple is no longer providing security updates for QuickTime on Windows, these vulnerabilities are never going to be patched.
The retirement of QuickTime for Windows has been in the planning stages for at least a few months, and possibly much longer. Apple has never supported QuickTime for Windows 8 or 10, although some users found ways to work around the restriction. What’s more, the January update removed the browser plugin for QuickTime, making it impossible for video on websites to seamlessly play in a user’s browser. As a result, there’s little chance QuickTime vulnerabilities could be harnessed into a drive-by download exploit. Instead, exploits would have to rely on social engineering that convinces a user to download a video and open it in QuickTime.
Even so, Apple officials should have shown the courtesy to tell Windows users QuickTime was no longer receiving security updates, rather than leaving it to Trend Micro.
As for Adobe apps needing QuickTime on Windows, there’s also irony there. All indications were that Apple didn’t tell Adobe until everyone else found out. The same thing happened when Apple announced during a Carbon WWDC session that 64-bit HIToolbox was cancelled. This was the first time Adobe or anyone else learned about the cancellation.
Apparently, Lightroom 6 for Windows relies on QuickTime.
Update (2016-04-16): Nick Heer:
It’s easy enough to uninstall QuickTime, but a surprising number of programs on Windows list it as a dependency, including GoPro Studio and Cubase, which require it to run, and Premiere Pro, After Effects, and Traktor, which use it for various features.
As far as users go, the average user now has a number of alternatives, starting with VLC, but there are a number of people working on Windows in media and media-related industries who will miss having a reference media player on their machine (iTunes just isn’t the same thing). However, software developers who were still building against the QuickTime SDK and relying on QuickTime being installed on Windows should have seen this coming: the writing has been on the wall for QuickTime for Windows since QuickTime X in 2009, when there was no corresponding update on the Windows side, which stayed on QuickTime 7.
Update (2016-04-19): Adobe (via Rosyna Keller):
Adobe has worked extensively on removing dependencies on QuickTime in its professional video, audio and digital imaging applications and native decoding of many .mov formats is available today (including uncompressed, DV, IMX, MPEG2, XDCAM, h264, JPEG, DNxHD, DNxHR, ProRes, AVCI and Cineform). Native export support is also possible for DV and Cineform in .mov wrappers.
Unfortunately, there are some codecs which remain dependent on QuickTime being installed on Windows, most notably Apple ProRes. We know how common this format is in many workflows, and we continue to work hard to improve this situation, but we have no estimated timeframe for native decode currently.
Adam Satariano and Alex Webb:
Among the ideas being pursued, Apple is considering paid search, a Google-like model in which companies would pay to have their app shown at the top of search results based on what a customer is seeking. For instance, a game developer could pay to have its program shown when somebody looks for “football game,” “word puzzle” or “blackjack.”
Paid search, which Google turned into a multibillion-dollar business, would give Apple a new way to make money from the App Store.
This sounds like a terrible idea. The one and only thing Apple should do with App Store search is make it more accurate. They don’t need to squeeze any more money from it. More accurate, reliable App Store search would help users and help good developers. It’s downright embarrassing that App Store search is still so bad. Google web search is better for searching Apple’s App Store than the App Store’s built-in search. That’s the problem Apple needs to address.
Putting aside the fact that such a move seems un-Apple-like, I don’t see how it would benefit Apple, either.
Allowing third parties to pay for placement in the App Store would not contribute to Apple’s justifications for the App Store in any way. Who benefits from such a change? The businesses paying for the placement, presumably. It’s hard to see how paid placement would consistently benefit either Apple or its direct customers. It’s unlikely that paid listings would be used to highlight apps that are in line with Apple’s other goals for the store.
Subramanian is right in one sense: if Apple does this, it will be huge. It’ll be huge in eradicating any sense that the App Store is a meritocracy when it comes to app visibility.
My bigger concern, though, is paid placement permeating throughout the store, such as onto the entry pages a great many people use to find new apps and games. There, Apple’s ‘curation’ is uneven. I’ve been told by various American friends that ‘Editor’s Choice’ in the US is closer in meaning to ‘this is interesting’ than ‘this is amazing’, but even so, that slot is often filled with garbage, albeit garbage released by companies important to Apple from a revenue standpoint.
Apple doesn’t need “a new way to make money from the App Store”. They need a way to get developers to make more money. They need to de-crappify the Store and improve the chances of success for smaller developers.
I doubt this is true, because I don’t understand this move at all. Apple makes their biggest margins on selling their hardware, and any potential revenue from App Store pay-to-play will be dwarfed by profits from their products. The App Store needs some work done on discovery, but it’s not to make discovery less egalitarian towards Big Money.
Apple has done some dumb things in the company’s history, but this stands out as particularly stupid. Let’s be honest; Apple really doesn’t need the money that they’d be making from paid search placement, and all this will do is make the customer experience worse. It’s already very hard to find anything on the App Store, since Apple is so lenient about clones, and about apps using misleading keywords in their names and descriptions. Adding paid search will turn the App Store into a random morass of crap.
Apple is said to have approximately 100 employees working on its App Store project under vice president and former iAd leader Todd Teresi, including engineers who formerly worked on the iAd team. According to sources who spoke to Bloomberg, the search team is relatively new, and it is not yet known if and when changes will be introduced to the App Store.
Update (2016-04-16): Andrew Cunningham:
That said, charging for visibility might not actually solve any of those problems. Those with the money to pay Apple’s fees could well be the same big-name app developers whose software already dominates search results and the Top Charts. And making enough money from your app to make paying for search results worthwhile could still be contingent on getting into those Top Charts or onto one of Apple’s curated lists somehow.
Apple ran a video at WWDC last year called The App Effect. In it, Apple tries to deliver the message that the App Store is a platform that gives big companies and one-man-shows a level playing field. […] I really hope Apple sees value in fixing the App Store before thinking of ways to squeeze more money out of it.
There are just too many downsides associated with charging developers for placement in App Store search results. I would be shocked if Apple made a move like this.
Am I wrong in suggesting that Apple created this problem and is now asking developers to pay to "fix" it? Why wasn’t search already better?
If I were tasked with creating a system that only benefits those ALREADY doing well in the App Store and hurts indie developers, I would come up with exactly what they’re proposing.
I would like Apple to fix search before they start asking devs to pay for placement. For such a simple data set, their search features are practically non-existent. Search terms seem to need to be nearly exact, the search results are artificially limited by some mechanism, and there’s no ability to search multiple terms, no ability to create custom lists, no ability to filter based on more than their two or three meaningless filters, etc.
See also: Hacker News.
Update (2016-04-19): Nick Heer:
What concerns me is that this story would have been immediately written off prior to the introduction of iAd, or even just a few years ago. It is entirely unlike Apple. But recent decisions by Apple — such as the interstitial ad displayed to users not subscribed to Apple Music, or the other interstitial ad displayed on older iPhones after the introduction of the 6S — make this all the more likely.
Update (2016-04-21): Ben Thompson:
As for the concerns of Apple bloggers that such a scheme will reinforce the tendency of the App Store to ensure the rich get richer, well, I’m sorry to say but there is no evidence that Apple cares. The company has done nothing to help developers with more traditional business models (i.e. not pay-to-play games) monetize; indeed, in a telling twist the team working on this search ad product is the former iAd team, which Steve Jobs himself said existed so that apps could be as cheap as possible. The Occam’s Razor conclusion is that Apple is actually serious about their services business or, perhaps more accurately, hopeful they can offer an alternative narrative to Wall Street alongside what might be a very tough earnings report.
Such a system would exacerbate much of the App Store’s dysfunction, disincentivizing improvements to organic search and editorial features while raising the cost of acquiring new customers above what many indie developers and business models can sustain.
While a good search-ad system could benefit the App Store, customers, and many of us, nothing in Apple’s track record suggests that they’re willing or able to do this well.
But a bad search-ad system, on top of bad search, will only further damage the App Store, funnel more of our already slim margins back into Apple like a massive regressive tax, and erode customers’ confidence in installing new apps.
Update (2016-04-22): John Gruber:
Perhaps comparisons to Google search are a red herring, and the right comparison is to Amazon, and retail co-op. Pay for placement, just like in grocery stores.
I don’t think it makes sense as a trial balloon from someone in favor of the program; Apple doesn’t care about “warming us up” to changes. It makes more sense as a leak from someone opposed to it, someone who foresaw that it wouldn’t go over well.
The App Store started off indie because of the shared code with Mac and intense developer interest, but I think Apple’s plan has always been to cater to big brands, like Nike, Disney, Bank of America, etc.
The reason I was wrong about Apple making money on paid search is that I was looking at it from my own perspective: Apple doesn’t stand to make money from me (and people like me) on pay-to-play App Store search results. But from big brands like Nike, Disney, and Bank of America, Apple absolutely stands to make good money.
I know this gets repeated ad nauseam, but it remains true: the App Store is not in good shape. A paid search placement feature dropped on top of the existing infrastructure would likely be a disaster.
Update (2016-04-25): Roopesh Chander:
Actually, ad-like stuff already shows up in App Store search. If you search for “podcast player” right now in the App Store, you get an ad for Apple’s Podcasts app right on top, and the search results below that. (However, I don’t know of any other app that’s promoted this way. Anything else you’ve spotted?)
I don’t think ads in App Store search can improve the viability of paid-upfront apps.
Thursday, April 14, 2016 [Tweets]
According to the study, the top 5 categories where money is being spent are, in order: Games, Music, Social Networking, Entertainment and Lifestyle.
I, like Michael, would love for indie developers (at least the ones who make productivity apps and the like) to see their apps climb the charts too. But an excellently designed calendar app just doesn’t make users feel the same way as winning a game against a friend. So when it comes time to choose between spending $5 on a better calendaring system or spending those five dollars on beating a friend at a game, which developer do you think is buying themselves a coffee that evening?
See also: Jim Dalrymple, who talks about what he sees being promoted in the App Store.
Update (2016-04-14): Craig Grannell:
Which makes Apple’s ongoing lack of interest in games all the more baffling.
Update (2016-04-15): Paul Jones:
While professional, productivity, and utility applications aren’t #1, I get the impression it’s still a huge market. Stranger still, the recent Game Center white-screen bugs suggest that Apple’s incentives haven’t particularly shifted toward serving the needs of game developers, though the number of new graphics APIs introduced recently somewhat counters this.