Another point to make is that Apple’s terms and conditions make it clear that you do not own any content you purchase from the company, but are only granted access until your death. That’s a much more complicated issue that may, one day, have to be dealt with by the courts.
Friday, March 7, 2014
Thursday, March 6, 2014
Doug Carlston, computer games pioneer and founder of Brøderbund Software, Inc., has donated to The Strong in Rochester, New York, a collection of games, consumer software, and corporate records that document the history of the company and the development of the computer games industry in the 1980s and 1990s. The materials will be cared for by The Strong’s International Center for the History of Electronic Games (ICHEG) and made accessible to researchers.
The bug in the GnuTLS library makes it trivial for attackers to bypass secure sockets layer (SSL) and Transport Layer Security (TLS) protections available on websites that depend on the open source package. Initial estimates included in Internet discussions such as this one indicate that more than 200 different operating systems or applications rely on GnuTLS to implement crucial SSL and TLS operations, but it wouldn't be surprising if the actual number is much higher. Web applications, e-mail programs, and other code that use the library are vulnerable to exploits that allow attackers monitoring connections to silently decode encrypted traffic passing between end users and servers.
It sounds a lot like the recent Apple bug.
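Per the disclosure and follow-up analyses, the flaw came from cleanup-style error handling in which negative internal error codes could reach callers that treat any nonzero value as success. A distilled sketch of that pattern (not GnuTLS’s actual code):

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical distillation of the reported bug pattern: the helper
   returns 1 for "verified", 0 for "not verified", and a negative code
   for internal errors. */
static int check_certificate(bool malformed)
{
    if (malformed)
        return -42;  /* error path, meant to be treated as failure */
    return 0;        /* signature did not verify */
}

int main(void)
{
    /* The caller tests for truthiness, so the negative error code is
       accepted as "verified": a malformed certificate passes. */
    if (check_certificate(true))
        printf("certificate accepted\n");
    else
        printf("certificate rejected\n");
    return 0;
}
```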
Getty Images is dropping the watermark for the bulk of its collection, in exchange for an open-embed program that will let users drop in any image they want, as long as the service gets to append a footer at the bottom of the picture with a credit and link to the licensing page. For a small-scale WordPress blog with no photo budget, this looks an awful lot like free stock imagery.
Model objects live on the main thread. This makes it easy to use VSNote, VSTag, and so on in view controllers and in syncing.
There is one exception: you can create a “detached” copy of a model object to use with API calls. A detached model object exists on one thread of execution only, is short-lived, and is disconnected from the database. Detached objects aren’t a factor when it comes to concurrency.
When a model object is added, changed, or deleted, updates to the database are placed in a background serial queue.
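A minimal sketch of the scheme as described (VSNote is from the post; the detached copy, queue, and save function are my guesses at the shape, not Vesper’s actual code):

```objc
#import <Foundation/Foundation.h>

@interface VSNote : NSObject
@property (nonatomic, copy) NSString *text;
@property (nonatomic, assign, getter=isDetached) BOOL detached;
- (VSNote *)detachedCopy; // one thread only, short-lived, no database ties
@end

@implementation VSNote

- (VSNote *)detachedCopy {
    VSNote *note = [[VSNote alloc] init];
    note.text = self.text;
    note.detached = YES;
    return note;
}

@end

// All database updates funnel through one background serial queue.
static dispatch_queue_t VSDatabaseQueue(void) {
    static dispatch_queue_t queue;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        queue = dispatch_queue_create("database", DISPATCH_QUEUE_SERIAL);
    });
    return queue;
}

static void VSSaveNote(VSNote *note) {
    NSCAssert([NSThread isMainThread], @"model objects live on the main thread");
    VSNote *snapshot = [note detachedCopy]; // safe to hand across threads
    dispatch_async(VSDatabaseQueue(), ^{
        // ...write snapshot to the database here...
        (void)snapshot;
    });
}
```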
Update (2014-03-07): Jesper:
By Apple’s own admission, Core Data sync didn’t work because it was a black box with no ability to debug it. It would be unfair to zing Core Data at large with that epithet. But if there’s one thing that seems true about Apple’s frameworks, love them mostly as I do, it’s that they’re constructed as if to impress on their user how privileged they should feel, given the difficulty of the problem they set out to solve and the complexity of the implementation they have used to convincingly solve it.
Basic features are still painful for people who have been successful Cocoa coders for ten years. They’re not saved by the ripening of the frameworks so much as by their own accumulated ingenuity. Cocoa is still being developed and features are added, but rarely does something hard get easier.
The second reason has to do with my enduring love of plain-ol’ Cocoa. I like regular Cocoa objects. I like being able to implement isEqual: and hash, and design objects that can be created with a simple init (when possible and sensible). I especially like being able to do those things with model objects. (Which totally makes sense.)
Tuesday, March 4, 2014
I’m really excited about this release! It’s got features that many people have been asking for, and it opens Arq up to a whole new range of options for storing backup data.
Glacier backups now use the S3 Glacier Lifecycle feature. Among other benefits, this allows Arq to prune old Glacier commits (that previously were immortal) and subject them to the budget. Unfortunately, Glacier vaults from previous versions of Arq cannot be transitioned; you have to delete them and create a new backup target (not in that order!).
You can now back up to other S3-compatible destinations such as DreamObjects, which is about half the price of Amazon S3 and has fewer restrictions than the (even cheaper) Amazon Glacier. I plan to continue using Glacier and S3 because the performance has been great and (in theory, see below) the reliability is unmatched. But it’s nice to have alternative services to switch to or use in parallel.
Arq now supports backups via SFTP, which is something I’ve wanted a backup app to do for as long as I can remember. I have an account with DreamHost, and they offer 50 GB of SFTP space for personal backups. This is a convenient, free space I can use for my most important backups. It avoids the delays and expense of restoring from Glacier. DreamHost Personal Backup is great as a secondary backup target, but it is not itself backed up so you should still use AWS or another service for your primary.
You can also use SFTP to make a local backup or archive on a NAS or other Mac that you have an account on.
Aside from the new storage options, the other big new feature is that you can now have multiple backup targets. This lets you have multiple backups going to different cloud services. You can also spread your files across multiple targets, e.g. if you want your Documents folder to have a different backup schedule than your Aperture or iTunes library. Each target can also have a separate budget, which lets you keep a longer history for certain folders. You can also pause a backup target (by setting its schedule to manual) in order to give priority to other targets (since Arq seems to only back up to one target at a time). Alas, the targets cannot be renamed or reordered, and you cannot copy file exclusion patterns from one target to another.
I’ve been seriously using Arq since version 2, and version 3 was one of my favorite apps. Version 4 so far seems to be better still. The app itself has been reliable (rarely crashing) and has not hogged the CPU (like other backup apps I’ve tried). However, I have had some problems with the reliability of Arq’s backups. It’s not clear whether this is due to a bug in Arq itself or problems with the cloud storage provider (AWS).
Twice in the last six months, I’ve found that backup snapshots (“commits”) older than a certain date had disappeared. Arq stores the commits in a linked list. If a commit object is lost, Arq, naturally, will no longer be able to find the trees and blobs in that commit. But it will also lose the link to the parent commit (previous backup snapshot) and, thus, all of the previous snapshots. In theory, much of the data is still on the server, but it’s no longer in an accessible form, and Arq will garbage collect it when it enforces the budget.
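To make the failure mode concrete, here’s a sketch (my simplification, not Arq’s actual format) of why losing a single commit object orphans everything older:

```objc
#import <Foundation/Foundation.h>

// Hypothetical model: each commit names its parent by content hash.
@interface Commit : NSObject
@property (nonatomic, copy) NSString *objectID;
@property (nonatomic, copy) NSString *parentObjectID; // nil for the oldest
@end

@implementation Commit
@end

// Walking history from the newest commit stops cold at a missing object;
// every older snapshot becomes unreachable even if its data still exists.
static NSArray *ReachableCommits(NSString *headID,
                                 NSDictionary *store) // objectID -> Commit
{
    NSMutableArray *reachable = [NSMutableArray array];
    for (NSString *objectID = headID; objectID != nil; ) {
        Commit *commit = store[objectID];
        if (commit == nil)
            break; // a single lost object severs the chain here
        [reachable addObject:commit];
        objectID = commit.parentObjectID;
    }
    return reachable;
}
```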
The developer, of course, takes this sort of thing very seriously. The first time I noticed missing backup snapshots, he told me that several other customers had reported the same problem around the same time. It seemed as though the problem was that Amazon S3 was reporting objects as missing (when doing the equivalent of an ls) even though it could successfully fetch their data when asked (the equivalent of stat or cat). So when Arq periodically verified its backups, it would delete objects related to the “missing” ones unnecessarily. An update to Arq was soon released to fix this.
At the time, I was using S3 Reduced Redundancy Storage for my backups. RRS storage is cheaper than regular S3 but offers only 99.99% durability compared with 99.999999999%. Since I have other backups besides Arq, I did not think I needed to pay for those extra 9’s. I thought it was acceptable to lose 1 in 10,000 objects, even though I have many more files than that. What I failed to appreciate was that the lost object might not be a file. It could instead be a commit object. In that case, losing that one object effectively means losing hundreds of thousands or even millions of other objects. These days, I think there is little reason to use RRS with Arq. You can store your backup data in Glacier, which is much cheaper than RRS yet has the same durability as S3. The backup metadata is stored in S3.
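The arithmetic behind that realization, using a made-up but plausible object count:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    double lossRate = 1.0 - 0.9999; // RRS: 99.99% annual durability
    double objects  = 500000;       // assumed size of a large backup set

    // Expected annual losses, and the chance of losing at least one
    // object: if that one object is a commit, it takes the history with it.
    printf("expected losses/year: %.0f\n", lossRate * objects);           // 50
    printf("P(>= 1 loss): %.10f\n", 1.0 - pow(1.0 - lossRate, objects));  // ~1.0
    return 0;
}
```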
It’s not clear whether RRS was at fault, but I switched away from it just to be safe. Then, a few months later, I noticed that more old backups had disappeared. This time, other Arq users had apparently not encountered the same problem. It’s hard to know, though, because it is not obvious in the user interface that backups have been lost. You only notice it when you click a disclosure triangle to see the list of snapshots and see that the list is shorter than expected.
I never actually lost any current backups, but I was intending to use Arq as a historical archive as well, because sometimes I need access to old versions of files. In that sense, the cloud backup is much more than a backup; I do not have master local copies of all the versions.
It’s obviously very troubling to have a backup app or cloud storage provider lose my backups. But I continue to use and recommend Arq for several reasons. First, I have confidence in the product’s basic design and in Stefan, its developer. Second, Arq 4’s support for multiple backup targets offers a variety of ways to mitigate the problems caused by lost objects. Third, I have tried just about every backup product I could find over the years, and I have yet to find one that’s better. The closer I look, the more flaws and design limitations become visible. For example, Backblaze is highly regarded, yet it silently deletes backups of external drives that haven’t been connected in a while.
Backups are important enough that I make local ones (using SuperDuper and DropDMG) even though that’s more work than just relying on the cloud. I want to have copies of my data in my physical possession. There are also obvious benefits to making cloud backups, e.g. using Arq, so I do that as well. What I have more recently come to realize is that cloud backups are important enough that I shouldn’t rely on just one provider. Before Arq I used CrashPlan, and it, too, occasionally lost my data. The lesson here is that there is no perfect cloud provider. I should plan for failure and use multiple good providers. I am now using CrashPlan alongside Arq.
The second lesson I’m learning is that I value access to old versions of files but that there are few, if any, backup products that can provide this over the long term. The answer, I believe, is to structure the data so that the backup, rather than the backup history, contains the old versions. In other words, put the versions in band, where possible. For example, a single backup snapshot of a Git repository includes the complete, checksummed history for those files. I don’t need last year’s backup if I committed the file to Git last year and I have yesterday’s backup. Of course, my source code has been in version control from the beginning. But I am now using version control to track other types of files such as notes, recipes, my 1Password database, and my calendar and address book. This lets a newly created cloud backup contain versions from years ago.
The same logic holds for verifying the backup. It’s nice if the backup software can do this, but if your data has in-band checksums you can verify the restored files independently. You can also verify your working files so that you can identify damage and know when you need to restore a clean copy from backup. You can verify files in Git using git-fsck. For files not in Git, I use EagleFiler and IntegrityChecker.
Long ago, as the design of the Unix file system was being worked out, the entries . and .. appeared, to make navigation easier. […] When one typed ls, however, these files appeared, so either Ken or Dennis added a simple test to the program. It was in assembler then, but the code in question was equivalent to something like this: if (name[0] == '.') continue;
I’m pretty sure the concept of a hidden file was an unintended consequence. It was certainly a mistake.
Disallowing an app from controlling another is a good idea (I sure don’t want an app selecting menu items for me!) and the App Sandbox Design Guide’s statements about accessibility make complete sense.
That being said, automatically moving windows around on my screen is something that helps me do my job and something I can explicitly control using Accessibility in System Preferences. As a user, this type of “controlling my app” means “making my work easier”.
He wants part of System Events’ AppleScript dictionary to have an access group so that it can be a scripting target. This would make it possible to target System Events from a sandboxed application using the com.apple.security.scripting-targets entitlement rather than the broader com.apple.security.temporary-exception.apple-events one that’s likely to be rejected by App Review.
Unfortunately, access groups are not yet widely supported by Mac OS X’s built-in applications or by third-party ones. One app that does support access groups is iTunes, whose .sdef file is, curiously, not stored inside iTunes.app.
There are two changes in this update that I really like:
When generating the Markers popup, leading whitespace from the marker name is now used to indent the menu item, so that type-to-select works correctly in the menu.
Control-Tab actually does not work for its intended purpose, which was to flip the sense of “Auto-Expand Tabs” on the fly when entering a tab character. To work around this, Option-Tab has been defined so that it always enters a literal Tab character; thus, if “Auto-Expand Tabs” is turned on, use Option-Tab to enter a tab character instead of spaces.
Read the release notes for each BBEdit update to see just how much behind-the-scenes work it takes to keep a top Mac app up-to-date and polished.
Having an email address in a domain you control and hosting your email at a provider you like can solve numerous problems and perhaps even improve your image.
Apple ships a patched version of OpenSSL with OS X. If no precautions are taken, their changes rob you of the power to choose your trusted CAs, and break the semantics of a callback that can be used for custom checks and verifications in client software.
The reason for this unexpected behavior is that Apple is trying to be helpful. Certificate validation and especially trust databases are a hassle and OpenSSL’s handling of them is rather user-hostile. So Apple patched a Trust Evaluation Agent (TEA) into their OpenSSL. It gives failed verifications a second chance using the system keyring as trust store.
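Per that write-up, there is said to be an escape hatch: an environment variable, checked by Apple’s patch, that disables the TEA entirely (the variable name is as reported there; I haven’t verified it on every release):

```c
#include <stdlib.h>

int main(void)
{
    // Disable Apple's Trust Evaluation Agent so a failed verification
    // stays failed and custom verify callbacks behave as upstream
    // OpenSSL intends. Must be set before the first verification runs.
    setenv("OPENSSL_X509_TEA_DISABLE", "1", 1);

    // ...initialize OpenSSL and connect as usual...
    return 0;
}
```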
Apple has rebranded iOS in the Car as the much more syllable-friendly “CarPlay”, and launched it in Geneva. This new version has a much different interface than that shown at WWDC, as can be seen on the CarPlay page on Apple’s website. Also of note: there are third-party apps which support CarPlay; it isn’t known yet whether third-party developers require a special agreement to enable CarPlay support.
Interacting with CarPlay can be done via buttons/knobs or directly by touch (if available). It’s important to note that CarPlay likely won’t replace the need for checking an expensive box on your car’s option list. The OEM still needs to provide the underlying hardware/interface; CarPlay simply leverages the display and communicates over Apple’s Lightning cable.
It also has the potential to fizzle out because Apple demands more control than their partners are comfortable with, like iAd, or their interests conflict too much with the partners’ interests without enough upside to the partners, like iTunes TV rentals.
The risk seems clear: Apple isn’t building the hardware in the cars. Color me skeptical that this is going to work smoothly. Also, no third-party app support — yet. UPDATE: Actually, there are a handful of third-party apps — Beats Music, iHeartRadio, Spotify, and Stitcher — but those are hand-picked partners. What I’m saying is there’s no way yet for any app in the App Store to present a CarPlay-specific interface.
Volvo confirmed that CarPlay’s connection and video mirroring functionality is based on a streaming H.264 video feed, prompting watchers to speculate that the feature is based on AirPlay, an Apple-designed media streaming technology.
In a rather surprising find earlier today, N4BB was able to confirm that CarPlay runs on QNX, an operating system the embattled Canadian smartphone maker BlackBerry acquired from Harman International Industries back in 2010…
For all we know, CarPlay might just be an extension to the existing car entertainment systems, using something like VNC (or hopefully something more optimized for the use-case) to show the iOS screen on the existing infrastructure.
In that case, the car is running QNX because it has always been running QNX and because the car must be usable even if the user decides to switch to a different platform or loses their device.
In that scenario, saying CarPlay is running QNX is similar to saying your Thunderbolt Display is running OS X when it’s connected to your Mac running OS X. Or, using an even closer analogy, it’s similar to saying that your OS X machine is running Linux because you’re using SSH connected to a Linux box (or any other kind of remote desktop).
Previous reports had suggested that CarPlay would communicate with displays wirelessly using some version of Apple’s AirPlay protocol, but according to today’s release, the feature will only work with Lightning-equipped iPhones.
But how does CarPlay stack up to the current crop of infotainment systems? Here’s a breakdown of how Apple’s first real attempt at dashboard dominance competes with the best from the established automakers.
I’m sitting at my desk right now, waiting to sync my iPhone. I think I started about twenty minutes ago, and all I’m doing is adding a bunch of audio files I want to listen to when I go out for a walk. Which I hope to do before the sun goes down…
Back in the day, this process was much faster than it is now. I don’t know exactly what’s changed since iOS 7, but I see this all the time, on all my iOS devices: iPhone, iPod touch, iPad Air.
This has been my experience as well. It is somewhat faster now that I’ve turned off photo syncing in favor of FlickStackr (App Store) and podcast syncing in favor of Downcast (App Store). This is curious since the photos, especially, didn’t change very much but always seemed to require an inordinate amount of time to sync. The nice thing about FlickStackr is that it lets me zoom in more than the regular Photos app. The photos also seem to have fewer JPEG compression artifacts. Unfortunately, I have to remember to tell it to load the photos while I have a Wi-Fi connection.
If you find yourself in a situation that is difficult to solve with Auto Layout, just don’t use it for that particular view. You can freely mix the constraint-based layout with manual layout code, even within the same view hierarchy.
You can think of Auto Layout as just an additional step that runs automatically in your view’s layoutSubviews method. The Auto Layout algorithm performs some magic, at the end of which your subviews’ frames are set correctly according to the layout constraints. When that step is done, the Auto Layout engine halts until a relayout is required (for example, because the parent view size changes or a constraint gets added). What you do to your subviews’ frames after Auto Layout has done its job doesn’t matter.
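A minimal sketch of such mixing, in a UIView subclass with a hypothetical badgeView that has no constraints (views created in code keep translatesAutoresizingMaskIntoConstraints set to YES, so the manual frame sticks):

```objc
- (void)layoutSubviews {
    [super layoutSubviews]; // the Auto Layout pass sets constrained frames here

    // iconView is laid out by constraints; badgeView has none and is
    // positioned manually relative to the result.
    CGRect icon = self.iconView.frame;
    self.badgeView.frame = CGRectMake(CGRectGetMaxX(icon) - 10.0,
                                      CGRectGetMinY(icon) - 10.0,
                                      20.0, 20.0);
}
```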
Sunday, March 2, 2014
The built-in cameras on Apple computers were designed to prevent this, says Stephen Checkoway, a computer science professor at Johns Hopkins and a co-author of the study. “Apple went to some amount of effort to make sure that the LED would turn on whenever the camera was taking images,” Checkoway says. The 2008-era Apple products they studied had a “hardware interlock” between the camera and the light to ensure that the camera couldn’t turn on without alerting its owner.
MacBooks are designed to prevent software running on the MacBook’s central processing unit (CPU) from activating its iSight camera without turning on the light. But researchers figured out how to reprogram the chip inside the camera, known as a micro-controller, to defeat this security feature. In a paper called “iSeeYou: Disabling the MacBook Webcam Indicator LED,” Brocker and Checkoway describe how to reprogram the iSight camera’s micro-controller to allow the camera and light to be activated independently. That allows the camera to be turned on while the light stays off.
See also Checkoway’s iSightDefender on GitHub.
Two years ago I developed a case of Emacs Pinkie (RSI) so severe my hands went numb and I could no longer type or work. Desperate, I tried voice recognition. At first programming with it was painfully slow but, as I couldn’t type, I persevered. After several months of vocab tweaking and duct-tape coding in Python and Emacs Lisp, I had a system that enabled me to code faster and more efficiently by voice than I ever had by hand.
In a fast-paced live demo, I will create a small system using Python, plus a few other languages for good measure, and deploy it without touching the keyboard. The demo gods will make a scheduled appearance. I hope to convince you that voice recognition is no longer a crutch for the disabled or limited to plain prose. It’s now a highly effective tool that could benefit all programmers.
I used the Newton as a productivity device. I used the P800 as a productivity device. But at least for me, the iPad never turned out to be a good productivity device. It turned out to be great for browsing the web, watching movies, and playing games. Great for reading books and comics. Great for consumption. But not great for production.
The iPad will have arrived as a productivity device when news sites stop reporting about people who use iPads for productivity. So in the end, all of these links to articles about people who use their iPads to create things only seem to support the notion that this is not how most people use their iPads.
Metro’s split-screen mode isn’t perfect. It doesn’t cover every use case. But at least for me, it covered surprisingly many of them, and it made the Surface a much better option for creative work than an iPad.
The Surface’s pen is almost as good as my Cintiq’s. Tracking is fast, it’s pressure-sensitive, it works everywhere, and it feels like a real pen. It’s great, unlike every iPad pen I’ve ever tried.
Friday, February 28, 2014
Chris Anderson correctly analysed that the advent of e-commerce sites like Amazon or iTunes gave more prominence to the bottom of the catalogue than ever before, making it possible to increase the sales of historically less popular items which, in a classical retail model, had no chance of being on front display (or even in stock!), nor of having enough success to benefit from the accelerator effect of those at the top of the pile (the top 50 chart in music, for example).
While the iTunes App Store is over 5 years old and its catalogue exceeds a million items (as many as the Google Play Store for Android), it is legitimate to ask oneself whether the long tail applies to these pure e-commerce sites, next-generation offspring with only slight mutations… Does an app buried away at the bottom of the catalogue benefit from the positive effects mentioned above? Do the app stores facilitate the discovery of apps and allow app publishers and developers to establish a truly profitable business?
These arguments alone suggest that the long-tail effect probably does not hold water on the app stores. This situation is even exacerbated, since if there is no long-tail effect, the opposite becomes possible: the creation of super champions capitalizing on the nature of apps which have built-in sharing and viral features that books or films do not have!
Truth is, you shouldn’t use the flash at a performance like that anyway. Not at a sports event, not at a school play, not on Broadway, not at fireworks, not at the Olympics — because your camera’s flash is useless beyond about eight feet.
Yeah, yeah, I know. I’m telling you to turn off the flash when it’s dark out, but to turn on the flash when it’s sunny?
That’s called a fill flash. Its purpose is to supply a little additional light for the subject to compensate for the overly bright background.
Apple does not log messages or attachments, and their contents are protected by end-to-end encryption so no one but the sender and receiver can access them. Apple cannot decrypt the data.
I still think this is misleading because it ignores the fact that iCloud backups are encrypted with a key that’s in Apple’s possession. We know this because you can buy a new iPhone and restore your backup simply by entering your Apple ID and password. And we know that your password itself is not the key because Apple’s support people can restore your account access if you forget your password.
The other important point is that, since Apple’s servers are handing out the keys, Apple could easily be the “man in the middle” if it ever wanted to intercept messages. In other words, the security in iMessage is purely due to policy (trusting that Apple is not doing this) rather than the architecture or something that we can verify.
The white paper is well worth reading, though I’m not sure why everyone is treating it as a new document, rather than an update to the previous version.
Thursday, February 27, 2014
Highly efficient file backup system based on the git packfile format. Capable of doing fast incremental backups of virtual machine images.
It uses a rolling checksum algorithm (similar to rsync) to split large files into chunks. The most useful result of this is that you can back up huge virtual machine (VM) disk images, databases, and XML files incrementally, even though they’re typically all in one huge file, and not use tons of disk space for multiple versions.
It uses the packfile format from git (the open source version control system), so you can access the stored data even if you don’t like bup’s user interface.
Unlike git, it writes packfiles directly (instead of having a separate garbage collection / repacking stage) so it’s fast even with gratuitously huge amounts of data. bup’s improved index formats also allow you to track far more filenames than git (millions) and keep track of far more objects (hundreds or thousands of gigabytes).
bup is overly optimistic about mmap. Right now bup just assumes that it can mmap as large a block as it likes, and that mmap will never fail.
Because of the way the packfile system works, backups become “entangled” in weird ways and it’s not actually possible to delete one pack (corresponding approximately to one backup) without risking screwing up other backups.
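The rolling-checksum chunking mentioned above is easy to sketch. This toy version (not bup’s actual rollsum; real implementations hash the sliding window more carefully) cuts a chunk whenever the low 13 bits of a windowed sum are all ones, giving roughly 8 KB average chunks whose boundaries don’t shift when bytes are inserted far away in the file:

```c
#include <stdint.h>
#include <stdio.h>

#define WINDOW 64
#define CHUNK_MASK ((1 << 13) - 1)  /* ~8 KB average chunk size */

/* Content-defined chunking: boundaries depend only on nearby bytes,
   so an insert near the start of a huge file only changes the chunks
   it actually touches. */
static void chunk(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;
    size_t start = 0;
    for (size_t i = 0; i < len; i++) {
        sum += data[i];
        if (i >= WINDOW)
            sum -= data[i - WINDOW];        /* slide the window */
        if ((sum & CHUNK_MASK) == CHUNK_MASK) {
            printf("chunk: %zu..%zu\n", start, i);
            start = i + 1;
        }
    }
    if (start < len)
        printf("chunk: %zu..%zu\n", start, len - 1);
}
```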
The Y combinator is a higher-order function. It takes a single argument, which is a function that isn't recursive. It returns a version of the function which is recursive. We will walk through this process of generating recursive functions from non-recursive ones using Y in great detail below, but that's the basic idea.
More generally, Y gives us a way to get recursion in a programming language that supports first-class functions but that doesn't have recursion built in to it. So what Y shows us is that such a language already allows us to define recursive functions, even though the language definition itself says nothing about recursion. This is a Beautiful Thing: it shows us that functional programming alone can allow us to do things that we would never expect to be able to do (and it's not the only example of this).
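To ground this in Cocoa terms, here is a hedged sketch with Objective-C blocks (factorial is my example, not the article’s; strictly speaking this is the eta-expanded Z combinator, which call-by-value languages require):

```objc
#import <Foundation/Foundation.h>

typedef NSInteger (^IntFunc)(NSInteger);
typedef IntFunc (^Maker)(id); // takes itself, typed as id to break the cycle

int main(void)
{
    @autoreleasepool {
        // The non-recursive ingredient: one step of factorial, with
        // "the rest of the recursion" passed in as f.
        IntFunc (^almostFactorial)(IntFunc) = ^IntFunc(IntFunc f) {
            return ^NSInteger(NSInteger n) {
                return n == 0 ? 1 : n * f(n - 1);
            };
        };

        // Y builds a recursive function from the step function by
        // self-application; note there is no recursion anywhere in here.
        IntFunc (^Y)(IntFunc (^)(IntFunc)) = ^IntFunc(IntFunc (^step)(IntFunc)) {
            Maker maker = ^IntFunc(id selfRef) {
                return step(^NSInteger(NSInteger n) {
                    return ((Maker)selfRef)(selfRef)(n);
                });
            };
            return maker(maker);
        };

        NSLog(@"%ld", (long)Y(almostFactorial)(5)); // 120
    }
    return 0;
}
```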
Wednesday, February 26, 2014
It’s been 4 years and throughout all this time we've continued to sell RapidWeaver in and out of the Mac App Store. I expected direct sales to trail off as the years went on. I kept thinking it was about to happen… but it never did. In fact, most days the direct version of RapidWeaver continues to outsell the Mac App Store version.
This is what I’m seeing as well. Given that the “storeagent: Unsigned app” Mavericks bug that can prevent Mac App Store apps from launching is still present in 10.9.2, I’m glad that Apple’s store is not my exclusive sales channel.
Tuesday, February 25, 2014
We’re adding arbitration clauses to our Terms of Service and Dropbox for Business online agreement. Arbitration is a faster and more efficient way to resolve legal disputes, and it provides a good alternative to things like state or federal courts, where the process could take months or even years. If you prefer to opt out of arbitration in the Terms of Service, there’s no need to fax us or trek to the post office — just fill out this quick online form.
No matter what they do (delete your data, privacy breach, overcharging, whatever), you don’t get to sue. Instead, they get to choose the arbitrator according to whatever criteria they want, and thus any dispute is decided by someone they’re paying.
The agreement we make with Dropbox is too important to be enforced only by an arbitrator of their choosing. You have 30 days from the date of notification to opt out of the arbitration clause.
Another question I asked myself was: Is Software Update actually contacting Apple servers or am I being served a compromised update with even more security holes by the NSA?
Does it matter where the update comes from if it’s signed by Apple?
Update (2014-02-26): Nat!:
To get at the meat, use xar -x -f, which will eventually get you to a file called Payload. That is a bzip2-compressed tar archive. Now I find this quite hilarious. After all the hoops Apple went through, with xar, cpio, pax and what have you, they finally use tar to install, as they maybe should have right from the beginning.
Apple has quietly rolled out its iBeacon specification as it starts to certify devices that carry the Bluetooth LE standard.
Under their MFi program, manufacturers can now request that Apple permit them to attach the iBeacon name to their devices so long as they meet certain criteria.
The specifications are available after signing an NDA. Applying to the program in order to register to carry the iBeacon name, we’re told, is free.
We’re getting closer to the first official release of the Wolfram Language—so I am starting to demo it more publicly.
Here’s a short video demo I just made. It’s amazing to me how much of this is based on things I hadn’t even thought of just a few months ago. Knowledge-based programming is going to be much bigger than I imagined…
In a sense, the Wolfram Language has been incubating inside Mathematica for more than 25 years. It’s the language of Mathematica, and CDF—and the language used to implement Wolfram|Alpha. But now—considerably extended, and unified with the knowledgebase of Wolfram|Alpha—it’s about to emerge on its own, ready to be at the center of a remarkable constellation of new developments.
There are plenty of existing general-purpose computer languages. But their vision is very different—and in a sense much more modest—than the Wolfram Language. They concentrate on managing the structure of programs, keeping the language itself small in scope, and relying on a web of external libraries for additional functionality. In the Wolfram Language my concept from the very beginning has been to create a single tightly integrated system in which as much as possible is included right in the language itself.
I also played around with Cocoa Script “shaders” for shape graphics in Acorn. This won’t ship in 4.4 (or maybe ever?), but it was fun to code up and might be something awesome some day. How it works is a little hard to explain, but I'll try. Basically, instead of a rectangle having just a stroke and a fill when it draws, it will call a snippet of Cocoa Script code in place of the normal drawing routines. That snippet of code then has access to a bunch of libraries, and can do whatever it wants in the context it is drawing into.
Working with Woz was like working with the smartest person you’ve ever known kicked up a couple notches combined with a practical joker. The best times Woz and I had were not coding, but rather playing jokes.
I was not yet out of high school and immature; yet he was always willing to deal with my mood swings and answer every technical question I asked him (and there were a lot!). He loved explaining things — I’ll never forget one evening at Denny’s when he explained how parsers and lexical analysis worked. He was never too busy to explain concepts that were new to me.
We have created a proof-of-concept “monitoring” app on non-jailbroken iOS 7.0.x devices. This “monitoring” app can record all the user touch/press events in the background, including touches on the screen, home button presses, volume button presses, and Touch ID presses, and then send all user events to any remote server, as shown in Fig. 1. Potential attackers can use such information to reconstruct every character the victim inputs.
Note that the demo exploits the latest 7.0.4 version of iOS system on a non-jailbroken iPhone 5s device successfully. We have verified that the same vulnerability also exists in iOS versions 7.0.5, 7.0.6 and 6.1.x. Based on the findings, potential attackers can either use phishing to mislead the victim to install a malicious/vulnerable app or exploit another remote vulnerability of some app, and then conduct background monitoring.
Monday, February 24, 2014
There is, however, an intrinsic danger in applying this ability without fully thinking through the implications. When enabled within your applications, you are essentially building a massively distributed botnet. Each copy of your application will be periodically awoken and sent on a mission to seek and assimilate Internet content, with only the OS safeguards holding it back. As your app grows in popularity, this can lead to some rather significant increases in activity.
My first example of this was when I added Background Fetch to Check the Weather. A weather app’s primary function is displaying up-to-the-minute, constantly changing data so in my initial iOS 7 update I experimented with adding highly frequent background updates. The result was far more dramatic than I’d expected. Here are my weather API requests (which cost 0.01¢ per request) per day once the update went live. I saw an immediate jump in traffic, roughly 16x normal. Suffice to say I immediately had to scale back on my requested update frequency.
The background fetch API is a game-changer for iOS developers. It has the potential to free us of significant server and infrastructure overheads. This is particularly relevant at a time when many developers are wondering how to stay independent. For Castro, the decision was an easy one and we strongly advocate that other developers take full advantage of this new API as well.
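For reference, opting in takes little more than this in the app delegate (the UIKit calls are real; the feed-refresh method is hypothetical), plus a “fetch” entry in the Info.plist’s UIBackgroundModes:

```objc
- (BOOL)application:(UIApplication *)application
    didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    // Ask to be woken as often as the system is willing.
    [application setMinimumBackgroundFetchInterval:
                     UIApplicationBackgroundFetchIntervalMinimum];
    return YES;
}

- (void)application:(UIApplication *)application
    performFetchWithCompletionHandler:(void (^)(UIBackgroundFetchResult))completionHandler
{
    [self.feed refreshWithCompletion:^(BOOL gotNewEpisodes) { // hypothetical API
        // Report honestly: the system uses the result to tune how often
        // it wakes the app in the future.
        completionHandler(gotNewEpisodes ? UIBackgroundFetchResultNewData
                                         : UIBackgroundFetchResultNoData);
    }];
}
```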
Service-backed apps still have a lot of advantages and exclusive capabilities over iOS 7’s Background Fetch. I think server-side crawling is still the best choice for podcast apps and feed readers, for which users want fast updates to collections of infrequently updated feeds.
Overcast has been crawling tens of thousands of podcast feeds every few minutes for the last 6 months using standard HTTP caching headers. In the last week, 62% of all requests have returned 304 (“Not Modified”). Many of the rest returned the entire “new” feed, but none of the episodes had actually changed, making the server download and process hundreds of kilobytes unnecessarily.
The entire Overcast feed-crawling infrastructure can run on a $40/month Linode VPS.
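The crawler itself is server-side, but in Cocoa terms the conditional-request dance it relies on looks something like this (feedURL and the saved validators are hypothetical state from the previous fetch):

```objc
#import <Foundation/Foundation.h>

static void FetchFeed(NSURL *feedURL, NSString *lastETag, NSString *lastModified)
{
    NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:feedURL];
    request.cachePolicy = NSURLRequestReloadIgnoringLocalCacheData; // send our own validators
    if (lastETag)
        [request setValue:lastETag forHTTPHeaderField:@"If-None-Match"];
    if (lastModified)
        [request setValue:lastModified forHTTPHeaderField:@"If-Modified-Since"];

    [NSURLConnection sendAsynchronousRequest:request
                                       queue:[NSOperationQueue mainQueue]
                           completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
        NSHTTPURLResponse *http = (NSHTTPURLResponse *)response;
        if (http.statusCode == 304)
            return; // Not Modified: nothing to download or parse
        // ...parse data, then store the new ETag/Last-Modified validators...
    }];
}
```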
Core Intuition Jobs aims to solve this problem by becoming the go-to source for both employers and job-seekers in the Cocoa development market. Other sites like Stack Overflow Careers take a stab at solving the problem, but they are too large and serve too many different needs to be uniquely valuable to a niche market such as ours.
Let’s just say I spent a lot of quality time with Google before eventually stumbling across a hint on Microsoft’s developer site. The document talks about using a default setting of 96 DPI. I’ve been spending a lot of time lately with the Mac’s text system, so I knew that TextEdit was using 72 DPI to render text.
That’s another way to think about this problem: since the browser assumes 96 DPI while the Mac’s text system uses 72 DPI, a single point of text on your Mac will be 96 ÷ 72 ≈ 1.33 times larger in your browser.
US cable giant Comcast has announced a deal with Netflix allowing Netflix’s video-streaming service a more direct route through Comcast’s network, which should improve streaming video quality for viewers. The first indications of the new deal between the companies came last week after App.net founder Bryan Berg observed more direct routes for Netflix data through Comcast’s network. The Wall Street Journal reported on Sunday night that the change was the result of a formal, paid agreement between the two companies, but Comcast does not specify how much the deal is worth.
Officially, Comcast’s deal with Netflix is about interconnection, not traffic discrimination. But it’s hard to see a practical difference between this deal and the kind of tiered access that network neutrality advocates have long feared.
Dan Rayburn has a contrary take:
Today’s news is very simple to understand. Netflix decided it made sense to pay Comcast for every port they use to connect to Comcast’s network, like many other content owners and network providers have done. This is how the Internet works, and it’s not about providing better access for one content owner over another; it simply comes down to Netflix making a business decision that it makes sense for them to deliver their content directly to Comcast, instead of through a third party. Tied into Netflix’s decision is the fact that Comcast guarantees a certain level of quality to Netflix, via their SLA, which could be much better than Netflix was getting from a transit provider. While I don’t know the price Comcast is charging Netflix, I can guarantee you it’s at the fair market price for transit in the market today and Comcast is not overcharging Netflix like some have implied. Many are quick to want to argue that Netflix should not have to pay Comcast anything, but they are missing the point that Netflix is already paying someone who connects with Comcast. It’s not a new cost to them.
As does Marc Andreessen:
The venture capitalist argued that too much of the discussion about net neutrality assumes that the internet is a static thing, rather than something that is likely to increase exponentially in terms of its demand for bandwidth, and that a strict or dogmatic adherence to net neutrality would likely “kill investment in infrastructure [and] limit the future of what broadband can deliver.”
Update (2014-02-27): Ben Thompson:
What Netflix is most concerned about from a non-discrimination standpoint are broadband caps, and, more broadly, usage-based broadband pricing. It’s not that their position differs on a point-by-point basis from most net neutrality advocates; rather, the priorities are different.
That leaves unlimited access on the chopping block. While I love the idea of unlimited data, I also am aware that nothing comes for free; in the case of unlimited data, the cost we are paying is underinvestment and/or discriminatory treatment of data. Therefore I believe the best approach to broadband is usage-based payment by both upstream and downstream, with no payments in the middle.
Sunday, February 23, 2014
The SSLVerifySignedServerKeyExchange function in libsecurity_ssl/lib/sslKeyExchange.c in the Secure Transport feature in the Data Security component in Apple iOS 6.x before 6.1.6 and 7.x before 7.0.6, Apple TV 6.x before 6.0.2, and Apple OS X 10.9.x before 10.9.2 does not check the signature in a TLS Server Key Exchange message, which allows man-in-the-middle attackers to spoof SSL servers by (1) using an arbitrary private key for the signing step or (2) omitting the signing step.
This signature verification is checking the signature in a ServerKeyExchange message. This is used in DHE and ECDHE ciphersuites to communicate the ephemeral key for the connection. The server is saying “here's the ephemeral key and here's a signature, from my certificate, so you know that it's from me”. Now, if the link between the ephemeral key and the certificate chain is broken, then everything falls apart. It's possible to send a correct certificate chain to the client, but sign the handshake with the wrong private key, or not sign it at all! There's no proof that the server possesses the private key matching the public key in its certificate.
If I compile with -Wall (enable all warnings), neither GCC 4.8.2 nor Clang 3.3 from Xcode makes a peep about the dead code. That’s surprising to me. A better warning could have stopped this, but perhaps the false positive rate is too high over real codebases? (Thanks to Peter Nelson for pointing out that Clang does have -Wunreachable-code to warn about this, but it’s not in -Wall.)
John Gruber on the NSA angle:
These three facts prove nothing; it’s purely circumstantial. But the shoe fits.
You can test whether your device is affected at gotofail.com or imperialviolet.org:1266. At this writing, Mac OS X 10.9, including current seeds, is still vulnerable. iOS 5 and Mac OS X 10.8 never had the bug. It’s fixed in iOS 6.1.6 and iOS 7.0.6:
Secure Transport failed to validate the authenticity of the connection. This issue was addressed by restoring missing validation steps.
The offending line of code is a single extra goto in SSLVerifySignedServerKeyExchange(). In my view, this is not an improper use of goto. The code follows a standard C error-handling style. I’m also unpersuaded by the argument that the bug should be blamed on brace format preferences.
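Here is the relevant excerpt, abridged from Apple’s published sslKeyExchange.c; the duplicated goto is unconditional, so the remaining checks are skipped and err is returned with its success value of 0:

```c
if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
    goto fail;
if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
    goto fail;
    goto fail;  /* the extra goto: always taken, err is still 0 here */
if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
    goto fail;

err = sslRawVerify(ctx, ctx->peerPubKey, dataToSign, dataToSignLen,
                   signature, signatureLen);  /* never reached */

fail:
    SSLFreeBuffer(&signedHashes);
    SSLFreeBuffer(&hashCtx);
    return err;
```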
Any of us could have written a bug like this, especially when merging changes from different sources. But a flaw in process is what let the bug ship. If ever there were code that should be unit tested, it’s Secure Transport. Landon Fuller shows that it would have been easy to write a test to detect this regression.
Update (2014-02-24): Lloyd Chambers:
This one is unforgivable. It could have compromised interactions with tens of millions of devices, had hackers exploited it (have they?), and that fact remains true for some time to come because plenty of people won’t update their devices, and OS X doesn’t even have a fix as this is written.
You just don’t break a core security protocol like this. Who is in charge over there? Test suites should validate such stuff; it’s not exactly a new protocol. Heads ought to roll on this one and right up to high levels perhaps.
He and Chris Breen suggest that using Firefox or Chrome may be safer than Safari.
Update (2014-02-25): Macworld:
In addition, you may be able to save your traffic from prying eyes with a VPN (Virtual Private Network). Although the VPN hooks into the security framework where the SSL/TLS bug exists, the VPN protocols supported by OS X don’t directly use SSL. You’ll need to check with your network administrator to make sure all your traffic runs through the VPN, however, and that it’s not just site-specific (as some work-related VPNs can be).
The bug is fixed in Mac OS X 10.9.2.
Friday, February 21, 2014
All of these nuances in the API cause KVO to embody what is known as a pit of failure rather than a pit of success. The pit of success is a concept that Jeff Atwood talks about. APIs should be designed so that they guide you into using them successfully. They should give you hints as to how to use them, even if they don’t explain why you should use them in that particular way.
KVO does none of those things. If you don’t understand the subtleties in the parameters, or if you forget any of the details in implementation (which I did, and only noticed because I went back to my code to reference it while writing this blog post), you can cause horrible unintended behaviors, such as infinite loops, crashes, and ignored KVO notifications.
I wish Cocoa didn’t have APIs that require you to use KVO.
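For contrast, here is roughly what a minimally correct observer looks like (Person and Observer are hypothetical). Note the static context token and the exactly balanced add/remove; forget either and you get the crashes and misdirected notifications described above:

```objc
#import <Foundation/Foundation.h>

@interface Person : NSObject
@property (nonatomic, copy) NSString *name;
@end
@implementation Person
@end

static void *ObserverContext = &ObserverContext;

@interface Observer : NSObject
@property (nonatomic, strong) Person *person;
@end

@implementation Observer

- (instancetype)initWithPerson:(Person *)person {
    if ((self = [super init])) {
        _person = person;
        [_person addObserver:self
                  forKeyPath:@"name"
                     options:NSKeyValueObservingOptionNew
                     context:ObserverContext];
    }
    return self;
}

- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context {
    if (context == ObserverContext) {
        // Key-path strings alone aren't unique; the context pointer is.
        NSLog(@"name is now %@", change[NSKeyValueChangeNewKey]);
    } else {
        // Not ours: pass it up, or superclass observations break.
        [super observeValueForKeyPath:keyPath ofObject:object
                               change:change context:context];
    }
}

- (void)dealloc {
    // An unbalanced add/remove is exactly the kind of detail that,
    // when forgotten, crashes later with no hint of where.
    [_person removeObserver:self forKeyPath:@"name" context:ObserverContext];
}

@end
```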
Unlike a relationship, there is no way to pre-fetch a fetched property. Therefore, if you are going to fetch a large number of entities and then access the fetched property on each of them, those properties are going to be fetched individually. This will drastically impact performance.
Fetched Properties are only fetched once per context without a reset. This means that if you add other objects that would qualify for the fetched property after the property has been fetched, then they won’t be included if you call the fetched property again. To reset the fetched property requires a call to -refreshObject:mergeChanges:.
parentContext was introduced alongside a new concurrency model. To use parentContext, both the parent and child contexts must adopt the new concurrency model. But the problem addressed by parentContext is not concurrency. Concurrency is just a problem, albeit a significant one, that needed to be solved for parentContext to be implemented. The intent of parentContext is to improve the [atomicity] of changes. parentContext allows changes to be batched up and committed en masse. This has always been possible by using multiple NSManagedObjectContexts, but parentContext allows for improved granularity of the batching.
parentContext does provide features that simplify a handful of use cases. Unfortunately, the shortcomings of parentContext mean that it cannot be adopted piecemeal. At the top of the Core Data stack are managed objects. A good model will provide an interface that works at a high level of abstraction. Creating such an interface requires encapsulating implementation detail. The way Core Data is designed means that the natural place for this code is in managed object subclasses. Because parentContext affects the behaviour of managed objects, adopting it makes it difficult to write managed object subclasses without knowing the context hierarchy in which they’ll be used. Proceed with extreme caution!
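For readers who haven’t used it, the setup under discussion looks like this (real Core Data API; the context names and the coordinator variable are mine). A child context batches edits; saving it pushes changes up to the parent, and only the parent’s save touches the store:

```objc
// Parent context owns the persistent store and lives on the main queue.
NSManagedObjectContext *mainContext =
    [[NSManagedObjectContext alloc] initWithConcurrencyType:NSMainQueueConcurrencyType];
mainContext.persistentStoreCoordinator = coordinator; // assumed to exist

// Child context for batched, discardable edits on a private queue.
NSManagedObjectContext *editingContext =
    [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
editingContext.parentContext = mainContext;

[editingContext performBlock:^{
    // ...make many changes to managed objects here...
    NSError *error = nil;
    // Saving the child only pushes changes up to mainContext, en masse.
    if ([editingContext save:&error]) {
        [mainContext performBlock:^{
            NSError *saveError = nil;
            [mainContext save:&saveError]; // this save hits the store
        }];
    }
}];
```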