Archive for April 2014
Wednesday, April 30, 2014
Earlier this month, we introduced our new Watchtower service on the web. In its initial version, Watchtower checks whether a website is (or ever was) vulnerable to the internet’s nasty Heartbleed security bug, then tells you whether it’s safe to update your password.
Now we’ve taken the next major step and made it much easier to stay secure online, as Watchtower can now check all your Logins at once, right inside 1Password for Mac.
Great idea for protecting people from Heartbleed, but I’m finding it annoying to use because there’s no obvious way to tell it which sites’ passwords have already been changed. It also reports some sites such as PayPal as requiring a password change, though they’ve been reported elsewhere to be unaffected.
While doing our taxes this month, I was a little surprised just how much I spend for various web apps and services to help run Riverfold. While I could trim some of them, most are essential and save a lot of time. I thought it would be interesting to write up some of the most important ones.
That’s a lot. Of these, I’m using and recommend Amazon Web Services (Glacier, SES, and S3), DreamHost (VPS), and FogBugz.
Two specific changes have enabled Facebook to use Mercurial at their repository’s size: modifying the status updates for files to check for specific file changes as opposed to content changes (by hooking into the operating system’s list of file changes), and modifying checkout to give a lightweight or shallow clone without needing the full history state.
Normally, a distributed version control system will generate hashes based on the content of data, rather than timestamp. As a result, computing whether a repository has changes often involves scanning through every file calculating hashes for each to determine whether the file's content is different. By limiting the set of files to check to ones that the operating system has reported as having changed since the last scan, the speed is proportional to the number of files whose timestamp has changed, instead of all files in the current workspace. Git tries to reduce this by running lstat to determine file specific information, but still has to walk through every file in the repository in order to determine if they are changed. By asking the operating system to provide the information, the repository can be optimised to only scan those files that the OS reports as having changed.
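The timestamp-gated scan can be sketched in a few lines. This is a hedged Python sketch with hypothetical names (a real VCS would also track file size and mode, and Facebook’s Mercurial gets the changed-file list from an OS watching service rather than stat’ing each path): keep a cache of (mtime, content hash) per file, and hash only the files whose timestamp has moved.

```python
import hashlib
import os

def content_id(path):
    """Hash file contents, standing in for a VCS blob hash."""
    with open(path, "rb") as f:
        return hashlib.sha1(f.read()).hexdigest()

def changed_files(paths, index):
    """Return files whose content differs from the last scan.

    `index` maps path -> (mtime_ns, content_hash) from the previous
    scan.  Files whose mtime is unchanged are assumed clean and never
    read, so the expensive hashing scales with the number of touched
    files rather than the size of the working copy.
    """
    changed = []
    for path in paths:
        mtime = os.stat(path).st_mtime_ns
        cached = index.get(path)
        if cached is not None and cached[0] == mtime:
            continue  # timestamp untouched: skip the content hash
        digest = content_id(path)
        if cached is None or cached[1] != digest:
            changed.append(path)
        index[path] = (mtime, digest)
    return changed
```

The `os.stat` loop here corresponds to Git’s lstat walk; the further optimization described above replaces even that loop with a list of paths supplied by the operating system.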
Update (2014-05-05): Fred McCann:
Asking why Facebook needs a single source tree is the wrong question. Facebook’s process is to treat the codebase as a single thing, so they made tools that supported their process. Same with Google. When the Jenkins project had headaches with Git, I took exception to the criticism that the project should modify its process to better work with Git. That’s backwards thinking.
Update (2015-10-21): Previously: Git at Facebook Scale.
So I popped up an incognito window and loaded the same URL. Voila, the full profile. It seems LinkedIn has decided that its optimal strategy is to punish registered users.
Göran Krampe (via Karsten Kusche):
There is a funny story about these verbs. Martin McClure told me at ESUG in Brest to ask Dan Ingalls about it, hinting that they are “inspired” by a famous song.
At 4 a.m. on May 1, 1964, in the basement of College Hall, Professor John Kemeny and a student programmer simultaneously typed RUN on neighboring terminals. When they both got back correct answers to their simple programs, time-sharing and BASIC were born.
Kemeny, who later became Dartmouth’s 13th president, Professor Tom Kurtz, and a number of undergraduate students worked together to revolutionize computing with the introduction of time-sharing and the BASIC programming language. Their innovations made computing accessible to all Dartmouth students and faculty, and soon after, to people across the nation and the world.
Harry McCracken (via Jim Matthews):
The thinking that led to the creation of BASIC sprung from “a general belief on Kemeny’s part that liberal arts education was important, and should include some serious and significant mathematics–but math not disconnected from the general goals of liberal arts education,” says Dan Rockmore, the current chairman of Dartmouth’s math department and one of the producers of a new documentary on BASIC’s birth. (It’s premiering at Dartmouth’s celebration of BASIC’s 50th anniversary this Wednesday.)
By letting non-computer scientists use BASIC running on the DTSS, Kemeny, Kurtz and their collaborators had invented something that was arguably the first real form of personal computing.
I’m not sure when the documentary will be publicly available, but I highly recommend it.
Update (2014-05-05): Steve Wozniak:
I first experienced BASIC in high school that same year. We didn’t have a computer in the school but GE, I think, brought in a terminal with modem to promote their time-sharing business. A very few of us bright math students were given some pages of instruction and we wrote some very simple programs in BASIC. I saw that this was a very simple and easy-to-learn language to start with, but that terminal was only in our school for a few days.
Tuesday, April 29, 2014
Microsoft Security Advisory 2963983:
The vulnerability is a remote code execution vulnerability. The vulnerability exists in the way that Internet Explorer accesses an object in memory that has been deleted or has not been properly allocated. The vulnerability may corrupt memory in a way that could allow an attacker to execute arbitrary code in the context of the current user within Internet Explorer. An attacker could host a specially crafted website that is designed to exploit this vulnerability through Internet Explorer and then convince a user to view the website.
However, the issue may be of special concern to people still using the Windows XP operating system.
That is because Microsoft ended official support for that system earlier this month.
It means there will be no more official security updates and bug fixes for XP from the firm.
About 30% of all desktops are thought to be still running Windows XP and analysts have previously warned that those users would be vulnerable to attacks from cyber-thieves.
Along the same lines, Apple is not fixing its recent FaceTime bug for iOS 6:
If you’re not fond of iOS 7’s design, but value FaceTime, it looks like you’ll finally have to give in. This FaceTime issue began earlier in April and gained recognition thanks to a lengthy forum thread in Apple’s Support Communities. The bug appeared after another mysterious issue that prevented first generation Apple TV units from connecting to Apple’s iTunes store.
Today we’re open-sourcing Pop, the animation engine behind the application’s smooth animations and transitions. Using dynamic instead of traditional static animations, Pop drives the scrolling, bouncing, and unfolding effects that bring Paper to life.
Fast-forwarding a year, the effect that iOS 7 has had on third party development is disheartening — which sounds like a fatuous thing to say, since there have been so many well-liked redesigns over the past year. But that’s the rub: the vast majority of third-party developers’ time has been spent redesigning and reimplementing apps to dress the part for iOS 7. Many shops, such as Tapbots and Cultured Code, were forced to delay new products indefinitely while they scrapped ongoing work in favor of reboots. I suspect that many other developers had to make similar decisions.
Can we expect the same from Mac OS X 10.10?
Jared argues that iOS 7 wasn’t urgent, that evolution rather than revolution would have been fine, since customer satisfaction was extremely high with iOS 6. In retrospect I agree, but were I at Apple I would have argued that the situation is like tech debt — UI debt — and it’s best to deal with it quickly, completely, and early.
We spent an entire year with clients (and with our own apps) doing this and it was a huge pain in the ass for only visual style gains.
Sunday, April 27, 2014
Apple’s recent announcements about CarPlay have revived the intermittent discussion on blogs and podcasts of the poor quality of user interface design in automobiles. Most of the talk has been about the phone and music player controls, but the UI problems go way beyond that. Yesterday, I came across a particularly bad example.
Robin Harris argues, unconvincingly in my opinion:
Therefore, by a process of elimination, Glacier must be using optical disks. Not just any optical discs, but 3 layer Blu-ray discs.
Not single discs either, but something like the otherwise inexplicable Panasonic 12 disc cartridge shown at this year’s Creative Storage conference. That’s 1.2TB in a small, stable cartridge with RAID so a disc can fail and the data can still be read. And since the discs weigh ≈16 grams, 12 weigh 192g.
For several years I didn’t see how optical disk technology could survive without consumer support. But its use by major cloud services explains its continued existence.
sintaks (August 22, 2012):
Former S3 employee here. I was on my way out of the company just after the storage engineering work was completed, before they had finalized the API design and pricing structure, so my POV may be slightly out of date, but I will say this: they’re out to replace tape. No more custom build-outs with temperature-controlled rooms of tapes and robots and costly tech support.
I’m not sure how much detail I can go into, but I will say that they’ve contracted a major hardware manufacturer to create custom low-RPM (and therefore low-power) hard drives that can programmatically be spun down. These custom HDs are put in custom racks with custom logic boards all designed to be very low-power. The upper limit of how much I/O they can perform is surprisingly low - only so many drives can be spun up to full speed on a given rack. I’m not sure how they stripe their data, so the perceived throughput may be higher based on parallel retrievals across racks, but if they’re using the same erasure coding strategy that S3 uses, and writing those fragments sequentially, it doesn’t matter - you’ll still have to wait for the last usable fragment to be read.
The author quickly dismisses hard drives because, at the time of the Glacier launch, SMR drives were too expensive because of the Thai floods. But after a few years of running S3 and EC2, Amazon must have tons of left-over hard drives that are now simply too old for a 24/7 service.
So what do you do with those three-year-old 1 TB hard drives whose power-consumption-to-space ratio is no longer good enough? You can of course destroy them. Or you actually do build a disk drive robot, fill the disks with Glacier data, simply spin them down, and store them away. Zero cost to buy the drives, zero cost for power consumption. Then add a 3-4 hour retrieval delay to ensure that those old disks don’t have to spin up more than 6-8 times a day even in the worst case.
I worked in AWS. OP flatters AWS, arguing that they take care to make money and assuming that they are developing advanced technologies. That’s not how Amazon works. Glacier is S3, with added code that waits. That is all that was needed. A second or third iteration could be something else, but this is what Glacier is now.
I am an AWS engineer, but note that I am not affiliated with Glacier. However, James Hamilton did an absolutely amazing Principals of Amazon talk a couple of years ago going into some detail on this topic. Highly recommended viewing for Amazonians.
From what I remember from it, it’s custom HDs, custom racks, and custom logic boards with custom power supplies. The system trades performance for durability and energy efficiency.
Having a robot juggle the hard drives would not make much sense. The reason we have optical disc and tape robots is that tapes and discs need a separate device to read and write them. With hard drives there’s no such need.
With hard drives it would make more sense to do some development on the electronics side and build a system where lots of drives can be simultaneously connected to a small controller computer. All of the HD’s don’t need to be powered on or accessible all the time, the controller could turn on only few of them at a time. And of course also part of the controllers could be normally powered off, once all the harddrives connected to them are filled.
Tom Patterson (via Amit Patel):
This paper examines the techniques being developed by the U.S. National Park Service (NPS) Division of Publications for designing plan (2D) maps with a faux realistic look. The NPS produces tourist maps for 385 parks in a system spanning a large swath of the Earth’s surface from the Caribbean to Alaska to the South Pacific, and which is visited by nearly 300 million people each year. Many park visitors are inexperienced map readers and non-English speakers. In our ongoing effort to make NPS maps accessible to everyone, the design of NPS maps over time has become less abstract and increasingly realistic, particularly in the depiction of mountainous terrain and natural landscapes (Figure 1). Many of the techniques discussed herein are borrowed from or inspired by 3D mapping (Patterson, 1999). However, the scope of my paper deals exclusively with plan mapping—a format that has received scant attention in the digital era in regard to abstract vs. realistic depiction compared to the 3D world. It is also the format in which the majority of NPS maps will continue to be made.
Four major tech companies including Apple and Google have agreed to pay a total of $324 million to settle a lawsuit accusing them of conspiring to hold down salaries in Silicon Valley, sources familiar with the deal said, just weeks before a high-profile trial had been scheduled to begin.
Tech workers filed a class action lawsuit against Apple Inc, Google Inc, Intel Corp and Adobe Systems Inc in 2011, alleging they conspired to refrain from soliciting one another’s employees in order to avert a salary war. They planned to ask for $3 billion in damages at trial, according to court filings. That could have tripled to $9 billion under antitrust law.
There were more than 60,000 workers in the class. Class members claimed that the “no cold calls” agreement resulted in $3 billion of lost wages, a far cry from the settlement agreement.
The New York Times:
The companies, which are some of the world’s richest, must think that is a bargain. At a moment when Silicon Valley is losing some of its luster even on its home territory, the antitrust case depicted the upper levels of the valley’s executive suites as a cozy old boys’ network. Private deals are made, and then the executives send emails saying they wanted everything to remain secret.
Originally there were seven defendants. Settlements with Lucasfilm and Pixar (both now owned by Disney) and Intuit were reached last year. Those companies agreed to pay a total of $20 million — small change in the valley.
Sounds like a clear victory for the defendants. They avoid more embarrassing e-mails and testimony and end up paying just a few thousand dollars per employee, surely less than they saved through this scheme, which also suppressed the wages of plenty of other employees outside the class.
OmniFocus 2 for Mac adds some cool features. The inspector sidebar and Quick Open seem especially nice. But it also includes a new layout that I think is a regression. I’ve been worried since last winter that OmniFocus 2 would ship with an iPad-style two-line design. It seems cluttered with icons and context names that get in the way when I’m scanning, while showing much less actual information in the same amount of space. (OmniOutliner 4 also reduced its data density, but it included options in the Pro version to tighten up the spacing.) The current OmniFocus beta, which is quite a shock to an OmniFocus 1 user, is actually the new and improved design:
In today’s most recent build (r207056), we’ve significantly reduced the amount of vertical whitespace in the main outline for actions and projects. OmniFocus 2 can now display 65 rows in the same amount of space as it would previously use to display 48 rows—an increase of over 35%.
Ken Case says that Omni is listening, but the layout’s been this way since the first public screenshots—and presumably long before that internally. This is obviously not a high priority. The user interface is now frozen for the June release.
The new design also has the checkboxes on the right, like in iOS, but I don’t think this works as well with a wider window. The left sidebar with the project and context has a minimum width that’s about twice what it should be, wasting further space. They’ve also removed the features for customizing the fonts and styles and reduced the filtering and sorting options.
I consider OmniFocus 1.x to be one of the best Mac apps ever, so it pains me to see it seemingly ruined by iOSification. It looks nice, but I don’t think it works well. Obviously, the developers are no dummies. Maybe they are right and this is what most people want.
Nevertheless, it puts fans of the old version in a bad position. Soon 1.x will no longer be supported. Someday it will stop syncing with the current iPhone version, or break in some other way. There’s no telling when or even if 2.x will match it. There are lots of competing apps, but I haven’t found any of them compelling.
Update (2014-05-20): Ken Case announces a work in progress:
If you’d prefer to see all of your task information laid out in one line (so it’s more vertically compact) and would prefer your status circles on the left, you can start experimenting with this now by opening this URL[…]
Saturday, April 26, 2014
comiXology (recently purchased by Amazon):
We have introduced a new comiXology iPhone and iPad Comics App and are retiring the old one. iPhone and iPad users will now buy comics on comixology.com and download to the app.
In other words, they don’t want to give Apple 30%, which means that by the rules of the App Store there can be no purchasing within the app at all. The Google Play version does allow purchasing within the app, without giving Google a cut, since Google allows that. But then you presumably have to enter your credit card information in the app or store it on their Web site. I still think it would be to Apple’s long-term advantage to offer much cheaper payment processing. Apple would still make money, and the user experience would be better.
Update (2014-04-28): Gerry Conway:
By forcing readers to leave the app and go searching the Comixology website, add books to a cart, process the cart, return to the app, activate download, and wait for their purchases to appear, Comixology has replaced what was a quick, simple, intuitive impulse purchase experience with a cumbersome multi-step process that will provide multiple opportunities along the path for the casual reader to think twice and decide, ah, never mind, I don’t really want to try that new book after all. I’ll stick with what I know. Or worse, when a new casual reader opens the Comixology app for the first time and sees that THERE ARE NO COMICS THERE, and that he or she will have to exit the app and go somewhere else and sign up for a new account, maybe he or she won’t bother buying a comic in the first place.
He thinks this is about advancing the Kindle platform rather than Apple’s 30%. I don’t think this argument makes much sense economically or strategically. Amazon is in the content business.
Update (2014-04-29): Moises Chiullan:
By purchasing ComiXology, what was previously ComiXology’s “piece of the pie” is now Amazon’s. That piece grows, but the publisher’s portion also grows, and therefore the amount that can be paid out to creators is larger. I asked ComiXology’s Mosher directly: Will the reduced overhead mean that more revenue can and will go to creators, whether they’re big-time publishers or independent creators? “Yes,” he said.
Update (2014-05-12): I really enjoyed John Siracusa’s take on this issue.
Dave Cross has made his out-of-print book available as a free PDF download:
Your desktop dictionary may not include it, but ‘munging’ is a common term in the programmer’s world. Many computing tasks require taking data from one computer system, manipulating it in some way, and passing it to another. Munging can mean manipulating raw data to achieve a final form. It can mean parsing or filtering data, or the many steps required for data recognition. Or it can be something as simple as converting hours worked plus pay rates into a salary cheque.
This book shows you how to process data productively with Perl. It discusses general munging techniques and how to think about data munging problems. You will learn how to decouple the various stages of munging programs, how to design data structures, how to emulate the Unix filter model, etc. If you need to work with complex data formats it will teach you how to do that and also how to build your own tools to process these formats. The book includes detailed techniques for processing HTML and XML. And, it shows you how to build your own parsers to process data of arbitrary complexity.
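The Unix filter model the blurb mentions is easy to illustrate. This is a rough sketch (the book itself uses Perl; the stage names here are my own): each munging stage (parse, transform, format) is a separate generator, and the program reads stdin and writes stdout like any other filter.

```python
import sys

def munge(lines):
    """Decoupled munging stages in the Unix-filter style: parsing,
    transforming, and formatting are kept as separate lazy steps,
    so each can be swapped out independently."""
    parsed = (line.rstrip("\n").split(",") for line in lines)                   # parse CSV-ish input
    transformed = ([field.strip().upper() for field in row] for row in parsed)  # munge the fields
    return ("\t".join(row) + "\n" for row in transformed)                       # format as TSV

if __name__ == "__main__":
    # Behaves as a classic Unix filter: stdin in, stdout out.
    sys.stdout.writelines(munge(sys.stdin))
```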
There are two aspects to testing an asynchronous task that you need to consider. The first is that the unit test method should not return until the asynchronous task has fully completed; otherwise, the test will terminate prematurely. The second, which relates only to unit tests of Cocoa code, is to keep the main run loop turning over. Without the run loop, functionality such as networking and timers will not work.
The way I run asynchronous operations in unit tests for Ensembles is to include two methods in the test class. The first starts the run loop, and does not return until the run loop is stopped.
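A rough analogue of that first method can be sketched in Python (the original is Cocoa code; a polling loop stands in here for repeated runMode:beforeDate: calls, and all names are mine): the test does not return until the asynchronous task signals completion, and a timeout makes a broken test fail instead of hang.

```python
import threading
import time

def run_until(condition, timeout=5.0, interval=0.01):
    """Spin a loop until condition() is true, analogous to keeping the
    main run loop turning over while an async task finishes.  Raises
    TimeoutError if the task never completes."""
    deadline = time.monotonic() + timeout
    while not condition():
        if time.monotonic() > deadline:
            raise TimeoutError("async task did not complete")
        time.sleep(interval)  # in Cocoa, a runMode:beforeDate: call would go here

def test_async_task():
    done = threading.Event()

    def task():
        time.sleep(0.05)  # pretend to do network work
        done.set()

    threading.Thread(target=task).start()
    run_until(done.is_set)  # the test method does not return prematurely
    assert done.is_set()
```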
In short, Google seems to be backing away from the original Google+ strategy. The report states that Google+ will no longer be considered a product that competes with Facebook and Twitter, and that Google’s mission to force Google+ into every product will end. With this downgrade in importance comes a downgrade in resources. TechCrunch claims that 1000-1200 employees—many of whom formed the core of Google+—will be moved to other divisions. Google Hangouts will supposedly be moved to Android, and the Google+ photos team is “likely” to follow. “Basically, talent will be shifting away from the Google+ kingdom and towards Android as a platform,” the report said. The strange part is that both of these teams create cross-platform products. So if the report is true, there will be a group inside the Android team making iOS and Web apps, which doesn’t seem like the best fit.
I think the big eye opener here is that Subversion is still the big dog in town, and there’s still a good chunk of CVS users. Granted, no one survey or tool will paint a completely accurate picture of the VCS landscape. If we had a truly complete sample, I would assume “No Version Control” would be the largest slice of the pie.
The XMLObjectMapper class acts as our NSXMLParser delegate. It’s responsible for setting up the parser and responding to events. It also handles accumulating text and maintaining two stacks, one for elements and one for objects. These two stacks are the heart of this solution. When the parser starts a new element the mapper adds that element to the end of the elements stack and then asks its delegate what, if any, object that element maps to. The delegate then returns either a new object or the current object which is pushed onto the end of the objects stack. When an element ends, the object at the end of the stack is passed the text contents of the element (if any) and then both the element and the object are popped off the end.
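A simplified Python sketch of the two-stack approach, using xml.sax in place of NSXMLParser (dicts stand in for the mapped objects, and a set of element names stands in for the delegate’s mapping decision; the real class delegates that decision and can return the current object instead):

```python
import xml.sax

class XMLObjectMapper(xml.sax.ContentHandler):
    """Two stacks drive the mapping: one of open element names, one of
    objects being built.  Elements in `mapping` start a new object;
    all other elements contribute text to the current object."""

    def __init__(self, mapping):
        self.mapping = mapping  # element names that map to a new object
        self.elements = []      # stack of open element names
        self.objects = [{}]     # stack of objects; root accumulator at the bottom
        self.text = []

    def startElement(self, name, attrs):
        self.elements.append(name)
        self.text = []
        if name in self.mapping:
            self.objects.append({})  # push a fresh object for this element

    def characters(self, content):
        self.text.append(content)  # accumulate text, possibly in chunks

    def endElement(self, name):
        self.elements.pop()
        if name in self.mapping:
            finished = self.objects.pop()  # object complete; attach to parent
            self.objects[-1].setdefault(name, []).append(finished)
        else:
            text = "".join(self.text).strip()
            if text:
                self.objects[-1][name] = text  # pass text to current object
        self.text = []

def parse(xml_text, mapping):
    handler = XMLObjectMapper(mapping)
    xml.sax.parseString(xml_text.encode("utf-8"), handler)
    return handler.objects[0]
```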
The New York Times published an article yesterday entitled, “F.C.C., in a Shift, Backs Fast Lanes for Web Traffic”. Pay careful attention to the ISP-friendly political marketing language being used.
This language was carefully constructed to sound like a positive, additive move: It’s building, not destroying or restricting. They want to offer faster service, not reduce the speed or priority of all existing traffic. Who could possibly be against that?
It’s already extremely expensive for companies to deliver content to end users as quickly as possible. Many companies elect to pay for content delivery networks, like Akamai or Amazon, to bring content closer to end users by mirroring data around the world in huge server arrays. Other companies — like Apple, Google, and Facebook — simply elect to build their own data centres. Now the FCC wants to mandate an additional implicit penalty on companies that cannot afford additional costs to ISPs.
Netflix (via John Gruber):
For a content company such as Netflix, paying an ISP like Comcast for interconnection is not the same as paying for Internet transit. Transit networks like Level3, XO, Cogent and Tata perform two important services: (1) they carry traffic over long distances and (2) they provide access to every network on the global Internet. When Netflix connects directly to the Comcast network, Comcast is not providing either of the services typically provided by transit networks.
Comcast does not carry Netflix traffic over long distances. Netflix is itself shouldering the costs and performing the transport function for which it used to pay transit providers. Netflix connects to Comcast in locations all over the U.S., and has offered to connect in as many locations as Comcast desires. So Netflix is moving Netflix content long distances, not Comcast.
Comcast (via John Gruber):
Comcast has a multiplicity of other agreements just like the one Netflix approached us to negotiate, and so has every other Internet service provider for the last two decades. And those agreements have not harmed consumers or increased costs for content providers – if anything, they have decreased the costs those providers would have paid to others.
Comcast has upped the ante by accusing Netflix of something extraordinary: The cable company says the video company sabotaged its own streams prior to the transit deal the two companies reached earlier this year.
Wednesday, April 23, 2014
Project Naptha (via Matthew Guay):
Project Naptha automatically applies state-of-the-art computer vision algorithms on every image you see while browsing the web. The result is a seamless and intuitive experience, where you can highlight as well as copy and paste and even edit and translate the text formerly trapped within an image.
With iOS 7.1.1, Apple now takes multiple scans of each position you place your finger at setup, instead of a single one, and uses algorithms to predict potential errors that could arise in the future. Touch ID was supposed to gradually improve accuracy with every scan, but the problem was that if you didn’t scan well at setup, it would ruin your experience until you set up your finger again. iOS 7.1.1 not only removes that problem and increases accuracy but also greatly reduces the calculations your iPhone 5S has to make while unlocking the device, which means you should get a much faster unlock time.
This new capability enables developers to respond to reviews of Windows Phone apps directly from Dev Center. Once you create a response, users will receive the comment via email from Microsoft and can even contact you directly if you included your support email address in the app submission ‘Support email address’ metadata.
This capability is designed to help you maintain closer contact with users to inform them of new features, bugs you’ve addressed, as well as get feedback and ideas to improve your app. This capability is not to be used for marketing and does not provide you as the developer with the user’s personal information, such as an email address.
Google already lets Android developers respond to reviews of their apps in the Google Play store. On the iOS developer side, Apple, despite numerous entreaties from developers and industry observers, does not.
Update (2014-08-17): Microsoft (via Steven Frank):
The feedback from all developers who have been able to respond to reviews has been very positive so far, with developers using this feature to help users resolve questions, inform them of a new version of the app, and increase user satisfaction with their apps.
I am pleased to announce that today we completed the rollout of this feature to all eligible Windows Phone developers.
Tuesday, April 22, 2014
Apple has provided an “Offers In-App Purchases” disclosure on individual app detail pages since March of 2013, but now the App Store has been updated to include a small “In-App Purchases” notification for apps in Top Charts listings and on specific featured apps listings, such as in the “Great Free Games” category.
I think there’s a lot more to be done along these lines, but this is a good first step.
Apple has shown, by consistent inaction over the last six years, that they simply aren’t interested in putting substantial effort into improving the App Store. It’s just not a priority. They’ll do the bare minimum to keep it working, and not much more.
And I think they’re committing a massive long-term strategic error.
Ever dreamed of an opportunity to try out new versions of OS X before they’re released, but without having to pony up the $99 to become a registered developer? Well, that opportunity’s here: On Tuesday, Apple announced a new initiative, the OS X Beta Seed Program.
You have to log in with your Apple ID and accept a confidentiality agreement, which prohibits you from discussing or publicly sharing any information about pre-release software with people who are not also using the pre-release software—according to the agreement, the company will likely provide discussion boards expressly for the purpose of discussing pre-release software.
Even before this, I’ve been seeing a much higher percentage of my customers using pre-release versions of Mac OS X than in the past.
Update (2014-04-23): Kirk McElhearn:
Apple’s opening up the OS X beta program is an odd step. They already don’t fix many of the bugs that those with developer accounts report, so getting many more bug reports is unlikely to make a difference. While this is a good thing for users who are not developers, and who want access to OS X betas – journalists such as my colleagues and I will save $100 a year – I don’t see how expanding beta access will improve anything. But this is a sign of the greater openness we’ve seen since Tim Cook took over the company.
The iPad rose and rose. It won legions of admirers because of its simplicity: No windows (no pun), no file system, no cursor keys (memories of the first Mac). Liberated from these old-style personal computer ways, the iPad cannibalized PC sales and came to be perceived as the exemplar Post-PC device.
But that truly blissful simplicity exacts a high price. I recall my first-day disappointment when I went home and tried to write a Monday Note on my new iPad. It’s difficult — impossible, really — to create a real-life composite document, one that combines graphics, spreadsheet data, rich text from several sources and hyperlinks. For such tasks, the Rest of Us have to go back to our PCs and Macs.
We might have overestimated the eventual role of tablets and underestimated the role of phones — and the whole argument is further muddled by the industry-wide move toward 5-inch-ish phone displays.
Update (2014-04-26): Benedict Evans:
This chart, and dozens of others from every possible source, makes it very clear that the iPad dominates tablet web traffic in a way that it does not dominate smartphone web traffic.
The classic negative view on iPads was that they couldn’t compete with PCs because they lacked multitasking, keyboards, Office (until now) etc, etc. But that’s an incomplete response, because PC sales are suddenly weak too (and only part of that is Windows 8).
So, looking at tablets and smartphones as mobile devices in a new category that competes with PCs may be the wrong comparison - in fact, it may be better to think of tablets, laptops and desktops as one ‘big screen’ segment, all of which compete with smartphones, and for which the opportunity is just smaller than that for smartphones.
Update (2014-05-01): Dustin Curtis:
Mobile phones and tablets are already becoming less differentiated over time, and within a few years I think they will converge into one multipurpose, pocketable device. Screen and battery technology are improving fast enough that even needing two devices will soon be pointless; why carry both a small-screened and a large-screened device–both of which are otherwise essentially identical–when you can pull out your mobile phone and have a screen that, for example, expands to tablet-size when you stretch it?
The tablet is really just a temporary evolutionary sidestep that overcomes screen and battery technology issues in mobile phones. There is no such thing as a tablet in the future.
In fact, if your app has multiple threads, then you’re almost certainly using
The above code crashes reliably in the sleep. Why? What we see here is that removeObserver: does not block until all notifications have been posted. The method can return while a notification is still executing on another thread. Thus, we have a race condition.
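The shape of this race can be seen outside of Cocoa. Below is a minimal C sketch using pthreads (the "observer" is just a flag; none of these names are Cocoa API). The point it demonstrates: the removal step returns immediately, while a delivery that is already in flight on another thread keeps executing afterward.

```c
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int callback_started = 0;
static int observer_removed = 0;
static int ran_after_removal = 0;

/* The "notification callback": it starts, then (to force the ordering
   deterministically) waits until the main thread has "removed the
   observer", then continues executing. */
static void *deliver(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock);
    callback_started = 1;
    pthread_cond_signal(&cond);
    while (!observer_removed)
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);
    /* Still running *after* removal returned — this is the window in
       which the observer's memory may already have been deallocated. */
    ran_after_removal = 1;
    return 0;
}

/* Returns 1 when the in-flight callback outlived the removal. */
int run_race_demo(void) {
    pthread_t t;
    pthread_create(&t, 0, deliver, 0);

    pthread_mutex_lock(&lock);
    while (!callback_started)
        pthread_cond_wait(&cond, &lock);
    observer_removed = 1;   /* analogous to -removeObserver: returning */
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);

    pthread_join(t, 0);
    return ran_after_removal;
}
```

The condition variables only exist to make the interleaving reproducible; in real code the same ordering happens nondeterministically, which is what makes the bug intermittent.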
Sunday, April 20, 2014 [Tweets] [Favorites]
The document is on the right track to a solution. The key is to be able to detect the overflow situation without triggering it. Or in this specific case, detect that n * m would overflow, without actually calculating the value of bytes. But putting the detection after the calculation of bytes defeats that purpose because by then we’ve triggered undefined behavior. We need to use the detection to avoid the calculation that would result in undefined behavior.
Some optimizing compilers exploit undefined behavior by using it to prune the state tree. Since signed integer overflow is undefined the compiler is allowed to assume that it cannot happen. Therefore, at the time of the ‘if’ statement the compiler is allowed to assume that n*m does not overflow, and the compiler may therefore determine that the overflow checks will always pass, and can therefore be removed.
At some point Apple quietly fixed their document. They didn’t change the date (the footer still says February 11 but the properties now say March 10) or acknowledge the error (the revision history is silent), but they fixed the code.
They now avoid doing the multiplication until they have done the check, but their code is still wrong.
SIZE_MAX is the maximum value of a size_t typed variable. We’ve already established that ‘n’ and ‘m’ are not size_t types – they appear to be signed integers. The maximum value for a signed integer is INT_MAX. Apple is using the wrong constant!
Friday, April 18, 2014 [Tweets] [Favorites]
Carlos Hernández (via John Gruber):
One of the biggest advantages of SLR cameras over camera phones is the ability to achieve shallow depth of field and bokeh effects. Shallow depth of field makes the object of interest “pop” by bringing the foreground into focus and de-emphasizing the background. Achieving this optical effect has traditionally required a big lens and aperture, and therefore hasn’t been possible using the camera on your mobile phone or tablet.
That all changes with Lens Blur, a new mode in the Google Camera app. It lets you take a photo with a shallow depth of field using just your Android phone or tablet. Unlike a regular photo, Lens Blur lets you change the point or level of focus after the photo is taken.
Here’s an example in Apple’s calendar app. It uses a red tint color for buttons, but it also highlights the current day with a round circle using the tint color. It looks tappable, but it’s not.
And here’s an even worse example, from the App Store app. “Categories” in this screenshot is a button, but “Paid” directly underneath it — same blue, same font and style — is just highlighted to show that you are viewing paid apps. It’s actually “Top Grossing” that is the button.
So, we’ve got inline tag data that is simple to display, but is virtually impossible to query. Regular indexing doesn’t really work well at finding matches in the middle of character data. Enter (trumpets) SQL Server Full-Text Search. This is inbuilt to SQL Server (which we were already using), and allows all kinds of complex matching to be done using CONTAINS, FREETEXT, CONTAINSTABLE and FREETEXTTABLE. But there were some problems: stop words and non-word characters (think “c#”, “c++”, etc). For the tags that weren’t stop words and didn’t involve symbols, it worked great. So how to convince Full Text Search to work with these others? Answer: cheat, lie and fake it. At the time, we only allowed ASCII alpha-numerics and a few reserved special characters (+, –, ., #), so it was possible to hijack some non-English characters to replace these 4, and the problem of stop words could be solved by wrapping each tag in a pair of other characters. It looked like gibberish, but we were asking Full Text Search for exact matches only, so frankly it didn’t matter. A set of tags like “.net c#” thus became “éûnetà écñà”.
We finally had a reason to remove this legacy from the past. […] After some thought and consideration, we settled on a pipe (bar) delimited natural representation, with leading/trailing pipes, so “.net c#” becomes simply “|.net|c#|”.
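The pipe-delimited scheme is easy to reproduce. A small Python sketch (not Stack Overflow's actual code) shows why the leading and trailing pipes matter: they turn an exact-tag lookup into a plain substring test, so "c" never matches "c#" or "objective-c".

```python
def encode_tags(tags):
    """Join a list of tags into the pipe-delimited form,
    e.g. ['.net', 'c#'] -> '|.net|c#|'."""
    return "|" + "|".join(tags) + "|"

def has_tag(encoded, tag):
    """Exact-match test, analogous to SQL's LIKE '%|c#|%':
    wrapping the tag in pipes prevents partial-tag matches."""
    return "|" + tag + "|" in encoded

row = encode_tags([".net", "c#"])
```

With this representation the old trick of substituting non-English characters for "#", "+", etc. becomes unnecessary, since the delimiters alone guarantee exact matches.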
Wednesday, April 16, 2014 [Tweets] [Favorites]
I’ve been using an iPad mini with Retina display for almost six months now. The Mini itself is old news, but I wanted to write down some notes from using it.
I upgraded from a first-generation iPad and a third-generation Kindle. Not surprisingly, the Mini blows away the original iPad in every way. I think it’s actually the speed that I notice most. I had never really liked using the iPad because it seemed so inefficient compared with my Mac. In retrospect, I don’t think I appreciated how much this was due to the (lack of) processor speed and RAM, rather than inherent limitations of the touch interface (though those certainly still exist).
For basic reading, the Kindle’s low-resolution E Ink screen is easier on the eyes than the Retina display. But it’s a tradeoff I’m willing to make because the iPad is so much faster and more versatile. Screen aside, the Kindle iOS app is nicer than the Kindle’s own software. I used to read Instapaper on my Kindle, but that feature stopped working for me, and of course the app makes it easier to read articles out of order.
Others have reported problems with the iPad mini’s color gamut. The screen may not be as nice as the iPad Air’s, but the colors look great to me. What I notice more is that when the room is dark, and I’m looking at a dark background, I can see that the backlight is uneven. I see the image retention problem, but it goes away after a few seconds. It’s nowhere near as bad as on the MacBook Pro.
I have a smart cover, but I’m not very happy with it. It’s unstable when folded back and makes the iPad even thicker. But if I remove the cover, there often isn’t a good place to put it. The cover does work OK as a stand, and it’s good for wiping fingerprints off. I like the STM Jacket D7 Padded Case. It’s small enough for protecting the iPad within a larger bag and also works standalone.
It’s easier to type on the iPad mini than on a full-size iPad, but I still dread it. I find myself straining to remember my shortish, random Web passwords to avoid having to type my long master password for 1Password. Touch ID can’t come soon enough.
The worst part of the iPad mini is holding it. It’s decidedly heavier than the first-generation iPad mini, not to mention a Kindle. And it just doesn’t feel as nice in the hand. I still find it unnatural to hold it with my fingers over the edge of the screen, and sometimes this triggers unintentional touches. I’m not convinced that having the full iPad experience is worth this size and weight. I would rather have something in the 6–7″ range that’s optimized for reading. But I suspect that we’ll instead see a 5–5.5″ tweener iPhone.
People have been bugging me to write about Integrated Storage for some time, and with Bill Gates having just disclosed that failure to ship WinFS was his biggest product regret, now seemed like a good time. In Part 1 I’ll give a little introduction and talk about scenarios and why you’d want an Integrated (also referred to as unified) Store.
You can solve many of the problems I described for photos by putting an external metadata layer on top of the file system and using an application or library to interact with the photos instead of interacting directly with the file system. And that is exactly how it is done without integrated storage. This causes problems of its own as applications typically won’t understand the layer and operate just on the filesystem underneath it. That can make functionality that the layer purports to provide unreliable (e.g., when the application changes something about the photo which is not accurately propagated back into the external metadata store). And with photos now stored in a data type-specific layer it is ever more difficult to implement scenarios or applications in which photos are but one data type.
So from the earliest discussions I recall Integrated Storage was always a new, Win32-compatible, file system. Accessing new functionality would be done by a new API, but you always had to be able to expose traditional file artifacts in a way that a legacy Win32 app could manipulate them. Double-click on a photo in an Integrated Storage-based Windows Explorer and it had to be able to launch a copy of Photoshop that didn’t know about Integrated Storage. And since that version of Photoshop didn’t know about Integrated Storage it also couldn’t update metadata in the store, it could just make changes to the properties inside the JPEG file. So when it closed the file Integrated Storage had to look inside the file and promote any JPEG properties that had been changed into the external metadata it maintained about the object.
Much of the complexity of Microsoft’s attempts at delivering Integrated Storage is owed to all this legacy support. Property promotion and demotion (e.g., if you changed something in the external metadata it might have to be pushed down into the legacy file format) was one nightmare that wasn’t a conceptual requirement of Integrated Storage but was a practical one. Dealing with Win32 file access details was another.
At Microsoft you can see numerous ways that the File System team tried to accommodate greater richness in the file system without perverting the core file system concepts. For example, the need for making metadata dynamic or adding some of the things that the Semi-Structured Storage world needs was met by adding a secondary stream capability to files.
The notion of a Property Bag seems easy enough and painless enough to understand, but it clashes with the world of Structured Storage. How does arbitrary definition of metadata clash with a world in which schema evolution is (mostly) tightly controlled? Do you add a column to a table every time someone specifies a new property? If two people create properties with the same name are they the same property? If a table with thousands of columns, all of which are Null 99.99% of the time, seems unwieldy then what is an alternate storage structure? And can you make it perform?
What was different about WinFS is that most of these barriers, including the organization structure, were addressed. And the failure to deliver an Integrated Storage File System when the conditions were as close to ideal as they’ll ever be is why the concept will probably never be realized. Meanwhile the world of storage has moved on in interesting ways.
Because I was new to Microsoft (and thus could be objective) I was asked to intervene in a spat between the Exchange team (working on the first version of Exchange Server, nee Exchange 4.0) and the JET-Blue database engine over the performance of the Mailbox Store. What I learned along the way was that the intent was for Exchange Server to be built on OFS, but since OFS wasn’t ready Exchange was doing its own interim store for Exchange 4.0. The plan of record was for the second version of Exchange to move to OFS. However, in an email discussing the performance of the existing mailbox store the Exchange General Manager mentioned that he didn’t think Exchange would ever move to OFS. While the OFS project was still alive, it was clear to me that everyone in the company had already written it off.
Longhorn itself turned out to be too aggressive an effort and have too many dependencies. For example, if the new Windows Shell was built on WinFS and the .NET CLR, and WinFS itself was built on the CLR, and the CLR was a new technology itself that needed a lot of work to function “inside” Windows, then how could you develop all three concurrently? One story I heard was that when it became clear that Longhorn was failing and they initiated the reset it started with removing CLR. Then everyone was told to take a look at the impact of that move and what they could deliver without CLR by a specified date. WinFS had bet so heavily on CLR that it couldn’t rewrite around its removal in time and so WinFS was dropped from Longhorn as well.
The WinFS project continued with the thought that it would initially ship asynchronously from a Windows release before being incorporated into a future one. But now it had two problems. First, it was back to the problem of having no Microsoft internal client that was committed to use it. And second, they eventually concluded that there was no chance in the foreseeable future of shipping WinFS in a release of Windows. With the move of Steven Sinofsky, who had been a critic of WinFS, to run Windows that conclusion was confirmed. WinFS was dead.
Edge Cases episode 88:
Andrew Pontious talks with Wolf Rentzsch about the simplest of things, the tuple: what it is, how it is used in other languages (specifically Python), and how, in an alternate universe, it could bring some sanity to Cocoa error handling.
Tuples in Python are great, particularly because there’s syntax for unpacking (a.k.a. destructuring). It even works with nested structures.
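For readers who haven’t used it, Python’s unpacking syntax looks like this; note that the pattern on the left can mirror nested structure on the right:

```python
point = (3, 4)
x, y = point                      # unpack a flat tuple

# Destructuring works with nested structures, too:
segment = ((0, 0), (3, 4))
(x0, y0), (x1, y1) = segment

# Functions can effectively return multiple values as a tuple:
def divide(n, d):
    return n // d, n % d          # quotient-and-remainder pair

q, r = divide(17, 5)
```

This is the kind of language-level support that would make tuple-based error returns in the Cocoa style bearable.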
They hypothesize that NSError was introduced in Mac OS X 10.4 with Core Data. My recollection is that it was added with Safari 1.0 and WebKit, which could be installed on Mac OS X 10.2 and was built into 10.2.7.
Although I would certainly welcome Objective-C support for tuples and language-level support for errors, I’m not sure that it makes sense to implement the latter using the former.
One of the few virtues of using (NSError **) parameters is that you can pass in NULL if you only care about success/failure, not the reason for the failure. I’ve found that this is sometimes very useful for performance reasons. There’s a tension between putting lots of useful information in the error object and creating the error object quickly.
NULL lets you have your cake and eat it, too. If you know that you will be making many related calls that could fail, you can do this without creating any NSError objects. Then you can generate one higher level NSError to represent the whole operation, and possibly retry one of the lower level calls without NULL to get a suitable underlying error object. This level of control would not be possible if methods always returned a tuple with a full error object.
Secondly, tuples would require more lines of code because if you want to save the error you can’t use the return value in an if statement. Instead of:
if ([self fooAndReturnError:&error])
you would write something like:
BOOL ok, NSError *error = [self foo];
You can see what this is like by calling Cocoa APIs using PyObjC.
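PyObjC surfaces (NSError **) out-parameters by returning a tuple, so a method in the style of fooAndReturnError: comes back as a (success, error) pair. A plain-Python simulation of that calling convention (the function and error values here are made up for illustration) shows the extra step the tuple forces:

```python
def foo_and_return_error():
    """Simulates a PyObjC-bridged Cocoa method: returns (ok, error)
    instead of filling in an NSError ** out-parameter."""
    ok = False
    error = "hypothetical failure reason"
    return ok, error

ok, error = foo_and_return_error()
if not ok:                # two steps — unpack, then test — unlike the
    reason = error        # one-line `if ([self fooAndReturnError:&error])`
```

The unpacking line is exactly the extra line of code the paragraph above is describing.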
Monday, April 14, 2014 [Tweets] [Favorites]
We were reviewing the bookmarks user interface in the yet-to-be-released Safari. At that time, all bookmarks were contained in a single, separate modeless window. It was homely but easy to implement.
And Steve didn’t like it. Probably because he didn’t want the complication of switching between windows. We started looking at how other Mac browsers did it. He didn’t like those solutions either.
So he turned directly to me, leaned forward with that laser-like focus of his and asked, “What would you do?”
Considering that what we just demoed was what I had done — or, technically, what my engineers had done — I was screwed. Everything else in the world seemed to fade away in a blur around Steve’s face, and for a moment I couldn’t think. But I didn’t panic. Or soil myself.
After a beat I said, “I actually like what Internet Explorer for Windows does, with the bookmarks in the same window as the Web content. I just don’t like how it puts them in a sidebar. There’s got to be a better solution than a sidebar, but I don’t know what that is yet.”
I liked the original design better than the current one, where the bookmarks are in the sidebar unless you’re editing them.
Steve didn’t like the status bar and didn’t see the need for it. “Who looks at URLs when you hover your mouse over a link?” He thought it was just too geeky.
Fortunately, Scott and I convinced Steve to keep the status bar as an option, not visible by default. But that meant we had a new problem. Where should we put the progress bar to indicate how much of the page was left to load?
This is what I’d always assumed was the reason Safari put its progress bar in the address bar. I’ve never liked that, and I always run Safari with the status bar shown so I can see on mouseover where a link will take me.
Brent Jackson (last year):
For the iPhone, Apple conjured up three fairly solid navigation patterns: the tab bar, the table view (e.g. Messages & Mail), and the card stack (e.g. Weather). All three work fairly well if used as intended, but there’s always room for experimentation and evolution in UI design – and always room for designers and developers to screw it up.
Path and Facebook’s mobile left nav flyout pattern is one such experimentation that should be avoided. Mark Kawano calls it the “hamburger icon that slides open the basement.” Why call it the basement? Because it’s hidden, dark, there’s a ton of crap in it, and, frankly, it’s scary and no one wants to go down there.
The Facebook app has since switched to a tab bar.
Update (2014-07-11): Kelsey Campbell-Dollaghan (via Jeffrey Zeldman):
It turns out that the burger comes from the Xerox “Star” personal workstation, one of the earliest graphical user interfaces. Its designer, Norm Cox, was responsible for the entire system’s interface—including the icons that would effectively communicate functionality to the earliest computer users. The hamburger, which looks like a list, seemed like a good way to remind users of a menu list. Skip to about 21:05 in the following video to see an explanation[…]
Adam Langley (via Wolf Rentzsch):
But an attacker who can intercept HTTPS connections can also make online revocation checks appear to fail and so bypass the revocation checks! In cases where the attacker can only intercept a subset of a victim’s traffic (i.e. the SSL traffic but not the revocation checks), the attacker is likely to be a backbone provider capable of DNS or BGP poisoning to block the revocation checks too.
If the attacker is close to the server then online revocation checks can be effective, but an attacker close to the server can get certificates issued from many CAs and deploy different certificates as needed. In short, even revocation checks don’t stop this from being a real mess.
So soft-fail revocation checks are like a seat-belt that snaps when you crash. Even though it works 99% of the time, it’s worthless because it only works when you don’t need it.
Thursday, April 10, 2014 [Tweets] [Favorites]
Philip Greenspun finds lots of interesting passages in Brad Stone’s The Everything Store: Jeff Bezos and the Age of Amazon:
“PowerPoint is a very imprecise communication mechanism,” says Jeff Holden, Bezos’s former D. E. Shaw colleague, who by that point had joined the S Team. “It is fantastically easy to hide between bullet points. You are never forced to express your thoughts completely.” Bezos announced that employees could no longer use such corporate crutches and would have to write their presentations in prose, in what he called narratives.
Bill Miller, the chief investment officer at Legg Mason Capital Management and a major Amazon shareholder, asked Bezos at the time about the profitability prospects for AWS. Bezos predicted they would be good over the long term but said that he didn’t want to repeat “Steve Jobs’s mistake” of pricing the iPhone in a way that was so fantastically profitable that the smartphone market became a magnet for competition. The comment reflected his distinctive business philosophy. Bezos believed that high margins justified rivals’ investments in research and development and attracted more competition, while low margins attracted customers and were more defensible.
This site will generate a graph diagram for an NFA that corresponds to a regular expression, as well as a corresponding DFA (via Chris Nebel).
As long as you’re only keeping system frameworks in that group, you can delete it. Yes, delete the entire “Frameworks” group. Just ensure that you’ve enabled Link Frameworks Automatically in your Xcode project’s settings.
The Heartbleed Bug:
The Heartbleed Bug is a serious vulnerability in the popular OpenSSL cryptographic software library. This weakness allows stealing the information protected, under normal conditions, by the SSL/TLS encryption used to secure the Internet. SSL/TLS provides communication security and privacy over the Internet for applications such as web, email, instant messaging (IM) and some virtual private networks (VPNs).
The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop on communications, steal data directly from the services and users and to impersonate services and users.
Adam C. Engst:
We won’t lie — it’s extremely bad, and among the worst security bugs we’ve seen in recent history. It enables attackers to break encryption and potentially access other sensitive information from the server. Worse, it does so invisibly, so Web site administrators can’t go back and check logs to see if the site has been attacked in the past.
Security expert Bruce Schneier calls Heartbleed catastrophic, saying “On the scale of 1 to 10, this is an 11.” Half a million sites may be vulnerable to the bug, according to Netcraft. With this tool from Filippo Valsorda, you can test sites you use regularly, although negative results may not mean anything, since conscientious system administrators are installing a new version of OpenSSL that patches the bug quickly.
Then it copies payload bytes from pl, the user supplied data, to the newly allocated bp array. After this, it sends this all back to the user. So where’s the bug?
What if the requester didn’t actually supply payload bytes, like she said she did? What if pl really is only one byte? Then the read from memcpy is going to read whatever memory was near the SSLv3 record and within the same process.
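The flaw boils down to trusting the attacker-supplied length. A simplified C sketch (not OpenSSL’s actual code; the function and parameter names are illustrative) of a heartbeat responder, with the missing bounds check in place:

```c
#include <stdlib.h>
#include <string.h>

/* Build a heartbeat response: echo back `claimed_len` bytes of payload.
   `record_len` is how many payload bytes the record actually contains.
   The vulnerable code performed the memcpy without this comparison, so
   a one-byte record claiming ~64KB leaked nearby process memory. */
unsigned char *heartbeat_response(const unsigned char *pl,
                                  size_t claimed_len,
                                  size_t record_len) {
    if (claimed_len > record_len)   /* the fix: discard bogus lengths */
        return NULL;
    unsigned char *bp = malloc(claimed_len);
    if (bp == NULL)
        return NULL;
    memcpy(bp, pl, claimed_len);    /* now bounded by real data */
    return bp;
}
```

With the check, the memcpy can never read past the payload the requester actually sent.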
LastPass offers a great service:
To help our users take action and protect themselves in the wake of Heartbleed, we've added a feature to our Security Check tool. LastPass users can now run the LastPass Security Check to automatically see if any of their stored sites and services were 1) Affected by Heartbleed, and 2) Should update their passwords for those accounts at this time.
Mashable has a list of affected sites.
Update (2014-04-11): See also xkcd and The New Yorker.
Update (2014-04-14): Cyrus Farivar:
President Barack Obama has explicitly decided that when any federal agency discovers a vulnerability in online security, the agency should come forward rather than exploit it for intelligence purposes, according to The New York Times, citing unnamed “senior administration officials.”
However, while there is now a stated “bias” towards disclosure, Obama also created a massive exception to this policy if “there is a clear national security or law enforcement need.”
The U.S. National Security Agency knew for at least two years about a flaw in the way that many websites send sensitive information, now dubbed the Heartbleed bug, and regularly used it to gather critical intelligence, two people familiar with the matter said.
ODNI Public Affairs Office:
NSA was not aware of the recently identified vulnerability in OpenSSL, the so-called Heartbleed vulnerability, until it was made public in a private sector cybersecurity report. Reports that say otherwise are wrong.
Update (2014-04-23): Apple issues AirPort Base Station Firmware Update 7.7.3.
Update (2014-04-29): Accidental Tech Podcast 60 has a good segment on Heartbleed.
Update (2014-05-14): Martin Fowler:
The proof-of-concept test above shows that it is conceivable that had someone tried to unit test the code, they could have possibly caught and prevented one of the most catastrophic computer bugs in history. The existence of the proof-of-concept unit test eliminates the assertion that it would’ve been impossible.
Sadly the fix submitted for the bug also lacked a unit test to verify it and guard against regression.
This is why this email was such a surprise. Like the poor quality mailing lists mentioned above, it didn’t require a confirmed opt-in. We had to reply to say that we didn’t want the contact email address changed.
This means that a forged source address was sufficient. Even though the attacker couldn’t read email to email@example.com, they didn’t need to. All they needed was for us to not read it.
To Gandi’s credit, they responded very quickly to our “NO, DON’T CHANGE IT” email, and locked our account to stop any further shenanigans while they investigated and collected more documents from us.
Tuesday, April 8, 2014 [Tweets] [Favorites]
I went on to implement the C compiler, known as Datalight C. True to my interest in optimization, it was the first on the PC to have a data-flow optimizing compiler. Such a concept was new enough that the compiler got into trouble in the computer magazine benchmarks because the optimizer figured out that the benchmarks did nothing and so deleted all that dead code — the journalist assumed my compiler was broken or cheating and Datalight C got a bad review.
I’ll note here that working on stuff that I needed has fared quite a bit better than working on stuff that I was told others need.
For example, I was out jogging one day with a programmer friend who said, “You know, what the world is desperate for is a Java compiler that generates native code. You’ll make a mint off of that! I use Java and this is really needed.” I told him that, coincidentally, I had written one and he could start using it right away. Of course, he never did.
Whining about perceived problems with existing languages had gone on long enough; I decided to power up the machine shop. When tackling a problem like this, I am always reminded of Gimli the dwarf: “Certainty of death. Small chance of success. What are we waiting for?” Why not? At least I’ll go down sword in hand fighting the glorious fight.
Update (2014-04-09): Bright answers questions on Reddit.
While powerful, indexed ivars come with two caveats. First of all, class_createInstance can’t be used under ARC, so you’ll have to compile some parts of your class with the -fno-objc-arc flag to make it shine. Secondly, the runtime doesn’t keep the indexed ivar size information anywhere. Even though dealloc will clean everything up (as it calls free internally), you should keep the storage size somewhere, assuming you use a variable number of extra bytes.
We already know __NSDictionarySizes is some kind of array that stores different possible sizes of the hash table.
It turns out __NSDictionaryI doesn’t check if the key passed into objectForKey: is nil (and I’d argue this is a good design decision). Calling the hash method on nil returns 0, which causes the class to compare the key at index 0 with nil. This is important: it is the stored key that executes the isEqual: method, not the passed in key.
See also Exposing NSMutableArray.
The London School of Sound has a video that shows how to wrap a cable with the loops in alternating directions so that it uncoils neatly (via Jim Dalrymple).
Monday, April 7, 2014 [Tweets] [Favorites]
What’s different though is that it feels like Microsoft is more interested in working with us as a partner whereas Apple has always given off a vibe of just sort of dealing with us because they have to. Maybe that’s a little sour grapes, but as a developer it was a nice change.
Overall though, Microsoft seems to be embracing open source in new and interesting ways that the old Microsoft never seemed to care about. Previously when they open sourced a piece of technology it’s because they were no longer interested in it. Now, key pieces of functionality that the future of the company is based on are out in the open.
Build allowed me three days to immerse myself in technologies that I know almost nothing about. I came away impressed with it too. For all its past faults, the New Microsoft is doing things that are on the cutting edge of technology. Their Rx extensions library is everything I hope ReactiveCocoa could be: a fully functional extension to the core C# language built and maintained by Microsoft. Their unit and integration testing story for Windows Phone is light years beyond what either Apple or Google offer for their respective mobile platforms.
Update (2014-04-08): Brent Simmons:
But where the new CEO makes a difference is that leadership has caught up to where Microsoft employees already were. They can be honest, with themselves and others, about the company’s role in the world. They can stop wasting time trying to recapture those days of monopolistic dominance and instead concentrate on building great things for the future, for the many-platforms future.
I made it my mission to discover the specific reasons for iOS battery drainage. This article is a product of my years of research and anecdotal evidence I gathered in the hundreds of Genius Bar appointments I took during my time as a Genius and iOS technician, as well as testing on my personal devices and the devices of my friends.
Sunday, April 6, 2014 [Tweets] [Favorites]
Reuters (via Graham Lee and OSNews):
Major U.S. companies including Ford, Apple and Pfizer have formed a lobbying group aimed at pushing back at some changes to the patent system members of Congress have proposed, saying these measures would hinder protection of valuable inventions.
“Yes, and she’s produced a map showing the radius within which we can send email to be slightly more than 500 miles. There are a number of destinations within that radius that we can’t reach, either, or reach sporadically, but we can never email farther than this radius.”
A jpeg parser, running on a surveillance camera, which crashed every time the company’s CEO came into the room. 100% reproducible error.
An address database that crashed when given street addresses on the upper East side of New York City. It worked fine for any other address in the country. It interpreted “149 E 72” for example, as a floating point number.
One of the localized versions of the video game “Lord of the Rings: The Two Towers” had the name of the movie studio translated as “Carriage Return Linefeed Cinema” instead of “New Line Cinema” in the credits.
A Windows Phone 8 error that asks you to put in your Windows installation disc and restart the computer. It sounds too funny to be true, right? Apparently it’s not. According to some digging by WMPoweruser, it’s rare, but real.
Case in point: last week Nest decided to halt the sale of its new Protect smart smoke alarm because it found a flaw in the sensor and gesture-based UI. It turns out that the function that enabled users to pause an alarm by waving at it could also be unintentionally triggered by other types of movement. The fear was that if there was a fire and the alarm was going off, a nearby movement could falsely pause the alarm.
Brian McCullough interviews Netscape’s founding engineers:
As part of the Internet History Podcast project, I’ve collected oral histories from the founding engineers who made Netscape possible 20 years ago. I’ve lightly edited and transcribed the interviews chronologically below, but if you want to hear each interview in its entirety, you can do so.
There was a definite schism between us young kids and especially Tim Berners-Lee, who wanted to keep the web essentially lowest common denominator. I wouldn’t say that he was opposed to adding images and other things, but he was opposed to the methodology at which it was going about. We had a bunch of discussions around that at the conference and came up with some interesting ideas. That’s where the idea of <alt> text came from.
Marc basically sends mail, says, “Hey, I met Jim Clark. He’s a cool guy. He’s looking to start up a company. And I’m talking with him about what we should do.” At that point Jim was very interested in doing interactive TV. He was trying to convince Marc to go do interactive TV. And the more they talked about it, the more Marc basically just said, “What we really should do is go do Mosaic right. Do a Mosaic killer.”
We were originally accused of taking the [Mosaic] code and then we said, “No, we haven’t take a line of code.” And we were audited and of course proven that we didn’t. But we didn’t want to take any of the code, that’s the thing! We wanted to start from scratch. We wanted to do it right. There wasn’t any code we wanted to take. Look at how it works much much better. Obviously, it’s not the same code base!
Marc [Andreessen] basically drove a lot of that discussion. One was obviously a shared code base between the three versions, which is pretty much unheard of at that point in time. That you’d have Mac, Windows and Unix all sharing a code base. The biggest other thing was the invention of SSL and that basically, if this is going to be a commercial product, we have to come up with how to make it secure—such that people can use it for things like putting their credit card in and shopping and business and all the stuff that people use it for today. Fast was the other thing. We realized as we worked on it that there were a lot of things we had done wrong in terms of how we had written Mosaic and that we could get at least a 10x perceived speed improvement in redoing it.
At the time, Marc Andreessen was really throwing the gauntlet down at Microsoft. Foreshadowing what I think eventually has come to pass, which is that that whole native platform is considerably less a focus than the web platform. He made this well-publicized comment about turning Windows into a poorly debugged set of device drivers.
But the funny thing about us and Microsoft was, from day one, people would ask who our competition was and our answer pretty much was Microsoft at that point. And people would look at us like we’re crazy. First of, you’re 20 guys and they don’t have any clue what the web was. But we fundamentally understood that if we succeeded that we were going to be in their crosshairs. They were the 800 pound gorilla and anyone who succeeded was in their crosshairs. There wasn’t a product category in software that existed that if you succeeded, you know, Microsoft was your competition.
I’ve always been a Mac guy. Although everybody laughed at me. The whole Mac Daddy? That was not a cool thing at Netscape. Everybody was like, “Why are you working on that crappy little computer with no virtual memory?” And then Apple decides to ship Internet Explorer with the Mac because Microsoft gave them like a hundred million dollar investment. That was kind of the stuff we were fighting. We made some technical mistakes here and there but the fight was really lost in Microsoft’s kind of business assault. Cutting off our air supply.
Saturday, April 5, 2014 [Tweets] [Favorites]
The .NET Compiler Platform (“Roslyn”) provides open-source C# and Visual Basic compilers with rich code analysis APIs. You can build code analysis tools with the same APIs that Microsoft is using to implement Visual Studio!
Roslyn was the codename of the effort to rebuild the C# and Visual Basic.NET compilers in their own languages, but also to do it in a modern way. These compilers expose services that are appropriate to the stages of compilation and allow the information that the compiler builds up not only not to go to waste but to be readily accessed by other programs. Instead of sitting on the knowledge, sharing it.
Let’s be honest, searching in the iTunes Store sucks, especially on the desktop. It’s often slow, and the results are difficult to navigate. Apple has tried to simplify things by displaying one result at a time in the App Store on iOS, but that approach also means that it can take longer to find the specific app you want in a sea of knockoffs.
A new web tool called “fnd” makes it easier to quickly search and navigate not just the App Store, but the iTunes Store in general.
Unlike iTunes (and the Mac App Store), fnd.io is fast, and you can search and select text within the page. Here’s a LaunchBar search template for it:
ARCHS build setting for your framework includes both
x86_64. The first thing you'll want to do is to put your ARC code in files that aren't used by your 32-bit apps. Next, you'll want to wrap your ARC code files with
#if __LP64__ to conditionally compile the code for 64-bit. As a consequence, the files will simply be empty when compiling
i386. Finally, ARC can be enabled for individual files with the
-fobjc-arc compiler flag. In Xcode, you can set per-file flags under Build Phases in the Compile Sources build phase. The catch, for “fat” or “universal” builds that are both 32- and 64-bit, is that you cannot set your per-file flags to
-fobjc-arc. Why not? Because the per-file compiler flag applies to every architecture, but
-fobjc-arc is an invalid flag for
i386. So your build will die.
The trick to enabling ARC for specific files in your framework is to use a per-architecture build setting. Create a User-Defined build setting, something like MY_FRAMEWORK_USE_ARC. Make the build setting empty. Then create a per-architecture variant of MY_FRAMEWORK_USE_ARC for Intel 64-bit, and set that variant to -fobjc-arc. You'll now be able to use $MY_FRAMEWORK_USE_ARC as the per-file flag in your build phase, and the flag will have the appropriate definition as each architecture is built.
Also, when you’re in Xcode’s Compile Sources view, you can select multiple files and press Return to batch-edit the Compiler Flags.
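The same per-architecture trick can be expressed in an xcconfig file. A minimal sketch, assuming a user-defined setting named MY_FRAMEWORK_USE_ARC as in the quote above (the setting name is the author's example, not anything Xcode defines):

```xcconfig
// Empty by default, so 32-bit (i386) compiles get no extra flag.
MY_FRAMEWORK_USE_ARC =

// Conditional variant: only the 64-bit slice expands to the ARC flag.
MY_FRAMEWORK_USE_ARC[arch=x86_64] = -fobjc-arc
```

Each file that should use ARC then gets $(MY_FRAMEWORK_USE_ARC) as its per-file compiler flag, which is harmless when it expands to nothing.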
Wednesday, April 2, 2014 [Tweets] [Favorites]
The alternative approach, and what is used on Apple’s 64-bit platforms, is the use of so-called zero-cost exceptions. Rather than recording thread state at runtime, the compiler builds a lookup table that covers all code in an executable. This table defines how to accurately unwind a single frame from any valid instruction address, as well as providing language/runtime-specific definitions of where try/catch/finally blocks are defined, and how to handle them.
As it turns out, this is exactly the same information that debuggers, crash reporters, and evil crash recovery hacks need to perform their own stack unwinding.
Apple updated its iWork suite on all three platforms (iOS, Mac, and iCloud) yesterday, with improvements to almost every aspect of every app, from editing in Pages to creating charts in Numbers and delivering presentations in Keynote.
There’s plenty more, all of it detailed on the product pages for Pages, Numbers, and Keynote. And, if you’re counting, the new Mac versions are Pages 6.2, Numbers 3.2, Keynote 6.2; on iOS, it’s version 2.2 of all three.
I’m pleased to say that this week, Apple has delivered again. This time, the iWork apps have received a notable set of AppleScript improvements across the board.
However, the most exciting news is that Keynote, Numbers, and Pages all introduce brand new text and iWork suites of terminology, allowing for interaction with text and common elements such as charts, images, tables, lines, placed audio files, and more.
What’s especially interesting is that these suites are consistent from app to app. In other words, since all the apps have certain features in common, the same exact AppleScript terminology is used to script those features.
The patents are, of course, worded in the usual dense legalese. If you want to read them for yourself, you can find them on the US Patent and Trademark Office website in the links below. But here’s my reading of what each one is about, in plain English.
Patent 5,946,647: “System and method for performing action on a structure in computer generated data”
Patent 6,847,959: “Universal interface for retrieval of information in a computer system,” a patent that Apple claims is central to universal search
Patent 7,761,414: “Synchronous data synchronization among devices”
Patent 8,046,721: “Unlocking a device by performing gestures on an unlock image”
Patent 8,074,172: “Method, system and graphical user interface for providing word recommendations”
I have had discussions on Twitter and email with Apple fans who find it hard to believe that Apple, after revolutionizing the market, can’t prevent companies like Google and Samsung from providing some of the same functionality. But Apple, like everyone else in this field of incremental innovations, is standing on the shoulders of giants. A smartphone or tablet is a mobile computer, but Apple does not own all computing technology. Apple achieved key breakthroughs for those product categories. Those breakthroughs weren’t just marketing successes. They wouldn’t have been possible without certain technical achievements that made portable touchscreen devices as usable as they are now. But Apple didn’t create all of this singlehandedly on a green field. There were other touchscreen devices before, and they came with features of the kind many people mistakenly regard as foundational Apple inventions (for example, the Neonode N1m already had slide-to-unlock, even though in a less elaborate graphical form).
It turns out that virtually every other language [than Java] I know of uses an optimized string-search by default, which had the upshot that simply rewriting our Scala code in Ruby(!) would actually make the code dramatically faster and pass our benchmarks! “Oops”.
But inspired by this, I decided to go do a brief survey of common language/VM string-search algorithms[…]
It looks like CFStringFindWithOptionsAndLocale() uses a Java-style slow matching algorithm rather than something like Boyer-Moore.
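For contrast, here is a minimal sketch of the Boyer-Moore-Horspool approach, the family of optimized searches the survey found in most runtimes. The function name and structure are my own illustration, not CoreFoundation’s or any particular VM’s code:

```c
#include <stddef.h>
#include <string.h>

/* Boyer-Moore-Horspool substring search. Instead of advancing one
 * character at a time (the naive O(n*m) scan), it precomputes a
 * "bad character" table and skips ahead by up to the pattern length
 * on each mismatch, giving a sublinear best case. */
static long horspool_find(const char *text, const char *pattern)
{
    size_t n = strlen(text), m = strlen(pattern);
    if (m == 0)
        return 0;
    if (m > n)
        return -1;

    /* Shift table: default is the full pattern length. */
    size_t shift[256];
    for (size_t i = 0; i < 256; i++)
        shift[i] = m;
    /* For each pattern character except the last, record its
     * distance from the end of the pattern. */
    for (size_t i = 0; i + 1 < m; i++)
        shift[(unsigned char)pattern[i]] = m - 1 - i;

    size_t pos = 0;
    while (pos + m <= n) {
        if (memcmp(text + pos, pattern, m) == 0)
            return (long)pos;
        /* Slide by the shift of the character under the window's end. */
        pos += shift[(unsigned char)text[pos + m - 1]];
    }
    return -1;
}
```

The table costs O(m + 256) to build, which is why libraries typically fall back to the naive scan for very short haystacks or patterns.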
Jonathan Corbet (via Kyle Sluder):
Note that the use of an atomic swap operation on the main lock means that only CPU 2 can have a pointer to CPU 1’s mcs_spinlock structure. So there is no need for atomic operations when making changes to that structure, though some careful programming is still needed to make sure that changes are visible to CPU 1 at the right time.

Once this assignment is done, CPU 2 will spin on the locked value in its own mcs_spinlock structure rather than the value in the main lock. Its spinning will thus be entirely CPU-local, not touching the main lock at all. This process can go on indefinitely as contention for the lock increases, with each CPU placing itself in line behind those that are already there, and each CPU spinning on its own copy of the lock. The pointer in the “main” lock, thus, always indicates the tail of the queue of CPUs waiting for the lock.
An MCS lock, thus, is somewhat more complicated than a regular spinlock. But that added complexity removes much of the cache-line bouncing from the contended case; it also is entirely fair, passing the lock to each CPU in the order that the CPUs arrived.
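As a rough illustration of the scheme described above, here is a minimal userspace sketch of an MCS-style lock using C11 atomics. The names and structure are my own simplification; the kernel’s actual implementation differs in many details:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* Each waiter brings its own node and spins on its own "locked"
 * flag, so the spinning is CPU-local; the main lock only holds a
 * pointer to the tail of the queue of waiters. */
struct mcs_node {
    struct mcs_node *_Atomic next;
    atomic_bool locked;
};

struct mcs_lock {
    struct mcs_node *_Atomic tail;
};

static void mcs_lock_acquire(struct mcs_lock *lock, struct mcs_node *node)
{
    atomic_store(&node->next, NULL);
    atomic_store(&node->locked, true);
    /* Atomic swap: we become the new tail; prev was the old tail. */
    struct mcs_node *prev = atomic_exchange(&lock->tail, node);
    if (prev == NULL)
        return;                 /* queue was empty: lock acquired */
    /* Only we hold a pointer to prev's node, so linking in needs no
     * further synchronization beyond this store. */
    atomic_store(&prev->next, node);
    while (atomic_load(&node->locked))
        ;                       /* spin on our own node only */
}

static void mcs_lock_release(struct mcs_lock *lock, struct mcs_node *node)
{
    struct mcs_node *succ = atomic_load(&node->next);
    if (succ == NULL) {
        /* No visible successor: try to swing the tail back to empty. */
        struct mcs_node *expected = node;
        if (atomic_compare_exchange_strong(&lock->tail, &expected, NULL))
            return;
        /* A successor is mid-enqueue; wait for its next pointer. */
        while ((succ = atomic_load(&node->next)) == NULL)
            ;
    }
    atomic_store(&succ->locked, false);  /* hand off to the next waiter */
}
```

The handoff in release is what makes the lock fair: ownership passes strictly in arrival order, and each waiter’s spinning never touches the contended main cache line.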
Path Finder developer Steve Gehrman:
Tried to become a professional Android developer. Didn’t realize posting some apps for beta testing would result in being banned for life by Google before even having a chance to get started. I assumed a human would be involved in any banning process and they would clearly see that I was not intending to fool anyone and that my apps were harmless. Consider this before investing time and money into developing for Android. If one of their algorithms thinks you’re a bad guy, you’re banned for life.
Tuesday, April 1, 2014 [Tweets] [Favorites]
Dropbox did confirm to Ars that it checks publicly shared file links against hashes of other files that have been previously subject to successful DMCA requests. “We sometimes receive DMCA notices to remove links on copyright grounds,” the company said in a statement provided to Ars. “When we receive these, we process them according to the law and disable the identified link. We have an automated system that then prevents other users from sharing the identical material using another Dropbox link. This is done by comparing file hashes.”
Dropbox added that this comparison happens when a public link to your file is created and that “we don’t look at the files in your private folders and are committed to keeping your stuff safe.”
Suvir Mirchandani and Peter Pinko (PDF) (via Chris Taylor):
This study identifies fonts that use ink most efficiently and estimates the amount of money a single school and a school district can save on ink by choosing efficient fonts for student handouts. […] Based on the analysis, it was concluded that a switch to Garamond, the most efficient font, would reduce ink consumption by 24%, thereby decreasing environmental damage and saving the school district approximately $21,000 per year.
Anand Lal Shimpi:
With six decoders and nine ports to execution units, Cyclone is big. As I mentioned before, it’s bigger than anything else that goes in a phone. Apple didn’t build a Krait/Silvermont competitor, it built something much closer to Intel’s big cores. At the launch of the iPhone 5s, Apple referred to the A7 as being “desktop class” - it turns out that wasn’t an exaggeration.
Cyclone is a bold move by Apple, but not one that is without its challenges. I still find that there are almost no applications on iOS that really take advantage of the CPU power underneath the hood. More than anything Apple needs first party software that really demonstrates what’s possible. The challenge is that at full tilt a pair of Cyclone cores can consume quite a bit of power. So for now, Cyclone’s performance is really used to exploit race to sleep and get the device into a low power state as quickly as possible. The other problem I see is that although Cyclone is incredibly forward looking, it launched in devices with only 1GB of RAM. It’s very likely that you’ll run into memory limits before you hit CPU performance limits if you plan on keeping your device for a long time.
I don’t want to use iCloud since Apple has a poor track record with online services (in both longevity and correctness) and I’d prefer not to upload my private information.
I’ve been using ownCloud for about four months now and have been happy with it. I’m happy to be able to use cloud syncing without being beholden to a questionable cloud provider.
Erica Sadun on AnyFont (App Store) (via John Gruber):
I contact the developer, Florian Schimanke, who explained the steps the application takes. “[In iOS 7], it is possible to include fonts in configuration profiles. You can do this for example using the Apple Configurator from the Mac App Store,” he wrote.
“[AnyFont] takes the fonts that are added to the app’s storage by the user via iTunes file sharing or the ‘Open in…’ dialog and creates a configuration profile from it so it can be installed on the device. AnyFont hands over the newly created profile to Safari which then takes the user to the installation process. When finished, the user is then taken back to AnyFont.”
BusyContacts brings to contact management the same power, flexibility, and sharing capabilities that BusyCal users have enjoyed with their calendars. What's more, BusyContacts and BusyCal integrate seamlessly together to become the ultimate contact and calendar solution on the Mac.
Looks good, and it won a Best of Show award from Macworld, but it won’t even be in beta until this summer.
The UD590 connects to your PC via HDMI or DisplayPort, and its specs are impressive: a 1-millisecond gray-to-gray response time, 370 cd/m2 brightness, and support for one billion colors. It uses LED backlights with a TN panel that offers 170- and 160-degree viewing angles. These angles are good, though not as wide as monitors using IPS or PLS panels.
If you wanted to pick a single date to mark the beginning of the modern era of the web, you could do a lot worse than choosing Thursday, April 1, 2004, the day Gmail launched.
Within Google, Gmail was also regarded as a huge, improbable deal. It was in the works for nearly three years before it reached consumers; during that time, skeptical Googlers ripped into the concept on multiple grounds, from the technical to the philosophical. It’s not hard to envision an alternate universe in which the effort fell apart along the way, or at least resulted in something a whole lot less interesting.
I wanted to get a better understanding of the “stupid content tracker” and see how it was built so I spent a few weeks in my spare time reading the source code. I found it tiny, tidy, well-documented and overall pleasant to read.
As usual I have compiled my notes into an article, maybe it will encourage some of us to read more source code and become better engineers.
Drew McCormack reports that he received the following error message in Xcode:
Deprecated API Usage. Apple no longer accepts submissions of apps that use QuickTime APIs.
Apple has long had a guideline stating that apps that use deprecated technologies will be rejected. QTKit classes such as QTMovieView were only deprecated in October 2013 with the release of Mac OS X 10.9. This is probably because the replacement, AVFoundation, did not initially support much of what was possible with QuickTime. For example, its QTMovieView equivalent, AVPlayerView, was only added in 10.9. Prior to that, you had to build your own player using AVPlayerLayer.

I always took the guideline to mean deprecated as of the OS version the app is targeting. If, in fact, it means deprecated as of the current OS version, developers will need to choose between dropping support for older OS versions or writing extra code, i.e. implementing MyPlayerView in terms of AVPlayerLayer. The latter is not a very attractive use of time. If you’re going to rewrite your code, you probably want to use the latest and greatest, not reinvent the AVPlayerView wheel and maintain it into the future.
This puts developers in an artificially bad situation. Both the old and new OS versions already have the code to do what’s needed. It’s just put off-limits.
The other effect of this is that, since AVPlayerLayer was added in Mac OS X 10.7, there isn’t a good replacement for QTKit if your app targets 10.6. About 10% of my customers and 8% of Omni’s are currently using 10.6, so you might still want to provide updates for those customers—or at least fix bugs.