After an investigation into the problem by Facebook’s data team, they discovered that the new News Feed was performing too well. It was performing so well from a design standpoint that users no longer felt the need to browse areas outside of the News Feed as often, so they were spending less time on the site. Unfortunately, this change in user behavior led to fewer advertisement impressions, which led, ultimately, to less revenue.
Archive for March 2014
I took the apps for a quick spin, and they are impressive, though I personally don’t have much need for them right now.
It took four years, but Microsoft has finally released full-featured Office apps for the iPad. As expected, the new Word, Excel, and PowerPoint apps are free to install but require an Office 365 subscription to unlock the full set of features.
Make no mistake about it: These three apps are feature-rich, powerful tools for creating and editing Office documents. They look and act like their Office 2013 counterparts on Windows. And although these iPad apps obviously can’t replicate every feature of the full desktop programs, they deliver an impressive subset of those features. Anyone who was expecting Office Lite or a rehash of the underwhelming Office for iPhone will be pleasantly surprised.
What’s fascinating about Office for the iPad is how it leapfrogs Microsoft’s Windows tablets. On Windows 8 and Windows RT devices, Office is still a desktop app with some grudging interface tweaks designed to ease the pain of using an app without a mouse. Anyone who owns a Surface RT is likely to look enviously at these iPad apps, which for now are the gold standard for Office on a modern tablet.
Office for iPad represents the distilled Office experience, poured into an iOS glass. Quite frankly, I prefer it to working in Office on the desktop, if only because Microsoft organizes the most commonly used functions so intuitively, using an icon-driven ribbon at the top of the screen.
I haven’t yet spent enough time with Office for iPad alongside the Apple iWork suite to definitively give one suite the edge over the other. My initial impression, however, is that you’ll prefer Word for iPad over Pages, with perhaps a slight edge to Excel over Numbers, as well. I’ve always been very impressed with Keynote, however, and I suspect that most iPad users will prefer to stick with it.
Working with text in Office for iPad should be intuitive to anyone who has used iOS: Tapping once on a word moves the cursor to that location; tapping twice creates the slider bars for highlighting a block of text. Pressing and releasing brings up a set of options to select or insert text. Holding down your finger brings up the zoom or spyglass icon. (Atalla said that Microsoft developed an elongated, widened zoom that highlighted a word. All I saw was the default circular view, however.)
The text selection and zooming do seem to be a bit different—and perhaps faster—than normal.
If you’ve got a complex report that you’ve been working on in Word, and you want to access it on your iPad, you can either export that file in RTF format or import it into Pages from the .doc file, but there’s a good chance that the formatting won’t match. If you use any kind of auto-numbering or fields, they won’t transfer at all, so you simply couldn’t use Pages to edit the document (though you may be able to view it).
New Microsoft CEO Satya Nadella could well have started his keynote yesterday with: “We have to let go of this notion that for Microsoft to win, Apple has to lose.” And with Office for iPad, Microsoft pitched a no-hitter. For those of you not as obsessed with baseball as I am, this is a good thing for Microsoft, Apple and us.
Ballmer and Gates think losing the platform war, no longer being the largest and the no-one-ever-got-fired-for choice, means the end of Microsoft as we know it, and they may be right. But it’s also the beginning of the only Microsoft that can stop the bleeding and thrive.
Apple’s refusal to put locally accessible file storage on iOS has opened the door for Microsoft to lock people to their cloud.
While one of the big holdups for Office for iPad was getting the software just right, another was Apple’s policy that apps that sell things — including subscriptions — use Apple’s in-app purchase mechanism and hand over 30 percent of that revenue to Apple.
I’ve been using Office for iPad since Monday on a loaner Apple iPad Air. The device itself is beautiful, thin and light, and iOS 7, while an improvement over previous versions, still lacks basic productivity features like the ability to run at least two apps side by side. So it’s important to understand that the biggest limitation of Office on this platform isn’t Office, it’s the iPad. You can only do—or at least see—one thing at a time.
Update (2014-04-01): Mark Hachman:
The first iteration of Microsoft’s Office for iPad lacks the ability to print, an unfortunate omission that Microsoft representatives intimated will be fixed in a forthcoming release.
Update (2014-04-07): Eric Wilfrid:
We made some bold moves in performance-tuning Office applications for iPad. We changed how Excel draws the contents of spreadsheets, because the old way wasn’t fast enough. We modified Word to render documents on a background thread, because the tried-and-true way didn’t allow the kind of scrolling performance iPad users expect. And there’s my favorite demo: insert a picture in any of the apps, grab the rotation handle, and enjoy the way the OfficeArt graphics engine was re-engineered to take full advantage of hardware acceleration in iOS. The monitor in the hallway outside my office has each day’s performance measurements on it. We’re still looking at performance every day, and we already have some ideas about how Office on your iPad can get even faster.
DateTools was written to streamline date and time handling in Objective-C. Classes and concepts from other languages served as an inspiration for DateTools, especially the DateTime structure and Time Period Library for .NET. Through these classes and others, DateTools removes the boilerplate required to access date components, handles more nuanced date comparisons, and serves as the foundation for entirely new concepts like Time Periods and their collections.
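DateTools itself is Objective-C, but the flavor of boilerplate it targets translates to any language. A small Python sketch of the idea (the helper names here are invented for illustration; they are not DateTools’ API):

```python
from datetime import datetime

# Invented helpers in the spirit of DateTools: hide the component-access
# and comparison boilerplate behind small, readable functions.
def years_from(earlier, later):
    """Whole years between two datetimes, ignoring the sub-year remainder."""
    years = later.year - earlier.year
    # Back off one year if the anniversary hasn't been reached yet.
    if (later.month, later.day) < (earlier.month, earlier.day):
        years -= 1
    return years

def is_earlier_than(a, b):
    return a < b

ipad = datetime(2010, 4, 3)        # original iPad release
office = datetime(2014, 3, 27)     # Office for iPad release
print(years_from(ipad, office))    # prints 3
print(is_earlier_than(ipad, office))  # prints True
```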
Andrei Alexandrescu interviews Walter Bright:
Conventional wisdom has it that preprocessing time is a negligible part of building a C++ binary. C++ is notoriously difficult to parse, which means that C++ parsers tend to be large and slow compared to their counterparts for other languages. Code generation is also quite the time sink, especially if you’re using optimizations. Finally, linking is a large serial step at the end of each build, and using the gold linker only helps so much. Therefore, it would seem that the lowly task of going through the included files and expanding macros would take a comparatively short time and be essentially impossible to improve upon.
Not so! Replacing gcc’s preprocessor with warp has led to significant improvements of our end-to-end build times (including linking). Depending on a variety of circumstances, we measured debug build speed improvements ranging from 10% all the way to 40%, all in complex projects with massive codebases and many dependencies. That’s not per-file speed, but rather global times measured for scenarios like “build after changing a header file.”
There sure are a lot of A-players at Facebook.
Within weeks of Whitman’s call to Schmidt, eBay was placed on a Google list of “Sensitive” companies, for whom Google placed fewer restrictions on its recruiters except at the executive recruitment level. It was at this time that Google began to internally formalize its illegal wage-suppression pacts—and Schmidt was clearly worried about getting caught.
In early October, 2005, Google’s Senior VP for Human Resources, Shona Brown, emailed Schmidt a draft list of companies on their “Do Not Call” and “Sensitive” lists, and the policy protocols.
During a deposition last year, the plaintiff’s attorney for the Silicon Valley wage theft class action lawsuit asked Sergey Brin, Google’s co-founder, about this incident and others.
[Heimann]: But I’m gathering from your answer that you don’t really recall this at all.
[Brin]: No, sorry.
At this point in the deposition, Heimann shows Brin the March 9, 2007 email from Eric Schmidt to Steve Jobs, assuring Jobs that the Google recruiter Jobs complained about had been summarily fired, and that it won’t happen again.
Brin too was shocked at Jobs’ response. But possibly not for the reason you’d expect.
In late 2005, Jean-Marie Hullot, one of Apple’s (and Steve Jobs’) most valued longtime programmers going way back to Jobs’ NeXT Computer startup, resigned from Apple. Hullot worked for Apple out of Paris, and when he left the company at the end of 2005, his team of four engineers resigned with him.
A few months later, Hullot and his team of engineers negotiated a deal with Google to set up a new Google engineering center in Paris. The “last step”—as Hullot called it—was to get Jobs’ blessing.
In late May 2006, Google’s Alan Eustace formally cancelled the Google project in Paris. […]
Based on your strong preference that we not hire the ex-Apple engineers, Jean-Marie and I decided not to open a Google Paris engineering center. I appreciate your input into this decision, and your continued support of the Google/Apple partnership.
Looks like these features aren’t coming to third-party clients, though, so they might as well not exist as far as I’m concerned. I won’t see them.
It took a while to climb this mountain, 14 months actually. So to “show our work”, we’re posting around 45,000 words that mark the trail we took. It’s not every text, Skype call, or even every email in our big 500+ email thread. But it’s the important stuff, and a lot of it was important to getting Threes out in the world.
The following query computes an approximation of the Mandelbrot Set and outputs the result as ASCII-art.
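The query itself isn’t reproduced here, but the escape-time iteration such a query encodes is compact in any language. A rough Python equivalent of the ASCII-art idea (grid bounds, iteration cap, and character ramp are arbitrary choices, not taken from the query):

```python
def mandelbrot_ascii(width=40, height=20, max_iter=28):
    """Render an escape-time approximation of the Mandelbrot set as text."""
    chars = " .+*#"   # slower escape -> denser character
    rows = []
    for j in range(height):
        cy = -1.0 + 2.0 * j / (height - 1)
        row = []
        for i in range(width):
            cx = -2.0 + 3.2 * i / (width - 1)
            x = y = 0.0
            it = 0
            # Iterate z -> z^2 + c until it escapes or we give up.
            while x * x + y * y < 4.0 and it < max_iter:
                x, y = x * x - y * y + cx, 2.0 * x * y + cy
                it += 1
            row.append(chars[min(it // 7, len(chars) - 1)])
        rows.append("".join(row))
    return "\n".join(rows)

print(mandelbrot_ascii())
```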
This next query solves a Sudoku puzzle.
The final answer is found by looking for a string with ind==0. If the original Sudoku problem did not have a unique solution, then the query will return all possible solutions. If the original problem was unsolvable, then no rows will be returned.
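The description above is easy to mirror outside SQL: treat the grid as an 81-character string with '.' marking blanks, and recursively substitute digits at the first blank until none remain (the ind==0 condition). A Python sketch of that search (assuming this representation; the actual query’s internals may differ):

```python
def solutions(s):
    """Yield completions of an 81-char Sudoku string; '.' marks a blank.
    Exhausting the generator yields every solution (the non-unique case);
    an unsolvable puzzle yields nothing, like the query returning no rows."""
    ind = s.find(".")
    if ind == -1:                     # no blanks left: the query's ind==0 case
        yield s
        return
    row, col = divmod(ind, 9)
    for d in "123456789":
        conflict = False
        for k, ch in enumerate(s):
            if ch != d:
                continue
            r, c = divmod(k, 9)
            # Same row, column, or 3x3 box as the blank rules out d.
            if r == row or c == col or (r // 3, c // 3) == (row // 3, col // 3):
                conflict = True
                break
        if not conflict:
            yield from solutions(s[:ind] + d + s[ind + 1:])

# A well-known puzzle with a unique solution (the Wikipedia example grid).
puzzle = ("53..7...." "6..195..." ".98....6."
          "8...6...3" "4..8.3..1" "7...2...6"
          ".6....28." "...419..5" "....8..79")
solved = next(solutions(puzzle))
print("." not in solved)   # prints True
```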
Floating-point arithmetic is considered an esoteric subject by many people. This is rather surprising because floating-point is ubiquitous in computer systems. Almost every language has a floating-point datatype; computers from PCs to supercomputers have floating-point accelerators; most compilers will be called upon to compile floating-point algorithms from time to time; and virtually every operating system must respond to floating-point exceptions such as overflow. This paper presents a tutorial on those aspects of floating-point that have a direct impact on designers of computer systems. It begins with background on floating-point representation and rounding error, continues with a discussion of the IEEE floating-point standard, and concludes with numerous examples of how computer builders can better support floating-point.
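The rounding-error background the paper opens with is visible from any language with IEEE 754 doubles; a quick Python illustration:

```python
import math

# 0.1 has no exact binary representation, so decimal-looking arithmetic
# carries rounding error and exact equality comparisons mislead.
print(0.1 + 0.2 == 0.3)              # prints False
print(0.1 + 0.2)                     # prints 0.30000000000000004
print(math.isclose(0.1 + 0.2, 0.3))  # prints True: tolerance-based compare

# The error compounds when summing many terms; compensated summation
# (math.fsum) recovers the correctly rounded exact sum.
print(sum([0.1] * 10) == 1.0)        # prints False
print(math.fsum([0.1] * 10) == 1.0)  # prints True
```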
Those pioneers deserve our admiration because IEEE 754 is difficult to build. Its interlocking design requires every micro-operation to be performed in the right order or else it may have to be repeated. Maybe some implementations conformed only because their project managers had ordered their engineers to conform as a matter of policy and then found out too late that this standard was unlike those many standards whose minimal requirements are easy to meet. Anyway, a year before it was canonized, IEEE 754-1985 had become a de facto standard, the best kind.
This is the one most people forget. Once you’ve developed your API and built the app(s) to consume it, you likely have a user base that is going to be emailing in support requests and bug reports. Those requests likely mean you need to look into the data stored on your backend.
The admin portal is the piece that most developers forget about until after they’ve shipped, but it can also be one of the most important parts of your product once you reach a level of success.
The same is true of apps without cloud components. Even if your app is bug-free—and it’s not—you need to build in easy ways to get at the information needed for support.
Hypo helps Cocoa coders write loosely-coupled classes. That is, classes that use the services of other classes but try to minimize assumptions.
It’s incredibly lightweight, although I’m not keen on using the _hypo ivar/property name suffix to annotate the required dependencies. With Objection, you would instead add a separate line of annotation for each dependency.
Because we are still in the early stages of our development, we do not yet have enough operating history to measure the lifetime of our customer relationships. Therefore, we cannot predict the average duration of a customer relationship for the 2010 Cohort or for customers acquired in other fiscal years. We also cannot predict whether revenue from the 2010 Cohort will continue to grow at the rate of growth experienced through January 31, 2014, or whether the growth rate of other cohorts will be similar to that of the 2010 Cohort. We may not achieve profitability even if our revenue exceeds costs from our customers over time.
Box is among the fastest-growing SaaS companies at this point in its life. Box’s revenue grew 110% in the last twelve months, about 2x the 53% average growth rate for a SaaS company in its ninth year.
Box’s burn rate is twice as large as the next comparable firm, and nearly 10x the average. To drive its torrid revenue growth in the last 12 months, Box burned $168M, which is more than twice the next-most-cash-lax company, ServiceNow, which burned $74M in its ninth year en route to generating $424M in revenue.
Box spends about 137% of its revenue on sales and marketing.
Box spends nearly 3.7x as much on sales and marketing as on research and development.
My IOUSBFamily radar: “The issue is not going to be addressed … We discourage developers to do anything in kernel”
If this Radar response is accurate, it appears Apple will no longer publish OS X’s IOUSBFamily source code.
Along with the kext signing approval requirements, I’d say the writing is on the wall: Apple’s not afraid to knee-cap Mac OS X, iOS-style.
I can see why Apple doesn’t like kernel extensions, but forbidding them, or locking out all but a few high-profile developers, would be bad for the future of the platform.
This journalist believes in magic. Note that he expressly talks about digital cables. While there can be tiny differences between analog cables, this is simply not possible with digital cables, whether they are USB, HDMI, or Ethernet.
Isn’t there another, more basic, way to use ‘digital’ cables, that doesn’t necessarily include error correction? I’ve read that ‘digital’ cables necessarily entail faultless transmission, but I don’t buy it. Digital cables are analog cables made to carry digital information. How you work with what comes out the other end depends on the circumstances. The OSI model doesn’t require reliable transmission on the physical layer (for obvious reasons), and some audio over ethernet protocols use this layer.
In other words, when recording engineers set up to record very subtle music – this was a choir in a chapel, and the sound is very complex – they don’t use anything other than cables which, most likely, are thick and robust enough to withstand rolling, unrolling and people walking on them. If even recording engineers don’t use fancy cables, then why should anyone think that expensive cables are necessary to play back music; let alone expensive digital cables?
The above review was for an audio interconnect; that’s the cable that you run from, say, a CD player to an amplifier. But look here, at a review for speaker cables from the same company: it’s exactly the same review! Word for word; it’s a copy and paste (though the header, Neutral, detailed and smooth, has been removed from the speaker cable review). Speaker cables and audio interconnects are two totally different kinds of cable, and it would surprise me if it were possible to say exactly the same thing about two such different kinds of cable.
Dozens of reputable and disreputable companies market HDMI cables, and many outright lie to consumers about the “advantages” of their product.
Because it’s important to understand that it is impossible for the pixel to be different. It’s either exactly what it’s supposed to be, or it fails and looks like one of the images above. In order for one HDMI cable to have “better picture quality” than another, it would imply that the final result between the source and display could somehow be different. It’s not possible. It’s either everything that was sent, or full of very visible errors (sparkles). The image cannot have more noise, or less resolution, worse color, or any other picture-quality difference.
A curious download hit Apple’s app store this week: a messaging app called FireChat.
It’s a new kind of app because it uses an iOS feature unavailable until version 7: the Multipeer Connectivity Framework. The app was developed by the crowdsourced connectivity provider Open Garden and this is their first iOS app.
But here’s the really big deal — it can enable two users to chat not only without an Internet connection, but also when they are far beyond WiFi and Bluetooth range from each other — connected with a chain of peer-to-peer users between one user and a far-away Internet connection.
Update (2014-04-28): Edge Cases 89:
Wolf Rentzsch talks to Andrew Pontious about Apple’s new promising but troubled Multipeer Connectivity framework and his new app that takes advantage of it: Rumor Monger.
I began MacFixIt in 1996. After managing the site for four years, and watching it grow to a level I had never imagined possible, I sold MacFixIt to TechTracker in 2000. I remained as editor until 2002.
In 2007, CNET purchased TechTracker, including MacFixIt. This resulted in a dramatic transformation of MacFixIt. In my opinion, it was not a good change. While the MacFixIt name was retained, the site soon lost its distinct character. It was even hard to find the “site,” if you didn’t already know the URL; it was awkwardly located under the “Reviews” section of CNET. After a while, it seemed to me that there was little point in CNET keeping the MacFixIt name alive. I guess CNET finally came to the same conclusion.
Inserting an object at index 0 uses the circular buffer magic to put the newly inserted object at the end of the buffer.
This is a shocker – __NSArrayM never reduces its size!
I’ve always had this idea of Foundation being a thin wrapper on CoreFoundation. My argument was simple – there is no need to reinvent the wheel with brand new implementations of NS* classes when the CF* counterparts are available. I was shocked to realize neither NSArray nor NSMutableArray have anything in common with CFArray.

CFArray moves the memory around to accommodate the changes in the most efficient fashion, similarly to how __NSArrayM does its job. However, the CFArray does not use a circular buffer! Instead it has a larger buffer padded with zeros from both ends which makes enumeration and fetching the correct object much easier. Adding elements at either end simply eats up the remaining padding.
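A toy model of the circular-buffer behavior described here (a simplification for illustration, not Apple’s actual implementation): insertion at index 0 steps a head offset backwards, wrapping around, so the new object lands at the physical end of the storage without shifting anything.

```python
class CircularDeque:
    """Fixed-capacity array with a moving head, so both ends are O(1)."""
    def __init__(self, capacity):
        self._buf = [None] * capacity
        self._head = 0          # index of logical element 0 within _buf
        self._count = 0

    def insert_front(self, obj):
        assert self._count < len(self._buf)
        # Step the head backwards, wrapping around the end of the buffer:
        # an "insert at index 0" that never memmoves existing elements.
        self._head = (self._head - 1) % len(self._buf)
        self._buf[self._head] = obj
        self._count += 1

    def append(self, obj):
        assert self._count < len(self._buf)
        self._buf[(self._head + self._count) % len(self._buf)] = obj
        self._count += 1

    def __getitem__(self, i):
        return self._buf[(self._head + i) % len(self._buf)]

    def to_list(self):
        return [self[i] for i in range(self._count)]

d = CircularDeque(4)
d.append("b"); d.append("c")
d.insert_front("a")       # wraps: lands at the physical end of the buffer
print(d.to_list())        # prints ['a', 'b', 'c']
print(d._buf)             # prints ['b', 'c', None, 'a']
```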
Update (2014-04-14): David Smith notes that the Core Foundation creation functions will often now give you NS objects, whereas it used to be the reverse. Removing the Core Foundation layer yielded a 10–40% speed improvement. Marcel Weiher notes that the NS collections were originally faster and Core Foundation was a regression, for political reasons.
For Steve, this contract wasn’t that important to the future of NeXT. While we would go on to pay NeXT about $5 million in royalties over the life of the contract, and were their first source of revenue, we were not central to his mission (Steve later teased me that he made more money collecting interest on his bank account than he made from me). However, he had promised the developers 50%, he had said the number within earshot of everyone, and he wanted to be able to tell everyone he got what he wanted.
I had to make the business make sense financially. I just needed to make my 15% look like his 50%.
The Computer History Museum (CHM) announced today that it has, with permission from Microsoft Corporation, made available original source code for two historic programs: MS-DOS, the 1982 "Disk Operating System" for IBM-compatible personal computers, and Word for Windows, the 1990 Windows-based version of their word processor.
MS-DOS was basically a file manager and a simple program loader. The user interface was text commands typed on a keyboard, followed by text responses displayed on the screen. There was no graphical output, and no mouse for input. Only one user application program could run at a time. File names were limited to 8 characters, plus a 3-character extension indicating the file type. There were commands like “dir” to list the files in a directory, and “del” to delete a file; you ran a program by typing the name of its executable file.
It may have been a “small program” but it had some sophisticated features, including support for style sheets, multiple windows, footnotes, mail-merge, undo, and the proportional fonts that the newly emerging laser printers would be able to use.
The first version for Microsoft Windows was released in late 1989 at a single-user price of $495. It received a glowing review in InfoWorld that didn’t flinch at the price: “If your system is powerful enough to support Microsoft Windows, at $495 it is an excellent value.”
Opening a poisoned Rich Text File (RTF) document allows the attacker to hijack the PC with the same privileges as the logged-in user.
Microsoft Word 2003, 2007, 2010, 2013, and Office for Mac 2011 are vulnerable, according to Redmond. Microsoft Office Web Apps, Automation Services on SharePoint Server 2010 and 2013, and Outlook 2007, 2010 and 2013 when using Word as the email viewer, are also affected.
In the following six months, before the iPhone went on sale in June 2007, Mr. Christie’s team made other changes. At Mr. Jobs’s urging, they eliminated a split-screen view for email with information about the sender on one side and the message on the other. “Steve thought it was foolish to do a split screen on such a small display,” Mr. Christie said.
For some reason, Chrome on iOS now adds what looks like a per-device GUID to its User-Agent string.
This would seem to be a major privacy concern. There’s more information at Stack Overflow.
However, this bug says:
The tab ID is then stripped from the user agent before the request goes over the network. Again: this tab ID is not sent over the network; only the normal user agent is sent.
The only place the modified user agent is visible from is navigator.userAgent.
Microsoft is not unique in claiming the right to read users’ emails – Apple, Yahoo and Google all reserve that right as well, the Guardian has determined.
Google’s terms require the user to “acknowledge and agree that Google may access… your account information and any Content associated with that account… in a good faith belief that such access… is reasonably necessary to… protect against imminent harm to the… property… of Google”. Apple “may, without liability to you, access… your Account information and Content… if we have a good faith belief that such access… is reasonably necessary to… protect the… property… of Apple”.
A few years ago, I’m nearly certain that Google accessed my Gmail account after I broke a major story about Google. A couple of weeks after the story broke, my source, a Google employee, approached me in person at a party, in a very inebriated state, and said that they (I’m being gender neutral here) had been asked by Google if they were the source. The source denied it, but was then shown an email that proved that they were the source.
The source had corresponded with me from a non-Google email account, so the only way Google saw it was by accessing my Gmail account.
Update (2014-03-28): Microsoft:
Over the past week, we’ve had the opportunity to reflect further on this issue, and as a result of conversations we’ve had internally and with advocacy groups and other experts, we’ve decided to take an additional step and make an important change to our privacy practices.
What makes Java 8 so compelling is its embrace of the functional programming metaphor. This embrace has two primary expressions: the use of closures (or as Java calls them, “lambdas”) and the adoption of composition as a central approach to development. Lambdas, while not quite full first-class functions, enable passing code as a parameter to a function, within limited contexts. By limited, I mean only the mechanics of it, not the opportunities to do so. As Brian Goetz of Oracle explains in one of this week’s features, once the syntax of lambdas had been finalized and its implementation completed, the Java team found numerous opportunities to use lambdas to streamline the standard libraries. They discovered that not only was the code clearer, but the performance better.
The new streams feature in Java 8 enables composability. This language trait, recently explained by Walter Bright, enables software to be implemented using a model that operates like this: data source → algorithm → data sink. This model is highly desirable on today’s platforms where such computational streams can be run in parallel and thereby make full use of multicore processors. It is also an excellent fit in processing Big Data.
Java has long supported lambdas in the form of anonymous inner classes, but the new lambda syntax is much more compact.
When using the CSS JIT, the task of matching a selector is split in two: first compiling, then testing. A JIT compiler takes the selector, does all the complicated computations when compiling, and generates a tiny binary blob corresponding to the input selector: a compiled selector. When it is time to find whether an element matches the selector, WebKit can just invoke the compiled selector.
In the most recent versions of the JIT, the compilation phase is within one order of magnitude of a single execution of SelectorChecker. Given that even small pages have dozens of selectors and hundreds of elements, it becomes easy to reclaim the time taken by the compiler.
There is ongoing work to support everything SelectorChecker can handle. Currently, some pseudo types are not supported by the JIT compiler and WebKit falls back to the old code. The missing pieces are being added little by little.
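The compile-once, match-many structure is independent of WebKit. A toy Python sketch (with a made-up two-form selector grammar: bare tag names and .class) of how a compiled selector amortizes the parsing work:

```python
def compile_selector(selector):
    """Turn a toy selector ("tag" or ".class") into a reusable predicate.
    All parsing happens here, once; the returned closure plays the role of
    the compiled binary blob and is the only thing invoked per element."""
    if selector.startswith("."):
        wanted = selector[1:]
        return lambda el: wanted in el.get("classes", ())
    return lambda el: el.get("tag") == selector

elements = [
    {"tag": "div", "classes": ["note"]},
    {"tag": "p", "classes": []},
    {"tag": "p", "classes": ["note"]},
]

matches_note = compile_selector(".note")   # compile once...
print([el["tag"] for el in elements if matches_note(el)])  # prints ['div', 'p']
```

The real JIT emits machine code rather than a closure, but the economics are the same: the cost of compilation is paid once per selector and recovered over hundreds of element tests.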
If bunny methods repeatedly need to get stuff from the mother object, consider passing the information they need as arguments. If you have too many arguments, make an argument object. If a method on bunny A needs to talk to some other bunny, pass the bunny as an argument. The point here is to unravel the tangled strands of yarn and to knit up your raveled sleeve of care.
The last refactoring round made the bunnies better, but now we’ve made the mother class worse. No worries! Take all those little methods you just made in the mother and sprout a Nanny class.
The nanny, in short, is a new class that abstracts that tangle of yarn. In trade school lingo, it’s a multiple Facade, an interface between bunnies and the mother class and also an interface amongst inhomogeneous bunny classes.
Hack is a programming language for HHVM that interoperates seamlessly with PHP. Hack reconciles the fast development cycle of PHP with the discipline provided by static typing, while adding many features commonly found in other modern programming languages.
Hack provides instantaneous type checking via a local server that watches the filesystem. It typically runs in less than 200 milliseconds, making it easy to integrate into your development workflow without introducing a noticeable delay.
Our principal addition is static typing. We have developed a system to annotate function signatures and class members with type information; our type checking algorithm (the “type checker”) infers the rest. Type checking is incremental, such that even within a single file some code can be converted to Hack while the rest remains dynamically typed. Technically speaking, Hack is a “gradually typed” language: dynamically typed code interoperates seamlessly with statically typed code.
Within Hack’s type system, we have introduced several features such as generics, nullable types, type aliasing, and constraints on type parameters. These new language features are unobtrusive, so the code you write with Hack will still look and feel like the dynamic language to which PHP programmers are accustomed.
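Python’s optional annotations give a rough feel for the gradual-typing workflow described here (an analogy only, not Hack syntax): annotated and unannotated code call each other freely, and a separate static checker verifies just the annotated parts.

```python
# Gradual typing in miniature: typed and untyped functions coexist in one
# file; a checker (mypy for Python, Hack's local server for PHP) validates
# only the annotated signatures, while everything runs the same either way.
from typing import Optional

def parse_port(text: str) -> Optional[int]:   # statically checkable
    return int(text) if text.isdigit() else None

def legacy_handler(raw):                       # untyped, "dynamic" code
    port = parse_port(raw)                     # calls into typed code freely
    return port if port is not None else 8080

print(legacy_handler("8443"))   # prints 8443
print(legacy_handler("oops"))   # prints 8080
```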
Update (2014-03-26): Marco Arment:
You’re effectively writing in a new language, albeit with a much smaller learning curve than other language switches since you already know most of the syntax and API. But because Hack isn’t PHP, some of PHP’s biggest advantages — ubiquity, maturity, stability — don’t apply.
There are also some comments on Lambda.
Update (2014-04-01): Marco Arment:
What’s needed is a simple compiler that strips Hack’s type annotations out, leaving valid PHP behind. Obviously, this would only work on the additions in Hack that are easily stripped out or compiled to PHP, but even if it’s just the type annotations, that’s still extremely beneficial and in-demand among PHP coders.
What this means is that an attacker who can predict the output of your RNG—perhaps by taking advantage of a bug, or even compromising it at a design level—can often completely decrypt your communications. The Debian project learned this firsthand, as have many others. This certainly hasn’t escaped NSA’s notice, if the allegations regarding its Dual EC random number generator are true.
Unfortunately, so far all I’ve done is call out the challenges with building trustworthy RNGs. And there’s a reason for this: the challenges are easy to identify, while the solutions themselves are hard. And unfortunately at this time, they’re quite manual.
Solving this problem, at least in software, so we can ensure that code is correct and does not contain hidden ‘easter eggs’, represents one of the more significant research challenges facing those of us who depend on secure cryptographic primitives.
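The underlying point, that a predictable RNG forfeits every secret derived from it, is easy to demonstrate. A toy Python sketch using a deliberately weak seeded generator (real keys should come from os.urandom or the secrets module):

```python
import random

# A "key" drawn from a PRNG whose seed the attacker can guess
# (for example, seeded from the clock, or from a backdoored generator).
def make_key(seed):
    rng = random.Random(seed)   # deterministic, non-cryptographic PRNG
    return bytes(rng.randrange(256) for _ in range(16))

victim_key = make_key(seed=1394064000)     # predictable seed

# The attacker simply replays the same deterministic process.
attacker_key = make_key(seed=1394064000)
print(attacker_key == victim_key)          # prints True: key fully recovered
```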
A property, minimal-ui, has been added for the viewport meta tag key that allows minimizing the top and bottom bars on the iPhone as the page loads. While on a page using minimal-ui, tapping the top bar brings the bars back. Tapping back in the content dismisses them again.
For example, use <meta name="viewport" content="width=1024, minimal-ui">.
This is a fantastic and long overdue directive for Mobile Safari. Put simply, the minimal-ui property allows you to display your responsive web page without the browser chrome taking up valuable screen real estate.
This fixes one of my biggest annoyances with iOS 7 Safari: that it ‘steals’ 50px or so of tappable space at the bottom of the screen. If you try to use that space, Safari shows the bottom bar instead of allowing you to interact with the page.
This reminds me of the era of chromeless popup windows. I do not like this move by Apple. A site should not be capable of deciding to make changes to a user’s browser UI. Especially changes that aren’t explained. I’m actually disappointed this exists.
It seems like an OK pragmatic solution to me, so long as sites don’t use it gratuitously.
Think about usability: the user will not have the back, the share and the tabs buttons available by default. If you are creating an immersive game or a webapp with its own main navigation controls, then minimal-ui is a good idea; for content- and document-based websites, it might not be nice for the user.
For fundamental contributions to the theory and practice of distributed and concurrent systems, notably the invention of concepts such as causality and logical clocks, safety and liveness, replicated state machines, and sequential consistency.
Among Lamport’s many contributions, one of the most widely implemented is known as the Paxos algorithm, which can be found at work behind the scenes in Google or Bing online searches, among much else. It allows a computer network to continue working in a coherent way even in the face of failures, by transferring leadership roles among machines and halting progress rather than allowing damage to occur to the system.
John Warnock had the idea that every document that was ever printed, or ever would be printed, could be represented in a document. This was not an unreasonable idea since Postscript was designed for this purpose and Adobe also had some code from Illustrator that would handle the fonts and graphics and code from Photoshop to display images. So, Warnock started a project (the Carousel project) on his own initiative to pursue his idea that eventually the whole Library of Congress could be represented in an archival electronic format.
Peter Hibberd had written a demo of an ‘object oriented file format’ so Richard Cohn and Alan Wootton went to work trying to adapt his work for use on the Carousel project. After many weeks of struggle it was decided that adapting his work was going to be more work than writing new code and that some of the ‘object oriented’ concepts were not applicable since it was finally becoming obvious that a key-value format was going to be part of the solution. This was the third file format.
The name ‘Acrobat’ was created by a market research team from back east.
Concurrent with the release of Adobe Acrobat & Reader 1.0, the specification was published. So while it was proprietary, it was also published and open to all to use (even the patents were made available on a free basis!) This is how open source tools such as Ghostscript and PDFlib have been able to support PDF for most of those 20 years.
This document describes the base technology and ideas behind the project named “Camelot.” This project’s goal is to solve a fundamental problem that confronts today’s companies. The problem is concerned with our ability to communicate visual material between different computer applications and systems. The specific problem is that most programs print to a wide range of printers, but there is no universal way to communicate and view this printed information electronically.
In this example the new redefined “moveto” and “lineto” definitions don’t build a path. Instead they write out the coordinates they have been given and then write out the names of their own operations. The resulting file that is written by these new definitions draws the same polygon as the original file but only uses the “moveto” and “lineto” operators. Here, the execution of the PostScript file has allowed a derivative file to be generated. In some sense this derivative file is simpler and uses fewer operators than the original PostScript file but has the same net effect. We will call this operation of processing one PostScript file into another form of PostScript file “rebinding.”
From its inception, PDF was, at least in part, a self-describing format. It specifies the filters used to encode its own data stream and, from the outset, Adobe’s Acrobat viewers were designed to interpret a PDF file through these filters. By changing the filter used to decode its own data, Acrobat was able to switch from a pure ASCII file to binary-encoded format. Acrobat Reader 1.0 could read the binary files created by the forthcoming Acrobat 2.0 products.
I have written a paper attempting to describe how Adobe managed the evolution of the PDF file format for over 15 years before turning its management over to ISO. […] This paper was derived from an internal Adobe technical note written by me and a task force of employees who studied the whole issue of versions and compatibility in 2006.
OS X Mavericks (10.9) introduces “Enhanced Dictation”, a locally hosted, non-trainable version of Nuance’s recognizer.
Enhanced Dictation’s omission of training and editing likely protects sales of the Dragon Mac products (discussed below).
Nuance’s Windows dictation products (Dragon NaturallySpeaking and Medical/Legal) are better than their Mac equivalents, though that’s not saying a lot. The UI is a scattered, slowly-evolving mess; true interaction between keyboard/mouse and voice editing is limited to individual versions of specific applications, and the medical product is expensive (upgrades are $500 on sale).
The main reason I dictate into Windows is the ecosystem surrounding the Dragon products there. There are quite a few abandoned research projects and other near-abandonware to contend with, but it’s possible with some effort to construct a productive system.
I do my serious dictation in a Windows 7 virtual machine. Having recently upgraded my dictation setup and transferred it to a new Mac, I figured it’d be a good thing to share.
Most of the time I’m not actually editing documents directly on Windows; the OS simply holds my text on the way to its destination in a Mac application.
Update (2014-04-15): Nicholas Riley:
Thanks to PowerScribe, I realized that it’s actually easier for me to work with shorter fragments of text, a sentence or a paragraph at a time, rather than importing the entire document at a time. What I’ve implemented so far is on GitHub; here’s a video showing it in use and explaining some technical details.
I was greeted with this message today when I was about to publish a few more presentations on Slideshare about Knowledge Management. The offending presentation is from 2008. I have around 20 files created in older Keynote versions. They are not the disposable kinds of presentations – you know, the ones that you prepare, project, and forget about. I like to reuse them, show them when I’m talking about various subjects contained in them.
How am I supposed to access them now? “Save it with Keynote ’09 first”, but how? I don’t have Keynote ’09 any more on my fresh Mavericks install.
And, of course, Keynote ’09 will at some point stop working on new Macs. Apple—and, to a lesser extent, other developers such as Microsoft—cannot be relied upon to support old file formats. The responsibility then falls to the user. If you use an app that creates files in a proprietary format, as soon as a new version comes out you should update all of your documents to the new format. It’s not fun to do this, but there will probably never be an easier time. And it may be a lossy process, so you should also keep the versions in the older format.
Update (2014-03-20): Drew Crawford:
If you are arguing that Apple “should have” implemented this feature, you are also arguing that there are people who want to buy it, and that is a point that is fairly easy to prove.
I do not find this to be a convincing argument. It reminds me of the old joke about how an economist won’t pick up a coin on the ground because, if it were real, someone else would have already found it.
Update (2014-04-14): Thomas Brand:
Even after iWork became a thing, I still find it hard to believe Apple is using its office suite for anything but presentations. Keynote ’09 will stop working on new Macs eventually, and it is hard to ask a company as large as Apple to update every file in its record of knowledge every couple of years.
Update (2014-11-24): The lack of file format compatibility is discussed in Accidental Tech Podcast #90.
Apple’s Podcasts app promises to handle all the subscription, episode, and playback syncing. The problem, of course, is that Podcasts has always been widely regarded as a piece of shit. But it’s been updated since I first looked at it, and since I second looked at it, too. So I promised myself I’d give Podcasts another tryout, because the upside of automatic syncing on both devices is worth a lesser experience on the phone.
It didn’t work out the way he hoped. I don’t know of a good solution.
Yup, we’ve got ourselves a nib inside a nib here. That’s why it doesn’t have to reload the entire thing, it just reloads the smaller nib embedded as a data blob inside the main nib.
There are some warnings I don’t turn on, for any of several reasons […] The rest of the warnings, I turn on because either they make something clearer or they tell me about either real or potential (i.e., future real) bugs.
Bindings are one-way dataflow constraints, specifically with the equation limited to y = x1. More complex equations can be obtained by using NSValueTransformers. KVO is more of an implicit invocation mechanism that is used primarily to build ad-hoc dataflow constraints.
Anyway, when you add it all up, my conclusion is that while I would really, really, really like a good constraint solving system (at least for spreadsheet constraints), KVO and Bindings are not it. They are too simplistic, too fragile and solve too little of the actual problem to be worth the trouble. It is easier to just write that damn state maintenance code, and infinitely easier to debug it.
KVO is akin to manual memory management - a lot of individual pieces to tweak and remember, and the outcome is very brittle and error-prone.
Calling a function in C requires the signature to be known for each call-site at compile-time; doing so at run-time is not possible and so one must drop down into assembly and party there instead.
When reading and writing data to a socket, you must write your code to accept reading or writing less data than requested. The read and write functions return the number of bytes actually read or written. You can get away with ignoring this value in a lot of situations, but not so with socket programming. The amount of data read or written will frequently be less than what you requested when dealing with sockets, so you must write the code to buffer the data and loop in order to make multiple calls.
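The loop the quote describes can be sketched in C. This is a minimal illustration, not code from the quoted article; the write_all/read_all helper names and the socketpair round trip are assumptions of mine:

```c
#include <errno.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Write exactly len bytes to fd, looping because write() may accept
   fewer bytes than requested. Returns 0 on success, -1 on error. */
int write_all(int fd, const char *buf, size_t len) {
    while (len > 0) {
        ssize_t n = write(fd, buf, len);
        if (n < 0) {
            if (errno == EINTR) continue;  /* interrupted: just retry */
            return -1;
        }
        buf += n;            /* advance past the bytes actually written */
        len -= (size_t)n;
    }
    return 0;
}

/* Read exactly len bytes from fd, looping on short reads. */
int read_all(int fd, char *buf, size_t len) {
    while (len > 0) {
        ssize_t n = read(fd, buf, len);
        if (n < 0) {
            if (errno == EINTR) continue;
            return -1;
        }
        if (n == 0) return -1;  /* peer closed before we got everything */
        buf += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Round-trip a message over a local socketpair to exercise both loops. */
int roundtrip_demo(void) {
    int fds[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) < 0) return -1;
    const char msg[] = "hello, sockets";
    char got[sizeof msg];
    int ok = write_all(fds[0], msg, sizeof msg) == 0 &&
             read_all(fds[1], got, sizeof msg) == 0 &&
             memcmp(msg, got, sizeof msg) == 0;
    close(fds[0]);
    close(fds[1]);
    return ok ? 0 : -1;
}
```

The same buffering-and-looping pattern applies to any stream socket, regardless of how the descriptor was obtained.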
The key part is that you don’t actually need to build the collaborating object. In fact, you don’t need to worry at all yet about how it will be implemented. All that matters is that you express the messages it responds to, so that the mock object can test whether they’re sent. In effect, the mock lets you say, “I know that at some point I’ll want this, but I don’t want to be distracted by thinking about it.” It’s like a to-do list for TDDers.
The other way in which we can use mock objects is to investigate integration with external code, such as Apple’s frameworks or third-party libraries. The mock object can remove all of the complexity associated with using the framework, so the test doesn’t need to create a full-blown environment just to ensure a small part of our app’s connection to that environment. This use of mock objects follows a test pattern called the Humble Object.
A nice mock records the messages it receives, just like a regular mock, but it doesn’t worry about receiving messages that it wasn’t told to expect.
Partial mocks act as proxies to real objects, intercepting some messages but using the real implementation for messages they weren’t told to replace.
Because the current progress is thread-specific, it is important that the worker object creates its progress object on the same thread it was invoked on. Otherwise, the parent-child relationship will not be set up correctly. Once created, NSProgress objects are thread-safe. The worker object can later update properties on the progress from any thread/queue.
The view controller observes the progress using KVO, but the observer method is called on the worker’s thread/queue, so it has to tell the main thread to update the user interface. It seems like this could perhaps be simpler, but overall it looks like NSProgress has a pretty clean API. It’s definitely one of the more interesting new Cocoa features.
A damages expert will argue on Apple’s behalf that, if the parties had acted reasonably and rationally in a hypothetical negotiation, Samsung would have agreed to pay $40—forty dollars!—per phone or tablet sold as a total royalty for the five patents-in-suit, which relate to (but don’t even fully monopolize) the phone number tapping feature, unified search, data synchronization, slide-to-unlock, and autocomplete. The theory is that Samsung would simply have raised its prices accordingly. (You can find the final list of Apple’s patents-in-suit here; that post also lists Samsung’s patents-in-suit, but three more patent claims have since been dropped).
Apple’s royalty-type damages claim for five software patents is also far out of the ballpark of anything that has ever been claimed or rumored to be paid in this industry for entire portfolios. After Apple and Nokia settled in 2011, the highest per-unit royalty estimate I heard about (and this was just an analyst’s claim, not official information) was in the $10 range—for Nokia’s huge portfolio of SEPs and non-SEPs, not for a handful of patents. Guesstimates of what various Android device makers pay to Microsoft—again, for a portfolio license, not a five-patent license—that have appeared in the media did not exceed $15–20 per unit, at least the ones I’m aware of. (And Microsoft has a stronger software patent portfolio than Apple.)
Objective-Smalltalk is an evolution of Smalltalk based on the Objective-C runtime.
It adds angle brackets for type annotations, both for optional static type checking and to designate C types such as <double> for interoperating with C and Objective-C. Generic raw pointers are not supported; wrapper objects and bulk collections are preferred.
The other syntactic addition to Smalltalk is that identifiers are generalized to URIs. This addresses interoperability with the Unix filesystem and Web Resources, as well as subsuming Objective-C properties and Keyed Value Coding and making keyed storage such as dictionaries much less necessary and visible.
Objective-Smalltalk is built on top of the Objective-C runtime, as a peer to Objective-C, and uses the host platform’s C ABI and calling conventions, thus being fully integrated (e.g. callable) from other peers on the platform. It does not require a VM or an image.
While Objective-Smalltalk would not require shipping source code with your applications, due to the native compiler, it would certainly allow it, and in fact my own BookLightning imposition program has been shipping with part of its Objective-Smalltalk source hidden in its Resources folder for about a decade or so.
Open Source should be more about being able to tinker with well-made apps in useful ways, rather than downloading and compiling gargantuan and incomprehensible tarballs of C/C++ code.
He also has some interesting comments on Hacker News.
Lots of good ideas here. I think a runtime-compatible Objective-C–without-the-C is where we are headed. But, and I hate to say this, I’ve never liked Smalltalk syntax. I like the way Smalltalk works, and I like the Objective-C bracket syntax, but to my eyes Smalltalk has too many spaces to be easily readable. I feel like I am forever parsing it and mentally inserting parens.
The tools and APIs for add-ons are available to everyone. The Google Apps team only steps in ahead of final publication to the store.
From there, developers can submit working prototypes of add-ons to Google Apps for admission. At launch, Google has 25 add-on partners. Add-ons will do everything from print labels to customize emails. For instance, PandaDoc is an add-on that allows you to create legally binding documents with digital signatures, notations and other features.
You use Google Docs and Sheets to get all sorts of stuff done—whether you’re staying up late to finish that final paper or just getting started on a new project at the office. But to help take some of that work off your shoulders, today we’re launching add-ons—new tools created by developer partners that give you even more features in your documents and spreadsheets.
It seems odd that we’re now at the point where Web apps are more customizable than desktop ones.
Pono’s mission is to provide the best possible listening experience of your favorite music. We want to be very clear that PonoMusic is not a new audio file format or standard. PonoMusic is an end-to-end ecosystem for music lovers to get access to and enjoy their favorite music exactly as the artist created it, at the recording resolution they chose in the studio. We offer PonoMusic customers the highest resolution digital music available.
To me, the Pono (pronounced Poe-No) Player looks funky and old-school, as if someone built it from spare parts taken from older devices. But it’s all about the music, right? Given what the critics are saying, I don’t see the win here against existing standards. But I would wager that all that criticism is paper analysis and does not come from hearing the Pono Player itself.
There is a reason why every iPod and iPhone has been a flat device. So it can fit in your pocket! So Pono decided to make their new music player a triangle shape.
They’re making their calculations based on compressed MP3 files (using 256 kbps) and uncompressed high-resolution files. All of the 24/96 files I have in my iTunes library come in at about 2,000 – 3,000 kbps, because they are compressed, as are the FLAC files that are mentioned above. That’s about half the actual bit rate, because FLAC compresses about 50%. But if the Pono people quote uncompressed bit rates, yet still say these are FLAC files, they’re simply lying. (For example, 1411 kbps is the bit rate of uncompressed CD quality files, in either WAV or AIFF format, not in FLAC format as the Pono FAQ says.)
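The arithmetic behind those numbers is easy to check: an uncompressed PCM bit rate is just sample rate times bit depth times channel count. A small sketch (the helper name is mine, not from the quoted post):

```c
#include <stdio.h>

/* Uncompressed PCM bit rate in kbps:
   sample rate (Hz) x bit depth x channels / 1000. */
double pcm_kbps(double sample_rate_hz, int bits, int channels) {
    return sample_rate_hz * bits * channels / 1000.0;
}

/* CD audio:   pcm_kbps(44100, 16, 2) is about 1411 kbps,
   matching the uncompressed figure quoted for WAV/AIFF.
   24-bit/96kHz: pcm_kbps(96000, 24, 2) is 4608 kbps; with FLAC
   compressing roughly 50%, that lands in the 2,000-3,000 kbps
   range the quote reports for real files. */
```

This is exactly the discrepancy being called out: quoting 1411 kbps while labeling the files FLAC conflates the uncompressed rate with the compressed format.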
I agree with the Pono folks that we now have the technology to provide a much better listening experience. However, in my view the bulk of the problem is not the quality of 256 kbps AAC. It’s the loudness war that causes those files to be mastered poorly. Remaster the compressed files with more dynamic range, and they would sound much better. Create high-resolution FLAC files with the same bad process, and they would still sound bad.
Update (2014-03-13): Kirk McElhearn:
However, if someone really wants to provide “music as it was intended to be heard,” they’d do a lot better to look at the mastering process that’s been destroying music in recent decades. In a practice colloquially known as “the loudness wars,” music producers, prodded by record labels, use dynamic compression to increase the overall volume of music, making it sound horrendous. Since, in general, louder sounds better, or brighter, when you compare two songs, producers have been cranking up the volume to make their songs stand out. But string together an album’s worth of overly loud tracks, and it’s fatiguing. It’s a war of attrition, and our ears are the losers. No high-resolution files will make this music sound better, ever.
Update (2014-08-24): Kirk McElhearn:
This Guardian article reads like an advertorial. Lazy journalists didn’t want to take the time to examine the questions around high-resolution music files objectively, so they got a company who sells the product they’re reporting on, and pimped that company’s products[…]
I haven’t yet updated, but Polar has some screenshots and live polls regarding the interface changes.
Update (2014-03-11): Jesper:
iOS 7 is a historic fact at this point and they’re not going to go back. But they are having to evolve it out of the supposed already-perfect sprung-from-Jony-Ive’s-forehead state and make changes. Dulling the green is a welcome first step; making the dialer transitions and interface work better than in iOS 6, as opposed to significantly worse (as in iOS 7.0) is another. Most of the other changes are stopgaps.
There’s a measurable improvement over iOS 7.0 across all of these apps, some more noticeable than others. In a few instances, iOS 7.1 very nearly catches up with iOS 6.1.3, which is impressive given the gap between the two operating systems in some of these apps. It’s not a complete recovery from the original iOS 7.0 release, but it’s about as good as Apple can do with hardware this old. The small speed improvements are present throughout the operating system, and this makes the iPhone 4 feel more responsive than it did, if not always as responsive as it once was.
One of the awesome buried gems in iOS is known as “Switch Control.” Using the iPhone’s camera, it lets you create custom switches that perform functions normally done with your hands, like multitasking by tilting your head instead of double-tapping the home button.
The update doesn’t come with big new features, but Jony Ive and the software team have made a ton of small design tweaks, most notably in areas like the Phone app, the shutdown interface, and Calendar, as well as a number of Accessibility options that tweak the UI further with button shapes, darker colors, and improved contrast.
In Settings > General > Accessibility > Increase Contrast, two new options — Darken Colors and Reduce White Point — join Reduce Transparency in making the interface less washed-out. I particularly like Darken Colors, simply because I prefer more saturated colors to iOS 7’s pastels.
I know my mother (and likely many others) will rejoice over the fact that the Calendar app once again has an obvious list option: When viewing a single day, you can tap the list icon at the top to view a full list of your appointments; in month view, you can toggle a list of the highlighted day’s appointments, which show up beneath the month’s calendar. Though I’m a Fantastical convert, this change at least improves the usefulness of the Calendar app for many users.
I’m trying out the “Button Shapes,” “Reduce White Point,” and “Darken Colors” options to go along with “Reduce Motion.” In theory, I would like “Reduce Transparency,” but since iOS 7 was designed with transparency in mind it tends to make things ugly.
Update (2014-03-12): I am seeing a bug where HDR keeps going back to Auto even though I’ve set it to On.
I start with the ideal assumption that everything will run on the main thread.
Once I find that a queue is needed, I keep that queue private to the object that uses it. That object’s public API is main-thread-only, even though internally it uses a queue.
Sounds like a good general rule to me.
Another point to make is that Apple’s terms and conditions make it clear that you do not own any content you purchase from the company, but are only granted access until your death. That’s a much more complicated issue that may, one day, have to be dealt with by the courts.
Update (2014-03-10): Kirk McElhearn:
So be aware that, when Apple says you can “own” a movie, it’s not true. This differs from music, which, not having DRM, does not need an Apple ID and password to play. But for movies, TV shows, books and apps, you never really own them; you’ve just paid a price to use them until you die.
Doug Carlston, computer games pioneer and founder of Brøderbund Software, Inc., has donated to The Strong in Rochester, New York, a collection of games, consumer software, and corporate records that document the history of the company and the development of the computer games industry in the 1980s and 1990s. The materials will be cared for by The Strong’s International Center for the History of Electronic Games (ICHEG) and made accessible to researchers.
The bug in the GnuTLS library makes it trivial for attackers to bypass secure sockets layer (SSL) and Transport Layer Security (TLS) protections available on websites that depend on the open source package. Initial estimates included in Internet discussions such as this one indicate that more than 200 different operating systems or applications rely on GnuTLS to implement crucial SSL and TLS operations, but it wouldn't be surprising if the actual number is much higher. Web applications, e-mail programs, and other code that use the library are vulnerable to exploits that allow attackers monitoring connections to silently decode encrypted traffic passing between end users and servers.
It sounds a lot like the recent Apple bug.
Getty Images is dropping the watermark for the bulk of its collection, in exchange for an open-embed program that will let users drop in any image they want, as long as the service gets to append a footer at the bottom of the picture with a credit and link to the licensing page. For a small-scale WordPress blog with no photo budget, this looks an awful lot like free stock imagery.
Model objects live on the main thread. This makes it easy to use VSNote, VSTag, and so on in view controllers and in syncing.
There is one exception: you can create a “detached” copy of a model object to use with API calls. A detached model object exists on one thread of execution only, is short-lived, and is disconnected from the database. Detached objects aren’t a factor when it comes to concurrency.
When a model object is added, changed, or deleted, updates to the database are placed in a background serial queue.
Update (2014-03-07): Jesper:
Of Apple’s fixes’ own admission, Core Data sync didn’t work because it was a black box with no ability to debug it. It would be unfair to zing Core Data at large with that epithet. But if it’s something that seems true about Apple’s frameworks, love them mostly as I do, it’s that they’re constructed as if to impress on their user how privileged they should feel because of the difficulty of the bar that they set to solve the problem at, and the complexity of implementation they have used to convincingly solve the problem.
Basic features are still painful for people that have been successful Cocoa coders for ten years. They’re not sufficiently saved by the ripening of frameworks as much as by their own accumulated ingenuity. Cocoa is still being developed, features are added, but rarely does something hard get easier.
The second reason has to do with my enduring love of plain-ol’ Cocoa. I like regular Cocoa objects. I like being able to implement hash, and design objects that can be created with a simple init (when possible and sensible). I especially like being able to do those things with model objects. (Which totally makes sense.)
I’m really excited about this release! It’s got features that many people have been asking for, and it opens Arq up to a whole new range of options for storing backup data.
Glacier backups now use the S3 Glacier Lifecycle feature. Among other benefits, this allows Arq to prune old Glacier commits (that previously were immortal) and subject them to the budget. Unfortunately, Glacier vaults from previous versions of Arq cannot be transitioned; you have to delete them and create a new backup target (not in that order!).
You can now back up to other S3-compatible destinations such as DreamObjects, which is about half the price of Amazon S3 and has fewer restrictions than the (even cheaper) Amazon Glacier. I plan to continue using Glacier and S3 because the performance has been great and (in theory, see below) the reliability is unmatched. But it’s nice to have alternative services to switch to or use in parallel.
Arq now supports backups via SFTP, which is something I’ve wanted a backup app to do for as long as I can remember. I have an account with DreamHost, and they offer 50 GB of SFTP space for personal backups. This is a convenient, free space I can use for my most important backups. It avoids the delays and expense of restoring from Glacier. DreamHost Personal Backup is great as a secondary backup target, but it is not itself backed up so you should still use AWS or another service for your primary.
You can also use SFTP to make a local backup or archive on a NAS or other Mac that you have an account on.
Aside from the new storage options, the other big new feature is that you can now have multiple backup targets. This lets you have multiple backups going to different cloud services. You can also spread your files across multiple targets, e.g. if you want your Documents folder to have a different backup schedule than your Aperture or iTunes library. Each target can also have a separate budget, which lets you keep a longer history for certain folders. You can also pause a backup target (by setting its schedule to manual) in order to give priority to other targets (since Arq seems to only back up to one target at a time). Alas, the targets cannot be renamed or reordered, and you cannot copy file exclusion patterns from one target to another.
I’ve been seriously using Arq since version 2, and version 3 was one of my favorite apps. Version 4 so far seems to be better still. The app itself has been reliable (rarely crashing) and has not hogged the CPU (like other backup apps I’ve tried). However, I have had some problems with the reliability of Arq’s backups. It’s not clear whether this is due to a bug in Arq itself or problems with the cloud storage provider (AWS).
Twice in the last six months, I’ve found that backup snapshots (“commits”) older than a certain date had disappeared. Arq stores the commits in a linked list. If a commit object is lost, Arq, naturally, will no longer be able to find the trees and blobs in that commit. But it will also lose the link to the parent commit (previous backup snapshot) and, thus, all of the previous snapshots. In theory, much of the data is still on the server, but it’s no longer in an accessible form, and Arq will garbage collect it when it enforces the budget.
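The failure mode described above can be illustrated with a toy model in C. The structures here are hypothetical, not Arq’s actual on-disk format; the point is only that in a parent-linked history, one lost object severs the chain to everything older:

```c
#include <stddef.h>

/* Toy model of snapshot history: each commit names its parent by id,
   and objects live in a (here, array-backed) store. */
typedef struct {
    int id;
    int parent_id;   /* -1 marks the oldest commit */
    int present;     /* 0 simulates an object lost on the server */
} Commit;

/* Count how many snapshots remain reachable from the newest commit.
   Traversal stops at the first missing object, so everything older
   than a lost commit becomes unreachable even if its data is still
   physically on the server. */
int reachable_snapshots(const Commit *store, size_t n, int head_id) {
    int count = 0;
    int id = head_id;
    while (id != -1) {
        const Commit *c = NULL;
        for (size_t i = 0; i < n; i++)
            if (store[i].id == id) { c = &store[i]; break; }
        if (c == NULL || !c->present) break;  /* lost object: chain severed */
        count++;
        id = c->parent_id;
    }
    return count;
}

/* Five snapshots; losing the middle one orphans the two oldest.
   Returns before-count * 10 + after-count. */
int demo_lost_commit(void) {
    Commit s[5] = {
        {1, -1, 1}, {2, 1, 1}, {3, 2, 1}, {4, 3, 1}, {5, 4, 1}
    };
    int before = reachable_snapshots(s, 5, 5);  /* all 5 reachable */
    s[2].present = 0;                           /* lose commit 3 */
    int after = reachable_snapshots(s, 5, 5);   /* only 5 and 4 remain */
    return before * 10 + after;
}
```

This is why a 1-in-10,000 object loss rate is misleadingly comforting: the damage depends on which object is lost, not just how many.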
The developer, of course, takes this sort of thing very seriously. The first time I noticed missing backup snapshots, he told me that several other customers had reported the same problem around the same time. It seemed as though the problem was that Amazon S3 was reporting objects as missing (when doing the equivalent of an ls) even though it could successfully fetch their data when asked (the equivalent of stat or cat). So when Arq periodically verified its backups, it would delete objects related to the “missing” ones unnecessarily. An update to Arq was soon released to fix this.
At the time, I was using S3 Reduced Redundancy Storage for my backups. RRS storage is cheaper than regular S3 but offers only 99.99% durability compared with 99.999999999%. Since I have other backups besides Arq, I did not think I needed to pay for those extra 9’s. I thought it was acceptable to lose 1 in 10,000 objects, even though I have many more files than that. What I failed to appreciate was that the lost object might not be a file. It could instead be a commit object. In that case, losing that one object effectively means losing hundreds of thousands or even millions of other objects. These days, I think there is little reason to use RRS with Arq. You can store your backup data in Glacier, which is much cheaper than RRS yet has the same durability as S3. The backup metadata is stored in S3.
It’s not clear whether RRS was at fault, but I switched away from it just to be safe. Then, a few months later, I noticed that more old backups had disappeared. This time, other Arq users had apparently not encountered the same problem. It’s hard to know, though, because it is not obvious in the user interface that backups have been lost. You only notice it when you click a disclosure triangle to see the list of snapshots and see that the list is shorter than expected.
I never actually lost any current backups, but I was intending to use Arq as a historical archive as well, because sometimes I need access to old versions of files. In that sense, the cloud backup is much more than a backup; I do not have master local copies of all the versions.
It’s obviously very troubling to have a backup app or cloud storage provider lose my backups. But I continue to use and recommend Arq for several reasons. First, I have confidence in the product’s basic design and in Stefan, its developer. Second, Arq 4’s support for multiple backup targets offers a variety of ways to mitigate the problems caused by lost objects. Third, I have tried just about every backup product I could find over the years, and I have yet to find one that’s better. The closer I look, the more flaws and design limitations become visible. For example, Backblaze is highly regarded, yet it silently deletes backups of external drives that haven’t been connected in a while.
Backups are important enough that I make local ones (using SuperDuper and DropDMG) even though that’s more work than just relying on the cloud. I want to have copies of my data in my physical possession. There are also obvious benefits to making cloud backups, e.g. using Arq, so I do that as well. What I have more recently come to realize is that cloud backups are important enough that I shouldn’t rely on just one provider. Before Arq I used CrashPlan, and it, too, occasionally lost my data. The lesson here is that there is no perfect cloud provider. I should plan for failure and use multiple good providers. I am now using CrashPlan alongside Arq.
The second lesson I’m learning is that I value access to old versions of files but that there are few, if any, backup products that can provide this over the long term. The answer, I believe, is to structure the data so that the backup, rather than the backup history, contains the old versions. In other words, put the versions in band, where possible. For example, a single backup snapshot of a Git repository includes the complete, checksummed history for those files. I don’t need last year’s backup if I committed the file to Git last year and I have yesterday’s backup. Of course, my source code has been in version control from the beginning. But I am now using version control to track other types of files such as notes, recipes, my 1Password database, and my calendar and address book. This lets a newly created cloud backup contain versions from years ago.
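To make the in-band point concrete: Git names every object by the SHA-1 of its content, so a single snapshot of a repository carries its own verification data along with the full history. A minimal Python sketch of how Git computes a blob’s ID (the function name is mine):

```python
import hashlib

def git_blob_id(data: bytes) -> str:
    # Git stores a blob as "blob <size>\0<content>" and names it by
    # the SHA-1 of that whole buffer, so the object name doubles as
    # a checksum of the content.
    header = b"blob %d\x00" % len(data)
    return hashlib.sha1(header + data).hexdigest()

# The empty blob's well-known ID:
print(git_blob_id(b""))  # e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
```

Because every commit, tree, and blob is addressed this way, yesterday’s snapshot of the repository is enough to verify last year’s versions.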
The same logic holds for verifying the backup. It’s nice if the backup software can do this, but if your data has in-band checksums you can verify the restored files independently. You can also verify your working files so that you can identify damage and know when you need to restore a clean copy from backup. You can verify files in Git using git-fsck. For files not in Git, I use EagleFiler and IntegrityChecker.
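As a sketch of what independent verification can look like, here is a tiny checksum-manifest checker in Python. The manifest format and names are my own for illustration, not how EagleFiler or IntegrityChecker actually store their data:

```python
import hashlib
from pathlib import Path

def checksum(path: Path) -> str:
    # SHA-256 of a file, read in chunks so large files are fine.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(manifest: dict[str, str], root: Path) -> list[str]:
    # Return the relative paths whose current checksum no longer
    # matches the recorded one (i.e. damaged or modified files).
    return [rel for rel, digest in manifest.items()
            if checksum(root / rel) != digest]
```

With a manifest like this stored alongside the files, you can check a restore against it without trusting the backup software’s own verification.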
Long ago, as the design of the Unix file system was being worked out, the entries . and .. appeared, to make navigation easier. […] When one typed ls, however, these files appeared, so either Ken or Dennis added a simple test to the program. It was in assembler then, but the code in question was equivalent to something like this: if (name[0] == '.') continue;
I’m pretty sure the concept of a hidden file was an unintended consequence. It was certainly a mistake.
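The convention that fell out of that shortcut is, to this day, just a name-prefix test. A minimal Python sketch of the same filtering (the function name is mine):

```python
import os

def visible_entries(path: str) -> list[str]:
    # The Unix "hidden file" convention: skip anything whose name
    # starts with a dot. (os.listdir already omits . and ..
    # themselves, the entries the original ls test was aimed at.)
    return sorted(n for n in os.listdir(path) if not n.startswith("."))
```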
Disallowing an app from controlling another is a good idea (I sure don’t want an app selecting menu items for me!) and the App Sandbox Design Guide’s statements about accessibility make complete sense.
That being said, automatically moving windows around on my screen is something that helps me do my job and something I can explicitly control using Accessibility in System Preferences. As a user, this type of “controlling my app” means “making my work easier”.
He wants part of System Events’ AppleScript dictionary to have an access group so that it can be a scripting target. This would make it possible to target System Events from a sandboxed application using the com.apple.security.scripting-targets entitlement rather than the broader com.apple.security.temporary-exception.apple-events one that’s likely to be rejected by App Review.
Unfortunately, access groups are not yet widely supported by Mac OS X’s built-in applications or by third-party ones. One app that does support access groups is iTunes, whose .sdef file is, curiously, not stored inside iTunes.app.
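For reference, targeting iTunes this way looks roughly like the following in a sandboxed app’s entitlements plist. The access-group name follows Apple’s sandboxing documentation for iTunes; treat it as illustrative rather than a complete list:

```xml
<key>com.apple.security.scripting-targets</key>
<dict>
    <!-- Key is the target application's code-signing identifier -->
    <key>com.apple.iTunes</key>
    <array>
        <!-- Access group declared in the target's .sdef -->
        <string>com.apple.iTunes.library.read</string>
    </array>
</dict>
```

The point of the post is that System Events declares no such access groups, so there is nothing to list here for it.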
There are two changes in this update that I really like:
When generating the Markers popup, leading whitespace from the marker name is now used to indent the menu item, so that type-to-select works correctly in the menu.
Control-Tab actually does not work for its intended purpose, which was to flip the sense of “Auto-Expand Tabs” on the fly when entering a tab character. To work around this, Option-Tab has been defined so that it always enters a literal Tab character; thus, if “Auto-Expand Tabs” is turned on, use Option-Tab to enter a tab character instead of spaces.
Read the release notes for each BBEdit update to see just how much behind-the-scenes work it takes to keep a top Mac app up-to-date and polished.
Having an email address in a domain you control and hosting your email at a provider you like can solve numerous problems and perhaps even improve your image.
Apple ships a patched version of OpenSSL with OS X. If no precautions are taken, their changes rob you of the power to choose your trusted CAs, and break the semantics of a callback that can be used for custom checks and verifications in client software.
The reason for this unexpected behavior is that Apple is trying to be helpful. Certificate validation and especially trust databases are a hassle and OpenSSL’s handling of them is rather user-hostile. So Apple patched a Trust Evaluation Agent (TEA) into their OpenSSL. It gives failed verifications a second chance using the system keyring as trust store.
Apple has rebranded iOS in the Car as the much more syllable-friendly “CarPlay”, and launched it in Geneva. This new version has a much different interface than that shown at WWDC, as can be seen on the CarPlay page on Apple’s website. Also of note: there are third-party apps which support CarPlay; it isn’t known yet whether third-party developers require a special agreement to enable CarPlay support.
Interacting with CarPlay can be done via buttons/knobs or directly by touch (if available). It’s important to note that CarPlay likely won’t replace the need for checking an expensive box on your car’s option list. The OEM still needs to provide the underlying hardware/interface, CarPlay simply leverages the display and communicates over Apple’s Lightning cable.
It also has the potential to fizzle out because Apple demands more control than their partners are comfortable with, like iAd, or their interests conflict too much with the partners’ interests without enough upside to the partners, like iTunes TV rentals.
The risk seems clear: Apple isn’t building the hardware in the cars. Color me skeptical that this is going to work smoothly. Also, no third-party app support — yet. UPDATE: Actually, there are a handful of third-party apps — Beats Radio, iHeartRadio, Spotify, and Stitcher — but those are hand-picked partners. What I’m saying is there’s no way yet for any app in the App Store to present a CarPlay-specific interface.
Volvo confirmed that CarPlay’s connection and video mirroring functionality is based on a streaming H.264 video feed, prompting watchers to speculate that the feature is based on AirPlay, an Apple-designed media streaming technology.
In a rather surprising find earlier today, N4BB was able to confirm that CarPlay runs on QNX, an operating system the embattled Canadian smartphone maker BlackBerry acquired from Harman International Industries back in 2010…
For all we know, CarPlay might just be an extension to the existing car entertainment systems, using something like VNC (or hopefully something more optimized for the use case) to show the iOS screen on the existing infrastructure.
In that case, the car is running QNX because it has always been running QNX and because the car must be usable even if the user decides to switch to a different platform or loses their device.
In that scenario, saying CarPlay is running QNX is similar to saying your Thunderbolt display is running OS X when it’s connected to your Mac running OS X, or, using an even closer analogy, similar to saying that your OS X machine is running Linux because you’re using SSH connected to a Linux box (or any other kind of remote desktop).
Previous reports had suggested that CarPlay would communicate with displays wirelessly using some version of Apple’s AirPlay protocol, but according to today’s release, the feature will only work with Lightning-equipped iPhones.
But how does CarPlay stack up to the current crop of infotainment systems? Here’s a breakdown of how Apple’s first real attempt at dashboard dominance competes with the best from the established automakers.
I’m sitting at my desk right now, waiting to sync my iPhone. I think I started about twenty minutes ago, and all I’m doing is adding a bunch of audio files I want to listen to when I go out for a walk. Which I hope to do before the sun goes down…
Back in the day, this process was much faster than it is now. I don’t know exactly what’s changed since iOS 7, but I see this all the time, on all my iOS devices: iPhone, iPod touch, iPad Air.
This has been my experience as well. It is somewhat faster now that I’ve turned off photo syncing in favor of FlickStackr (App Store) and podcast syncing in favor of Downcast (App Store). This is curious since the photos, especially, didn’t change very much but always seemed to require an inordinate amount of time to sync. The nice thing about FlickStackr is that it lets me zoom in more than the regular Photos app. The photos also seem to have fewer JPEG compression artifacts. Unfortunately, I have to remember to tell it to load the photos while I have a Wi-Fi connection.
If you find yourself in a situation that is difficult to solve with Auto Layout, just don’t use it for that particular view. You can freely mix the constraint-based layout with manual layout code, even within the same view hierarchy.
You can think of Auto Layout as just an additional step that runs automatically in your view’s layoutSubviews method. The Auto Layout algorithm performs some magic, at the end of which your subviews’ frames are set correctly according to the layout constraints. When that step is done, the Auto Layout engine halts until a relayout is required (for example, because the parent view size changes or a constraint gets added). What you do to your subviews’ frames after Auto Layout has done its job doesn’t matter.
The built-in cameras on Apple computers were designed to prevent this, says Stephen Checkoway, a computer science professor at Johns Hopkins and a co-author of the study. “Apple went to some amount of effort to make sure that the LED would turn on whenever the camera was taking images,” Checkoway says. The 2008-era Apple products they studied had a “hardware interlock” between the camera and the light to ensure that the camera couldn’t turn on without alerting its owner.
MacBooks are designed to prevent software running on the MacBook’s central processing unit (CPU) from activating its iSight camera without turning on the light. But researchers figured out how to reprogram the chip inside the camera, known as a micro-controller, to defeat this security feature. In a paper called “iSeeYou: Disabling the MacBook Webcam Indicator LED,” Brocker and Checkoway describe how to reprogram the iSight camera’s micro-controller to allow the camera and light to be activated independently. That allows the camera to be turned on while the light stays off.
See also Checkoway’s iSightDefender on GitHub.
Two years ago I developed a case of Emacs Pinkie (RSI) so severe my hands went numb and I could no longer type or work. Desperate, I tried voice recognition. At first programming with it was painfully slow but, as I couldn’t type, I persevered. After several months of vocab tweaking and duct-tape coding in Python and Emacs Lisp, I had a system that enabled me to code faster and more efficiently by voice than I ever had by hand.
In a fast-paced live demo, I will create a small system using Python, plus a few other languages for good measure, and deploy it without touching the keyboard. The demo gods will make a scheduled appearance. I hope to convince you that voice recognition is no longer a crutch for the disabled or limited to plain prose. It’s now a highly effective tool that could benefit all programmers.
I used the Newton as a productivity device. I used the P800 as a productivity device. But at least for me, the iPad never turned out to be a good productivity device. It turned out to be great for browsing the web, watching movies, and playing games. Great for reading books and comics. Great for consumption. But not great for production.
The iPad will have arrived as a productivity device when news sites stop reporting about people who use iPads for productivity. So in the end, all of these links to articles about people who use their iPads to create things only seem to support the notion that this is not how most people use their iPads.
Metro’s split-screen mode isn’t perfect. It doesn’t cover every use case. But at least for me, it covered surprisingly many of them, and it made the Surface a much better option for creative work than an iPad.
The Surface’s pen is almost as good as my Cintiq’s. Tracking is fast, it’s pressure-sensitive, it works everywhere, and it feels like a real pen. It’s great, unlike every iPad pen I’ve ever tried.