Chris Anderson correctly analysed that the advent of e-commerce sites like Amazon and iTunes gave unprecedented prominence to the bottom of the catalogue, making it possible to increase sales of historically less popular items that, in a classical retail model, had no chance of being on front display (or even in stock!), nor of enjoying enough success to benefit from the accelerator effect of those at the top of the pile (the top-50 music chart, for example).
Now that the iTunes App Store is over five years old and the number of catalogue references exceeds a million items (as many as the Google Play Store for Android), it is legitimate to ask whether the long tail applies to these pure e-commerce sites, next-generation offspring with only slight mutations… Does an app buried at the bottom of the catalogue benefit from the positive effects mentioned above? Do the app stores facilitate the discovery of apps and allow app publishers and developers to establish a truly profitable business?
These arguments alone suggest that the long-tail effect probably does not hold on the app stores. The situation is even more extreme: in the absence of a long-tail effect, the opposite becomes possible, namely the creation of super-champions capitalizing on the nature of apps, which have built-in sharing and viral features that books or films do not have!
Archive for February 2014
Truth is, you shouldn’t use the flash at a performance like that anyway. Not at a sports event, not at a school play, not on Broadway, not at fireworks, not at the Olympics — because your camera’s flash is useless beyond about eight feet.
Yeah, yeah, I know. I’m telling you to turn off the flash when it’s dark out, but to turn on the flash when it’s sunny?
That’s called a fill flash. Its purpose is to supply a little additional light for the subject to compensate for the overly bright background.
Apple does not log messages or attachments, and their contents are protected by end-to-end encryption so no one but the sender and receiver can access them. Apple cannot decrypt the data.
I still think this is misleading because it ignores the fact that iCloud backups are encrypted with a key that’s in Apple’s possession. We know this because you can buy a new iPhone and restore your backup simply by entering your Apple ID and password. And we know that your password itself is not the key because Apple’s support people can restore your account access if you forget your password.
The other important point is that, since Apple’s servers are handing out the keys, Apple could easily be the “man in the middle” if it ever wanted to intercept messages. In other words, the security in iMessage is purely due to policy (trusting that Apple is not doing this) rather than the architecture or something that we can verify.
The white paper is well worth reading, though I’m not sure why everyone is treating it as a new document, rather than an update to the previous version.
Highly efficient file backup system based on the git packfile format. Capable of doing fast incremental backups of virtual machine images.
It uses a rolling checksum algorithm (similar to rsync) to split large files into chunks. The most useful result of this is you can back up huge virtual machine (VM) disk images, databases, and XML files incrementally, even though they’re typically all in one huge file, and not use tons of disk space for multiple versions.
It uses the packfile format from git (the open source version control system), so you can access the stored data even if you don’t like bup’s user interface.
Unlike git, it writes packfiles directly (instead of having a separate garbage collection / repacking stage) so it’s fast even with gratuitously huge amounts of data. bup’s improved index formats also allow you to track far more filenames than git (millions) and keep track of far more objects (hundreds or thousands of gigabytes).
bup is overly optimistic about mmap. Right now bup just assumes that it can mmap as large a block as it likes, and that mmap will never fail.
Because of the way the packfile system works, backups become “entangled” in weird ways and it’s not actually possible to delete one pack (corresponding approximately to one backup) without risking screwing up other backups.
The Y combinator is a higher-order function. It takes a single argument, which is a function that isn't recursive. It returns a version of the function which is recursive. We will walk through this process of generating recursive functions from non-recursive ones using Y in great detail below, but that's the basic idea.
More generally, Y gives us a way to get recursion in a programming language that supports first-class functions but that doesn't have recursion built in to it. So what Y shows us is that such a language already allows us to define recursive functions, even though the language definition itself says nothing about recursion. This is a Beautiful Thing: it shows us that functional programming alone can allow us to do things that we would never expect to be able to do (and it's not the only example of this).
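The idea is concrete enough to run. In a strict language like Python the plain Y combinator would loop forever, so this sketch uses the call-by-value variant (often called Z); note that the step function never refers to itself by name:

```python
# Z combinator: the call-by-value form of Y (plain Y diverges under
# eager evaluation, so the self-application is eta-expanded).
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A non-recursive "step": it receives the finished recursive function as rec.
fact_step = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)

factorial = Z(fact_step)   # ties the knot with no named recursion anywhere
```

Nothing in fact_step mentions fact_step or factorial; Z alone supplies the self-reference, which is exactly the point of the passage above.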
It’s been 4 years and throughout all this time we've continued to sell RapidWeaver in and out of the Mac App Store. I expected direct sales to trail off as the years went on. I kept thinking it was about to happen… but it never did. In fact, most days the direct version of RapidWeaver continues to outsell the Mac App Store version.
This is what I’m seeing as well. Given that the “storeagent: Unsigned app” Mavericks bug that can prevent Mac App Store apps from launching is still present in 10.9.2, I’m glad that Apple’s store is not my exclusive sales channel.
We’re adding arbitration clauses to our Terms of Service and Dropbox for Business online agreement. Arbitration is a faster and more efficient way to resolve legal disputes, and it provides a good alternative to things like state or federal courts, where the process could take months or even years. If you prefer to opt out of arbitration in the Terms of Service, there’s no need to fax us or trek to the post office — just fill out this quick online form.
No matter what they do (delete your data, privacy breach, overcharging, whatever), you don’t get to sue. Instead, they get to choose the arbitrator according to whatever criteria they want, and thus any dispute is decided by someone they’re paying.
The agreement we make with Dropbox is too important to be enforced only by an arbitrator of their choosing. You have 30 days from the date of notification to opt out of the arbitration clause.
Another question I asked myself was: Is Software Update actually contacting Apple servers or am I being served a compromised update with even more security holes by the NSA?
Does it matter where the update comes from if it’s signed by Apple?
Update (2014-02-26): Nat!:
To get at the meat, use xar -x -f, which will eventually get you to a file called Payload. That is a bzip2-compressed tar archive. Now I find this quite hilarious: after all the hoops Apple went through, with xar, cpio, pax and what have you, they finally use tar to install, as perhaps they should have right from the beginning.
Apple has quietly rolled out its iBeacon specification as it starts to certify devices that carry the Bluetooth LE standard.
Under its MFi program, manufacturers can now request that Apple permit them to attach the iBeacon name to their devices so long as they meet certain criteria.
The specifications are available after signing an NDA. Applying to the program in order to register to carry the iBeacon name, we’re told, is free.
We’re getting closer to the first official release of the Wolfram Language—so I am starting to demo it more publicly.
Here’s a short video demo I just made. It’s amazing to me how much of this is based on things I hadn’t even thought of just a few months ago. Knowledge-based programming is going to be much bigger than I imagined…
In a sense, the Wolfram Language has been incubating inside Mathematica for more than 25 years. It’s the language of Mathematica, and CDF—and the language used to implement Wolfram|Alpha. But now—considerably extended, and unified with the knowledgebase of Wolfram|Alpha—it’s about to emerge on its own, ready to be at the center of a remarkable constellation of new developments.
There are plenty of existing general-purpose computer languages. But their vision is very different—and in a sense much more modest—than the Wolfram Language. They concentrate on managing the structure of programs, keeping the language itself small in scope, and relying on a web of external libraries for additional functionality. In the Wolfram Language my concept from the very beginning has been to create a single tightly integrated system in which as much as possible is included right in the language itself.
I also played around with Cocoa Script “shaders” for shape graphics in Acorn. This won’t ship in 4.4 (or maybe ever?), but it was fun to code up and might be something awesome some day. How it works is a little hard to explain, but I'll try. Basically, instead of a rectangle having just a stroke and a fill when it draws, it will call a snippet of Cocoa Script code in place of the normal drawing routines. That snippet of code then has access to a bunch of libraries, and can do whatever it wants in the context it is drawing into.
Working with Woz was like working with the smartest person you’ve ever known kicked up a couple notches combined with a practical joker. The best times Woz and I had were not coding, but rather playing jokes.
I was not yet out of high school and immature; yet he was always willing to deal with my mood swings, and answer every technical question I gave him (and there were a lot!) He loved explaining things — I’ll never forget one evening at Denny’s when he explained how parsers and lexical analysis worked. He was never too busy to explain concepts that were new to me.
We have created a proof-of-concept “monitoring” app on non-jailbroken iOS 7.0.x devices. This “monitoring” app can record all the user touch/press events in the background, including touches on the screen, home button presses, volume button presses, and TouchID presses, and then send all user events to any remote server, as shown in Fig.1. Potential attackers can use such information to reconstruct every character the victim inputs.
Note that the demo exploits the latest 7.0.4 version of iOS system on a non-jailbroken iPhone 5s device successfully. We have verified that the same vulnerability also exists in iOS versions 7.0.5, 7.0.6 and 6.1.x. Based on the findings, potential attackers can either use phishing to mislead the victim to install a malicious/vulnerable app or exploit another remote vulnerability of some app, and then conduct background monitoring.
There is, however, an intrinsic danger in applying this ability without fully thinking through the implications. When enabled within your applications you are essentially building a massively distributed botnet. Each copy of your application will be periodically awoken and sent on a mission to seek and assimilate internet content with only the OS safeguards holding it back. As your app grows in popularity this can lead to some rather significant increases in activity.
My first example of this was when I added Background Fetch to Check the Weather. A weather app’s primary function is displaying up-to-the-minute, constantly changing data so in my initial iOS 7 update I experimented with adding highly frequent background updates. The result was far more dramatic than I’d expected. Here are my weather API requests (which cost 0.01¢ per request) per day once the update went live. I saw an immediate jump in traffic, roughly 16x normal. Suffice to say I immediately had to scale back on my requested update frequency.
The background fetch API is a game-changer for iOS developers. It has the potential to free us of significant server and infrastructure overheads. This is particularly relevant at a time when many developers are wondering how to stay independent. For Castro, the decision was an easy one and we strongly advocate that other developers take full advantage of this new API as well.
Service-backed apps still have a lot of advantages and exclusive capabilities over iOS 7’s Background Fetch. I think server-side crawling is still the best choice for podcast apps and feed readers, for which users want fast updates to collections of infrequently updated feeds.
Overcast has been crawling tens of thousands of podcast feeds every few minutes for the last 6 months using standard HTTP caching headers. In the last week, 62% of all requests have returned 304 (“Not Modified”). Many of the rest returned the entire “new” feed, but none of the episodes had actually changed, making the server download and process hundreds of kilobytes unnecessarily.
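The mechanics behind those 304s are worth spelling out. This is a hedged sketch (names and structure are mine, not Overcast’s code) of how a feed crawler uses ETag and Last-Modified validators, plus a content hash to catch servers that return 200 for unchanged feeds:

```python
import hashlib

def conditional_headers(cached):
    """Build headers for a conditional GET from the previous response."""
    headers = {}
    if cached.get("etag"):
        headers["If-None-Match"] = cached["etag"]
    if cached.get("last_modified"):
        headers["If-Modified-Since"] = cached["last_modified"]
    return headers

def feed_changed(status, body, cached):
    """Return True only if the feed content actually changed."""
    if status == 304:                       # server honored the validators
        return False
    digest = hashlib.sha1(body).hexdigest()
    if digest == cached.get("sha1"):        # 200, but byte-identical content
        return False
    cached["sha1"] = digest
    return True
```

An unchanged, well-behaved feed then costs only a 304 and a few headers; the hash check covers the misbehaving rest.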
The entire Overcast feed-crawling infrastructure can run on a $40/month Linode VPS.
Core Intuition Jobs aims to solve this problem by becoming the go-to source for both employers and job-seekers in the Cocoa development market. Other sites like StackOverflow Careers take a stab at solving the problem, but they suffer from being too large and serving too many different needs to be uniquely valuable to a niche market such as ours.
Let’s just say I spent a lot of quality time with Google before eventually stumbling across a hint on Microsoft’s developer site. The document talks about using a default setting of 96 DPI. I’ve been spending a lot of time lately with the Mac’s text system, so I knew that TextEdit was using 72 DPI to render text.
That’s another way to think about this problem: a single point of text on your Mac will be 1.33 times larger in your browser.
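The arithmetic is simple enough to write down: 96 / 72 = 4/3 ≈ 1.33. A tiny sketch (constant and function names are mine):

```python
MAC_TEXT_DPI = 72.0   # what TextEdit and the Mac text system use
CSS_DPI = 96.0        # the browser/Windows default

def points_to_css_px(pt):
    """Size of Mac text, in CSS pixels, at default browser scaling."""
    return pt * CSS_DPI / MAC_TEXT_DPI
```

So 12-point text on the Mac corresponds to 16 CSS pixels in the browser.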
US cable giant Comcast has announced a deal with Netflix giving Netflix’s video-streaming service a more direct route through Comcast’s network, which should improve streaming video quality for viewers. The first indications of the new deal came last week, after App.net founder Bryan Berg observed more direct routes for Netflix data through Comcast’s network. The Wall Street Journal reported on Sunday night that the change was the result of a formal, paid agreement between the two companies, though Comcast has not said how much the deal is worth.
Officially, Comcast’s deal with Netflix is about interconnection, not traffic discrimination. But it’s hard to see a practical difference between this deal and the kind of tiered access that network neutrality advocates have long feared.
Dan Rayburn has a contrary take:
Today’s news is very simple to understand. Netflix decided it made sense to pay Comcast for every port they use to connect to Comcast’s network, like many other content owners and network providers have done. This is how the Internet works, and it’s not about providing better access for one content owner over another, it simply comes down to Netflix making a business decision that it makes sense for them to deliver their content directly to Comcast, instead of through a third party. Tied into Netflix’s decision is the fact that Comcast guarantees a certain level of quality to Netflix, via their SLA, which could be much better than Netflix was getting from a transit provider. While I don’t know the price Comcast is charging Netflix, I can guarantee you it’s at the fair market price for transit in the market today and Comcast is not overcharging Netflix like some have implied. Many are quick to want to argue that Netflix should not have to pay Comcast anything, but they are missing the point that Netflix is already paying someone who connects with Comcast. It’s not a new cost to them.
As does Marc Andreessen:
The venture capitalist argued that too much of the discussion about net neutrality assumes that the internet is a static thing, rather than something that is likely to increase exponentially in terms of its demand for bandwidth, and that a strict or dogmatic adherence to net neutrality would likely “kill investment in infrastructure [and] limit the future of what broadband can deliver.”
Update (2014-02-27): Ben Thompson:
What Netflix is most concerned about from a non-discrimination standpoint are broadband caps, and, more broadly, usage-based broadband pricing. It’s not that their position differs on a point-by-point basis from most net neutrality advocates; rather, the priorities are different.
That leaves unlimited access on the chopping block. While I love the idea of unlimited data, I also am aware that nothing comes for free; in the case of unlimited data, the cost we are paying is underinvestment and/or discriminatory treatment of data. Therefore I believe the best approach to broadband is usage-based payment by both upstream and downstream, with no payments in the middle.
While we have not witnessed a change in peering dynamics as a result of the Netflix/Comcast transaction, a trend we have seen over the past few years is the degrading quality of bandwidth from conventional “tier 1” ISPs, where peering edges have become congested due to the games described above. Network operators commonly discuss on mailing lists how the big four access shops all maintain edges which are boiling hot unless you pay them, or buy from an intermediary paying them. Where it was once possible for an enterprise or content shop to enjoy “good enough” connectivity purchasing from these providers directly, one now must enter the complex game of multi-homing to a half dozen or more providers, or purchase from a route-optimized “tier 2” like an Internap, in order to enjoy a positive and congestion-free user experience.
The SSLVerifySignedServerKeyExchange function in libsecurity_ssl/lib/sslKeyExchange.c in the Secure Transport feature in the Data Security component in Apple iOS 6.x before 6.1.6 and 7.x before 7.0.6, Apple TV 6.x before 6.0.2, and Apple OS X 10.9.x before 10.9.2 does not check the signature in a TLS Server Key Exchange message, which allows man-in-the-middle attackers to spoof SSL servers by (1) using an arbitrary private key for the signing step or (2) omitting the signing step.
This signature verification is checking the signature in a ServerKeyExchange message. This is used in DHE and ECDHE ciphersuites to communicate the ephemeral key for the connection. The server is saying “here's the ephemeral key and here's a signature, from my certificate, so you know that it's from me”. Now, if the link between the ephemeral key and the certificate chain is broken, then everything falls apart. It's possible to send a correct certificate chain to the client, but sign the handshake with the wrong private key, or not sign it at all! There's no proof that the server possesses the private key matching the public key in its certificate.
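In outline, the check that was skipped looks like this. A toy Python sketch in which an HMAC stands in for the real RSA/ECDSA signature (purely illustrative, nothing like Secure Transport’s actual code):

```python
import hmac, hashlib

def verify_server_key_exchange(server_auth_key, client_random,
                               server_random, ephemeral_params, signature):
    """Toy model of the ServerKeyExchange check: the signature must cover
    both handshake randoms plus the ephemeral key, and must verify against
    key material tied to the server's certificate. An HMAC stands in for
    the real public-key signature here."""
    signed_data = client_random + server_random + ephemeral_params
    expected = hmac.new(server_auth_key, signed_data, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)
```

Skipping this check, as the bug did, is precisely what lets an attacker present a valid certificate chain while supplying an ephemeral key it actually controls.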
If I compile with -Wall (enable all warnings), neither GCC 4.8.2 nor Clang 3.3 from Xcode makes a peep about the dead code. That’s surprising to me. A better warning could have stopped this, but perhaps the false positive rate is too high over real codebases? (Thanks to Peter Nelson for pointing out that Clang does have -Wunreachable-code to warn about this, but it’s not in -Wall.)
John Gruber on the NSA angle:
These three facts prove nothing; it’s purely circumstantial. But the shoe fits.
You can test whether your device is affected at gotofail.com or imperialviolet.org:1266. At this writing, Mac OS X 10.9, including current seeds, is still vulnerable. iOS 5 and Mac OS X 10.8 never had the bug. It’s fixed in iOS 6.1.6 and iOS 7.0.6:
Secure Transport failed to validate the authenticity of the connection. This issue was addressed by restoring missing validation steps.
The offending line of code is a single extra goto in SSLVerifySignedServerKeyExchange(). In my view, this is not an improper use of goto. The code follows a standard C error handling style. I’m also unpersuaded by the argument that the bug should be blamed on brace format preferences.
Any of us could have written a bug like this, especially when merging changes from different sources. But a flaw in process is what let the bug ship. If ever there were code that should be unit tested, it’s Secure Transport. Landon Fuller shows that it would have been easy to write a test to detect this regression.
Update (2014-02-24): Lloyd Chambers:
This one is unforgivable. It could have compromised interactions with tens of millions of devices had hackers exploited it (have they?), and that remains true for some time to come, because plenty of people won’t update their devices, and OS X doesn’t even have a fix as of this writing.
You just don’t break a core security protocol like this. Who is in charge over there? Test suites should validate such stuff; it’s not exactly a new protocol. Heads ought to roll on this one and right up to high levels perhaps.
He and Chris Breen suggest that using Firefox or Chrome may be safer than Safari.
Update (2014-02-25): Macworld:
In addition, you may be able to shield your traffic from prying eyes with a VPN (Virtual Private Network). Although the VPN hooks into the security framework where the SSL/TLS bug exists, the VPN protocols supported by OS X don’t directly use SSL. You’ll need to check with your network administrator to make sure all your traffic runs through the VPN, however, and that it’s not just site-specific (as some work-related VPNs can be).
The bug is fixed in Mac OS X 10.9.2.
Update (2014-05-12): Martin Fowler:
Also, the Security-55471 version of ssl_regressions.h, which appears to list a number of SSL regression tests for this library, remains unchanged in the Security-55471.14 version of ssl_regressions.h. The only substantial difference between the two versions of the library is the deletion of the goto fail statement itself, with no added tests or eliminated duplication.
The presence of six separate copies of the same algorithm clearly indicates that this bug was not due to a one-time programmer error: This was a pattern. This is evidence of a development culture that tolerates duplicated, untested code.
All of these nuances in the API cause KVO to embody what is known as a pit of failure rather than a pit of success. The pit of success is a concept that Jeff Atwood talks about: APIs should be designed so that they guide you into using them successfully. They should give you hints as to how to use them, even if they don’t explain why you should use them in that particular way.
KVO does none of those things. If you don’t understand the subtleties in the parameters, or if you forget any of the details in implementation (which I did, and only noticed because I went back to my code to reference it while writing this blogpost), you can cause horrible unintended behaviors, such as infinite loops, crashes, and ignored KVO notifications.
I wish Cocoa didn’t have APIs that require you to use KVO.
Unlike a relationship, there is no way to pre-fetch a fetched property. Therefore, if you are going to fetch a large number of entities and then access the fetched property on each of them, the fetched properties will be fetched individually. This will drastically impact performance.
Fetched Properties are only fetched once per context without a reset. This means that if you add other objects that would qualify for the fetched property after the property has been fetched then they won’t be included if you call the fetched property again. To reset the fetched property requires a call to
parentContext was introduced alongside a new concurrency model. To use parentContext, both the parent and child contexts must adopt the new concurrency model. But the problem addressed by parentContext is not concurrency. Concurrency is just a problem, albeit a significant one, that needed to be solved for parentContext to be implemented. The intent of parentContext is to improve the [atomicity] of changes: parentContext allows changes to be batched up and committed en masse. This has always been possible by using multiple NSManagedObjectContexts, but parentContext allows for improved granularity of the batching.
parentContext does provide features that simplify a handful of use cases. Unfortunately, the shortcomings of parentContext mean that it cannot be adopted piecemeal. At the top of the Core Data stack are managed objects. A good model will provide an interface that works at a high level of abstraction, and creating such an interface requires encapsulating implementation detail. The way Core Data is designed means that the natural place for this code is in managed object subclasses. Because parentContext affects the behaviour of managed objects, adopting it makes it difficult to write managed object subclasses without knowing the context hierarchy in which they’ll be used. Proceed with extreme caution!
Speaking of keyboards, there’s now an SDK for adding the Fleksy keyboard to iOS apps. It’s like TextExpander, where each app has to embed the code separately, since iOS doesn’t allow this sort of customization at the system level.
Rise’s surprising solution is to opt out of iOS multitasking altogether. What was presumably intended by Apple as a temporary option for apps when multitasking was first introduced in iOS 4.0 is still available.
Rise takes advantage of a subtle difference between the pre- and post-multitasking environments: unlike apps that support multitasking (which get suspended in the background when the device gets locked), an app that runs in the pre-multitasking compatibility mode keeps running. All the app has to do is wait for the alarm time and execute its custom code.
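As far as I know, the opt-out Rise relies on is controlled by a single Info.plist key, UIApplicationExitsOnSuspend. A minimal fragment (the surrounding plist is abbreviated):

```xml
<!-- Info.plist fragment: opts the app out of multitasking entirely.
     The app is terminated rather than suspended when the user leaves it,
     but keeps running while frontmost on a locked device. -->
<key>UIApplicationExitsOnSuspend</key>
<true/>
```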
The rotated byte encoding allows the 12-bit value to represent a much more useful set of numbers than just 0–4095.
ARM immediate values can represent any power of 2 from 2^0 to 2^31. So you can set, clear, or toggle any bit with one instruction.
More generally, you can specify a byte value at any of the four locations in the word.
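That encoding is easy to model: the 12-bit field is an 8-bit value plus a 4-bit rotation count, applied as a right-rotation by twice the count. A small sketch (helper names are mine) that finds an encoding when one exists:

```python
def rotl32(v, n):
    """Rotate a 32-bit value left by n bits."""
    n %= 32
    return ((v << n) | (v >> (32 - n))) & 0xFFFFFFFF

def arm_immediate(value):
    """Return (rotate, imm8) if value fits an ARM data-processing
    immediate (an 8-bit imm8 rotated right by an even amount), else None."""
    value &= 0xFFFFFFFF
    for rot in range(0, 32, 2):
        imm8 = rotl32(value, rot)   # undo a right-rotation by rot
        if imm8 < 256:
            return rot, imm8
    return None
```

Every power of two up to 2^31 has an encoding (the set bit can always be rotated down into the low byte by an even amount), which is what makes the single-instruction bit set/clear/toggle possible, and a byte value can land at any byte position in the word; something like 0x101, whose set bits are 8 apart, has none.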
Everyone can benefit from SwipeSelection’s vastly improved cursor management compared to stock iOS. I apologize in advance to Kyle Howells, developer of SwipeSelection, but this is one of those tweaks that I wish Apple would straight-up steal; it’s just that good.
The stock method of positioning the cursor while typing in iOS is cumbersome at best, and incredibly frustrating at times. It’s hard to gain the necessary precision using the stock method of editing text, and that’s where SwipeSelection comes in to save the day.
SwipeSelection lets you swipe directly on the iOS keyboard in order to move the cursor. This allows you to place the edit cursor in specific spots with much more precision than before. You can even select text by swiping from the shift key or from the delete key.
I haven’t jailbroken my iPhone, but this is tempting. For all the talk about how a hardware keyboard isn’t necessary and how Apple took the time to get text selection and copy/paste right, the on-screen keyboard remains the most frustrating part of iOS for me.
When editing a command in Terminal, you can type Control-X, Control-E to open it in your text editor of choice (via Wesley Darlington). When you save your changes and close the document, the command is executed.
I have my editor set to BBEdit via this line in ~/.bash_profile:
export EDITOR="bbedit -w"
Secure coding is the practice of writing programs that are resistant to attack by malicious or mischievous people or programs. Secure coding helps protect a user’s data from theft or corruption. In addition, an insecure program can provide access for an attacker to take control of a server or a user’s computer, resulting in anything from a denial of service to a single user to the compromise of secrets, loss of service, or damage to the systems of thousands of users.
Secure coding is important for all software; if you write any code that runs on Macintosh computers or on iOS devices, from scripts for your own use to commercial software applications, you should be familiar with the information in this document.
The new version adds “information about non-executable stacks and heaps, address space layout randomization, injection attacks, and cross-site scripting.”
I was a bit surprised at how straightforwardly this analysis came out. It seems clear that the distribution of people who purchase your apps closely follows overall user adoption. There doesn’t seem to be anything about their speed of updating that impacts their purchasing habits.
Now, that doesn’t mean that dropping support for older versions isn’t a good idea. It just means that this particular line of reasoning shouldn’t be your primary justification. If anything this shows the importance of the dramatic speed at which the general population adopts new OS versions.
But Wealthfront uses Apex Clearing to hold their accounts, and while Apex paid their Intuit tax and offers QFX downloads, guess what? They paid for Windows but not Mac. And so Mac Quicken won’t import their files.
Now, ever since Intuit pulled the INTU.BID stunt with QFX, people have been trying to work around it. If you google around, you’ll find the usual suggestion is to switch out the INTU.BID number for one that works. I tried that, but couldn’t get it to work.
And then I asked myself, why don’t I fix Quicken itself?
Apple’s Emerging Technology group is looking for a senior engineer passionate about exploring emerging technologies to create paradigm shifting cloud based solutions.
The candidate should be highly motivated, have exceptional development and analytical skills and be enthusiastic to research emerging technologies and leverage them to solve complex problems related to big data, internet scale distributed systems, multi-datacenter consistency, availability, search etc. The engineer should have expertise in envisioning, architecting and building high-performance, distributed systems that serve as cloud ‘building-blocks’ for applications.
They want experience with NoSQL, Java, ZFS, Web services, WebDAV, and more.
In short, the single most important statistic about a camera is not the number of megapixels (which actually means very little to picture quality). It’s sensor size.
“We don’t believe that camera sales are slowing down just because people are using their phones for photography now,” a Sony rep told me. “We think it’s because camera makers aren’t doing interesting things anymore.”
Well, Sony has certainly been doing interesting things. A 1-inch sensor in a pocket camera? Never been done. A premium superzoom? Nobody else is doing that. A full-frame sensor in a coat-pocketable body? Unheard of.
This is a pretty good illustration of the scale of mobile: Apple limits itself only to the high end of the mobile market but still sells more units than the whole PC industry.
The user interface has been greatly enhanced thanks to the new inspector panel, which has made its appearance on the right side of the main window. This panel gives you tons of contextual information about the area you are exploring. From there, you’ll be able to set comments, change the appearance of an instruction’s operands, see the list of references to and from an instruction, and so on…
A great new feature is the new tag system. You can now create arbitrary tags and put them on an address, on a basic block of a procedure, or on a whole procedure. To illustrate the benefits, Hopper now automatically creates a set of tags when it parses an executable. For instance, it will create an entry point tag on each address that will be called by the system during the loading process of the binary (the main entry point itself, but also all the addresses declared in the various MOD_INIT/MOD_TERM sections), and it also tags each implementation of each method of the Objective-C classes! This makes it really convenient to navigate through the methods of a program written in Objective-C! You can now also give colors to addresses, which is very convenient for quickly visualizing the code!
I really appreciate the fact that the store allowed me to distribute a program and rapidly gain visibility, but it has now become very difficult to distribute a program like Hopper on the MAS. There are too many restrictions, the main one obviously being the sandboxing mechanism…
And what about the Apple tax… When someone buys a copy of Hopper on the MAS, I give approximately 40 to 45% of the price to Apple (the 30% is taken on the price excluding VAT).
This is why I will not distribute Hopper Disassembler v3 on the Mac App Store at the beginning. If too many users feel the need to see Hopper distributed on the MAS, I'll reconsider my decision.
One of them is sold on eBay for $3.85 AU ($3.99 US), including postage to Australia. The other is sold at Apple Stores for $29.
The Apple adapter also has many more small components: two inductors (the cheap adapter has none), over twenty-five capacitors (the cheap adapter has only nineteen), and more resistors. For the cheap adapter's design, every fraction of a cent saved is important!
One thing that surprised me is that the cheap adapter has a functioning blue activity LED, that glows through the enclosure. The Apple adapter actually has a space on the PCB for this, but no LED in place (Apple’s designers presumably nixed it for aesthetic reasons.) I’m surprised the manufacturer paid the few cents to add this feature.
In the event that the timestamp server cannot be reached for whatever reason, codesign simply fails. This is probably a good idea, because if it's important for signed code to also contain a timestamp, you wouldn't want to accidentally ship a major release of your app without it. But because the timestamp server can be unavailable for a variety of reasons, some of them common, we need a simple solution for continuing with the day-to-day building of our apps without ever being bothered by the pesky timestamp service issue.
Lucky for us, such a solution exists in the form of a codesign command-line flag: --timestamp. Ordinarily this flag is used to specify the URL of a timestamp server, if you choose to use one other than the Apple default. But the special value none (i.e. --timestamp=none) indicates that timestamping of the signed code should be disabled altogether.
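As a sketch of how a build script might take advantage of this (the configuration variable and signing identity below are assumptions for illustration, not from the post), one could disable timestamping for everyday debug builds while keeping it for releases:

```shell
#!/bin/sh
# Pick a timestamp flag per build configuration, so an unreachable
# timestamp server can never fail a day-to-day debug build.
# CONFIGURATION and the signing identity are hypothetical.
CONFIGURATION="${CONFIGURATION:-Debug}"

if [ "$CONFIGURATION" = "Release" ]; then
    TIMESTAMP_FLAG="--timestamp"        # use Apple's default timestamp server
else
    TIMESTAMP_FLAG="--timestamp=none"   # disable timestamping entirely
fi

# Printed rather than executed here, since codesign is macOS-only:
echo codesign --force --sign "Developer ID Application: Example" \
     "$TIMESTAMP_FLAG" MyApp.app
```

Release builds keep the timestamp so shipped code is never missing one; only local builds skip the network round-trip.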
It’s not clear to me why the timestamp servers should be so unreliable.
Today at the Chaos Computer Congress (30C3), xobs and I disclosed a finding that some SD cards contain vulnerabilities that allow arbitrary code execution — on the memory card itself. On the dark side, code execution on the memory card enables a class of MITM (man-in-the-middle) attacks, where the card seems to be behaving one way, but in fact it does something else. On the light side, it also enables the possibility for hardware enthusiasts to gain access to a very cheap and ubiquitous source of microcontrollers.
Greg Parker on the Mavericks version of objc_msgSend:
- The method cache data structure is rearranged for higher speed and smaller data cache footprint but larger total dirty memory footprint. Previously, the cache header was allocated separately from the cache buckets, and each cache bucket was a pointer to a Method struct containing the SEL and IMP. This required a chain of four pointer dereferences: object->isa->cache->method->imp. Now the cache header is stored in the class itself, and each cache bucket stores a SEL and IMP directly. The pointer dereference chain is now only three: object->isa->cache->imp, resulting in fewer serialized memory accesses and fewer data cache lines touched. The disadvantage is slower cache updates (to preserve thread-safety) and more dirty memory overall (to store SELs and IMPs in both the method list and the method cache).
- The new method cache data structure also requires fewer registers, so there are now zero register spills.
- One-byte branch hint instruction prefixes are added to the nil check and the tagged pointer check. The CPU’s instruction decoder is most efficient if the instructions are not packed too closely together, and these extra two bytes expand the first few instructions to the optimal size for current CPUs. The branch hints themselves are ignored by the CPU because its branch predictors are smarter than compile-time hinting. The only thing they do is take up space.
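As a rough illustration of the cache layout described above (the structure and names below are my own toy sketch, not Apple's actual runtime code), the change amounts to an open-addressed table of SEL/IMP pairs stored directly in the class:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy sketch of the Mavericks-style method cache: buckets store a SEL
 * and IMP directly, so a hit costs object->isa->cache->imp, i.e. three
 * pointer dereferences instead of four. Names are illustrative only. */
typedef const char *SEL;
typedef void (*IMP)(void);

typedef struct { SEL sel; IMP imp; } cache_bucket_t;

#define CACHE_SIZE 8 /* power of two, so masking replaces modulo */

typedef struct {
    cache_bucket_t cache[CACHE_SIZE]; /* cache lives in the class itself */
} ToyClass;

typedef struct { ToyClass *isa; } ToyObject;

static size_t cache_index(SEL sel) {
    return ((size_t)sel >> 3) & (CACHE_SIZE - 1); /* toy hash */
}

/* Look up an IMP; NULL means a cache miss (the real runtime would then
 * take the slow path through the method list and refill the cache). */
static IMP cache_lookup(ToyObject *obj, SEL sel) {
    cache_bucket_t *buckets = obj->isa->cache;
    for (size_t n = 0; n < CACHE_SIZE; n++) {
        size_t k = (cache_index(sel) + n) & (CACHE_SIZE - 1);
        if (buckets[k].sel == sel) return buckets[k].imp;
        if (buckets[k].sel == NULL) return NULL;
    }
    return NULL;
}

static void hello_imp(void) {}
```

The trade-off mentioned in the post is visible even here: each bucket duplicates the SEL/IMP that also live in the method list, which costs dirty memory but removes one dereference from the hot path.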
He has similar analyses for each version of Mac OS X.
Simply drop your classic application or resource file onto rezycle and it will extract all of the resources for you and place them into a folder next to the original file. But wait, that's not all! It will not only extract the old stuff for you, but it will also convert it into fabulous modern formats! Have some old 'snd ' resources? BANG! Now you have some spiffy new AIFF files! Old icons and cursors? BANG! Transformed into lovely png files! As a special bonus, anything rezycle can't convert will be exported as binary files for you to attack with your favorite hex editor!
Graham Lee and several others pointed me to two standard solutions: you can use the linker to embed the files in the __text section of the Mach-O binary, or you can use a tool called xxd to convert the file's data to a C array and include that directly in your source code. I ended up with the second solution, which I will explain further below. I didn't investigate using the linker, but Quinn "The Eskimo!" assures me that you can use the getsectXXX APIs to extract the data at run time. (Update: Daniel Jalkut has a post describing this approach.)
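To give a feel for the xxd route (the file name and bytes below are a hand-written stand-in for real `xxd -i` output, not taken from the post), the generated source is just an ordinary array definition:

```c
#include <assert.h>
#include <string.h>

/* `xxd -i hello.txt` generates a translation unit shaped like this;
 * the bytes below are the ASCII for "hello", written by hand here.
 * Once compiled in, the resource ships as plain data in the binary. */
unsigned char hello_txt[] = { 0x68, 0x65, 0x6c, 0x6c, 0x6f };
unsigned int hello_txt_len = 5;
```

At run time the data is then available like any other global, with no file I/O and nothing for a user to delete or tamper with separately from the executable.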
Craig Mazin And so I was a very early adopter of Final Draft. And I stayed with Final Draft through the revisions. And along the way I got disillusioned. And I’ve become increasingly disillusioned. And particularly disillusioned with what happened with Final Draft 9.
Marc Madnick For 10 years we provided free phone support. 10 percent of the people — remember now, I run a business; we have to make business decisions. Okay? We’re in business not to go out of business. — 10 percent of people would call up when it was free with no clock and talk and start asking about their printer not working and how do I get Microsoft Word. I mean, things that had nothing to do with us.
Marc Madnick Take all the bells and whistles out of everybody’s product, all the competitor’s products, okay. Take them all out. What it comes down to is pagination. Period. A minute a page. Break it down in eighths. Right, you guys are directors as well, okay. So, we are trusted because it’s the proper pagination. You get a script, it’s 120 pages, you can estimate it’s going to be approximately 120 minutes. That’s really what it comes down to. Does it paginate properly?
Marc Madnick We made an iPad app called the iPad Writer. It took, ready for this, two years. And you’ll say to me, “Marc, some of these apps that are much less expensive, by the way some of them are even free, they told me they took two, three, four months. Why does it take Final Draft two years?”
A year and a half of that two years was spent making sure that your script of 119 pages was 119 pages there. And also on your IBM, your Windows, I’m sorry, look at IBM, I’m old school.
Marc Madnick The biggest one was about 10 years ago Apple, even though we’re a developer and they love us and we have friends over there, they don’t tell you anything. 10 years ago they made you do Carbon language. And you’re familiar with this. And you had to go down there and strip it, you know, put Carbon in.
I’m not a techie, by the way. But, now they come to us three, four years ago and say, “You need to do Cocoa.” That means a page one rewrite for us. What does that mean to the customer? Well, version 8 they came out with MacBook retina displays. Guess when we found out that our font wasn’t really looking as crisp as it should? When somebody came to our office with a MacBook retina display.
It’s not like we got a call, or they mentioned it to us. We didn’t even know until it happened. So, what do we have to do? We have to spend a year and a half rewriting our software so it works on not only today’s latest Mac operating system […]
Paper by Facebook has been out for a day now and the reviews are, for the most part, quite divided. I haven’t been an avid Facebook user for some time, but the design and attention to detail on Paper is unmatched, and is worth sharing with other designers.
The Design Details for Twitter post I wrote yesterday received some great feedback, so I thought I’d quickly compile some of my favorite interactions in Paper - pardon the low-quality GIFs!
The new issue of objc.io is all about strings.
The truth is that an NSString object actually represents an array of UTF-16-encoded code units. Accordingly, the length method returns the number of code units (not characters) in the string. At the time when NSString was developed (it was first published in 1994 as part of Foundation Kit), Unicode was still a 16-bit encoding; the wider range and UTF-16's surrogate character mechanism were introduced with Unicode 2.0 in 1996. From today's perspective, the unichar type and the characterAtIndex: method are terribly named because they promote the confusion a programmer may have between Unicode characters (code points) and UTF-16 code units. codeUnitAtIndex: would be a vastly better method name.
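The distinction is easy to reproduce outside Objective-C. Here is a small C11 sketch (using char16_t, since NSString itself is not portable) in which a single emoji costs two UTF-16 code units:

```c
#include <assert.h>
#include <stddef.h>
#include <uchar.h> /* char16_t, C11 */

/* Counts UTF-16 code units, which is what NSString's length reports:
 * a character outside the Basic Multilingual Plane is encoded as a
 * surrogate pair and therefore counts as two units, not one. */
static size_t utf16_unit_count(const char16_t *s) {
    size_t n = 0;
    while (s[n] != 0)
        n++;
    return n;
}
```

For example, u"a\U0001F600" ('a' followed by an emoji) is two code points but three code units, exactly the kind of mismatch length exposes.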
We must never use -lowercaseString for strings that are supposed to be displayed in the UI. Instead, we must use a locale-aware alternative such as -lowercaseStringWithLocale:.
When using the NSLocalizedString macro, the first argument you have to specify is a key for this particular string. You'll often see developers simply use the term in their base language as the key. While this might be convenient in the beginning, it is actually a really bad idea and can lead to very bad localizations.
Good localizable string keys have to fulfill two requirements: first, they must be unique for each context they’re used in, and second, they must stick out if we forgot to translate something.
We recommend using a name-spaced approach[…]
When using date and number formatters or methods like -[NSString lowercaseStringWithLocale:], it's important that you use the correct locale. If you want to use the user's systemwide preferred language, you can retrieve the corresponding locale with [NSLocale currentLocale]. However, be aware that this might be a different locale than the one your app is running in.
Scanning a hex color is the same as before. The only difference is that we have now wrapped it in a method and use the same pattern as NSScanner's methods: it returns a BOOL indicating successful scanning and stores the result in a pointer to a[…]
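The pattern itself (a success flag as the return value, the scanned value written through an out pointer) is not NSScanner-specific. Here is a minimal C sketch with hypothetical names, not code from the article:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* NSScanner-style API shape: return a flag for success and write the
 * scanned value through an out pointer. Names are illustrative only. */
static bool scan_hex_color(const char *s, uint32_t *out_rgb) {
    if (s == NULL || out_rgb == NULL)
        return false;
    if (*s == '#')
        s++; /* tolerate a leading '#' */
    if (strlen(s) != 6)
        return false; /* expect exactly RRGGBB */
    char *end = NULL;
    unsigned long v = strtoul(s, &end, 16);
    if (end != s + 6)
        return false; /* a non-hex character was encountered */
    *out_rgb = (uint32_t)v;
    return true; /* *out_rgb is only meaningful when we return true */
}
```

Callers check the flag before touching the output, which keeps "did it scan?" cleanly separated from "what did it scan?".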
This has been a deep dive, and I hope that we’ve presented some useful methodologies that you can use to analyze complex or difficult-to-reproduce issues in your own code. Even if you’re not fluent in assembly, leveraging this deductive approach allows you to break many complex and confounding crashes into approachable, provable hypotheses.
If you are fluent in assembly, I hope we’ve demonstrated just how deeply it’s possible to dive on a difficult-to-reproduce issue. We performed all of this analysis post-mortem, with only a crash report and no reproduction case – we never even actually ran the application in question.
Well, to be sure we’re on the same page, allow me to propose rules that must be followed in order to adhere to this heuristic:
- The IBAction macro must not be used in a View Controller.
- The @interface block in a View Controller’s header file must be blank.
- A View Controller may not implement any extra *DataSource protocols except for observing Model changes.
- A View Controller may only do work in viewDidDisappear and in response to an observed change in the model layer.
I’ve since tested this almost every day for the last couple of weeks. During the day, bandwidth to AWS is normal. However, after 4pm or so, things get slow.
In my personal opinion, this is Verizon waging war against Netflix. Unfortunately, a lot of infrastructure is hosted on AWS. That means a lot of services are going to be impacted by this.
Instead of having to assign a delegate to the UISearchBar and implement searchBar:textDidChange:, let’s modify the UISearchBar so there is a signal representing changes to the text.
xcselect_get_manpaths() to dynamically add Xcode-specific paths to man's search paths. Sneaky.
libxcselect.dylib is key to Apple’s technique of providing stub binaries at standard locations that do little else but look up the actual tool locations inside /Applications/Xcode.app and execute them.
The total number of users who’ve rated this particular version is only six. Never mind that 113 people have rated our app before—if you look at the “all versions” rating, our rating is a much more acceptable four stars. But the “all versions” rating is hidden below the “current version” one. The “all versions” rating isn’t the one shown in the results matrix when you search the App Store. Nobody is ever going to click through to an app that’s showing one and a half stars to discover that its real rating is four stars.
From four stars to one and a half stars because of five users whose problems we really want to fix (or have already fixed).
Moving forward, we will be a one product company. That product will be Basecamp. Our entire company will rally around Basecamp. With our whole team - from design to development to customer service to ops - focused on one thing, Basecamp will continue to get better in every direction and on every dimension.
If we can't find the right partner or buyer, we are committed to continuing to run the [other] products for our existing customers forever. We won't sell the products to new customers, but existing customers can continue to use the products just as they always have. The products will shift into maintenance mode which means there will be no new development, only security updates or minor bug fixes. We did this successfully in 2012 with Ta-da List, Writeboard, and Backpack, so we know how to make it work.
For some email providers, new email messages in Mail may only appear to arrive when Mail is first opened. No new email arrives until Mail is quit and reopened.
What’s odd about the continuing Apple Mail problems with Mavericks is that Mail used to be a very reliable app. I’m not sure why the internals were reworked so extensively in 10.9, since there are few outward changes, but the result has been bugs and slowness.
Update (2014-02-18): Dr. Drang:
So I’m using a mail client that can’t be trusted to send or receive mail. Can it get worse? Yes. I learned this past week that searching doesn’t work, either. Or at least not consistently.
That, in fact, is the most annoying thing about this mess. Apple has broken the covenant. The deal was that I get a mail client that isn’t fancy but works, and in return I don’t complain about a lack of features I’ll never use. It’s a simple arrangement that’s worked for nine years, and now it’s all this.
Externally the new ERA looks much the same as the old one, just smaller, lighter and thinner.
The new ERA has shorter battery life but no apparent change to the headset’s excellent range.
The new ERA takes about three seconds to give its audio feedback (which no longer sounds like a ripoff of the Mac startup sound) and another second to pair. Regardless, this is a substantial improvement: slightly slower than the Q3, but acceptable given that it takes a second or two just to attach the headset to my ear.
The best news: you can triple-tap the button to play or pause. I have an older Jawbone ERA that I use for calls, but more often for listening to podcasts and music around the house and while exercising. It’s easily one of my favorite hardware purchases of the last few years. The inability to control non-call audio from the headset is the only serious complaint I have about it. It’s not bothersome enough that I would upgrade to the new one, though. In fact, I think the older model’s longer battery life is worth the weight.
See also iLounge’s review, which has more photos.