Google (Hacker News):
Today, more than 20 years after SHA-1 was first introduced, we are announcing the first practical technique for generating a collision. This represents the culmination of two years of research that sprang from a collaboration between the CWI Institute in Amsterdam and Google. We’ve summarized how we went about generating a collision below. As a proof of the attack, we are releasing two PDFs that have identical SHA-1 hashes but different content.
For the tech community, our findings emphasize the necessity of sunsetting SHA-1 usage. Google has advocated the deprecation of SHA-1 for many years, particularly when it comes to signing TLS certificates. As early as 2014, the Chrome team announced that they would gradually phase out using SHA-1. We hope our practical attack on SHA-1 will cement that the protocol should no longer be considered secure.
SHAttered:
This attack required over 9,223,372,036,854,775,808 SHA1 computations. This took the equivalent processing power of 6,500 years of single-CPU computations and 110 years of single-GPU computations.
[…]
The SHAttered attack is 100,000 times faster than the brute force attack that relies on the birthday paradox. The brute force attack would require 12,000,000 GPU years to complete, and it is therefore impractical.
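Those numbers check out: 9,223,372,036,854,775,808 is exactly 2^63, and a generic birthday attack on a 160-bit hash needs on the order of 2^80 computations, so the speedup is 2^80 ÷ 2^63 = 2^17 = 131,072, i.e. roughly the claimed 100,000×.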
nneonneo:
Basically, each PDF contains a single large (421,385-byte) JPG image, followed by a few PDF commands to display the JPG. The collision lives entirely in the JPG data - the PDF format is merely incidental here. Extracting out the two images shows two JPG files with different contents (but different SHA-1 hashes since the necessary prefix is missing). Each PDF consists of a common prefix (which contains the PDF header, JPG stream descriptor and some JPG headers), and a common suffix (containing image data and PDF display commands).
The header of each JPG contains a comment field, aligned such that the 16-bit length value of the field lies in the collision zone. Thus, when the collision is generated, one of the PDFs will have a longer comment field than the other. After that, they concatenate two complete JPG image streams with different image content - File 1 sees the first image stream and File 2 sees the second image stream. This is achieved by using misalignment of the comment fields to cause the first image stream to appear as a comment in File 2 (more specifically, as a sequence of comments, in order to avoid overflowing the 16-bit comment length field). Since JPGs terminate at the end-of-image (FFD9) marker, the second image stream isn’t even examined in File 1 (whereas that marker is just inside a comment in File 2).
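To make the comment trick concrete, here is a minimal sketch, in Swift, of how a JPEG parser walks marker segments; the extracted-1.jpg filename is hypothetical, standing in for one of the images pulled out of the PDFs:

```swift
import Foundation

// Walk a JPEG's marker segments and print each one. Every segment is
// FF xx followed by a big-endian 16-bit length that includes the two
// length bytes themselves. A comment segment (FF FE) whose length falls
// in the collision zone can swallow an entire embedded image stream.
func dumpSegments(_ data: Data) {
    var i = 2 // skip the SOI marker (FF D8)
    while i + 4 <= data.count, data[i] == 0xFF {
        let marker = data[i + 1]
        if marker == 0xD9 { print("EOI at \(i)"); break } // end of image
        let length = Int(data[i + 2]) << 8 | Int(data[i + 3])
        print(String(format: "marker FF%02X at %d, length %d", marker, i, length))
        if marker == 0xDA { break } // start of scan; entropy-coded data follows
        i += 2 + length // the 16-bit length decides how much gets skipped
    }
}

if let data = try? Data(contentsOf: URL(fileURLWithPath: "extracted-1.jpg")) {
    dumpSegments(data)
}
```

Running this on both extracted images would show identical segment lists up to the comment whose length differs, after which one parser sees an image stream where the other sees comment payload.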
I think SHAttered overstates the impact on Git. Linus Torvalds (2005, via Joe Groff):
I really hate theoretical discussions.
The fact is, a lot of crap engineering gets done because of the question
“what if?”. It results in over-engineering, often to the point where the
end result is quite a lot measurably worse than the sane results.
You are literally arguing for the equivalent of “what if a meteorite hit
my plane while it was in flight - maybe I should add three inches of
high-tension armored steel around the plane, so that my passengers would
be protected”.
[…]
And the thing is, if somebody finds a way to make sha1 act as just a
complex parity bit, and comes up with generating a clashing object that
actually makes sense, then going to sha256 is likely pointless too - I
think the algorithm is basically the same, just with more bits. If you’ve
broken sha1 to the point where it’s that breakable, then you’ve likely
broken sha256 too.
He’s being criticized for saying this, but (so far) it looks like he was actually right.
Linus Torvalds (Hacker News):
Put another way: I doubt the sky is falling for git as a source control management tool. Do we want to migrate to another hash? Yes.
Is it “game over” for SHA1 like people want to say? Probably not.
I haven’t seen the attack details, but I bet
(a) the fact that we have a separate size encoding makes it much
harder to do on git objects in the first place
(b) we can probably easily add some extra sanity checks to the opaque
data we do have, to make it much harder to do the hiding of random
data that these attacks pretty much always depend on.
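Point (a) refers to the fact that git never hashes raw file bytes: an object ID covers a “<type> <size>\0” header plus the content. A minimal sketch of that computation in Swift (using CryptoKit’s aptly namespaced Insecure.SHA1; git itself does this in C):

```swift
import Foundation
import CryptoKit

// A blob's ID is the SHA-1 of "blob <size>\0" followed by the content,
// so a colliding pair of blobs must have equal sizes and must collide
// with this imposed prefix in place.
func gitBlobSHA1(_ content: Data) -> String {
    var object = Data("blob \(content.count)\0".utf8)
    object.append(content)
    return Insecure.SHA1.hash(data: object)
        .map { String(format: "%02x", $0) }
        .joined()
}

// Matches `echo hello | git hash-object --stdin`.
print(gitBlobSHA1(Data("hello\n".utf8)))
// ce013625030ba8dba906f756967f9e9ca394464a
```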
Previously: MD5 Collision.
Update (2017-02-24): See also: Subversion (Ars Technica), Mercurial, Bruce Schneier in 2005 and now.
Update (2017-03-09): See also: Linus Torvalds (Hacker News), Jon Gilmore (via Zaki Manian), Matthew Green.
Update (2017-03-16): See also: Linus Torvalds (via Reddit).
Update (2019-05-17): Thomas Peyrin:
Our paper on the chosen-prefix collision attack for SHA-1 is out. TL;DR: computing such a collision is very practical, for a reasonable cost. More results coming soon. Remove SHA-1 now if you still implement it for any digital signature/certificate use.
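For context: SHAttered was an identical-prefix collision, where both messages must share an attacker-crafted prefix. A chosen-prefix collision lets the attacker start from two arbitrary, different prefixes and compute suffixes that make the hashes collide, which is the variant that makes forged certificates practical.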
Git Google JPEG PDF Security SHA-1 Subversion
Mozilla (via Emily Toop):
A week ago we completed the migration of the entire Firefox for iOS project from Swift 2.3 to Swift 3.0. With over 206,000 lines of code, migrating a project of this size is no small feat. Xcode’s built-in conversion tool is a fantastic help, but leaves your codebase in a completely uncompilable state that takes a good long while to resolve.
[…]
The first hitch in the plan occurred fairly quickly. Our test targets, despite not importing code from other targets further down the dependency tree, all required our primary target, Client, as the host app in order to run. Therefore our plan to ensure each target robustly passed its tests before moving on to the next target was impossible. We would have to migrate all of the targets, then the test targets, and then ensure that the tests pass. This meant that we might be performing code changes in dependent targets on incorrectly migrated code, which added an extra layer of uncertainty. In addition, being unable to execute the code before moving on meant that if we made a poor decision when solving a migration issue, that decision might end up proliferating through many targets before we realised that the code change produced a crash.
The second hitch came when migrating some of the larger targets, in particular Storage. Even after all this time, Xcode’s ability to successfully compile Swift is…flaky. When, after performing the auto-conversion, your first build error is a segfault in Xcode, this is not at all helpful. Especially when the line of code mentioned in the segfault stack trace is in an unassuming class that is doing nothing special. And when you comment out all of the code in that class, it still produces a segfault.
[…]
It had taken 3.5 engineers, 3 members of QA and 3.5 weeks, but the feeling when we were finally ready to hit merge was jubilant.
They ran into an interesting NSKeyedArchiver issue.
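The usual shape of such problems, sketched here with a hypothetical SearchEngine class rather than Mozilla’s actual code, is that NSKeyedArchiver records the module-qualified Swift class name, so a renamed module or class breaks unarchiving of data written by older builds:

```swift
import Foundation

// NSKeyedArchiver stores objects under their module-qualified class name
// (e.g. "Client.SearchEngine"), so renaming a module or class breaks
// unarchiving of archives written by older builds.
class SearchEngine: NSObject, NSCoding { // hypothetical example class
    let name: String
    init(name: String) { self.name = name }
    required init?(coder: NSCoder) {
        guard let name = coder.decodeObject(forKey: "name") as? String else { return nil }
        self.name = name
        super.init()
    }
    func encode(with coder: NSCoder) {
        coder.encode(name, forKey: "name")
    }
}

// Map the name found in old archives onto the current class…
NSKeyedUnarchiver.setClass(SearchEngine.self, forClassName: "OldClient.SearchEngine")
// …and write future archives under a stable, explicit name.
NSKeyedArchiver.setClassName("SearchEngine", for: SearchEngine.self)
```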
Previously: Getting to Swift 3 at Airbnb.
Update (2017-02-24): Thaddeus Ternes:
The Astra board didn’t change for over two weeks because of the Swift 3 migration. That was two weeks I didn’t get new features or enhancements built, or customer-reported issues fixed. I was simply agreeing to proposed changes by the tools, and then fixing the problems they created along the way.
Bug Cocoa Firefox iOS iOS App Programming Swift Programming Language Testing Xcode
Chance Miller:
Elevation Lab, the company behind a handful of popular Apple accessories, is today announcing its latest product: MagicGrips. The company says that this accessory is designed to work with the Magic Mouse and makes it easier to grip.
Elevation Lab says that this accessory makes the Magic Mouse more comfortable to use by widening your grip and allowing you to squeeze the mouse without it moving upwards. Furthermore, the grip is said to relieve hand tension.
The Magic Mouse is the least comfortable mouse I have ever used, due to the shape of the edges. This looks like it would help.
(Anyone remember the name of the case that snapped around the original iMac puck mouse to give a more standard shape?)
Previously: Apple’s New Magic Keyboard, Mouse, and Trackpad, Magic Mouse Review.
Update (2017-02-23): I think the iMac mouse adapter that I used was the UniTrap.
Mac Mouse
Rob Griffiths:
Yesterday, I wrote about an apparent change in Finder’s Library shortcut key. To wit, it used to be that holding the Option key down would reveal a Library entry in Finder’s Go menu.
However, on my iMac and rMBP running macOS 10.12.3—and on others’ Macs, as my report was based on similar findings by Michael Tsai and Kirk McElhearn—the Option key no longer worked; it was the Shift key. But on a third Mac here, running the 10.12.4 beta, the shortcut was back to the Option key.
[…]
After some experimentation, I was able to discover why the shortcut key changes, and how to change it between Shift and Option at any time. This clearly isn’t a feature, so I guess it’s a bug, but it’s a weird bug.
Update (2017-04-07): Adam C. Engst:
My suspicion is that this weird Finder state, which may date back to the Sierra betas, can be triggered in ways other than relaunching the Finder. Quitting or force-quitting the Finder from within Activity Monitor doesn’t seem to do it, but I can imagine other scenarios that might leave the Finder in an unusual state — a kernel panic, for instance, or a loss of power to the Mac. Over years of usage, it’s easy to see something like this happening to many people.
Bug Finder Keyboard Shortcuts Mac macOS 10.12 Sierra
Marco Solorio (May 2016):
But as good as that juiced up Mac Pro Tower is today, I know at some point its time will have to come to an end, simply because Apple hasn’t built a PCIe-based system in many years now. As my article described, the alternative Mac Pro trashcan is simply not a solution for our needs, imposing too many limitations combined with a very high price tag.
The Nvidia GTX 1080 might be the final nail in the coffin. I can guarantee at this point, we will have to move to a Windows-based workstation for our main edit suite and one that supports multiple PCIe slots specifically for the GTX 1080 (I’ll most likely get two 1080s at that new price point).
[…]
Even a Thunderbolt-connected PCIe expansion chassis to a Mac Pro trashcan won’t help, due to the inherent bandwidth limits that Thunderbolt has as compared to the bus speeds of these GPU cards. And forget about stacking these cards in an expansion chassis… just not going to happen.
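Some rough numbers: the trash can’s Thunderbolt 2 ports top out at 20 Gb/s (about 2.5 GB/s), while the PCIe 3.0 x16 slot a GTX 1080 expects offers roughly 15.75 GB/s, so an external chassis leaves the card with around a sixth of its design bandwidth at best.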
Via John Gruber:
This may be a small market, but it’s a lucrative one. Seems shortsighted for Apple to cede it.
Timo Hetzel:
Moving my video workflow to a modern PC could save me an estimated 4-8 hours every week. I wonder if Apple knows/cares.
Previously: Getting a New 2013 Mac Pro in 2017, How Apple Alienated Mac Loyalists.
Update (2017-02-24): See also: Hacker News.
Cacti:
This has been an ongoing problem since the summer. Some have reverted to using several 9xx cards (which have spiked in price) while others have switched platforms. Lacking any real progress on this, I would suspect many in this situation would abandon OSX permanently by the end of the year. And if you give up OSX on your desktop, the incentive to stay in that environment on your laptop, tablet, and phone goes way down.
This is a serious problem and the only outcomes are either a) Nvidia GPUs are supported, or b) OSX is abandoned, because the simple fact is that Nvidia GPUs are more important long-term than the entire sum of Apple’s hardware; I can replace a tablet or desktop or laptop, but I can’t replace a Pascal TITAN X.
Update (2017-02-25): See also: Reddit.
Update (2017-03-06): Owen Williams (via Jeff Johnson, Hacker News):
I’m a developer, and it seems to me Apple doesn’t pay any attention to its software or care about the hundreds of thousands of developers that have embraced the Mac as their go-to platform.
[…]
I realized I’m so damn tired of Apple’s sheer mediocrity in both laptops and desktops, and started actually considering trying Windows again.
See also: The Talk Show.
Update (2017-03-22): Owen Williams:
After waiting eagerly for the MacBook Pro refresh, then being utterly disappointed by what Apple actually shipped — a high-end priced laptop with poor performance — I started wondering if I could go back to Windows. Gaming on Mac, which initially showed promising signs of life, had started dying in 2015, since Apple hadn’t shipped any meaningful hardware bumps in years, and I was increasingly interested in Virtual Reality… but Oculus dropped support for the Mac in 2016 for the same reasons.
[…]
It took me months to convince myself to do it, but I spent weeks poring over forum posts about computer specs and new hardware before realizing how far ahead the PC really is now: the NVIDIA GTX 1080 graphics card is an insane work-horse that can play any game — VR or otherwise — you can throw at it without breaking a sweat.
[…]
I don’t say this lightly, but Windows is back, and Microsoft is doing a great job. Microsoft is getting better, faster at making Windows good than Apple is getting better at doing anything to OS X.
Adam:
However, in pursuit of the continual shrinking and lightening of the product line, the gap between the specs available from Apple and the major PC vendors in the workstation category has finally reached the point where even Apple loyalists are taking notice. We’ll see what Apple releases over the next few months (and years), but as I write this, compared to the MacBook Pro, portable workstations from the major PC vendors can be configured with faster processors, four times as much system AND video RAM, as well as more (and upgradeable) storage. As compared to the Mac Pro, desktop workstations from the PC vendors can be configured with more than three times the number of processor cores, sixteen times as much RAM, and double the number of (more powerful and replaceable) video cards. Compare these specs to the iMac, and the gap is even larger.
Mac Mac Pro Video Windows