Wednesday, November 11, 2020

The Apple Silicon M1

Andrei Frumusanu:

The new processor is called the Apple M1, the company’s first SoC designed with Macs in mind. With four large performance cores, four efficiency cores, and an 8-core GPU, it features 16 billion transistors on a 5nm process node. Apple is starting a new SoC naming scheme for this new family of processors, but at least on paper it looks a lot like an A14X.

[…]

What really differentiates Apple’s Firestorm CPU core from other designs in the industry is the sheer width of the microarchitecture. Featuring an 8-wide decode block, Apple’s Firestorm is by far the widest commercialized design in the industry today.

[…]

A ~630-deep ROB is an immense out-of-order window for Apple’s new core, as it vastly outclasses any other design in the industry.

[…]

Exactly how and why Apple is able to achieve such a grossly disproportionate design compared to all other designers in the industry isn’t exactly clear, but it appears to be a key characteristic of Apple’s design philosophy and its method of achieving high ILP (instruction-level parallelism).

[…]

Apple’s use of a considerably more advanced microarchitecture that delivers much higher IPC, enabling high performance at low core clocks, allows for significant power-efficiency gains versus the incumbent x86 players.
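As a rough software-level illustration of what a wide, deep out-of-order design is trying to exploit (a generic ILP example, nothing specific to Firestorm): a single accumulator forms one long dependency chain, while splitting the work into independent accumulators gives the scheduler operations it can overlap.

    // Toy illustration of instruction-level parallelism; not M1-specific.
    // The first loop is one long dependency chain: each add waits on the
    // previous result. The second exposes four independent chains that a wide
    // out-of-order core can execute in parallel and merge at the end.
    func sumSerialChain(_ xs: [Double]) -> Double {
        var acc = 0.0
        for x in xs { acc += x }        // every iteration depends on the last
        return acc
    }

    func sumFourChains(_ xs: [Double]) -> Double {
        var a = 0.0, b = 0.0, c = 0.0, d = 0.0
        var i = 0
        while i + 4 <= xs.count {
            a += xs[i]; b += xs[i + 1]; c += xs[i + 2]; d += xs[i + 3]
            i += 4
        }
        while i < xs.count { a += xs[i]; i += 1 }   // leftover elements
        return a + b + c + d
    }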

Robert Graham:

In short, Apple’s advantage is their own core design outpacing Intel’s on every measure, and TSMC being 1.5 generations ahead of Intel on manufacturing process technology. These things matter, not the “ARM” or “RISC” instruction set.

Howard Oakley:

GPUs are now being used for a lot more than just driving the display, and their computing potential for specific types of numeric and other processing is in demand. So long as CPUs and GPUs continue to use their own local memory, simply moving data between those memories has become an unwanted overhead. If you’d like to read a more technical account of some of the issues which have brought unified memory to Nvidia GPUs, you’ll enjoy Michael Wolfe’s article on the subject.
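To make the overhead Oakley describes concrete, here is a minimal Metal sketch of the unified-memory idiom (a generic pattern, not code from his article or from Apple): with shared storage, the CPU and GPU work on the same allocation, so there is no explicit upload or blit step.

    import Metal

    // Minimal sketch of the unified-memory idiom: one buffer visible to both
    // CPU and GPU, instead of a CPU-side copy plus an explicit upload to VRAM.
    guard let device = MTLCreateSystemDefaultDevice() else {
        fatalError("No Metal device available")
    }

    let input: [Float] = (0..<1024).map { Float($0) }

    // .storageModeShared places the buffer in memory both CPU and GPU can access.
    let buffer = device.makeBuffer(bytes: input,
                                   length: input.count * MemoryLayout<Float>.stride,
                                   options: .storageModeShared)!

    // The CPU can keep mutating the very allocation the GPU will read; with a
    // discrete GPU, this data would instead have to be copied across the bus.
    let ptr = buffer.contents().bindMemory(to: Float.self, capacity: input.count)
    ptr[0] = 42

    // `buffer` can now be bound to a compute or render command encoder without a blit.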

Apple:

Learn how developers updated their apps for Apple silicon Macs and began taking advantage of the advanced capabilities of the Apple M1 chip.

Apple:

Discover the advances in Metal performance and capability delivered with the Apple M1 chip on Apple silicon Macs. Apple M1 unites the top-end graphics and compute abilities of discrete GPUs with the features and power efficiency of Apple silicon, creating entirely new opportunities for developers of Metal-based apps and games on macOS. We’ll explore the Metal graphics and compute fundamentals of Apple M1, then take you through four important Metal features to make your Mac apps really shine on Apple silicon: tile shading, memoryless render targets, programmable blending, and sparse texturing.
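Of the four features mentioned, memoryless render targets are perhaps the easiest to show in isolation: an intermediate attachment (for example, depth) that lives only in on-chip tile memory for the duration of the render pass and never gets a system-memory allocation. A minimal sketch of the usual setup, as an illustration rather than code from the session:

    import Metal

    // Minimal sketch: a depth attachment that exists only in tile memory during
    // a render pass, with no system-memory backing store.
    guard let device = MTLCreateSystemDefaultDevice() else {
        fatalError("No Metal device available")
    }

    let depthDesc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .depth32Float,
                                                             width: 1920,
                                                             height: 1080,
                                                             mipmapped: false)
    depthDesc.usage = .renderTarget
    depthDesc.storageMode = .memoryless   // never allocated in system memory

    let depthTexture = device.makeTexture(descriptor: depthDesc)!

    let passDesc = MTLRenderPassDescriptor()
    passDesc.depthAttachment.texture = depthTexture
    passDesc.depthAttachment.loadAction = .clear
    passDesc.depthAttachment.storeAction = .dontCare   // nothing to store back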

Previously:

Update (2020-11-27): Ken Shirriff:

With Apple’s recent announcement of the ARM-based M1 processor, I figured it would be interesting to compare it to the first ARM processor, created by Acorn Computers in 1985 for the BBC Micro computer.

The Tech Chap:

Nerding out with 2 Vice Presidents at Apple about the new M1 chip in MacBook Air, MacBook Pro and Mac Mini - what it means, how apps work, and what about Intel?

Update (2020-12-02): Erik Engheim (Hacker News):

Here I plan to break down into digestible pieces exactly what it is that Apple has done with the M1.

Update (2020-12-08): Don Scansen:

Apple did not identify the die locations of any of these blocks, even if they were suggested by the graphics used to describe the CPU, GPU, and neural engine. Those illustrations were stylized, which was a good indication of their inaccuracy. However, Apple did show something that looked very much like a genuine optical image of the SoC die layout on their graphic of the physical product. As discussed above, this was an inexact representation in the sense that it was not true to the assembly onto the package substrate. But it turned out to be a precise version of the SoC physical design.

Update (2020-12-16): Mark Bessey:

But there are a couple of odd ideas bouncing around on the Internet that are annoying me. So, here’s a quick fact check on a couple of the more breathless claims that are swirling around these new Macs.

Update (2021-01-06): See also: Dick James (via Hacker News).

Joe Heck:

One of the interesting things about the M1 system-on-a-chip isn’t the chip itself, but the philosophy that Apple’s embracing in making the chip. That pattern of behavior and thought goes way beyond what you can do with commodity stuff. The vertical integration allows seriously advanced capabilities. Commodity, on the other hand, tends to be sort of “locked down” and very resistant to change, even improvements.

Erik Engheim (via Hacker News):

The M1 is the beginning of a paradigm shift, which will benefit RISC-V microprocessors, but not the way you think.

Shac Ron:

arm64 is the Apple ISA; it was designed to enable Apple’s microarchitecture plans. There’s a reason Apple’s first 64-bit core (Cyclone) was years ahead of everyone else, and it isn’t just caches.

Arm64 didn’t appear out of nowhere; Apple contracted ARM to design a new ISA for its purposes. When Apple began selling iPhones containing arm64 chips, ARM hadn’t even finished their own core design to license to others.

[…]

Apple planned to go super-wide with low clocks, highly OoO, highly speculative. They needed an ISA to enable that, which ARM provided.

M1 performance is not what it is because of the ARM ISA; the ARM ISA is what it is because of Apple’s core performance plans a decade ago.

24 Comments

Johny Srouji's team has been delivering consistently at a level the rest of Apple could take a page from.

The enormous 192 KB I-cache, 128 KB D-cache, and 12 MB L2 are probably sized to help hide the video frame bandwidth, which is something on the order of 10% of total bandwidth.

The gain of not moving textures to GPU memory must be balanced against the continual video frame bandwidth and the extra bandwidth used while rendering 3D objects. It's not an obvious and unambiguous gain.

6K resolution ≈ 4.3 GB/s (6144 * 3160 * 32-bit color * 60 Hz)
The best figure I have for total RAM bandwidth is 34.1 GB/s (this is the A13 figure from https://web.archive.org/web/20200708234923/https://techyorker.com/device/apple-a13-bionic/ ).
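For what it's worth, a quick sketch of that arithmetic (using the 6144 × 3160 figure above and treating a GB as 2^30 bytes), which lands at roughly 13% of the quoted 34.1 GB/s:

    // Rough check of the scanout-bandwidth estimate above: the commenter's
    // 6144 x 3160 resolution, 32-bit color, 60 Hz, against 34.1 GB/s total.
    let width = 6144.0, height = 3160.0
    let bytesPerPixel = 4.0   // 32-bit color
    let refresh = 60.0

    let bytesPerSecond = width * height * bytesPerPixel * refresh
    let gibPerSecond = bytesPerSecond / Double(1 << 30)   // ~4.34 GiB/s
    let shareOfTotal = gibPerSecond / 34.1                // ~0.13, on the order of 10%

    print(gibPerSecond, shareOfTotal)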

Robert Graham is mostly right. Decoding 8 instructions simultaneously, as the M1 does, is difficult on x86 (it requires lots of transistors and is slow) because of variable-length instructions, making it a bad trade-off there. On the other hand, x86 designs can use a trace cache to get around that.
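A toy way to see why variable-length instructions make wide decode painful (purely illustrative; real decoders don't work like this): with fixed 4-byte instructions, the start of the nth instruction is known up front, so eight decode slots can be assigned independently, whereas with variable lengths each boundary depends on having examined the previous instruction.

    // Toy illustration only, not real hardware behavior.
    // Fixed-width ISA: instruction i starts at 4 * i, so decode slots can be
    // assigned independently (embarrassingly parallel).
    let fixedStarts = (0..<8).map { $0 * 4 }

    // Variable-length ISA: each start depends on the previous instruction's
    // length, which is only known after (at least partially) decoding it.
    let lengths = [1, 3, 2, 5, 1, 4, 2, 6]   // hypothetical instruction lengths
    var variableStarts: [Int] = []
    var offset = 0
    for len in lengths {
        variableStarts.append(offset)
        offset += len
    }

    print(fixedStarts)      // [0, 4, 8, 12, 16, 20, 24, 28]
    print(variableStarts)   // [0, 1, 4, 6, 11, 12, 16, 18]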

RISC is the weirdest of techno cults. It ceased being a meaningful way of describing microprocessors (such as the difference between Apple Silicon and Intel), yet any discussion of processors quickly devolves into RISC advocacy.

Gruber: "The A14 is both a power-efficient smartphone chip and one of the fastest CPUs ever made, period."

I guess we've reverted to late-stage PPC levels of disingenuousness.

Gruber: “The A14 is both a power-efficient smartphone chip and one of the fastest CPUs ever made, period.”

I guess we’ve reverted to late-stage PPC levels of disingenuousness.

Well, PowerPC was fast in some specific areas, but really started falling behind when Motorola and IBM were no longer interested in investing in chips targeting laptops and desktops.

(And now Motorola/Freescale/NXP doesn’t even seem that interested in maintaining PowerPC for their embedded stuff either; they’ve moved on — mostly? — to ARM. That leaves IBM for boring servers.)

So I agree that in the late G4 days, comparisons were increasingly desperate and disingenuous.

Apple’s ARM designs, OTOH, are without question very fast in single-threaded tasks, despite having little thermal headroom.

>Apple’s ARM designs, OTOH, are without question very fast in single-threaded tasks, despite having little thermal headroom.

Given the pricing of these laptops, they should be competitive with Windows laptops running 4800U chips. I guess in single-threaded tasks, they'll be close, but I'm not sure how relevant that is anymore. In multi-threaded tasks, I would be surprised if the MacBook Air was anywhere close to a comparably priced Ryzen laptop.

But we'll have to wait and see what actual comparisons show.

I'm sure Apple could theoretically put out a CPU that is indeed "one of the fastest CPUs ever made." After all, TSMC makes both Apple's CPUs and actually fast CPUs like the Threadrippers. But so far, it doesn't look like that's what Apple is aiming for.

Given the pricing of these laptops, they should be competitive with Windows laptops running 4800U chips.

In no universe was Apple going to be competitive (or try to be competitive) with low-cost laptops.

I guess in single-threaded tasks, they’ll be close, but I’m not sure how relevant that is anymore.

Very. Any web site. Any Electron app. Tons and tons of stuff is largely I/O-bound rather than CPU-bound, so attempting to parallelize it just isn’t worth it.

What Air customers are going to do every day will rarely involve compiling code (and even that, especially when you’re in the JIT world of .NET et al, is hard to parallelize) or encoding video.

In multi-threaded tasks, I would be surprised if the MacBook Air was anywhere close to a comparably priced Ryzen laptop.

Preliminary benchmarks seem to suggest that it does, in fact, handily beat the 4800U (63% faster at single-core; 27% on multi-core despite having fewer high-performance cores). Once they put, say, a 6+4 or 8+4 setup in the pricier 13-inch Pros, those results will be even further apart.

https://browser.geekbench.com/processors/amd-ryzen-7-4800u
https://forums.macrumors.com/threads/apple-silicon-m1-chip-in-macbook-air-outperforms-high-end-16-inch-macbook-pro.2267256/

>In no universe was Apple going to be competitive (or try to be competitive) with low-cost laptops.

1000$ Windows laptops are not low-cost. Honestly, it's pretty bonkers that Apple's customers view 1000$ as "a low-cost laptop."

But yeah, that's kind of the point: Apple's offerings aren't competitive with what's on offer on the Windows side, and it seems that Apple isn't even trying. They probably don't have to, because their customers are locked in and have no choice but to buy Apple's products at whatever prices Apple sets, but it's not clear to me why Apple's customers view this as a positive situation for them.

Apple is putting a cheap phone processor into a laptop, but instead of passing on the savings, they're increasing their margins.

>attempting to parallelize it just isn’t worth it

Exactly. So single-threaded performance isn't really relevant anymore, because all single-threaded apps are fast enough on all current CPUs, which is what I was saying.

>Preliminary benchmarks

Let's see some real data of actual applications doing actual work.

It would certainly be nice to get some more competition in the CPU game. Intel apparently needs some more kicks in the butt to really get moving.

But yeah, that’s kind of the point: Apple’s offerings aren’t competitive with what’s on offer on the Windows side, and it seems that Apple isn’t even trying.

I mean, that would’ve been shocking news in 1996, but… this is hardly a new development. That doesn’t mean we can’t criticize it, but I really don’t understand how it relates to the CPU.

Like, I could say “the A14 is very fast for a phone chip”, and you could respond “it better be; I can get a Nokia for $179”. And, yes, that’s true, but also, it’s so far away from the market segment that I don’t see it being conducive to an interesting conversation.

Apple is trying to compete with brands like Dell XPS and Microsoft Surface here.

1000$ Windows laptops are not low-cost. Honestly, it’s pretty bonkers that Apple’s customers view 1000$ as “a low-cost laptop.”

Where is that number from?

Apple is putting a cheap phone processor into a laptop, but instead of passing on the savings, they’re increasing their margins.

The M1 is not “a cheap phone processor”. It beats most laptop CPUs in both speed and battery life. That’s not Apple’s marketing; that’s just reality. Microsoft’s/Qualcomm’s SQ2 comes close in battery life, but is way slower; Tiger Lake-UP3 comes close in performance, but draws way more power.

Exactly. So single-threaded performance isn’t really relevant anymore, because all single-threaded apps are fast enough on all current CPUs, which is what I was saying.

Tell that to someone who loads a web page on the current highest-end 16-inch Coffee Lake Refresh MacBook Pro (a Comet Lake one would barely be faster), then does it on an M1 MacBook Air for a third of the price.

"all single-threaded apps are fast enough on all current CPUs"

That sounds backwards to me.

With multi-threaded apps (by which I assume you mean multi-processing, not just "spin up a thread to wait on I/O"), it's easy for me as a user to get any level of performance I need: just buy a CPU with more cores. If 4 cores aren't enough, I'll buy 8, or 16. I could even spin up 100 (or 1000) machines on AWS if I really needed my parallel code to finish fast.

When single-threaded apps (i.e., most processes on my computer right now -- I checked) aren't fast enough, I'm pretty much SOL. And single-threaded is the default for any app. Despite new features like GCD and Combine, it takes work to do multi-threading well. Even major apps which can scale-out, like Xcode, only hit all my cores briefly during the part of the build that is parallelizable.
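For what it's worth, here is a minimal GCD sketch of the easy case (iterations that are completely independent); anything touching shared state needs locks or a redesign, which is where the real work of doing multi-threading well goes:

    import Dispatch

    // Minimal sketch: parallelizing an independent, CPU-bound loop with GCD.
    func expensiveWork(_ i: Int) -> Double {
        (0..<10_000).reduce(0.0) { $0 + Double($1 + i).squareRoot() }
    }

    let count = 1_000
    let results = UnsafeMutableBufferPointer<Double>.allocate(capacity: count)

    DispatchQueue.concurrentPerform(iterations: count) { i in
        // Each index is written by exactly one iteration, so no locking is needed.
        results[i] = expensiveWork(i)
    }

    print(results.reduce(0, +))
    results.deallocate()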

Lukas - around half of the people who buy Macs are first-time Mac buyers. It's not true to say that people buy them because they're locked in.

@Sören The M1 is cheap in the sense that it likely costs Apple much less per unit than the Intel processors it was using.

>Where is that number from?

I said: "Given the pricing of these laptops, they should be competitive with Windows laptops running 4800U chips."

Note that both these Macs, as well as laptops with 4800U chips, are typically priced around 1000 US$. I picked similarly priced computers since that seems like a fair comparison.

You said: "In no universe was Apple going to be competitive (or try to be competitive) with low-cost laptops."

I then said "1000$ Windows laptops are not low-cost", because we were talking about laptops around the 1000$ range. That's where the number is coming from.

>The M1 is not “a cheap phone processor”.

Apple just switched from a CPU that costs as much as a phone to a CPU that is inside a phone. So let me clarify: cheap *for Apple.* Obviously, it's not cheap for the people who buy these devices.

>When single-threaded apps (i.e., most processes on my computer right now -- I checked) aren't fast enough, I'm pretty much SOL

Sure. If you depend on a slow single-threaded app, then single-threaded performance matters to you. But the last time this was the case for me was back when I still used a garbage version of Photoshop around a decade ago.

I'm not saying single-threaded performance doesn't matter at all. What I *am* saying is that pretty much all applications where performance is an issue were parallelized years ago, thanks to Intel's inability to substantially increase single-threaded performance on their CPUs. So what really matters for most people who depend on CPU power, the performance that is actually relevant to them, is multi-threaded performance.

(Perhaps Xcode is an exception, I have no idea, since I haven't opened it in years.)

That's why I'd like to see some actual real-world application benchmarks comparing the M1 to something like a 4800U before I accept the claim that the M1 is "one of the fastest CPUs ever made, period."

If that sounds unreasonable to you, then I guess I really don't know what to tell you.

Of course, in the end, it doesn't really matter, I guess. I'm sure the M1 is going to be fast enough for most people who use Macs, and the people who don't use Macs won't be able to get an M1 chip anyways. It's just a bit funny to see people declare it "one of the fastest CPUs ever made, period," before we've even seen any real-world benchmarks.

>around half of the people who buy Macs are first time Mac buyers

Let me tell you a little story. Back in the 90s, when Apple was about to be bought by Sun because it was hemorrhaging money left and right and not producing any good products, Apple had this idea that its own customers could potentially be used in lieu of an actual marketing budget. So if you asked them, they sent you packages of brochures that explained just how great the Mac was, which you could then give away to your acquaintances. I know, it's a genius plan.

Being a huge Apple fan, I obviously ordered a box of marketing pamphlets, and dutifully read them. One of the things I learned from their "trust us, we're actually doing amazingly well!" booklet was the claim that - and you're not going to believe this! - half of all Mac buyers are actually new to the Mac.

That was during the 90s, when (figuratively) nobody bought any Macs.

Since then, Apple has regularly trotted out this statistic, and oddly, it's always remained the exact same number. It's always 50%. During the 90s, when Apple was doing terribly, it was apparently 50%. During the early 00s, when Apple was doing much better, it was still 50%. During the 10s, when Mac sales were starting to level off? Weirdly, still 50%.

Doesn't that strike you as being kind of strange?

And also, now that I think about it, how does Apple even know what this number is?

The reality is that Mac sales levelled off around 2010, despite a growing global population, which probably means that Apple is mostly selling to existing customers, and not reaching a lot of new people.

@Lukas: "The reality is that Mac sales have levelled off around 2010,"

Mac sales in 2010: 13.7 million. Sales in 2019: 18.4 million. Mac sales have been bopping between 18-something and 19-something million since 2015.
https://www.statista.com/statistics/276308/global-apple-mac-sales-since-fiscal-year-2002/

"now that I think about it, how does Apple even know what this number is?"

It's called market research. They survey their customers. Plus, they know exactly how many new Apple IDs are created, which ones have unique address/credit card info, and which ones are associated with which devices. Whether the numbers they brag about mean anything is another question.

"which probably means that Apple is mostly selling to existing customers, and not reaching a lot of new people."

Mac market share as a fraction of the whole PC market has been ever so slowly growing for every year tracked in this chart. I think that counters your assumption that Macs are mostly selling to existing customers. https://www.statista.com/statistics/576473/united-states-quarterly-pc-shipment-share-apple/

Because of a paywall, it's best to visit those two pages in a private browser window.

>Mac sales in 2010: 13.7 million

I don't know if I really need to point this out, but I picked 2010 because it's a round number, not because that was the exact year they peaked. That's why I wrote "around 2010". Look at any Mac sales chart, and you'll see sales level off around 2010.

You obviously know this as well as I do. Are you just arguing because it's fun, or do you genuinely believe that it is not true that Mac sales levelled off around 2010? Because further down, you link to your own source of data indicating the same, that Apple hasn't been able to make any real gains since then.

>They survey their customers

Yeah, that's a highly reliable way to establish this kind of number.

>they know exactly how many new Apple IDs are created

Which a lot of people don't have.

>Mac market share as a fraction of the whole PC market has been ever so slowly growing for every year tracked in this chart. I think that counters your assumption that Macs are mostly selling to existing customers.

How? That link shows that Apple hasn't been able to make real gains in market share, which supports the notion that it is impossible for 50% of Mac users to be new to the Mac (unless a similar number of existing Mac users abandons the platform, which might be a possible explanation, but one which also doesn't make Apple look particularly great).

The M1 is cheap in the sense that it likely costs Apple much less per unit than the Intel processors it was using.

Apple just switched from a CPU that costs as much as a phone to a CPU that is inside a phone. So let me clarify: cheap for Apple. Obviously, it’s not cheap for the people who buy these devices.

Ah, yes.

(I’m still not sure what the relevance of a similar CPU being in a phone is. It seems like an inaccurate attempt to disparage it.)

I said: “Given the pricing of these laptops, they should be competitive with Windows laptops running 4800U chips.”

Note that both these Macs, as well as laptops with 4800U chips, are typically priced around 1000 US$. I picked similarly priced computers since that seems like a fair comparison.

OK.

I don’t think a $1,000 laptop is low-cost. But I also don’t see any point in looking at a competitor, realizing they’re cheaper, and screaming “a-ha! Apple isn’t the cheapest!”

Yes, Apple likely has higher CPU margins on these now. Then again, they also have to factor in their own CPU engineering. So, in particular on low-volume Macs like the Mac Pro, the margin difference may not be what you expect it to be.

Sure. If you depend on a slow single-threaded app, then single-threaded performance matters to you. But the last time this was the case for me was back when I still used a garbage version of Photoshop around a decade ago.

You’re discounting all the little foreground and background tasks you do that could operate a lot faster if single-threaded speed were higher. Including, yes, in the current version of Photoshop.

You’re simultaneously asking for real-world examples while also discounting how most people are going to use a machine. Especially, heck, we’re talking about consumer Macs. The Air, the lowest-end Pro (which really shouldn’t be called that), the low-end mini. The usage profiles on those machines are much closer to those on, say, a phone: little multi-threaded work, mostly short bursts of single-threaded high CPU usage. And in those bursts, Apple’s design helps a lot.

That’s different when you get to the high end, like the Mac Pro, or servers. Especially once you’re at the level of, say, the Amazon Graviton2 / Neoverse N1. Then you want many cores, and don’t care as much about each core’s speed: for one, because a lot of those cores will be segregated to run VMs anyway, and two, because a lot of tasks will be stuff like servicing an HTTP request. Scaling that to many requests matters more than having each individual request be fast.

I’m not saying single-threaded performance doesn’t matter at all. What I am saying is that pretty much all applications where performance is an issue have been parallelized years ago

Again, I invite you to go to any non-trivial web page or run an Electron app. JavaScript, save for some rarely-used APIs like workers, is single-threaded, and you’ll absolutely feel the difference of a CPU with higher single-threaded speed.

thanks to Intel’s inability to substantially increase single-threaded performance on their CPUs.

Well, they’re finally getting there. Tiger Lake is a much more formidable competitor, and Apple is a little disingenuous claiming high advances compared to their previous Macs, when none of those used Intel’s latest chip. Rocket Lake will hopefully port much of that microarchitecture back to higher TDPs, and Alder Lake will finally unify the two. However, Apple is unquestionably a generation or two ahead.

So what really matters for most people who depend on CPU power, the performance that is actually relevant for most people, is multi-threaded performance.

Again, these computers aren’t “for people who depend on CPU power”, and also, I don’t think your premise is correct.

(Perhaps Xcode is an exception, I have no idea, since I haven’t opened it in years.)

Per my understanding, Xcode’s compilers benefit hugely from having a lot of cores. That’s quite unlike the .NET world I mostly reside in, where Roslyn has apparently proven hard to parallelize.

That’s why I’d like to see some actual real-world application benchmarks comparing the M1 to something like a 4800U before I accept the claim that the M1 is “one of the fastest CPUs ever made, period.”

No doubt there will be people comparing, say, a video encode any day now.

I do think the phone CPU comment is relevant in that the RAM is in the same package as the SoC, which reduces RAM access latency, and that probably goes some way to explaining the performance boost. But the fact that it's on-package also complicates cooling the CPU (particularly if the RAM covers the CPU die).

With M1, Apple motherboards look much more like the motherboards of phones than those of conventional laptops. Calling it a phone CPU could be understood as a put-down, in that the M1 is another step towards even more closed computers, but it's also a compliment, in that a solution with even higher integration (usually a jack-of-all-trades) is beating specialized components (and debugging SoCs is no walk in the park).

FWIW, video encode ain't that great a CPU test if the GPU includes dedicated hardware support. Compilation, SAT solving or general file compression are better general CPU tests...

I liked this article: https://jamesallworth.medium.com/intels-disruption-is-now-complete-d4fa771f0f2c which speaks about Intel's travails. He's more bullish than I am on ARM replacing x86 soon.

I wish AMD were also on his graph, since we all know Intel is having process difficulties (14nm...), and both AMD (7nm) and Apple (5nm) are relying on TSMC to provide cutting-edge processes. I'm very curious whether M1 stays on the straight line of Apple's performance improvements, or whether the line starts bending (it should be a sigmoid), and similarly for M2, etc.

Also... I agree with Lukas that single thread performance should not matter for well written native software -- CPUs are so incredibly fast versus what we had in the 80's. But Soren is right: we're in the age of Electron Apps, and the like, which waste an incredible number of cycles. The problem is simple: most software companies feel that time to market and cheaper more easily replaceable software engineers matter more than product quality or speed. And so far, the market is proving them right.

I think it's funny that I'm now seeing comments (elsewhere on the web) that the M1 is basically the death knell for Intel. I'm sure Intel and PC makers will be fine. CPUs started to be "fast enough" for the majority of computer users 2-3 years ago. Not everybody edits 8K video all day long. And in my observation, most of the bottlenecks these days are I/O or something other than the CPU.

The M1's efficiency is nice, but again, many PC laptops get 10+ hours of battery, which is plenty (seriously, how often are people away from a power outlet these days?), and there are now USB-C external batteries. It's just a non-issue for PC users all around. The M1 is good for Mac users who don't need to run Parallels and depend on Final Cut Pro or some other niche software that isn't on Windows. Nobody else cares, certainly not Intel or Microsoft.

>I’m still not sure what the relevance of a similar CPU being in a phone is.

In the context of the specific discussion we were having, I pointed out that the chip is in phones as evidence that it can't be expensive for Apple to produce, since we were specifically talking about the cost of the chip.

>But I also don’t see any point in looking at a competitor, realizing they’re cheaper

Again, not what I was doing. What I was doing was trying to find a point of comparison for performance. If Gruber makes a claim that a chip is "one of the fastest CPUs ever made, period", he's comparing it against other CPUs. The question then is what we're comparing against. Since it would be unfair to compare it against something like a Threadripper, I picked a similarly-priced Windows laptop as a point of comparison.

>I invite you to go to any non-trivial web page or run an Electron app.

As it happens, I just wrote an Electron app that maximizes all cores and threads on my Ryzen.

I also happen to manage the development of non-trivial web apps as my day job. Web apps or Electron apps aren't special in any way. They can be parallelized, and if it's necessary because performance is otherwise not satisfactory, they are parallelized.

It's just that most of these apps run perfectly fine without any parallelization, on any recent CPU. Slack runs just fine on any low-end Windows laptop.

The problem starts to occur once you open five different Electron apps and fifty tabs in your browser, but fortunately, this is a problem that is again solved by simply having more cores and threads in your CPU, without trying to parallelize each individual app.

>However, Apple is unquestionably a generation or two ahead.

Again, how do you know that? How is that unquestionably a fact? Which reliable third-party has produced extensive benchmarks of the M1, and compared it against contemporary CPUs, such as Zen 3 chips?

Or do you simply mean that Apple paid TSMC more than everybody else, so they can get on their 5nm process first? In which case, true, Apple is a generation ahead of AMD, and, well, let's not even talk about Intel, the poor bastards. But what's relevant isn't the technology used to produce these CPUs, what's relevant is their performance.

Again, let me just repeat this: it's absolutely possible that the M1's single-threaded performance smokes anything comparable coming from Intel and AMD, and it's even theoretically possible that what is essentially a four-core, four-thread M1 outperforms an 8-core, 16-thread Ryzen. But before I believe it, I'd like to see somebody properly test these claims.

>these computers aren’t “for people who depend on CPU power”

That's true. So why are Apple and the Apple media so focused on performance?

>No doubt there will be people comparing, say, a video encode any day now.

Comparing a video encode is hardly a comprehensive benchmark, and might not even measure CPU performance at all, but instead the video encoding hardware. Which is relevant, but it's not really a good measure of whether this is "one of the fastest CPUs ever made, period."

This thread has a lot of hypothesizing about whether the M1's performance gains will benefit normal apps, as well as whether normal users already have enough power and battery life for their needs.

Here's a real, non-8K-video-editing, non-Xcode-compiling use case to think about.

I use a MacBook Air at work. I'm not a dev or a video editor. My typical day is spent in Slack, Slack calls, Zoom calls, Jira, Confluence, Google Sheets, Google Docs, Mixpanel, etc.

Slack video calls make my fans spin up in no time. Jira's pretty taxing too, and using it _while_ on a video call often brings my computer to its knees. I wish I could spend my time using well written native apps, but like @Old Unix Geek said, that's not the current reality of the software industry.

I use my laptop on battery all the time for a dozen reasons. (Here's a situation I bet is common right now for many people: my partner and I are both working from home, and I need to go into another room because we both have calls at the same time.) 10+ hour battery life isn't really 10+ hours when inefficient web & electron apps are maxing out your CPU.

Do I care about the M1's performance jump? Absolutely.

Do I care about the crazy long battery life? Absolutely.

The M1 seems like the first significant step forward for laptops in both these areas after years of stagnation. I'm looking forward to it.

I don't think anyone expects these to not be an improvement over Apple's Intel notebooks, but that's not just because the M1 is probably pretty good, it's also because Apple's Intel notebooks are definitely very bad, and not representative of state-of-the-art notebooks.
