Monday, November 25, 2019

NVIDIA Drops CUDA Support for macOS

Alex Cranz (Hacker News):

The last vestiges of Nvidia and Apple’s long-term relationship are ending shortly. On Monday Nvidia published the release notes for the next update of its CUDA platform and noted that “CUDA 10.2 (Toolkit and NVIDIA driver) is the last release to support macOS for developing and running CUDA applications.” That means all future versions of CUDA will lack support for Apple devices, which could leave a decent share of the pro community, as well as the hackintosh community, without support for the most popular discrete GPUs being made at the moment.

[…]

But despite the reliance on AMD hardware, Apple continued to support Nvidia GPUs. If you wanted to cram an Nvidia card into your older Mac Pro or rely on one for your hackintosh, Apple and Nvidia had you covered. That was true until last year, when Apple quietly stopped supporting CUDA with the release of macOS 10.14 Mojave. That forced apps that relied on CUDA for hardware acceleration, like Adobe’s suite of software, to issue warnings and reminders to customers.



Sören Nils Kuklau

Did we ever learn anything about this at all? Is Apple merely uninterested, or actually still annoyed over the MacBook Pro logic board failures? Has Nvidia recently tried talking to them, or are they both playing a game of chicken (where the customer loses)?

Around 2011, Apple started hiring AMD execs, who in turn hired their buddies, into hardware engineering. NVIDIA was systematically discriminated against by people who still held AMD stock. We've seen the results play out in hardware & software plans ever since.

Nvidia tried to force its closed acceleration API and ecosystem within Apple. A walled garden within a walled garden. Flash-in-GPU. Also, AMD is willing to customise.

Simona Cardenas

Correct me if I'm wrong, but isn't this a dispute more about IP/trade secrets and a lack of a user-space display driver developer framework from Apple than vendor discrimination?

Apple wants access to nVidia's driver source code for certification purposes going forward; nVidia doesn't trust Apple and doesn't want to agree to this.

The user-space alternative (DriverKit) for drivers that Apple has been promoting in recent years isn't designed to build display drivers, so nVidia has no workaround for OS X, unlike on Windows, where user-space graphics drivers are common. nVidia might be able to cobble together a compute-only DriverKit driver and support CUDA that way, but it would be a limited-use hack for a very niche audience.
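For reference, here is roughly what a user-space DriverKit service looks like. This is a hypothetical skeleton following the `.iig`/`IMPL` pattern from Apple's DriverKit documentation (`NullDriver` is an invented name), and note that nothing in it, or anywhere else in DriverKit as of 10.15, hooks into the display pipeline:

```cpp
// NullDriver.iig -- DriverKit interface definition (hypothetical example).
// DriverKit drivers are user-space IOService subclasses declared in .iig
// files, which Apple's iig tool turns into generated headers.
#include <DriverKit/IOService.iig>

class NullDriver : public IOService
{
public:
    virtual kern_return_t Start(IOService* provider) override;
};

// NullDriver.cpp -- implementation, including the generated header.
// At best, a vendor could hang a compute-only user client off a service
// like this; there is no way to register as a display driver.
kern_return_t IMPL(NullDriver, Start)
{
    // Chain to the superclass implementation first.
    kern_return_t ret = Start(provider, SUPERDISPATCH);
    if (ret != kIOReturnSuccess)
        return ret;

    // Publish the service so user-space clients can find it.
    RegisterService();
    return kIOReturnSuccess;
}
```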

> Nvidia tried to force its closed acceleration API and ecosystem within Apple. A walled garden within a walled garden. Flash-in-GPU. Also, AMD is willing to customise.

Nvidia poured huge amounts of money into CUDA and made it an industry standard that people can rely on. They invested equally huge amounts of cash in hardware. They've earned their success.
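To make that concrete, here is a minimal CUDA vector add; it's a generic illustration of the programming model, not code from any of the apps mentioned:

```cpp
// vector_add.cu -- a minimal CUDA vector add. Each GPU thread
// computes one element of the output array.
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

__global__ void vector_add(const float* a, const float* b, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host data.
    float* ha = (float*)malloc(bytes);
    float* hb = (float*)malloc(bytes);
    float* hout = (float*)malloc(bytes);
    for (int i = 0; i < n; i++) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers and host-to-device copies.
    float *da, *db, *dout;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dout, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch one thread per element, 256 threads per block.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(da, db, dout, n);

    cudaMemcpy(hout, dout, bytes, cudaMemcpyDeviceToHost);
    printf("out[0] = %f\n", hout[0]);  // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dout);
    free(ha); free(hb); free(hout);
    return 0;
}
```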

Apple and AMD announced OpenCL years ago in response, promised they would support it, and then didn't. Consequently, it withered, and the big libraries like TensorFlow and PyTorch skipped it; see here for more: https://towardsdatascience.com/on-the-state-of-deep-learning-outside-of-cudas-walled-garden-d88c8bbb4342
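At the kernel level, the gap was never large; a sketch of the equivalent OpenCL kernel (with the verbose host-side setup omitted) is nearly identical to the CUDA one above. The catch was that Apple's implementation stayed at OpenCL 1.2 and was deprecated in Mojave:

```cpp
// add.cl -- the same vector add as an OpenCL C kernel. OpenCL kernels
// run on AMD, Intel, and Nvidia devices; get_global_id(0) plays the
// role of CUDA's blockIdx/threadIdx arithmetic.
__kernel void vector_add(__global const float* a,
                         __global const float* b,
                         __global float* out)
{
    size_t i = get_global_id(0);
    out[i] = a[i] + b[i];
}
```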

Now Apple has created a proprietary standard called Metal. They announced at a WWDC session that they would consider a Metal backend for TensorFlow, but haven't delivered one. Instead, they (implicitly) recommend researchers abandon Jupyter notebooks, high-quality plotting libraries like Bokeh, and GPU math engines like TensorFlow in exchange for Core ML, with Swift, using Playgrounds and fairly basic plotting. All round, it's a worse and more limiting experience for a researcher, and it inhibits collaboration with non-Mac users. TensorFlow, by comparison, runs on Macs (CPU only), on Windows and Linux (CPU or CUDA), on Google's TPUs via Google Colab, and on Amazon's Nvidia GPUs via SageMaker.

So it's Apple that has created the walled garden. In doing so, they've squeezed out most deep-learning professionals.

AMD, too, has announced another API -- HIP / GPUOpen -- and a fiddly CUDA compatibility layer that promises to run CUDA code via transpilation. They've promised to keep investing money in it, but they said that about OpenCL too.
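To illustrate what that compatibility layer claims to do, here is the CUDA example from above after a hipify-style translation. This is a sketch of the mechanical renaming involved, not actual output from AMD's tool:

```cpp
// vector_add_hip.cpp -- the CUDA example after hipify-style renaming.
#include <hip/hip_runtime.h>   // was: cuda_runtime.h
#include <cstdio>
#include <cstdlib>

__global__ void vector_add(const float* a, const float* b, float* out, int n)
{
    // The kernel body is untouched: HIP keeps CUDA's thread/block model.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float* ha = (float*)malloc(bytes);
    float* hout = (float*)malloc(bytes);
    for (int i = 0; i < n; i++) ha[i] = 1.0f;

    float *da, *dout;
    hipMalloc(&da, bytes);                            // was: cudaMalloc
    hipMalloc(&dout, bytes);
    hipMemcpy(da, ha, bytes, hipMemcpyHostToDevice);  // was: cudaMemcpy

    // was: vector_add<<<blocks, threads>>>(...); hipcc accepts that
    // syntax too, but hipify historically emitted this macro form.
    hipLaunchKernelGGL(vector_add, dim3((n + 255) / 256), dim3(256), 0, 0,
                       da, da, dout, n);

    hipMemcpy(hout, dout, bytes, hipMemcpyDeviceToHost);
    printf("out[0] = %f\n", hout[0]);  // a + a with a = 1.0, so expect 2.0

    hipFree(da); hipFree(dout);                       // was: cudaFree
    free(ha); free(hout);
    return 0;
}
```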

Sören Nils Kuklau

> Nvidia tried to force its closed acceleration API and ecosystem within Apple. A walled garden within a walled garden.

I mean, Apple isn’t really one to talk there.

They kickstarted OpenCL back in the day, but now they’re all in on their own Metal Compute, which is arguably more walled than CUDA.

(And now there’s Vulkan, too.)
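To make the comparison concrete, the same vector add as a Metal compute kernel looks like this. It's a minimal sketch; the shading language itself is C++14-based, but the host side requires Apple's Metal API (Swift or Objective-C) and runs only on Apple platforms:

```cpp
// add.metal -- the same vector add in Metal Shading Language. The host
// code must create the buffers and dispatch the grid via MTLDevice,
// MTLCommandQueue, and MTLComputeCommandEncoder.
#include <metal_stdlib>
using namespace metal;

kernel void vector_add(device const float* a   [[buffer(0)]],
                       device const float* b   [[buffer(1)]],
                       device       float* out [[buffer(2)]],
                       uint i [[thread_position_in_grid]])
{
    out[i] = a[i] + b[i];
}
```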

> Correct me if I’m wrong, but isn’t this a dispute more about IP/trade secrets and a lack of a user-space display driver developer framework from Apple than vendor discrimination?

> Apple wants access to nVidia’s driver source code for certification purposes going forward; nVidia doesn’t trust Apple and doesn’t want to agree to this.

I haven’t heard that Apple wants their source code, only that for whatever reason kernel extension certification isn’t happening. Whether the blame for that is on Apple’s side, Nvidia’s, or both, I don’t know.

> The user-space alternative (DriverKit) for drivers that Apple has been promoting in recent years isn’t designed to build display drivers

Well, DriverKit was just introduced this year. The way Apple rolls with frameworks is that they taketh away and then giveth — in the long run, you end up with something better (in most ways), but in the meantime, people who have a real-world need are screwed. It’s a story that happens over and over.

Do I know that 10.16 Death Valley or 10.17 West Coast Best Coast will feature GPU support in DriverKit? No. But never judge an Apple API’s long-term feature roadmap by its initial release.


I remember reading that Apple earns more than 90% (at times 100%) of the overall net profits of the mobile market, representing about three quarters of the net profits of the combined desktop and mobile markets.

When Apple witnessed the explosive growth of the iPhone and planned five years ahead, it had to decide whether to own or delegate the mobile GPU technology stack. It clearly worked out the right answer. Did Nvidia’s CUDA fit the vertical-integration strategy? No; Nvidia was in competition with it. Did OpenCL? Also no.

OpenCL has served its purpose as an open alternative to CUDA for anyone who needed GPU compute on AMD, ARM, Imagination, and Nvidia hardware, on desktop and mobile. With the demise of Nvidia (and Microsoft) in the mobile space, Android vendors mostly limited to vanilla ARM designs, and Apple going proprietary, there’s clearly no room for CUDA (or DirectX) there.

Given the need to overhaul OpenCL to consolidate graphics and compute across desktop and mobile, would CUDA have been a good choice? It doesn’t work on mobile by design, and it is optimized for headless server and HPC compute, where Apple never intended to compete.

Metal, AMD and proprietary GPUs were the only choice. And they work beautifully.

All evidence points to Apple just not caring about the pro markets they used to dominate. Now they just care about iToys for consumer sheep.
