Apple supports JAX[0] along with PyTorch[1] and TensorFlow[2] on macOS, on both Apple Silicon and AMD GPUs (on x86 Macs). That said, the performance isn't great. I write most of my experimental ML code in JAX on an M2 MacBook Air and then move to a proper multi-GPU Linux box for full training runs.
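A quick way to check which backend JAX picked up on a given machine is to list its devices. A minimal sketch of the "prototype locally, train elsewhere" workflow, assuming a stock `jax` install:

```python
import jax
import jax.numpy as jnp

# Report which backend JAX is using on this machine
# ("cpu" on a plain install; an accelerator name when one is active).
print(jax.default_backend())
print(jax.devices())

# The same jitted code runs unchanged once moved to a multi-GPU box;
# only the devices it dispatches to differ.
@jax.jit
def mse(w, x, y):
    return jnp.mean((x @ w - y) ** 2)

w = jnp.zeros((3,))
x = jnp.ones((8, 3))
y = jnp.zeros((8,))
print(mse(w, x, y))  # 0.0
```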
We don't support Windows GPU because we haven't had the engineering bandwidth to support it well.
We recommend WSL2 for GPU on Windows at the moment because it's a good compromise: it gives users CUDA support without requiring us to maintain another release variant.
We don't release Windows GPU wheels at the moment, but that's because we're a small team and none of us use Windows personally. We welcome contributions!
(I verified that the Windows CUDA GPU support built as recently as two weeks ago, but I don't have the ability to test that it works.)
We recommend WSL2 because that's just using our existing Linux CUDA release.
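Concretely, under WSL2 that just means installing the normal Linux wheel. A sketch, assuming current CUDA 12 wheels (the extra name may differ for other CUDA versions):

```shell
# Inside a WSL2 Ubuntu shell: install the standard Linux CUDA wheel,
# the same release used on a native Linux box.
pip install -U "jax[cuda12]"

# Verify that JAX can see the GPU through WSL2's CUDA passthrough.
python -c "import jax; print(jax.devices())"
```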
Yes, we made this more formally supported recently.
We felt that Windows CPU support was important so everyone can run JAX, even if it's not always the most-accelerated version of JAX. And we got some great PRs from the community that helped fix a few open issues.
Very nice! I just installed it, and I hope to contribute down the line, especially around custom operators. They weren't even documented until recently, and there's still quite a bit of work left to add them.
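For reference, one of the now-documented extension points is custom derivative rules. A minimal sketch using `jax.custom_jvp` (the canonical example from the JAX docs; note "custom operators" may also mean the lower-level C++ custom-call path, which this doesn't cover):

```python
import jax
import jax.numpy as jnp

# A numerically stable log(1 + exp(x)) with a hand-written derivative,
# registered via jax.custom_jvp so autodiff uses our rule.
@jax.custom_jvp
def log1pexp(x):
    return jnp.log1p(jnp.exp(x))

@log1pexp.defjvp
def log1pexp_jvp(primals, tangents):
    x, = primals
    x_dot, = tangents
    ans = log1pexp(x)
    # d/dx log(1 + exp(x)) = sigmoid(x), written to avoid overflow.
    ans_dot = x_dot * (1.0 - 1.0 / (1.0 + jnp.exp(x)))
    return ans, ans_dot

print(jax.grad(log1pexp)(0.0))  # sigmoid(0) = 0.5
```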
If IBM had taken the mainframe to the cloud, with options for CICS OLTP and COBOL/JCL batch processing for smaller enterprises, it could have helped them a lot.
Yes, absolutely. Just imagine how many corporations wouldn't even think about migrating away from the mainframe if that were a possibility... I think that would hold for massive enterprises too.