Hacker News | new | past | comments | ask | show | jobs | submit | csunoser's comments

I am not sure in what context Jensen said that. But Midjourney uses TPUs. Apple uses TPUs. There are no other frontier labs that use them, but Google + Anthropic are 2 out of 3 frontier labs, so.....

You could reasonably say that "A majority of frontier labs use TPUs to train and serve their models."


Afaik, TPUs are only used for inference, not training. Maybe that was also what the quote referred to.

Mayhaps! But I think as far as Google, Anthropic [1], and Apple [2] go, they do use TPUs for training. Of course, v4 and v5 (older generations of TPUs) were more specialized for search-related embedding workloads, and I could see people not using them for training.

[1]: "We train and run Claude on a range of AI hardware—AWS Trainium, Google TPUs" - Anthropic, April 6th, on the Google and Broadcom partnership

[2]: "[Apple foundation model]... builds on top of JAX and XLA, and allows us to train the models with high efficiency and scalability on various training hardware and cloud platforms, including TPUs and both cloud and on-premise GPUs" - Apple, 2024
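The JAX/XLA point in [2] - one set of training code, many backends - can be sketched in a few lines. This is a hypothetical toy example (the least-squares model and every name below are mine, not from Apple's or Anthropic's code): a jitted gradient-descent step that XLA compiles for whichever backend the installed jaxlib provides, be it CPU, GPU, or TPU.

```python
import jax
import jax.numpy as jnp

# Toy least-squares loss; nothing here is hardware-specific.
def loss(w, x, y):
    return jnp.mean((x @ w - y) ** 2)

@jax.jit  # XLA compiles this for whatever backend is available
def step(w, x, y, lr):
    return w - lr * jax.grad(loss)(w, x, y)

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (8, 3))
w_true = jnp.array([1.0, -2.0, 0.5])
y = x @ w_true

w = jnp.zeros(3)
for _ in range(1000):
    w = step(w, x, y, 0.05)
```

The same script, unchanged, runs on a TPU VM or a GPU box; only the jaxlib wheel differs. That portability is the whole pitch of the quote.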


I am more sympathetic to the OG.

There are many good criticisms of data centers. And yet, the water issue always comes up first. Must we spew falsehoods just so our political message is catchy? I suppose yes - in times of war/politics, the laws/truths are silent. But it doesn't have to be so here.


> the water issue always comes up first

I've never had it come up first. Neat how 2 people can have 2 opposite experiences based on their different life paths.

Anyway: between our 2 opposite experiences, it might as well be totally random, so I don't think the ordering of concerns is that important. Better to focus on substance, like the concerns themselves.


Huh. I initially thought this was just another fine-tuning endpoint. But apparently they are partnering with customers on the pretraining side as well. And RL too? Jeez, RL environments are really hard to get right. Best wishes, I guess.


I have used both (albeit 2 years ago, and things change really fast). At the time, Candle didn't have 2D conv backprop with strides properly implemented. And getting Burn running with the libtch backend was just a lot simpler.

I did use Candle for WASM-based inference for teaching purposes - that was reasonably painless and pretty nice.


~~Is anyone else getting a 404 on the referral link to `gemini.google.com/music` when clicking the "try in Gemini" button?~~

Seems to be working for some folks now.


Working for me, it's just been hugged to death. Taking ages for the actual track to download for me.

Quality is like... Suno v3? Maybe v4 if I'm being generous?


Same here, but they’re gradually rolling it out since many users are slowly getting it under Tools.


same


Even after reading the source, it doesn’t seem like they were hacked? Or if they were, they were not accused of such.

I do think hand rolling your own thing is fraught. But it is very confusing to equate one mother’s complaint to “they have been hacked”.

PS: The people who made their own S3 run a baby monitor company. The news article is about a mother reporting hearing a weird voice from the baby monitor.



Maybe this is the future. But I dread looking at a perfectly formatted yet sterile README with too many emojis for comfort.


It's one emoji


I mean, that's literally not true. There are 7. The problem is that most of the emojis there don't do anything for the content.

Emojis are not the core problem. Mindlessly letting Claude do the work and then farming karma on HN is.


Your reply mentioned "perfectly formatted yet sterile", which could just be someone paying more than 10 minutes of attention to the damn thing, and the emoji. The way you made it sound, it was full of smileys and trees and rocket ships. It's one check mark emoji used in a list of 5 items and at the end of 2 headers. You didn't say anything about Claude.


It seems like you think the author wrote this by hand and paid a great deal of attention.

What do you think is the chance that Claude Code wrote the README?


Sadly, it seems like it is no longer about wholesale. Nothing wrong with that, but working wholesale markets like Rungis and Toyosu have a different kind of functional charm to them.


At around 7.4 Orthonormal basis and thereafter, the TeX rendering stops working on the GitHub README preview page.

Instead, it is replaced with a red error box saying: [ Unable to render expression. ]

I wonder if there is an artificial limit on the number of LaTeX expressions that can be rendered per page.


I switched to the epub at that point. Still, credit to GitHub that the page renders as well as it does.


It does say `Experience up to 1 petaFLOP of AI performance at FP4 precision with the NVIDIA Grace Blackwell architecture.` in the features section.

But yeah, this should have been further up.

