brucethemoose2 on Dec 7, 2023 | on: AMD MI300 performance – Faster than H100, but how ...
Well, local LLM running with ROCm isn't exactly a huge business. You're not wrong, but I can envision why decision makers wouldn't want to spoil the Pro line.
dharma1 on Dec 8, 2023
The real pros will use data centre GPUs like the MI300. I think local ML will become huge in games.