Hacker News

Yeah, it is quite specialized for inference. It's unlikely that you'd see this stuff outside of hardware specifically for that.

Development systems for AI inference tend to be smaller by necessity. A DGX Spark, a DGX Station, a single B300 node... you'd work on something like that before deploying to a larger cluster. Your development machine is never bigger than what you'd actually deploy to.
