
Right, I was thinking about it too. You still need batched prefill, but Apple's Core ML Tools were failing on attention activation quantization. And with long contexts, prefill is still compute bound.
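To make the last point concrete, here is a rough roofline-style back-of-the-envelope comparison of prefill vs. decode. The model size, peak FLOPS, and memory bandwidth below are hypothetical placeholder numbers, not figures from the thread:

    # Why long-context prefill tends to be compute bound while decode is
    # memory-bandwidth bound. All constants are made-up placeholders
    # (7B-param model, Apple-silicon-class hardware), purely illustrative.

    PARAMS = 7e9           # model parameters
    BYTES_PER_PARAM = 2    # fp16 weights
    PREFILL_TOKENS = 8192  # long-context prompt length
    FLOPS_PEAK = 30e12     # assumed ~30 TFLOPS peak
    MEM_BW = 400e9         # assumed ~400 GB/s memory bandwidth

    # Prefill: every prompt token costs ~2 FLOPs per parameter, but the
    # weights only have to be streamed from memory once for the whole batch
    # of prompt tokens, so arithmetic intensity is high.
    prefill_flops = 2 * PARAMS * PREFILL_TOKENS
    weight_bytes = PARAMS * BYTES_PER_PARAM
    prefill_compute_s = prefill_flops / FLOPS_PEAK
    prefill_memory_s = weight_bytes / MEM_BW

    # Decode: one token at a time, so the same weight traffic buys only
    # ~2 * PARAMS FLOPs and arithmetic intensity collapses.
    decode_compute_s = (2 * PARAMS) / FLOPS_PEAK
    decode_memory_s = weight_bytes / MEM_BW

    print(f"prefill: compute {prefill_compute_s:.2f}s  vs memory {prefill_memory_s:.3f}s")
    print(f"decode : compute {decode_compute_s*1e3:.2f}ms vs memory {decode_memory_s*1e3:.1f}ms per token")

With these placeholder numbers prefill spends seconds on compute against tens of milliseconds of weight traffic, while decode is the reverse, which is why quantization helps decode latency much more than long-context prefill.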

