
I'd argue that their own foundation models are getting outperformed by the Llama finetunes on HF, and at this point they're shifting cost structures (getting rid of training clusters in favor of hosted inference).

