Hacker News
'You are a helpful mail assistant,' and other Apple Intelligence instructions (theverge.com)
9 points by minimaxir on Aug 5, 2024 | 3 comments


A "do not hallucinate" prompt engineering instruction is certainly a first since it implies the reason LLMs hallucinate is just for fun.


I don’t understand how that would even work. Can an LLM even tell the difference between a hallucination and (for want of a better word) an inference?


No, it's pointless and likely doesn't affect the output.
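For context on what such an instruction amounts to in practice, here is a minimal sketch, assuming an OpenAI-style chat-message format rather than whatever Apple's actual pipeline does: the "Do not hallucinate." line is just more text prepended to the conversation, with nothing enforcing it at inference time.

    # Minimal sketch (assumed chat-message format, not Apple's actual pipeline):
    # the instruction is ordinary prompt text with no special handling.
    messages = [
        {
            "role": "system",
            "content": "You are a helpful mail assistant. Do not hallucinate.",
        },
        {"role": "user", "content": "Summarize this email thread."},
    ]

    # Chat interfaces ultimately serialize the conversation into a single
    # token stream; a rough illustration of that flattening:
    prompt = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
    print(prompt)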



