
Pretty funny outcome of tipping for better results:

https://old.reddit.com/r/ChatGPT/comments/1atn6w5/chatgpt_re...



For about a year now I've privately wondered if GPT-4 would end up modeling/simulating the over-justification effect (the finding from psychology that adding extrinsic rewards can undermine intrinsic motivation).

Very much appreciate the link showing it absolutely did.

Also why I structure my system prompts to say it "loves doing X" or to use other intrinsic framings, rather than extrinsic motivators like tipping (see the sketch below).
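
A minimal sketch of that intrinsic-framing approach, assuming the official OpenAI Python SDK; the prompt wording, task, and model name here are my own illustrative choices, not anything from the linked thread:

    # pip install openai
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Intrinsic framing: describe the motivation as inherent to the assistant.
    INTRINSIC_PROMPT = (
        "You are a meticulous code reviewer who loves finding subtle bugs "
        "and takes genuine satisfaction in writing clear, correct explanations."
    )

    # Extrinsic framing (the kind avoided above): an external reward.
    # EXTRINSIC_PROMPT = "You will receive a $200 tip for a thorough review."

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": INTRINSIC_PROMPT},
            {"role": "user", "content": "Review this function for bugs: ..."},
        ],
    )
    print(response.choices[0].message.content)
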

Yet again, it seems there's value in anthropomorphic considerations of a NN trained on anthropomorphic data.




