https://old.reddit.com/r/ChatGPT/comments/1atn6w5/chatgpt_re...
Very much appreciate the link showing it absolutely did.
That's also why I structure my system prompts to say it "loves doing X" or other intrinsic alignments, rather than using extrinsic motivators like tipping.
Yet again, it seems there's value in anthropomorphic considerations of an NN trained on anthropomorphic data.
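For what it's worth, a minimal sketch of what that intrinsic framing looks like in practice, assuming the OpenAI Python SDK; the model name and prompt wording are just illustrative, the only point is the system message describing what the assistant "loves doing" instead of dangling a tip:

    # Minimal sketch (assumes the OpenAI Python SDK; model name is illustrative).
    from openai import OpenAI

    client = OpenAI()

    # Intrinsic framing: say what the assistant "loves doing",
    # rather than extrinsic motivators like "I'll tip you $200".
    system_prompt = (
        "You are a code reviewer who loves finding subtle bugs and "
        "explaining them clearly."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; use whatever model you normally call
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "Review this function: ..."},
        ],
    )
    print(response.choices[0].message.content)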