Hacker News



That was a nice read, but it looks like the article concludes that the "Reversal Curse" observed by the authors of the paper is likely better attributed to the researchers' methodology. Some quotes from that article:

"As mentioned before, it's important to keep in mind ChatGPT and GPT-4 can do B is A reasoning. The researchers don't dispute that."

"So in summation: I don’t think any of the examples the authors provided are proof of a Reversal Curse and we haven’t observed a “failure of logical deduction.” Simpler explanations are more explanatory: imprecise prompts, underrepresented data and fine-tuning errors."

"Since the main claim of the paper is 'LLMs trained on "A is B" fail to learn "B is A"', I think it's safe to say that's not true of the GPT-3.5-Turbo model we fine-tuned."
