That's actually an interesting argument. I wonder if it helps explain the growing support for the far-right political party in Germany (the AfD), considering that they've been effectively blacklisted by the other political parties.
> Which isn’t to say you’re somehow wrong for wanting a safe space away from political discourse. It’s important not to forget how empowering silence and ignorance are to causes that typically undermine the rights of the populace.
I feel like these two statements nearly contradict each other. Maybe the second sentence should have begun with "However,..."
Either way, I totally agree that apathy is enemy #1 and policies that enable censorship for any reason are enemy #2.
Someone recently told me that their definition of intelligence was data-efficient extrapolation, & I think that definition is pretty good as it separates intelligence from knowledge, sentience, & sapience.
Cool. I don't know how data-efficient current LLMs are, in the sense that I don't know what they know and don't know. It certainly seems like in some domains they can extrapolate, but I don't know what's in their training data.
Correction for the record: METR is not funding my work directly. I am participating in a METR study on the performance effects of AI software-development assistants & have chosen the project above to be my primary focus during the study.
To avoid hitting HN's rate limit, I will be adding my answers to this post for as long as I am able to edit (2 hours, I think). Once I am no longer able to edit or have hit a character length restriction, I will create a reply post to this one and do the same thing there.
Also, to reduce confusion, the original title was "Show HN: A belief system is how we create AI that actually think like humans do"
Answer to taylodl's question[0]:
AI will never be able to think exactly like humans do. There will be trade-offs to this. However, enabling AI to think more like humans will enable both AI & humanity to better explore these tradeoffs.
Reply to Voland0's post[1]:
I've found trying to specify my own personal solution process to be very enlightening, but also quite mentally taxing. Such self-awareness comes at quite the cognitive cost!
> How is GP's idea related to 3~4 different home projects of yours and not just one?
My thought process is that if I actually make this for myself, those 3-4 projects would magically get a very impatient new stakeholder who will pester me to actually deliver those projects to them.
It's gonna be interesting to see if I'm able to trick my brain into not just brushing off the agent (or if that's even really an issue, no clue at this point). I've started rolling around some ideas on handling that scenario but I'm just gonna let that stew while I play with different setups so I don't end up just building a convoluted reminder app lol
Unfortunately I had an unexpectedly hectic holiday weekend so I won't get to start playing with the idea for real until tomorrow afternoon.