> remember, we've been "2 years away" from self-driving cars since 2010
Depending on how you define that milestone, we are already there. There are multiple companies offering self-driving cars to the public in multiple cities.
> Depending on how you define that milestone, we are already there.
My definition is definitely not "only on straight, wide, sunny Californian roads, and sometimes crashing full speed into a stopped vehicle".
Remember that in 2014 Uber was talking about buying 500,000 fully autonomous Teslas "by 2020". They bought zero, and Teslas are still nowhere near fully autonomous.
A human mind is not a rules model for chess, and comparisons to a human mind are worthless. We're not asking whether the LLM is human, only whether it is modeling the game rules or modeling statistics.
My point is this:
* If the results of the model were perfect, with not a single counter-example, that would be a strong argument that the LLM has built an internal model of Othello.
* If counter-examples exist, the result of the experiment is indistinguishable from a sufficiently clever statistical inference and proves nothing.
> So you assert that parent doesn’t have an internal model of chess?
I think it's an interesting question; I find it more likely that we have an imperfect ability to perceive the whole chess board, whatever the hell that might mean.
In any case, it's irrelevant to an LLM, which has perfect information from the input set.
So perhaps we rephrase the question: I think the parent, if given a list of individual chess moves (in whatever notation pleases them), should be able to tell whether a given move is valid for the given piece (a rough sketch of such a check follows below). If the parent failed to do that, I would say they do not have a valid model of chess, ya.
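For concreteness, here is a minimal sketch of what such a rules check could look like as code. This is my own illustration, not anything proposed upthread: it only checks piece-movement geometry and ignores occupancy, turn order, check, castling and so on.

    // Sketch only: geometric legality of a single move for one piece type.
    #[allow(dead_code)] // only Knight is exercised below
    enum Piece { Knight, Bishop, Rook, Queen, King }

    // Squares given as (file, rank), each in 0..=7.
    fn is_geometrically_legal(piece: &Piece, from: (i8, i8), to: (i8, i8)) -> bool {
        let dx = (to.0 - from.0).abs();
        let dy = (to.1 - from.1).abs();
        if (dx, dy) == (0, 0) {
            return false; // a move must change the square
        }
        match piece {
            Piece::Knight => (dx, dy) == (1, 2) || (dx, dy) == (2, 1),
            Piece::Bishop => dx == dy,
            Piece::Rook => dx == 0 || dy == 0,
            Piece::Queen => dx == dy || dx == 0 || dy == 0,
            Piece::King => dx <= 1 && dy <= 1,
        }
    }

    fn main() {
        // A knight on b1 = (1, 0) can reach c3 = (2, 2) but not d4 = (3, 3).
        assert!(is_geometrically_legal(&Piece::Knight, (1, 0), (2, 2)));
        assert!(!is_geometrically_legal(&Piece::Knight, (1, 0), (3, 3)));
    }

The question being argued is whether the parent, or the LLM, behaves as if it has internalized something equivalent to these rules, or merely something that usually agrees with them.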
> If the parent failed to do that, I would say they do not have a valid model of chess
Surely even the world's foremost chess experts will occasionally make errors in such a situation, so what's the point of holding language models to an intelligence standard that even humans can't meet?
I see two ways of interpreting this: either language models do approximately learn world models, or else world models aren't necessary for human intelligence.
EDIT: I see from your other posts in this thread that you're probably more inclined to believe the latter, which wasn't obvious to me from your first post, and I think that explains why there is so much contention in this thread.
I would argue that the current model of capitalism is already struggling hard, if not failing. Near-zero interest rates have ensured that the evolutionary mechanisms built into capitalism are effectively negated. Zombie companies that produce no real products or cash flow can exist in perpetuity - far from the "survival of the fittest" ethos of capitalism.
Parent's suggestion about typography is nonsense; the Rust project desperately needed somebody with skills in user experience and usability engineering. Now it's too late.
Two colons are an extremely loud combination of visual elements compared to a subtle dot. :: drags the eye away from the content and says "look at me oscillate".
In addition, humans group similar visual elements together, so in a combination like anything::doesnt::matter::what::between::clumps it is impossible to escape the common pattern, and the eye jumps between the ::'s. Therefore it takes double the cognitive load to read.
What's worse, the eye gets trapped within each :: because it's a combination of four dots, which naturally creates an implied circular pattern that draws the viewer in further.
Compare to something like:
use HashMap from std.collections
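For reference, the actual Rust spelling the comparison is against (the dot-style line above is of course hypothetical syntax, not valid Rust):

    // Current Rust: :: appears in the import path and in the associated-function call.
    use std::collections::HashMap;

    fn main() {
        let mut word_counts: HashMap<&str, u32> = HashMap::new();
        word_counts.insert("look", 1);
    }

Note that :: shows up both in paths and in calls like HashMap::new(), which is part of why it is so frequent on screen.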
I'm not dyslexic, actually. Just thinking in terms of good user experience. A lot of the issues with C syntax come from it being designed largely to make things easier for compiler writers.
I feel like Rust has similar pitfalls in that vein: instead of defining the user experience first, they went for a symbol that is not used elsewhere, making it easier on the engineers while satisfying the "functional requirement" of namespaces.
So, Rust will not be the language-to-end-all-languages, because it is not beautiful enough. Perhaps it is almost there in functionality, but I foresee a problem with any language's longevity unless its user experience is literally perfectly thought out.
I'd love to jump on the hype train with Rust, but it's not really that exciting. It doesn't feel all that natural, and a well-thought-out user experience should feel natural through and through.
> Two colons are an extremely loud combination of visual elements compared to a subtle dot. :: drags the eye away from the content and says "look at me oscillate".
> In addition, humans group similar visual elements together, so in a combination like anything::doesnt::matter::what::between::clumps it is impossible to escape the common pattern, and the eye jumps between the ::'s. Therefore it takes double the cognitive load to read.
It is true that the colons take up more space and your example looks good on HN.
But this problem will be immediately solved by syntax coloring. It's just never going to come up.
True, but why bother with the symbols in the first place? User experience is just as important for developers reading and writing text, yet languages are typically created behind closed doors by a small engineering team.
Guido got a lot right with Python because it's clean and easy to jump into. Minimal cognitive load, but still very powerful.
> Depending on how you define that milestone, we are already there. There are multiple companies offering self-driving cars to the public in multiple cities.
I'd count that as a success.