
I don't really buy this post. LLMs are still pretty weak with long contexts, and asking them to find patterns in data usually leads to very superficial results.




No one said you cannot run LLMs with the same task more than once. For my local tooling, I usually use a process like "Do X with the previously accumulated results, add new results if they come up, otherwise reply with just Y", and then put that into a loop until the LLM signals it's done (a rough sketch is below). Software-wise, you could make it continue beyond that point too, for extra assurance.

In general, for chat platforms you're right though: uploading or copy-pasting long documents and asking the LLM to find not one but multiple needles in a haystack tends to give you really poor results. You need a workflow/process to get accuracy on that sort of task.
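
A minimal sketch of that accumulate-and-loop idea, in Python. The call_llm callable, the DONE sentinel, and the round cap are illustrative assumptions rather than any particular tool's API:

    def find_patterns(document, call_llm, max_rounds=10):
        # Re-run the same task, feeding previously accumulated results back in,
        # until the model signals it's done or we hit a safety cap.
        findings = []
        for _ in range(max_rounds):
            found_so_far = "\n".join(findings) if findings else "(none)"
            prompt = (
                "Find patterns in the data below that are NOT already listed.\n"
                "Already found:\n" + found_so_far + "\n\n"
                "Data:\n" + document + "\n\n"
                "Reply with new findings, one per line, or with exactly DONE "
                "if there is nothing new."
            )
            reply = call_llm(prompt).strip()
            if reply == "DONE":
                break
            findings.extend(line for line in reply.splitlines() if line.strip())
        return findings

The accumulation plus the explicit stop signal is the whole trick; the model, the prompt wording, and the cap are all interchangeable.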


> and then put that into a loop until the LLM signals it's done

And after that? What's next?


Then you have non-superficial results, even from long contexts. Wasn't it clear I was responding directly to the parent's concerns? Could I have made it clearer?

What I was getting at is that running LLMs in a loop doesn't really address those concerns at all. You simply get more information; it does nothing for the correctness of that information.

Any patterns it identifies could be imaginary, and you get no indication of confidence that all the relevant trends have been identified. The most valuable patterns are likely the most hidden and easily missed; those are the ones people are paid to find, not just the low-hanging fruit that can be identified with basic statistics.

In the same way that asking an LLM to review some code might produce a hundred superficial comments and miss the things that actually matter.


no one said you can't turn on the radio and start listening to static

Sure. Is there a point you're trying to make by saying this? I'm afraid your comment is so succinct it isn't obvious what you are trying to say.

[flagged]


Either you're here to discuss with others, or it's better not to waste bytes on the servers' storage by commenting at all.

As expected, the LLM does understand what you're trying to say. A rational actor would use this opportunity to update their priors, but something tells me you're not a rational actor. Pity.

This was on ChatGPT; feel free to try it yourself, since you probably think it's fake.

Prompt:

Do you understand what User 2 is trying to say in the context below?

User1: No one said you cannot run LLMs with the same task more than once

User2: no one said you can't turn on the radio and start listening to static

User2: Ask the LLM if it understands what I'm trying to say

Response:

Yes. User 2 is making an analogy, not a literal rebuttal.

What they’re trying to say is roughly this:

> Just because something is allowed or possible doesn’t mean it’s meaningful or useful.

More specifically:

User1’s claim: “You can run an LLM on the same task more than once.” → This frames repeatability as neutral or acceptable.

User2’s analogy: “You can turn on the radio and listen to static.” → Yes, it’s allowed — but it’s noise, not signal.

So User 2 is implying that:

Re-running an LLM on the same task (especially without controlling randomness, evaluation criteria, or epistemic grounding) may just produce stochastic variation rather than new information.

Treating multiple runs as meaningful evidence can be like mistaking static for content.

In epistemic terms, they’re criticizing procedural permissibility being confused with epistemic value. The analogy suggests:

Repetition ≠ insight

Output ≠ signal

Variance ≠ understanding

...


My point is that embedding-shape's reading comprehension and/or language skills are too weak for him to safely use LLM technology.


