It will probably be unpopular here, where people appear to have drawn the lines and formed unyielding positions, but...
The whole LLM paranoia is devolving into hysteria: lots of finger-pointing without proof, lots of shoddy evidence put forward, and points missing all nuance.
My stance is this: I don't really care whether someone used an llm or wrote it themselves. My observation is that in both cases people were mostly wrong and required strict reviews and verification, with the exception of those who did Great Work.
There are still people who do Great Work, and even when they use llms the output is exceptional.
So my job hasn't changed much, I'm just reading more emojis.
If you find yourself becoming irrationally upset by something you're encountering that's largely outside of your control, consider going to therapy instead of forming a borderline obsession with purity about something that has always been a bit slippery (creative originality).
> My observation is that in both cases people were mostly wrong and required strict reviews and verification, with the exception of those who did Great Work.
Sure, but LLMs allow people to be wronger faster now, so they could conceivably inundate the reviewer with a new set of changes requiring a new two hour review, by only pressing buttons for two minutes.
> If you find yourself becoming irrationally upset by something you're encountering that's largely outside of your control, consider going to therapy instead of forming a borderline obsession with purity about something that has always been a bit slippery (creative originality).
Maybe your take on it is slightly different because your job function is somewhat different?
I assume that many people complaining here about the LLM slop are more worried about functional correctness than creative originality.
If it's important to the argument, my title is "Principal Software Engineer MTS". I review code, ADRs, meeting summaries, design docs, PRDs, etc.
> I assume that many people complaining here about the LLM slop are more worried about functional correctness than creative originality.
My point is, I've been in the game for coming up on 16 years, mostly in large corporate FAANG-adjacent environments. People have always been functionally incorrect and not to be trusted. It used to be a meme said with endearment, "don't trust my code, I'm a bug machine!" Zero trust. That's why we do code reviews.
> Sure, but LLMs allow people to be wronger faster now, so they could conceivably inundate the reviewer...
With respect, "conceivably" is doing a lot of work here. I don't see it happening. I see more slop code, sure. But that doesn't mean I _have_ to review it with the same scrutiny.
My experience thus far has been that this is solved quite simply: After a quick scan, "Please give this more thought before resubmitting. Consider reviewing yourself, take a pass at refining and verify functionality."
> Maybe your take on it is slightly different because your job function is somewhat different?
> I assume that many people complaining here about the LLM slop are more worried about functional correctness than creative originality.
Interestingly, I see the opposite in the online space. As an aside, I don't see many people complaining at all in real life (other than the common commiseration over getting slop PRs, which has replaced the common commiseration over getting normal PRs of sub-par quality).
I primarily see people coming to the defense of human creativity and becoming incensed by reading (or, I should say, "viewing" more generally) something that an LLM has touched.
It appears that most people have accepted that LLMs are a useful tool for producing code, and that when used unethically (first-pass LLM output shipped straight to production), of course they're no good.
There is, however, a moral outrage and indignation that I've observed (on HN and elsewhere) when an LLM has been used for the creative arts.
Something I noticed about Gemini: I've been experimenting with transcribing old handwritten Gaelic archives. Qwen 235B A22B Instruct appears to give a much more faithful reproduction than Gemini, for the simple fact that Gemini keeps hallucinating an old Gaelic faerie tale.
I don't believe this is going to happen, but the primary arguments about a "superintelligent" AI revolve around removing the need for us to listen to it at all.
A superintelligent AI would have agency, and when incentives are not aligned, it would be adversarial.
In the caricature scenario, we'd ask, "Super AI, how do we achieve world peace?" It would answer the same way, but then solve it with a non-human-centric approach: reducing humanity's autonomy over the world.
Fixed: anthropogenic climate change resolved, inequality and discrimination reduced (by reducing the population by 90% and putting the rest in virtual reality).
If our AIs achieve something like this, but we manage to give them the same values the Minds in Iain M. Banks's Culture series had, I think humanity would be golden.
What are your primary use cases? Are you mostly using it as a chatbot?
I find Gemini excels in multimodal areas over ChatGPT and Anthropic's models. For example: "identify and classify this image with metadata" or "OCR this document and output a similar structure in markdown".
I run a bunch of smaller models on a 12 GB VRAM 3060 and it's quite good. For larger open models I'll use OpenRouter. I'm looking into on-demand instances with cloud/VPS providers, but haven't explored the space too much.
I feel like private cloud instances that run on demand are still in the spirit of consumer hobbyism. It's not as good as having it all local, but the bootstrapping cost plus the electricity to run it seems prohibitive.
I'm really interested to see if there's a space for consumer TPUs that satisfy use cases like this.