
First, I'm using frontier models with Cursor's agentic mode.

> Also, if you use an LLM haphazardly and it introduces a security flaw, you as the user are responsible. The LLM is a power tool, not a person.

I 100% agree. That was my point. A lot of people (not saying you, I don't know you) are not qualified to take on that level of responsibility, yet they do it anyway and ship the result to users.

And on the human side, that is precisely why procedures like code review have been standard for a while.

But my main objection to the parent post was not that LLMs can't be powerful tools, but that maintainability and security specifically are (IMO) possibly the worst examples you could pick, since 70k-line un-reviewable pull requests are not maintainable and probably not secure either (how would you know?).





Okay, I'm pretty sure we'd largely agree on most of this if we pulled it all apart.

It really boils down to who is using the LLM, how they're using it, and what they want.

When I prompt the LLM to do something, I scout out what I want it to do, potential security and maintenance considerations, etc. I then prompt it precisely, sometimes with the equivalent of a multi-page essay, sometimes with a list of instructions; the point is I'm not vague. I then review what it did and look for potential issues. I also ask it to review its own work and flag potential issues (sometimes with more specific questioning).

So we are mashing together a few dimensions. My GP comment was pointing out:

- A: competent developer wants software functionality produced that is secure and maintainable

- B: competent developer wants to produce software functionality that is secure and maintainable

The distinction between these is subtle but has a huge impact on senior developer attitudes toward LLMs, from what I've seen. Dev A is more likely to figure out how to get the most out of LLMs; Dev B will resist and use the flaws as an excuse to do it themselves. It reminds me a bit of the early AWS days and engineers hung up on self-hosting, or devs wanting to build everything from scratch instead of using a framework.

What you're pointing out is that if careless or inexperienced developers use LLMs, they will produce unmaintainable and insecure code. Yeah, I agree. They would probably produce insecure and unmaintainable code without LLMs too. Experienced devs using LLMs well can produce secure and maintainable code. So the distinction isn't LLMs; it's who is using them and how.

What just occurred to me, though, and I suspect you will appreciate, is that I'm only working with other very experienced devs. Experienced devs working with junior or careless devs who can now produce unmaintainable and insecure code much faster is a novel problem and would probably be really frustrating to deal with. Reviewing a 70k-line PR produced by an LLM without thoughtful prompting and oversight sounds awful. I'm not advocating that this is a good thing. Though surely there is some way to manage it, and figuring out how to manage it probably has some huge benefits. I've only been thinking about it for 5 minutes, so I definitely don't have an answer.

One last thought that just occurred to me: the whole narrative of AI replacing junior devs seemed bonkers to me because there's still so much demand for new software and LLMs don't remotely compare to developers. That said, as an industry I guess we haven't figured out how to mix LLMs and junior developers in a way that's net constructive? If junior + LLM = 10x more garbage for seniors to review, maybe that's the real reason junior roles are harder to find?



