
All you did was change the programming language from (say) Python to English. One is designed to be a programming language, with few ambiguities etc. The other is, well, English.

Speed of typing code is not all that different from the speed of typing English, even accounting for the volume expansion of English -> <favorite programming language>. And then, of course, there is the new extra cost of reading and understanding whatever code the AI wrote.

The thing about this metaphor that people never seem to complete is this:

Okay, you've switched to English. The speed of typing the actual tokens is just about the same but...

The standard library is FUCKING HUGE!

Every concept you have ever read about? Every professional term, every weird phrase that gestures at a whole chunk of complexity/functionality? It's all in there. Now, if I say something to my LLM like:

> Consider the dimensional twins problem -- how're we gonna differentiate torque from energy here?

I'm able to effectively write "from physics import Torque, Energy, dimensional_analysis". And that part of the stdlib was written in 1922 by Bridgman!
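
To make that concrete: torque and energy are "dimensional twins" -- both reduce to kg*m^2/s^2, so dimensional analysis alone can't tell them apart, and you need distinct types to keep them from mixing. A minimal Python sketch of what that imaginary import might stand for (all names here are hypothetical, not a real library):

    from dataclasses import dataclass

    # Both quantities share the same base dimensions, M L^2 T^-2,
    # so only distinct nominal types keep them from being confused.

    @dataclass(frozen=True)
    class Energy:
        joules: float

        def __add__(self, other: "Energy") -> "Energy":
            return Energy(self.joules + other.joules)

    @dataclass(frozen=True)
    class Torque:
        newton_meters: float

        def __add__(self, other: "Torque") -> "Torque":
            return Torque(self.newton_meters + other.newton_meters)

    work = Energy(50.0)     # 50 J of work done
    wrench = Torque(50.0)   # 50 N*m applied to a bolt

    print(work + Energy(10.0))  # Energy(joules=60.0) -- fine
    # work + wrench  # a type checker rejects this, despite identical dimensions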


> The standard library is FUCKING HUGE!

And extremely buggy, and impossible to debug, and does not accept or fix bug reports.

AI is like an extremely enthusiastic junior engineer that never learns or improves in any way based on your feedback.

I love working with junior engineers. One of the best parts about working with junior engineers is that they learn and become progressively more experienced as time goes on. AI doesn't.


People need to decide if their counter to AI making programmers obsolete is "current-generation AI is buggy, and this will not improve until I retire" or "I only spend 5% of my time coding, so it doesn't matter if AI can instantly replace my coding".

And come on: AI will definitely get better as time goes on.


It gets better when the AI provider trains a new model. It doesn't learn from the feedback of the person interacting with it, unlike a human.

Exactly. LLMs are faster for me when I don't care too much about the exact form the functionality takes. If I want precise results, I end up spending more effort directing the LLM in natural language than it would take to just write that part of the code myself.

I guess we find out which software products just need to be 'good enough' and which need to match the vision precisely.
