In case the author is reading this: I have the receipts showing a real step function in how much software I build, especially lately. I'm not going to put a number on it, because that makes no sense, but I certainly push a lot of code that reasonably seems to work.
The reason it doesn't show up online is that I mostly write software for myself and for work, with the primary goal of making things better, not faster. More tooling, better infra, better logging, more prototyping, more experimentation, more exploration.
Here's my opensource work: https://github.com/orgs/go-go-golems/repositories . These are not just one-offs (although there's plenty of those in the vibes/ and go-go-labs/ repositories), but long-lived codebases / frameworks that are building upon each other and have gone through many many iterations.
I have linked my github above. I don't know how that fares in the bigger scope of things, but I went from 0 opensource to hundreds of tools and frameworks and libraries. Putting a number on "productivity" makes no sense to me, I would have no idea what that means.
I generate between 10k and 100k lines of code per day these days. But is that a measure of productivity? Not really...
He said "generate". This is trivial to do. And probably this is what Amodei meant when he said 90% of code would be AI by now. It doesn't meant that generated code is actually useful and gets checked in.
Trivial is a pretty big word in this context. Expanding an idea into some sort of code is indeed a matter of waiting. The idea, the prompt, the design of the overall workflow to leverage the capabilities of llms/agents in a professional/long-lived codebase context is far from trivial, imo.
I tuned in to a random spot at a random episode, didn't see any coding but did get to hear you say:
"I'm a person who hates art now...I never want to see art again. All I want to see is like, AI stuff. That's how bad it's gotten. Handmade? nuh-uh. Handmade code? ... anything by humans, just over. I'm just gonna watch pixels."
I'm always a very serious person while I wait for people to join the stream. I'm sorry you weren't impressed, but tbf that's not really my goal, I just like building things and yapping about it.
I'm not sure why you bother yapping about it yourself. It's too human. Just give an LLM a list of lowercase bullet points and have an AI voiceover read them. It'll be 10x more efficient.
I very often put some random idea into the llm slot machine that is manus, use the result as a starting point to remold into a proper tool, and extract the relevant pieces as reusable packages. I've got a pretty wide treesitter/lsp/git based set of packages to manage llm output and assist with better code reviews.
Also, every llm PR comes with _extensive_ documentation / design documents / changelogs, by the nature of how these things work, which helps both humans and llm-assisted code review tools.
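To give a flavor of the treesitter side (this is an illustrative sketch, not one of my actual packages): parse an LLM-generated Go file with the community go-tree-sitter bindings and list its top-level functions, which is the kind of building block a code-review tool starts from.

```go
package main

import (
	"context"
	"fmt"

	sitter "github.com/smacker/go-tree-sitter"
	"github.com/smacker/go-tree-sitter/golang"
)

func main() {
	// Pretend this came out of the LLM slot machine.
	src := []byte("package main\nfunc a() {}\nfunc b() {}\n")

	parser := sitter.NewParser()
	parser.SetLanguage(golang.GetLanguage())

	tree, err := parser.ParseCtx(context.Background(), nil, src)
	if err != nil {
		panic(err)
	}

	// Walk the top-level declarations and report the function names.
	root := tree.RootNode()
	for i := 0; i < int(root.ChildCount()); i++ {
		child := root.Child(i)
		if child.Type() == "function_declaration" {
			fmt.Println("func:", child.ChildByFieldName("name").Content(src))
		}
	}
}
```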
Since I get downvoted because I guess people don't believe me: I'm sitting at breakfast reading a book. I suddenly think about yaml streaming parsing, start a gpt session, dig a bit deeper into streaming parser approaches, and launch a deep research run on streaming parsing, which I will print out and go through by hand tomorrow at breakfast. I then take some of the gpt discussion and paste it into Manus, saying:
“ Write a streaming go yaml parser based on the tokenizer (probably use goccy yaml if there is no tokenizer in the standard yaml parser), and provide an event callback to the parser which can then be used to stream and print to the output.
Make a series of test files and verify they are streamed properly.”
This is the slot machine. It might work, it might be 50% jank, it might be entirely jank. It’ll be a few thousand lines of code that I will skim and run. In the best case, it’s a great foundation to more properly work on. In the worst case it was an interesting experiment and I will learn something about either prompting Manus, or streaming parsing, or both.
I certainly won’t dedicate my full code review attention to what was generated. Think of it more as a hyper specific google search returning stackoverflow posts that go into excruciating detail.
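To make the slot-machine output concrete, here's a minimal sketch of the skeleton such a run might produce. goccy really does ship a tokenizer in its lexer package; the callback shape is my own illustration of the prompt above (not actual Manus output), and a true streaming version would feed the lexer incrementally.

```go
package main

import (
	"fmt"

	"github.com/goccy/go-yaml/lexer"
	"github.com/goccy/go-yaml/token"
)

// Stream fires the callback once per token so a consumer can react to a
// document as it is tokenized, instead of waiting for a full parse tree.
func Stream(src string, emit func(*token.Token)) {
	for _, tk := range lexer.Tokenize(src) {
		emit(tk)
	}
}

func main() {
	doc := "name: gopher\nitems:\n  - a\n  - b\n"
	Stream(doc, func(tk *token.Token) {
		fmt.Printf("%v %q\n", tk.Type, tk.Value)
	})
}
```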
Same. On many days 90% of my code output by lines is Claude generated and things that took me a day now take well under an hour.
Also, a good chunk of my personal OSS projects are AI assisted. You probably can't tell from looking at them, because I have strict style guides that suppress the "AI style", and I don't really talk about how I use AI in the READMEs. Do you also expect I mention that I used Intellisense and syntax highlighting too?
The author’s main point is that there hasn’t been an uptick in total code shipped, as you would expect if people are 10x-ing their productivity. Whether folks admit to using AI in their workflow is irrelevant.
Their main point is "AI coding claims don't add up", as shown by the amount of code shipped. I personally do think some of the more incredible claims about AI coding add up, and am happy to talk about it based on my "evidence", i.e. the software I am building. 99.99% of my code is ai generated at this point, with the occasional one line I fill in because it'd be stupid to wait for an LLM to do it.
For example, I've built 5-6 iphone apps, but they're kind of one-offs and I don't know why I would put them up on the app store, since they only scratch my own itches.
I'd suspect that a very large proportion of code has always been "private code", written for personal or intra-organizational purposes, which never gets released publicly.
But if we expect the ratio of this sort of private code to publicly-released code to remain relatively stable, which I think is a reasonable expectation, then we'd expect there to be a proportional increase in both private and public code as a result of any situation that increased coding productivity generally.
So the absence of a notable increase in the volume of public code either validates the premise that LLMs are not actually creating a general productivity boost for software development, or instead points to its productivity gains being concentrated entirely in projects that never do get released, which would raise the question of why that might be.
Oh yeah, I love building one off tools with it. I am working on a game mod with a friend, we are hand writing the code that runs when you play it, but we vibe code all sorts of dev tools to help us test and iterate on it faster.
Do internal, narrow purpose dev tools count as shipped code?
This seems to be a common thread. For personal projects where most details aren't important, they are good at meeting the couple things that are important to you and filling in the rest with reasonable, mostly-good-enough guesses. But the more detailed the requirements are, the less filler code there is, and the more each line of code matters. In those situations it's probably faster to type the line of code than to type the English equivalent and hand-hold the assistant through the editing process.
I don't think so, although I think at that point experience heavily comes into play. With GPT-5 especially, I can basically point cursor/codex at a repo and say "refactor this to this pattern" and come back 25 minutes later to a pretty much impeccable result. In fact that's become my favourite pastime lately.
I linked some examples higher up, but I've been maintaining a lot of packages that I started slightly before chatgpt and then refactored and worked on as I progressively moved to the "entirely AI generated" workflow I have today.
I don't think it's an easy skill (not saying that to make myself look good, I spent an ungodly amount of time exploring programming with LLMs and still do), akin to thinking at a strategic level vs at a "code" level.
Certain design patterns also make it much easier to deal with LLM code: state reducers (redux/zustand, for example), event-driven architectures, component-based design systems, and building many CLI tools that the agent can invoke to iterate and correct things. So do certain "tools" like sqlite and tmux: just by telling the LLM "btw you can use tmux/sqlite", you let it clear hurdles that would otherwise make it spiral into slop-ratatouille. (A toy sketch of the reducer shape follows below.)
I also think that a language like go was a really good coincidence, because it is so amenable to LLM-ification.
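On that note, here's the reducer shape I mean, as a toy Go sketch (all names are mine, purely illustrative). The win for LLM codegen is that every state change funnels through one pure function, so generated code has exactly one obvious place to extend.

```go
package main

import "fmt"

type State struct {
	Count int
	Log   []string
}

// Event is a closed set of state transitions the reducer understands.
type Event interface{ isEvent() }

type Increment struct{ By int }
type Note struct{ Msg string }

func (Increment) isEvent() {}
func (Note) isEvent()      {}

// reduce is the single choke point for all state changes: given the old
// state and an event, it returns the new state without side effects.
func reduce(s State, e Event) State {
	switch ev := e.(type) {
	case Increment:
		s.Count += ev.By
	case Note:
		s.Log = append(s.Log, ev.Msg)
	}
	return s
}

func main() {
	s := State{}
	for _, e := range []Event{Increment{2}, Note{"bumped"}, Increment{3}} {
		s = reduce(s, e)
	}
	fmt.Printf("%+v\n", s) // {Count:5 Log:[bumped]}
}
```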
I don’t think this is necessarily true. People that didn’t ship before still don’t ship. My ‘unshipped projects’ backlog is still nearly as large. It’s just got three new entries in the past two months instead of one.
>Do you also expect I mention that I used Intellisense and syntax highlighting too?
No, but I expect my software to have been verified for correctness and soundness by a human being with a working mental model of how the code works. But I guess that's not a priority anymore if you're willing to sacrifice $2400 a year to Anthropic.
$2400? Mate, I have a free GitHub Copilot subscription (Microsoft hands them out to active OSS developers), and work pays for my Claude Code via our cloud provider backend (and it costs less per working day than my morning Monster can). LLM inference is _cheap_ and _getting cheaper every month_.
> No, but I expect my software to have been verified for correctness, and soundness by a human being with a working mental model of how the code works.
This is not exclusive with AI tools:
- Use AI to write dev tools to help you write and verify your handwritten code. Throw the one-off dev tools in the bin when you're done.
- Handwrite your code, have the AI generate test data, and review that data like you would a junior engineer's work.
- Handwrite tests, have the AI generate an implementation, and have the agent run tests in a loop to refine itself. Works great for code that follows a strict spec. Again, review the code like you would a junior engineer's work. (A tiny example of such a handwritten test follows below.)
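For illustration, here's what the handwritten half of that loop can look like in Go. Slugify is a hypothetical function the agent is asked to implement; the stub panics on purpose, and the agent iterates until `go test` is green.

```go
package slug

import "testing"

// Slugify is the deliberately-unimplemented stub the agent must fill in.
func Slugify(s string) string {
	panic("unimplemented: agent fills this in")
}

// TestSlugify is the handwritten spec the agent refines against.
func TestSlugify(t *testing.T) {
	cases := []struct{ in, want string }{
		{"Hello, World!", "hello-world"},
		{"  spaces  ", "spaces"},
	}
	for _, c := range cases {
		if got := Slugify(c.in); got != c.want {
			t.Errorf("Slugify(%q) = %q, want %q", c.in, got, c.want)
		}
	}
}
```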
Agree. In the hands of a seasoned dev, not only does productivity improve but so does the quality of the outputs.
If I’m working against a deadline I feel more comfortable spending time on research and design, knowing I can spend less time on implementation. In the end, it takes the same amount of time, though hopefully with gains in reliability, observability, and extensibility. None of these things show up in the author’s faulty dataset and experiment.
Not sure what you mean? This was a demo in a live session that took about 30 minutes, including ui ideation (see pngs). It’s a reasonably well featured app and the code is fairly minimal. I wouldn’t be able to write something like that in 30 minutes by hand.
codeact is a really interesting area to explore. I expanded upon the JS platform I started sketching out in https://www.youtube.com/watch?v=J3oJqan2Gv8 . LLMs know a million APIs out of the box and have no trouble picking more up through context, yet struggle once you give them a few tools. In fact just enabling a single tool definition "degrades" the vibes of the model.
Give them an eval() with a couple of useful libraries (say, treesitter), and they are able not only to use it well, but to write their own "tools" (functions) and save massively on tokens.
They also allow you to build "ephemeral" apps, because who wants to wait for tokens to stream and an LLM to interpret the result when you could do most tasks with a normal UI, only jumping into the LLM when fuzziness is required?
Most of my work on this is sadly private right now, but here's a few repos github.com/go-go-golems/jesus https://github.com/go-go-golems/go-go-goja that are the foundation.
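As a flavor of the eval() idea, here's a toy version using goja (the embedded JS engine underlying go-go-goja). The readFile helper is an illustrative stand-in for the kind of library you'd expose; the point is the model writes its own helper functions instead of burning tokens on tool-call round trips.

```go
package main

import (
	"fmt"
	"os"

	"github.com/dop251/goja"
)

func main() {
	vm := goja.New()

	// Host function exposed to the LLM-written script; the script can
	// compose its own "tools" on top as plain JS functions.
	vm.Set("readFile", func(path string) string {
		b, err := os.ReadFile(path)
		if err != nil {
			return ""
		}
		return string(b)
	})

	// Example of the kind of code a model might eval().
	v, err := vm.RunString(`
		const countLines = (p) => readFile(p).split("\n").length;
		countLines("main.go");
	`)
	if err != nil {
		panic(err)
	}
	fmt.Println("lines:", v.Export())
}
```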
This take is toxic. You could write the same article in 2001 and lament all the newcomers writing insecure applications in php3, or in 2009 about all the newcomers writing insecure applications with node.js.
The solution is not to aggressively shame people into doing things the way you learned to do them, but to provide not just education and support, but better tools and frameworks to build applications such as these securely.
Is it really toxic though? The dev shipped something that compromises the privacy of their users and shows zero regard for quality or law. Once you cross the line of shipping something, it's no longer a hobby thing, and likewise, this is something that Apple approved into the App Store. Both the dev and Apple failed in their due diligence.
The post points out exactly what's wrong; however, if it wasn't already, it should have been sent to the dev prior to publishing the vuln(s). How can you educate somebody who doesn't actually know how to develop something? It's just prompting an AI.
The real story here is that Apple's standards have been continually slipping.
There are millions of apps, small software shops, and small shop websites everywhere. The idea that all of these are following best practices is pure fantasy.
Not only would you contact the author first, but spamming users with edgy notifications is puerile at best. As for “it’s just prompting an AI”, who cares, this person built an application that people find useful. This is the world we are at now, where a new set of people can use computers to make things happen. More senior developers can rage against the clouds, but that only gets you so far. This kind of gatekeeping happens at each wave of democratization of building software.
There’s also some pervasive view that handcrafted human code is somehow of superior quality which… uh…
They did. They claim that the author was not keen on fixing the problems.
> There’s also some pervasive view that handcrafted human code is somehow of superior quality which… uh…
That's completely orthogonal to the issue here. Nice bait, but I'm not biting!
Whether handcrafted or vibecoded, a service is being shipped here to actual users with lives and consequences. The developer of the service is making money. The developer owes it to themselves and their users to conduct a basic security audit. Otherwise it is gross negligence!
right, do you think this article is going to be very productive in that regard? If the author of the blog approached the author of the software in that manner (hey, you have kids on the app, btw I spammed them with porn humor), do you think they would wave it away?
As for the human code thing, it's not bait. I don't know if you were around in the php or early node days, but beginners were... not writing that kind of code.
I agree that the ease of vibecoding things that turn out to be useful, things people immediately want to pay money for, means that tackling security issues is a priority.
Saying that certain people shouldn't be allowed on the internet, based on your decades of experience _being_ on the internet, is just going to cause you to wither away and drown in cynicism.
> As for “it’s just prompting an AI”, who cares, this person built an application that people find useful.
I feel you've rather missed my point.
You said that we should educate people. I said that the app was just created via prompting. How can we impart years worth of information unto someone who is LARPing as a dev and doesn't even know the fundamentals?
This is the programming equivalent of a kid getting access to their father's gun. The only thing you can do is tell them to stop, explain why it was wrong and tell them to educate themselves. It isn't our job to educate at that level as bystanders and perhaps even victims.
I feel like it is. What should happen? Everybody born after 2015 is forbidden to use a computer? Or should only be allowed under strict supervision to be typing in code by hand? When people told me that in the nineties, with my linux, putting up shoddy cgi-bins, I just gave them the finger and said "whatever man".
The people who had an influence on my life and taught me how to do things properly were those who took me seriously as someone building software. And this person built software, the same way I now build software without having to think about every byte and malloc, knowing that I don't really have to gaf about how much memory I allocate. It's fine, because we have good GCs and a lot of resources to learn about memory management when things hit the limit. The solution wasn't to say that everybody not programming in C or assembly should not be allowed near a computer.
What should happen? Probably what happened here: disclose, and when the developer chooses to ignore it, bring in the shaming and pressure campaign. Someone's right to tinker and learn doesn't trump the rights of the victims they are exposing. Releasing code for public consumption carries responsibilities, and no one is entitled to make money at the expense of others. If I started selling dodgy go-karts made from scrap metal to kids, it would be the same principle. I am entitled to mess around and even ride one myself, but bringing other people into your orbit of incompetence is another thing.
Maybe the article should reflect that? This just seems like "I found an app that has a security hole and I'm being a dick about it". Sure, feel free to do it; I don't think it's productive, and it's actually toxic. This is not a new situation; this is a pattern we have observed since the internet existed, vibe coding or not. However, compared to 30 years ago, we now have better investigation and disclosure procedures, as well as a much better understanding of how to build secure applications and how to teach people about them. It's not about this guy Christian, it's about a whole generation of new developers that are joining us more senior developers. I think that is fantastic.
I feel you're taking the idea of someone being disallowed to do something too literally. The younger generations say extreme stuff all the time, but you don't take it literally. Context is key. Op's girlfriend is in her mid twenties according to the blog post if she didn't lie about her age on the account she registered. This is what people in their twenties are like these days.
The dev is making money from his prompted output—he can pay for his own education if he chooses to receive an education, but you have boundary issues if you want to force someone to be educated. This is what op realized that you didn't—you usually cannot force someone to learn or take responsibility for their behaviour as a bystander, you can only document it and attempt to get help from someone more able to do so once they've got all the facts. Do I agree with the method completely? No, but what's done is done.
What is necessary here isn't an education, it's personal development and emotional maturity, which come with experience and thus with time, allowing accountability for mistakes. You can't teach that to someone who isn't ready for it and doesn't want to learn it.
I was a young dickhead too once, I know them when I see them. You only have to see their tweets to realize they are a young dickhead.
We go back to likening it to a kid finding their father's gun or stealing condoms from their old man. Sure, they can produce a child when it turns to shit, but the time to have learned is before, not after. After? It's about taking responsibility for your actions. The action has been taken, the consequences must now be dealt with as per law.
What should happen? Apple should take the app down immediately and an internal investigation should be started. The host should follow their policies on ToS breaches and account termination and report it to the relevant authorities to protect their own legal interests. As for the dev? I personally don't care, we are far beyond that moment now. What about the users? Will they be informed? What's the scale? Are their passwords compromised too?
Complete assholes can build things—why should we give them energy to build things that serve their own asshole agenda? It's an unoriginal, derivative slop app. If the dev wants to learn, they can pay for an education, but they'd be better off seeking legal counsel immediately.
Anyone can make software. But not everybody should with the level of personal development they're at in any given moment. It's an ever-moving target. Teen pregnancy or in young adolescence? Disaster. Pregnancy in thirties? Normal and can deal with it. Time changes things. Sometimes. For some people.
Romanticising what happened to you in the '90s helps nobody. It's 2025. There are laws to protect people from things like this, and Apple slipped up big time in approving this in the first place. The vast syllabi weren't in place in the '90s either, nor the embarrassment of riches in readily available educational materials, free or cheap, that we have today. The dev can pay for an LLM, so he can pay for an education if he wants one.
The dev wanted a shortcut though because he is lazy. Play stupid games, win stupid prizes.
Op is young too, but op is clearly intelligent and well-intentioned. There's no money in him having written the blog post, and even if it misses the mark on several levels for me, I understand what they're trying to do. The dev? Greedy and lazy with zero regard for their users, law, and shirks accountability.
If you want to educate anyone, educate op who wrote the blog post, their heart is at least in the right place, but obviously young too. It happens to all of us.
Despite your greater number of years, you too perhaps have some personal development to work on. You immediately jump down the throat of people you incorrectly perceive to be shit-talking using AI to code, and that's because it clearly touches something you're insecure about, given that you run this: https://x.com/ProgramWithAI
If you're so sure of yourself and that what happened to you is so great, where is your own confidence? The inability to engage with the topic at hand yet consistently attempting to make it about something else entirely screams insecurity or abusing an LLM to parse everything for you. The loudest people are frequently the least confident.
If you don't see what's wrong with what the dev did or what Apple failed to do then that says it all. If you're using these tools to prompt your way into being a dev and seeing these problems too then perhaps you should feel unconfident. I would be quaking in my boots at seeing someone else go through a "that could have been me with a different roll of the dice" kind of scenario.
Don't mistake vibe coders for developers. They're frequently prompt engineers LARPing as devs. Likewise, musicians are not always composers, and DJs are not always musicians. Totally different disciplines. Loaded digital guns in the hands of young dickheads is not "fantastic"—it's a disaster of unprecedented scale. "Us senior devs" are the father figures and they've gotten access to not just one gun, but the entire global armory with the inevitable lack of judgement capabilities typical of someone their age.
A blog post is going to be the least of the dev's concerns, frankly. The likely legal shitstorm that's probably coming his way is going to make your comments here look bizarre.
Building tools that enable people with no experience to create and ship software without following any good software engineering practices.
This is in no way comparable to any previous period in the industry.
Education and support are more accessible than ever. Even the tools used to create such software can be educational. But you can't force people to learn when you give them the tools to create something without having to learn. You also can't blame them for using these tools as they're marketed. This situation is entirely in the hands of AI companies. And it's only going to get worse.
The only thing experienced software developers outside of the AI industry can do is observe from the sidelines, shake our heads, and get some laughs out of this shit show. And now we're the bad guys? Give me a break.
A computer always was a tool to enable people without technical knowledge to build software. That was true for me as a 9-year-old in the '80s.
LLMs are incredible engineering tools and brushing them aside as nonsense is imo doing a disservice to everybody, and especially ourselves if we take our craft seriously. You can literally replace llm with php and post the same take on usenet in 1999, or whenever you started writing software.
I am tired of engineers just throwing their hands up and being defeatist while fully endorsing whatever narratives the ai industry is throwing out there, when what we are talking about is a big pile of floats that is able to generate something that makes it into the App Store. It is unprecedented in its abilities, but it’s also nothing new conceptually. It makes computer things easier.
> A computer always was a tool to enable people without technical knowledge to build software.
That's just not true.
Every past technology that claimed to enable non-technical people to build software has either failed, or was ultimately adopted by technical people. From COBOL, to BASIC, to SQL, to Low-Code, to No-Code, and others. LLMs are the latest attempt at this, and so far, they've had much more success and mainstream adoption than anything that came before.
The difference with LLMs is that it's the first time software can be built and deployed via natural language by truly anyone. This is, after all, their most advertised feature. The skills required to vibe code are reading and writing English, and basic knowledge to use a modern computer. This is a much lower skill requirement than for using any programming language, no matter how user friendly it is. Sure, there is a lot of poor quality software today already, but that will pale in comparison to the software that will be written by vibe coding. Most of the poor quality software before LLMs was limited in scope and reach. It would never have been deployed, and it would remain abandoned in some GitHub repo. Now it's getting deployed as quickly as it can be generated. "Just fucking ship it."
> LLMs are incredible engineering tools and brushing them aside as nonsense is imo doing a disservice to everybody
I'm not brushing them aside as nonsense. I use these tools as well, and have found them helpful at certain tasks. But there is a vast difference from how domain experts use these tools, and how the general public uses them. Especially people who are only now getting into software development, and whose main interest is to quickly cash out. If you think these people care about learning best software development practices, you'd be sorely mistaken. "Just fucking ship it."
I don't think that COBOL, BASIC, SQL have failed. They allowed many non-technical people to get started building things with computers. The skills to vibe-code (or, more generally, to build applications with LLMs) are not reading and writing English; they are the skill of using LLMs to build applications.
In the context of people not learning "real programming", you can equate LLMs to say, wordpress plugins or making a squarespace site. Deployment of software has never been gated by how much effort it took to write it; there are millions of wordpress sites out there that get deployed way faster than an LLM can generate code.
If we care about the security of it all, then let's build the platforms to have LLMs build secure applications. If we care about the craft of programming, whatever that means in this day and age, then we need to catch people building where they are. I'm not going to tell people to not use computers because they want to cash out, they will just use whatever tool they find anyway. Might as well cash out on them cashing out while also giving them better platforms to build upon.
As far as the OP goes, these kind of security issues due to hardcoded credentials are basically the hallmark of someone shipping a (mobile|web) app for the first time, LLMs or not. The only reason the LLM actually used that is because it was possible for the user to provide it tokens, instead of replit/lovable/expo/whatever providing a proper way to provision these things.
Every cash-out-fast bro out there these days uses stripe and doesn't roll their own payment processing anymore. They certainly used to do so, back when it was just a matter of clicking a random wordpress plugin. That, I think, is a more productive way to tackle the issue.
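To be concrete, the "platform provisions the secret" idea is basically a thin server-side proxy, so the token never ships inside the client at all. A minimal sketch (the upstream URL, env var name, and route are all made up):

```go
package main

import (
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	// Injected by the hosting platform at deploy time, never committed
	// and never embedded in the client bundle.
	apiToken := os.Getenv("UPSTREAM_TOKEN")

	http.HandleFunc("/api/chat", func(w http.ResponseWriter, r *http.Request) {
		req, err := http.NewRequest("POST", "https://api.example.com/v1/chat", r.Body)
		if err != nil {
			http.Error(w, "bad request", http.StatusBadRequest)
			return
		}
		// The credential is attached server-side only.
		req.Header.Set("Authorization", "Bearer "+apiToken)

		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			http.Error(w, "upstream error", http.StatusBadGateway)
			return
		}
		defer resp.Body.Close()
		io.Copy(w, resp.Body)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```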
> I don't think that COBOL, BASIC, SQL have failed. They allowed many non-technical people to get started building things with computers.
Those didn't fail, but they're certainly not used by non-technical people. That was my point: that all technologies that previously promised to make software development accessible for non-technical people didn't deliver on that promise, and that they're used by software engineers today. I would chalk up the Low-Code and No-Code tools as general failures, since neither business people nor engineers want to use them.
> In the context of people not learning "real programming", you can equate LLMs to say, wordpress plugins or making a squarespace site.
I don't think that's an accurate comparison, as website builders only cover a small fraction of what's possible with "real programming". Web authoring and publishing tools have existed since the dawn of the web, and the modern ones simply turned it into a service model.
LLMs OTOH allow creating any type of software (in theory). They're much broader in scope, and lower the skill requirements to create general-purpose software much more than any previous technology. The software in TFA was an iOS app. This is why they're a big deal, and why we're seeing scam artists and grifters pump out these low-effort applications in record time and volume. They were already enabled by WordPress and Squarespace, and there are certainly a lot of scam and spam sites on the web thanks to website builders, but their scope, reach and productivity got multiplied by LLMs.
> If we care about the security of it all, then let's build the platforms to have LLMs build secure applications.
That's easier said than done, if it's possible at all. Security, privacy, and bug-free software are not things that can be automated, at least with current technology. They require great care and attention to detail from expert humans, which grifters have zero interest in, and which non-expert non-grifters don't have the experience or patience for. Vibe coding, after all, is the idea that you keep pasting errors to the LLM and prompting it until the software on the surface works as you expect it to. Code is just the translation layer for the LLM to write and interpret; vibe coders don't want to know about it.
Could we encode some general security and privacy hints in the LLM system prompt so that it can check for specific issues? Sure. It will never be exhaustive, though, so it would just give a false sense of security.
> As far as the OP goes, these kind of security issues due to hardcoded credentials are basically the hallmark of someone shipping a (mobile|web) app for the first time, LLMs or not.
Agreed. What I think you're not taking into account is the fact that there is a large swath of the population who just doesn't care about this. The only thing they care about is having an easy way to pump out a service that attracts victims who they can quickly exploit in some way. Once that service is no longer profitable, they'll replace it with another. What LLMs have given these people is an endless revenue stream with minimal effort.
This is not the same group of people who cares about software, the product they're building, and their users. Those are a small minority of the new generation of software developers who will seek out best practices and figure out how to use these tools for good. Unfortunately, I don't think they will ever become experts at anything other than interacting with an LLM, but that's a separate matter.
So the key point is: building high quality software starts with caring. Good practices that ensure high quality are discovered by intentionally seeking out established knowledge, or by trial and error. But the types of issues we're seeing here are not because the developer is inexperienced and made a mistake—it's because they don't care. Which should be criticized and mocked in public, and I would argue regulated and fined, depending on the severity. I even think that a software development license is even more important today than ever before.
re o3: you can zip the file, upload it, and it will use python and grep and the shell to inspect it. I have yet to try using it with a sqlite db, but that's how i do things locally with agents.
The author mentions that by doing that they didn't get a high quality response. Adding the texts into the model's context makes all the information available for it to use.
Otherwise, 99% of my code these days is LLM generated; there's a fair amount of visible commits from my opensource work on my profile https://github.com/wesen .
A lot of it is more on the system side of things, although there are a fair amount of one-off webapps, now that I can do frontends that don't suck.
Software methodologies and workflows are not engineering either, yet we spend a fair amount of time iterating and refining those. You can very much become better at prompt engineering. There is a huge differential between individuals, for example.
The code coming out of LLMs is just as deterministic as code coming out of humans, and despite humans being fickle beings, we still talk of software engineering.
As for LLMs, they are and will forever be "unknowable". The human mind just can't comprehend what a billion parameters trained on trillions of tokens under different regimes for months corresponds to. While science takes microscopic steps towards understanding the brain, we still have methods to teach, learn, be creative, be rigorous, and communicate that do work, despite it being this "magical" organ.
With LLMs, you can be pretty rigorous. Benchmarks, evals, and just the vibes of day-to-day usage if you are a programmer are not "wishful thinking"; they are reasonably effective methods, and the best we have.
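To be concrete about what "evals" means at its simplest, a minimal sketch: runModel is a hypothetical stand-in for a real inference call, and the cases plus the scoring are the part you actually own and iterate on.

```go
package main

import "fmt"

// runModel is a placeholder for whatever inference call you use.
func runModel(prompt string) string {
	return "4"
}

func main() {
	evals := []struct{ prompt, expect string }{
		{"2+2=", "4"},
		{"capital of France?", "Paris"},
	}
	pass := 0
	for _, e := range evals {
		if runModel(e.prompt) == e.expect {
			pass++
		}
	}
	fmt.Printf("passed %d/%d\n", pass, len(evals))
}
```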
- documentation (reference, tutorials, overviews)
- tools
- logging and log analyzers
- monitoring
- configurability
- unit tests
- fuzzers
- UIs
- and not least: lots and lots of prototypes and iterating on ideas
All of these are "trivial" once you have the main code, but they are incredibly valuable, and LLMs do a fantastic job.
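Taking the fuzzers bullet as an example, Go's built-in fuzzing makes this genuinely cheap to generate and keep around. A minimal sketch, where Parse is a stand-in for whatever your main code exposes:

```go
package parser

import (
	"strings"
	"testing"
)

// Parse is a toy stand-in for the function under test: it reads
// "key: value" lines into a map.
func Parse(s string) (map[string]string, error) {
	out := map[string]string{}
	for _, line := range strings.Split(s, "\n") {
		if k, v, ok := strings.Cut(line, ":"); ok {
			out[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	return out, nil
}

// Run with `go test -fuzz=FuzzParse`; the only property checked here
// is "never panics", which is already worth having for free.
func FuzzParse(f *testing.F) {
	f.Add("key: value") // seed corpus entry
	f.Fuzz(func(t *testing.T, input string) {
		_, _ = Parse(input)
	})
}
```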
Even putting the Common Lisp aside, PAIP is my favourite book about programming in general, by FAR. Norvig's programming style is so clear and expressive, the book touches on more "pedestrian" parts of programming: building tools / performance / debugging, but also walks you through a serious set of algorithms that are actually practical and that I use regularly (and they shape your thinking): search, pattern matching, to some extent unification, building interpreters and compilers, manipulating code as data.
It's also extremely fun, you go from building Eliza to a full pattern matcher to a planning agent to a prolog compiler.
It's always hard to parse whether people mean functional programming when bringing up Lisp. Common Lisp certainly is anything but a functional language. Sure, you have first-class functions, but you in a way have that in pretty much all programming languages (including C!).
But most functions in Common Lisp do mutate things, there is an extensive OO system and the most hideous macros like LOOP.
I certainly never felt constrained writing Common Lisp.
That said, there are pretty effective patterns for dealing with IO that allow you to stay in a mostly functional / compositional flow (dare I say monads? but that sounds way more clever than it is in practice).
> It's always hard to parse whether people mean functional programming when bringing up Lisp. Common Lisp certainly is anything but a functional language. Sure, you have first-class functions, but you in a way have that in pretty much all programming languages (including C!).
It's less about what the language "allows" you to do and more about how the ecosystem and libraries "encourage" you to do.
As a long-time embedded programmer, I don't understand this. Even 20 years ago, there was no way I really understood the machine, despite writing assembly and looking at compiler output.
10 years ago, running an arm core at 40 Mhz, I barely had the need to inspect my compiler's assembly. I still could roughly read things when I needed to (since embedded compilers tend to have bugs more regularly), but there's no way I could write assembly anymore. I had no qualms at the time using a massively inefficient library like arduino to try things out. If it works and the timing is correct, it works.
These days, when I don't do embedded for work, I have no qualms writing my embedded projects in micropython. I want to build things, not micro-optimize assembly.
> As a long time embedded programmer, I don't understand this
I think you both should define what your embedded systems look like. The range is vast, after all: it goes from an 8-bit CPU [0] with a few dozen kilobytes of RAM to what is almost a full modern PC. Naturally, the incentives to program at a low level are very different across that range.
I was trying to bit-bang five 250 kHz I2C channels on a 16 MHz ATTiny while acting as an I2C slave on a 6th channel.
This is really not something you can do with normal methods: the CPU is just too slow and the compiled code is too long. No high-level language can do what I want because the compiler is too stupid. My inline assembly is simple and fast enough that I can get the bitrate I need.
In my view, there are two approaches to embedded development: programming à la mode with arduino and whatever unexamined libraries you find online, or the register-hacker path.
There are people who throw down any code that compiles and move on to the next thing without critical thought. The industry is overflowing with them. Then there are the people who read the datasheet and the instruction set. The people painstakingly writing the drivers for I2C widgets instead of shoving magic strings into Wire.Write.
I enjoy micro-optimizing assembly. I find it personally satisfying and rewarding. I thoroughly examine and understand my work because no project is actually a throwaway. Every project I learn something new, and I have a massive library of tricks I can draw from in all kinds of crazy situations.
If you did sit down to thoroughly understand the assembly of your firmware projects, you'd likely be aghast at the quality of code you're blindly shipping.
All that aside for a moment, consider the real cost of letting your CPU run code 10x slower than it should. Your CPU runs 10x longer and consumes a proportional amount of energy. If you're building a battery powered widget, that can matter a lot. If your CPU is more efficient, you can afford a lighter battery or less cooling. You have to consider the system as a whole.
This attitude of "ship anything as quickly as possible" is very bad for the industry.