That's really good: it means more exposure, more exposure means more improvement, and more improvement eventually flushes out the nasty bugs and reduces the attack surface in the long run.
The ability to pick fields is nice, but the article fails to mention GraphQL's schema stitching and federation capability, which is its actual killer feature and one yet to be seen in any other "RPC" protocol, not counting gRPC, which is insanely good for the backend but maybe too demanding for the web, even with grpc-web *1.
It lets you split your GraphQL schema into multiple "subgraphs" served by different microservices, which facilitates separation of concerns at the backend level, while composing them back into one unified graph for the frontend, giving you the best of both worlds, in theory.
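As a minimal sketch of what that looks like in Apollo Federation-style SDL (type and field names are made up, and the @link/directive boilerplate each subgraph needs is omitted): two services each own part of the same `User` type, and the gateway composes them into one graph.

```graphql
# users subgraph, served by the users service
type User @key(fields: "id") {
  id: ID!
  name: String!
}

# orders subgraph, served by the orders service;
# it contributes User.orders while the users service owns User.name
type User @key(fields: "id") {
  id: ID!
  orders: [Order!]!
}

type Order {
  id: ID!
  total: Float!
}
```

A frontend query like `{ user(id: "1") { name orders { total } } }` then fans out to both services behind one endpoint.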
Yet unfortunately, both stitching and federation are rare in practice, partly because people lack the fundamentals to comprehend and manage that complexity, and partly because web development moves so fast that products are shipped and replaced year after year: the old code is basically thrown away and left unmaintained, eventually "siloified"/solidified *2. So it is natural that a simple solution like REST with OpenAPI/Swagger beats the more complicated GraphQL, because the tech market right now just wants to ship the product quick and dirty, get the money, then let it go, rinse and repeat. The last 30 years of VC have basically been that.
So let me tell you the real reason GraphQL lost: GraphQL is the good money that got driven out, because the market just needs money, regardless of whether it is good, bad, or ugly.
It is so natural, and I've tried to make it run in the new single-file C#, plus dependency injection and NativeAOT... I think I posted the single-file code in their discussions tab, but I couldn't find it.
Another honorable mention would be this: https://opensource.expediagroup.com/graphql-kotlin/docs/sche..., which I used before together with Koin and Exposed, but I eventually went back to Spring Boot and Hibernate because I needed the integrations, much as I loved having the innovation.
*1: For example, why force everyone onto HTTP/2, and thus effectively enforce TLS by convention? This makes gRPC development quite hard: you need a self-signed key and certificate just to start the server, and that is already a big barrier for most developers. And protobuf, being a compact and concise binary encoding, is basically unreadable without the schema/reflection/introspection, while GraphQL still returns JSON by default and can return MessagePack/CBOR based on what the HTTP request header asks for. Yes, grpc-web does return JSON and can be configured to run over h2c, but it feels like an afterthought, not something designed for frontend developers.
*2: Maybe the better word would be "enshittified", but enshittification is a dynamic race to the bottom, while what I mean is more like rotting to death like a zombie, so is that too overboard?
The problem is, container-based (or immutable) development environments, like Dev Containers and Nix Flakes, still aren't the popular choice for most development.
I self-hosted DevPod and Coder, but it is quite tedious to do so. I'm experimenting with Eclipse Che now and I'm quite satisfied with it, except that it is hard to set up (you need a K8s cluster attached to an OIDC endpoint for authentication and authorization, and a git forge for credentials), and the fact that I cannot run the real web version of VS Code (it looks like VS Code, but IIRC it is a Monaco-based fork that looks almost one-to-one like VS Code, yet isn't exactly it) or most extensions on it (it is thus limited to Open VSX) is a dealbreaker. But in exchange I get a pure K8s-based development lifecycle: all my dev environments live on K8s (including temporary port forwarding; I have wildcard DNS set up for that), so all my work lives on K8s.
Maybe I could combine a few more open source projects together to make a product.
Uhm, pardon my ignorance... but wouldn't restricting an AI agent in a development environment be just a matter of a well-placed systemd-nspawn call?...
That's not the only thing you need to manage. A system-level sandbox is all about limiting the physical scope (physical in the sense of interacting with the system through the shell and syscalls) of what the LLM agent can reach, but what about the logical scope it can reach before anything hits the physical scope? E.g. git branch/commit, npm run build, kubectl apply, or psql running a script that truncates your SQL table or drops the database. Those are not easily controllable, since whether they are safe depends on concrete, contextual details.
Sure, but at least we can slow down that fat finger by adding safeguards and clean boundary checks; with an LLM agent, things are automated at a much higher pace, more "fat fingers" can happen simultaneously, and the cascading effects can go beyond repair. This is why we need not just physical limitations but logical limitations as well.
That's exactly why I let the LLM run read-only commands automatically, but anything that could potentially trigger mutation (either removal or insertion) requires manual intervention.
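For illustration, here is a deliberately naive sketch of that policy as a wrapper in C: a command runs unattended only if it matches a read-only prefix allowlist, and everything else waits for a human. The allowlist and names are made up, and prefix matching is exactly the kind of "logical scope" check a clever command (`git status && rm -rf .`) can slip past, which is the point made above: a real gate needs to actually parse the command.

```c
/* Hypothetical read-only gate for agent-issued shell commands.
 * The allowlist is illustrative; prefix matching is intentionally
 * naive and easy to bypass with command chaining. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static const char *readonly_prefixes[] = {
    "git status", "git log", "git diff",
    "kubectl get", "kubectl describe",
    "ls", "cat", "grep",
};

static int is_readonly(const char *cmd) {
    for (size_t i = 0;
         i < sizeof readonly_prefixes / sizeof *readonly_prefixes; i++)
        if (strncmp(cmd, readonly_prefixes[i],
                    strlen(readonly_prefixes[i])) == 0)
            return 1;
    return 0;
}

int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s '<command>'\n", argv[0]);
        return 1;
    }
    const char *cmd = argv[1];
    if (!is_readonly(cmd)) {
        /* Potentially mutating: require manual sign-off. */
        fprintf(stderr, "mutating command, run? [y/N] ");
        int c = getchar();
        if (c != 'y' && c != 'Y')
            return 1;
    }
    return system(cmd);
}
```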
Another way to mitigate this is to take a filesystem snapshot on each approved mutating command (that's where CoW filesystems like ZFS and Btrfs shine), except you also have to block the LLM from deleting your filesystem and snapshots, or dd'ing over your block devices to corrupt them, and I bet things will eventually, egregiously, evolve to that point.
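Continuing the gate sketch above (the ZFS dataset name `tank/work` is an assumption): snapshotting before each approved mutation is essentially a one-liner, so it can be bolted onto the same approval path.

```c
/* Hypothetical pre-mutation hook for the gate sketched earlier:
 * take a timestamped ZFS snapshot before running an approved
 * mutating command. Dataset name "tank/work" is made up. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static int snapshot_before_mutation(void) {
    char cmd[128];
    /* e.g. "zfs snapshot tank/work@pre-1714650000" */
    snprintf(cmd, sizeof cmd, "zfs snapshot tank/work@pre-%ld",
             (long)time(NULL));
    return system(cmd);   /* 0 on success; recover with `zfs rollback` */
}
```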
I hate to say it (and I know a lot of C apologists will downvote this), but there is no native closure in C. All you have is the function pointer, and you have to manually add a "context" pointer to turn it into a closure in the strict (textbook) sense. That's because C has no concept of "data ownership", only automatic storage (on the stack or in registers) and manual memory (malloc/sbrk'd blocks), while a (again, textbook) closure requires access to the data of the caller/"parent"/enclosing scope [^1].
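To make the textbook distinction concrete, here is a minimal sketch of the usual C idiom (names are illustrative): a bare function pointer plus an explicit, hand-threaded context pointer. The "environment" travels as a separate argument because the language will not capture it for you.

```c
#include <stdio.h>

/* The "closure", assembled by hand: captured state... */
struct add_env {
    int step;
};

/* ...and a plain function that expects the environment explicitly. */
static int add_step(struct add_env *env, int value) {
    return value + env->step;
}

/* Any higher-order function must thread the context through itself;
 * nothing in C associates `env` with `fn` for you. */
static void apply(int *xs, int n,
                  int (*fn)(struct add_env *, int),
                  struct add_env *env) {
    for (int i = 0; i < n; i++)
        xs[i] = fn(env, xs[i]);
}

int main(void) {
    int xs[] = {1, 2, 3};
    struct add_env env = { .step = 10 };   /* "capture" by hand */
    apply(xs, 3, add_step, &env);
    printf("%d %d %d\n", xs[0], xs[1], xs[2]);   /* 11 12 13 */
    return 0;
}
```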
And that's why I generally don't consider C to have closures; getting them requires a JIT/dynamic code-generation approach like the one this article actually takes (using shadow stacks). There is also a hack in GNU C that introduces local (nested) function lambdas, but it is not in ISO C, and obviously won't be for the next decade or so.
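For reference, a sketch of that GNU C extension: a nested function reading a local variable from the enclosing frame. It compiles with GCC only (Clang rejects nested functions), and taking the function's address makes GCC synthesize a runtime trampoline on an executable stack, which is its own small dose of dynamic code generation.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int key = 10;

    /* GNU C nested function: captures `key` from the enclosing frame. */
    int cmp(const void *a, const void *b) {
        int x = abs(*(const int *)a - key);
        int y = abs(*(const int *)b - key);
        return (x > y) - (x < y);
    }

    int xs[] = {3, 14, 9, 1};
    qsort(xs, 4, sizeof *xs, cmp);   /* sorts by distance from `key` */
    printf("%d %d %d %d\n", xs[0], xs[1], xs[2], xs[3]);
    return 0;
}
```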
Not only could this improve Minecraft world generation (heh), it could also be useful for 3D surface material generation, namely layering different materials and generating them with multiple diffusion passes, if you look at the surface as a microscopic terrain.