You won't run out of 64-bit integers. IMO, a 64-bit integer (or even smaller for tables that aren't expected to grow much) is the best approach for internal database IDs. If you want to expose IDs, it might make sense to introduce a second UUID for selected tables, to hide the internal ID.
It's a tooling issue. No one has done the work to make things work as smoothly as they could.
Traditionally, cross-compilers didn't even work the way the Zig and Go toolchains approach it; achieving cross-compilation could be expected to be a much more trying process. The Zig folks and the Go folks broke with tradition by choosing to architect their compilers more sensibly for the 21st century, but the effects of the older convention remain.
In my experience, the cross-compiler will refuse to link against shared libraries that "don't exist", which they usually don't in a cross-compiler setup (e.g. cross-compiling an aarch64 application that uses SDL on a ppc64le host that only has ppc64le SDL libraries).
The usual workaround, I think, is to use dlopen/dlsym from within the program. This is how the Nim language handles libraries in the general case: at compile time, C imports are converted into a block of dlopen/dl* calls, with compiler options for indicating that some (or all) libraries should be passed to the linker instead, for either static or dynamic linking.
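A minimal sketch of that pattern (in C++ here; libfoo.so and foo_add are made-up names for illustration, not anything from Nim's output):

    // Resolve the library at run time instead of link time, so the
    // build-host linker never needs the target .so to exist.
    // Link with -ldl on older glibc.
    #include <dlfcn.h>
    #include <cstdio>

    int main() {
        void *handle = dlopen("libfoo.so", RTLD_NOW);
        if (!handle) {
            std::fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }
        // dlsym returns a void*, cast to the expected function type.
        auto foo_add = reinterpret_cast<int (*)(int, int)>(dlsym(handle, "foo_add"));
        if (!foo_add) {
            std::fprintf(stderr, "dlsym: %s\n", dlerror());
            dlclose(handle);
            return 1;
        }
        std::printf("foo_add(2, 3) = %d\n", foo_add(2, 3));
        dlclose(handle);
        return 0;
    }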
Alternatively, I think you could "trick" the linker with a stub library containing just the symbol names it wants, but I've never tried that.
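Untested, but I'd guess the stub trick looks roughly like this, with SDL symbols and toolchain names purely as placeholders:

    // sdl_stub.cpp -- link-time stand-in only; these bodies never run.
    // At run time the dynamic linker resolves the same symbols against
    // the real library on the target. extern "C" keeps the names
    // unmangled so they match the real library's exports.
    extern "C" {
        int SDL_Init(unsigned int) { return 0; }
        void SDL_Quit(void) {}
    }
    // Assumed build/link steps (toolchain names are illustrative):
    //   aarch64-linux-gnu-g++ -shared -fPIC sdl_stub.cpp -o libSDL2.so
    //   aarch64-linux-gnu-g++ app.cpp -L. -lSDL2 -o app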
You just need a compiler & linker that understand the target + image format, and a sysroot for the target. I've cross compiled from Linux x86 clang/lld to macOS arm64, all it took was the target SDK & a couple of env vars.
Clang knows C, lld knows macho, and the SDK knows the target libraries.
Well, you need to link against them, and you can't do that when they don't exist. I don't understand the purpose of a stub library; it's also only a file, and if you need to provide that, you can just as well provide the real thing right away.
It took the computer scientists of the past a lot of effort to come up with these complicated algorithms. They are not easy or trivial. They are complicated, and it's OK that you can't just quickly understand them. Your imaginary "real developer" at best memorised the algorithms, but that hardly differs from a smart monkey, so it's probably not something to be very proud of.
It is your choice which career to pursue, but in my experience, the vast majority of programmers don't know algorithms and data structures beyond the very shallow understanding required to pass some popular interview questions. Maybe you've set yourself artificial barriers that weren't necessary.
To be a professional software developer, you need to write code that solves real-life tasks. These tasks are mostly super-primitive in terms of algorithms. You just glue together libraries and write so-called "business logic" as an incomprehensible tree of if-s which nobody truly understands. People love it and pay money for it.
Thanks for your kind comment! I don't have any systematic learning in computer science; I often feel confused when reading textbooks on algorithms hahaha.
Should I be familiar with every step of Dijkstra’s search algorithm and remember the pseudocode at all times? Why don’t the textbooks explain why the algorithm is correct?
> Should I be familiar with every step of Dijkstra’s search algorithm and remember the pseudocode at all times?
Somehow, I think you already know the answer to that is "no".
I've been working as a software engineer for over 8 years, with no computer science education. I don't know what Dijkstra's search algorithm is, let alone have the pseudocode memorised. I flicked through a book on data structures and algorithms once, but that was after I got my first software job. Unless you're only aiming for Google etc., you don't really need any of this.
You should know the trade-offs of different algorithms, though. Many libraries let you choose the implementation for a specific problem, for instance a tree vs. a hash map, where you trade memory for speed.
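A quick C++ sketch of that trade-off, using the two standard containers:

    #include <iostream>
    #include <map>
    #include <string>
    #include <unordered_map>

    int main() {
        // Balanced tree: keys stay sorted, O(log n) lookups, supports
        // ordered iteration and range queries.
        std::map<std::string, int> tree{{"b", 2}, {"a", 1}, {"c", 3}};
        for (const auto& [k, v] : tree) std::cout << k << '=' << v << ' ';
        std::cout << '\n';  // prints: a=1 b=2 c=3

        // Hash table: O(1) average lookups, but extra memory per bucket
        // and no ordering guarantees.
        std::unordered_map<std::string, int> hash(tree.begin(), tree.end());
        std::cout << hash.at("b") << '\n';  // prints: 2
    }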
> Why don’t the textbooks explain why the algorithm is correct?
The good ones do!
> Should I be familiar with every step of Dijkstra’s search algorithm and remember the pseudocode at all times?
If it’s the kind of thing you care to be familiar with, then being able to rederive every step of the usual-suspect algorithms is well within reach, yes. You don’t need to remember things in terms of pseudocode as such, more just broad concepts.
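As an illustration, rederiving Dijkstra from just the broad concept (always settle the nearest unsettled node, then relax its outgoing edges) fits in a screenful; this is a sketch, not a reference implementation:

    #include <vector>
    #include <queue>
    #include <limits>
    #include <cstdio>

    // adj[u] is a list of (neighbor, edge weight) pairs.
    std::vector<long> dijkstra(
        const std::vector<std::vector<std::pair<int, long>>>& adj, int src) {
        const long INF = std::numeric_limits<long>::max();
        std::vector<long> dist(adj.size(), INF);
        // Min-heap of (distance, node); stale entries are skipped on pop.
        using Item = std::pair<long, int>;
        std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
        dist[src] = 0;
        pq.push({0, src});
        while (!pq.empty()) {
            auto [d, u] = pq.top();
            pq.pop();
            if (d > dist[u]) continue;  // outdated heap entry
            for (auto [v, w] : adj[u]) {
                if (dist[u] + w < dist[v]) {  // relax edge u -> v
                    dist[v] = dist[u] + w;
                    pq.push({dist[v], v});
                }
            }
        }
        return dist;
    }

    int main() {
        // Tiny example graph: 0 -> 1 (4), 0 -> 2 (1), 2 -> 1 (2).
        std::vector<std::vector<std::pair<int, long>>> adj(3);
        adj[0] = {{1, 4}, {2, 1}};
        adj[2] = {{1, 2}};
        auto d = dijkstra(adj, 0);
        std::printf("shortest 0->1 = %ld\n", d[1]);  // prints 3
    }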
For Chrome, I don't know if anyone has compiled the stats, but navigating from https://chromium.googlesource.com/chromium/src/+/refs/heads/... I see at least a bunch of vendored crates, so there's some use, which makes sense since in 2023 they announced that they would support it.
If you’re trying to demonstrate something about Rust by pointing out that someone chose C over Perl, I have to wonder how much you know about the positive characteristics of C. Let alone Rust.
Your comment comes across as disingenuous to me.
Writing it in, for example, Java would have limited it to situations where you have the JVM available, which is a minuscule subset of the situations that curl is used in today, especially if we're not talking "curl, the CLI tool" but libcurl.
I have a feeling you know that already and mostly want to troll people.
And Golang is only 16 years old according to Wikipedia, by the way.
Java might not be the most popular VM on Linux, but let's talk Perl or Python. They're installed by default almost everywhere; it's probably impossible to find a useful Linux installation without these runtimes. So writing curl in Python makes perfect sense, right? It's a memory-safe language, good for handling inherently unsafe Internet data. Its startup time is minuscule compared to a typical network response. Lots of advantages. Yet curl is still written in C.
I've never used libcurl and I don't know why it's useful, so let's focus on curl. Of course, if you want a C library, you've got to write it in C; that's kind of a weird argument.
My point is, there were plenty of better options to replace C, yet people chose C for their projects and continue to do so. Rust is not even a good option for most projects, as it's too low-level. It's a good option for the Linux kernel, but for user-space software? I'm not sure.
"[...] it's probably impossible to find a useful Linux installation without [Perl or Python]. [...]"
Oof. We seem to have very, very different definitions for both "Linux" and "useful".
If all Linux installs without Perl or Python ceased to exist tomorrow, we'd probably enter a global crisis: industrial processes failing left and right, collapse of wide swaths of internet and telecom infrastructure, and god knows what else, from ships to cars and smartphones.
Regarding libcurl: libcurl probably represents the vast majority of curl installations. curl the CLI tool is mostly porcelain on top of libcurl. libcurl is used in _a lot_ of places, for example inside the PHP runtime. And god knows where else; there must be billions of installations as part of other projects. It's not a weird argument: libcurl is 95% of the raison d'être for curl.
If you want a curl-like tool in Python or Perl, you gotta write it in Python or Perl. Somebody probably already did. So maybe just use one of these?
Instead of demanding that curl be transformed into something which is incompatible with its mission statement.
For my hobby code, I'm not going to start writing Rust anytime soon. My code is safe enough and I like C as it is. I don't write software for Martian rovers, and for ordinary tasks, C is more ergonomic than Rust, especially for embedded work.
For my work code, it all comes down to SDKs and such. For example, I'm going to write firmware for a Nordic ARM chip. The Nordic SDK uses C, so I'm not going to jump through an infinite number of hoops and incomplete Rust ports; I'll just use the official SDK and C. If it were the other way around, I'd be using Rust, but I don't think that will happen in the next 10 years.
Just like C++ never killed C, despite being a perfect replacement for it, I don't believe Rust will kill C, or C++, because it's an even less convenient replacement. It'll dilute the market, for sure.
I don’t agree that C++ is a bad language, though it has been standardized to death into a bad language. But the whole point is for C++ to not be worse than C while offering a lot more, which I think it does well. Of course, my last serious use of C++ was a little after release E…
My experience was: get a 3-year certificate for free, install it, and forget about it. With Let's Encrypt, it's always a pain, expired websites everywhere. Too bad the American IT mafia put these good CAs out of business.
For public websites, most people probably don't even need to touch cron, as Apache/Caddy/NGINX/Traefik have built-in options these days. The only time I run something like a cron task is for certain internal IoT-type certs.
I was about to say that I never encounter TLS errors while browsing, but that's not strictly true. There is one such website, and it's only because the webmaster had a stroke and can't maintain it currently. But apart from that rather sad story I can't relate to your issues at all.
I agree. I don't remember the last time I saw an expired cert, and it was probably an abandoned web site (which would eventually expire even with a 3-year certificate as well). At least with Let's Encrypt you have to automate it.
American IT Mafia? That provides free certificates? You'd think setting up renewal would be less of a hassle than dealing with and paying CAs, even if it's only once every 3 years, so that would be a rather benevolent mafia. Which of those CAs went out of business, by the way?
Do you think Let's encrypt is less popular outside the US?
StartSSL and WoSign were the ones I used. Very convenient services, much more convenient compared to this certbot insanity.
I think that the rest of the world does not have much choice, because the US uses its IT superiority to force political decisions on the rest of the world. I experienced that first-hand. When my country wanted to implement MITM to improve Internet usability for its citizens, US companies blacklisted the government root certificate, which disrupted the scheme and forced my country to roll back the plan. Now I have lots of websites completely blocked, instead of the more careful, precise per-page blocking that would only be possible with MITM.
Hopefully, over time, China and Russia will destroy this superiority and will provide viable alternatives.
I just explained that. Basically, the government wants to block some specific webpage, say https://en.wikipedia.org/wiki/Nursultan_Nazarbayev. Without MITM, they'll end up blocking the entire en.wikipedia.org domain, so citizens will lose access to a lot of information. With MITM, they'd be able to target precisely one page, and I could read any other Wikipedia article without issues.
And with MITM they can read literally all of your private internet traffic… That seems like a significantly worse tradeoff than just using a VPN to browse Wikipedia.
I hope that's not literally incrementing a sequence. Because it would lead to trivial neighbor ID guessing attacks.
I've implemented this kind of thing, though I didn't call it ULID. I dedicated some bits to the timestamp, some bits to a counter within the millisecond, and the rest to randomness. So the IDs are always ordered and always unpredictable.
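Something like this, I'd guess; the 42/10/12 bit split and the 64-bit width are my assumptions, not necessarily the parent's:

    #include <chrono>
    #include <cstdint>
    #include <random>

    // Not thread-safe; a real version would guard the statics with a
    // mutex and use a CSPRNG rather than mt19937_64.
    uint64_t next_id() {
        static uint64_t last_ms = 0, counter = 0;
        static std::mt19937_64 rng{std::random_device{}()};

        uint64_t ms = std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::system_clock::now().time_since_epoch()).count();
        counter = (ms == last_ms) ? counter + 1 : 0;  // resets each millisecond
        last_ms = ms;
        // Layout: [42-bit timestamp][10-bit counter][12-bit random]
        return (ms << 22) | ((counter & 0x3FF) << 12) | (rng() & 0xFFF);
    }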
Another approach is to keep the latest generated UUID, and if a new UUID is requested within the same timestamp, generate the random part until it's greater than the previous one. I think that's a pretty good approach as well.
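A sketch of that second approach as I read it, with made-up types and widths:

    #include <chrono>
    #include <cstdint>
    #include <random>

    struct Id { uint64_t ms, rnd; };  // timestamp + random part

    Id next_monotonic() {
        static Id last{0, 0};
        static std::mt19937_64 rng{std::random_device{}()};  // CSPRNG in practice

        uint64_t ms = std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::system_clock::now().time_since_epoch()).count();
        uint64_t rnd = rng();
        if (ms == last.ms)
            while (rnd <= last.rnd) rnd = rng();  // re-roll until strictly greater
        last = {ms, rnd};
        return last;
    }
    // Caveat: as last.rnd approaches the maximum, the re-roll loop slows
    // down and eventually can't make progress within the same millisecond.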
> I hope that's not literally incrementing a sequence. Because it would lead to trivial neighbor ID guessing attacks.
It is and it does.
Also the ULID spec suggests you use a CSPRNG, but doesn't mandate that or provide specific advice on appropriate algorithms. So in practice people may reach for whatever hash function is convenient in their project, which may just be FNV or similar with considerably weaker randomness too.