Yes, you are correct. But to be clear, I am not claiming someone claimed it :) What I am actually trying to get at is the idea the "business people" usually bring up: that they are looking after the user's/customer's interest and that others don't have the "business mind", while in reality, when it comes to this kind of decision making, all of that goes out the window, because they want to shift the blame.
Taking a few more steps back, most of the services we use are not so essential that we cannot bear them being down a couple of hours over the course of a year. We have seen that over and over again with Cloudflare and AWS outages; the world continues to revolve. If we were a bit more reasonable with our expectations and more realistic about required uptime guarantees, there wouldn't be much worry about something being down every now and then. We wouldn't need to fear for our livelihood if we have to reboot a customer's database server once a year, or for their impression of the quality of the system we built, if such a thing happens.
But even that is unlikely if we set things up properly. I have worked at a company where we self-hosted our platform, and it didn't have the most complex fail-safe setup ever. For such non-essential products, just have good backups and make sure you can restore from them, and 95% of the worries go away; our outages were less frequent than trouble with AWS or Cloudflare.
It seems that either way, you need people who know what they are doing, whether you self-host or buy some service.
That's more of a small-business-owner perspective. For a middle manager, rattling some cages during a week of IBM downtime counts as adequate performance, while it is unclear how much performative response is necessary if some mom-and-pop shop is down for a day.
I've definitely built the same piece of software hundreds of times over, probably thousands. I've even set up CI to automate the build process.
The problem is that the construction equivalent of a software developer is not a tradesman but an architect. Programs are just blueprints that tell the compiler what to build.
Maybe you should re-read the "do things that don't scale" article. It is about doing things manually until you figure out what you should automate, and only then do you automate it. It's not about doing unscalable things forever.
Unless you have a plan to change the laws of physics, space will always be a good insulator compared to what we have here on Earth.
Tigerbeetle is very cool and I would love to see more of it. AFAIR they have been hinting for some time that you could in theory plug in storage engines different from the debit/credit model they've been using. Has any of this materialized? I would love to use it, but I just don't have any bookkeeping to do at a scale where bringing in Tigerbeetle would make sense. :(
It is the other way around --- it is _relatively_ easy to re-use the storage engine but plug in your own custom state machine (implemented in Zig). We have two state machines, an accounting one and a simple echo one, here: https://github.com/tigerbeetle/tigerbeetle/blob/main/src/tes....
I am not aware of any "serious" state machine other than the accounting one, though.
Gleam is really quite a nice language. I did AoC in it this year as well and came away with the following (an incomplete list on both the positive and negative side; these are mainly the things that come to mind immediately):
Positive:
- It can be pretty performant if you do it right. For example, with some thought I got many days down to double-digit microseconds. That said, you do need to be careful how you write it, and many patterns that work well in other languages fall flat in Gleam (see the made-up sketch after this list).
- The language server is incredibly good. It autoformats, autocompletes even with functions from not-yet-imported-but-known-to-the-compiler packages, shows hints regarding code style and can autofix many of them, autofills missing patterns in pattern matches, automatically imports new packages when you start using them, and much, much more. It has definitely redefined my view of what an LSP can do for a language.
- The language is generally a joy to work with. The core team has put a lot of effort into devex and it shows. The pipe operator is nice as always, the type system is no Haskell but is expressive enough, and in general it has a lot of well-thought-out interactions that you only notice after using it for a while.
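To make the performance point concrete, here is a made-up sketch (the example and the `sum_of_squares` name are mine, not from my actual solutions): index-based access patterns from imperative languages cost O(n) per lookup on Gleam's immutable linked lists, while a single `list.fold` pass stays linear.

    import gleam/int
    import gleam/io
    import gleam/list

    // One linear pass over the list; no index-based lookups, which would be
    // O(n) each on an immutable linked list and O(n^2) for the whole pass.
    pub fn sum_of_squares(xs: List(Int)) -> Int {
      list.fold(xs, 0, fn(acc, x) { acc + x * x })
    }

    pub fn main() {
      io.println(int.to_string(sum_of_squares([1, 2, 3, 4])))
    }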
Negative:
- The autoformatter can be a bit overly aggressive in rewriting, for example, a single-line function call with many arguments into a call with each argument on its own line (see the sketch after this list). I get that not using "too much" horizontal space is important, but using up all my vertical space instead is not always better.
- The language (on purpose) focuses a lot on simplicity over terseness, but sometimes it gets a little bit much. Having to type `list.map` instead of `map` or `dict.Dict` instead of `Dict` a hundred times does add up over the course of a few weeks, and does not really add a lot of extra readability. OTOH, I have also seen people who really, really like this part of Gleam, so YMMV.
- Sometimes the libraries are a bit lacking. There are no matrix libraries as far as I could find. One memoisation library needed a mid-AoC update because the v1.0 release had broken it and nobody had noticed for months; the maintainer did push out a fix within a day of realizing it was broken, though. The libraries that exist and are maintained are great.
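To illustrate the formatter point, a made-up example (not real code from my solutions), written here in the one-argument-per-line shape the formatter produces once a call no longer fits on one line:

    import gleam/io
    import gleam/string

    pub fn main() {
      // Once a call exceeds the line-length limit, the formatter splits it
      // so that each argument sits on its own line, roughly like this:
      io.println(string.join(
        ["once", "this", "call", "gets", "too", "long", "for", "one", "line"],
        with: " ",
      ))
    }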
I can live with these negatives. What irritates me the most is the lack of if/else or guards or some kind of dedicated case-distinction on booleans. Pattern matching is great but for booleans it can be kinda verbose. E.g.
    case x < 0 {
      True -> ...
      False ->
        case x > 10 {
          True -> ...
          False ->
            case x <= 10 {
              True -> ...
              False -> ...
            }
        }
    }
You most likely asked an AI for this. They always think there is an `if` keyword in case statements in Gleam. There isn't one, sadly.
EDIT: I am wrong. Apparently guards do exist, but it's a bit of a strange thing where the `if` can only appear as a guard on a case clause, and the guard expression can't really do any calculations (no function calls).
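For reference, the guard version of the earlier example looks roughly like this (a minimal sketch; `describe` is just a name I made up):

    import gleam/io

    // A made-up helper just to show the guard syntax: `if` appears only as a
    // guard on a case clause, and the guard is limited to simple expressions.
    fn describe(x: Int) -> String {
      case x {
        n if n < 0 -> "negative"
        n if n > 10 -> "greater than 10"
        _ -> "between 0 and 10"
      }
    }

    pub fn main() {
      io.println(describe(42))
    }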
> - It can be pretty performant if you do it right. For example, with some thought I got many days down to double digit microseconds.
Was this the time of everything or just the time of your code after loading in the text file etc.?
The hello world starter takes around 110 ms to run on my PC via the script generated with `gleam export erlang-shipment` and 190 ms with `gleam run`.
Is there a way to make this faster, or is the startup time an inherent limitation of Gleam/the BEAM VM?
The time reported by the "gladvent" package when running with the "--timed" option. AFAICT that does not count compilation (if needed), VM startup time, any JITting happening, or reading in the text file. I'm fine with that tbh, I'm more interested in the time actually spent solving the problem. For other languages I wouldn't count language-specific time like compilation time either.
As to whether you can make startup time faster, I suppose you could keep a BEAM running at all times and have your CLI tools hotswap in some code, run it, and get the results back out or something. That way you can skip VM startup time. Since the BEAM is targeted more at very long-running (server) processes with heaps and heaps of concurrency, I don't think ultrafast startup time is really a focus of it.
> Having to type `list.map` instead of `map` or `dict.Dict` instead `Dict` a hundred times does add up over the course of a few weeks, and does not really add a lot of extra readability.
I did it in F# this year and this was my feeling as well. It would have been better if List.map and Seq.filter could just be called on the actual list or seq. Not having the functions attached to the objects really hurts discoverability too.
Re argument formatting, I'd guess it's because it uses the Prettier algorithm, which works like that.
However, in my experience it's much better than the alternative, e.g. clang-format's default "binpacking" of arguments (laying them out like prose). That just makes them hard to read and leads to horrible diffs and horrible merge conflicts.
From the wiki about IEX: "It was founded in 2012 in order to mitigate the effects of high-frequency trading." I can see how they don't want to track internal latency as part of that, or at least not share those numbers with outsiders. That just encourages high frequency traders again.
One would hope for a more technical solution to HFT than willful ignorance lol. For example, they could batch up orders every second and randomize them.
I worked in HFT (though I am now completely out of fintech and have no skin in the game). "Flash Boys"-style traditional HFT is dead already; the trade collapsed in 2016-2018 as larger institutions got less dumb with order execution, and several HFTs "switched sides" and basically offered "non-dumb order execution" as a service to any institutions unable to play the speed game themselves. Look at how Virtu's revenue shifted from mostly trading to mostly order execution services over that period.
Flash Boys was always poorly researched and largely ignorant of actual market microstructure and who the relevant market participants were, but it also aged quite poorly: all of its "activism" was useless because the market participants all smartened up on their own, purely profit-driven.
If you want to be activist about something, the best bet for 2026 is probably how much volume is moving off the lit exchanges into internal matching, degrading the quality of price discovery. But honestly, even that's a hard sell because much of that flow is "dumb money" that just wants to transact at the NBBO.
Actually, here's the best thing to be upset about: apps gamifying stock trading / investing into basically SEC-regulated gambling.
This is what should happen, because the game actually being played is profiting off those who cannot react fast enough to a news event, rather than off those who mispriced their order.
Or leave things in place, but put a 1-minute transaction freeze during binary events, and fill orders coming out of the pause with no regard for when they were placed, just random allocation of fills.
These funds would lose their shit if they had to go back to knowledge being the only edge rather than speed and knowledge.
This isn't a good approach because it assumes there are no market makers on trading venues, and that they (as well as exchanges) do not compete for order flow. Also, maybe you haven't noticed, but stocks are often frozen during news announcements by regulatory request, so such pauses are already in place and are designed to maintain market integrity, not disrupt it with arbitrary fills.
Someone once tried this on me during Friday drinks and I successfully conquered the challenge with "tar --help". The challenger tried in vain to claim that this was not valid, but everyone present agreed that an exit code of zero meant that it was a valid solution.
Some drunks in a gnu-shaped echo chamber concluded that the world is gnu-shaped. That's not much of a joke, if there is one here. Such presently popular axioms as "unix means linux" or "the userland must be gnu" or "bash is installed" can be shown to be poor foundations to reason from by using a unix system that violates all those assumptions. That the xkcd comic did not define what a unix system is is another concern; there are various definitions, some of which would exclude both linux and OpenBSD.
I seem to remember "tar xvf filename.tar" from the 1990s, I'll try that out. If I'm wrong, I'll be dead before I even notice anything. That's better than dying of cancer or Alzheimer's.
z requires that it's compressed with gzip and is likely a GNU extension too (it was j for bzip2, iirc). It's also important to keep f last, because it is parametrized and a filename should follow.
So I'd always go with c (create) instead of x (extract), as the latter assumes an existing tar file (and zx or xz even assumes a gzipped one; not sure if it's smart enough to autodetect compress-ed .Z files vs .gz either). With create, you have higher chances of survival in that xkcd.
It is always a valid command, whether file.name exists or not. When the file doesn't exist, tar will exit with status '2', apparently, but that has no bearing on the validity of the command.
Compare these two logs:
    $ tar xvzf read.me
    tar (child): read.me: Cannot open: No such file or directory
    tar (child): Error is not recoverable: exiting now
    tar: Child returned status 2
    tar: Error is not recoverable: exiting now

    $ tar extract read.me
    tar: invalid option -- 'e'
    Try 'tar --help' or 'tar --usage' for more information.
Do you really not understand the difference between "you told me to do something, but I can't" and "you just spouted some meaningless gibberish"?
The GGP set the benchmark at "returns exit code 0" (for "--help"), and even in the xkcd, the term in use is "valid command", which can be interpreted either way.
The rest of your slight is unnecessary, but being nasty is your choice.
Like I said, I was operating on a lot of zipped tars. Not sure what you are replying to.
The other commenter already mentioned that the xkcd just said "valid", not "returns 0" (which, to be fair, is what the original non-xkcd challenge required, so I guess the mix-up is fair).
Oh, just funny mental gymnastics if we are aiming for survival in 10 seconds with a valid, exit code 0 tar command. :)
As tar is a POSIX ("Portable Operating System Interface", the IEEE/ISO standard) utility, I am also highlighting what might get us killed, since most of us are used to GNU systems with all the GNU extensions (think also bash commands in scripts vs pure sh).
Hehe fair enough in that case. Tho nothing said it had to work on a tar from like 1979 ;)
To me, at least, POSIX is dead. It's what Windows (before WSL) supported with its POSIX subsystem so it could claim compatibility, but of course it was entirely unusable.
> Initial release: July 27, 1993; 32 years ago
Like, POSIX: Take the cross section of all the most obscure UNICES out there and declare that you're a UNIX as long as you support that ;)
And yeah I use a Mac at work so a bunch of things I was used to "all my life" so to speak don't work. And they didn't work on AIX either. But that's why you install a sane toolchain (GNU ;) ).
Like, sure, I was actually building a memory compaction algorithm for MINIX with the vi that comes with MINIX, which is some super old version that can't do anything you'd be used to from Vim. It works. But it's not nice. That's literally the one time I was using hjkl instead of arrow keys.