It's mostly embedded / microcontroller stuff. Things that you would use something like SDCC or a vendor toolchain for: the 8051, STM8, PIC, or oddball parts like the 4 cent Padauk micros everyone was raving about a few years ago. The 8051 especially still comes up from time to time in things like the CH554 USB controller, or some nRF 2.4GHz wireless chips.
Those don’t really support C by any real stretch. Speaking from general experience with microcontrollers and closed vendor toolchains: it’s a frozen dialect of C from decades ago, which isn’t what people think of when they say C (usually people mean at least the 26-year-old C99 standard, but these at best support C89, or even come with their own limitations).
It’s still C though, and Rust is not an option. What else would you call it? Lots of C libraries for embedded stick to C89 syntax for exactly these reasons. Also, for what it’s worth, SDCC seems to support very modern versions of C (up to C23), so I don’t think that critique holds for the 8051 or STM8. I would argue that C was built with targets like this in mind, and this is where many of its quirks that seem so anachronistic today come from (for example, int being different sizes on different targets).
Please don't get me wrong. I'm glad the world has mostly transitioned over to HTTPS, but what are you actually concerned about with reading a blog post over HTTP? If you had to log in or post form data, or hosted binaries or something I would get it. But what is wrong with reading an article in the clear? And how would SSL prevent that?
I do pay for premium but my impression of the parent was that this was independent of ads. The test I did in the other comment didn't trigger an ad for some reason even though I was logged out, which may be why it loaded so fast.
Ah. The parent mentioned several frustrations that I am not familiar with (presumably since I also pay for premium and don’t block the ads), but my impression was that the delay was caused by the code refusing to play the video until the ad’s time slot had completed, even if the ad failed to load (as happens when the ad’s HTTP request is blocked).
The point made in the article was about the social contract, not about efficacy. Basically, if you use an LLM in such a way that the reader detects the style, you lose the reader's trust that you, as the author, rigorously understand what has been written, and the reader loses the incentive to pay attention.
I would extend the argument further to say it applies to lots of human-generated content as well. Especially sales and marketing information, which similarly elicits very low trust.
This is exactly what the advice is trying to mitigate. At least as I see it, the responsible engineer (meaning the author, not a comment on the engineer's quality) needs to understand the intent of the code they will produce. Then, if using an LLM, they must take full ownership of that code by carefully reviewing it or molding it until it reflects their intent. If at the end of this the “responsible” engineer does not understand the code, the advice has not been followed.
> like there is an inherent flaw with the x86-64 ISA that means a chip that provides it can never be competitive with ARM on power consumption.
This is only one of many factors, but I know that high-performance instruction decoding doesn't scale nearly as well on x86-64 as it does on ARM due to the variable-width instructions. Any reasonably performant OoO core needs to decode multiple instructions per cycle for the other OoO tricks to work. x86-64 is typically limited to about 5 instructions, and the complexity and power required to do that do not scale linearly, since x86-64 instructions can be anywhere from 1 byte to 15 bytes, making it very hard to guess where the second instruction starts before the first has been decoded. ARM cores have at most 2 widths to deal with, and with ARMv8 (AArch64) I think there is only one, leading to cores like the M1's Firestorm that can decode 8 instructions in a single cycle. Intel's E cores are able to decode 3 instructions at each of two different addresses (6 total, just not sequential), which helps the core look ahead past predicted branches but doesn't help as much in fast optimized code with fewer branches.
So at the low end of performance, where mobile gaming sits, you really need an OoO core to be able to keep up, and ARM has a big leg up for that use-case because of the instruction encoding.
> x86-64 is typically limited to about 5 instructions
Intel's Lion Cove decodes 8 instructions per cycle and can retire 12. Intel's Skymont triple decoder can even do 9 instructions per cycle, and that's without a uop cache.
AMD's Zen 5, on the other hand, has a 6K-entry op cache holding already-decoded instructions, allowing for 8 instructions per cycle, but still only a 4-wide decoder for each hyper-thread.
And yet AMD is still ahead of intel in both performance and performance-per-watt. So maybe this whole instruction decode thing is not as important as people are saying.
Well written! I’m Seattle based (although at Google), and I think the mood is only slightly better than what you describe. But the general feeling that the company has no interest in engineering innovation is alive and well. Everything needs to be standardized, and engineers get shuffled between products in a way that discourages domain knowledge.
Wow, that’s crazy. Does anyone have any context on why they didn’t fix this by either disallowing NULL, or not treating the pointer as non-nullable? I’m assuming there is code that was expecting this not to error, but the combination really seems like a bug not just a sharp edge.
Treating the pointer as non-nullable is precisely the point of the feature, though. By letting the compiler know that there are at least N elements there, it can do things like move that read around, and even prefetch if that makes the most sense.