My 2 cents: they do justify it by the interest of its consequences, such as Tychonoff or the Nullstellensatz. I wouldn't call that faith: best practice is to state Tychonoff as "AC implies Tychonoff", and that implication is logically valid. Sometimes the "AC implies..." is missing, buried in the proof, used unknowingly, or predates ZFC, and that is a bad thing. But very often one now sees asterisks on theorems needing it.
In a complementary story, from light reading I gather that Zellig Harris, who advised Chomsky's dissertation, contributed decisively to distributional semantics, a forerunner of word embeddings, which in turn opened the door to neural NLP. The ML guys had a shiny toolbox ready to use for decades. Had it been more fashionable...
I am testing it as a replacement for MathPix, and the first few tests look rather decent. In Python for Windows: https://pastebin.com/uyiFHKdJ (alpha version prototype). It launches the Windows snip tool, waits for an image to land in the clipboard, calls Mistral, retrieves the markdown, and puts it as text on the clipboard, ready to be pasted into Typora, Obsidian, or another markdown editor.
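For reference, the flow described above might look something like this minimal sketch. It is not the linked script, just my reconstruction; the `mistralai` SDK's OCR endpoint, Pillow's `ImageGrab.grabclipboard`, `pyperclip` for clipboard text, and the `ms-screenclip:` URI for launching the snip tool are all assumptions, and the helper names are mine:

```python
import base64
import io
import subprocess
import sys
import time

def image_to_data_url(png_bytes: bytes) -> str:
    # Wrap raw PNG bytes as a data: URL, the form the OCR request accepts.
    return "data:image/png;base64," + base64.b64encode(png_bytes).decode("ascii")

def main() -> None:
    # Windows-only path; Pillow, pyperclip and the mistralai SDK are
    # assumed installed (none of them are stdlib).
    from PIL import ImageGrab
    from mistralai import Mistral
    import pyperclip

    subprocess.run(["explorer", "ms-screenclip:"])   # launch the snip tool
    img = None
    while img is None:                               # poll until a capture hits the clipboard
        time.sleep(0.5)
        img = ImageGrab.grabclipboard()

    buf = io.BytesIO()
    img.save(buf, format="PNG")

    client = Mistral(api_key="YOUR_API_KEY")         # placeholder key
    resp = client.ocr.process(
        model="mistral-ocr-latest",
        document={"type": "image_url", "image_url": image_to_data_url(buf.getvalue())},
    )
    markdown = "\n\n".join(page.markdown for page in resp.pages)
    pyperclip.copy(markdown)                         # ready to paste into the editor

if __name__ == "__main__" and sys.platform == "win32":
    main()
```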
Could one mount a real coexisting ext4 partition to reduce some of the perf penalty of having to simulate a block device on top of those ugly big image files?
Linux system calls WERE `int 80h`. If your code is still using an interrupt to access kernel functions, you've got problems. `syscall` exists for the simple reason that interrupts are expensive.
x86-64 introduced a `syscall` instruction to allow syscalls with lower overhead than going through interrupts. I don't know of any reason to prefer `int 80h` over `syscall` when the latter is available. For documentation, see for example https://www.felixcloutier.com/x86/syscall
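In practice the entry instruction is hidden behind libc: on x86-64 Linux, glibc's syscall(2) wrapper enters the kernel through the `syscall` instruction, and the 64-bit syscall table it indexes is numbered differently from the legacy table behind `int 80h` (getpid is 39 vs 20). A quick sketch poking the raw interface via ctypes, as an illustration rather than anything you'd ship:

```python
import ctypes
import os
import platform
import sys

# 39 is getpid in the x86-64 syscall table (reached via the `syscall`
# instruction); the legacy 32-bit table behind int 80h uses 20 instead.
SYS_GETPID_X86_64 = 39

def raw_getpid() -> int:
    # libc's syscall(2) wrapper; on x86-64 glibc this compiles down to
    # a single `syscall` instruction, not an int 80h software interrupt.
    libc = ctypes.CDLL(None, use_errno=True)
    return libc.syscall(SYS_GETPID_X86_64)

if sys.platform == "linux" and platform.machine() == "x86_64":
    print(raw_getpid(), os.getpid())  # same pid through either path
```

The guard matters: syscall numbers are per-architecture (and per-OS), so the constant above is only valid on x86-64 Linux.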
While AMD's syscall and Intel's sysenter can provide much higher performance than the old "int" instruction, both were designed very badly, as explained by Linus himself in many places. It is extremely easy to use them in ways that do not work correctly, because of subtle bugs.
It is actually quite puzzling why both the Intel and the AMD designers were so incompetent in specifying a "syscall" instruction, when such instructions, well designed, had been included in many other CPU ISAs for decades.
When not using an established operating system, where the "syscall" implementation has been tested for many years and hopefully all bugs have been shaken out, there may be a reason to use the "int" instruction to transition into privileged mode: it is relatively foolproof and requires a minimal amount of handling code.
Now Intel has specified FRED, a new mechanism for handling interrupts, exceptions and system calls, which does not have any of the defects of "int", "syscall" and "sysenter".
The first CPU implementing FRED should be Intel Panther Lake, to be launched by the end of this year. Surprisingly, when Intel recently gave a presentation about Panther Lake, not a word was said about FRED, even though it is expected to be the greatest innovation of Panther Lake.
I hope the Panther Lake implementation of FRED is not buggy, which could make Intel disable it and postpone its introduction to a future CPU, as they have done many times in the past. For instance, the "sysenter" instruction was intended to be introduced in the Intel Pentium Pro by the end of 1995, but because of bugs it was disabled and left undocumented until the Pentium II, in mid 1997, where it finally worked.
Yep, agreed. CMake can emit a deps graphviz (been there), which is better than nothing. But big diagrams need support for exploding subdiagrams and navigating back.