> If we were to re-write browser standards today, cross-domain POST requests probably just wouldn't be permitted.
That would be a terrible idea IMO. The insecurity was fundamentally introduced by cookies, which were always a hack. Those should be omitted, and then authorization methods should be designed to learn the lessons from the 70s and 80s, as CSRF is just the latest incarnation of the Confused Deputy:
https://en.wikipedia.org/wiki/Confused_deputy_problem
Ah, so true. That's what I mean! Cross-domain requests that pass along the target domain's cookies are the problem. As in, probably every cookie would default to the current __Host-* behavior. (And then some other way to opt a cookie back in if you want, plus some way of expressing desired cookie behavior without a silly prefix on its name...)
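For reference, this is roughly what that opt-in looks like today (the cookie name and value are illustrative):

    Set-Cookie: __Host-session=abc123; Secure; Path=/

Browsers reject a __Host- cookie unless it is Secure, set over HTTPS, has Path=/, and carries no Domain attribute; keeping it out of cross-site requests is still a separate SameSite attribute rather than part of the prefix.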
It wouldn't be the first time the specs have overreached and gone beyond their remit.
C's "register" variables used to have the same issue, and even "inline" has been downgraded to a mere hint for the compiler (which can ignore it and still be a C compiler).
inline and register still have semantic requirements that are not just hints. Taking the address of a register variable is illegal, and inline allows a function to be defined in multiple .c files without errors.
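A minimal single-file sketch of both rules (the function and variable names are mine):

    #include <stdio.h>

    /* More than a hint: a plain "inline" definition may appear in every .c
       file that includes it, provided exactly one translation unit supplies
       an external definition; "static inline", used here, sidesteps even that. */
    static inline int twice(int x) { return 2 * x; }

    int main(void) {
        register int counter = 21;  /* ignorable as an optimization hint... */
        /* int *p = &counter; */    /* ...but this would be a constraint
                                       violation: you may not take the
                                       address of a register variable */
        printf("%d\n", twice(counter));
        return 0;
    }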
> We're not gonna see significant model shrinkage until the money tap dries up.
I'm not sure about that. Microsoft has been doing great work on "1-bit" LLMs, and dropping the memory requirements would significantly cut down on operating costs for the frontier players.
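A back-of-the-envelope sketch of why that matters (the 70B parameter count and the ~1.58-bit ternary encoding from the BitNet b1.58 paper are illustrative assumptions, not anyone's production numbers):

    #include <stdio.h>

    int main(void) {
        double params  = 70e9;                        /* hypothetical 70B-parameter model */
        double gb_fp16 = params * 16.0 / 8.0 / 1e9;   /* 16 bits per weight */
        double gb_tern = params * 1.58 / 8.0 / 1e9;   /* ~1.58 bits per ternary weight */
        printf("fp16 weights:    %.1f GB\n", gb_fp16);  /* ~140 GB */
        printf("ternary weights: %.1f GB\n", gb_tern);  /* ~14 GB */
        return 0;
    }

An order-of-magnitude drop in weight memory means far fewer GPUs (or much cheaper ones) per deployed model.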
> while LLMs are just echoing narrow portions of the intelligent output of humans
But they aren't just echoing; that's the point. You really need to stop ignoring the extrapolation abilities in these domains. The point of the jagged analogy is that they match or exceed human intelligence in specific areas in a way that is not just parroting.
It's tiresome in 2025 to keep having to use elaborate, long-winded descriptions of how LLMs work just to prove that one understands them, rather than being able to assume that people generally understand and to use shorter descriptions.
Would "riffing" upset you less than "echoing"? Or an explicit "echoing statistics" rather than "echoing training samples"? Does "Mashups of statistical patterns" do it for you?
The jagged frontier of LLM capability is just a way of noting that they act more like a collection of narrow intelligences than like a general intelligence, whose performance might be expected to be more even.
Of course LLMs are built and trained to generate based on language statistics, not to parrot individual samples. But given your objection, it's amusing to note that some of the areas where LLMs do best, such as math and programming, are the ones where they have been RL-trained to override these more general language patterns and instead follow the training data more closely.
Seems fair. CSRF is a confused deputy attack, a type of vulnerability known since the 1980s. That we keep reinventing it in every new medium is frankly embarrassing.