True, but on the other hand they always give themselves away so it's very easy to spot and adjust expectations.
I found this article and discussion valuable anyway.
I would also encourage you to stop using "Americans" as synonymous with "US:er"; there are many countries, cultures and peoples in the American continents that are not part of the USA, and quite a few that geographically exist within it too.
By pointing out that they are US:ers we can help remind ourselves, and them, that they are not in fact the only version of human that exists.
> I agree that the PS1 had more piracy, but I'm not sure that actually diminished its success?
At least in my corner of the world (Spain), piracy improved its success. Everybody wanted the PSX due to how cheap it was, I think it outsold the N64 10:1.
> So, getting a much smaller CPU would have been a big corner to cut, that could have saved enough resources to increase the texture cache to a useful resolution like 128x128 or so.
How? The texture RAM (TMEM) is in the RSP, not in the CPU.
How is that relevant? "Resources" really just means money, which can be allocated between different items on the BoM at will. The N64's chips are all (more or less) bespoke, so the functionality of each individual part is completely under Nintendo's control. Spend less on the CPU, and you suddenly have money left to spend on the RSP. (And on the RDP, which contains the TMEM -- it lives on the same chip as the RSP, but is a distinct thing. I assume you know this, but just to add to the discussion for readers - the RSP is the N64's SIMD coprocessing unit, which most games use to perform vertex shading, whereas the RDP is the actual rasterization and texturing hardware.)
Realistically it wasn't even a case of "we only have X dollars to spend". The console just needed to hit a final price point, and they really could have "just" added more transistors dedicated to that texture unit without significantly altering prices or profit.
But hardware was actively transitioning, and what we "knew" one year was gone the next; Nintendo was lucky to have made enough right choices to support enough good games to survive the transition. They just got some bets wrong and calculated some tradeoffs poorly.
For example, almost everything Kaze is showing off was technically doable on the original hardware, but devs were crunching to meet deadlines and nobody even thought to wonder whether "let's put a texture on this cube" needed another ten hours of engineering time to optimize. Cartridges needed to be manufactured by Christmas. A lot of games made optimization tradeoffs that were just wrong, and didn't test them to find out. Like the Helldivers 2 game size issue.
Sega meanwhile flubbed the transition like four different ways and died. Now they have the blue hedgehog hooked up to a milking machine. Their various transition consoles are hilariously bad. "Our cpu and rasterizer can't actually do real polygon rendering and can't fake it fast enough to do 3D graphics anyway. Oh, well what about two of them?"
You are right about the RSP/RDP distinction. My point is that removing transistors from one chip doesn't magically let you add more transistors to another chip; that's not how IC fabrication works. And the CPU was not a custom design: it was the VR4300, NEC's licensed derivative of the MIPS R4300i.
Anyway, the real problem is that TMEM was not a hardware-managed cache but a scratchpad RAM fully under the programmer's control, which meant that the whole texture had to fit in a meagre 4 kB of RAM! It is the same mistake that Sony and IBM later made with the Cell's SPE local store.
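To make the 4 kB budget concrete, here's a back-of-the-envelope sketch (my own illustration in Python; it only counts raw texel bits and ignores format quirks such as color-indexed modes reserving part of TMEM for the palette, which caps 4-bit CI textures lower than this math suggests):

```python
import math

TMEM_BYTES = 4 * 1024  # the N64's texture scratchpad

def max_texture_dims(bits_per_texel):
    """Largest power-of-two (width, height) texture that fits in TMEM."""
    texels = TMEM_BYTES * 8 // bits_per_texel
    exp = int(math.log2(texels))
    width = 2 ** (exp // 2)        # squarest power-of-two split
    height = texels // width
    return width, height

# 32-bit RGBA -> (32, 32); 16-bit -> (32, 64); 4-bit CI -> (64, 128)
print(max_texture_dims(32), max_texture_dims(16), max_texture_dims(4))
```

So even at 16 bits per texel you top out around 32x64, which is why a 128x128 texture (32 kB at 16 bpp) would have needed roughly eight times the TMEM.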
Unfortunately it isn't, as the physical size of the resonators needs to match a given wavelength. So for each wavelength you need a new circuit in parallel.
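A quick sketch of why the sizes diverge (my own numbers, assuming a simple half-wavelength resonator in free space; real designs also depend on the dielectric and geometry):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def half_wave_resonator_m(freq_hz):
    """Approximate length of a half-wavelength resonator in free space."""
    return C / freq_hz / 2.0

# Each target frequency needs its own, differently sized resonator:
for f in (900e6, 2.4e9, 5.8e9):
    print(f"{f/1e9:.1f} GHz -> {half_wave_resonator_m(f)*100:.1f} cm")
```

A 900 MHz resonator comes out around 17 cm while a 5.8 GHz one is under 3 cm, so one structure can't serve both bands; hence the parallel circuits.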
I am also Spanish, living in Japan, and our bars are one of the things I miss the most. Seriously, you don't realize how amazing Spanish bars are until you don't have them.
Here I just stop by a konbini, grab a can coffee and a plastic-wrapped sandwich, and off I go. There is no social nexus, and no neighbourhood for that matter. It's depressing.
When I was teaching in Ube, Japan in 1979, there was a great jazz music coffee bar. The entire wall behind the counter was covered with jazz LPs. They had huge speakers, a massive turntable, and a McIntosh amp. You would go in, pick an album and order coffee. The counter was lined with vacuum coffee makers. The barista would grind your choice of bean and fire up the coffee alembic. The boiling water would erupt into the upper chamber, brew a while, then magically get sucked down into the bottom carafe when he took it off the flame. You could drink at the counter or go to a table. I didn't look like a beatnik, but I felt like one! Cool, daddio.
Izakaya usually only open for dinner, maybe lunch, but definitely not breakfast. Many of them also offer private rooms, which is the complete opposite of the social aspect of Spanish bars...
I live in Misawa (https://en.wikipedia.org/wiki/Misawa,_Aomori) and work in Rokkasho (https://en.wikipedia.org/wiki/Rokkasho), which is the area where the earthquake hit the strongest. It was quite violent, apparently the strongest earthquake ever recorded in the region. My house suffered no damage other than a few things falling off the cabinets, and I could sleep soundly afterwards, but let's see how it goes at work today.
CERN is the biggest scientific facility in the world, with a huge IT group and their own IXP. Most places are not like that.
Heck, I work at a much smaller particle accelerator (https://ifmif.org) and have met the CERN guys, and they were the first to say that for our needs, OpenStack is absolutely overkill.
> Heck, I work at a much smaller particle accelerator (https://ifmif.org) and have met the CERN guys, and they were the first to say that for our needs, OpenStack is absolutely overkill.
I currently work in AI/ML HPC, and we use Proxmox for our non-compute infrastructure (LDAP, SMTP, SSH jump boxes). I used to work in cancer with HPC, and we used OpenStack for several dozen hypervisors to run a lot of infra/services instances/VM.
I think there are two things that determine which system should be looked at first: scale and (multi-)tenancy. Beyond one (maybe two) dozen hypervisors, I could really see scaling/management issues with Proxmox; I personally wouldn't want to do it (though I'm sure many have). Next, if you have a number of internal groups that need allocated/limited resource assignments, then OpenStack tenants are a good way to do this (especially if there are chargebacks, or just general tracking/accounting).
As a Spanish guy living in Japan, I find the Japanese system hugely complicated (or better said, antiquated), so I shudder to think how bad the American system might be...
This line of thought seems to be extremely common among Americans, and honestly it is quite annoying for the rest of us.