> I also don't love enums for errors because it means adding any new error type will be a breaking change
You can annotate your error enum with #[non_exhaustive], then it will not be a breaking change if you add a new variant. Effectively, you enforce that anybody matching on the enum must implement a "default" case, i.e. handle the possibility that none of the known variants match.
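A minimal two-crate sketch of how this plays out (the crate name fetchlib and the variants are made up for illustration):

```rust
// In the library crate (call it `fetchlib`):
#[non_exhaustive]
pub enum FetchError {
    Timeout,
    InvalidResponse,
}

// In a downstream crate:
fn describe(e: &fetchlib::FetchError) -> &'static str {
    use fetchlib::FetchError;
    match e {
        FetchError::Timeout => "request timed out",
        FetchError::InvalidResponse => "malformed response",
        // #[non_exhaustive] makes this arm mandatory outside the
        // defining crate, so adding a variant to FetchError later
        // is not a breaking change for this code.
        _ => "unrecognized error",
    }
}
```

Note that within the defining crate the attribute has no effect, so the library itself can still match exhaustively and get a compile error as a reminder whenever a variant is added.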
This is a project that we've been working on in collaboration with Google and AWS. We present a vulnerability that allows a malicious virtual machine to leak all physical memory of its host, including the memory of other virtual machines running on the system. L1TF Reloaded combines two long-known transient execution vulnerabilities, L1TF and (Half-)Spectre. By combining them, commonly deployed software-based mitigations against L1TF, such as L1d flushing and core scheduling, can be circumvented.
We've demonstrated our attack on real-world KVM-based cloud solutions. Both Google Cloud [1] and AWS [2] wrote blog posts in response to this attack, describing how they mitigate L1TF Reloaded and how they harden their systems against unknown transient execution attacks. Google also decided to award us a bug bounty of $151,515, the highest bounty from their Cloud VRP yet.
When you can modify the microcode of a CPU, you can modify the behaviour of the RDRAND/RDSEED instructions. For example, using EntrySign [1] on AMD, you can make RDRAND always return 4 (chosen by a fair dice roll, guaranteed to be random).
I don't mean to say that RDSEED is sufficient for security. But a "correctly implemented and properly secured" RDSEED is indeed quantum random.
I.e., while not all RDSEED implementations are correct (microcode vulnerabilities, virtual machine emulation, etc.), it is possible to build a true RNG with cryptographic-level security on top of a correct RDSEED implementation.
------
This is an important point, because a lot of people still think you need Geiger counters and/or exotic radio antennas to find sufficient sources of true entropy. Nope!! The easiest source of true quantum entropy is heat, and that's inside every chip. A good implementation can tap into that heat and provide perfect randomness.
That said: microcode vulnerabilities, VM vulnerabilities, etc. mean there's a whole line of other stuff you also need to keep secure. But those are tractable problems, within the skills of a typical IT team / programming team. The overall point is that, I guess, "pn-junction shot noise" is a sufficient source of randomness. And that exists in every single transistor of your ~billion-transistor chips/CPUs. You do need to build the correct amplifiers to see this noise, but that's what RDSEED is in practice.
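For the curious, here's a minimal sketch of tapping that hardware entropy source from Rust via the RDSEED intrinsic. The retry bound of 10 is an arbitrary choice of mine; Intel's guidance is simply to retry, since RDSEED can transiently fail while the on-die entropy conditioner refills:

```rust
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "rdseed")]
unsafe fn rdseed64() -> Option<u64> {
    use std::arch::x86_64::_rdseed64_step;
    let mut value: u64 = 0;
    // Bounded retry: the instruction signals failure (carry flag clear,
    // intrinsic returns 0) when no conditioned entropy is available yet.
    for _ in 0..10 {
        if _rdseed64_step(&mut value) == 1 {
            return Some(value);
        }
        std::hint::spin_loop();
    }
    None
}

#[cfg(target_arch = "x86_64")]
fn main() {
    if is_x86_feature_detected!("rdseed") {
        // Safety: RDSEED support was verified at runtime just above.
        match unsafe { rdseed64() } {
            Some(v) => println!("RDSEED: {v:#018x}"),
            None => println!("RDSEED kept failing; fall back to another source"),
        }
    } else {
        println!("CPU does not report RDSEED support");
    }
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {
    println!("this sketch is x86_64-only");
}
```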
Rowhammer is an inherent problem in the way we design DRAM. It is a problem well known to memory manufacturers and very hard, if not impossible, to fix. In fact, Rowhammer only gets worse as memory density increases.
It’s a matter of percentages… not all manufacturers' chips were susceptible to the Rowhammer attack.
The positive part of the original rowhammer report was that it gave us a new tool to validate memory (it caused failures much faster than other validation methods).
As far as I am aware, the course material is not public. Practical assignments are an integral part of the courses given by the VUSEC group, and unfortunately those are difficult to do remotely without the course infrastructure.
The Binary and Malware Analysis course that you mentioned builds on top of the book "Practical Binary Analysis" by Dennis Andriesse, so you could grab a copy of that if you are interested.
Disabling SMT alone isn’t enough to mitigate CPU vulnerabilities. For full protection against issues like L1TF or MDS, you must both enable the relevant mitigations and disable SMT. Mitigations defend against attacks where an attacker executes on the same core after the victim, while disabling SMT protects against scenarios where the attacker runs concurrently with the victim.
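On Linux you can see what the kernel has actually applied per vulnerability class. Here's a small sketch that just dumps the standard sysfs status files:

```rust
use std::fs;

fn main() -> std::io::Result<()> {
    // Each file (e.g. "l1tf", "mds", "spectre_v2") holds the kernel's
    // status line, such as "Mitigation: PTE Inversion; VMX: SMT vulnerable".
    for entry in fs::read_dir("/sys/devices/system/cpu/vulnerabilities")? {
        let entry = entry?;
        let status = fs::read_to_string(entry.path())?;
        println!("{}: {}", entry.file_name().to_string_lossy(), status.trim());
    }
    Ok(())
}
```

A status ending in "SMT vulnerable" is exactly the split described above: the per-core mitigation is active, but concurrent-sibling attacks remain possible until SMT is disabled (e.g. via the `nosmt` or `mitigations=auto,nosmt` kernel parameters).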
It depends on your threat model. If you don't run any untrusted code on your hardware (including Javascript), you can safely disable the mitigations. If you do run untrusted code, keep them enabled.
What is the threat model if I run lots of untrusted JavaScript, but I only have a small amount of memory in other processes worth reading and I would notice sustained high CPU usage?
Is there an example in the wild of a spectre exploit stealing my gmail cookie and doing something with it? (Would be difficult since it's tied to other fingerprints like my IP)
Or stealing credit card numbers when they're in memory after I place an online order?
In the context of a regular end-user desktop machine, this seems overly paranoid to me. The odds of encountering a real, JS-based spectre attack in the wild are basically zero (has anyone ever seen a browser-based Spectre attack outside of a research context? even once?), and the odds of it then being able to retrieve actual sensitive data are also basically zero. That's two astonishingly tiny numbers multiplied together. The threat just isn't there.
For regular end-user desktop machines, the mitigations only decrease performance for no real benefit. Spectre is a highly targeted attack, it's not something you can just point at any random machine to retrieve all their bank passwords or whatever.
> While FLOP has an actionable mitigation, implementing it requires patches from software vendors and cannot be done by users. Apple has communicated to us that they plan to address these issues in an upcoming security update, hence it is important to enable automatic updates and ensure that your devices are running the latest operating system and applications.
RowHammer is not a thing of the past. In fact, modern DRAM chips are significantly more susceptible to RowHammer due to their increased chip density [1].
In the "countermeasures" section of the linked paper[1], it mentioned that there are some new techniques available, but repeatedly mentioned that they are not yet available in consumer systems. Maybe rowhammer will eventually be a thing of the past despite the increasing chip density.
Thanks for the link. I guess I thought wrong. But I have more questions.
> with RowHammer protection mechanisms disabled
I wonder what this means. Is it software mitigations only, or does it include hardware measures like disabling on-die ECC?
It makes sense to me that, all other things being equal, higher density would lead to more susceptibility to Rowhammer. But as always, other things are not equal. I expect that on-die ECC would reduce susceptibility to Rowhammer, and AFAIK that is used for DDR4 and DDR5 RAM, though perhaps not exclusively. Or did disabling "protection mechanisms" include disabling that (if that is even possible)?
The mitigations are usually about limiting the number of times you can access a row without refreshing its neighbors. ECC helps in detecting and correcting errors (obviously), but it doesn't solve the underlying issue that accessing a cell over and over can cause bit flips in its neighbors. ECC can be defeated if uncorrectable errors are not fatal, or if the attacker can just crash the system over and over. Being able to introduce memory errors is a fundamental and unmitigatable issue that must be resolved by making these errors impossible. This isn't a problem software can solve.
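For concreteness, the access pattern at the heart of the original Rowhammer paper is tiny. A rough x86_64-only Rust sketch follows; the hard part, mapping virtual addresses onto two aggressor rows in the same DRAM bank, is omitted here and simply assumed:

```rust
use std::arch::x86_64::{_mm_clflush, _mm_mfence};

/// Repeatedly reads two addresses, flushing them from the cache each
/// time so every access reaches DRAM and re-activates the aggressor rows.
///
/// Safety: `a` and `b` must be valid, readable addresses. For the
/// hammering effect they must map to different rows of the same bank,
/// which requires reverse-engineering the DRAM address mapping.
unsafe fn hammer(a: *const u8, b: *const u8, iterations: usize) {
    for _ in 0..iterations {
        a.read_volatile();
        b.read_volatile();
        _mm_clflush(a);
        _mm_clflush(b);
        _mm_mfence(); // order the flushes before the next round of reads
    }
}

fn main() {
    // Demo of the call shape only: two offsets in one heap buffer will
    // almost certainly NOT map to distinct rows of one bank, so no bit
    // flips are expected here.
    let buf = vec![0u8; 1 << 20];
    unsafe { hammer(buf.as_ptr(), buf.as_ptr().add(8192), 100_000) };
}
```

The point of the loop is exactly what the comment above describes: without the flushes the reads would be served from cache and the DRAM rows would never be re-opened, which is why refresh-rate and access-count mitigations target this pattern.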