How do you interpret these numbers? If your point is that we can simply overprovision photovoltaic arrays by a factor of 6.67, then that would make solar by far the most expensive power generation method.
And it only gets worse the more households transition to heat pumps, because winter consumption is so lopsided. For example, I heat my home with a heat pump, and I have 10 kWp of solar arrays on my roof. In the last week of July, we consumed 84 kWh and generated 230 kWh (273%). In the last week of November, we consumed 341 kWh and generated 40 kWh (11%). This means we'd need roughly 10 times as much PV area (341/40 ≈ 8.5) to match winter demand (10 roofs?), plus huge batteries, because most of that consumption happens in the evening, at night, and in the morning.
Of course, utility-scale and residential solar behave a bit differently, and it becomes more complicated if wind is factored in. But it shows that you can't just overprovision PV a little to fix the main problem of solar power: that it is most abundant in summer, and most in demand in winter.
My point was really only that solar isn't what I'd consider negligible in winter, and that there aren't really weeks with no wind. Other than that, my interpretation is pretty much the same as yours.
Above, I looked at the weekly min/max ratio. Of course the daily ratios are much higher, 1:60 for solar, and about 1:30 for wind. But wind and solar do have a useful anti-correlation: the ratio is "only" about 1:15 for combined solar+wind. Still high, but a huge improvement on both wind and solar individually.
In reality, the ratio is even higher, since we routinely have to curtail solar and turn off wind turbines when there is more production than demand (and I don't think that curtailed generation is reflected in the graph).
I.e., the max is already more a reflection of the grid and demand than of production, and it'd make more sense to use the min:mean ratio, comparing what we expect PV+wind to produce on average with what they give on the worst day. That gets us a different, more favorable ratio: 195 TWh produced in 2025 so far, call it 550 GWh/day on average, giving a ratio of about 1:6.
Thank you for actually running the numbers. I think the data is quite convincing that overprovisioning won't be the solution to the seasonal storage problem, or at least not the major factor in it.
Personally, I have high hopes for flow batteries. Increasing storage capacity is easy with them, liquids can be stored for a long time, and it would even make long-distance transport by ship feasible. If only we could find a cheap, suitable electrolyte.
This is just a slightly more sophisticated version of the "solar doesn't work at night" trope.
The implication of bringing it up is that these silly hippies haven't even thought of this basic fact, so how can we trust them with our energy system.
Meanwhile, actual energy experts have been aware of the concept of winter for at least a few years now.
If you want to critique their plans for dealing with it, you'd need to do more than point out the existence of winter as a gotcha.
I don't see you countering my argument, only attempting to ridicule it ("slightly more sophisticated", "trope", "these silly hippies", "been aware of the concept of winter", "existence of winter as a gotcha"). That sucks, man :-(
> If you want to critique their plans for dealing with it […]
There are many ideas for seasonal storage of PV-generated electricity, but so far there is no concrete plan that's both scalable to TWh levels and economically feasible. Here on HN, there's always someone who'll post the knee-jerk response of "just build more panels", without doing the simple and very obvious calculation that 5x to 10x overprovisioning would turn solar from one of the cheapest into by far the most expensive power generation method out there [1].
[1] Except for paying people to crank a generator by hand, although that might at least help with obesity rates.
> 5x to 10x overprovisioning would turn solar from one of the cheapest into by far the most expensive power generation method out there.
This is trivially false if the cost of solar generation (and battery storage) further drops by 5x to 10x.
Additionally, that implies the overprovisioned power is worthless in the summer, which does not have to be the case. It might make certain processes viable due to the very low cost of energy during those months. Not trivial, as those industries would have to leave the power-hungry equipment idle during the winter months, but the economics could still work in certain cases.
Some of the cases might even specifically be those that store energy for use in winter (although then we're not looking at the 'pure' overprovisioning solution anymore).
> This is trivially false if the cost of solar generation (and battery storage) further drops by 5x to 10x.
That's a huge "if". The cost of PV panels has come down by a factor of 10 in the last 13 years or so, that's true. I doubt another 10x decrease is possible, because at some point you run into material costs.
But the real issue is that the price of the panels themselves is already only about 35% of the total installation cost of utility-scale PV. This means that even if the panels were free, it would only reduce the cost by a factor of about 1.5 (1 / 0.65 ≈ 1.54).
> That's a huge "if". The cost of PV panels has come down by a factor of 10 in the last 13 years or so, that's true. I doubt another 10x decrease is possible, because at some point you run into material costs.
A factor of 5 is certainly within the realms of physics, given the numbers I've seen floating around. Note that prices are changing rapidly and any old price may not be current: around these parts, they're already so cheap they're worth it as fencing material even if you don't bother to power anything with them.
> But the real issue is that the price of the panels themselves is already only about 35% of the total installation cost of utility-scale PV. This means that even if the panels were free, it would only reduce the cost by a factor of about 1.5.
This should have changed your opinion, because it shows that material costs are not the most important factor: we could get up to a 3x cost reduction by improving the automation of building utility-scale PV plants.
I think I've seen some HN startups with this as their pitch, I've definitely seen some IRL with this pitch.
> But the real issue is that the price of the panels themselves is already only about 35% of the total installation cost of utility-scale PV. This means that even if the panels were free, it would only reduce the cost by a factor of about 1.5.
1. Do the other costs scale with the number of panels? Because if the sites are 5 times the scale of the current ones I would imagine there are considerable scale based cost efficiencies, both within projects and across projects (through standardization and commoditization).
2. Vertically mounted bifacial PV already greatly smooths the power production curve throughout the day, improving profitability. Lower panel costs make the downside of needing more panels in such a setup almost non-existent. Additionally, vertical mounting reduces maintenance/cleaning costs.
3. Battery/energy storage (which further improve profitability) costs are dropping and can drop further.
Also, please address the matter of using the overprovisioned power in summer. Possible applications are underground thermal storage ("Pit Thermal Energy Storage", which only works in places that need heating in winter), desalination, producing ammonia for fertilizer, and producing jet fuel.
> 1. Do the other costs scale with the number of panels?
Mostly, yes. Once you're at utility scale, installation and maintenance should scale 1:1 with the number of panels. Inverters and balancing systems should also scale 1:1, although you might be able to save a bit here if you're willing to "waste" power during peak insolation.
But think about it this way: if it were possible to reduce non-panel costs by a factor of 5 simply by building 5x larger solar plants, the operating companies would already be doing it. With non-panel costs around 65%, this would yield 65% * (1 - 1/5) = 52% savings and give them a huge advantage over the competition.
I agree that intra-day fluctuations will be solved by cheaper panels and cheaper batteries, especially once sodium-ion battery costs fall significantly. But I'm specifically talking about seasonal storage here.
> Also, please address the matter of using the overprovisioned power in summer.
I'm quite pessimistic about that. Chemical plants tend to be extremely capital-intensive and quickly become unprofitable if they're effectively idle for half of the year. Underground thermal storage would require huge infrastructure investments in distribution, since most places don't already have district heating.
Sorry, very busy today so I can't go into all details, but I still wanted to give you an answer.
What amounts to a "concrete plan"? Right now we're still at the stage where building more generation is the best use of our money, with batteries for shifting load by a few hours ramping up. So it's entirely expected that there is no infrastructure for seasonal storage yet. However, the maths for storing energy as hydrogen and heat looks quite favorable, and the necessary technology already exists.
"Concrete plan" means a technology which satisfies all of these requirements:
1) demonstrated ability in a utility-scale plant
2) already economically viable, or projected to be economically viable within 2 years by actual process engineers with experience in scaling up chemical/electrical plants to industrial size
Yes, that's hard to meet. But the thing is, we've seemingly heard of hundreds of revolutionary storage methods over the last decade, and so far nothing has come to fruition. That's because they were promised by researchers making breakthroughs in the lab and forecasting order-of-magnitude cost reductions. They're doing great experimental work, but they lack the knowledge and experience to judge what it takes to go from a lab result to a utility-scale application.
> 2) already economically viable, or projected to be economically viable within 2 years by actual process engineers with experience in scaling up chemical/electrical plants to industrial size
Why 2 years?
Even though I expect the current approximately exponential growth of both PV and wind to continue until they supply at least 50% of global electricity demand between them, I expect that to happen in the early 2030s, not by the end of 2027.
(I expect global battery capacity to be somewhere between a day's and a week's worth of demand at that point, which is still not "seasonal" for sure.)
Hydrogen from electrolysis is only a little more expensive than hydrogen derived from methane, and electrolyzers with dozens of megawatts of capacity are available. That seems pretty solid to me at this point in the energy transition.
Hydrogen generation isn't the problem; storing it over several months is. Economical, safe, and reliable storage of hydrogen is very much an unsolved engineering challenge. If it weren't, hydrogen storage plants would be springing up left and right: even here in Germany, we have such an abundance of solar electricity during the summer months that wind generators have to be turned off and the spot price of electricity still falls to negative values(!) around noon, almost every day.
Yes, those are easier to store, but more expensive and less efficient to generate.
The question is the same as for hydrogen: if it's easy, cheap, and safe to generate, store, and convert back into electricity, why isn't it already being done on a large, commercial scale? The answer is invariably that it's either not easy to scale, too expensive (in terms of upfront costs, maintenance costs, or inefficiencies), or too unsafe, at least today.
With rapidly dropping PV prices it just keeps getting cheaper. This is a relatively recent development; the projects to expand production are barely complete yet ... capital plant takes time to build.
Fortescue only piloted the world's first ammonia dual-fuel vessel late last year; give them time to bed that in and advance.
If that's so easy, cheap, and safe, why aren't there companies doing it on a large scale already? We're talking about billions of Euros of market volume.
Right now it's cheaper to make hydrogen from methane, and methane is easier to store and process, so there's no demand for large-scale hydrogen storage. Nevertheless, storage in salt caverns is a proven process that's in use right now; Linde does it, for example.
That's a funny meta comment. Where are you from? Are you consuming a lot of US-based content? I ask because I mainly see Americans here writing about the "CCP" based on what they regularly hear from government officials and certain news outlets. It's rarely framed as "China"; it's usually "the Chinese Communist Party", emphasizing "Communist" because that word carries negative connotations in the US and the EU, given its history. But maybe the framing is similar in your country.
So just to clarify, I'm from the EU, and I'm not paid for anything I write here. Maybe your world model is influenced by propaganda? The world isn't black and white.
I also encourage people to read more about the history and culture of other countries, especially the ones they have strong opinions about, opinions which they often haven't formed themselves (in my experience, this is often lacking in US education: people learn a lot about US history, but not as much about the rest of the world).
Reading more philosophy can also broaden your perspective. In particular, I recommend learning about Singapore, its history, Lee Kuan Yew, and why many highly educated people there willingly accept restrictions on individual freedom. If you understand that, you can then start reading about China, its culture, and its history.
It's not. Why would lsl+csel or add+csel or cmp+csel ever be faster than a simple add? Or have higher throughput? Or require less energy? An integer addition is just about the lowest-latency operation you can do on mainstream CPUs, apart from register-renaming operations that never leave the front-end.
In the end, the simple answer is that scalar code is just not worth optimizing harder these days. It's rarer and rarer for compilers to be compiling code where spending more time optimizing purely scalar arithmetic is worth the payoff.
For one, signed integer overflow is defined behavior in Rust (it panics in debug builds and wraps around in release builds), while it's undefined behavior in C. This means that the LLVM IR the Rust compiler emits for signed integer arithmetic can't be directly translated into the analogous C code, because that would change the semantics of the program. There are ways around this and other issues, but they aren't necessarily simple, efficient, and portable all at once.
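A minimal sketch of the difference (the gcc flag mentioned in the comment is one common workaround, not the only one):

    // In Rust, overflow of a plain `+` panics in debug builds and wraps in
    // release builds; both behaviors are defined. The naive C translation
    // (`x + 1` on INT_MAX) would be undefined behavior, so a transpiler has
    // to emit something else, e.g. unsigned arithmetic, or rely on a
    // compiler-specific flag like gcc's -fwrapv.
    fn main() {
        let x: i32 = i32::MAX;
        println!("{}", x.wrapping_add(1)); // always -2147483648, by definition
    }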
You guys seem to be assuming transpiling to C means it must produce C that DTRT on any random C compiler invoked any which way on the other side, where UB is some huge possibility space.
There's nothing preventing it from being some specific invocation of a narrow set of compilers, like gcc-only of some specific version range with a set of flags configuring the UB to match what's required. UB doesn't mean non-deterministic; it's simply undefined by the standard and generally defined by the implementation (and often something you can influence with CLI flags).
> You guys seem to be assuming transpiling to C means it must produce C that DTRT on any random C compiler invoked any which way on the other side, where UB is some huge possibility space.
Yes, that's exactly what "translating to C" means – as opposed to "translating to the very specific C-dialect spoken by gcc 10.9.3 with patches X, Y, and Z, running on an AMD Zen 4 under Debian 12.1 with glibc 2.38, invoked with flags -O0 -g1 -no-X -with-Y -foo -blah -blub...", and may the gods have mercy if you change any of this!
The Gregorian calendar is the de-facto global calendar system today, even in cultures and states that are far removed from its Christian and European roots. You might as well complain about the text on the website being in English.
But he is not complaining that we use the Gregorian calendar. He is pointing out that it is just one calendar among many, and that we should be aware that using it is a conscious choice the world has made by convention.
> But he is not complaining that we use the Gregorian calendar.
Yes, he is:
>>> Yet this "world" history uses Europe's reference point [of BC/CE] as universal.
It wouldn't make sense to use any other than the Gregorian calendar for this map, and it also wouldn't make sense to mix different calendar systems.
> He is pointing out that it is just one calendar among many […]
But it's not. The Gregorian calendar is the calendar in worldwide use today. Giving dates in BC/CE is not an expression of Eurocentrism; it simply reflects reality.
> Yet this "world" history uses Europe's reference point [of BC/CE] as universal.
What in this sentence indicates he thinks it is wrong to use that calendar? He is saying it is NOT universal. What about that is hard to understand?
> The Gregorian calendar is the calendar in worldwide use today.
Again, you are arguing with a straw man. Please read my comment carefully again. I am not arguing against that statement.
As an analogy, the WWW is the dominant (probably virtually only) form of the internet in use today, but it is only one architecture. There were/can be others, but they failed to gain or maintain traction. A summary from Google:
> Besides Gopher, other historical internet systems and protocols existed before the World Wide Web, including Wide Area Information Servers (WAIS) and the Archie search engine. While the World Wide Web eventually surpassed them all, these systems provided different ways of discovering and navigating information online in the early 1990s.
When you construct an object containing a mutex, you have exclusive access to it, so you can initialize it without locking the mutex. When you're done, you publish/share the object, thereby losing exclusive access.
    use std::sync::Mutex;

    struct Entry {
        msg: Mutex<String>,
    }

    // ...

    // Construct a new object on the stack:
    let mut object = Entry { msg: Mutex::new(String::new()) };

    // Exclusive access via &mut, so no locking needed here
    // (get_mut returns a LockResult because of poisoning, hence the unwrap):
    let mutable_msg = object.msg.get_mut().unwrap();
    format_message(mutable_msg, ...);

    // ...

    // Publish the object by moving it somewhere else, possibly on the heap:
    global_data.add_entry(object);
    // From now on, accessing the msg field would require locking the mutex
Initialization is always special. A mutex can't protect that which doesn't exist yet. The right way to initialize your object would be to construct the message first, then construct the composite type that combines the message with a mutex. This doesn't require locking a mutex, even without any borrow checker or other cleverness.
Dude, it's a simplified example; of course you can poke holes in it. Here, let me help you fill in the gaps:
    let mut object = prepare_generic_entry(general_settings);
    let mutable_msg = object.msg.get_mut().unwrap();
    do_specific_message_modification(mutable_msg, special_settings);
The point is that there are situations where you have exclusive access to a mutex, and in those situations you can safely access the protected data without having to lock it.
Sorry, I don't find that convincing; it seems rather contrived. This still looks like "constructor"-type code, so the final object isn't ready yet, and locking shouldn't happen before all the protected fields are constructed.
There may be other situations where an object is in a specific state that makes it effectively owned by one thread, which might make it possible to forgo locking. These are all very ad-hoc situations; most of them would be very hard to model with the borrow checker, and avoiding the lock would most likely not be worth the hassle anyway.
Not sure how this can help me reduce complexity or improve performance of my software.
Your view of mutex performance and overhead is outdated, at least on the major platforms: the Rust standard library mutex requires only 5 bytes, doesn't allocate, and only makes a syscall on contention. The mutex in the parking_lot library requires just 1 byte per mutex (and likewise doesn't allocate and only makes a syscall on contention). This enables very fine-grained, efficient locking with low contention.
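You can check the footprint yourself; a quick sketch (the exact numbers are platform- and toolchain-dependent, these are what I'd expect on 64-bit Linux with a recent Rust):

    use std::sync::Mutex;

    fn main() {
        // The 5-byte state (a 4-byte futex word plus 1 poison flag) gets
        // padded up to the alignment of the protected data; expect
        // something like 8 and 16 here.
        println!("{}", std::mem::size_of::<Mutex<()>>());
        println!("{}", std::mem::size_of::<Mutex<u64>>());
    }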
These are OS primitives I'm talking about. I haven't checked out the standard library version, but the parking_lot version uses a spinlock with a thread sleep when wait times get too long; it has no way of being notified when the mutex is unlocked, nor does it handle priority inversion.
It seems it's optimized for high-performance, compute-heavy code with short critical sections.
These assumptions may let it win benchmarks, but they don't cover all users' use cases. To illustrate why this is bad: imagine you have a mutex-protected resource that becomes available after 10 µs on average. This lock spins 10 times checking whether it has become available (likely <1 µs), then yields the thread. The OS (let's assume Linux) won't wake the thread up until the next scheduler tick, and it's under no obligation to do so even then (and has no idea it should). Even best-case, you're left waiting 10 ms, a typical scheduler tick.
In contrast, OS-based solutions are expensive, but not that expensive; let's say they add 1 µs to the wait. Then you would wait 11 µs for the resource.
A method call taking 10 ms versus one taking 15 µs is a factor of almost 700x, which can potentially kill your performance.
You as the user of the library are implicitly buying into these assumptions which may not hold for your case.
There's also nothing in Rust that protects you from deadlocks with 100% certainty. You can fuzz them out, and use helpers, but you can do that in any language.
So you do need to be mindful of how your mutex works, if you want to build a system as good as the one it replaces.
No single concurrency primitive covers all use cases. I was addressing your misconceptions about mutex performance and overhead, not whether mutexes are the best solution to your particular problem.
> […] it has no way of being notified when the mutex is unlocked […] The OS (let's assume Linux) won't wake the thread up until the next scheduler tick, and it's under no obligation to do so even then (and has no idea it should).
You've misunderstood the parking_lot implementation. When thread B tries to lock a mutex that's currently locked by thread A, then, after spinning for a few cycles, thread B "parks" itself, i.e., it asks the kernel to remove it from the runnable task queue. On Linux, this is done with the futex syscall. When thread A unlocks the mutex, it detects that another thread is waiting on that mutex, takes one thread from the queue of waiters, and "unparks" it, i.e., asks the kernel to move it back into the runnable queue. The kernel is notified immediately and, if a free CPU core is available, will tend to dispatch the thread to it right away. On a non-realtime OS, there's no guarantee how long it takes for an unblocked thread to be scheduled again, but that's the case for all concurrency primitives.
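To make that handshake concrete, here's a toy sketch of the park/unpark mechanism. This is not parking_lot's actual code (the real thing keeps waiters in a global table, uses the futex syscall directly on Linux, and handles the races much more carefully); it only illustrates why there's no scheduler-tick polling involved:

    use std::collections::VecDeque;
    use std::sync::atomic::{AtomicBool, Ordering};
    use std::sync::{Arc, Mutex};
    use std::thread::{self, Thread};

    struct ToyLock {
        locked: AtomicBool,
        // Handles of parked threads (a real implementation wouldn't
        // use a Mutex here, of course).
        waiters: Mutex<VecDeque<Thread>>,
    }

    impl ToyLock {
        fn new() -> Self {
            ToyLock { locked: AtomicBool::new(false), waiters: Mutex::new(VecDeque::new()) }
        }

        fn try_lock(&self) -> bool {
            self.locked
                .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
                .is_ok()
        }

        fn lock(&self) {
            // Fast path plus a short spin, as in parking_lot.
            for _ in 0..10 {
                if self.try_lock() {
                    return;
                }
                std::hint::spin_loop();
            }
            // Slow path: enqueue ourselves, then deschedule until woken.
            loop {
                self.waiters.lock().unwrap().push_back(thread::current());
                if self.try_lock() {
                    return; // recheck avoids sleeping through an unlock
                }
                thread::park(); // kernel removes us from the runnable queue
            }
        }

        fn unlock(&self) {
            self.locked.store(false, Ordering::Release);
            // Wake exactly one waiter immediately -- no scheduler-tick polling.
            if let Some(t) = self.waiters.lock().unwrap().pop_front() {
                t.unpark();
            }
        }
    }

    fn main() {
        let lock = Arc::new(ToyLock::new());
        let l2 = Arc::clone(&lock);
        let t = thread::spawn(move || { l2.lock(); l2.unlock(); });
        lock.lock();
        lock.unlock();
        t.join().unwrap();
    }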
The best practices I adopt for Rust avoid the use of mutexes whenever possible, precisely because of how easy a deadlock is. It turns out it is always possible. There are entire languages that disallow any mutable state, much less shared mutable state. The question becomes how much performance you are willing to sacrifice to avoid the mutex. By starting with no shared mutable state and adding it when something is too slow, you end up with very few mutexes.
> avoid the use of mutexes […] It turns out it is always possible
How would you handle the archetypical example of a money transfer between two bank accounts, in which 100 units of money need to be subtracted from one account and atomically added to another account, after checking that the first account contains at least 100 units?
The simplest purely functional way would be to copy the whole database, instantiating a new copy with the desired change if the condition is met. That obviously doesn't scale, which is where the performance question comes in. A still-pure way would be to use a persistent tree or hash array mapped trie that allows efficient reuse of the original db. There are times a purely functional approach doesn't perform well enough, but even with large-scale entity-component-style systems in both Rust and C++, the number of times I've had to use a mutex to be performant is small. Atomics are more common, but still not common. Persistent data structures alleviate most of the need.
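For illustration, a minimal std-only sketch of the pure approach applied to the bank-transfer example above; the types and balances are made up, and the clone marks exactly the cost a persistent trie would remove:

    use std::collections::HashMap;

    type Account = u32;
    type Amount = u64;

    // Naive pure version: returns a brand-new ledger if the transfer is
    // valid. With a persistent map (e.g. a HAMT), the full clone below
    // becomes an O(log n) structural share instead.
    fn transfer(db: &HashMap<Account, Amount>, from: Account, to: Account, amount: Amount)
        -> Option<HashMap<Account, Amount>>
    {
        if *db.get(&from)? < amount {
            return None; // insufficient funds: no new version is produced
        }
        let mut next = db.clone(); // the O(n) copy a persistent trie avoids
        *next.get_mut(&from).unwrap() -= amount;
        *next.get_mut(&to)? += amount;
        Some(next)
    }

    fn main() {
        let v1: HashMap<Account, Amount> = [(1, 100), (2, 0)].into_iter().collect();
        if let Some(v2) = transfer(&v1, 1, 2, 100) {
            // v1 is untouched; swapping in v2 (e.g. behind an atomic pointer)
            // is the only coordination point between threads.
            assert_eq!(v1[&1], 100);
            assert_eq!(v2[&2], 100);
        }
    }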
Pure or not, eventually this comes down to durability, no?
And the way to do it is to either have some kind of single point of control (a designated actor or single-threaded executor) or mark the data (i.e., use some concurrency-control primitive either wrapping the data or in some dedicated place where the executors check, like the JVM's safepoints).
Using consistent hashing, these hypothetical accounts could be allocated to actors, and then each transaction is managed by the actor of the source (i.e., where the money is sent from, where the check needs to happen), with their own durable WAL, and periodically these are aggregated.
(Of course, then the locking is hidden in the maintenance of the hash ring as dining philosophers are added/removed.)
Since the thread mentions Rust: in Rust, you often replace Mutexes with channels.
In your case, you could have a channel where the receiver is the only part of the code that transfers anything. It'd receive a message like Transfer { from: Account, to: Account, amount: Amount } and do the required work. All other threads would only hold clones of the Sender handle. Concurrent sends would be serialized through the queue's buffering.
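A minimal sketch with std::sync::mpsc; the Account/Amount types and the starting balances are made up:

    use std::sync::mpsc;
    use std::thread;

    type Account = usize;
    type Amount = i64;

    struct Transfer {
        from: Account,
        to: Account,
        amount: Amount,
    }

    fn main() {
        let (tx, rx) = mpsc::channel::<Transfer>();

        // The receiver is the only code that ever touches the balances,
        // so they need no Mutex at all.
        let ledger = thread::spawn(move || {
            let mut balances: Vec<Amount> = vec![100, 0, 0];
            for t in rx {
                if balances[t.from] >= t.amount {
                    balances[t.from] -= t.amount;
                    balances[t.to] += t.amount;
                }
            }
            balances
        });

        // Other threads only hold clones of the Sender.
        let tx2 = tx.clone();
        thread::spawn(move || {
            tx2.send(Transfer { from: 0, to: 1, amount: 100 }).unwrap();
        });
        drop(tx); // once all Senders are gone, the receiver loop ends

        println!("{:?}", ledger.join().unwrap());
    }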
I'm not suggesting this is an ideal way of doing it
What you're describing is called the "Actor model"; in your example, the receiver is an actor that has exclusive control over all bank accounts.
The actor model reaches its limits as soon as you need transactions involving two or more actors (for example, if you need to atomically operate on both the customers actor and the bank accounts actor). Then you can either pull all involved concerns into a single actor, effectively giving up on concurrency, or you can implement a locking protocol on top of the actor messages, which is just mutexes with extra steps.
> These are OS primitives I'm talking about. I haven't checked out the standard library version, but the parking_lot version uses a spinlock with a thread sleep when wait times get too long; it has no way of being notified when the mutex is unlocked, nor does it handle priority inversion.
Uhh no, everyone in Linux userspace uses futexes these days to wait on a contended lock.
Unfortunately the standard library mutex is designed in such a way that condition variables can't use requeue, and so require unnecessary wakeups. I believe parking lot doesn't have this problem.
How does it avoid cache contention with just a few bytes per mutex? That is, multiple mutex instances sharing a cache line. Say I have a structure with multiple int32 counters, each protected by its own mutex.
Cache contention is (mostly) orthogonal to your locking strategy. If anything, fine-grained locking has the potential to reduce cache contention, because
1) the mutex byte/word is more likely to be in the same cache line as the data you want to access anyway, and
2) different threads are more likely to write to mutex bytes/words in different cache lines, whereas in coarse-grained locking, different threads will fight for exclusive access over the cache line containing that one, global mutex.
@magicalhippo: Since I'm comment-rate-throttled, here's my answer to your question:
Typically, you'd artificially increase the size and alignment of the structure:
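    // 64 bytes is one cache line on x86_64, so each Status gets its own line: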
    #[repr(align(64))]
    struct Status {
        counter: Mutex<u32>,
    }
This struct now has an alignment of 64 and is also 64 bytes in size (instead of the handful of bytes a Mutex<u32> actually needs), which guarantees that it has a cache line to itself. This is wasteful from a memory perspective, but can be worth it for performance. As so often with optimization, whether this makes your program faster or slower depends heavily on the specific case.
> different threads are more likely to write to mutex bytes/words in different cache lines
If you've got small objects and sequential allocation, that's not a given in my experience.
Like in my example, the ints could be allocated one per thread to indicate some per-thread status, and the main UI thread wants to read them every now and then, hence they're protected by a mutex.
If they're allocated sequentially, the mutexes end up sharing cache lines and hence lead to effective contention, even though there's almost no "actual" contention.
Yes, yes, for a single int you might want to use an atomic variable, but this is just for demonstration purposes. I've seen this play out in real code several times, where instead of ints it was, say, a couple of pointers.
The issue might be allocating the ints contiguously in the first place. No language magic is going to save you from thinking about mechanical sympathy.
And allocating the ints contiguously might actually be the right solution, if the cost of sporadic false sharing is less than the cost of wasting memory.
But the mutex encapsulates the int, so if the mutex ensured it occupied a whole cache line (or a multiple thereof), there would be no contention, at the very small cost of a few bytes of memory.
By not avoiding it. And a year later you get to write a blog post about how you discovered and fixed this phenomenon hitherto unknown to computer science.
Don't do that here.