Hacker News | PunchyHamster's comments

SSD block size is far bigger than 4kB. They still benefit from sequential writes.

The waste comes with an asterisk.

That "waste" (that overcommit turns into "not a waste") means you can do far less allocations, with overcommit you can just allocate a bunch of memory and use it gradually, instead of having to do malloc() every time you need a bit of memory and free() every time you get rid of it.

Doing frequent malloc()/free() also increases memory fragmentation, which can hurt performance.

Overcommit is also pretty much required for GCed languages to work sensibly.


vm.overcommit_memory=0 (heuristic overcommit, the default):

- Apache2 runs.
- Apache2 takes 50MB.
- Apache2 forks 32 workers.
- Apache2 now takes 50MB + (per-worker vars * 32).

vm.overcommit_memory=2 (strict accounting, no overcommit):

- Apache2 runs.
- Apache2 takes 50MB.
- Apache2 forks 32 workers.
- Apache2 now needs 50MB + (50MB * 32) + (per-worker vars * 32) of commit charge.

Boom: now your simple Apache server serving some static files can't fit on a 512MB VM and needs over 1.6GB of memory committed just to start.
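A hedged sketch of that accounting difference (worker count and sizes are made up, and the "workers" are plain fork()ed children, not actual Apache code): under overcommit the forks are nearly free thanks to COW; under vm.overcommit_memory=2 each fork has to be charged the full heap again and starts failing with ENOMEM on a small VM.

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        size_t sz = 1UL << 30;                   /* a 1 GiB "parent" heap */
        char *heap = malloc(sz);
        if (!heap) { perror("malloc"); return 1; }
        memset(heap, 1, sz);                     /* make the pages real, not just reserved */

        for (int i = 0; i < 32; i++) {
            pid_t pid = fork();                  /* COW: children share the parent's pages */
            if (pid < 0) {
                /* with strict accounting this starts failing with ENOMEM
                 * long before 32 workers on a small VM */
                fprintf(stderr, "fork %d failed: %s\n", i, strerror(errno));
                break;
            }
            if (pid == 0) { sleep(5); _exit(0); }   /* worker never touches the heap */
        }
        while (wait(NULL) > 0) {}
        return 0;
    }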


It's absolutely wasted if the apps on the server don't use the disk (the page cache is pretty much the only thing that can use that reserved-but-unused memory).

You can have a simple web server that took less than 100MB of RAM suddenly require gigabytes of commit, just because it forked a few COW workers.


> Even if you disable overcommit, I don't think you will get pages assigned when you allocate. If your allocations don't trigger an allocation failure, you should get the same behavior with respect to disk cache using otherwise unused pages.

Doesn't really change the point. The RAM might not be completely wasted, but given that nearly every app will over-allocate and only gradually use the pooled memory, without overcommit you waste memory that could otherwise be used to run more stuff.

And it can be quite significant: it's pretty common for server apps to start one big process and then keep a pool of COW-forked workers, one per connection, so an Apache2 that would be charged maybe 60MB per worker lands in the gigabytes range even at very small pool sizes.
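For a sense of that gap on a real process, a tiny illustrative helper (mine, not from the thread): it prints VmSize (what the process has asked the kernel for) next to VmRSS (what it actually occupies). On a COW-forked worker the first can be gigabytes while the second stays in the tens of megabytes.

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        FILE *f = fopen("/proc/self/status", "r");
        if (!f) { perror("fopen"); return 1; }
        char line[256];
        while (fgets(line, sizeof line, f)) {
            /* VmSize = committed address space, VmRSS = resident pages */
            if (!strncmp(line, "VmSize:", 7) || !strncmp(line, "VmRSS:", 6))
                fputs(line, stdout);
        }
        fclose(f);
        return 0;
    }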

The blog is essentially a call to "let's write apps like we did in the DOS era", which is ridiculous.


> It's not really wrong. For something like redis, you could potentially fork and the child gets stuck for a long time and in the meantime the whole cache in the parent is rewritten.

It's wrong 99.99999% of the time, because the alternative is either "make it take double the memory and waste half the RAM" or "store the in-memory data in a way that allows snapshotting without fork, throwing a bunch of performance in the trash".
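A rough sketch of the fork-to-save pattern being defended here (the table, sizes, and file name are made up, not Redis internals): the child sees a frozen copy-on-write view of memory and writes it out, while the parent keeps mutating; only the pages the parent dirties during the dump get physically duplicated.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define N (1 << 20)
    static long table[N];                    /* the in-memory "database", ~8 MiB */

    static void save_snapshot(const char *path) {
        FILE *f = fopen(path, "wb");
        if (!f) _exit(1);
        fwrite(table, sizeof table[0], N, f);   /* consistent view, thanks to COW */
        fclose(f);
    }

    int main(void) {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }
        if (pid == 0) {                      /* child: dump the frozen view */
            save_snapshot("snapshot.bin");
            _exit(0);
        }
        /* parent: keep serving writes; only the pages it dirties get copied */
        for (long i = 0; i < N; i += 4096 / sizeof(long))
            table[i]++;
        waitpid(pid, NULL, 0);
        return 0;
    }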


It's also about performance: since there is no penalty for asking for more RAM than you need right now, you can reduce the number of allocation calls without sacrificing memory usage (as you would have to without overcommit).

That's what I mean by my first reason

It's not harmful. It's necessary for modern systems that are not "an ECU in a car"
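As a sketch of the "ask for more than you need up front" point above (names and sizes are mine, not from the thread): a bump allocator over one big overcommitted reservation makes each allocation a pointer bump, and the untouched slack never becomes physical memory.

    #include <stddef.h>
    #include <sys/mman.h>

    typedef struct { char *base, *next, *end; } arena_t;

    /* Reserve a large chunk of address space; with overcommit none of it
     * is physically backed until it's actually written. */
    static int arena_init(arena_t *a, size_t reserve) {
        a->base = mmap(NULL, reserve, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (a->base == MAP_FAILED) return -1;
        a->next = a->base;
        a->end  = a->base + reserve;
        return 0;
    }

    /* Hand out memory with a pointer bump: no syscall, no free-list walk. */
    static void *arena_alloc(arena_t *a, size_t n) {
        n = (n + 15) & ~(size_t)15;          /* keep 16-byte alignment */
        if (a->next + n > a->end) return NULL;
        void *p = a->next;
        a->next += n;
        return p;
    }

    int main(void) {
        arena_t a;
        if (arena_init(&a, 1UL << 30) != 0) return 1;    /* reserve 1 GiB, commit nothing */
        int *nums = arena_alloc(&a, 1000 * sizeof(int)); /* cheap: just a pointer bump */
        return nums ? 0 : 1;
    }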

> Memory overcommit means that once you run out of physical memory, the OOM killer will forcefully terminate your processes with no way to handle the error. This is fundamentally incompatible with the goal of writing robust and stable software which should handle out-of-memory situations gracefully.

Big software is not written that way. In fact, writing software that way means you have to sacrifice performance, memory usage, or both, because you either:

- need to allocate exactly what you need at any moment and free it as soon as it shrinks (if you want to keep the memory footprint similar), which adds latency, or
- over-allocate, and waste RAM.

And you'd end up with MORE memory-related issues, not fewer. Writing an app where every allocation can fail gracefully is just a nightmarish waste of time for the 99% of apps that are not "the onboard computer of a spaceship/plane".
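To illustrate why checking every allocation buys so little on a default Linux config (purely illustrative, and don't run it on a box you care about): malloc() keeps succeeding far past physical RAM, and the actual failure arrives later as the OOM killer, where no error-handling code can see it.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        for (size_t gb = 1; ; gb++) {
            char *p = malloc(1UL << 30);         /* ask for another 1 GiB */
            if (!p) {                            /* this branch rarely fires... */
                fprintf(stderr, "malloc failed at %zu GiB\n", gb);
                return 1;
            }
            memset(p, 1, 1UL << 30);             /* ...the OOM killer fires here instead */
            printf("touched %zu GiB\n", gb);
        }
    }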


It's not gonna change anything. But you might be interested in software like earlyoom and similar tools, which basically try to preempt the OOM killer and kill something before the system gets into that sluggish state.

The author is just ignorant of the technical details and laser-focused on some particular cases that he thinks are a problem but are not.

The Redis engineers KNOW that fork-to-save will, in the vast majority of cases, result in at most a few tens of MBs of extra memory used, with the benefit of seamless saving. Sure, there is a theoretical case where it uses double the memory, but that would require all the data in the database to be rewritten during the short interval while the snapshot is being saved, and that's just unrealistic.
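A back-of-envelope for that "few tens of MBs" claim, with entirely made-up numbers for the write rate and snapshot duration: the extra COW memory is roughly the dirty-page rate times the snapshot time times the page size, not the dataset size.

    #include <stdio.h>

    int main(void) {
        double dirty_pages_per_sec = 2000.0;     /* assumed write rate */
        double snapshot_secs       = 5.0;        /* assumed time to write the dump */
        double page_bytes          = 4096.0;
        double extra_mib = dirty_pages_per_sec * snapshot_secs * page_bytes
                           / (1024.0 * 1024.0);
        printf("extra COW memory: ~%.0f MiB\n", extra_mib);   /* ~39 MiB here */
        return 0;
    }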

