PunchyHamster's comments | Hacker News

So the grift can continue

Non-Euclidean geometry (a geometry in which one of Euclid's postulates is rejected, so that the angles of a triangle no longer sum to exactly 180 degrees) was considered a meaningless word game and a fundamental mistruth.

Later, non-Euclidean geometry turned out to be essential to modern physics.

It's intellectually sketchy to judge future value by the present.


Might as well fund someone researching whether quantum theory runs on little gnomes. If there is no serious path to verification after 50 years, why not quantum gnomes?

On this topic (the parallel postulate), it took ~2000 years from Euclid, and then 3 people came to the same conclusion independently within ~10-20 years of each other.

Progress is weird.


Is it so weird? See multiple discovery https://en.wikipedia.org/wiki/Multiple_discovery and zeitgeist https://en.wikipedia.org/wiki/Zeitgeist namely that there may, or may not, be objective knowledge out there properly describing an objective world, but we as a species chip away at it over time. When a new discovery in any field is made, it propagates through our social networks and the tools we have for that, e.g. cafes, scientific journals, the Web. As soon as something is discovered, those of us connected enough pick up the new tool or perspective, update our worldview, and chip away at it again. IMHO, seen this way it's pretty normal that things hitherto unknown, no matter for how long, become knowable to seemingly independent discoverers.

Ideally, one should explore all possibilities. It is remarkable how far "merely" predicting the next word has taken us.

That was constant progress with measurable goals, not "big things are coming a decade from now" every decade.

> We should stop funding research into prime numbers. They're stupid and useless. Who cares about them, if they will never be used for anything? Number theory should be stopped, you may as well research gnomes.

I imagine this is what you would have sounded like 100 years ago.


You may be understating how much 15 orders of magnitude really is.

The only truly exponential technological progress we’ve ever had, transistors, scaled by only ~5 orders of magnitude in feature size. Thermal engines went from maybe 0.1% to ~50% efficiency, less than 3 orders of magnitude, in about 200 years. There are very fundamental physical laws suggesting that engines are done, and transistor scaling as we have known it for 30 years is also done. Perhaps very clever things might give us 5 more orders of magnitude? E.g. truly 3D integration somehow? Then we’re still 5 orders of magnitude off from our target. I can’t think of any technology that ever improved by more than 10^6, perhaps 10^9 if you count some derived number (like “number of transistors on a chip”, rather than actual size), and that’s from literally zero to today. Not from already-pretty-advanced to Death Star scale.

Another perspective is that, to get to those kinetic energies, we need accelerators as large as the solar system. Possibly the galaxy, I can’t quite remember. Will you concede that galaxy-wide objects are so far from current reality that there’s no point seriously talking about them?


Are you seriously insinuating that string physicists are asking for this collider you alone entirely made up? As if people who actually study string theory are too stupid to know primary-school math, and this criticism is somehow high-brow and novel?

Not to mention you entirely missed the point of what I said. There is research into the most niche, useless fields imaginable, because not every endeavor undertaken by a human being needs to be profitable or applicable. Sometimes people are just really good at making jigsaws, or want to make a stinky chemical, or get fascinated with the properties of prime numbers.

And then, sometimes, those turn out to be the fundamental underpinnings of an entire generation of economic and military strategy. You can't often know what spurs what in that sense.


I didn’t make it up; it’s a well-known talking point around string theory. I think it was first mentioned by a practicing string theorist. Of course it’s not a novel critique, but I’ve read about putting data centers in space on this website, so I think it’s worth trying to teach people how to do these sorts of Fermi problems quickly.

I did indeed miss your point; it was well hidden under a lot of sarcasm. I think it is of course completely valid. People should be free to research what they want, and I’m sure string theory must be beautiful mathematics.

But if your goal is unifying QM and GR, and/or achieving a theory of everything (as it is for most theoretical physicists), then I and a growing fraction of physicists think it’s not a promising avenue. I’m not advocating for only working on “useful” things, because such a theory is not likely to yield much profit to anyone in the foreseeable future anyway. But if you state that a unifying theory is your goal and seek funding for that goal, then string theory should move backstage. The mathematics department would rightfully be happy to house you otherwise.


You are mixing up gambling spend vs. whole-industry spend. If string theory were a small handful of people making up a small m*nority of physics departments, like non-Euclidean geometry research was, that would be fine. It's huge swaths of most physics departments and a huge suck on research funding. For that kind of spend you'd better show results, because at that point you are in the production phase, not the lotto-ticket moonshot phase. If we are buying lotto tickets with the money, buy lots of different lotto tickets, not a whole bunch of one lotto ticket.

> m*nority

Is this a typo? I see a lot of words being censored these days, and I assumed it's because of some algorithm and visibility rules. That shouldn't be the case here tho..


They do the same thing with the i in “doing” in another post. It seems like just a typo this person sometimes makes.

I think you are vastly overestimating the number of string physicists and how much their non-experimental research costs.

There are maybe a couple to a few hundred in the whole world who focus on it. And they don't need much money, because it's pretty much all math.


As a percentage of theoretical physicists it is probably significant, though. A better question is how much love/money/attention is going into rival theories?

In the 1700s, perhaps. But we have come a long way since that.

Yet OP is repeating the same logical fallacy: the absence of a result is not evidence of absence.

>Non-Euclidean geometry (geometric axioms in which one postulate is rejected such that the 3 angles of a triangle are not exactly 180 degrees) was considered a meaningless word game and fundamental mistruth.

This is just a lie, though. Non-Euclidean geometry is a mathematical model of how distances behave on curved (non-flat) spaces. Nobody ever believed it to be a "fundamental mistruth"; even suggesting that would look ridiculous. It would be akin to denying linear algebra; the meaning of doing so is unclear.

That the physical reality of space is not flat was a shocking revelation, since all human experience and basically every experiment done up until that point indicated otherwise.


This is a generally known part of the history of mathematics.

> Nobody ever believed it to be a "fundamental mistruth"

https://math.libretexts.org/Courses/College_of_the_Canyons/M...

"Lobachevsky [mathematician contemporary of Gauss, who claimed parallel postulate was unnecessary] was relentlessly criticized, mocked, and rejected by the academic world. His new “imaginary” geometry represented the “shamelessness of false new inventions”"

Further, many claimed premature success in finding logical contradictions in geometry lacking the parallel (Euclid's 5th) postulate, which meant they believed a 4-postulate geometry to be fundamentally false.


I imagine that elliptic geometry had some use before modern physics.

Yeah, even just trying to chart a course on a ship across a reasonable distance will force you to reevaluate some "obvious" things (like "what path is the shortest between these two ports" being a curve rather than a line).
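
A quick Python sketch of that (port coordinates rounded, spherical Earth assumed):

    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2, r=6371.0):
        """Great-circle distance between two points on a sphere of radius r."""
        p1, p2 = radians(lat1), radians(lat2)
        dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
        return 2 * r * asin(sqrt(a))

    # Lisbon -> New York: the shortest route arcs noticeably north of the
    # "straight line on the map" (constant-bearing) path, which is longer.
    print(f"{haversine_km(38.72, -9.14, 40.71, -74.01):.0f} km")  # ~5420 km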

The specific controversy was whether, without the parallel/5th postulate, there existed a logical contradiction, i.e. whether the parallel postulate could be proven from the remaining four.

Planck energy: ~10^19 GeV is approx 2 GJ per collision

Energy to vaporize Earth's oceans: ~4 x 10^27 J

For a Planck-scale linear collider at LHC-like collision rates (~10^8/sec):

Beam power requirement: ~2 x 10^17 W

With realistic wall-plug efficiency of ~1%: ~2 x 10^19 W

Annual energy consumption: ~6 x 10^26 J

At 1% efficiency, one year of operation would:

Vaporize about 15% of Earth's oceans

Or vaporize the Mediterranean Sea roughly 50 times

Or boil Lake Superior roughly every half hour

Or one complete ocean vaporization every 6-7 years of operation

It's about 1 million times current global power consumption

Or over 100 times the total sunlight falling on Earth

Or 170 billion Large Hadron Colliders operating simultaneously
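
If you want to check the arithmetic, the whole estimate fits in a few lines of Python (all inputs are the round numbers above):

    # Back-of-envelope check of the figures above (everything approximate)
    eV = 1.602e-19                       # joules per electronvolt
    planck_E = 1.2e19 * 1e9 * eV         # Planck energy: ~2e9 J per collision
    rate = 1e8                           # collisions/s, LHC-like
    wall_plug = planck_E * rate / 0.01   # ~2e19 W at 1% wall-plug efficiency
    annual = wall_plug * 3.15e7          # ~6e26 J per year of operation

    oceans = 1.4e21 * 2.6e6              # kg of seawater x J/kg to heat + vaporize
    print(f"annual energy: {annual:.1e} J")
    print(f"fraction of oceans vaporized per year: {annual / oceans:.0%}")
    # -> ~6e26 J/yr, roughly 15-20% of the oceans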


I prefer the short and snarky "then drop your pants". Shuts most people up.

Maybe "give me your bank PIN" for those that have been waiting for an excuse to drop their pants.

Neither of these is a good response; they're easily discounted by most people ("I don't take nude photos" / "I never tell anyone my PIN").

Average people see zero equivalence between sending nudes or their bank PIN to a specific stranger and Google keeping a record of every website they've ever visited.


Surely the "I never tell anyone my pin" is just an admission that they do have something to hide?

The point is that there are many things that should be kept private/secret and often the need for that secrecy isn't obvious to people who have never been in particular situations. A woman trying to escape from an abusive relationship may need to keep her location secret to avoid being murdered by her ex, but your typical white male who declares "nothing to hide" may have difficulty in understanding that, whereas they may be able to grasp why their PIN should be kept secret.


I understand the point, but you're taking their statement "I have nothing to hide" way too literally, and these sorts of arguments based on literal interpretations rarely convince anyone. Has anyone actually ever been convinced to take privacy more seriously by this "insight"?

It's not so much a method to convince people to take privacy more seriously as a demonstration that people who say "I have nothing to hide" haven't really thought about it, and that it's a ridiculous statement.

My aim would be to get people to understand that everyone has stuff that should be kept secret and that it varies according to their circumstances.


I was using "surely you won't mind a public webcam in your bedroom then?" too.

passing blocks of memory around vs referencing filesystem/database, ACLs, authentication and SSL

Ceph is far higher on RAM usage and complexity. Yeah, if you also need block storage it's a good choice, but for anything smaller than half a rack of devices it's kinda overkill.

Also, in our experience the docs outright lie about Ceph's OSD memory usage; we've seen double or more what the docs claim (8-10 GB instead of 4).


> Debian has been doing this for decades, yes, but it is largely a volunteer effort, and it's become a meme how slow Debian is to release things.

Which is a bit silly, considering that if you want fast, most packages land in testing/unstable pretty quickly.


But then you as a consumer/user of Debian packages need to stay on top of things when they change in backwards-incompatible ways.

I believe the sweet spot is a Debian-like stable base platform to build on, with similar commercial support on top for any dependencies you need more recent versions of.


> But then you as a consumer/user of Debian packages need to stay on top of things when they change in backwards-incompatible ways.

If you need latest packages, you have to do it anyway.

> I believe the sweet spot is Debian-like stable as the base platform to build on top of, and then commercial-support in a similar way for any dependencies you must have more recent versions on top.

That's if the company can build packages properly. Also, too-old OS deps sometimes throw a wrench in the works.

Tho frankly, "latest Debian Testing" has a far smaller chance of breaking something than "latest piece of software that couldn't figure out how to upstream to Debian".


The difference is between staying on stable and cherry-picking the latest for what you really do need, and being on everything latest.

The latter has a huge maintenance burden; the former is, as I said already, the sweet spot. (And let's not talk about combining stable/testing; any machine I tried that on quickly got into a non-upgradeable mess.)

I am not saying it is easy, which is exactly why I think it should be a commercial service that you pay for, so it can actually survive.


The docs and the marketing are a bit of a mess, tho I'm just gonna blame that on the culture barrier, as the devs are Chinese.

My small adventure with rustfs suggests it is somewhat underbaked at the moment.

And also it is already rigged for a rug-pull

https://github.com/rustfs/rustfs/blob/main/rustfs/src/licens...


yeah, their docs look pretty comprehensive, but there's a disturbing number of 404s that scream "not ready for prime-time" to me.

from https://rustfs.com/ if you click Documentation, it takes you to their main docs site. there's a nav header at the top, if you click Docs there...it 404s.

"Single Node Multiple Disk Installation" is a 404. ditto "Terminology Explanation". and "Troubleshooting > Node Failure". and "RustFS Performance Comparison".

on the 404 page, there's a "take me home" button...which also leads to a 404.


For anyone recently migrating from minio, some caveats:

* no lifecycle management of any kind - if you're using it for backups, you can't set "don't delete versions for 3 months", so if anyone gets hold of your key, your backups are gone. I relied on minio's lifecycle management for that, but it's a feature missing in garage (and, to be fair, most other S3 implementations)

* no automatic mirroring (if you want a second copy in something other than garage, or just don't want to run a cluster but rather have more independent nodes)

* ACLs for access are VERY limited - you can't restrict a key to only a sub-path, and you can't make a "master key" (AFAIK, couldn't find an option) that can access all the buckets, so the previous point is also harder - I can't easily use rclone to mirror the entire instance somewhere else unless I write a script iterating over buckets and adding them, bucket by bucket, to the key's ACL (see the sketch after this list)

* Web hosting features are extremely limited, so you won't be able to, say, set CORS headers for the bucket

* No ability to set keys - you can only generate one inside garage or import a garage-formatted one - which means you can't just migrate the storage itself; you have to re-generate every key. It also makes automation harder: with minio you can pre-generate a key and feed it to clients and to the minio key command, whereas here you have to do the dance of "generate with the tool" -> "scrape and put into DB" -> "put onto clients".

Overall I like the software a lot, but if you have a setup that uses those features, beware.
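
Since I mentioned scripting the per-bucket ACL dance: a rough Python sketch of what that looks like, shelling out to the garage CLI. The `bucket list` output format (and hence the parsing) varies between garage versions, so treat this as a sketch, not gospel:

    import subprocess

    KEY_ID = "GK..."  # placeholder: the key to turn into a de-facto master key

    # `garage bucket list` prints a header row and then one bucket per line;
    # the parsing below assumes that layout.
    out = subprocess.run(["garage", "bucket", "list"],
                         capture_output=True, text=True, check=True).stdout

    for line in out.splitlines()[1:]:
        fields = line.split()
        if not fields:
            continue
        bucket = fields[0]
        # grant the key read+write on this bucket
        subprocess.run(["garage", "bucket", "allow",
                        "--read", "--write", bucket, "--key", KEY_ID],
                       check=True)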


> no lifecycle management of any kind - if you're using it for backups, you can't set "don't delete versions for 3 months", so if anyone gets hold of your key, your backups are gone

If someone gets a hold of your key, can't they also just change your backup deletion policy, even if it supported one?


> If someone gets a hold of your key, can't they also just change your backup deletion policy, even if it supported one?

Minio has full-on ACLs, so you can just create a key that can only write/read but not change any settings like that.

So you just need to keep the "master key" that you use for setup away from potentially vulnerable devices; the "backup key" doesn't need those permissions.
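
For illustration, a minimal write/read-only policy could look like this (bucket and policy names are made up; generated with Python here and attached via `mc admin policy create`, which older mc releases spell `mc admin policy add`):

    import json

    # The backup key may put/get/list objects in one bucket and nothing else:
    # no admin actions, no lifecycle or retention changes.
    backup_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
            "Resource": ["arn:aws:s3:::backups", "arn:aws:s3:::backups/*"],
        }],
    }

    with open("backup-policy.json", "w") as f:
        json.dump(backup_policy, f, indent=2)

    # then: mc admin policy create myminio backup-rw backup-policy.json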


Technically, if you have 3 or more sources, that would be caught; the NTP protocol was designed for that eventuality.

> Technically, if you have 3 or more sources, that would be caught; the NTP protocol was designed for that eventuality

Either go with one clock in your NTPd/Chrony configuration, or ≥4.

Yes, if you have 3 they can triangulate, but if one goes offline now you have 2 with no tie-breaker. If you have (at least) 4 servers, then one can go away and triangulation / sanity-checking can still occur with the 3 remaining.
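
A toy Python sketch of that tie-breaker logic (the offsets, in seconds, and the tolerance are invented numbers):

    def falsetickers(offsets, tolerance=0.05):
        """Flag sources whose offset disagrees with a majority of the others."""
        bad = []
        for name, off in offsets.items():
            agree = sum(1 for n, o in offsets.items()
                        if n != name and abs(o - off) <= tolerance)
            if agree < (len(offsets) - 1) / 2:
                bad.append(name)
        return bad

    # 3 sources: the falseticker is the odd one out...
    print(falsetickers({"a": 0.001, "b": -0.002, "c": 4.2}))  # ['c']
    # ...but lose one good source and the survivors just point at each other.
    print(falsetickers({"b": -0.002, "c": 4.2}))              # ['b', 'c']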


You probably meant trilaterate.

Sure, but not needing a failure to cascade to yet another failsafe is still a good idea. After all, all software has bugs, and all networks have configuration errors.

> WALs, and related low-level logging details, are critical for database systems that care deeply about durability on a single system. But the modern database isn’t like that: it doesn’t depend on commit-to-disk on a single system for its durability story. Commit-to-disk on a single system is both unnecessary (because we can replicate across storage on multiple systems) and inadequate (because we don’t want to lose writes even if a single system fails).

And then a bug crashes your database cluster all at once, and now instead of missing seconds you're missing minutes, because some smartass thought "surely if I send the request to 5 nodes, some of it will land on disk in the reasonably near future?"

I love how this industry invents best practices that are actually good, and then people just invent badly researched reasons to... not do them.


> "surely if I send request to 5 nodes some of that will land on disk in reasonably near future?"

That would be asynchronous replication. But IIUC the author is instead advocating for a distributed log with synchronous quorum writes.
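
Roughly the difference, as a toy Python sketch; `append_and_fsync` is a hypothetical replica method that returns only once the record is actually on that node's disk:

    import concurrent.futures as cf

    def quorum_write(replicas, record, quorum):
        """Synchronous quorum write: ack only after `quorum` replicas
        report the record durable on disk, not merely received."""
        pool = cf.ThreadPoolExecutor(max_workers=len(replicas))
        futures = [pool.submit(r.append_and_fsync, record) for r in replicas]
        acks = 0
        try:
            for fut in cf.as_completed(futures):  # real systems add timeouts
                if fut.exception() is None:
                    acks += 1
                if acks >= quorum:
                    return True   # durable on a quorum: safe to ack the client
            return False          # quorum unreachable: surface the failure
        finally:
            pool.shutdown(wait=False)  # stragglers may finish in the background

    # With 5 replicas and quorum=3, one slow or dead node never blocks a
    # commit, but no write is ever acknowledged on memory alone.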


But we know this is not actually robust because storage and power failures tend to be correlated. The most recent Jepsen analysis again highlights that it's flawed thinking: https://jepsen.io/analyses/nats-2.12.1

The Aurora paper [0] goes into detail on correlated failures.

> In Aurora, we have chosen a design point of tolerating (a) losing an entire AZ and one additional node (AZ+1) without losing data, and (b) losing an entire AZ without impacting the ability to write data. [..] With such a model, we can (a) lose a single AZ and one additional node (a failure of 3 nodes) without losing read availability, and (b) lose any two nodes, including a single AZ failure and maintain write availability.

As for why this can be considered durable enough, section 2.2 gives an argument based on their MTTR (mean time to repair) for storage segments:

> We would need to see two such failures in the same 10 second window plus a failure of an AZ not containing either of these two independent failures to lose quorum. At our observed failure rates, that’s sufficiently unlikely, even for the number of databases we manage for our customers.

[0] https://pages.cs.wisc.edu/~yxy/cs764-f20/papers/aurora-sigmo...


I believe in testing over paper claims.

The biggest lie we've been told is that databases require global consistency and a global clock. Traditional databases still operate under Newtonian assumptions of absolute time, while the real world moves according to Einstein's relativity, where time is local and relative. You don't need global order, and you don't need a global clock.

You need a clock but you can have more than one. This is an important distinction.

Arbitrating differences in relative ordering across different observer clocks is what N-temporal databases are about. In databases we usually call the basic 2-temporal case “bitemporal”. The trivial 1-temporal case (which is a quasi-global clock) is what we call “time-series”.

The complexity is that N-temporality turns time into a true N-dimensional data type. These have different behavior than the N-dimensional spatial data types that everyone is familiar with, so you can’t use e.g. quadtrees as you would in the 2-spatial case and expect it to perform well.

There are no algorithms in literature for indexing N-temporal types at scale. It is a known open problem. That’s why we don’t do it in databases except at trivial scales where you can just brute-force the problem. (The theory problem is really interesting but once you start poking at it you quickly see why no one has made any progress on it. It hurts the brain just to think about it.)
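
To make the 2-temporal case concrete, a toy Python sketch (field names and the integer "years" are purely illustrative): one axis records when a fact was true in the world (valid time), the other when the system learned it (transaction time).

    from dataclasses import dataclass

    @dataclass
    class Fact:
        key: str
        value: str
        valid_from: int        # when the fact became true in the world
        valid_to: int | None   # None = still true
        tx_from: int           # when the system recorded this row
        tx_to: int | None      # None = not yet superseded

    INF = 10**9  # sentinel standing in for "open-ended"

    def as_of(facts, key, valid_t, tx_t):
        """What did we believe at tx_t about the value in effect at valid_t?"""
        for f in facts:
            if (f.key == key
                    and f.valid_from <= valid_t < (f.valid_to or INF)
                    and f.tx_from <= tx_t < (f.tx_to or INF)):
                return f.value
        return None

    facts = [
        Fact("salary", "50k", 2020, None, 2020, 2023),
        # In 2023 we learn of a 2022 raise: rewrite valid-time history
        # without destroying the record of what we used to believe.
        Fact("salary", "50k", 2020, 2022, 2023, None),
        Fact("salary", "60k", 2022, None, 2023, None),
    ]
    print(as_of(facts, "salary", valid_t=2022, tx_t=2022))  # 50k (belief then)
    print(as_of(facts, "salary", valid_t=2022, tx_t=2024))  # 60k (corrected)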


Till the financial controller shows up, at the very least.

Also, even when it's not required, it makes reasoning about how systems work a hell of a lot easier. So for the vast majority that doesn't need massive throughput, sacrificing some speed for an easier-to-understand consistency model is a worthy tradeoff.


Financial systems, of all things, don't actually care about time.

Pretty much all financial transactions are settled as of a given date, not instantly. Go sell some stocks: it takes a couple of days to actually settle. (This may be hidden by your provider, but that's how it works.)

For that matter, the ultimate in BASE for financial transactions is the humble check.

That is a great example of "money out" that will only be settled at some time in the future.

There is a reason for the notion of a "business day", and for re-processing transactions that arrived out of order.


The deeper problem isn't global clocks or even strict consistency; it's the assumption that synchronous coordination is the default mechanism for correctness. That's the real Newtonian mindset: a belief that serialization must happen before progress is allowed. Synchronous coordination can enforce correctness, but it should not be the only mechanism to achieve it. Physics actually teaches the opposite assumption: time is relative and local, not globally ordered. Yet traditional databases were designed as if absolute time and global serialization were fundamental laws rather than conveniences. We treat global coordination as inevitable when it's really just a historical design choice, not a requirement for correctness.

That's why we use UUIDv7 primary keys. Relativity be damned, our replication strategy does not depend upon the timestamp factor.
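
For reference, the v7 layout is simple enough to hand-roll; a minimal sketch per RFC 9562 (newer CPython also ships a built-in `uuid.uuid7`, but spelling it out shows why the ordering property holds):

    import os, time, uuid

    def uuid7() -> uuid.UUID:
        """Minimal RFC 9562 UUIDv7: 48-bit Unix-ms timestamp + 74 random bits."""
        ts_ms = time.time_ns() // 1_000_000
        rand_a = int.from_bytes(os.urandom(2), "big") & 0x0FFF           # 12 bits
        rand_b = int.from_bytes(os.urandom(8), "big") & ((1 << 62) - 1)  # 62 bits
        value = (ts_ms & ((1 << 48) - 1)) << 80  # timestamp in the top bits
        value |= 0x7 << 76                       # version = 7
        value |= rand_a << 64
        value |= 0b10 << 62                      # RFC 4122/9562 variant
        value |= rand_b
        return uuid.UUID(int=value)

    # Because the timestamp leads, ids sort by creation time (to ms
    # granularity) regardless of which node minted them - no coordination
    # needed beyond roughly-synced wall clocks.
    print(sorted(uuid7() for _ in range(3)))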

Happens all the time (ignoring best practices because it's convenient, or doing something different "just because"), literally everywhere, including normal society.

Frankly, it’s shocking anything works at all.

