Hacker News | CJefferson's comments

For a start, it tells you the engine can actually be used to make a full finished game — which with hobby game engines isn’t a guarantee. If you want me to use an engine, I’d like at least one finished game, preferably even released on Steam.

I agree, by this argument why teach any child 2+2? This has been performed perfectly by computers for years.

I can't be sure, but this sounds entirely possible to me.

There are many, many people, and websites, dedicated to roleplaying, and those people will often have conversations lasting thousands of messages with different characters. I know people whose personal 'roleplay AI' budget is $1,000/month, because they want the best-quality AIs.


The world doesn’t consider it reasonable for businesses to sell beer to kids, and then expect us all to constantly follow our kids around to make sure they don’t get beer. Bars don’t get to say ‘whoops, we got thousands of 9-year-olds drunk, their parents should keep an eye on them’.

And at this point, most kids, and most people, spend more time online than outside walking around.


> Bars don’t get to say ‘whoops, we got thousands of 9-year-olds drunk, their parents should keep an eye on them’.

Because there's no downside whatsoever in requiring bars not to serve children (assuming the goal is simply not giving alcohol to children); online age checks, by contrast, have very big negative consequences for the whole populace.


I really like slides.com, a web front end to reveal.js. I’ve used it for a few things, and it lets you export the reveal.js HTML and JavaScript, so you know you won’t lose your work.

It’s not perfect, but as you say, whenever I’ve made a slide deck outside a gui I’ve regretted it. Quarto is better for documents, but still has rough edges.


Given most machines have cores to spare, and people want answers faster, is that a bad thing?


I think the complaint is that the C version isn’t multithreaded, ignoring that Rust makes it much easier to write a correct multithreaded implementation. OP is also conveniently ignoring that the Rust ports I referenced Russinovich talking about are MS-internal code bases where it’s a 1:1 port, not a rearchitecture or an attempt to improve performance. Better defaults, the no-aliasing guarantees the compiler takes advantage of, and automatic struct layout optimization largely explain why it ends up being 5-20% faster having done nothing other than rewrite it.

But critics often never engage with the actual data and just get knee-jerk defensive.


Honestly, if these languages are only winning by 25% in microbenchmarks, where I’d expect the difference to be biggest, that’s a strong point in Java’s favour for me. I didn’t realise it was so close, and I hate async programming, so I’m definitely not doing it for an, at most, 25% boost.


It’s not only about the languages, but also about runtimes and libraries. The Vert.x verticles are reactive, and Java devrel folks are pushing everyone from reactive to virtual threads now; you won’t see those perform in that ballpark. If you look at the bottom of the benchmark results table, you’ll find Spring Boot (servlets, and a bit higher Reactor), together with Django (Python). So “Java” in practice is different from niche Java. And if you look inside the codebase, you’ll see the JVM options. In addition, they don’t directly publish CPU and memory utilization. You can extract it from the raw results, but it’s inconclusive.

This stops short of actually validating the benchmark payloads and hardware against your specific scenario.


> So “Java” in practice is different from niche Java.

This is an odd take, especially in a discussion of Rust. In practice, projects using Rust as an HTTP server backend are nearly non-existent in comparison. Does that mean we get to write off the Rust benchmarks too?

Java performs, as shown by the benchmarks.


I don’t understand what you’re saying. Typical Java is Spring Boot; typical Rust is Axum and Actix. I don’t see why it would make sense to push the argument ad absurdum. Vert.x is not typical Java; it’s not easy to get right. But the Java ecosystem profits from Netty in terms of performance, which does the best it can to avoid the JVM, the runtime system. And it’s not always about “HTTP servers”, though that’s what that TechEmpower benchmark’s subject matter is - frameworks, not just languages.

Your last sentence reads like an expression of faith. I’ll only remark that performance is relative to one’s project specs.


In some of those benchmarks, Quarkus (which is very much "typical Java") beats Axum, and there's far more software being written in "niche Java" than in "typical Rust". As for Netty, it's "avoiding the JVM" (standard library, really) less now, and to the extent that it still does, it might not be working in its favour. E.g. we've been able to get better results with plain blocking code and virtual threads than with Netty, except in situations where Netty's codecs have optimisations done over many years that could equally have been applied to ordinary Java blocking code (as I'm sure they will be in due time).


Hey Ron, I’ve got deep respect for what you do and appreciate what you’re sharing, that’s definitely good to know. And I understand that many people take any benchmark as a validation for their beliefs. There are so many parameters that are glossed over at best. More interesting to me is the total cost of bringing that performance to production. If it’s some gibberish that takes a team of five a month to formulate and then costs extra CPU and RAM to execute, and then becomes another Perlesque incantation that no one can maintain, it’s not really a “typical” thing worth consideration, except where it’s necessary, scoped to a dedicated library, and the budget permits.

I don’t touch Quarkus anymore for a variety of issues. Yes, sometimes it’s Quarkus ahead, sometimes it’s Vert.x, from what I remember it’s usually bare Vert.x. It boils down to the benchmark iteration and runtime environment. In a gRPC benchmark, Akka took the crown in a multicore scenario - at a cost of two orders of magnitude more RAM and more CPU. Those are plausible baselines for a trivial payload.

By Netty avoiding the JVM I referred mostly to its off-heap memory management, not only the JDK APIs you guys deprecated.

I’m deeply ingrained in the Java world, but your internal benchmarks rarely translate well to my day-to-day observations. So I’m quite often a bit perplexed when I read your comments here and elsewhere, or watch your talks. Without pretending to comprehend the JVM at a level comparable to yours, in my typical scenarios I do quite often manage to get close to the throughput of my Rust and C++ implementations, albeit at a much higher CPU and memory cost. Latency and throughput at once is a different story, though. I genuinely hope that one day Java will become a platform for more performance-oriented workloads, with less nondeterminism. I really appreciate your efforts toward introducing more consistency into the JDK.


I didn’t make the claim that it’s worth it. But when it is absolutely needed, Java has no solution.

And remember, we’re talking about a very niche and specific I/O microbenchmark. Start looking at things like SIMD (currently, at least - I know Java is working on it), or at more compute-bound work in general, and the gap will widen. Java still doesn’t have the tools to write really high-performance code.


But it does. Java already gives you direct access to SIMD, and the last major hurdle to 100% of hardware performance with idiomatic code, flattened structs, will be closed very soon. The gap has been closing steadily, and there's no sign of change in the trend. Actually, it's getting harder and harder to find cases where a gap exists at all.


It is called JNI, or Panama nowadays.

Too many people go hard on must be 100% pure, meanwhile Python is taking over the AI world, via native library bindings.
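The "native library bindings" point can be made concrete with a few lines of ctypes. This is only an illustrative sketch of the FFI mechanism, at a far smaller scale than what AI libraries do, and it assumes a platform where the C math library can be located (the `libm.so.6` fallback is a Linux-specific assumption):

```python
# Minimal sketch of calling a native library from Python via ctypes,
# the same binding mechanism AI libraries build on at a larger scale.
import ctypes
import ctypes.util

# find_library locates the C math library; fall back to the common
# Linux soname if the lookup fails (a platform assumption).
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare the C signature: double cos(double)
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # -> 1.0
```

The Python code stays "impure", but all the heavy lifting happens in compiled native code, which is exactly the trade-off described above.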


That’s simplistic. I maintain some large C and Rust programs, and I find Unix/Windows issues much easier to fix in Rust than in C, and supporting Linux and Windows well matters a lot more than some obscure CPU Rust doesn’t support.


While I’m sure it’s much more advanced, out of interest: is this similar to the Python tool ‘fabricate’, which used strace to track all the files a program read and wrote?


I used to use a Python program called ‘fabricate’ which did this. If you track every file a compiler opens, then if the same compiler is run with the same flags and no input has changed, you can just drop a cached copy of the outputs in place.

I’m actually disappointed this type of thing never caught on; it’s fairly easy on Linux to track every file a program accesses, so why do I need to write dependency lists?
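The approach can be sketched in a few lines of Python. This is only a rough illustration of the strace-based technique, in the spirit of ‘fabricate’; the function names are illustrative, not fabricate’s actual API, and it assumes a Linux system with strace installed:

```python
# Sketch: discover a command's file dependencies by running it under
# strace and parsing the open/openat syscalls from the trace log.
import re
import subprocess
import tempfile

OPEN_RE = re.compile(r'open(?:at)?\(.*?"([^"]+)"')

def parse_opened_files(trace_text):
    """Extract successfully opened paths from strace output lines like:
    openat(AT_FDCWD, "foo.c", O_RDONLY) = 3"""
    opened = set()
    for line in trace_text.splitlines():
        m = OPEN_RE.search(line)
        if m and " = -1 " not in line:  # skip failed opens (e.g. ENOENT)
            opened.add(m.group(1))
    return opened

def traced_files(cmd):
    """Run `cmd` under strace -f and return the set of files it opened."""
    with tempfile.NamedTemporaryFile(mode="r", suffix=".trace") as log:
        subprocess.run(
            ["strace", "-f", "-e", "trace=open,openat", "-o", log.name, *cmd],
            check=True,
        )
        return parse_opened_files(log.read())
```

A build tool could then hash every path returned by `traced_files(["cc", "-c", "foo.c"])`; if the hashes match a previous run with the same command line, the cached outputs can be reused without any hand-written dependency list.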

