Hacker News | aliceryhl's comments

Where I'm from, it probably would not be stolen by anyone.

How?


Verizon unlimited plans will be about that after taxes and fees for two lines.

Add in phones being financed and you’re easily over $200/mo direct with a carrier.


It's trivial to implement an async runtime in the kernel. The kernel's workqueue is already essentially a runtime.


I was about to take offence at the use of “trivial” in this context. But then I noticed your handle, lol. You have the license to say that, thanks for your contributions!


It never made it into upstream Linux, but there is already a sample implementation that Wedson wrote in 2022: https://github.com/Rust-for-Linux/linux/pull/798


Won't that be an eager runtime though? Breaking Rust's assumption that futures do nothing until polled? Unless you don't submit it to the queue until the poll call, I guess


It won't be different from Tokio. When you pass a future to tokio::spawn, that will also eagerly execute the future right away.


> IIRC Alice from the tokio team also suggested there hasn't been much interest in pushing through these difficulties more recently, as the current performance is "good enough".

Well, I think there is interest, but mostly for file IO.

For file IO, the situation is pretty simple. We already have to implement that using spawn_blocking, and spawn_blocking has the exact same buffer challenges as io_uring does, so translating file IO to io_uring is not that tricky.

On the other hand, I don't think tokio::net's existing APIs will support io_uring. Or at least they won't support the buffer-based io_uring APIs; there is no reason they can't register for readiness through io_uring.


This covers probably 90% of the usefulness of io_uring for non-niche applications. Its original purpose was doing buffered async file IO without a bunch of caveats that make it effectively useless. The biggest speedup I’ve found with it is ‘stat’ing large sets of files in the VFS cache. It can literally be 50x faster at that, since you can do 1000 files with a single system call and the data you need from the kernel is all in memory.

High throughput network usecases that don’t need/want AF_XDP or DPDK can get most of the speedup with ‘sendmmsg/recvmmsg’ and segmentation offload.


For TCP streams, syscall overhead isn't really a big issue: you can easily transfer large chunks of data in each write(). If you have TCP segmentation offload available, you'll have no serious issues pushing 100gbit/s. Also, if you are sending static content, don't forget sendfile().

UDP is a whole other kettle of fish; it gets very complicated to go above 10gbit/s or so. This is a big part of why QUIC really struggles to scale well for fat pipes [1]. sendmmsg/recvmmsg + UDP GRO/GSO will probably get you to ~30gbit/s, but beyond that is a real headache. The issue is that UDP is not stream focused, so you're making a ton of little writes, and the kernel networking stack as of today does a pretty bad job with these workloads.

FWIW even the fastest QUIC implementations cap out at <10gbit/s today [2].

Had a good fight writing a ~20gbit userspace UDP VPN recently. Ended up having to bypass the kernel's networking stack using AF_XDP [3].

I'm available for hire btw, if you've got an interesting networking project feel free to reach out.

1. https://arxiv.org/abs/2310.09423

2. https://microsoft.github.io/msquic/

3. https://github.com/apoxy-dev/icx/blob/main/tunnel/tunnel.go


Yeah all agreed - the only addendum I’d add is for cases where you can’t use large buffers because you don’t have the data (e.g. realtime data streams or very short request/reply cycles). These end up having the same problems, but are not soluble by TCP or UDP segmentation offloads. This is where reduced syscall overhead (or even better kernel bypass) really shines for networking.


I have a hard time believing that google is serving YouTube over QUIC/HTTP3 at 10Gbit/s, or even 30Gbit/s.


These are per-connection bottlenecks, largely due to implementation choices in the Linux network stack. Even with vanilla Linux networking, vertical scale can get the aggregate bandwidth as high as you want if you don’t need 10G per connection (which YouTube doesn’t), as long as you have enough CPU cores and NIC queues.

Another thing to consider: Google’s load balancers are all bespoke SDN and they almost certainly speak HTTP1/2 between the load balancers and the application servers. So Linux network stack constraints are probably not relevant for the YouTube frontend serving HTTP3 at all.


I'm quite careful to tightly control the dependencies of Tokio. All dependencies are under control by members of the Tokio team or others that I trust.


The caption for the picture says this:

> The six planets orbit their central star HD 110067 in a harmonic rhythm with planets aligning every few orbits.


Regarding the review process ... one thing that I find challenging and don't know a good solution to is documentation. I've received many PRs where the change itself is fine, but the PR is dragging out because the documentation is lacking, and getting the PR author to improve it sometimes takes a lot of review rounds.

What would you do to avoid this?

Sometimes the same situation comes up with tests, but it is not as common in my experience.


Get people to write the docs first. Not many people like writing docs after the fact, and much of the value of working documentation is lost if you do it after the implementation.

Assuming we’re not talking about user-guide kinds of docs, a major benefit of writing docs first is to clarify your thinking. Being able to explain your intent in the written word is valuable because you will often uncover gaps in your thinking. This applies to a specification, or to acknowledging problem reports and updating them with theories on the cause of the problem and an approach to confirming or fixing it. You can even reference that problem report in commits and merge requests. It’s pretty beneficial all around.

And docs don’t have to be masterpiece works of art. Just getting people to clarify intent is a huge win. Peer reviewers don’t have time to do a super deep dive into code. If they know what you intended the code to do, that’s something many reviewers can check pretty quickly without having to know much context.

It’s selfish and naive to disregard basic documentation of intent.


One option would be to take an initial stab at the documentation yourself - that makes it clear to the submitter where things are unclear, because you made mistakes or omitted things, and they can just correct that, which is a lot more feasible to do than figuring out what's important while your head is in the code.


Make it clear that documentation is part of the code. Missing or poor docu = code not acceptable.


It's well known that what is being stabilized today is lacking the Send bounds stuff. In fact, there was a lot of discussion about whether they should completely block this feature until the Send bounds stuff was ready. Ultimately I think it is good that they shipped this part of the feature even though the other part isn't ready yet - Tower isn't able to use this yet, but other crates can.


Yes Alice, fully agreed. I do understand this. I just want to share my experience as a warning to others, as what is shipped is nothing less than amazing. It would be a shame if this resulted in disappointment due to wrong expectations, as happened to me. I’m however very grateful for what is already there.


If you pay too much in bounties, you risk having your own red-team employees leave so that they can report bugs externally and get paid much more via bounties.


I'm surprised that CLN-003 made the list even as low severity. It's intended to make reverse engineering of the binary harder, but the code is already freely accessible (and CLN-003 also acknowledges this).


That vulnerability seems like something added to adhere to the rule of three. In Western culture at least, we have this ingrained preference for groups of three: the Trinity, the three-point outline, three sentences in a paragraph, etc.

It seems like this was picked to end up with 3 vulnerabilities so the security researchers can feel they did a complete job.


I see it as a note to be exhaustive. It’s the kind of thing where, if you don’t add it to your report, some smart ass WILL say something like “actually they forgot about the bin symbols, how could they miss this?”. There’s always someone like that.

