Theoretically yes; in practice, not really (any more). NVMe drives are fast enough that a single core traversing the file system can itself become the bottleneck, so parallelisation helps a lot here.
I also should have made it clear that my comment wasn't so much about the search (although parallel search is absolutely a nice-to-have); it was about `-x, --exec` running in parallel automatically.
A common use case is to find all files matching some criteria and then perform the same operation on each of them, e.g. find all session logs older than N days and compress them, or convert all wav files in a directory tree to mp3.
If the operation is computationally expensive, using more than one core speeds things up considerably. With `find`, the way to do that is to pipe the output to GNU parallel.
With `fd` I can just use `-x, --exec` and it automatically spins up threads to handle the operations, unless instructed not to.
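For concreteness, here's roughly what the two workflows look like. This is an illustrative sketch, not a drop-in script: the paths and file ages are made up, and you should check the flag details against your installed versions of `find`, `parallel`, and `fd`.

```shell
# find is single-threaded by itself; fan the work out via GNU parallel.
# Compress session logs older than 30 days (null-delimited to survive
# odd filenames):
find /var/log/sessions -name '*.log' -mtime +30 -print0 \
  | parallel -0 gzip

# fd's -x/--exec already runs one command per result across a thread
# pool, so no external tool is needed:
fd -e log --changed-before 30d . /var/log/sessions -x gzip

# Convert all wav files in a tree to mp3
# ({} is the path, {.} is the path without its extension):
fd -e wav -x ffmpeg -i {} {.}.mp3

# To opt out of the parallelism, restrict the thread count:
fd -e log . /var/log/sessions -x gzip -j 1
```

The nice part is that the `fd` invocations read exactly like the serial version; the parallelism is the default rather than something you bolt on.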