Is a "network request" a synchronous or asynchronous activity? It depends on whether your code blocks and waits for the response (or a timeout), or continues executing and handles the response when it arrives. It's a property of the "attention" of the caller, not of the activity itself.
But "can be gracefully executed asynchronously" is a property of an action.
Something that pegs the CPU at 100% because it's doing intense processing isn't a good candidate for async. Similarly, some code can cause issues when written in a non-streaming fashion. Take the following example (in Python):
x = [x for x in range(1_000_000_000)]
y = (x for x in range(1_000_000_000))
If you spawn a bunch of async workers doing the first one concurrently, you'll OOM your system. If you spawn a bunch of async workers doing the second one concurrently, your system will be fine.
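You can see the difference without actually OOM-ing anything; here's a minimal sketch using a much smaller range than the billion-element original:

```python
import sys

# The list comprehension materialises every element up front,
# so its memory footprint grows with the range.
eager = [n for n in range(1_000_000)]

# The generator expression only stores its iteration state;
# elements are produced one at a time as the consumer asks.
lazy = (n for n in range(1_000_000))

list_bytes = sys.getsizeof(eager)  # several megabytes
gen_bytes = sys.getsizeof(lazy)    # a couple hundred bytes at most
```

Scale the range back up to a billion and the list's footprint grows a millionfold while the generator's stays constant, which is exactly why a swarm of workers survives the second version but not the first.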
In other words, "async" is a label on a box of donuts that implies (though doesn't ensure, people can of course still do bad things) that the donut won't explode if you look away from it.
Hmm, is this a post-factum rationalization, or is it the original logic behind async/await? "Let's mark 'heavy' functions with a label, so the user has to call them differently and doesn't overload the system"?
Even if this is the original logic, why is the language deciding for me what counts as heavy? What if I'm fully aware that I'm doing heavy processing and I want it to happen in the background? What if I'm writing HFT software and every call is heavy for me? The language is not the right level of abstraction for marking the "heaviness" of code.
It really just doesn't make sense. Why stop there and not start marking functions with how many times per second they can be called? You can call a "light" function 100 trillion times per second and OOM the system, so let's mark it "func light() async 2_times_per_second {}". It can only be called from functions with a lower per-second label.
Plus, if Python cared so much about not OOM-ing the system, I would start by not requiring a 32-bit int to occupy 28 bytes in the first place.
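(That 28-byte figure is easy to check on a 64-bit CPython build:)

```python
import sys

# On 64-bit CPython, even a tiny integer carries object-header
# overhead (type pointer, refcount, size field) on top of its digits,
# which is where the 28 bytes come from.
small_int_bytes = sys.getsizeof(1)
```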
>is it a post-factum rationalization or it's the original logic behind async/await?
Well, I'm not that deeply versed in any language's history, but I imagine any 15+ year old language had a point where it needed asynchronous functionality but had to work around legacy code. I don't think Javascript would have been written the way it was in the 90's if asynchronous operations had been a mandatory priority.
So it's probably a lot more "this is how we hack this in without creating Python 3.0". Relatively elegant to make it an optional part of the language that you explicitly choose to delve into, instead of a core feature that would have broken thousands of sites behind the scenes if you made it "right".
>why language is deciding for me what is considered heavy or not?
because we decided decades ago that we didn't like using fork(), nor creating/destroying processes ourselves. those are problems that would inevitably need multiple solutions when supporting multiple platforms, because they aren't language features so much as OS calls made from a language that you may or may not be writing in.
Remember, at the end of the day, that languages are abstractions on top of an OS. Any interesting idea for a problem involving more than a single process space is bound by what the OS's API lets you do. It's arbitrary, but also based on historical problems that had to be wrangled.
So we simply haven't had enough problems that require bounding a function by how often it can be called. We have had enough that require interacting with all the OS/language problems above.
It only makes sense for functions that "block and wait" to be async: blocking and waiting doesn't use up CPU cycles, so you might as well free the thread up to do other stuff. If a function issues a socket write, then continues utilising the CPU while periodically checking whether data is available to read, it's effectively doing manually what async does for you, though I'm not sure there are many real-life examples of such functions. But pre-async, it was certainly common enough for block-and-wait functions to tie up a thread, and hence tie up execution in programs with limited multi-threading (even today, GUIs often require all events and updates to be processed on the primary thread).
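That "free the thread while waiting" behaviour is easy to see in a minimal asyncio sketch, with asyncio.sleep standing in for a blocking network wait:

```python
import asyncio
import time

async def fake_request(name: str, delay: float) -> str:
    # While this coroutine awaits, the event loop is free to run
    # other coroutines -- nothing burns CPU during the wait.
    await asyncio.sleep(delay)
    return name

async def main():
    start = time.perf_counter()
    # Two 0.2 s "requests" overlap, so total wall time is
    # roughly 0.2 s, not the 0.4 s a sequential version would take.
    results = await asyncio.gather(fake_request("a", 0.2),
                                   fake_request("b", 0.2))
    return results, time.perf_counter() - start

results, elapsed = asyncio.run(main())
```

If the waits had instead been CPU-bound loops, the coroutines could not have overlapped like this, which is the commenter's point about what is and isn't a good candidate for async.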
But consider a network request. The vast majority of the time is not spent in the CPU.
Right?