From the article, they claim that the parameter names are easier to understand and therefore easier to enter correctly. As I've not used either lib directly, I'll have to take their word for it.
Personally I would avoid this kind of npm-esque micro-dependency in my Python projects. I’ve written essentially the same decorator for many of my projects; it takes ten minutes and a few dozen lines of code.
Would you really want to roll your own exponential backoff w/jitter for every project though? It's not the easiest to get right, and since you're looping in a somewhat complicated way, it's definitely prone to accidental infinite loops in my experience.
The library posted here is only ~150 LOC without docstrings, and it's more complicated than my typical equivalent decorator factory with basically the same features, sans logging.
I just don’t find this tricky. If only every problem I need to solve were as simple as this one.
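For concreteness, a minimal sketch of the kind of decorator I mean: exponential backoff with full jitter and a hard cap on attempts so it can't loop forever. All names here are illustrative, not any particular library's API.

    import functools
    import random
    import time

    def retry(exceptions, max_attempts=5, base_delay=0.5, max_delay=30.0):
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                for attempt in range(1, max_attempts + 1):
                    try:
                        return func(*args, **kwargs)
                    except exceptions:
                        if attempt == max_attempts:
                            raise  # out of attempts: re-raise the last error
                        # Full jitter: sleep a random amount up to the exponential cap.
                        time.sleep(random.uniform(0, min(max_delay, base_delay * 2 ** attempt)))
            return wrapper
        return decorator

    @retry((ConnectionError, TimeoutError))
    def fetch_data():
        ...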
> Is this more symptom of Python deps being "riskier" than Haskell deps due to types or versioning or something?
No, it’s a distaste for tons of small third-party packages that could easily be replaced with in-place code. If anything, I found Haskell deps more prone to breakage on upgrades than Python deps (granted, my Haskell experience is limited), at least with the set of popular packages I tend to use. Part of that is that Python packages tend not to depend on a ton of other packages.
The latter — not necessarily PVP constraint violations though, but rather dependency hell situations. One example that pops to mind: packaging git-annex for Homebrew was always “great fun” until we settled on resolving with LTS Haskell (IIRC).
Any dependency is also a risk. What if the next version contains not just a nice feature that you want but also a nasty, subtle bug that you only notice later? Maintenance of a small library is also by no means guaranteed to stay at the same level of quality. Generally, one is much safer with larger, more serious libraries. I would say one should only take on dependencies that provide a fairly serious set of features that would cost a lot of time to implement oneself.
Not at all, I think they just have some notion that npm = bad and npm = (small) deps, so (small) deps = bad.
I agree, it's harder to get right than it seems at first glance. The number of implementations I've seen with broken or not-honoured timeouts...
Python also had a retrying library; apparently it's no longer maintained. I've used the tenacity "fork" [1], which has also evolved and introduced some great features, like combinable and chainable wait/backoff strategies.
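For example, tenacity's wait strategies compose with + and can be chained with wait_chain (the numbers here are just illustrative):

    from tenacity import retry, stop_after_attempt, wait_chain, wait_fixed, wait_random

    # Strategies compose with "+": a fixed base delay plus random jitter.
    @retry(stop=stop_after_attempt(5), wait=wait_fixed(2) + wait_random(0, 1))
    def flaky_call():
        ...

    # wait_chain switches strategies as attempts progress:
    # 1s for the first three retries, then 5s thereafter.
    @retry(wait=wait_chain(*([wait_fixed(1)] * 3), wait_fixed(5)))
    def another_flaky_call():
        ...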
> Not at all, I think they just have some notion that npm = bad and npm = (small) deps, so (small) deps = bad.
No, I don’t have that notion. Obviously not every package on npm is small or trivial, but the abuse of trivial packages is most well-known and most disastrous in the npm ecosystem.
Edit: My god, iOS autowrong is all the rage today.
The thing that always gets me with retries is error handling.
Which errors should be considered retryable, and which should be raised immediately?
Do you log a retryable error? I've had systems mysteriously "hang" before which, on closer inspection, were retrying infinitely with a retry library that didn't log anything...
And if you do exceed your max retries, which error do you raise? The first one? The last one? There's no guarantee they'll be the same. A completely different "TooManyRetries" error?
These are the concerns I'd like to see addressed in a high-level overview of a library like this one.
Also: what if you get a different retriable error every time you retry? Should you have a different retry count for each distinct retriable error? And when you finally give up, should you report _all_ of the distinct errors, or just the earliest or latest one, or ...
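One possible answer to the "which error do you raise?" question, sketched with Python 3.11's ExceptionGroup. Everything here, including what counts as retryable, is an illustrative assumption:

    import random
    import time

    RETRYABLE = (ConnectionError, TimeoutError)  # assumption: the retryable set

    def call_with_retries(func, attempts=5, base_delay=0.1):
        errors = []
        for attempt in range(1, attempts + 1):
            try:
                return func()
            except RETRYABLE as exc:
                errors.append(exc)
                # Log every retryable failure so a stuck retry loop stays visible.
                print(f"attempt {attempt} failed: {exc!r}")
                if attempt < attempts:
                    time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))
        # Report *all* the failures instead of picking the first or the last.
        raise ExceptionGroup(f"all {attempts} attempts failed", errors)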
It would be interesting to read more specific comparisons with existing mature libraries, e.g. backoff [0]. The feeling I get from reading the motivation behind the new library is that most of it could be achieved with backoff plus a custom delegating decorator that remaps parameter names and does extra checks where necessary.
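For reference, backoff's decorator covers the common case in a few lines (the exception type and limits below are just illustrative):

    import backoff
    import requests

    @backoff.on_exception(
        backoff.expo,                          # exponential backoff...
        requests.exceptions.RequestException,  # ...on this exception type
        max_tries=8,
        jitter=backoff.full_jitter,            # full jitter on each wait
    )
    def fetch(url):
        return requests.get(url, timeout=10)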
The vowel combinations always trip up foreigners. Nieuw is Dutch for 'new', and it is pronounced essentially the same; the 'ew' sound basically transliterates to 'ieuw'. Op ('on') combined with nieuw basically means 'on new', or 'again' in proper English.
Still seems like a decent company. Nice that you take the time to deal with technical problems properly instead of just doing the whole 'plough forward even though you're just ploughing air' thing that happens just about everywhere.
If you're using something like this, the details of when, what, and how you retry are probably something you care about, so having the library be explicit about it, making it possible to know what a parameter means without having to look at the docs, seems sensible to me.
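For reference, this is the call style in question, as I read the Opnieuw README (the exception tuple is just an example):

    from opnieuw import retry

    @retry(
        retry_on_exceptions=(ConnectionError, TimeoutError),
        max_calls_total=4,
        retry_window_after_first_call_in_seconds=60,
    )
    def fetch_page():
        ...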
Something I picked up from another developer is to use timedelta for this kind of parameter. It avoids hoping that you picked the right granularity for everyone.
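A minimal sketch of the timedelta approach (hypothetical helper, not any library's actual API):

    import time
    from datetime import timedelta

    def wait_before_retry(delay: timedelta = timedelta(seconds=60)) -> None:
        # timedelta carries its own unit, so a bare 60 can't be misread
        # as milliseconds (or minutes) by the caller.
        time.sleep(delay.total_seconds())

    wait_before_retry(timedelta(minutes=2))  # self-describing at the call site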
While the idea of '60s' is interesting, absent that I always err on the side of naming the time unit explicitly when the argument is just a number. Not doing so has bitten me too many times.
60s. I was wondering how we could add physical units to Python. Hijack number literals and attributes, like 1.s or pi.radians? Or maybe the way Go does it: 1 * time.Second?
Which uses the interesting approach of setting all "units" to random floats:
> A complete set of independent base units (meters, kilograms, seconds, coulombs, kelvins) are defined as randomly-chosen positive floating-point numbers. All other units and constants are defined in terms of those. In a dimensionally-correct calculation, the units all cancel out, so the final answer is deterministic, not random. In a dimensionally-incorrect calculation, there will be random factors causing a randomly-varying final answer.
Which presumably is done to avoid overhead from carrying around units with each number, but forces you to run calculations multiple times (with different random seeds) to verify the result is correct.
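That description matches the numericalunits package; a quick sketch of the trick, assuming that library:

    import numericalunits as nu

    nu.reset_units()                   # base units become fresh random floats
    distance = 100 * nu.km             # quantities are plain floats in those units
    duration = 2 * nu.hour
    speed = distance / duration
    print(speed / (nu.km / nu.hour))   # units cancel: deterministic 50.0
    # Something dimensionally wrong, e.g. distance + duration, produces a
    # value that changes from run to run, exposing the bug.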
If you are not familiar with how retrying is used and why exponential backoff, jitter, etc. are useful, here is a shameless plug for a blog post I wrote recently: https://blog.najaryan.net/posts/autonomous-robustness/ (the first half is pretty much about all of these things).
The post clearly states the problems they were trying to solve, none of which are taken into account here (consistency, ease of parameterization, ...). It seems like people are reading too many "X in Y lines of code" posts :D
Seems to me that the authors wrote their own library instead of trying to improve the ones that didn't meet their needs. Well, they never mentioned that they tried...
Anyway, some of their choices, like that long parameter name, seem odd to me. I can understand the unit problem, though. Go solves that so nicely!
[1] https://github.com/jd/tenacity