
This article [0] suggests that false negative results will not be as problematic if we manage to start screening on a population level.

A false negative rate of up to 15% could still keep the effective reproduction number under 1.0 (the threshold below which an epidemic dies out).

From the article:

> "False negative rates up to 15% could be tolerated if 80% comply with testing, and false positives can be almost arbitrarily high when a high fraction of the population is already effectively quarantined"

https://medium.com/@sten.linnarsson/to-stop-covid-19-test-ev...
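Back-of-the-envelope, the arithmetic works out roughly like this (a minimal sketch assuming a very simple model and an illustrative R0 of 2.4, not the article's exact calculation): only infections in people who skip testing, or who get a false negative, keep spreading.

    # Rough sketch, my own simplification rather than the article's model:
    # assume an infected person who tests positive is isolated and stops
    # transmitting, so only untested people and false negatives still spread.

    R0 = 2.4          # assumed basic reproduction number (illustrative)
    compliance = 0.8  # fraction of the population that gets tested
    fnr = 0.15        # false negative rate

    # Fraction of infected people who are NOT isolated:
    # (1 - compliance) never tested, plus compliance * fnr missed by the test.
    unisolated = (1 - compliance) + compliance * fnr

    R_eff = R0 * unisolated
    print(f"R_eff = {R_eff:.2f}")  # 0.77, i.e. below the critical value of 1.0

With those numbers the epidemic shrinks even though 15% of infected, tested people are missed, which is the article's point.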



Shouldn't we optimize for accepting a higher number of false positives? Sending someone into quarantine for two weeks when they aren't infected is probably less harmful than a false negative. So erring on the side of higher sensitivity seems like the better trade-off here.
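One caveat on tolerating false positives: when prevalence among the people being tested is low, even a fairly specific test produces mostly false positives. A quick sketch with made-up numbers:

    # Illustrative numbers only (not from this thread): low prevalence plus
    # imperfect specificity means most positive results are false positives.

    prevalence = 0.01     # 1% of those tested are actually infected
    sensitivity = 0.95    # probability an infected person tests positive
    specificity = 0.95    # probability a healthy person tests negative

    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)

    ppv = true_pos / (true_pos + false_pos)
    print(f"Share of positives who are actually infected: {ppv:.0%}")  # ~16%

So at population scale the absolute number of people quarantined unnecessarily can get large, which matters for the point about medics below.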


If you're quarantining a large number of medics, that's potentially serious, and possibly more dangerous overall if your numbers are off.


You could administer two tests in those cases; I believe that's already what's done.


No, we should optimize for a balance between the two.

For example, a batch of 150,000 rapid tests that China recently sent to Czechia had an error rate of around 80%. You don't want to be mass-quarantining people on the basis of such unreliable testing supplies.


There's nothing inherently wrong with mass quarantines.



