This article [0] suggests that false-negative results would be far less problematic if we managed to screen at the population level.
A false-negative rate of up to 15% could still keep the effective reproduction number under 1.0 (the critical threshold for eliminating an epidemic).
From the article:
> "False negative rates up to 15% could be tolerated if 80% comply with testing, and false positives can be almost arbitrarily high when a high fraction of the population is already effectively quarantined"
Shouldn't we optimize for tolerating more false positives? After all, sending someone into quarantine for two weeks when they aren't infected is probably less harmful than a false negative. So erring toward overly sensitive tests seems like the better trade-off to me.
No, we should optimize for a balance between the two.
For example, a batch of 150,000 rapid tests that China recently sent to Czechia reportedly had an 80% error rate. You don't want to be mass-quarantining people on the basis of such unreliable testing supplies.
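To illustrate the concern (the population, prevalence, and false-positive figures below are made up for the example, not from the comment): at low prevalence, even a modest false-positive rate flags far more healthy people than there are actual cases.

```python
# Rough illustration with assumed numbers: false positives dominate at low prevalence.
population = 10_000_000   # assumed population size
prevalence = 0.01         # assumed: 1% currently infected
fpr = 0.05                # assumed: 5% false-positive rate

infected = population * prevalence                     # 100,000 true cases
false_positives = population * (1 - prevalence) * fpr  # ~495,000 healthy people flagged
print(f"{false_positives:,.0f} healthy people quarantined vs {infected:,.0f} infected")
```

Even at a 5% false-positive rate, roughly five healthy people get quarantined for every true case; with an error rate anywhere near the Czech batch's, the false positives would swamp the true cases entirely.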
https://medium.com/@sten.linnarsson/to-stop-covid-19-test-ev...