
Out of curiosity: how do you connect your databases to the external services that consume this data? In the places where I do similar work, databases are usually on the same private network as the instances that read from and write to them. If you put them somewhere on the internet, apart from security, doesn't it affect latency?


Their databases are hosted on AWS and GCP, so latency isn't much of an issue. They also offer AWS PrivateLink, and if it's configured, traffic won't go over the public internet.


No matter whether it's hosted on Azure, GCP, or AWS, latency is real. Cloud providers don't magically eliminate geography and physics, and a private network doesn't magically eliminate latency either. In general, any small latency hike can create performance bottlenecks for write operations in a strongly consistent DB like Postgres or MySQL, because each write goes through a round trip from your server to the remote PlanetScale server, which adds transaction overhead. Complex transactions with multiple statements amplify this, since each statement pays the round trip. You could reduce the latency by hosting your app near wherever PlanetScale hosts their DB cluster, but that's a dependency, or a compromise.

Edit: A few writes per second? Probably fine. Hundreds of writes per second? Those extra milliseconds become a real bottleneck.
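To make the round-trip cost concrete, here's a rough back-of-the-envelope model. The RTT figures are illustrative assumptions (typical same-AZ vs. cross-region ballparks), not measurements of any particular provider:

```python
# Rough model: in a synchronous transaction, BEGIN, each statement,
# and COMMIT each cost one network round trip.

def txn_latency_ms(statements: int, rtt_ms: float) -> float:
    """Estimated wall-clock time for one transaction."""
    return (statements + 2) * rtt_ms

# Assumed figures: ~0.5 ms same-AZ vs. ~30 ms cross-region round trip.
print(txn_latency_ms(5, 0.5))   # 3.5 ms per transaction
print(txn_latency_ms(5, 30.0))  # 210.0 ms per transaction
```

At hundreds of such transactions per second per connection, the cross-region case alone eats the entire time budget.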


You can simply place your database in the same AWS or GCP region and the same AZs.


Your database will get slower before the latency is an issue.


> Hundreds of writes per second? Those extra milliseconds become a real bottleneck.

Of course it's nicer if the database can handle it, but if you are doing hundreds of sequential non-pipelined writes per second, there is a good chance that there is something wrong with your application logic.
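One common fix for that is batching: send many rows in one statement so the round-trip cost is paid once per batch instead of once per row. A minimal sketch using sqlite3 as a stand-in (the table and data are made up; against Postgres you'd use a multi-row INSERT or COPY for the same effect):

```python
import sqlite3

# Batched insert: one executemany call sends all rows as a single unit
# of work, rather than one round trip per row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER, payload TEXT)")

rows = [(i, f"event-{i}") for i in range(500)]
conn.executemany("INSERT INTO events VALUES (?, ?)", rows)
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM events").fetchone()[0])  # 500
```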


Not universal; there are systems that need high-frequency, low-latency, strongly consistent writes.


Yes, but for the majority of those, these would be individual transactions per e.g. request, so the impact would be a fixed latency penalty rather than a multiplicative one.


PlanetScale runs in AWS/GCP, so not really “somewhere on the internet” if your workload is already there.


This is the thought I always come back to with the non-big-cloud services. It's pretty much always been mandatory at non-startups to have all databases hidden away from the wider internet.



