I skimmed through it but still didn't see any part describing what the local dev env is like. Say you work on a service that does something for ad serving: how do you write code for that and test it?
I understand unit tests and e2e tests are used, but what I'm referring to is just simply opening a web browser, navigating to localhost:3000/foo/bar/something, and seeing if it's OK. I find this a much faster feedback loop while writing code, in addition to the tests. Can anyone from Google share how that works?
I don't work there but when I did it was no problem to just `blaze run //the:thing -- --port=3000`. If a service needs to make RPCs (which is generally the case) that's not a problem because developer workstations are able to do so via ALTS[1]. Developers can only assert their own authority or the authority of testing entities, so such processes do not have access to production user data.
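The browser-feedback loop the question asks about looks roughly like this; the `//the:thing` target and `--port` flag are the placeholders from above, since real target paths are internal:

```shell
# Build and run the service locally on your workstation (placeholder
# target label; the binary gets its RPC credentials via ALTS).
blaze run //the:thing -- --port=3000 &

# Then poke it from a browser, or from the terminal, between edits:
curl -s http://localhost:3000/foo/bar/something
```

Rebuilds are incremental, so the edit/run/curl cycle is usually fast enough for this kind of manual checking alongside the tests.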
Another possibility is to run your program in production but under your own personal credentials. Every engineer has an unlimited budget to do so, albeit at low priority.
Aside from the above, other practices vary, but a team I was on had several dev environments in Borg (the cluster management system). One was just "dev", where anyone was welcome to release any service with a new build at any time. Another was "test", which also had basically no rules but existed for other teams to play with when integrating with our services. Next was "experimental", where developers could release only an official release-branch binary, because it served a tiny amount of production traffic; then "canary", which served a large amount of production traffic and required a 2-person signoff to release; and finally full production.
So basically developers had four different environments to just play with: their own workstations under their own authority; prod under their own authority; and dev and test in prod under team testing credentials.
Basically every service has already been packed so full that an instance can barely fit on a server; you won't be able to run that monstrosity locally. Which is why they started doing "microservices", out of necessity, since when each binary gets over a few gigabytes you don't have many other options. Their microservices still take gigabytes, but it let them continue adding more code. But each of those depends on hundreds of other microservices. And those microservices are of course properly secured, so you won't be able to talk to production servers from your development machine.
Are there a lot of binaries "over a few gigabytes"? On x86_64 you can only have 32-bit offsets for relative jumps. How would you build and link something that large?
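For scale: a rel32 jump or call on x86_64 takes a signed 32-bit displacement, which is where the roughly 2 GiB ceiling comes from. A quick check, plus the usual compiler escape hatches:

```shell
# Signed 32-bit displacement: code can only reach +/- 2 GiB from the
# jump site with ordinary rel32 jumps/calls.
echo $(( 1 << 31 ))   # prints 2147483648 (2 GiB)

# Linking something bigger needs a larger code model, e.g. with GCC:
#   gcc -mcmodel=medium ...  # keeps code small, moves large data out of range
#   gcc -mcmodel=large ...   # 64-bit absolute addressing throughout
```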
I received during my tenure several peer bonuses for unblocking the search or ads release by keeping the [redacted] under 2GiB. They are huge just because authors are lazy, not as an inevitable consequence of any technical choice that has been made. It was always easy for me (for some reason, easier than it was for the program's own maintainers) to use `blaze query` and a bit of critical thinking to break up some library that was always called `:util` into smaller pieces and thereby remove tens of megabytes from the linker output. People are just so lazy they aren't thinking about the consequences of their build targets.
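The splitting workflow described above can be sketched with standard Bazel query syntax (`blaze` is the internal equivalent); the target labels here are made up for illustration:

```shell
# Show one dependency chain from the big binary to the grab-bag :util
# target (hypothetical labels):
blaze query 'somepath(//server:main, //base:util)'

# List everything that depends on //base:util directly, i.e. the
# callers that could point at smaller, more specific targets instead:
blaze query 'rdeps(//..., //base:util, 1)'
```

Once the direct dependents are known, `:util` can be carved into finer targets and each caller re-pointed at only the piece it actually uses, which is what trims the megabytes out of the linker output.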
Most developers' main targets are much, much smaller.
When I worked there you couldn't even check out code locally; you had to ssh into an office workstation. At that point you might as well run your dev service on Borg using free quota.