I've maintained some pretty big libraries inside Google at one time or another in the last 16 years. I believe our system is bad for both sides (library maintainers and users). Here are some of my frustrations:
* the One Version Rule means that library authors can't change a library "upstream": your changes must work now or they can't get in. This sets the bar for landing changes very high, so development moves at a snail's pace. The version control and build system we have makes it difficult to work inside a branch, so collaboration on experimental/new things between engineers is difficult.
* users can create any test they wish that exercises their integration with your library. They can depend on things you never promised (Hyrum's law, addressed in the book; a toy example follows this list). Each of these tests becomes a promise you, the library maintainer, make to that user: we will never break that test, no matter how weird it is. This is another huge burden on library maintainers.
* the One Version Rule means that, as a user, I can't just pin my dependence on some software to version X. For example, if I depend on Bigtable Client, just give me version 1.4. I don't need the meager performance improvements in version 1.5, because I don't want to risk breaking my project. This means every roll-up release you make, every sync to HEAD you do, risks bringing in some bug in the bleeding-edge version of literally everything you depend on.
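To make the Hyrum's Law point concrete, here's a toy sketch (the client class and error text are entirely made up, not any real internal API) of the kind of test a user can write against behavior you never promised:

```python
# A user's test that passes today but pins down unspecified behavior:
# the exact wording of an error message.
class FakeBigtableClient:
    """Stand-in for some internal client library (hypothetical)."""

    def lookup(self, row_key: str) -> str:
        # The wording of this message was never part of the documented contract.
        raise KeyError(f"row not found: {row_key}")


def test_lookup_missing_row_message():
    client = FakeBigtableClient()
    try:
        client.lookup("missing-row")
    except KeyError as e:
        # Asserting on unspecified error text: rewording the message upstream
        # now "breaks" this user, and under our rules that's my problem.
        assert "row not found: missing-row" in str(e)


test_lookup_missing_row_message()
```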
I worked on one of those infrastructure services (like Bigtable). It was a huge boon to my productivity that the one version rule existed.
If we had to maintain the past six months of releases, we would never get anything done. Since breakages in our client library were our problem, a report of a months-old bug means we can just tell you to recompile rather than figure out how to backport a fix for you.
Hyrum's Law considerations got really weird, especially when people took them too far. I think this is based on the kernel idea of "don't break userspace," but its practical implications are nuts. Hyrum's law has killed infrastructure projects that could have made things a lot better, and has resulted in crazy test-only behavior being the norm.
One person on an adjacent team loved taking behaviors as promises (and he also had a reputation as one of the most prolific coders at G). We had to clean up his messes every time he relied on unspecified behavior. I pushed back on his nonsense a few times, particularly when he used an explicitly banned combination of configuration options that we forgot to check-fail on, but always lost. 1.5 SWEs on our team were full-time cleaning up after him.
You are a rare engineer who actually pushes back on nonsense. For that, I say "thank you". I feel that there are too few people at Google who will say "stop, that is going to create a problem in the future". Some people are very enthusiastic and well meaning and "productive" (high change count), but create a burden for their colleagues in their fervor to change things.
To the point in your last sentence, a few years ago I worked at a software company where there was a prolific code writer who similarly tied up ~2 SWEs cleaning up his messes. He was a Perl programmer who was set loose on C# code. The broken ways the company tracked productivity meant the prolific writer rated as doing great, while the three SWEs who had to spend 2/3rds of their time fixing the messes he left behind were rated as unproductive.
Agree. I've maintained some widely used libraries at Google for over a decade (the larger one getting invoked at over 5e9 qps at peak) and I'm very grateful for the build-everything-from-head practice (and short build horizons). Yeah, it introduces a few small problems, but I think they are ~easy to deal with —e.g., protect new functionality with flags to control canary/rollout when needed (sketched below); file bugs against (and, eventually, deliberately break) customers with unreasonable tests.
Just to pick on the last example, having customers with unreasonable tests _is_ a real problem. But letting clients pin down specific/older builds of their dependencies (your code) to deal with this doesn't solve the problem, just pushes it down the road, imo making things worse.
.. and these problems are absolutely worth having in return for the simplicity of ~only having to support head (and, in some cases, like client LB policies, just a relatively short build horizon).
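To sketch what I mean by protecting new functionality with a flag (a minimal stand-in, not our actual flag/experiment framework; the names are made up):

```python
# Flag-guarded rollout: the old behavior stays the default at head, and the
# new path is enabled gradually (canary jobs, percentage rollout, etc.).
import os


def new_ranking_enabled() -> bool:
    # In reality this would come from a real flag system with per-job or
    # per-request canary percentages; an env var stands in for it here.
    return os.environ.get("ENABLE_NEW_RANKING", "0") == "1"


def rank(items):
    if new_ranking_enabled():
        return sorted(items, key=len)  # new behavior, rolled out behind the flag
    return sorted(items)               # old behavior remains the default


print(rank(["bbb", "a", "cc"]))
```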
Any ideas on mitigating or kneecapping "prolific coders"?
My observation has been the torrent-of-poo coder also demands quick (pro forma) code reviews while nitpicking and concern-trolling other people's PRs. Not sure what to call this? Gatekeeping, control freak, KPI hacking, passive-aggressive, or maybe just being an asshole.
It's a culture thing IMO. I used to be an electrical engineer, which has a strong culture of "everything I don't specify is a black box that I am free to change" (since the core products of EEs literally are black boxes with some wires sticking out the bottom). I was shocked that programmers don't think the same way. Also, the super-coders having bad attitudes on PRs matched my experience, and is incredibly toxic.
I think part of it is that programmers don't like to write a lot of documentation, but infrastructure services have very long lives and naturally build up a lot of documentation. If you write down a lot of promises, you also have documentation about what you don't promise. Unfortunately, the culture of G was that docs don't get updated and you promise all observable behavior.
This is always the result of someone looking at productivity as zero-sum; some management practices encourage zero-sum behavior. If management doesn't actively incentivize non-zero-sum behavior, then zero-sum behavior rules. Consider the following list of highly effective zero-sum behaviors.
Effective Zero Sum behaviors that management should avoid:
0) I can ship faster than all of my teammates if I nitpick their CRs and get mine quickly approved by our easier reviewers.
1) I can make sure my CR reviews are always faster if I am the nitpicky a-hole on every other CR
2) I can ship faster if I don't add as many tests as my peers. I'll promise to add them in a later CR and forget about it.
3) I can ship faster if I write software that is difficult for others to work on.
4) I can seem like a smarter engineer than everyone else by using jargon and writing difficult to understand code.
5) I can convince management to look at my X,Y,Z productivity metrics, while the team remains ignorant. (Code Review iterations is a dangerous example of this)
The big thing is to recognize signs of zero-sum behavior, and investigate whether it's being done intentionally or unintentionally. Even if it's intentional, it may simply be a bad reaction to the surrounding incentive model of the organization.
I think about Eli Goldratt's The Goal all the time.
Paraphrasing: A team (system) is only as fast as its slowest member (task); to speed up the team, focus on making the bottleneck faster.
Said another way, Goldratt also explains how a process that outpaces the others works to slow down the system overall. Local optimization leads to global suboptimization.
Pretty basic queueing theory stuff. This aspect is a.k.a. the Theory of Constraints.
Anyhoo. I don't buy most "10x programmer" tales. I just wonder who's on the receiving end of such awesomeness. And at what cost.
Holy shit I just got flashbacks from the dude on my team who used to do that asymmetric code review warfare. Hardly saw his PRs cause he had one person on the team insta-stamp them. At the same time he’d happy-glad you to death with blocking change requests. Said requests would trickle in over the course of a day or two because he didn’t just sit down and do a review, he’d graze on it.
Thank you for saying this. I bought the physical copy of the book a few years ago, before I was a heavy user of Google OSS projects. I cracked it open after I was shocked by how bad many of the practices seemed to be... And it's all working as intended? The bit on docs is confusing. Google docs are some of the most poorly organized, least easily referenced docs I've ever seen. [0]
Skip forward a few years and my projects are full of workarounds for years old Google bugs. It feels like fixing basic functionality just isn't a priority. Most of them are literally labeled "NOT A PRIORITY".
[0]: You can read Scrapy's docs, and the docs for most major Python libraries, from beginning to end and just "know" how to use it (https://docs.scrapy.org/_/downloads/en/latest/pdf/). With Google docs you have to piece together fragments of information connected by a complex web of "[barely] Related Topics".
Google overcompensates for a number of practices that don't scale by throwing ungodly computational power at them. I don't know how much CPU forge/TAP consumes these days, but I remember when it was at least 90K cores in a single cluster. It's insane to me that hundreds of thousands of giga-brains are pinned 100% 24/7 to dynamically check literally trillions of things because the combinatorial space was too hard to factor.
This is not to disparage the people who built those systems, but there is only so much concrete you can put in a rocket ship.
I'd agree with you that this would be bad for most companies I've worked at barring arguably one (Amazon, which is also gigantic) -- at any startup/medium sized company, these practices would not be right-sized to the work being done.
Even during my tenure at AWS inside Amazon, I have difficulty believing this would be useful; for example, at AWS, we'd separate services by tiers based on how close to the center they were inside the internal service graph. Running EC2/S3 or another tier 1 service? Yes, you're probably going to index on moving a bit slower to reduce operational risk than not. Running a new AWS service that is more of a "leaf node" than a center node? You can go ahead and move fairly quickly so long as you're obeying AWS best practices, which while somewhat hefty by startup standards are quite relaxed by other corporate standards.
What I wonder is whether this kind of heterogeneity would have been a better path for Google than what you describe. Or, is it the case that the sheer monolithic scale of search/ads at Google is such that it just wouldn't make sense, and that continuing to pile incremental resources into the monstrosity (and I mean this gently/positively) of search is what the company must do and so what the engineering culture must enable.
But, as you might be alluding to, perhaps the current approach doesn't even suit the needs of the company and is purely bad even for Google's specific problems -- in that case, is it simply there due to cultural cruft/legacy? I haven't worked at Google before, so it's hard for me to say something based on my experience with it.
I thought google did this versioning thing for libraries before, but it was stopped for reasonable reasons (g3 components).
Basically, if you could pin lib versions, everyone would be stuck on old versions for a long time, causing difficult debugging work for each user of the library. You'd then also have all sorts of diamond problems: what if you want the newest absl but an older bigtable client? (There's a toy illustration below.)
It's a difficult problem no matter which way you go.
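To make the diamond concrete, here's a toy model of conflicting constraints (the names and versions are invented; under a one-version monorepo the conflict simply can't arise, which is part of the trade-off being discussed):

```python
# Two things in the same build disagree about which version of a shared
# dependency they need -- the classic diamond problem.
required = {
    "my_binary":       {"absl": ">=2024.1", "bigtable_client": "==1.4"},
    "bigtable_client": {"absl": "==2023.6"},  # the old client pins an old absl
}


def find_conflicts(reqs):
    """Collect every constraint placed on each dependency and flag disagreements."""
    constraints = {}
    for consumer, deps in reqs.items():
        for dep, spec in deps.items():
            constraints.setdefault(dep, []).append((consumer, spec))
    return {dep: specs for dep, specs in constraints.items()
            if len({spec for _, spec in specs}) > 1}


print(find_conflicts(required))
# {'absl': [('my_binary', '>=2024.1'), ('bigtable_client', '==2023.6')]}
```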
This is inherently at odds with the trunk-based development model.
I get the feeling you need to pin to v1.4, but ideally, by being on trunk head at all times, you force everyone (especially the library owners, and yourself, by writing tests around your wrapper) to do things properly, such as having enough tests in place. Otherwise, you dig yourself a grave for when the time comes to migrate from v1.4 to v1.7, and it becomes grunt work that nobody wants to take on.
On the other hand, my users can pin versions, and we maintain a longer LTS window for those features. To this day, that LTS window has never been exercised because we end up having to build backwards compatibility into everything we do. The backwards compatibility promise also means our testing is extremely verbose.
Not sure how much we should discuss in a public forum, but while you may be technically correct, I don't think this is the right solution here.
Long-running shared Git branches are a useful tool for large scale changes and integrations. They're not ideal in small teams, but unavoidable at some level and useful if done well. Fig isn't doing that.
The book itself is a bit abstract and generalised (on purpose I suppose).
People who have worked within Google vs not will think differently on how they can apply it in practice in their own company/team. Many of the practices that work very well at Google don't work as well elsewhere, primarily due to bootstrapping problems. Google's development practices are built on top of so many layers of highly complex and large tech-infra capabilities that don't exist outside. The process/culture practices also have cyclic dependencies with those infra capabilities.
Interestingly, so many things that are quite easy everywhere else aren't so easy to do at Google. Software engineering within Google is free of many of the usual pains seen outside Google, but it has its own pains – some quite painful ones.
> Software engineering within Google is free of many of the usual pains seen outside Google, but it has its own pains – some quite painful ones.
One simple example was that I couldn't install dependencies from public registries such as npmjs (npm) or PyPI (pip). It took an approval process for an internal team to review and clone packages onto Google's internal registry.
On the other hand, things like deployment and monitoring were so trivial and magic.
It is a pain point, but it's relevant for legal liability and tracking security concerns; otherwise you quickly have a wild west where it's unclear what kinds of problems affect which teams, and how.
Yeah, I kind of understand what that comment is saying but also feel like it can be interpreted in so many different actual ways that it's practically useless.
Google is the only company I'm aware of where the engineers constantly publish popular books about their engineering practices.
Apple, Amazon / AWS, MSFT, etc. have all done impressive things in their space at various points, but seem to lack the mixture of personalities / culture / reputation needed for an "Engineering at Apple" to be the hit that SRE at Google [1] or this book may be.
One anecdote I enjoyed was how Steve Jobs said the guiding principle for Safari was that it had to load pages faster than any other existing browser, so the engineers made a benchmark of the top 100/1k sites part of the CI to enforce the principle: diffs that slowed down the browser could not be merged.
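I have no idea what their harness actually looked like, but the shape of such a CI gate is roughly this (the sites, tolerance, and fake timings below are all invented for illustration):

```python
# A toy performance gate: compare per-site load times against a baseline and
# refuse the change if anything got meaningfully slower.
REGRESSION_TOLERANCE = 1.02  # allow ~2% measurement noise


def check_no_regression(baseline_ms, candidate_ms):
    """Return False (block the merge) if any site regressed past the tolerance."""
    ok = True
    for site, old in baseline_ms.items():
        new = candidate_ms[site]
        if new > old * REGRESSION_TOLERANCE:
            print(f"REGRESSION on {site}: {old:.1f} ms -> {new:.1f} ms")
            ok = False
    return ok


# Fake numbers standing in for "load each of the top sites and time it".
baseline = {"example.com": 120.0, "example.org": 95.0}
candidate = {"example.com": 119.0, "example.org": 131.0}
print("merge allowed:", check_no_regression(baseline, candidate))
```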
They forgot Pillar 0 - not everyone's going to get along, let people choose teammates until they find ones that fit.
Once they do, pillars 1, 2 and 3 no longer need to exist.
If you need to tell people to be nice, you're doing it wrong. You're not their mother and these are not kids to be talked down to and told how to behave.
>let people choose teammates until they find ones that fit.
Good way to get an echo chamber with a single way of thinking and strong bias. But I agree that it is way faster without the friction of different points of view.
FYI, this is also available as an audiobook from "Upfront Books". The first few chapters are easy to listen to, and the book is broken down into shorter sections. When it gets into talking about syntax formatting rules, it's a bit un-listenable. Some chapters have a lot of advice in them, others, like the chapter on inclusivity, have less specific, actionable advice. (What do I mean? Well, you could start talking about user research or scientific UX studies to get feedback beyond groupthink at the company, but that chapter on inclusivity didn't mention it.) But besides the unevenness, it's a book that does add value, I think, to general conversations on software development. And it's a rare technical book that's also available as an audiobook, so that helped it stand out for me.
Code reviews are an interesting topic since there is so much variability on the part of the reviewer. One could spend hours reviewing a change only to respond with "Looks good to me" after finally concluding that there is no better way, do a cursory code style level review with the same response or spend the same amount of time as the original author completely re-writing the code.
I skimmed through it but still didn't see any part describing what the local dev env is like. Let's say that you work on a service that does something for ad serving; how do you write code for that and test it?
I understand unit tests and e2e tests are used, but what I'm referring to is simply opening a web browser, navigating to localhost:3000/foo/bar/something and seeing if it's OK. I find this a much faster feedback loop while writing code, in addition to the tests. Can anyone from Google share that?
I don't work there but when I did it was no problem to just `blaze run //the:thing -- --port=3000`. If a service needs to make RPCs (which is generally the case) that's not a problem because developer workstations are able to do so via ALTS[1]. Developers can only assert their own authority or the authority of testing entities, so such processes do not have access to production user data.
Another possibility is to install your program in production but under your own personal credentials. Every engineer has unlimited budget to do so, albeit with low priority.
Aside from the above, other practices vary but a team I was on had several dev environments in Borg (the cluster management system). One of which was just "dev" where anyone was welcome to release any service with a new build at any time. Another of which was "test" which also had basically no rules but existed for other teams to play with integrating with our services. The next of which was "experimental" where developers could release only an official release branch binary because it served a tiny amount of production traffic, "canary" which served a large amount of production traffic and required a 2-person signoff to release, and finally full production.
So basically developers had four different environments to just play with: their own workstations under their own authority; prod under their own authority; and dev/test in prod under team testing credentials.
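For the basic "open localhost:PORT in a browser" loop the question asks about, the locally run binary is just an ordinary server taking a port flag; here's a generic sketch (plain Python stdlib, nothing Google-specific, made-up handler behavior):

```python
# Minimal local dev server: run it, then poke http://localhost:3000/foo/bar
# in a browser for a quick feedback loop alongside the unit tests.
import argparse
from http.server import BaseHTTPRequestHandler, HTTPServer


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(f"ok: {self.path}\n".encode())


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--port", type=int, default=3000)
    args = parser.parse_args()
    HTTPServer(("localhost", args.port), Handler).serve_forever()
```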
Basically every service has already been packed so full that an instance can barely fit on a server; you won't be able to run that monstrosity locally. Which is why they started doing "microservices", out of necessity: when each binary gets over a few gigabytes you don't have many other options. Their microservices still take gigabytes, but it let them continue adding more code. But each of those depends on hundreds of other microservices. And those microservices are of course properly secured, so you won't be able to talk to production servers from your development machine.
Are there a lot of binaries "over a few gigabytes"? On x86_64 you can only have 32-bit offsets for relative jumps. How would you build and link something that large?
I received during my tenure several peer bonuses for unblocking the search or ads release by keeping the [redacted] under 2GiB. They are huge just because authors are lazy, not as an inevitable consequence of any technical choice that has been made. It was always easy for me (for some reason, easier than it was for the program's own maintainers) to use `blaze query` and a bit of critical thinking to break up some library that was always called `:util` into smaller pieces and thereby remove tens of megabytes from the linker output. People are just so lazy they aren't thinking about the consequences of their build targets.
Most developers' main targets are much, much smaller.
When I worked there you couldn't even check out code locally - you had to ssh into an office workstation. At that point just run your dev service on Borg using free quota.
Don't try to emulate google. Seriously. It's how you're going to kill your company if you do.
Google is not an efficient or fast company. You need to be efficient and fast to deal with the momentum google has.
If you interview people from FAANG and you see they are going to want to recreate their company within yours, don't hire them. They're going to ruin team dynamics. And seriously, make sure these people aren't assholes. With some companies in particular you need to really be an asshole to survive. Weeding out assholes is far more important for most smaller companies than technical expertise. Heck for most people you probably don't need a separate technical interview.
Just, honestly whatever google does do the opposite. From hiring to project management to planning to software development.
> They're going to ruin team dynamics. And seriously, make sure these people aren't assholes.
I've worked with a number of ex-FAANGers, and interviewed at a couple places that were predominantly run by them. Your comment brought up some unsavoury memories.
When I research places to work now, if any of the main people are ex-FAANG I politely decline the opportunity - it just saves everyone's time.
I'm glad I'm not the only one with this view. I've even refrained from making such statements in the past on HN.
While I wouldn't go to your extent just yet, I'm on a sample size of 2 large projects. Projects where ex-Googlers were in technical/team lead positions, and those two were the flakiest, most bikesheddy, and most glacially moving of all the projects I've worked on.
I'm sure they're familiar with good, sturdy systems from Google and wanted to replicate them within the organizations they got hired into, but without a high upfront investment to build Google-like infrastructure (presumably with Google-internal-like tooling), the projects turned into a mess of multiple very fragile, scattered components. Both in terms of actual project code and infrastructure systems.
I've interviewed with an ex-FAANG employee and it was an intense / chaotic interview. A wholly unpleasant experience that felt a little like what happens to people when they get involved in a cult.
Lots of personal questions about attitude, what you do in your spare time, how you feel about things that bordered on the illegal.
I was personally intimidated by the process and in the end they never even bothered to contact me back.
I think if I see this again I will pass but unfortunately I was never told this chap was going to interview me until I was in the office shaking his bloody hand. Maybe they wanted to keep it secret for this exact reason.
I'm not sure what FANG they were from, but Google doesn't have a behavioral interview so they didn't learn that there. Every non-FANG I've interviewed at has had a behavioral interview.
Huh? That's absolutely false. FYI this dude wasn't just ex-Google, he was ex-Facebook and ex-Google (worked at both). He was a Product Manager and then I think some VP of something. He was like a VP at the parent company of the company I was interviewing for.
FAANG stand for: "Facebook (now Meta), Amazon, Apple, Netflix, and Google (now Alphabet)".
Looking at the questions I suspect these came from Facebook but were altered. I even told the next interviewer (the one I was actually told would be interviewing me) that I found the questions strange, and he agreed and said that the guy was kind of "out there".
Huh? Maybe there wasn't an officially labeled "behavioral interview" on the schedule, but the vast majority of interviews I had with Google in the past included behavioral questions/discussions/etc.
FAANG is a completely arbitrary phrase. What about Microsoft? Why Netflix and not Hulu, HBO, Disney? What about Uber, Lyft, AirBnB, and so on? Twitter?
It's definitely an outdated term. Nowadays I personally use FAANG as an alias to "tech company in which employment is synonymous with status in the tech community"
Speaking of hiring, just yesterday I spoke with a Google recruiter but bowed out when he told me that the process takes months.
I can't think of any point in my career where I'd be willing to put up with a process that takes MONTHS. Either I'm actively looking for work and want something ASAP, or I'm not actively looking, in which case why would I put myself through that? My time is valuable, I won't spend months on an interview/hiring process. If anything, Google better spend that time telling me why I should work there, because I have plenty of alternative options.
So, if you copy their multi-month hiring process, I certainly won't be applying.
Agree that on behavioral front, there is not much new signal after a few hours of conversation. Some evidence for this is a well-known study [1]. Unsure if the study has been validated, but the idea has stuck with me.
Regarding cognitive decay: I like to think time is simply pruning the weak and lazy neurons. At least that's what my remaining grey matter is telling itself. Your comment made me laugh. Thanks.
I can't give a citation, but someone was talking about an assessment unit in the army that tried to identify candidates to fast-track to more senior command positions. What they found was that there was no reliable way to determine such candidates. Unpromising candidates went on to great things, and the so-called promising ones didn't. The assessor wryly noted that despite the failure, enthusiasm for assessment didn't abate.
One thing that struck me years (decades) ago was when I was working for a company as a programmer. I happened to see the CVs of the candidates. On the basis of less than 15 minutes, I came to the conclusion that they were all likely to be about as good as one another. There was perhaps one lemon in the group.
Some companies are very successful, and obviously employ bright people. There's a mindset of "manifest destiny", and of course a certain conceit.
I think there was a talk at Google about how companies rise and fall. The speaker said that Google will eventually fall. Nothing lasts forever. (Although I will say that a company like Microsoft has beaten a lot of history as to how long the edge can last.) One Google employee raised a hand and asked how it could be prevented. You could tell from their eyes and manner of speech that they thought the sheer brilliance of their minds would prevent such a thing from happening. I can't remember what the response was, but I think it was along the lines of "you can't". Time and tide happen to all men.
I thought about this, and I did come to one interesting idea: spinoffs. Google may do better for their shareholders if they explored the idea of spinoffs. How so? Well, we've seen the sheer volume of projects that have been started and abandoned by Google. Presumably the logic behind this is that they'll encounter, by chance, another project that will propel them to the next level.
The idea is good, but there's a snag: Google is too successful. This leads them to a lot of dilettantism. They try this, they try that, they try the other. Then they get bored, and go back to their ad revenues.
Let's take Google+. At one point, people speculated that it would usurp Facebook. There was even an animated gif where a bus with the Google logo above it smashes into a car with the Facebook logo above it. As we now know, Google+ disappeared and Facebook is still strong. Perhaps a better strategy would have been to spin off the Google+ division, retaining only a 20% stake. This has two effects: 1) it creates a smaller division with less diseconomies of scale, 2) the new division is pot-committed. Failure is not an option. Success is by no means guaranteed, of course, but success is more likely when there's a gun at your head.
I stayed at one company for 9 years and by 2008 at 34, I was the epitome of an “expert beginner”. I changed jobs four times between 2008 and 2016 and I was just your regular old enterprise “full stack developer”.
In 2016, I was the lead developer at a medium size non tech company and looking at where I wanted to be in four years after my youngest (step)son graduated and we could move anywhere the money took us.
I looked around at the landscape and saw that my salary was going to plateau with one more job hop locally unless I became a manager. I looked at what it would take to get into $BigTech as a software engineer, and I was neither interested in "grinding LeetCode" nor in working as a software engineer at a large company with a bunch of 20-somethings.
I kind of put the idea to the side and started working on the skills it would take to become a consultant (not a contractor doing staff augmentation who calls themselves a consultant).
Out of the blue, while I was a working as a de facto “cloud architect” at a startup in 2020, BigTech cloud provider recruiter sent me a message about a role in the consulting department. So there I was at 46 with my first job in $BigTech - making about what a 26 year old who was promoted to a mid level engineer was making. But I do get to work remotely.
I yada yada yada’d over a lot. I was first exposed to consultants when I was the newly hired lead at a company in 2016 brought in to design an on prem project. At the last minute, they decided to “move to the cloud”.
They brought in two sets of consultants. One set was supposed to know about AWS, but they ended up just treating AWS like an overpriced colo (and I didn't know any better at the time). The other set, in hindsight, were just following a cookie-cutter script while "helping" the business manage the integration of a bunch of recent acquisitions.
Once I started belatedly learning about all the services that AWS had to offer developers and for "DevOps", I knew I wanted to specialize in consulting in that area and set myself apart from the old school network folks who got one certification and called themselves "cloud architects".
I figured I could bring more to the table from a software engineering standpoint.
I changed jobs in 2018 where the then new CTO wanted to be “cloud native”. He was trying to build out an internal engineering department. He knew I had no hands on experience with AWS. But he liked my proposals.
Two years and a lot of projects later, I had the technical side down pat. It took me a couple of years at AWS to get good enough at the soft skills/customer interaction side.
My original plan was to get a job at one of the big consulting agencies to hone my craft. I got lucky to skip over that part.
From the perspective of someone who accepted a competing offer after passing the interview: Google's reputation for senior++ engineers isn't what it once was, and the friction of the hiring process wasn't worth it for average total compensation.
Tip: you should always be looking. Doesn't mean you should put up with spending several months in the process with Google, but keeping your skills sharp and offers flowing is a good way to navigate your career on your own terms.
Sure. I am always looking. But my time is incredibly valuable to me and I simply won’t put up with that kind of process. I have plenty of alternative options. I worked for a large (not google large but still large) multinational company in a previous job on the back of a single 45 minute video call. Prior to that I worked at another multinational on the back of them inviting me out for lunch and then an hour chat in their office at a later time. In both cases, I had offers 2 to 3 days later.
Why would I put up with a multi month process when many of my alternatives are so much less stress? It just feels like Google doesn’t value my time.
It can be reduced to ~3-4 weeks from first contact to offer if you are in a rush.
It's as easy as saying like "I just got an offer from $competitor".
The process may take months because candidates usually ask for time to prepare, and some latency once the offer is accepted (background check, etc). If you are actively looking for a job, you are most likely already prepared.
> It can be reduced to ~3-4 weeks from first contact to offer if you are in a rush.
That’s still terrible by industry standards. We lost many candidates because they did have competing offers and couldn’t sit on them for a month while the hiring committee waited until the next blood moon to gather.
I'm sure there are plenty of candidates giving up because they received a competing offer and wouldn't wait, but I think it's reasonable to expect a hiring process to take a month, even in small companies.
Edit: I also have anecdotal evidence of cases where an offer would be made within 2-3 business days after the interviews at both Google and Facebook.
Yes. It’s fucking terrible. That pace isn’t acceptable for things with significantly more at stake (buying a house, getting married, donating an organ). Google just abused its position as a desirable place to work while I was there to take time to hire candidates.
I think it all depends on the candidate's motivation to join a particular company. Candidates are more inclined to wait for offers from tier-1 companies (e.g. FAANG) but will have less patience for lower tier companies.
Google is way worse than the rest of FAANG though. The Facebook recruiters would explicitly warn you that if you wanted to wait for a Google offer to not even start with Facebook until you passed the interviews and were waiting on team matching.
A month is really slow. When I last interviewed every place gave me an offer within a week, and they would informally tell me I was getting one usually within 24 hours.
Even a weekly hiring committee will eventually prove to be too slow, because there will be competitors making hiring decisions within 2 business days or just after the interview.
Onsite to offer is just one part. The process starts earlier and ends later. Even a startup with 2 day onsite to offer can have a 3-4 week process. Recruiter chat, recruiter screen, tech screen, take home, onsite, references, meet and greets, offer, waiting for other offers, negotiating offers, acceptance, and then finally waiting for the 2 weeks and the start date.
Do you want to optimize to hire people who accept whatever offer comes first? Isn't it worth waiting a couple weeks to get the right job that you will spend the next few years of your life at?
I would gladly spend the time. Also, having some time between interview stages allows you to prepare well. It doesn't seem that unreasonable to me but moving to Google would be a big step up in my career. If you have alternatives that are similar enough, then it makes more sense to be picky about their hiring process.
How old are you, if you don't mind me asking? The process may span over months but obviously most of that is waiting, so if you are happy with your job (enough to stay a few more months) but would be happier with Google (enough to interview with them), I don't see the intrinsic problem.
It's almost definitely confirmation bias. Google/FAANG employ a lot of people, so as an absolute number there are probably a lot of assholes, and when you meet them you'll remember their employer. I doubt there's a significantly higher percentage of assholes at those companies compared to regular life / other large companies.
Slightly off topic, but happened this week. A Google employee commissioned my girlfriend on Reddit to draw two minions. Paid for the first one and then didn't for the second, it was literally $10 for someone that's making > 250k. It's also kinda funny that he paid with a PayPal account that's tied to his entire digital identity: github/linkedin/etc.
I'll definitely remember that the guy was from Google (ex-Amazon) and is/was an asshole.
I think there's some substance. I've personally experienced examples where teams have looked at Google or other FAANGs, cargo-culted in practices - or at least their understanding of the practices - and then things have not gone well. With that being said, it's not to say that there's nothing to be learned from FAANGs: you can learn plenty but you need to be choosy in the application of that knowledge.
The wider point is that I've observed a trend in industry - stronger at some companies than others - of applying solutions, trendy or otherwise, without a clear understanding and articulation of the problem being solved and the value of solving that problem to the business.
That being said, I don't think it's fair to generalise that one has to be an asshole to survive at one of these companies - which is certainly implied by GP's choice of phrasing. I now know quite a few people who've worked at FAANGs for varying periods of time, and none of them are assholes. There are no doubt assholes at FAANG but I've seen no evidence that they're any more concentrated than in the general population.
It doesn't mention any specifics, then it jumps into assuming that Google / FAANG have more assholes?
These are companies that operate at the sort of scale where you harvest huge amounts of data about people and then use it to maximize your profit regardless of the impact on the people themselves. Those are asshole tactics, and people who think "I'll get paid a lot to work on that!" are probably skewed towards the asshole end of the spectrum of all humans.
I don't think it's a huge leap to believe FAANGs overrepresent the number of assholes they employ.
I also don't think that's necessarily a bad thing. As much as many of the staff are assholes, the companies make things a shitload of people want. There's some good in that. You can be an asshole, make a pile of money, and still be doing good work.
(Of course, if you're reading this and you work for a FAANG I don't mean ~you~. Although, if you're reading HN... )
The SoCal adtech, social media, & gaming crews make fintech seem saintly in comparison--the difference between preying on people with enough assets to know better vs. preying on children, the elderly, & the mentally infirm.
The kids are getting their brains fried, and they don't even know it.
I never said they have more assholes. I said the culture of these big companies creates an unhealthy system that leads to people having to change how they interface with others.
Agree with the other commentator that this is an unconstructive message.
Replace Google in the message with Apple/Netflix/Microsoft and it will read exactly the same since it has no specifics except "don't do what x company is doing" with no particular reasoning behind it.
Gotta say this is absolutely not my experience. I've worked at a lot of companies and have never seen one with anywhere near the developer productivity as Google enjoys. They have the groundwork that enables the velocity, so you can build/test/deploy very very quickly. Other organizations believed they had velocity because they skipped unit tests, code review, production security, supply chain security, etc. Consequently their whole thing becomes a haunted graveyard that everyone is terrified to change.
Name names. I think we are doing ourselves an injustice by not naming names here. If Google, or Meta (Facebook), or whatever has terrible cultural practices as you describe, I think we as a community owe it to ourselves to call it out.
This also otherwise rings a little hollow without specific things to backup the assertion. I think being direct forces us to have the real conversation.
Is the first chapter explaining why all lines must be less than 80 characters in width and all parameters and conditional statements must be automatically formatted in the most compact way possible, readability be damned?
Having worked at Google for many years, 80 char line limit was easily the biggest frustration with coding. Because of that, I was forced to write less readable code, shorten variable and function names, omit some comments, fight with formatter and linter, spend extra time aligning things, etc.
For example, Python requires extra effort to break lines, often adding extra parentheses or doing other unnecessary tricks. Lambda doesn't fit on the rest of the line? Too bad, define an extra function, because the linter won't let you submit a lambda that spans more than one line (see the sketch below)...
Now, I am so happy with 120 char limit while still being able to fit two columns on one screen.
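To show the kind of reshuffling I mean (the variable and function names are just illustrative):

```python
# Values invented so the snippet stands on its own.
request_queue_delay_ms = 1.2
backend_processing_ms = 8.4
response_serialization_ms = 0.6

# Extra parentheses exist only so the expression can be split at 80 columns.
total_latency_ms = (
    request_queue_delay_ms
    + backend_processing_ms
    + response_serialization_ms
)

events = [{"age_seconds": 42, "acknowledged": False}]

# A lambda that no longer fits on one line gets promoted to a named function,
# even though it is used exactly once.
def _is_recent_and_unacked(event):
    return event["age_seconds"] < 300 and not event["acknowledged"]

recent_unacked = list(filter(_is_recent_and_unacked, events))
print(total_latency_ms, recent_unacked)
```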
FWIW, a counterpoint: I work in C++ (80) and Go (no line length limit) at Google and I barely even think about it. I just write code however feels natural and then run 'hg fix' before sending for review.
I would definitely not impose 80 if it were my choice but it really doesn't bother me when clang-format is set up as nicely as it is.
Probably Python should switch to the Go model, since auto-splitting python lines sounds tricky.
> Python requires extra effort to break lines, often adding extra parentheses or doing other unnecessary tricks.
There are much more compact languages than Python.
> Now, I am so happy with 120 char limit while still being able to fit two columns on one screen.
The standard width of a terminal is 80 characters; if you are writing 120-character-wide code then your lines will read very poorly on a terminal, or a window sized to a terminal.
Note too that to be readable text should not be too wide, which is why newspaper columns are narrower, and why websites normally try to have a fairly narrow text box. It turns out that 80 characters wide is a pretty good readability standard.
You are repeating an argument you read somewhere, not actually something you believe. No-one sizes their terminals to 80chars. Heck, I don't even remember the last time I came across one, maybe other than some BIOS over RS232 (i.e. NOT coding). For the 0% of you who code on a dot matrix printer, please enable line wrapping in your editor and let the rest of the world move on.
I am not saying lines should be arbitrary long, but a 100-120 _soft_ limit would really not hurt anyone and would help code readability a LOT.
p.s. even dot matrix printers support a 100 character mode.
EDIT: long lines only reduce readability for prose. Code is inherently easier to parse for the eye because it has a shape. Unless of course if you f--k up that shape with arbitrary line breaks.
> You are repeating an argument you read somewhere, not actually something you believe.
I think I am a better judge of my beliefs than you are.
> No-one sizes their terminals to 80chars.
Mine open at 24×80, although to be honest I normally tile them instead. And I much prefer code formatted to be 80 chars wide, with functions that fit within a page or two of text.
> And I much prefer code formatted to be 80 chars wide, with functions that fit within a page or two of text.
Wouldn't you agree that largely depends on the code in question? Of course I also "prefer" shorter lines in general, but that relationship is somewhat linear: 81 is not infinitely worse than 80.
If the function in question would be more readable with just 1-2 lines that happen to be 83 chars, wouldn't you opt for that over placing some arbitrary closing bracket on the next line? Whether code is more readable (for a human) should really not be decided by an arbitrary technical limit from 50 years ago. We have code reviews for that.
btw, as far as I know the linux kernel has a _recommended_ 100 char maximum now.
What is with these draconian rules of having any limit at all? There are tons of edge cases where allowing a 100k-character line of text would totally make sense to most developers.
Are the tools at Google and elsewhere unable to diff code if a line exceeds some arbitrary width?
The tools at Google are perfectly capable of diffing lines of arbitrary width.
Those in charge of the C++ style guide choose not to allow them without strong justification. In practice, you just convince your reviewer and move on. Most reviewers are reasonable, but edge cases where it is needed are rare.
I don’t especially like it, but in terms of annoying style issues, it is very far down the list, especially when the tooling handles it all for you via auto formatting and whatnot.
Sure. Unit test where you have hundreds of signature specimens which need to be evaluated against graphics processing code. Maintaining a separate list of binary files for each resource was a mess for us. Inline base64 string in each unit test method containing the exact specimen was way more convenient. These lines are long but cause no trouble for IDE responsiveness or our github tooling.
I think maintaining a directory of binary files and using it at runtime is a lot better in the long run. Either through a separate build step which embeds them into the test binary. Or via proper env setup where tests actually know how to find the data in the file system.
At the very least you can inspect your test data easily without having to extract it from base64-encoded strings.
The current approach may be working for your team, but I can totally see why a company which has a dedicated team working on build and testing infra prefers a slightly more complex approach. It requires a one-off investment in tooling which can be paid off very soon given enough people using it.
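A minimal sketch of the fixture-directory approach (the paths and helper name here are made up):

```python
# Binary specimens live as files under a test data directory, checked in
# next to the tests, instead of base64 blobs inside each test method.
import base64
import pathlib

FIXTURE_DIR = pathlib.Path(__file__).parent / "testdata" / "signatures"


def load_specimen(name: str) -> bytes:
    """Read one binary specimen from the checked-in test data directory."""
    return (FIXTURE_DIR / name).read_bytes()


# ...versus the inline style, where every test method carries its own blob:
SPECIMEN_0 = base64.b64decode("iVBORw0KGgoAAAANSUhEUg==")  # truncated payload, illustration only
```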
Why does it need to be in the same test file at all? That sounds like a test resource that should be in some kind of test material/resources folder where it can be read from.
Of course you can embed binary files as base64, but have you thought about whether you should?
I dunno - I strive for 80 characters and do Python development these days, and it isn't hard at all - with good editor support. Adding an extra parenthesis shouldn't hurt that much. I choose reasonable variable names, and almost never shorten them to get a line to fit in 80 characters.
We don't have an 80 character rule, but none of the other developers has come to me saying my code is hard to read.
The lambda problem is a language problem. If Python allowed multi-statement lambdas, you wouldn't have this frustration. When I used to do C++, most of my lambdas were multi-line - my coworkers found it easier to read.
Perfect person to ask this. What is behind the need to constantly abandon products/services in favor of new offerings that have a fraction of the functionality?
Because Google earns more money by making those engineers optimize ads rather than maintaining systems with a fraction of the ROI.
The upper leaders even talked about that. How Google was looking for "the next big thing", and every time it turned out to just be more ads, since nothing else they tried even came close to being as profitable.
Other than the reasons already posted, I find it much easier to read code with 80 char line limits vs longer. For me it's like the benefit of reading an article in columnar format - going down vertically just feels more comfortable.
When you take into account IDE stuff like file tree on the left, plugins on the right, or the split view when looking at a file diff, more than 80 char is too much even on a large monitor in my opinion.
If you run 2560x1440 at its native resolution you can fit (4) side by side code windows at 80 characters with a very readable font size. Having 120 character lines only lets you have 2 (3 will cut off the last one by a decent amount).
Being able to comfortably open 4 files side by side is like having access to a completely new world compared to 2. You can often fit the entire context of what you're working on in one visible space.
I use 80 characters for mainly this reason, and it's also no extra time spent when using languages like Python because Black will auto-format it for you; it comes down to running one command and letting the machine do all of the work.
Yeah, but line formatting usually includes whitespace indentation in the calculation of the line length, so I don't know whether you can translate findings from normal text paragraphs 1:1 like that. With linted code the actual relevant length might only be 20 characters; maybe we need some smarter linters.
Sentences feel different from lines of code. "One statement doing one thing per line of code" seems a reasonable idea but "one sentence per line of text" seems a bit absurd.
The nice thing about books about how Google does X (SRE, SWE, etc.) is you can adopt the good/relevant parts however you see fit and ignore the stuff that either only applies to Google scale or makes no sense.
I think we need some contextual line width limits.
When splitting a function/method definition over multiple lines, I like to align the parameters. Now if there is a short line limit this will force us to left-align the parameters instead. This has the consequence that there is less visual, spatial distinction between method definitions and their code, resulting in a big hodgepodge of characters.
If we have contextual limits, we can make method headers arbitrarily long while keeping the body within a tight limit. Or allow a longer line when defining a lambda in python as another comment laments.
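To illustrate the two styles with a made-up Python signature:

```python
# Parameters aligned under the opening parenthesis: the signature reads as a
# distinct header block, but it blows past a short line limit quickly.
def render_report(account_id, start_date, end_date,
                  include_drafts=False, locale="en_US"):
    ...


# The hanging-indent style that short limits tend to force: the signature now
# looks much more like the body that follows it.
def render_report_v2(
    account_id, start_date, end_date, include_drafts=False, locale="en_US"
):
    ...
```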
> When splitting a function/method definition over multiple lines, I like to align the parameters.
Please don't do this. It's horrible for us who use proportional fonts.
In fact I would go as far as arguing that "artificial" line breaks are always bad. They aren't really more readable, quite the opposite usually, and most importantly they break diff.
I like short lines, and it's a good ideal to strive for. 98.5% of my code fits in 80 columns, but if I need a longer line, don't artificially make it worse in pursuit of some arbitrary rule.
Sorry, I didn't mean to use that word to mean that. Was just talking about how it's easier to read parameters and long conditional statements that are split over separate lines.
In a personal project that makes a lot of sense (and in a personal project you have your choice), but once you are in a corporate or open source project of more than just a couple of people, code is read far more often than it is written, and by many more people than the original author.
In those circumstances, it makes sense to optimize for readability to everyone rather than the author’s personal preferences. One can easily argue that Google’s style isn’t optimal, but it’s harder to argue that a personal preferences free for all is optimal.
The fact is that the 80-character limit is arbitrary. There is no evidence that an 80-character line is more readable than an 83-character line.
The other fact is that few devs craft their code to fit 80 characters. Instead they write code and let formatters do their thing. But automated formatters do a horrible job: if I write a line that is 81 characters long, they will massacre it into a set of unreadable lines with no proper alignment.
So now we are at the mercy of automated formatters that try to enforce an arbitrary constraint that is irrelevant with today's code editing/reviewing technologies.
I work in a codebase where a previous dude constantly did this. I do not find those lines amusing.
Whenever I'm un-fucking his stuff one of the first things to do is expand his infinite lines into usually 5-50 lines because visualizing and reasoning about 50 lines of code crammed into one requires someone who is totally a way, way better programmer than me.
I don't need to git blame to know when the code was written by that guy.
Diffing an excessively long line is uncomfortable in most tools. A crash dump mentioning the 1000-character line leaves you wondering where in that line the error happened.
The actual limit should be the complexity of one line, something like the number of AST nodes in it. But our tools, which go back all the way to mechanical teletypes and punched cards, are firmly line-oriented. This allows them to be language-agnostic.
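As a rough sketch of what a "nodes per line" budget could look like (Python's ast module, purely illustrative):

```python
# Count how many AST nodes begin on each source line; a complexity limit
# could then flag dense lines instead of merely long ones.
import ast
from collections import Counter


def nodes_per_line(source: str) -> Counter:
    counts = Counter()
    for node in ast.walk(ast.parse(source)):
        lineno = getattr(node, "lineno", None)
        if lineno is not None:
            counts[lineno] += 1
    return counts


dense = "x = [f(a) for a in data if g(a)] if flag else sorted(data, key=h)[:10]\n"
sparse = "x = 1\ny = 2\n"
print(nodes_per_line(dense))   # one line, many nodes: a complexity budget would flag it
print(nodes_per_line(sparse))  # two lines, a handful of nodes each
```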
Indefinitely, not endlessly. You cannot predict the date when your project is going to be sunsetted: it may be in 6 months, or it may be well past your retirement.
I think GP was thinking more of https://killedbygoogle.com/, Google is somewhat infamous for killing off well-liked projects if they are not successful enough.
Well, for people who are ridiculing the absurdity, note that it's software engineering at Google, not software engineering in general. One's peculiarities are another's absurdities. Reserve judgement, and try to understand the rationale.