
> I'm not a Kubernetes expert, or even a novice, so I have no opinions on necessary and unnecessary bits and bobs in the system. But I have to think that container orchestration is a new enough domain that it must have some stuff that seemed like a good idea but wasn't, some stuff that seemed like a good idea and was, and lacks some things that seem like a good idea after 10 years of working with containers.

I've come to learn that the bulk of the criticism directed at Kubernetes does not actually reflect problems with Kubernetes. Instead, it suggests that the critics themselves are the problem. I mean, they mindlessly decided to use Kubernetes for tasks and purposes that made no sense, proceeded to get frustrated by the way they misused it, and then blamed Kubernetes as the scapegoat.

Think about it for a second. Kubernetes is awesome in the following scenario:

- you have a mix of COTS bare-metal servers and/or vCPUs lying around and you want to use them as infrastructure to run your jobs and services,

- you want to simplify the job of deploying said jobs and services to your heterogeneous ad-hoc cluster, including support for rollbacks and blue-green deployments,

- you don't want developers to worry about details such as DNS, networking, and topologies,

- you want to automatically scale your services up and down anywhere in your ad-hoc cluster without having anyone click a button or worry too much if a box dies,

- you don't want to be tied to a specific app framework.

If you take the ad-hoc cluster of COTS hardware out of the mix, odds are Kubernetes is not what you want. It's fine if you still want to use it, but odds are there's a far better fit elsewhere.
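To make the rollout point above concrete, here's a minimal sketch of the kind of manifest it boils down to (the name "web", the image, and the /healthz path are all made-up placeholders):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web                      # hypothetical service name
    spec:
      replicas: 3
      strategy:
        type: RollingUpdate          # old pods keep serving until new ones are ready
        rollingUpdate:
          maxUnavailable: 0
          maxSurge: 1
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: example.com/web:1.2.3     # placeholder image
            ports:
            - containerPort: 8080
            readinessProbe:                  # gates traffic until new pods are healthy
              httpGet:
                path: /healthz               # placeholder health endpoint
                port: 8080

A rollback is then a one-liner: kubectl rollout undo deployment/web.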



> - you don't want developers to worry about details such as DNS and networking and topologies.

Did they need to know this before Kubernetes? I've been in the trade for over 20 years and the typical product developer never cared one bit about it anyway.

> - you don't want to be tied to a specific app framework.

Yes and no. K8s (and Docker images) does help you deploy different languages/frameworks more consistently, but the biggest factor against this is, in the end, still organizational rather than purely technical. (This is in an average product company with average developers, not a super-duper SV startup with world-class, top-notch talent where every dev is fluent in at least 4 different languages and stacks.)


> Did they need to know this before Kubernetes?

Yes? How do you plan to configure an instance of an internal service to call another service?

> I've been in the trade for over 20 years and the typical product developer never cared a bit about it anyway.

Do you work with web services? How do you plan to get a service to send requests to, say, a database?

This is a very basic and recurrent use case. I mean, one of the primary selling points of tools such as Docker Compose is how they handle networking. Things like Microsoft's Aspire were developed specifically to mitigate the pain points of this use case. How come you believe that this is not an issue?
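For reference, this is exactly the sort of thing Compose handles for you: service names double as DNS hostnames on a shared network. A minimal sketch (the image names and the DATABASE_URL variable are illustrative, not prescriptive):

    services:
      web:
        image: example.com/web:latest        # placeholder image
        environment:
          # "db" resolves via the DNS Compose provides on the default network
          DATABASE_URL: postgres://app:change-me@db:5432/app
        depends_on:
          - db
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: change-me       # required by the postgres image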


You just call some DNS name that's provided by sysadmins/ops. The devs don't need to know anything about it.


I used to be that sysadmin, writing the config to set all of that up. It was far more labor-intensive than today, where as a dev I can write a single manifest and have the cluster take care of everything for me, including things like configuring a load balancer with probes and managing TLS certificates.
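For illustration, a sketch of what that single manifest can look like, assuming the cluster runs an ingress controller plus the cert-manager add-on (the hostname, issuer, and names are placeholders):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt   # assumes cert-manager is installed
    spec:
      tls:
      - hosts:
        - web.example.com                             # placeholder hostname
        secretName: web-tls                           # cert-manager populates this Secret
      rules:
      - host: web.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web                             # an existing Service in the namespace
                port:
                  number: 80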


Nobody is denying that. But GP was saying that now, with k8s, developers don't need to know about the network. My rebuttal is that devs never had to do that. Now maybe even Ops people can ignore some of that part, because many more things are automated or work out of the box. But the inner complexity of SDNs inside k8s is, in my opinion, higher than managing the typical star topology + L4 routing + L7 proxies you had to manage yourself back in the day.


> But GP was saying that now with k8s developers don't need to know about the network. My rebuttal is that devs never had to do that.

The only developers who never had to know about the network are those who do not work with networks.


I think a phone call analogy is apt here. Callers don’t have to understand the network. But they do have to understand that there is a network; they need to know to whom to address a call (i.e., what number to dial); and they need to know what to do when the call doesn’t go through.


Devs never had to do that because Moore's Law was still working, the internet was relatively small, and so most never had to run their software on more than one machine outside of some scientific use cases. It's a different story now.


Which is why you often had to wait weeks for any change.

Hell, in some places Ops are pushing k8s partially because it makes DNS and TLS something that can be provided easily, reliably, and in a minimal amount of time, so you (as a dev) don't have a DNS update request sitting for 5 weeks while Ops are fighting fires all the time.


> You just call some DNS that is provided by sysadmins/ops.

You are the ops. There are no sysadmins. Do you still live in the 90s? I mean, even Docker Compose supports specifying multiple networks in which to launch your local services. Have you ever worked with web services at all?


> Kubernetes is awesome in the following scenario: [...]

Ironically, that looks a lot like when k8s is managed by a dedicated infra team / cloud provider.

Whereas in most smaller shops that erroneously adopted k8s, cluster management fell to the same dev team that was also trying to ship a product.

Which I guess is reasonable: if you have a powerful enough generic container orchestration system, it's going to have enough configuration complexity to need specialists.

(Hence the first wave of not-k8s tools were simplified k8s-on-rails for common use cases.)


> Ironically, that looks a lot like when k8s is managed by a dedicated infra team / cloud provider.

There are multiple concerns at play:

- how to stitch this cluster together so that it can serve our purposes,

- how to deploy my app in this cluster so that it works and meets the definition of done.

There is some overlap between the two, but some concerns remain firmly in the platform team's wheelhouse. For example: should different development teams have access to another team's resources? Should some services only be allowed to deploy to specific nodes? If a node fails, should a team drop feature work to provision everything all over again? If the answer to any of these questions is "no", it is a platform concern.
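To make the "specific nodes" question concrete, the usual platform-side answer is taints plus node selectors. A sketch, with all node names, labels, and images invented for illustration:

    # The platform team taints and labels the nodes, e.g.:
    #   kubectl taint nodes gpu-node-1 team=ml:NoSchedule
    #   kubectl label nodes gpu-node-1 hardware=gpu
    apiVersion: v1
    kind: Pod
    metadata:
      name: trainer                  # hypothetical workload
    spec:
      nodeSelector:
        hardware: gpu                # only schedule onto nodes labeled hardware=gpu
      tolerations:
      - key: team
        operator: Equal
        value: ml
        effect: NoSchedule           # permitted onto the tainted nodes
      containers:
      - name: trainer
        image: example.com/trainer:latest    # placeholder image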


I suspect it's a learning thing.

Which is a shame, really, because if you want something simple, learning Service, Ingress, and Deployment is really not that hard and pays off for years.
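As a sketch of how small that trio is in the simple case, here is the Service piece, assuming a Deployment whose pods carry an app: web label like the one above (names and ports are placeholders):

    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector:
        app: web           # routes to any pods carrying this label
      ports:
      - port: 80           # the port other services call
        targetPort: 8080   # the port the container listens on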

Plenty of PaaS providers, like OVH, will run your cluster for cheap so you don't have to maintain it yourself.

It really is an imaginary issue with terrible solutions.


> they mindlessly decided to use Kubernetes for tasks and purposes that made no sense...

Or... they were instructed that they had to use it, regardless of its appropriateness, because the company was 'standardizing' all its infrastructure on k8s.



