
I feel like I’m already living in the Kubernetes 2.0 world because I manage my clusters and their applications with Terraform.

- I get HCL, types, resource dependencies, data structure manipulation for free

- I use a single `tf apply` to create the cluster, its underlying compute nodes, and related cloud resources like S3 buckets, as well as everything running on the cluster (first sketch after this list)

- We use Terraform modules for re-use and de-duplication, including integration with non-K8s infrastructure. For example, we have a module that sets up a Cloudflare ZeroTrust tunnel to a K8s service, so with 5 lines of code I can get a unique public HTTPS endpoint protected by SSO for whatever is running in K8s (second sketch below). The module creates a Deployment running cloudflared and configures the tunnel in the Cloudflare API.

- Many infrastructure providers ship signed, well-documented Terraform modules, and Terraform does reasonable dependency management for the modules & providers themselves with lockfiles.

- I can compose Helm charts just fine via the Helm Terraform provider if necessary. Many times I see Helm charts that are just “create namespace, create foo-operator deployment, create custom resource from chart values” (like Datadog). For these I opt to just install the operator and manage the custom resource from Terraform directly, or via a thin Helm pass-through chart that just echoes whatever HCL/YAML I put in from Terraform values.
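
To make the single-apply bullet concrete, here's a rough sketch of the shape of such a root module, assuming EKS and the community terraform-aws-modules/eks module; the names, versions, and workload are illustrative, not our actual config:

    module "eks" {
      source  = "terraform-aws-modules/eks/aws"
      version = "~> 20.0"

      cluster_name    = "app-az-1"
      cluster_version = "1.30"
      vpc_id          = var.vpc_id
      subnet_ids      = var.subnet_ids
    }

    # Related cloud resources live in the same apply...
    resource "aws_s3_bucket" "app_data" {
      bucket = "app-az-1-data"
    }

    # ...and so do the workloads, via the kubernetes provider
    provider "kubernetes" {
      host                   = module.eks.cluster_endpoint
      cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
      exec {
        api_version = "client.authentication.k8s.io/v1beta1"
        command     = "aws"
        args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
      }
    }

    resource "kubernetes_deployment_v1" "app" {
      metadata { name = "app" }
      spec {
        replicas = 2
        selector { match_labels = { app = "app" } }
        template {
          metadata { labels = { app = "app" } }
          spec {
            container {
              name  = "app"
              image = "registry.example.com/app:1.2.3"
            }
          }
        }
      }
    }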
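
And the 5-line call site for the tunnel module. The module itself is internal to us, so the path and inputs here are hypothetical, but this really is all a consumer writes:

    module "grafana_tunnel" {
      source    = "./modules/cloudflare-tunnel" # wraps a cloudflared Deployment + Cloudflare API config
      hostname  = "grafana.example.com"
      service   = "http://grafana.monitoring.svc.cluster.local:3000"
      namespace = "monitoring"
    }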

Terraform’s main weakness is orchestrating the apply process itself, the same problem Kubernetes has with raw YAML or anything else. We use Spacelift for this.



In a way it's redundant to hold the same state twice: once in Kubernetes itself and once in the Terraform state. This can lead to problems when resources are modified by mutating webhooks or similar; then you need to mark the affected properties as "computed fields" or something like that. So I am not a fan of managing applications through TF. Managing clusters might be fine, though.
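
For what it's worth, with the hashicorp/kubernetes provider this means the computed_fields argument on kubernetes_manifest; a minimal sketch (the field paths are illustrative):

    resource "kubernetes_manifest" "app" {
      manifest = yamldecode(file("${path.module}/app.yaml"))

      # Exclude fields a mutating webhook rewrites, otherwise
      # every plan shows phantom drift on them.
      computed_fields = [
        "metadata.labels",
        "metadata.annotations",
        "spec.template.spec.containers",
      ]
    }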


That's fair, and I think it's why more people don't go the Terraform route. Our setup is pretty simple, which helps. We treat entire clusters more like namespaces: a cluster runs a single main application and its support services, as a form of "application-level availability zone". We still get bin packing within each cluster, just not the MAXIMUM BIN PACKING we'd get running all the applications in one big cluster, and there's some extra EKS cost from running more clusters.

We do sometimes hit the mutating-webhook case. For example, when running third-party JVM stuff we have the Datadog operator inject JMX agents into applications via a mutating webhook. For those applications we use the Helm provider pass-through I mentioned, so what Terraform stores in its state, diffs, and manages on change is the input manifests passed through Helm. For those resources it never inspects the Kubernetes API objects directly; it just triggers a new Helm release if the inputs change on the Terraform side, or deletes the release if they're removed.
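
Concretely, the pass-through looks something like this. The chart is a thin internal one that just renders whatever manifests it's handed, so names and the workload here are illustrative:

    resource "helm_release" "worker" {
      name      = "worker"
      namespace = "apps"
      chart     = "${path.module}/charts/passthrough" # templates out .Values.manifests verbatim

      # Terraform diffs only these inputs, never the live objects,
      # so webhook-injected agents don't show up as drift.
      values = [yamlencode({
        manifests = [{
          apiVersion = "apps/v1"
          kind       = "Deployment"
          metadata   = { name = "worker", namespace = "apps" }
          spec = {
            replicas = 1
            selector = { matchLabels = { app = "worker" } }
            template = {
              metadata = { labels = { app = "worker" } }
              spec = {
                containers = [{
                  name  = "worker"
                  image = "registry.example.com/worker:2.0"
                }]
              }
            }
          }
        }]
      })]
    }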

This is not a beautiful solution, but it works well in practice with minimal fuss when we hit those Kubernetes provider annoyances.


Exactly our experience, hence we moved to plain K8s objects six years ago and never looked back.



