Hacker News

I set up Kubernetes on AWS, a staging and a production cluster. Staging had 1 master and 3 worker nodes; production had 3 masters and 5 worker nodes, with two compute classes (fixed and burst), so pods could be scheduled onto whatever capacity they needed. All of this ran on CoreOS and included independent etcd clusters, so in total we're talking 18 AWS EC2 instances. I put live users on it, with real traffic, and tinkered with it as I learned things from a production workflow.
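For anyone curious, the fixed/burst split described above is the kind of thing you'd express with node labels and a nodeSelector, which was the scheduling mechanism available in 2015. A minimal sketch, with label names and image that are my own illustration, not from the actual setup:

```yaml
# Label a worker node as burst capacity first (hypothetical label key/value):
#   kubectl label node <node-name> compute-class=burst
apiVersion: v1
kind: Pod
metadata:
  name: batch-worker
spec:
  nodeSelector:
    compute-class: burst   # only schedule onto nodes carrying this label
  containers:
  - name: worker
    image: example/batch-worker:latest
```

Pods without the selector can land anywhere; pods with it are pinned to the matching compute class.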

This was back in mid-2015, which meant figuring out things like how to get overlay networking working with AWS's routing, fiddling with AWS's IAM settings and VPC settings, and figuring out how to split the nodes across availability zones. This included integrating with AWS AutoScalingGroups to bring up new CoreOS/K8s nodes whenever something went down. I never got EBS working as a Kubernetes volume type (at the time, you had to add hackish tools into CoreOS to detect unformatted EBS blocks and format them when they got mounted; forget it), but fortunately we used AWS RDS for persistent stores. Persistent storage is something I look forward to trying on GKE.
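To give a flavor of the hack being described: on CoreOS you'd drop in a systemd oneshot unit (typically via cloud-config) that probes the attached EBS device and formats it only if no filesystem exists yet, before the mount unit runs. A sketch, where the device path, mount point, and unit names are all assumptions:

```yaml
#cloud-config
coreos:
  units:
    - name: format-ebs.service
      command: start
      content: |
        [Unit]
        Description=Format the EBS volume only if it has no filesystem yet
        Before=var-lib-data.mount

        [Service]
        Type=oneshot
        # blkid exits non-zero when the device carries no filesystem,
        # so mkfs only runs on a fresh, unformatted volume
        ExecStart=/bin/bash -c 'blkid /dev/xvdb || mkfs.ext4 /dev/xvdb'
    - name: var-lib-data.mount
      command: start
      content: |
        [Unit]
        Requires=format-ebs.service
        After=format-ebs.service

        [Mount]
        What=/dev/xvdb
        Where=/var/lib/data
        Type=ext4
```

Fragile, as noted: get the device path or ordering wrong and you risk formatting the wrong volume, which is exactly why punting to RDS was the sane choice.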

I could have tried creating CloudFormation scripts, but instead I was making my way through documentation, blog posts, and GitHub issues to pull it all together. It was worth it when I was done.

I didn't say it was complicated. I said it was a pain in the ass, and it's something that the Kubernetes developers have acknowledged and are working on.
