Kube is actually fantastic for on-prem/dedicated/colo. I still wind up needing to manage my _non_-Kube stuff, and I fall back to Ansible + systemd for the data layer, for example.
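As a rough sketch of what that fallback looks like, here's the shape of the Ansible + systemd pattern for a data-layer service. The service name, binary path, and config path are all hypothetical, just to illustrate the approach:

```yaml
# Hypothetical Ansible tasks: run a data-layer service under systemd,
# outside the cluster. Names and paths are illustrative.
- name: Install the unit file for the data service
  ansible.builtin.copy:
    dest: /etc/systemd/system/datastore.service
    content: |
      [Unit]
      Description=Datastore (managed outside Kube)
      After=network-online.target

      [Service]
      ExecStart=/usr/local/bin/datastore --config /etc/datastore.conf
      Restart=on-failure

      [Install]
      WantedBy=multi-user.target

- name: Enable and start the service
  ansible.builtin.systemd:
    name: datastore
    enabled: true
    state: started
    daemon_reload: true
```

Nothing fancy; the point is just that this half of the stack never touches the Kube API.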
Additionally, one big complexity with Kube on dedicated hardware is dealing with custom hardware. Kube and Docker really require modern, specialized kernels. This actually applies to AWS as well: the current Kube AMI for AWS doesn't install the `linux-aws` kernel, for example, so you have no access to the ENA driver or the ixgbevf driver, and performance on the t* and i3 instances (the most common and the most vital machines, respectively) is awful. There is a PR for that open somewhere and I'm sure it will be fixed soon, but you see my point.
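Until that's fixed upstream, a workaround is to patch the kernel yourself at provision time. A minimal sketch, assuming an Ubuntu-based AMI (where the AWS-tuned kernel ships as the `linux-aws` package):

```yaml
# Hypothetical provisioning tasks: install the AWS-tuned kernel so the
# ENA and ixgbevf drivers are available. Assumes an Ubuntu-based AMI.
- name: Install the linux-aws kernel package
  ansible.builtin.apt:
    name: linux-aws
    state: present
    update_cache: true
  register: kernel_install

- name: Reboot onto the new kernel if it was just installed
  ansible.builtin.reboot:
  when: kernel_install.changed
```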
After the hardware/kernel build (which you'll need to manage outside of Kube, so you'll probably reach for something like Ansible), you'll also want auth management at the host level (assuming you run on-prem), and you'll most likely fall back to Ansible for that too.
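Host-level auth in Ansible usually comes down to managing users and SSH keys directly. A minimal sketch; the username, group, and key path are hypothetical:

```yaml
# Hypothetical host-level auth tasks: create an ops user with sudo and
# push an authorized SSH key. Names and paths are illustrative.
- name: Create the ops user
  ansible.builtin.user:
    name: opsuser
    groups: sudo
    append: true
    shell: /bin/bash

- name: Authorize the ops user's SSH key
  ansible.posix.authorized_key:
    user: opsuser
    key: "{{ lookup('file', 'files/opsuser.pub') }}"
```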
I should probably also admit somewhere in this thread that I actually use and prefer Chef :)