We recently used Kind for a k8s workshop. We provisioned a beefy cloud server and ran 15 3-node Kind clusters on it, so everyone had their own k8s instance without having to install anything locally. It worked absolutely great for this purpose.
I wrote some scripting around it so people can claim their own cluster via SSH. I'm planning to write a post about it soon and make the code available.
I’ve tried minikube, microk8s, the one bundled with Docker Desktop for Windows, k3s and Red Hat CodeReady. Of these I had the best experience with Kind (by far) and the worst experience with CodeReady (also by far).
The thing I like most about Kind: being inside Docker makes Kind very ephemeral. Every time I start it up I get a fresh cluster. I know where everything is and it doesn't contaminate my machine.
Since some of the authors are on the thread I would like to say thank you. I really appreciated the recent improvements to kubectl integration and the addition of local storage.
In the future I would like easier experimentation with pod and network policies, reduced cluster startup time, and smaller node images.
OP is right about CodeReady - I'll unhesitatingly say it's a pos. It's way too heavyweight for even a high-end laptop. It's single-node only. It falls over if you enable monitoring unless you can give it 8 cores and 12GB; then it sort of works but is too slow. The 3 times I tried deploying the provided samples, they didn't work out of the box. It also requires you to download a new release every month or so - no in-place updates, as far as I can tell.
It used too many resources on my computer. It took about 10 minutes to start a cluster, and once the cluster was up and running my computer had a hard time performing any additional tasks - like having an IDE open and compiling source code.
In comparison k3s takes seconds to start a cluster. Kind takes about a minute. Neither will consume resources to a point where my computer becomes unusable.
I reported my experience to Red Hat and they replied that it was to be expected.
I'm a great fan of kind, it's made my life so much easier for a couple of use cases.
1) I run a training course on container security. We moved from using straight kubeadm on the students' VMs to using kind clusters. The advantage here is that we can customize different clusters for different scenarios by providing a kind config file on start-up (see the sketch after this list). We can also easily have multiple clusters running on a single VM with no interference between them.
2) When evaluating software or trying out a feature, it's really nice to be able to spin up a test cluster in < 2 minutes and try it out; then it's just "kind delete cluster" to get rid of it again.
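As a rough sketch of the per-scenario config idea (the cluster name is a placeholder, and the apiVersion depends on your kind release - check what your version expects):

    # write a scenario-specific config
    cat > cluster.yaml <<EOF
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
    - role: control-plane
    - role: worker
    - role: worker
    EOF
    # spin it up for the scenario, throw it away afterwards
    kind create cluster --name scenario-a --config cluster.yaml
    kind delete cluster --name scenario-a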
When I compare it to other options (e.g. minikube, microk8s, etc.), it subjectively feels less "magic" to me, in that it's just one or more Docker containers running kubeadm, so as long as you understand those two things you can get a picture of what's going on.
I've recently started prototyping our move to k8s - and my recommendation is to stay away from minikube, k3s and kind. Kind looks the best on paper, but Canonical has done great work with https://microk8s.io/
I'd love to hear why anyone prefers any other solution for local development/experimentation.
microk8s is really cool! We wanted kind for development of kubernetes itself and I don't think microk8s was around at the time.
One difference, besides being able to build & run arbitrary Kubernetes versions, is being able to run on macOS, Windows, and Linux instead of only where snap is supported.
We're paying more attention to local development of applications now, expect some major improvements soon :-)
That's great news. In my experience kind was a bit resource-heavy - but more importantly it didn't seem to have clear documentation geared towards local testing (for users/consumers of k8s).
I've spent 2 years now developing a pretty decent-sized stack on k8s. I've used microk8s, minikube, and Docker Desktop's built-in Kubernetes distro for a while. I feel like Docker Desktop worked the best for me.
However, I've wasted so much time over the past 2 years trying to figure out why something wasn't working, only to find out it was because of differences between the k8s distro I was using and our production system. Ultimately I found the best solution was deploying exactly what we run in production on some spare bare metal I had lying around (after adding a hundred gigs of RAM).
Luckily we have a production setup that is designed to run on-prem, so this was an option for me. Regardless, I think having as close to production as possible will make your life easier.
That being said, I still might try this project out.
I was trying to find a dev setup that was feature-complete and as similar as possible to production, while still running locally.
As a sibling comment mentions, there are a number of differences between distributions/implementations - and especially when you're new to k8s it's way too easy to waste time trying to figure out why something doesn't work.
I've had the same experience and absolutely love Microk8s, though I am hopeful about k3s and k3d (k3s in Docker). As of right now they "mostly" work, but unfortunately "mostly" is enough to break things.
Also, you can't beat the one-line snap install for Microk8s.
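For reference, that one-liner (channel flags vary by release):

    sudo snap install microk8s --classic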
I just tried to set up the same stack of ~7 services I had running on Microk8s locally, and a few things went wrong in the process; I couldn't get it running.
My kubectl-fu is not strong enough to fix it, so for me that was a dealbreaker.
Though I am super passionate about k3s and support the hell out of everything Rancher Labs does, so by no means did it leave a bad taste in my mouth.
One caveat on my previous comment: if you use any one of these in production (I guess k3s is the most likely one there), then I think using it for dev should be fine. The biggest issue is differences between versions - we're deploying to managed k8s in Azure and need a dev environment that works similarly.
I use a Minikube cluster w/ KVM as the driver for my self-hosted GitLab CI/CD and it's worked flawlessly. I wonder what issues you encountered to recommend against it.
I'll just say that I found the end-user documentation for microk8s to be nice and friendly. And k3s (the little I looked at it) felt maybe a little too much like administering and running a production k8s cluster. We're not planning to do that; what I needed was something that worked easily for prototyping and experimenting - and could be driven mostly via kubectl (and/or helm), just like a managed cluster.
K3s is a lot easier to get working on RHEL and Fedora. Canonical tends to build things in ways that barely work on Ubuntu and completely fail everywhere else. Same with lxc: I was a bit upset RHEL dropped it in 8, but then I tried to get it running, saw the horror show, and decided to look at rootless Podman instead.
So does this mean you can run containers in containers orchestrating other containers?
Containers must really be the holy grail of serverless and cloud "nativeness".
Seems reasonable to me. Outside of some weird edge cases and some "technically..."s, a container is just a process with its own namespace and file system, and maybe its own IP. If we didn't have shared-filesystem, shared-namespace, shared-ports processes for historical reasons, who would be clamoring to add them? Why wouldn't you run everything in a container, container scheduler included?
Technically you can define which namespaces to inherit and which ones to create "from scratch" at process creation time. (There's an unshare() syscall that does it for an existing process, but clone() is the standard way to create new namespaces along with a new process in them; plus there's setns() to move a thread into some other namespace, given an fd pointing to that NS.)
So, namespaces are task-level things in the kernel. (Every thread is a task, and by default every process has one thread, so every process is also at least one task.)
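You can poke at this from a shell via the util-linux wrappers for those syscalls - a quick sketch, assuming a reasonably recent unshare(1)/nsenter(1) (needs root; <pid> is a placeholder):

    # start a shell in fresh PID and network namespaces
    sudo unshare --pid --net --fork --mount-proc bash
    # inside: ps sees only this shell's PID namespace...
    ps aux
    # ...and the only network device is a down loopback
    ip link
    # from another terminal, join that network namespace by PID
    # (nsenter uses setns() under the hood; <pid> is the bash above)
    sudo nsenter --target <pid> --net ip link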
I think the "best" depends on what you're doing, to be honest (e.g. if you only develop on Ubuntu, check out microk8s too! They have some good ideas, e.g. focusing on straightforward support for a local registry instead of side-loading), and there's a _lot_ of room to improve kind, but the vote of confidence is still very nice to see :-)
I like KinD, but find k3s much faster to bootstrap and lighter-weight too. Rancher has gone GA with it and provides commercial support, and Darren Shepherd also tracks the latest k8s release very closely.
Linux -> k3s (build a cluster or single node via https://k3sup.dev)
macOS/Windows -> k3d (runs k3s in a Docker container, and is super fast)
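If it helps, the quickstarts look roughly like this (the IP, user, and cluster name are placeholders, and k3d's CLI has changed across versions):

    # Linux: install k3s onto a host over SSH with k3sup
    k3sup install --ip 192.168.0.100 --user ubuntu
    # macOS/Windows: run k3s inside a Docker container with k3d
    k3d cluster create dev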
That said, if you're hacking on K8s code, then KinD has nice features to support that workflow. KinD is only suitable for dev; k3s can run in prod as well. Try both and compare - they're both easy to use.
I used kind + skaffold for 6 months and it was pretty solid. However, I eventually switched to k3d and tilt, and this combo feels amazing. A cluster takes 2 seconds to create now.
Kind has been a godsend for me. When you've got a 16GB MBP with both Docker and K8S running, re-using the Docker virtual machine makes a big difference in memory and CPU usage. Thanks to the team!
I really wanted to use kind, but the fact that it loses all its data after a restart or sleep of the computer keeps me from using it.
I'm developing Kubernetes controllers, and the Custom Resources represent bits of cloud infrastructure ( https://crossplane.io ). So when I lose the kind cluster, I have to go and delete each and every resource in AWS :( I am unhappily forced to use minikube until support comes to kind.
Loses all the data because you have to start a new cluster? The data should be persisted...
If this refers to https://github.com/kubernetes-sigs/kind/issues/148, the good news is that we're most of the way there and I'm going back to work on this now, ideally out in a v0.8 in the next week or so.
I switched my local "lab" setup from minikube (which was in use for a long time) to kind recently. The main reasons were:
1) None of us run on Linux, which means we're all using VMs for our containers, and we all use Docker Desktop for various things. That meant we were running extra local VMs for no good reason. With kind I can just use the one VM for all the container things.
2) The real reason for the actual switch, though, was that I just kept running into things that minikube couldn't do and Kind could - as well as things I had decided to ignore, like the fact that minikube does everything on one node, which is 100% unnatural for kubernetes; I had multiple cases where this setup blinded me to problems that would have occurred in a real cluster.
3) I've also found I prefer the configuration/customization approach of kind over minikube though admittedly that's kind of a small thing.
Ultimately I find kind is a better simulator for prototyping future cluster changes, as well as for use as a local "lab" for diagnosing services in a "production-like" environment 100% under your control.
Kind was originally built for developing kubernetes itself, as a cheaper option for testing changes to the core components.
It wasn't really meant to compete with minikube et al., but to complement them for differing usage - though you may now find it useful as a lightweight option with a slightly different feature set.
It's also the only local cluster that is fully conformant as far as I know, because conformance tests involve verifying multi-node behavior. At the time, minikube did not support:
- building kubernetes from a checkout and running it
- docker based nodes
- multiple nodes per cluster
These days they've gotten more similar; we're both shipping Docker- and Podman-based nodes.
I think one of the most interesting things about kind is that the entire kubernetes distro is packed into a single "node" docker image, so it's very easy to work with fully offline.
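For example (the tag is illustrative - pick the kindest/node tag matching your target version):

    # pre-pull the node image once while online
    docker pull kindest/node:v1.18.2
    # later, create a cluster fully offline from the cached image
    kind create cluster --image kindest/node:v1.18.2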
I really wish you had used a regular service definition when testing KinD. The omission reduces the usefulness of your comparison. I want to choose a local k8s cluster that is as close to production as possible. And I want my local deployment configs to be as close as possible to production.
You say that "ingress in kind is a little trickier than in the above platforms" with no explanation.
For me: use Docker Desktop if you want k8s started up every time you start Docker, plus easy ingress. I don't love having a cluster always running, so I keep the k8s feature off by default.
Use kind if you want multi-node clusters, and a production-like simulation of your environment.
Use minikube for a straightforward dev experience, where you have control over k8s version, resource allocation, and don't need meaningful configuration of the control plane.
That is pretty neat, but it is nice to have a tool that has it built in natively, rather than having to search through 30 different provider flags with Minikube. Hopefully their documentation has gotten better, but I'd generally trust the innovation behind the Kind tool more than the Minikube devs, who are late to the game.
You can simulate multiple nodes, for example - if you want to experiment with node selectors or test a multi-node setup, it's quite easy with kind.
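A quick sketch of that experiment (kind-worker is kind's default name for the first worker node; the label key is made up):

    # label one of the worker nodes
    kubectl label node kind-worker disktype=ssd
    # schedule a pod onto it via a nodeSelector
    kubectl run test --image=nginx \
      --overrides='{"apiVersion":"v1","spec":{"nodeSelector":{"disktype":"ssd"}}}'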
I tend to recommend minikube to newcomers, as it provides easy addons like ingress, an image registry, and a load balancer via minikube tunnel. You can run minikube with 2GB of RAM.
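Those conveniences look like this:

    # enable the bundled ingress and registry addons
    minikube addons enable ingress
    minikube addons enable registry
    # expose LoadBalancer services on the host (runs in the foreground)
    minikube tunnel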
Easy when only dealing with one node.
Once you're familiar, I recommend Kind when you need more than one worker node, have test scenarios that require multiple nodes, or know your way around installing and configuring addons yourself.
As a Chromebook user with Crostini, I'm wondering if this is going to work for me - as neither k3s, minikube, nor minishift did (due to limitations in Crostini).
k3s author here. With user namespace and rootless support it is getting closer to k3s running in Crostini, but nobody is really working on this. I was a big fan of Crostini when it came out, but the insistence on lxc and user namespaces makes it too limited, and I wouldn't recommend it if you work with containers.
This shouldn't happen - can you please file a bug with more information about your environment and specific usage?
I run kind on the minimum Docker for Mac spec, which is one core / 1GB, and it performs just fine. We've worked hard to make it lightweight, including a KEP upstream for slimming down the binaries.
Are you sure you are talking about Kind and not minikube, for instance? For me Kind is the most efficient way of running a real cluster on my machine; a bare cluster takes merely 600MB of RAM in my case, and creation only takes long the first time, because it downloads the docker images.