Minus the header, looks like ~24. I use a single-node Kubernetes cluster running Talos [1]. Running a single-node cluster is kinda dumb architecturally, but adding a new service takes <10 minutes most of the time, which is nice. I've standardized on Cuelang [2] for my configs, so adding a new service is some DNS/Caddy config fiddling, then:
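The original snippet here is elided, but a hypothetical sketch of what adding a service looks like in CUE might be (the `kube.cue` schema and all field names are assumptions for illustration, not the real config):

```cue
// myservice.cue -- hypothetical; kube.cue supplies the defaults
// (e.g. image: "<local registry>/myservice"), so a new service
// only needs to declare what differs from those defaults.
package kube

services: myservice: {
	port:     8080
	replicas: 1
}
```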
Where `kube.cue` sets reasonable defaults (e.g. image is <local registry>/<service>). The "cluster" runs on a mini PC in my basement, and I have a small Digital Ocean VM with a static IP acting as an ingress (networking via Tailscale). Backups to cloud storage with restic, alerting/monitoring with Prometheus/Grafana, Caddy/Tailscale for local ingress.
Interested in how you're using DO as an ingress. I currently run a droplet that's reaching its capacity because I'm running all the services directly on that underpowered machine. I would much rather run them from a local computer. Is it pretty straightforward to set that kind of thing up with tailscale?
Indeed! I use Headscale (though hosted Tailscale would work just fine); the DO droplet hosts the control plane and is also on the tailnet itself. My Caddy config has something like:
<list of public hosts> {
reverse_proxy 100.64.0.<mini PC>
}
That mini PC IP belongs to a Tailscale container running in a pod alongside a second Caddy instance, which routes traffic within the cluster. Sensitive/personal services are configured only in the cluster-internal Caddy config, and are thus only reachable over the tailnet.
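The cluster-internal Caddy config might look something like this (hostnames and upstreams are illustrative, not the author's actual services):

```caddyfile
# Only resolvable/reachable over the tailnet; never exposed via the
# DO ingress because it isn't listed in the public Caddy config.
http://grafana.internal {
	# Standard Kubernetes service DNS name for an in-cluster upstream
	reverse_proxy grafana.monitoring.svc.cluster.local:3000
}
```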
One can optionally add other "hardening" at the DO layer, like Crowdsec, to minimize automated/malicious/bot traffic into your home.
We work with small-mid startups and not many of those use Azure :)
IME, Azure has higher adoption once there's a CIO in the company.
However, in the Argonaut context that only limits the overall infra-management piece: we don't have seamless integration using IAM roles, and can't provision infra like DBs and queues.
App deployments to Azure Kubernetes (or any k8s cluster) work seamlessly, though, and some of our customers deploy to AKS in this manner.
I usually prefer to write more readable code first and optimize later if necessary. If the short circuit becomes critical for performance, I'd leave a comment or, as the parent suggests, just order it with an early return.
> If the short circuit becomes critical for performance then
The point is not performance. The point is readability: expressing things clearly, at the appropriate level of detail, without extra nesting and scope. Safety and performance are important but secondary advantages.
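A minimal sketch of the contrast being discussed (the `Order` type and totals are made up for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Order:
    items: list = field(default_factory=list)  # each item is a price
    cancelled: bool = False

def total_nested(order):
    # Nested version: the actual logic is buried under every check,
    # and the reader has to track which branch they're in.
    if order is not None:
        if order.items:
            if not order.cancelled:
                return sum(order.items)
    return 0

def total_early_return(order):
    # Guard clauses: each precondition exits immediately, so the
    # main logic reads top-to-bottom at a single level of nesting.
    if order is None:
        return 0
    if not order.items or order.cancelled:
        return 0
    return sum(order.items)
```

Both functions compute the same thing; the early-return form just keeps the happy path unindented, which is the readability win being argued for, independent of any short-circuit performance effect.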