
It sounds like you didn’t have persistent data, and were only offering compute? If there’s no need for a coherent master view accessible/writeable from all the clusters, there would be no reason to use a multi-region cluster whatsoever.


We did, but the persisted data didn't live inside those ephemeral compute clusters.


So your data store was still multi-AZ? I’m a little confused how you’d serve the same user’s data consistently from multiple silos. Do you pin users to one AZ?


Yeah, keep stateful stuff and stateless stuff separate: separate clusters, network spaces, cloud accounts, likely a mix of all that.

Clearly define boundaries and acceptable behavior within boundaries.

Set up telemetry and observability to monitor for threshold violations (rough sketch below).

Simple. Right?
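(For the telemetry bit, a minimal sketch with boto3. The metric choice, ASG name, and SNS topic ARN here are made-up placeholders, not anything specific from our setup.)

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Page someone when the stateless tier crosses a CPU threshold.
    # "my-stateless-cluster" and the SNS topic ARN are placeholders.
    cloudwatch.put_metric_alarm(
        AlarmName="stateless-cpu-threshold",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName",
                     "Value": "my-stateless-cluster"}],
        Statistic="Average",
        Period=300,              # 5-minute windows
        EvaluationPeriods=2,     # two consecutive breaches before alarming
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    )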


i mean you could also just spin up a reeeeeally big compute node and just do it all there.

fewer things to monitor. fewer things that can fail.

just log in from time to time to update packages.

see, cloud doesn’t have to be complex.


until you get a "hardware failure" note from the cloud provider, or the person updating packages makes a typo and messes up the system.

sure, use "pet" computers for experiments and dev... but having production be "cattle" makes your life so much less stressful.


I think everyone internalizes pets vs cattle a little differently, but I think people often don't realize that if you have cattle, then you now have a cattle ranch. I.e. you often create a new concept to manage your cattle, and you often run that like it's a pet.

E.g. you want to think about your machines/containers like cattle, so you put them into a kubernetes cluster, which has become your new pet. If all your infra fits on one machine, it's way easier to have that as your pet the same way it's easier to have a dog than run a livestock operation.


In cloud environments, it's pretty standard practice to automate cluster provisioning. If your clusters are pets, you're not doing it quite right.

> If all your infra fits on one machine, it's way easier to have that as your pet

What stops you from automating the provisioning of that?

These are two orthogonal issues. Clusters are used to manage many machines. If you need only one machine, you don't need a cluster. Either way, the provisioning of single nodes and clusters should be automated. Whether you have pets or cattle is not related to how many machines you have.
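For what it's worth, a minimal sketch of automating that one node with boto3. The AMI ID, subnet, and cloud-init script are placeholders; the point is just that the box can be stamped out from code rather than hand-configured:

    import boto3

    ec2 = boto3.client("ec2")

    # Everything the box needs lives in user data, so a replacement
    # comes up identical. AMI, subnet, and the script are placeholders.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="t2.2xlarge",
        MinCount=1,
        MaxCount=1,
        SubnetId="subnet-0123456789abcdef0",
        UserData="""#!/bin/bash
    set -euo pipefail
    apt-get update && apt-get -y upgrade
    # ...pull containers, write configs, start services...
    """,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": "the-one-big-box"}],
        }],
    )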


Having a few computers doesn't imply a "pets" approach to managing them.


more like a zoo, right?


I'm not a cloud guy, but if you're going to put everything in one region, what's stopping you from shoving everything into a bunch of containers on a t2.2xlarge instance (or equivalent), and adding like one CloudWatch alarm (or equivalent) that reboots the instance if it stops responding?
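Concretely, I'm picturing something like this boto3 sketch, using the built-in EC2 reboot alarm action (region and instance ID are placeholders):

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Reboot the instance when its status checks fail for 3 minutes.
    # arn:aws:automate:<region>:ec2:reboot is a built-in alarm action.
    cloudwatch.put_metric_alarm(
        AlarmName="reboot-on-hang",
        Namespace="AWS/EC2",
        MetricName="StatusCheckFailed",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Maximum",
        Period=60,
        EvaluationPeriods=3,
        Threshold=0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:automate:us-east-1:ec2:reboot"],
    )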


At least three obvious reasons might be:

(1) Raw scale might mean you just plain can't fit everything on a single t2.2xlarge.

(2) Different services (containers) might have different performance profiles, so you may want a few different types of machines around.

(3) You probably still want N+2 redundancy even within your single AZ, so this scheme should at least be upgraded to three t2.2xlarge boxes. ;)
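For (3), a hedged sketch of what the three-box version might look like as an Auto Scaling group pinned to one AZ (the launch template name and subnet ID are placeholders):

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Three identical boxes in one AZ: N+2 for a workload that needs one.
    # Launch template name and subnet ID are placeholders.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="single-az-n-plus-2",
        LaunchTemplate={"LaunchTemplateName": "big-box",
                        "Version": "$Latest"},
        MinSize=3,
        MaxSize=3,
        DesiredCapacity=3,
        VPCZoneIdentifier="subnet-0123456789abcdef0",  # one AZ, one subnet
    )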


will you settle for one big container?


Hey, you know what: you do you.

I’m an EE by education. It’s all electron state; the rest is silly leaky-abstraction Stan’ing, to my mind.

Different babble for the same thing: allocating memory and algorithmically manipulating the values stored within.

Correctness is important when it comes to results being mapped to human consumption, and even then the subset of parameters to be rigorous with can be made subjective. Personally, I lean on a subset that includes biological health and well-being, and deprioritize religiosity.



