This made me audibly guffaw. Kubernetes is a lot of things, but "portable" is not one of them. Portability between GKE, EKS, AKS, OCP, etc. is nowhere near guaranteed.
It is if you stick to standard Kubernetes resources, and it has gotten even easier with better storage class and load balancer support. All of the cloud providers now give you default storage classes and ingresses when you provision a cluster, so you can use the exact same deployment on any of them and automatically get those things provisioned the right way out of the box.
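To make that concrete, here's a minimal sketch (all names here are hypothetical) that leans only on the defaults. Assuming the cluster really does ship a default StorageClass and ingress controller as described, the same manifest should come up the same way on GKE, EKS, or AKS:

    # PVC with no storageClassName: uses the provider's default
    # StorageClass (EBS on EKS, Persistent Disk on GKE, Azure Disk on AKS)
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: app-data            # hypothetical name
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
    ---
    # Ingress with no cloud-specific annotations: the provider's
    # default ingress controller provisions the load balancer
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: app                 # hypothetical name
    spec:
      rules:
        - host: app.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: app   # hypothetical Service
                    port:
                      number: 80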
>It is if you stick to standard Kubernetes resources
"If you stick to standard C..."
No one does; that's the issue: Helm charts that only support certain cloud providers, operators and annotations that end up being platform-specific, etc.
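For a concrete sketch of what that lock-in looks like (the annotation keys are real, everything else is made up): a chart whose templates bake in something like this only works on EKS with the AWS Load Balancer Controller installed, and the GKE or AKS equivalent needs entirely different annotations and a different ingress class:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: app                 # hypothetical name
      annotations:
        # AWS Load Balancer Controller-specific; meaningless elsewhere
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/target-type: ip
    spec:
      ingressClassName: alb
      defaultBackend:
        service:
          name: app             # hypothetical Service
          port:
            number: 80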
>now give you default storage classes and ingresses
Ingress is being deprecated, it's Gateway now! Welcome to hell, er, Kubernetes.
Would love to use Gateway! Every time I spin up a new cluster it goes like this:
- New cluster setup, time to use gateway! Yay!
- Oh crap, like 80% of the helm charts and other existing configurations I need for the software I'm trying to deploy STILL don't use gateway, this new API that's been out for... like half a decade at least.
- Even core networking things like Istio/Envoy only have limited gateway support compared to ingress
- Sigh. Ingress again.
It's been like this since gateway's inception, and every time I check, the needle has moved like 2% towards gateway. So I'm looking forward to the year 2050, when I can finally use gateway!
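For context, here's roughly what the move looks like when your controller does support it; a minimal sketch (assuming Istio as the Gateway implementation, names made up) of the two resources that replace a single Ingress:

    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      name: web                 # hypothetical name
    spec:
      gatewayClassName: istio   # assumes Istio's Gateway controller
      listeners:
        - name: http
          protocol: HTTP
          port: 80
    ---
    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: app                 # hypothetical name
    spec:
      parentRefs:
        - name: web             # binds the route to the Gateway above
      rules:
        - matches:
            - path:
                type: PathPrefix
                value: /
          backendRefs:
            - name: app         # hypothetical Service
              port: 80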
The problem, as the CNCF knows, is that if they pushed Gateway and deprecated Ingress, the world would revolt due to the amount of work involved in migrating stuff. Therefore, they leave it up to "the people" to do the extra work themselves, who have no incentive to do so, since for many use cases it's not materially better.
If you're using it after it's dead, you're at risk of further problems of this nature that aren't in the underlying nginx reverse proxy but in the code wrapping it.
That's one reason I've always used Traefik as my ingress controller (I work mostly with K3s, which ships it by default). Traefik appears to have had its own security issues too, but those still look like implementation issues, not a weakness designed into the spec.
On EKS I'm using whatever AWS has brewed up to integrate ELB/ALB (the AWS Load Balancer Controller), but I'll tend to trust it... though maybe I shouldn't, given all the trouble I have with other integrations like secrets management.
I use Kubernetes every day, and have worked with dozens of helm charts, and have yet to encounter cloud specific helm charts. Are these internal helm charts for your company?
Obviously you can lock yourself in if you choose, but I have yet to see third party tools that assume a specific provider (unless you are using tools created BY that provider).
At my previous spot, we were running dozens of clusters, with some on prem and some in the cloud. It was easy to move workloads between the two, the only issue was ACLs, but that was our own choice.
I know they are pushing the new gateway api, but ingresses still work just fine.
"When deploying a JFrog application on an AWS EKS cluster, the AWS EBS CSI Driver is required for dynamic volume provisioning. However, this driver is not included in the JFrog Helm Charts."
"JFrog validates compatibility with core Kubernetes distributions. Some Kubernetes vendors apply additional logic or hardening (for example, Rancher), so JFrog Platform deployment on those vendor-specific distributions might not be fully supported."
I'm a Kubernetes user and advocate, but calling it "portable" just tells me you've never actually tried to deploy the same thing on multiple different clouds. Even standardized Kubernetes resources behave differently due to various cloud idiosyncrasies. You can of course make the situation easier, but calling it entirely portable is probably a misnomer.
Not even close to being true, unless you specifically mean 10Gbps over twisted pair (Cat6/7) cable. SFP+ is the default on a ton of network gear still.
I think the point he is making is that the industry first went with a single 10G link, and then 40G over 4 links. Then they figured out how to do 25G over a single link, and 100G over 4 links. Those 25G/100G speeds are common for enterprise switches. It might be fairer to say 40G is dead; 10G still has use cases.
Edit to add: if you want an example, look at the NVIDIA ConnectX NICs available from FS.com: the lowest-end one is 25G, then 100G, 200G, etc.
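To spell out the lane math behind that (lane rates per the relevant 802.3 specs):

    40\,\mathrm{GbE} = 4 \times 10\,\mathrm{Gbit/s} \;\; (4 \times 10.3125\ \mathrm{GBd},\ \mathrm{64b/66b})
    100\,\mathrm{GbE} = 4 \times 25\,\mathrm{Gbit/s} \;\; (4 \times 25.78125\ \mathrm{GBd})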
What they mean is that the cost per bit, both capex and opex/power, has been worse for 10G than for 25G for a while now, as long as you're talking about new hardware.
We're at the point where 25 GBaud PAM4 is being replaced by 50 GBaud PAM4.
That's 50 to 100 Gbit/s.
But IIRC the use of PAM4 for lanes faster than "only" 25 Gbit/s is a hindrance to hitting bottom-of-the-barrel price-per-bit.
PCIe 3 was 8, PCIe 4 was 16, and PCIe 5 is 32 GBaud, with a line code basically like the 10+ Gbit/s Ethernet links (well, it's 64b/66b for Ethernet and 128b/130b for PCIe).
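Spelling out the rates above (PAM4 carries 2 bits per symbol, NRZ carries 1; coding overheads per the respective specs):

    \mathrm{PAM4{:}}\quad 25\ \mathrm{GBd} \times 2 = 50\ \mathrm{Gbit/s}, \qquad 50\ \mathrm{GBd} \times 2 = 100\ \mathrm{Gbit/s}
    \mathrm{PCIe\ 5.0\ (NRZ){:}}\quad 32\ \mathrm{GBd} \times \tfrac{128}{130} \approx 31.5\ \mathrm{Gbit/s\ per\ lane}
    \mathrm{10GBASE\text{-}R\ (NRZ){:}}\quad 10.3125\ \mathrm{GBd} \times \tfrac{64}{66} = 10\ \mathrm{Gbit/s}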