Red Hat has a similar eBPF-based tool: https://github.com/netobserv/network-observability-operator
(Disclaimer: I work on it.) The cool thing, imho, with Retina, Red Hat NetObserv, or Pixie is that they aren't tied to a specific CNI.
Now, one problem that arises as eBPF-based tools multiply is potential conflicts and lack of collaboration between them. A project called bpfman aims to address this.
You can also check out DeepFlow [1], where we implemented distributed tracing for microservices using eBPF, which of course also includes observability of K8s networks.
Just today at $DAYJOB we had a complaint that one team's Azure Kubernetes cluster was "slow". About the only metric out of the norm was network traffic, but the lack of detailed instrumentation meant we couldn't isolate the cause to a specific container or process.
If the only explanation someone can provide is that "a cluster is slow", the issue isn't with network observability. They need to do at least the minimum level of analysis before escalating.
Yes, that would be great, but unfortunately there are application teams (particularly in the enterprise) lacking such tact when blaming infrastructure for issues.
Good old silos are alive and well, and ownership is not always part of the culture.
In our case the expected golden path is that once our team figures out the proper procedure, we will establish it for the downstream teams that directly support the application teams.
So at least in theory things are somewhat well set up, but there's too much siloing at our level (wildly separate network teams, teams for specific clouds, etc.).
It's like Cilium + Hubble, but useful when you don't or can't run Cilium. It uses eBPF to collect metrics and stats on what flows where, and can record an impressive amount of stuff without any required instrumentation of your applications. Amazingly handy when you run both first-party and third-party apps in your K8s cluster. The network maps these tools produce are handy too.
Although, Cilium is pretty great, so not sure why you wouldn’t run it, given the option…
Cilium is a CNI - it provides the K8s cluster's inter-pod networking. The fact that it uses eBPF to deliver that functionality is what gives it the impressive observability you usually only get from a service mesh. I agree that not everyone needs a service mesh.
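For a sense of the per-socket data these flow tools surface: without eBPF, the slow-path version of the same visibility is parsing /proc/net/tcp, whose rows encode addresses as little-endian hex. A rough sketch (the sample row below is made up, not from a real machine):

```python
# Sketch: eBPF flow tools read socket state via kernel hooks; the poor
# man's version is parsing /proc/net/tcp. Addresses there are
# little-endian hex, e.g. '0100007F:1F90' is 127.0.0.1:8080.

def parse_addr(hexaddr: str) -> str:
    """Decode a /proc/net/tcp IPv4 'ADDR:PORT' hex pair."""
    ip_hex, port_hex = hexaddr.split(":")
    # IPv4 bytes are stored little-endian, so walk the hex backwards
    octets = [str(int(ip_hex[i:i + 2], 16)) for i in range(6, -2, -2)]
    return ".".join(octets) + ":" + str(int(port_hex, 16))

def parse_tcp_row(row: str):
    """Return (local, remote, state) from one /proc/net/tcp row."""
    fields = row.split()
    return parse_addr(fields[1]), parse_addr(fields[2]), fields[3]

# Made-up sample row; state 0A is LISTEN
sample = ("0: 0100007F:1F90 00000000:0000 0A 00000000:00000000 "
          "00:00000000 00000000 1000 0 12345 1 0 100 0 0 10 0")
print(parse_tcp_row(sample))  # ('127.0.0.1:8080', '0.0.0.0:0', '0A')
```

The row also carries the socket inode (the `12345` field here), which is what lets you attribute a flow to a process by scanning /proc/&lt;pid&gt;/fd - the eBPF tools get the same association directly in the kernel, without the race-prone scan.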
Haven't used this, but I tried out Pixie to debug where outgoing traffic was coming from and where it was going, and was fairly successful, although Pixie wasn't very stable and had a lot of issues causing crashes.
In this case, we had a couple of services talking to third-party services running on AWS, so it wasn't obvious from generic flow logs.
I also used Lacework a couple of years ago, which is eBPF-based, and it was pretty trivial to see things phoning home or one-off maintenance where a new connection was being initiated.
There is a flood of observability tools based on eBPF coming out these days [1]. eBPF is used to collect metrics without the need to instrument the code.
Speaking of observability tools: anybody here know how to gather more in-depth metrics on mTLS requests? We have an internal (self-signed) CA and just want to know which issued certs are presented to nodes. It would be nice to get the cert serial number and other metadata as well.
That is a very interesting ask, let me raise an issue against the repo and see how we can solve this with eBPF in this repo. I am pretty sure this is a very common problem for a lot of kube admins.
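If the certificate bytes can be captured at all (a packet capture works for TLS 1.2, where certificates are sent in the clear; TLS 1.3 encrypts the handshake, so there you'd need something like an eBPF uprobe on the TLS library), extracting the serial is a small ASN.1 walk over the DER. A minimal sketch - the blob at the bottom is a hand-crafted stand-in, not a real certificate:

```python
# Sketch: pull serialNumber out of a DER-encoded X.509 certificate.
# Relevant structure (RFC 5280):
#   Certificate ::= SEQUENCE { tbsCertificate SEQUENCE {
#       version [0] EXPLICIT INTEGER OPTIONAL, serialNumber INTEGER, ... } }

def read_tlv(buf: bytes, i: int):
    """Read one DER tag/length header; return (tag, length, value_offset)."""
    tag = buf[i]
    length = buf[i + 1]
    i += 2
    if length & 0x80:  # long-form length: low bits give byte count
        n = length & 0x7F
        length = int.from_bytes(buf[i:i + n], "big")
        i += n
    return tag, length, i

def cert_serial(der: bytes) -> int:
    _, _, i = read_tlv(der, 0)           # outer Certificate SEQUENCE
    _, _, i = read_tlv(der, i)           # tbsCertificate SEQUENCE
    tag, length, j = read_tlv(der, i)    # [0] version (optional) or serial
    if tag == 0xA0:                      # skip the explicit version wrapper
        tag, length, j = read_tlv(der, j + length)
    assert tag == 0x02, "expected INTEGER serialNumber"
    return int.from_bytes(der[j:j + length], "big")

# Hand-crafted minimal blob (NOT a real cert): serial = 0x0A1B2C
fake = bytes.fromhex("300c300aa00302010202030a1b2c")
print(hex(cert_serial(fake)))  # 0xa1b2c
```

On a live connection Python's ssl module already exposes 'serialNumber' via getpeercert(); the manual parse is only needed when all you have is raw captured bytes.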
Don't change the name of this crate; it is in an unrelated domain. (Also unrelated to Retina displays and, uhh, this Brazilian intrusion detection system https://sunsoftware.com.br/retina/ and whatever this thing is https://retina.ai/ among other things.)