Control network traffic down to the API level with #Cilium? In the third part of his series, Dominik Guhr shows you how to make security rules visible and effective with eBPF, deny policies, and service maps, and how to avoid the usual pitfalls.
https://www.innoq.com/de/articles/2025/04/cilium-kubernetes-3/
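Not from the article itself, but as a rough sketch of what "API-aware plus explicit deny" can look like in practice (namespace, labels, port, and path below are placeholders I made up):

```bash
# Hypothetical policy: allow only GET /healthz from the frontend, and
# explicitly deny anything arriving from outside the cluster.
kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: backend-api-example
  namespace: demo
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:
              - method: GET
                path: "/healthz"
  ingressDeny:
    - fromEntities:
        - world        # deny rules take precedence over allow rules
EOF
```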
Cilium question:
How are `ClusterIP` type services supposed to be routed when Kube-Proxy is replaced?
The service cluster IP range has to be set in the K8s API server configuration, so IPAM for it sits well outside Cilium's... is that range supposed to be natively routable? Everything says CIDR overlaps are a no-no.
The pod routing table contains nothing for the ClusterIP, and even though the default route should cover it, `traceroute` shows the packet hitting the node's IP and then just... nothing.
Is eBPF supposed to be doing DNAT?
I don't get it and I'm so close!
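For what it's worth, my understanding is that the ClusterIP range is never meant to be routable under kube-proxy replacement: an eBPF program on the socket path (and at tc for traffic that isn't connect()-based) rewrites the service IP to a backend pod IP before the packet goes anywhere, so the ClusterIP never shows up in a routing table and `traceroute` dies at the node. A rough way to see what the datapath has programmed (the binary is plain `cilium` instead of `cilium-dbg` in older agent images):

```bash
# Is kube-proxy replacement actually active, and with which features?
kubectl -n kube-system exec ds/cilium -- cilium-dbg status --verbose | grep -iA5 kubeproxyreplacement

# The service -> backend translations the agent knows about.
kubectl -n kube-system exec ds/cilium -- cilium-dbg service list

# The raw eBPF load-balancer maps that perform the DNAT.
kubectl -n kube-system exec ds/cilium -- cilium-dbg bpf lb list
```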
Took a while, but I've discovered that what I want/need for IPv6 dynamic iBGP peering with Cilium just isn't possible without hacking around in OPNsense a bit.
Well, at least I *know* now it's not doable. Tweaking settings semi-blindly and poking logs wasn't exactly fulfilling.
As is par for the course, I found the GitHub issue for it closed by a stale bot.
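For reference, this is roughly the static-neighbor shape that Cilium's BGP control plane expects (needs `bgpControlPlane.enabled=true`; the ASNs and peer address below are placeholders). Every peer has to be listed explicitly, which is exactly what makes a dynamic-neighbors-only router config awkward to match:

```bash
kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
  name: ibgp-example
spec:
  nodeSelector:
    matchLabels:
      kubernetes.io/os: linux
  virtualRouters:
    - localASN: 64512
      exportPodCIDR: true
      neighbors:
        - peerAddress: "fd00:10::1/128"   # the router side, as a /128
          peerASN: 64512                  # same ASN = iBGP
EOF
```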
> Many customers choose to combine the L3/L4 features of Cilium with the L4/L7 and encryption features of Istio for a defense-in-depth strategy.
Those poor bastards / Like hell they do!
Network monitoring, security, and observability, directly in the Linux kernel thanks to eBPF.
In his new article series, Dominik Guhr shows you how to set up a local Kubernetes cluster with #Cilium and make it ready for observability and security. Beyond the basics, you'll learn, among other things, how Cilium network policies are created and monitored in real time.
Read it now: https://www.innoq.com/de/articles/2025/12/cilium-kubernetes-1/
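The article has its own walkthrough; purely as a sketch of that kind of local setup (cluster name and options are my guesses, assuming kind, helm, and the cilium CLI are installed):

```bash
# Local cluster without a default CNI, then Cilium + Hubble on top.
cat > kind-cilium.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true   # Cilium will be the CNI
EOF
kind create cluster --name cilium-demo --config kind-cilium.yaml

cilium install                # install Cilium with the CLI defaults
cilium status --wait          # wait for agent and operator to be ready
cilium hubble enable --ui     # flow visibility; open it with `cilium hubble ui`
```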
The drawbacks of Istio compared to Cilium: a detailed explanation
In this article, we break down the main drawbacks of Istio compared to Cilium Service Mesh, so that even a beginner developer can understand the difference and why some teams choose Cilium over Istio.
Provide additional metadata information to Cilium for IP addresses outside of the Kubernetes cluster scope https://www.danielstechblog.io/provide-additional-metadata-information-to-cilium-for-ip-addresses-outside-of-the-kubernetes-cluster-scope/ #Azure #AKS #Cilium #Kubernetes
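The post may use a different mechanism entirely, but one building block for this in Cilium is a CiliumCIDRGroup: it attaches a name to out-of-cluster CIDRs, and policies can reference that name instead of raw addresses (group name and CIDR below are placeholders):

```bash
kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2alpha1
kind: CiliumCIDRGroup
metadata:
  name: office-networks
spec:
  externalCIDRs:
    - "203.0.113.0/24"        # documentation range, replace with real prefixes
EOF
# A CiliumNetworkPolicy can then refer to it with:
#   ingress:
#     - fromCIDRSet:
#         - cidrGroupRef: office-networks
```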
YES! Generally Available: Azure CNI Overlay Dual-stack with Cilium Dataplane SUPPORT #aks #kubernetes #Azure #cilium https://azure.microsoft.com/updates?id=486814
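If I'm reading the announcement right, this boils down to something like the following at cluster-creation time (resource names are placeholders, flags as in recent azure-cli versions):

```bash
az aks create \
  --resource-group rg-demo \
  --name aks-cilium-dualstack \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --network-dataplane cilium \
  --ip-families ipv4,ipv6 \
  --generate-ssh-keys
```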
Public Preview: Cilium WireGuard Encryption Support in AKS #kubernetes #cilium #Azure #aks #wireguard https://azure.microsoft.com/en-us/updates
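No idea yet what the managed AKS switch for this looks like; on self-managed clusters, Cilium's WireGuard node-to-node encryption is typically turned on via Helm roughly like this (assumes the `cilium` Helm repo is already configured):

```bash
# Self-managed Cilium example, not the AKS preview mechanism from the link.
helm upgrade cilium cilium/cilium \
  --namespace kube-system \
  --reuse-values \
  --set encryption.enabled=true \
  --set encryption.type=wireguard
```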
Cilium doesn't add IPv6 pod IPs to the host network interface, which means that returning packets get discarded before hitting the OS unless promiscuous mode is on.
I've enabled masquerading but the docs say it won't SNAT anything in the `ipv6NativeRoutingCIDR`.
So I'm confused: how is native routing mode supposed to work with IPv6, then? The pods are all getting global unicast addresses that are valid for the subnet...
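For my own notes, these seem to be the knobs involved (value names as of recent Cilium releases; the prefix is a placeholder). The bit the docs appear to assume is that the underlying network can route the pod prefix back to the node, via static routes or BGP, precisely because pod IPs are never added to the host NIC:

```bash
# Assumes the cilium Helm repo is configured; the CIDR is a placeholder.
# Note: even with masquerading on, destinations inside the native-routing
# CIDR are not SNATed, so return routing must be handled by the network.
helm upgrade cilium cilium/cilium \
  --namespace kube-system \
  --reuse-values \
  --set routingMode=native \
  --set ipv6.enabled=true \
  --set ipv6NativeRoutingCIDR=2001:db8:42::/64 \
  --set enableIPv6Masquerade=true \
  --set autoDirectNodeRoutes=true
```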
This morning's *absolute* WTF moment:
Pod network traffic has 100% outbound packet loss UNTIL I run `tcpdump` on the node; then it starts working fine.
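Classic capture side effect to rule out: `tcpdump` normally flips the NIC into promiscuous mode, so if traffic only flows while capturing, L2 delivery (MAC / neighbour resolution) is the prime suspect. Quick check, assuming the uplink is `eth0`:

```bash
# Is the NIC currently promiscuous (e.g. because tcpdump is attached)?
ip -d link show eth0 | grep -i promisc

# Which neighbour entries does the node actually hold on that interface?
ip -6 neigh show dev eth0
```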
Hubble is up and running! Great way to visualize traffic in your cluster.
#DevOps #k8s #learnk8s #cncf #100DaysOfCode #100DaysOfDevOps #kubernetes #cilium
ok so cilium's hubble tool is really helpful, even the non-enterprise version
it's been especially handy for troubleshooting network policies
but can we maybe talk about how it renders the connections?
like, what in the heck? #Kubernetes #Cilium
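If the UI's rendering gets in the way, the CLI answers the policy questions too; something along these lines (namespace is a placeholder):

```bash
# Forward the Hubble relay locally, then watch policy verdicts.
cilium hubble port-forward &

# Flows dropped (e.g. by a network policy) in a given namespace, live.
hubble observe --namespace demo --verdict DROPPED --follow
```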
Nico Vibert, my beloved,,,,
is Cilium native routing mode supposed to publish pod IPs on the interfaces in the host network namespace?
That would make sense to me, since it would just lean on the network's native layer 2/3 routing.
Or am I required to turn on SNAT using the IP masquerading feature?
Pods are getting valid IPv6 GUAs in the LAN/host subnet, but of course nothing can return to them...
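What I'd check next (the in-pod binary is `cilium-dbg` on recent agent images, plain `cilium` on older ones): whether the agent itself thinks native routing and masquerading are on, and which CIDR it treats as native:

```bash
# What the agent reports for routing mode and masquerading.
kubectl -n kube-system exec ds/cilium -- cilium-dbg status | grep -iE 'routing|masquerad'

# The rendered agent configuration, via the cilium CLI on the workstation.
cilium config view | grep -iE 'native-routing|masquerade|ipv6'
```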
Kubernetes DNS question:
Couldn't the CNI actually manage DNS instead of CoreDNS?
I mean it'd be potentially a lot of data to throw at eBPF for in-cluster records. It's already distributing information for routing.
It could also enforce upstream resolvers for all pods by using DNAT, assuming DNSSEC remains a niche concern.
...in my defence, I never said my ideas were good...
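For the record, Cilium does already sit in the DNS path when asked to: a policy with an L7 `dns` rule routes matching lookups through the agent's per-node DNS proxy (the same machinery behind `toFQDNs`). A sketch of the usual visibility policy, with a placeholder namespace:

```bash
# Note: once any egress rule exists, all other egress from the selected pods
# needs its own allow rules, so don't drop this into a namespace blindly.
kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: dns-visibility
  namespace: demo
spec:
  endpointSelector: {}              # every pod in the namespace
  egress:
    - toEndpoints:
        - matchLabels:
            "k8s:io.kubernetes.pod.namespace": kube-system
            "k8s:k8s-app": kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: ANY
          rules:
            dns:
              - matchPattern: "*"
EOF
```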