#Cilium


Drawbacks of Istio compared to Cilium: a detailed explanation

In this article we go over the main drawbacks of Istio compared to Cilium Service Mesh, so that even a beginner developer can understand what the difference is and why some teams choose Cilium over Istio.

habr.com/ru/articles/903736/


Cilium doesn't add IPv6 pod IPs to the host network interface, which means that returning packets get discarded before hitting the OS unless promiscuous mode is on.

I've enabled masquerading but the docs say it won't SNAT anything in the `ipv6NativeRoutingCIDR`.

So I'm confused: how is native routing mode supposed to work with IPv6, then? The pods are all getting globally unique addresses, all valid for the subnet...
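For reference, this is roughly how I have it configured - a minimal sketch of the Helm values I believe are relevant (Cilium >= 1.14 naming; the CIDR is just an example, and my reading of what each value does may be off):

```yaml
# Minimal sketch of the Cilium Helm values I think matter here
# (the CIDR below is an example, not my real subnet).
ipv6:
  enabled: true
routingMode: native        # no tunnelling; rely on the LAN's own L2/L3 routing
ipv6NativeRoutingCIDR: "2001:db8:1::/64"   # traffic to this range is NOT SNATed
enableIPv6Masquerade: true # SNAT only for destinations outside the native routing CIDR
autoDirectNodeRoutes: true # nodes install routes to each other's pod CIDRs (same L2)
```

As far as I can tell, anything off-node still needs a route back to the pod CIDR (or the node answering NDP for pod addresses), which is exactly the part I'm unsure about.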

Is Cilium's native routing mode supposed to publish pod IPs on the interfaces in the host network namespace?

That would make sense to me, since it would actually be using the network's native layer 2/3 routing.

Or am I required to turn on SNAT using the IP masquerading feature?

Pods are getting valid IPv6 GUAs in the LAN/host subnet, but of course nothing can return to them...

Kubernetes DNS question:

Couldn't the CNI actually manage DNS instead of CoreDNS?

I mean it'd be potentially a lot of data to throw at eBPF for in-cluster records. It's already distributing information for routing.

It could also enforce upstream resolvers for all pods by using DNAT - assuming DNSSEC remains a niche concern.

...in my defence, I never said my ideas were good...
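The closest existing thing to the DNAT idea that I know of is Cilium's Local Redirect Policy, normally used for node-local DNS caching. A rough sketch, with made-up names, assuming the local redirect policy feature is enabled in the Helm values:

```yaml
# Rough sketch: redirect anything addressed to the kube-dns ClusterIP to a
# node-local DNS pod instead (a DNAT-ish redirect done in eBPF).
# Names/labels below are hypothetical.
apiVersion: "cilium.io/v2"
kind: CiliumLocalRedirectPolicy
metadata:
  name: redirect-dns-to-node-local
  namespace: kube-system
spec:
  redirectFrontend:
    serviceMatcher:
      serviceName: kube-dns
      namespace: kube-system
  redirectBackend:
    localEndpointSelector:
      matchLabels:
        k8s-app: node-local-dns   # hypothetical label on the local DNS pod
    toPorts:
      - port: "53"
        name: dns
        protocol: UDP
      - port: "53"
        name: dns-tcp
        protocol: TCP
```

That only steers queries, though - the CNI still isn't the authoritative source for in-cluster records, so CoreDNS stays in the picture.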

Ok, serious question: what do I replace tailscale with? Can I run ipsec in a k8s pod?

I know #cilium can do multicluster but it wasn't fun. Managing another CA and making sure policies are multicluster-aware sucks. And I've hit a few issues where I had to restart the cilium node agent until it'd finally catch up (was a while ago, so maybe a non-issue nowadays).

What I want is a k8s service in cluster A that resolves to a k8s pod in cluster B. It's almost HTTP-only, but not quite.

I guess I could get away with setting up an LB pool in both clusters and running a host-to-host wireguard or ipsec tunnel to bridge those LB pools. Still not ideal, as it'd be harder to firewall everything off.
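For context, the clustermesh setup I'm trying to avoid repeating boils down to declaring the same Service in both clusters and marking it global (names below are placeholders):

```yaml
# Sketch of the clustermesh "global service" approach: the same Service
# (same name + namespace) exists in both clusters, and the annotation makes
# Cilium load-balance across endpoints from all connected clusters.
apiVersion: v1
kind: Service
metadata:
  name: my-backend
  namespace: default
  annotations:
    service.cilium.io/global: "true"
spec:
  selector:
    app: my-backend
  ports:
    - port: 80
      targetPort: 8080
```

Which works, but it's exactly what dragged in the extra CA and the multicluster-aware policies.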