Kubernetes v1.26: Improved Network Traffic Management for Less Downtime During Rolling Updates!
Kubernetes v1.26 brings significant advancements in network traffic management. Two features, support for internal traffic policy for Services and the EndpointSlice terminating condition, have been promoted to General Availability (GA). A third feature, proxying to terminating endpoints, has reached Beta status. These improvements aim to address challenges in traffic management and open new possibilities for the future.
Before Kubernetes v1.26, clusters could experience traffic loss through Service Load Balancers during rolling updates when the externalTrafficPolicy field was set to Local. To understand this, a brief overview of how Kubernetes manages Load Balancers is helpful!
In Kubernetes, you can create a Service with type: LoadBalancer to expose an application externally via a Load Balancer. The implementation of the Load Balancer varies between clusters and platforms, but the Service provides a generic abstraction that is consistent across all Kubernetes installations.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  type: LoadBalancer
Under the hood, Kubernetes assigns a NodePort to the Service, which is then used by kube-proxy to provide a network data path from the NodePort to the Pod. A controller then adds all available nodes in the cluster to the Load Balancer’s backend pool, using the assigned NodePort for the Service as the backend target port.
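For illustration, this is roughly what the same Service might look like once Kubernetes has allocated a NodePort and the cloud controller has provisioned the Load Balancer. The port number and external IP below are placeholders, not values from a real cluster:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
      nodePort: 31790         # allocated automatically from the cluster's NodePort range (30000-32767 by default)
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - ip: 203.0.113.10      # illustrative external address assigned by the cloud provider's Load Balancer

The Load Balancer forwards external traffic to the NodePort on the backend nodes, and kube-proxy forwards it from there to a Pod listening on port 9376.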

Often, it is beneficial to set externalTrafficPolicy: Local for Services to avoid additional hops between nodes that do not run healthy Pods for that Service. When externalTrafficPolicy: Local is used, an additional NodePort for health checks is assigned, so nodes that do not contain healthy Pods are excluded from a Load Balancer’s backend pool.
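A minimal sketch of such a Service, assuming the same example as above; the healthCheckNodePort value is allocated by Kubernetes, and the number shown here is only a placeholder:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  type: LoadBalancer
  externalTrafficPolicy: Local   # only forward external traffic to Pods running on the node that received it
  healthCheckNodePort: 32123     # allocated automatically; the Load Balancer probes this port to check for local Pods

The external Load Balancer probes healthCheckNodePort on every node; kube-proxy answers successfully only on nodes that have at least one ready Pod for the Service, so all other nodes are taken out of rotation.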

A common scenario where traffic can be lost is when a node loses all Pods for a Service, but the external Load Balancer has not yet checked the health check NodePort. The likelihood of this situation heavily depends on the configured health check interval of the Load Balancer. The larger the interval, the more likely this is, as the Load Balancer continues to send traffic to a node even after kube-proxy has removed the forwarding rules for that Service. This also happens when Pods begin terminating during rolling updates. Since Kubernetes does not consider terminating Pods as “ready,” traffic can be lost if only terminating Pods are present on a node during a rolling update.

As of Kubernetes v1.26, kube-proxy enables the ProxyTerminatingEndpoints feature by default, allowing automatic failover and routing to terminating endpoints in scenarios where traffic would otherwise be lost. Specifically, during a rolling update, if a node only contains terminating Pods, traffic is routed to those terminating Pods as long as they are still passing their readiness probes. Additionally, kube-proxy actively fails the health check on the health check NodePort if only terminating Pods are available. This way, kube-proxy informs the external Load Balancer that no new connections should be sent to this node, while requests on already established connections can continue to be handled gracefully.
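Conceptually, kube-proxy bases this decision on the per-endpoint conditions published in EndpointSlices. A sketch of what an endpoint for the example Service might look like in the middle of a rolling update (the slice name, node name, and address are illustrative):

apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: my-service-abc12
  labels:
    kubernetes.io/service-name: my-service
addressType: IPv4
ports:
  - protocol: TCP
    port: 9376
endpoints:
  - addresses:
      - 10.0.1.5
    nodeName: node-1
    conditions:
      ready: false        # terminating Pods are never reported as ready
      serving: true       # the Pod still passes its readiness probe
      terminating: true   # the Pod is in the process of shutting down

When a node has no ready endpoints left, kube-proxy falls back to endpoints that are both serving and terminating, and at the same time fails the health check NodePort so the Load Balancer stops sending new connections to that node.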
The new features in Kubernetes v1.26 offer developers and DevOps teams significant benefits and reduce downtime during rolling updates. ayedo is proud to be a partner in the Kubernetes ecosystem and to help you make the most of these new opportunities.
Source: Kubernetes Blog