feat: continue routing to serving endpointslices during termination #4946
base: main
Conversation
Signed-off-by: fogninid <[email protected]>
LGTM thanks!
@fogninid why should traffic be routed to a pod during graceful termination? IMO, with the right rolling update strategy in place, traffic should be shifted to the newer pod.
@arkodg my expectation is for envoy-gateway to behave in the same way as other networking solutions. Most explicit mentions I found of this logic are close to what cilium is doing here. envoy-gateway could do something similar, using all endpoints whose `serving` condition is true. Would this be the right place to introduce this logic?
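For context, a minimal sketch of what "use all endpoints whose `serving` condition is true" could look like when reading a `discovery.k8s.io/v1` EndpointSlice. The helper names and the acceptance rule are illustrative assumptions, not this PR's actual code:

```go
package endpoints

import (
	discoveryv1 "k8s.io/api/discovery/v1"
)

// isRoutable is a hypothetical helper: it accepts an endpoint whose serving
// condition is true, even if the endpoint is already terminating. Per the
// EndpointConditions documentation, a nil condition means "unknown", which
// consumers usually treat as true.
func isRoutable(ep discoveryv1.Endpoint) bool {
	if ep.Conditions.Serving != nil {
		return *ep.Conditions.Serving
	}
	// Older clusters may not populate Serving; fall back to Ready.
	return ep.Conditions.Ready == nil || *ep.Conditions.Ready
}

// routableAddresses collects the addresses of all endpoints in a slice that
// are still allowed to receive new connections under this policy.
func routableAddresses(slice *discoveryv1.EndpointSlice) []string {
	var addrs []string
	for _, ep := range slice.Endpoints {
		if isRoutable(ep) {
			addrs = append(addrs, ep.Addresses...)
		}
	}
	return addrs
}
```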
thanks for sharing the link @fogninid, found this blog https://kubernetes.io/blog/2022/12/30/advancements-in-kubernetes-traffic-engineering/#traffic-loss-from-load-balancers-during-rolling-updates which highlights how it can be used by Ingress Controllers.
This would translate to: if an endpoint is not Ready but Serving, we can set the lb endpoint status accordingly. wdyt @envoyproxy/gateway-maintainers / @envoyproxy/gateway-reviewers?
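As an illustration of that translation (not this PR's actual code), a go-control-plane sketch that gives serving-but-not-ready endpoints a non-HEALTHY Envoy health status. Which `HealthStatus` value to use (e.g. `DEGRADED` vs `DRAINING`) is exactly the design question being discussed here, so the choice of `DEGRADED` below is an assumption:

```go
package endpoints

import (
	corev3 "github.com/envoyproxy/go-control-plane/envoy/config/core/v3"
	endpointv3 "github.com/envoyproxy/go-control-plane/envoy/config/endpoint/v3"
)

// lbEndpoint builds an Envoy LbEndpoint for one Kubernetes endpoint address.
// ready/serving mirror the EndpointSlice conditions; mapping "serving but not
// ready" to HealthStatus_DEGRADED is an illustrative assumption.
func lbEndpoint(ip string, port uint32, ready, serving bool) *endpointv3.LbEndpoint {
	status := corev3.HealthStatus_HEALTHY
	if !ready && serving {
		// Degraded hosts are only used when Envoy determines there are not
		// enough healthy hosts in the cluster.
		status = corev3.HealthStatus_DEGRADED
	}
	return &endpointv3.LbEndpoint{
		HealthStatus: status,
		HostIdentifier: &endpointv3.LbEndpoint_Endpoint{
			Endpoint: &endpointv3.Endpoint{
				Address: &corev3.Address{
					Address: &corev3.Address_SocketAddress{
						SocketAddress: &corev3.SocketAddress{
							Address: ip,
							PortSpecifier: &corev3.SocketAddress_PortValue{
								PortValue: port,
							},
						},
					},
				},
			},
		},
	}
}
```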
The description in that link of the behavior of a load balancer is too simplified for my taste. There is no 1:1 mapping from a single field of the EndpointConditions to the expected lb state of the matching endpoint; rather, the full list of endpoints and the combination of Serving/Terminating should map to a set of lb endpoints with appropriate states. Again, my main reasoning is that new connections should not be dropped if any Serving=true endpoint is currently available (irrespective of its termination); this is the same behavior described for kube-proxy in that link.
Based on the envoy config link you provided, I believe the
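To make the point above about mapping the full endpoint list concrete, a sketch of one possible policy: prefer Ready endpoints, and only fall back to Serving-but-Terminating ones when no Ready endpoint remains, mirroring the kube-proxy behavior referenced in the blog. The function name and the exact policy are assumptions, not this PR's implementation:

```go
package endpoints

import discoveryv1 "k8s.io/api/discovery/v1"

// selectEndpoints returns the endpoints that should receive new connections:
// all Ready endpoints if any exist, otherwise the endpoints that are still
// Serving (typically terminating pods), so traffic is not dropped mid-rollout.
func selectEndpoints(eps []discoveryv1.Endpoint) []discoveryv1.Endpoint {
	var ready, servingOnly []discoveryv1.Endpoint
	for _, ep := range eps {
		switch {
		case ep.Conditions.Ready != nil && *ep.Conditions.Ready:
			ready = append(ready, ep)
		case ep.Conditions.Serving != nil && *ep.Conditions.Serving:
			// Not ready, but still serving: usually a pod in graceful
			// termination that can still accept new connections.
			servingOnly = append(servingOnly, ep)
		}
	}
	if len(ready) > 0 {
		return ready
	}
	// No ready endpoints left: fall back to the serving (terminating) ones
	// instead of dropping new connections.
	return servingOnly
}
```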
Continue routing to endpoints while their graceful termination is in progress
What this PR does / why we need it:
Use the `serving` condition that was defined in kubernetes/kubernetes#92968.

Release Notes: Yes
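For reference, an illustrative example (made-up address) of the condition combination this relies on once the `serving` condition from kubernetes/kubernetes#92968 is populated: a pod in graceful termination is no longer Ready, but still Serving, and marked Terminating:

```go
package endpoints

import discoveryv1 "k8s.io/api/discovery/v1"

func boolPtr(b bool) *bool { return &b }

// terminatingEndpoint shows the condition combination during graceful
// termination: not Ready, still Serving, and Terminating, so a proxy may
// keep routing new connections to it instead of dropping them.
var terminatingEndpoint = discoveryv1.Endpoint{
	Addresses: []string{"10.0.0.12"}, // illustrative address
	Conditions: discoveryv1.EndpointConditions{
		Ready:       boolPtr(false),
		Serving:     boolPtr(true),
		Terminating: boolPtr(true),
	},
}
```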