Add dead simple k8s configs #32
Conversation
this all looks very cool. think the helper script can be a bit nicer..
```bash
set -e
sleep 10
curl -s "${K8S_API:-localhost:8001}/api/v1/namespaces/$NS/pods/$POD_NAME" | jq '.status.hostIP' | sed 's/"//g'
```
FWIW, I think this could be even more useful as an init/wrapper script, i.e. `proxied_run`:

```bash
#!/bin/sh
set -e

if [ $# -lt 1 ]; then
  echo "usage: proxied_run cmd args..." >&2
  exit 1
fi

k8s_api="${K8S_API:-localhost:8001}/api/v1/namespaces/$NS/pods/$POD_NAME"
host_ip=$(curl -s "$k8s_api" | jq '.status.hostIP' | sed 's/"//g')

export http_proxy=$host_ip:4140 HTTP_PROXY=$host_ip:4140
exec "$@"
```
Seems like a tall order to ask that jq be installed (this won't ever work on busybox etc)...
Doing it as a wrapper looks nice. I agree that's probably better.
jq is a tall order but how do you suggest interacting with the kubernetes API? busybox probably doesn't even have curl...
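One jq-free option (a sketch, not from this thread): the hostIP can be pulled out of the API's JSON response with POSIX `sed`, which busybox does ship. The `$json` variable below is a stand-in for the real API output.

```shell
# Sketch: extract .status.hostIP from pod JSON using only sed (busybox-safe).
# "$json" stands in for the output of: curl -s "$k8s_api"
json='{"status":{"hostIP":"10.0.0.5","podIP":"10.1.2.3"}}'
host_ip=$(printf '%s' "$json" | sed -n 's/.*"hostIP": *"\([^"]*\)".*/\1/p')
echo "$host_ip"
```

This is brittle against formatting changes in the API response, which is the usual argument for jq.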
Wait. I think we can just use the downward API for this! Something like:

```yaml
containers:
- ...
  env:
  - name: LINKERD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
```
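If the downward API injects `LINKERD_IP` this way, the wrapper no longer needs curl or jq at all; a minimal sketch, with the assignment standing in for the value the kubelet would inject:

```shell
# Sketch: with LINKERD_IP injected by the downward API, setting the proxy
# is a one-liner. The hard-coded assignment stands in for the injected value.
LINKERD_IP=10.0.0.5
export http_proxy="$LINKERD_IP:4140" HTTP_PROXY="$LINKERD_IP:4140"
echo "$http_proxy"
```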
available in 1.4
@adleong kubernetes #24657 was just a proposal for hostIP, which was finally merged in kubernetes #42717; in kubernetes #27880, spec.NodeName was added to the downward API.
@andyxning thanks for the note!
```bash
set -e
sleep 10
```
??? why?
It's a mystery. It seems like when the container starts, the k8s API isn't guaranteed to already know about it. If we query the API too soon, this pod isn't there.
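One alternative to a fixed `sleep 10` would be to poll the API until it answers for this pod, with a bounded number of attempts. A sketch; `wait_for_pod` is a hypothetical helper, not part of this PR:

```shell
# Sketch: poll until the API answers for this pod, instead of a blind sleep.
wait_for_pod() {
  url="$1"
  i=0
  until curl -sf "$url" >/dev/null 2>&1; do
    i=$((i + 1))
    [ "$i" -lt 30 ] || return 1   # give up after ~30 attempts
    sleep 1
  done
}
```

It could be called as `wait_for_pod "$k8s_api" || exit 1` before the curl that reads hostIP, so the happy path isn't always 10 seconds slow.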
```yaml
    mountPath: "/io.buoyant/linkerd/config"
    readOnly: true
  - name: kubectl
    image: buoyantio/kubectl:1.2.3
```
not a blocker, but we probably want to get a newer kubectl out (1.4 is out now...)
building now...
```yaml
  containers:
  - name: l5d
    image: buoyantio/linkerd:0.8.1
    imagePullPolicy: Always
```
fwiw we probably don't need to set imagePullPolicy on stable tags
good point
235e2ec to f7b433f
⭐️ Looks good!
```bash
http_proxy=$(kubectl get svc | grep l5d | awk '{ print $3 }'):4140 curl -s http://hello
http_proxy=$(kubectl get svc | grep l5d | awk '{ print $3 }'):4140 curl -s http://world
```
Oooh, let's use JSONPath here:

```bash
$ http_proxy=$(kubectl get svc l5d -o jsonpath="{.spec.loadBalancerIP}"):4140 curl -s http://hello
$ http_proxy=$(kubectl get svc l5d -o jsonpath="{.spec.loadBalancerIP}"):4140 curl -s http://world
```
```bash
kubectl apply -f linkerd-viz.yml
open http://$(kubectl get svc | grep linkerd-viz | awk '{ print $3 }')
```
And here:

```bash
$ open http://$(kubectl get svc linkerd-viz -o jsonpath="{.spec.loadBalancerIP}")
```

fwiw there's now a lot of duplication between these config files and the ones under …
Writeup to follow.