Kubernetes in Google Cloud – Part 1: the etcd cluster
So, I’ve been getting Kubernetes running (following Kelsey Hightower’s tutorial). For that you need an etcd/fleet cluster, and here’s how to get one up and running on Google Cloud.
Firstly, you need an etcd cluster discovery token. This is the mechanism by which members of the cluster find each other.
local$ curl https://discovery.etcd.io/new > discovery.token
local$ cat discovery.token
https://discovery.etcd.io/d1085f3894e14a536da9b835d2a39bca
If the response is “Unable to generate token”, try again until you get one.
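If you’d rather not mash retry by hand, a one-line loop can keep fetching until the response looks like a URL. A minimal sketch:

local$ until grep -q '^https://' discovery.token 2>/dev/null; do curl -s https://discovery.etcd.io/new > discovery.token; sleep 1; done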
Then you need a cloud-init configuration script to set up the machine.
#cloud-config

coreos:
  update:
    reboot-strategy: off
  etcd:
    # generate a new token for each unique cluster from https://discovery.etcd.io/new
    # WARNING: replace each time you tear it down
    #discovery: https://discovery.etcd.io/<token>
    discovery: https://discovery.etcd.io/d1085f3894e14a536da9b835d2a39bca
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  fleet:
    public-ip: $public_ipv4
    metadata: public_ip=$public_ipv4,private_ip=$private_ipv4,role=etcd
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
Save this as etcd.yml.
I’ve turned off the update reboot strategy whilst I get my cluster up and running (otherwise you’ll likely see “You’re not using the latest version of etcd in this cluster” warnings). I’ve also replaced the placeholder discovery token with the one we generated above.
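If you’d rather not paste the token in by hand, you can substitute it in with sed. A minimal sketch, assuming a hypothetical etcd.tmpl.yml that is a copy of the file above with the <token> placeholder left as the active discovery: entry:

# discovery.token holds the full URL, so it can be dropped straight in
local$ sed "s|https://discovery.etcd.io/<token>|$(cat discovery.token)|" etcd.tmpl.yml > etcd.yml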
Now you can fire up a cluster
local$ gcloud --project wavering-elk-132 \
    compute instances create e1 e2 \
    --image https://www.googleapis.com/compute/v1/projects/coreos-cloud/global/images/coreos-alpha-457-0-0-v20141002 \
    --zone europe-west1-b \
    --machine-type f1-micro \
    --metadata-from-file user-data=etcd.yml \
    --metadata role=etcd
This creates two virtual machines, called e1 and e2, in the wavering-elk-132 project.
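You can check that both instances have come up (and see their external IPs) with the standard list subcommand:

local$ gcloud compute --project wavering-elk-132 instances list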
You can connect to e1 with:-
local$ gcloud compute --project wavering-elk-132 ssh --zone europe-west1-b e1
If you get prompted for a password, simply Ctrl-C and try again: the machine is still setting up and hasn’t had the SSH keys transferred yet.
Once you’re logged onto the machine, you can see if the other machines have registered into the cluster with:-
e1$ fleetctl list-machines
MACHINE     IP              METADATA
11ee89b1... 104.155.10.114  private_ip=10.240.168.137,public_ip=104.155.10.114,role=etcd
45855786... 104.155.1.67    private_ip=10.240.138.238,public_ip=104.155.1.67,role=etcd
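fleet finding its peers is a good sign that etcd underneath is healthy too. If you want to poke etcd directly, its HTTP API listens on the client port from the config above (4001); the version endpoint makes a simple liveness check (it reports the etcd version baked into the image):

e1$ curl -L http://127.0.0.1:4001/version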
You can check that etcd is working now by setting a value on e1…
local$ gcloud compute --project wavering-elk-132 ssh --zone europe-west1-b e1
e1$ etcdctl set /foo bar
bar
and retrieving it on e2.
local$ gcloud compute --project wavering-elk-132 ssh --zone europe-west1-b e2
e2$ etcdctl get /foo
bar
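etcdctl can do a little more than plain get and set; for instance, a key can be given a time-to-live so it expires automatically. A quick sketch (the key name /session is just an example):

e1$ etcdctl set /session abc --ttl 30
abc
e1$ etcdctl get /session    # succeeds for ~30 seconds, then the key expires
abc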
You can also see some cluster information by curl’ing the discovery token.
local$ curl `cat discovery.token` | python -m json.tool
{
    "action": "get",
    "node": {
        "createdIndex": 141438252,
        "dir": true,
        "key": "/_etcd/registry/d1085f3894e14a536da9b835d2a39bca",
        "modifiedIndex": 141438252,
        "nodes": [
            {
                "createdIndex": 141440431,
                "expiration": "2014-10-24T09:30:41.349906821Z",
                "key": "/_etcd/registry/d1085f3894e14a536da9b835d2a39bca/11ee89b15ba942aea376ef66f5b4471f",
                "modifiedIndex": 141440431,
                "ttl": 147289,
                "value": "http://10.240.168.137:7001"
            },
            {
                "createdIndex": 141440437,
                "expiration": "2014-10-24T09:30:41.531662386Z",
                "key": "/_etcd/registry/d1085f3894e14a536da9b835d2a39bca/45855786f7144cc280186ed71df51e26",
                "modifiedIndex": 141440437,
                "ttl": 147289,
                "value": "http://10.240.138.238:7001"
            }
        ]
    }
}
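If you just want the peer addresses out of that blob, you can lean on python again rather than reading the JSON by eye. A minimal sketch:

local$ curl -s `cat discovery.token` | python -c '
import json, sys
# the peer URLs live under node.nodes[].value in the discovery document
for node in json.load(sys.stdin)["node"]["nodes"]:
    print(node["value"])
'
http://10.240.168.137:7001
http://10.240.138.238:7001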
I’ll publish the next blog post, which builds on this, in the next few days.