Kubernetes in Google Cloud – Part 3: Spinning up your app
This final part of my tutorial series on Kubernetes in Google Cloud covers spinning up your own Dockerised n-tier app. Part 1 is here and part 2 is here; this post builds on both.
You probably want to run your own Docker Registry (unless you’re sharing your images via the Docker Hub), so create a fleet service that runs the Docker Registry on every Kubernetes node.
local$ gcloud compute --project wavering-elk-132 ssh --zone europe-west1-b k1
k1$ sudo touch /opt/kubernetes/services/docker-registry.service
k1$ sudo chmod 666 /opt/kubernetes/services/docker-registry.service
k1$ cat << EOF > /opt/kubernetes/services/docker-registry.service
[Unit]
After=flannel.service
Wants=flannel.service
Description=Docker Registry
Documentation=http://docs.docker.io
[Service]
Restart=always
ExecStart=/usr/bin/docker run -e SETTINGS_FLAVOR=s3 -e AWS_BUCKET=<bucket> -e STORAGE_PATH=/docker/registry -e AWS_KEY=<aws_key> -e AWS_SECRET=<aws_secret> -e SEARCH_BACKEND=sqlalchemy -p 5000:5000 registry:0.8.1
[X-Fleet]
Global=true
MachineMetadata=role=kubernetes
EOF
k1$ sudo chmod 644 /opt/kubernetes/services/docker-registry.service
k1$ fleetctl start /opt/kubernetes/services/docker-registry.service
Wait for this to start up; it may take several minutes to download the Registry image. Check progress with ‘docker ps -a’ and/or ‘fleetctl list-units’.
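Rather than re-running those commands by hand, you can poll with a tiny helper. This is only a sketch: the generic function is real shell, but the fleetctl usage at the bottom assumes the unit appears with an “active” state in ‘fleetctl list-units’ output once the Registry container is up.

```shell
# wait_for: poll a command every 5 seconds until its output matches a pattern,
# giving up after a number of tries (default 60). Generic helper, nothing
# fleet-specific in the function itself.
wait_for() {
  local cmd="$1" pattern="$2" tries="${3:-60}"
  local i=0
  while [ "$i" -lt "$tries" ]; do
    if eval "$cmd" 2>/dev/null | grep -q "$pattern"; then
      echo "ready"
      return 0
    fi
    i=$((i + 1))
    sleep 5
  done
  echo "timed out"
  return 1
}

# Assumed usage (unit name and "active" state are assumptions about
# your fleetctl list-units output format):
# wait_for "fleetctl list-units" "docker-registry.service.*active"
```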
Now you can start an n-tier service (I’m using my SimpleDropWIzardNTier project).
k1$ sudo mkdir -p /opt/sdwntier/backend/
k1$ sudo touch /opt/sdwntier/backend/backend.json
k1$ sudo chmod 666 /opt/sdwntier/backend/backend.json
k1$ cat << EOF > /opt/sdwntier/backend/backend.json
{
  "id":"sdwntier-backend",
  "desiredState":{
    "manifest":{
      "version":"v1beta1",
      "id":"sdwntier-backend",
      "containers":[
        {
          "name":"backend",
          "image":"localhost:5000/andrewgorton/sdwntier-backend:1.0.3-SNAPSHOT",
          "ports":[
            { "containerPort":8020, "hostPort":8020 }
          ]
        }
      ]
    }
  },
  "labels":{ "name":"sdwntier-backend" }
}
EOF
k1$ kubecfg -c /opt/sdwntier/backend/backend.json create pods
and wait for this to start (it may take a few minutes to pull the image; check with ‘kubecfg -l name=sdwntier-backend list pods’).
Once it’s running, you should be able to curl the node on the container’s port to check it’s working.
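For example, a quick check might look like the sketch below. The URL is an assumption: it supposes the backend pod landed on the node you’re on (host port 8020, per the pod definition above) and that the app serves /products, as my SimpleDropWizardNTier backend does.

```shell
# check_endpoint: hit a URL and report OK/FAIL without dumping the body.
check_endpoint() {
  if curl -fsS --max-time 5 "$1" > /dev/null 2>&1; then
    echo "OK"
  else
    echo "FAIL"
  fi
}

# Assumed URL: backend pod on this node, host port 8020, /products endpoint.
# check_endpoint "http://localhost:8020/products"
```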
Once you’ve confirmed that, you can spin up a Service. A Kubernetes Service routes all the traffic to the correct node, so you don’t need to know where the backend is running to access it.
k1$ sudo touch /opt/sdwntier/backend/backend-service.json
k1$ sudo chmod 666 /opt/sdwntier/backend/backend-service.json
k1$ cat << EOF > /opt/sdwntier/backend/backend-service.json
{
  "id":"sdwntier-backend-service",
  "kind":"Service",
  "apiVersion":"v1beta1",
  "port":10000,
  "containerPort":8020,
  "selector":{ "name":"sdwntier-backend" }
}
EOF
k1$ kubecfg -c /opt/sdwntier/backend/backend-service.json create services
This should start fairly quickly. It simply says “on every Kubernetes node, map port 10000 to port 8020 on the host running the single backend instance”. You should be able to check it with ‘curl localhost:10000/products’ from any Kubernetes node and get a response.
Now you want to start the front-end tier.
k1$ sudo mkdir -p /opt/sdwntier/frontend/
k1$ sudo touch /opt/sdwntier/frontend/frontend.json
k1$ sudo chmod 666 /opt/sdwntier/frontend/frontend.json
k1$ cat << EOF > /opt/sdwntier/frontend/frontend.json
{
  "id":"frontendController",
  "kind":"ReplicationController",
  "apiVersion":"v1beta1",
  "desiredState":{
    "replicas":1,
    "replicaSelector":{ "name":"frontend" },
    "podTemplate":{
      "desiredState":{
        "manifest":{
          "version":"v1beta1",
          "id":"frontendController",
          "containers":[
            {
              "name":"sdwntier-frontend",
              "image":"localhost:5000/andrewgorton/sdwntier-frontend:1.0.3-SNAPSHOT",
              "ports":[
                { "containerPort":8030, "hostPort":8030 }
              ]
            }
          ]
        }
      },
      "labels":{ "name":"frontend" }
    }
  },
  "labels":{ "name":"frontend" }
}
EOF
k1$ kubecfg -c /opt/sdwntier/frontend/frontend.json create replicationControllers
and wait for this to start up via ‘kubecfg -l replicationController=frontendController list pods’. A ReplicationController tells Kubernetes to keep a specified number of instances of a pod running. The front end locates the backend using code such as
WebResource r = c.resource(String.format("http://%s:%s/products",
    System.getenv("BACKEND_PORT_8020_TCP_ADDR"),
    System.getenv("BACKEND_PORT_8020_TCP_PORT")));
(although I suppose you could just use http://localhost:10000/products, because the Service will auto-proxy).
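Those BACKEND_PORT_8020_TCP_* variables are the Docker-links-style environment variables that Kubernetes injects for each Service. As a sanity check, you can rebuild the same URL in shell; the variable names mirror the Java snippet above, while the example address is made up:

```shell
# backend_url: compose the backend's /products URL from the Service
# environment variables, mirroring the Java snippet above.
backend_url() {
  echo "http://${BACKEND_PORT_8020_TCP_ADDR}:${BACKEND_PORT_8020_TCP_PORT}/products"
}

# Example values (invented; inside a real pod Kubernetes sets these for you):
# BACKEND_PORT_8020_TCP_ADDR=10.244.1.5 BACKEND_PORT_8020_TCP_PORT=8020 backend_url
# → http://10.244.1.5:8020/products
```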
Lastly, you can run up another Service to route all traffic to nodes which are running the front end.
k1$ sudo touch /opt/sdwntier/frontend/frontend-service.json
k1$ sudo chmod 666 /opt/sdwntier/frontend/frontend-service.json
k1$ cat << EOF > /opt/sdwntier/frontend/frontend-service.json
{
  "id":"sdwntier-fe-service",
  "kind":"Service",
  "apiVersion":"v1beta1",
  "port":8090,
  "containerPort":8030,
  "selector":{ "replicationController":"frontendController" }
}
EOF
k1$ kubecfg -c /opt/sdwntier/frontend/frontend-service.json create services
You should be able to ‘curl localhost:8090/products’ to see it working (traffic to port 8090 on any Kubernetes host is routed to port 8030 on a host running a front end).
You can resize the number of instances with
k1$ kubecfg resize frontendController 2
To delete everything…
k1$ kubecfg delete services/sdwntier-fe-service
k1$ kubecfg delete replicationControllers/frontendController
k1$ kubecfg -l replicationController=frontendController list pods | tail -n +3 | awk '!/^($|#)/{print "pods/"$1;}' | xargs -r -L 1 -P 99 -t kubecfg delete
k1$ kubecfg delete services/sdwntier-backend-service
k1$ kubecfg delete pods/sdwntier-backend
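The third command is dense, so here it is against some canned output. The sample rows are invented (the exact columns depend on your kubecfg version), but the idea holds: skip the two header lines, drop blank and comment lines, and turn the first column into pods/&lt;name&gt; arguments for deletion.

```shell
# Simulated `kubecfg list pods` output: a header row, a separator row,
# then one row per pod. The pod names below are invented for illustration.
sample='Name                          Image(s)    Host    Labels    Status
----                          --------    ----    ------    ------
frontendController-x1y2z      ...         k1      ...       Running
frontendController-a3b4c      ...         k2      ...       Running'

# Skip the 2 header lines, ignore blank/comment lines, prefix names with "pods/".
echo "$sample" | tail -n +3 | awk '!/^($|#)/{print "pods/"$1;}'
# → pods/frontendController-x1y2z
# → pods/frontendController-a3b4c
```

In the real pipeline, xargs then feeds each of those pods/… names to ‘kubecfg delete’, one invocation per pod.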