
1. Workflow

This section describes how to run the official examples available on kubernetes.io on this k8s cluster.

The work here is designed to be run on a ThinkPad in a seminar room.

If you wish to run this on your own PC, either in or outside the seminar room, please keep the following points in mind (a short example follows the list).

  1. change "$(id -un)" to your ID (s13xxxxxxx)

  2. replace the "browse" command with the "echo" command to display the URL using your own Web browser.

2. Implementation of the procedure

Following the procedure on the official website listed in the reference information, we replace each command with the one to be executed on the k8s cluster in the seminar room.

3. Creating a service for an application running in five pods

3.1. / 1. Run a Hello World application in your cluster:

Here is the command with -n $(id -un) added

$ kubectl -n $(id -un) apply -f https://kubernetes.io/examples/service/load-balancer-example.yaml

If successful, you will see the following message:

deployment.apps/hello-world created

3.2. / 2. Display information about the Deployment:

From here, we will investigate the hello-world Deployment that was created.

To check the status, use get. (You will see that five web server pods are running.)

$ kubectl -n $(id -un) get deployments hello-world

The following information about the Deployment object will be displayed

NAME          READY   UP-TO-DATE   AVAILABLE   AGE
hello-world   5/5     5            5           98s

Another way to check is to use describe, which presents the information in a different format.

$ kubectl -n $(id -un) describe deployments hello-world

Unlike get, the describe command displays detailed information such as the following.

Name:                   hello-world
Namespace:              yasu-abe
CreationTimestamp:      Wed, 09 Oct 2019 05:57:52 +0000
Labels:                 app.kubernetes.io/name=load-balancer-example
Annotations:            deployment.kubernetes.io/revision: 1
                        kubectl.kubernetes.io/last-applied-configuration:
                          {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app.kubernetes.io/name":"load-balancer-example"},"name...
Selector:               app.kubernetes.io/name=load-balancer-example
Replicas:               5 desired | 5 updated | 5 total | 5 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app.kubernetes.io/name=load-balancer-example
  Containers:
   hello-world:
    Image:        gcr.io/google-samples/node-hello:1.0
...

When you use get, you can check detailed definition information by specifying -o yaml as an argument.

$ kubectl -n $(id -un) get deployments hello-world -o yaml
## Check the output by yourself.
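If you only need one field from that YAML, the same get command also accepts a jsonpath output format (the same mechanism is used later in this manual to retrieve the Service IP). A small optional example, printing only the desired number of replicas:

$ kubectl -n $(id -un) get deployments hello-world -o jsonpath='{.spec.replicas}'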

3.3. / 3. Display information about your ReplicaSet objects:

A Deployment is the top-level definition registered with kubectl apply (or kubectl create), and a ReplicaSet is created automatically based on it. The official sample provides commands to verify this.

$ kubectl -n $(id -un) get replicasets -l app.kubernetes.io/name=load-balancer-example
$ kubectl -n $(id -un) describe replicasets -l app.kubernetes.io/name=load-balancer-example

Please check the output by yourself.

3.4. / 4. Create a Service object that exposes the deployment:

In this example, a LoadBalancer is used to expose the service to the outside world. In this environment, however, the assigned address is on the seminar-room network, so the service cannot be accessed from networks outside the seminar room.

To access the hello-world application created here, we need an IP address and port number that can be reached from outside the cluster. The kubectl expose command below creates a Service object with type: LoadBalancer.

$ kubectl -n $(id -un) expose deployment hello-world --type=LoadBalancer --name=my-service
service/my-service exposed

3.5. / 5. Display information about the Service:

To verify the contents of the created service/my-service, the official sample applies get to service.

$ kubectl -n $(id -un) get services my-service
NAME         TYPE           CLUSTER-IP    EXTERNAL-IP       PORT(S)          AGE
my-service   LoadBalancer   10.233.2.82   192.168.100.160   8080:32415/TCP   95s

If you access the EXTERNAL-IP (192.168.100.160) and port 8080 shown here with a web browser or similar tool, you can see the response of the hello-world application.

3.6. / 6. Display detailed information about the Service:

As before, you can check the contents not only with get but also with describe.

$ kubectl -n $(id -un) describe services my-service
## Verify that the output is almost the same as the official sample

3.7. / 7. In the preceding output, …

Next, the official sample changes the output format of kubectl get. It uses --output=wide, but the abbreviated form -o wide is more commonly used, so both forms are shown below.

$ kubectl -n $(id -un) get pods -o wide -l app.kubernetes.io/name=load-balancer-example
OR
$ kubectl -n $(id -un) get pods --output=wide -l app.kubernetes.io/name=load-balancer-example
NAME                          READY   STATUS    RESTARTS   AGE   IP              NODE       NOMINATED NODE   READINESS GATES
hello-world-bbbb4c85d-4csvp   1/1     Running   0          44m   10.233.105.24   u109ls04   <none>           <none>
hello-world-bbbb4c85d-lwqxv   1/1     Running   0          44m   10.233.112.15   u109ls03   <none>           <none>
hello-world-bbbb4c85d-nntdf   1/1     Running   0          44m   10.233.105.23   u109ls04   <none>           <none>
hello-world-bbbb4c85d-r8h46   1/1     Running   0          44m   10.233.113.11   u109ls01   <none>           <none>
hello-world-bbbb4c85d-xkszr   1/1     Running   0          44m   10.233.115.12   u109ls02   <none>           <none>

For the other formats accepted by the -o option, refer to the kubectl Syntax page in the official k8s.io documentation.
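As a brief illustration of other output formats (these are standard kubectl options, not something specific to this tutorial):

## Print only the resource names
$ kubectl -n $(id -un) get pods -l app.kubernetes.io/name=load-balancer-example -o name
## Print the full objects as JSON
$ kubectl -n $(id -un) get pods -l app.kubernetes.io/name=load-balancer-example -o json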

If you look at the NODE column here, you will see that the hello-world pods are distributed across all four servers. In the official example, the NODE names show that it was run on GCP (GKE, Google Kubernetes Engine).
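If you want to count how many pods are running on each node, a small shell pipeline such as the following can be used (a sketch that assumes NODE is the seventh column of the -o wide output, as in the listing above):

$ kubectl -n $(id -un) get pods -o wide -l app.kubernetes.io/name=load-balancer-example | awk 'NR>1 {print $7}' | sort | uniq -c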

3.8. / 8. Use the external IP address (LoadBalancer Ingress) to access …

Finally, the curl command is used to connect to the hello-world app from the command line instead of a web browser.

## Please change the IP address (192.168.100.160) for your environment.
$ curl http://192.168.100.160:8080/

The IP address (192.168.100.160) part is the EXTERNAL-IP value that you just saw with get services.

To view a web page from a web browser, you can enter the URL ( http://192.168.100.160:8080/ ) directly or use the browse command.

$ browse http://192.168.100.160:8080/

To access the IP address assigned in the seminar room environment, run the following command.

$ curl http://$(kubectl -n $(id -un) get svc my-service -o=jsonpath='{.status.loadBalancer.ingress[0].ip}'):8080
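If you run several commands against the same address, it may be convenient to store it in a shell variable first (a minimal sketch assuming a POSIX-compatible shell):

$ EXTERNAL_IP=$(kubectl -n $(id -un) get svc my-service -o=jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ curl http://${EXTERNAL_IP}:8080/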

4. Connecting from a network outside the seminar room using my-proxy

If you have not yet completed the Reverse Proxy tutorial, please come back here after finishing it.

So far, http://192.168.100.160:8080/ could not be accessed with curl unless you were inside the seminar room.

The following steps are required to access this application through the my-proxy created in the Reverse Proxy tutorial.

  1. add a Service object specifying type: ClusterIP (e.g. svc/hello-world-svc)

  2. change the proxy.conf setting in the cm/nginx-conf object so that the added service name is used as the hostname

  3. restart pod/my-proxy-xxxxxx-xxxxxxxxx

First, add a new Service object by modifying the file used earlier to create a Service object.

$ curl "https://web-int.u-aizu.ac.jp/~yasu-abe/ja/sccp/manual/ingress-proxy.svc-proxy.yaml" | sed -e "s/s12xxxxx/hello-world/" -e "s/app: my-proxy/app.kubernetes.io\/name: load-balancer-example/" -e "s/80/8080/" | kubectl -n $(id -un) apply -f -

Here, three items (five lines) are changed; a way to review the generated YAML before applying it is shown after the list.

  1. replace "s12xxxxx" with "hello-world"

  2. replace "app: my-proxy" with "app.kubernetes.io/name: load-balancer-example".

  3. Replace "80" with "8080".

The resulting Service object is added as follows

$ kubectl -n $(id -un) get svc hello-world-svc -o yaml
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc
  labels:
    app.kubernetes.io/name: load-balancer-example
spec:
  type: ClusterIP
  ports:
     -  port: 8080
        protocol: TCP
        targetPort: 8080
  selector:
    app.kubernetes.io/name: load-balancer-example

Next, modify the cm/nginx-conf object. Here, the port number is not the default 80, so we explicitly specify 8080.

      ## Changes to proxy.conf
      location /s12xxxxx/hello-world/ {
        proxy_pass    http://hello-world-svc:8080/;
      }

To modify the settings in cm/nginx-conf, the following command line can be used. Use the EDITOR environment variable if necessary.

$ env EDITOR=emacs kubectl -n $(id -un) edit cm/nginx-conf
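After saving the edit, you can confirm that the change was stored by printing the ConfigMap again (this is only a verification step, not part of the official procedure):

$ kubectl -n $(id -un) get cm nginx-conf -o yaml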

Finally, restart pod/my-proxy in the same way as before.

$ kubectl -n $(id -un) delete pod -l app=my-proxy
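Because the my-proxy pod is managed by a higher-level controller (presumably a Deployment, judging from the pod name in the Reverse Proxy tutorial), a replacement pod is started automatically after the delete. You can check that the new pod reaches the Running state:

$ kubectl -n $(id -un) get pods -l app=my-proxy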

Now, access /s12xxxxxxx/hello-world/ under your URL from the list below and check whether you can connect to the app.

5. Cleaning up

From here, run the steps in Cleaning up to remove the configuration and restore the original state. You will see how easily pods can be created and deleted.

$ kubectl -n $(id -un) delete services my-service
service "my-service" deleted

The message indicates that the service was deleted; to confirm that it is really gone, you can do the following.

$ kubectl -n $(id -un) get services my-service
## The same command can also be written in a shortened form
$ kubectl -n $(id -un) get svc/my-service
## Either command will display an Error message as follows
Error from server (NotFound): services "my-service" not found

When deleting the Deployment, confirm that deleting this one definition also automatically removes the ReplicaSet and Pod definitions.

$ kubectl -n $(id -un) delete deployment hello-world

The first time you run kubectl -n $(id -un) get all, some pods and other objects may still remain. If you run the same command again after a while, you will see that the ReplicaSet and Pod definitions have also been deleted automatically.
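For reference, that command can be run as follows; if your kubectl supports it, the -w (watch) option lets you observe the objects disappearing (press Ctrl-C to stop watching):

$ kubectl -n $(id -un) get all
## Optionally, watch until everything is gone
$ kubectl -n $(id -un) get all -w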

6. Summary up to this point

If anything is unclear after running through the procedure, try running it again from the beginning.

If you run it again, you may see different error messages than the first time, because the procedure includes steps that have already been completed and can simply be skipped.

There is no problem with duplicate runs, so feel free to repeat the process until you are satisfied.