
Tuesday, July 28, 2020

Web app deployment inside of Kubernetes with microk8s

based on the Kubernetes course:
 
1) install microk8s: sudo snap install microk8s
2) enable registry & dns: microk8s.enable registry dns

MongoDB deployment & service
3) configure the MongoDB deployment
generate 2 secret values using md5sum from the shell and use them for the following environment variables (a Secret manifest sketch is shown after these steps):
MONGO_INITDB_ROOT_USERNAME=--insert_here_encrypted_username--
MONGO_INITDB_ROOT_PASSWORD=--insert_here_encrypted_password--
MONGO_INITDB_DATABASE=admin

4) apply the MongoDB database deployment and service
microk8s.kubectl apply -f mongodb-deployment.yaml
5) check the environment variables inside the container
5.1) enter inside the deployment:
microk8s.kubectl exec -it deployment.apps/mongodb-deployment -- sh
5.2) env
6.1) get inside the mongodb container:
from Docker: docker exec -it mongo bash
from Kubernetes: microk8s.kubectl exec -it mongodb-deployment--insert_your_deployment_id -- /bin/sh
6.2) authenticate to the mongodb database container:
mongo -u insert_here_encrypted_username -p insert_here_encrypted_password --authenticationDatabase admin
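
For reference, the two values generated in step 3 could also be stored in a Kubernetes Secret instead of being passed as plain environment variables. A minimal sketch, assuming the name mongodb-secret and placeholder base64-encoded values (encode your own with: echo -n 'value' | base64):

apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret              # assumed name
type: Opaque
data:
  mongo-root-username: dXNlcm5hbWU=    # base64 of your generated username (placeholder)
  mongo-root-password: cGFzc3dvcmQ=    # base64 of your generated password (placeholder)

The deployment can then reference these keys through env / valueFrom / secretKeyRef instead of hard-coding MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD.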


Our application deployment & service
7) build the docker image of our application:
docker build . -t localhost:32000/mongo-app:v1
8) test the image locally by publishing its port:
docker run -p 3000:3000 localhost:32000/mongo-app:v1
or: docker run  -it --rm -p 3000:3000 localhost:32000/mongo-app:v1
9) push the image into the kubernetes registry
docker push localhost:32000/mongo-app:v1
10) apply our custom application: microk8s.kubectl apply -f mongo.yaml (a sketch of a possible mongo.yaml is shown after these steps)
11) check whether the IP addresses of the pods match the service endpoints. If they do, the service endpoints are correctly set and point to the created pods:
microk8s.kubectl describe service
microk8s.kubectl get pod -o wide
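
For reference, here is a minimal sketch of what mongo.yaml from step 10 could look like. The names mongo-app and mongo-app-service and the application port 3000 are assumptions based on the image built in step 7:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo-app
  template:
    metadata:
      labels:
        app: mongo-app
    spec:
      containers:
      - name: mongo-app
        image: localhost:32000/mongo-app:v1    # the image pushed in step 9
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-app-service
spec:
  type: NodePort
  selector:
    app: mongo-app
  ports:
  - port: 3000
    targetPort: 3000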


Congratulations!

Tuesday, May 19, 2020

Kubernetes in Ubuntu - Ingress

(part of the Kubernetes course):

microk8s.kubectl get all --all-namespaces
// enable registry
microk8s.enable registry
//check /etc/hosts
//allow the insecure registry (localhost:32000) in /etc/docker/daemon.json
// enable dns
microk8s.enable dns
// enable ingress service/controller
microk8s.enable ingress
// verify if it is running
microk8s.kubectl get pods --all-namespaces
// microk8s.kubectl describe  pod  nginx-ingress-microk8s-controller-pn82q  -n ingress
create 2 deployments with different names, each pointing to a different app version
docker build -t localhost:32000/php_app:v1 .
docker build -t localhost:32000/php_app:v2 .
push the images into the registry
docker push localhost:32000/php_app:v1
docker push localhost:32000/php_app:v2
apply the 2 deployments
microk8s.kubectl apply -f deployment_web1.yaml
microk8s.kubectl apply -f deployment_web2.yaml
apply the 2 services to expose the deployments
microk8s.kubectl apply -f service_web1.yaml
microk8s.kubectl apply -f service_web2.yaml
check if they have valid endpoints:
microk8s.kubectl get ep
microk8s.kubectl get pods -o wide
create the ingress resource (a sketch of a possible ingress.yaml is shown after these steps):
microk8s.kubectl apply -f ingress.yaml
check the ingress resource: microk8s.kubectl get ingress
check the ingress controller logs: microk8s.kubectl logs -n ingress daemonset.apps/nginx-ingress-microk8s-controller
set /etc/hosts so that the hostname used in the ingress rule points to the ingress address (127.0.0.1 on a local cluster).
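
For reference, here is a minimal sketch of what the ingress.yaml applied above could look like. The resource name, the hostnames web1.local and web2.local, the service names service-web1 and service-web2 and their port 80 are assumptions; on Kubernetes versions before 1.19 the apiVersion would be networking.k8s.io/v1beta1 with a slightly different backend syntax:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress                 # assumed name
spec:
  rules:
  - host: web1.local                # assumed hostname, mapped in /etc/hosts
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-web1      # assumed name of the web1 service
            port:
              number: 80
  - host: web2.local                # assumed hostname, mapped in /etc/hosts
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-web2      # assumed name of the web2 service
            port:
              number: 80

With rules like these, /etc/hosts would map web1.local and web2.local to 127.0.0.1, and each hostname would be routed by the ingress controller to its own service.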

Wednesday, March 25, 2020

Kubernetes in Ubuntu - horizontal pod autoscaler with microk8s

Let's take a look at how to create a horizontal pod autoscaler in Kubernetes. This kind of setup is very useful when we want Kubernetes to automatically add and remove pods based on the application's workload.
(part of my Kubernetes course):

Here is the sample PHP application directly from the Kubernetes docs:
  <?php
  $x = 0.0001;
  for ($i = 0; $i <= 1000000; $i++) {
    $x += sqrt($x);
  }
  echo "Operation Complete! v1";
  ?>


What it does: it runs a loop in which the square root of x is calculated and added back to the accumulated value of x on every iteration, generating CPU load.

We also have a Dockerfile for the creation of an image based on the PHP code:
FROM php:7.4.4-apache
ADD index.php /var/www/html/index.php

We use a php-apache image, and on the second line we place our index.php application code inside the image's Apache document root (/var/www/html). This way, when we create a container from the image, the Apache web server will serve the content on port 80.

Let's now start our Kubernetes cluster with microk8s.start
then we will enable dns and registry addons with microk8s.enable dns registry
Please check if those two addons are running with microk8s.kubectl get all --all-namespaces
If the image pod is not behaving properly you can inspect it with microk8s.kubectl describe pod/name_of_registry_pod

The following also helps to get the registry running:
disable the registry addon with microk8s.disable registry
Make sure that you have at least 20 GB of free disk space
comment out the IPv6 entry (::1) inside the hosts file
sudo ufw allow in on cbr0 && sudo ufw allow out on cbr0
sudo ufw default allow routed
sudo iptables -P FORWARD ACCEPT

Then re-enable the registry addon.

Let's now build our image with Docker and push it into the Kubernetes registry:
docker build . -t localhost:32000/php-app:v1
Please notice that we are prefixing the name of our image with localhost:32000 - because that is where the microk8s registry listens for push/pull operations.
Next, we can check with docker image list to see if the image was built successfully. We can now push the image into the registry:
docker image push localhost:32000/php-app:v1

It is time to run our deployment & service with the newly created image:
php-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-apache
spec:
  selector:
    matchLabels:
      run: php-apache
  replicas: 1
  template:
    metadata:
      labels:
        run: php-apache                                                                                                               
    spec:                                                                                                                             
      containers:
      - name: php-apache
        image: localhost:32000/php-app:v1
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 500m
          requests:
            cpu: 200m

---
apiVersion: v1
kind: Service
metadata:
  name: php-apache
  labels:
    run: php-apache
spec:
  ports:
  - port: 80
  selector:
    run: php-apache

Note the name of the deployment: php-apache, the image: localhost:32000/php-app:v1, and port 80, which we expose from the created pod replicas. Currently, the deployment runs just 1 pod.
Port 80 is not exposed outside of the cluster; we can only reach the service from another container, thanks to the enabled dns addon. Another thing to notice is that we have also created a service with the label run=php-apache; this service will route traffic only to pods that carry the same label.
We are ready to apply the deployment and the service via:
microk8s.kubectl apply -f php-deployment.yaml

Let's run the horizontal pod autoscaler:
microk8s.kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10

so we will be spreading the load between 1 and 10 replicas.
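
The same autoscaler could also be declared as a manifest instead of using the imperative command. A minimal sketch, assuming a file name of php-hpa.yaml:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache            # the deployment created above
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50

It would be applied with microk8s.kubectl apply -f php-hpa.yaml and produce the same behavior.
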
microk8s.kubectl get hpa
will show us the information about the autoscaler that we can further follow.

Note: At this point, we will need to enable the metrics-server with: microk8s.enable metrics-server
in order to see actual information about the usage of our pods.

Now we will create a temporary pod that will send multiple requests to the autoscaled php-apache service. At the same time, we will run a shell inside that pod:
microk8s.kubectl run --generator=run-pod/v1 -it --rm load-generator --image=busybox /bin/sh
from inside, we will try to reach our service (which exposes port 80) with:
wget php-apache
This will download the output of index.php (interpreted by PHP and served by Apache) as index.html, which is a sure sign that we have established a connection between the two pods.
Now let's create multiple requests to the hpa autoscaled application:
while true; do wget -q -O- php-apache; done
(-q -O- just suppresses the output from wget)

If we watch microk8s.kubectl get hpa we will see the hpa in action by increasing and reducing the number of replicas based on the load.
We can also delete the hpa by knowing its name: microk8s.kubectl delete horizontalpodautoscaler.autoscaling/your_hpa_name

Note: if you delete the autoscaler, you will need to manually re-adjust the number of replicas with:
microk8s.kubectl scale deployment/php-apache --replicas=1

Congratulations!

Thursday, March 19, 2020

Kubernetes - Deployments, Services & Registry on ubuntu with microk8s

We will get to know deployments, services, and the registry inside Kubernetes with the help of microk8s. Part of my Kubernetes course. First, enable the registry and dns addons:
We need the registry because it will store our newly created application image,
and the dns addon ensures that the pods and services inside of the cluster can communicate effectively.
microk8s.enable registry dns
then check with microk8s.status to see if they are enabled
and: microk8s.kubectl get all --all-namespaces
to see if the pods are running
We can also check all the messages emitted during the creation of the registry pod. Take the specific pod id of the registry from the previous command and type: microk8s.kubectl -n container-registry describe pod registry-xxxxxxx-xxxx

in case of problems:
1) edit your /etc/hosts file and comment out the ipv6 entry: #::1 ip6-localhost ip6-loopback
2) check if the service is listening on port 32000 with: sudo lsof -i:32000
get the CLUSTER-IP address of the registry service and access it on port 5000
from the browser try to access the catalog of images offered: http://xxx.xxx.xxx.xxx:5000/v2/_catalog
Note: The built-in container registry is based on the standard Docker registry.

Now it is time to push our docker image inside the registry:
1) List the images: docker image ls
and find our application image there.
Note: In order to use the local Docker/Kubernetes registry, the image has to be tagged as localhost:32000/image-name
for this just get the image_id from docker image ls and use the following command:
docker image tag image_id localhost:32000/image-name:v1
2) Push our application image into the registry
docker push localhost:32000/image-name:v1
3) Check the same inside the browser:
http://xxx.xxx.xxx.xxx:5000/v2/_catalog
and also:
http://xxx.xxx.xxx.xxx:5000/v2/image-name/tags/list

We will now use the image from the registry and, based on it, create a deployment:
node-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-deployment
  labels:
    app: simple-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: simple-api
  template:
    metadata:
      labels:
        app: simple-api
    spec:
      containers:
      - name: simple-api
        image: localhost:32000/image-name:v1
        ports:
        - containerPort: 3000

As you can see, every container will have the name simple-api, and on this basis new pods will be created with the app=simple-api label.
Finally, our deployment (also labeled app=simple-api) will create replicas of the pods that match the label app=simple-api.
We can also see that we are referencing the image localhost:32000/image-name:v1 straight from our registry, as well as exposing port 3000 of our node application so it is accessible outside of the container.

Let's now run the .yaml manifest with microk8s.kubectl apply -f node-deployment.yaml

Note: If you experience problems with the registry just enable these firewall rules:

sudo ufw allow in on cbr0 && sudo ufw allow out on cbr0
sudo ufw default allow routed
sudo iptables -P FORWARD ACCEPT

as well as restart the docker daemon:
sudo systemctl restart docker

Now, when the deployment succeeds, we should have 2 pods created, which we can see with microk8s.kubectl get pods --all-namespaces.
We can enter each of them by using:
microk8s.kubectl exec -it node-deployment-xxxxxxxxxx-xxxx -- /bin/bash
and here we can test the network connectivity as well as see the files.

Ok, let's now expose our beautiful application outside of the deployment pods with the help of a service. Here is its yaml file:
node-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: node
spec:
  type: NodePort
  ports:
  - port: 30000
    targetPort: 3000
  selector:
    app: simple-api

Ok, as you can see, here we are matching all pods created by deployments that have the label app=simple-api!
Then we are using the NodePort type of service, which means that we target port 3000 inside the container and expose it as port 30000 on the service.
Ok, let's apply the service with microk8s.kubectl apply -f node-service.yaml
Let's run: microk8s.kubectl get services and we will see that we've got a CLUSTER-IP assigned from Kubernetes. This is an IP assigned to our service, which we can use together with port 30000. So go ahead and browse: http://your_CLUSTER-IP:30000

Since our cluster node is running on localhost, we also see in the output another, this time randomly assigned, port next to 30000 (for example 30000:34467). We can use this NodePort to access our cluster at http://127.0.0.1:34467
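
If we prefer a fixed, predictable port instead of the randomly assigned one, the NodePort can be pinned in the service manifest. A minimal sketch; the value 30080 is an assumption and must fall within the default NodePort range of 30000-32767:

apiVersion: v1
kind: Service
metadata:
  name: node
spec:
  type: NodePort
  ports:
  - port: 30000
    targetPort: 3000
    nodePort: 30080        # assumed fixed value inside the 30000-32767 range
  selector:
    app: simple-api

The application would then always be reachable at http://127.0.0.1:30080.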

Let's see the endpoints (the several pods) which back and are responsible for our service: microk8s.kubectl get endpoints
We can access these endpoints from our browser.

The benefit we get: multiple pods (endpoints) can be accessed from a single location; we just have to point to it by its name. And this is our service!

Congratulations!
