Wednesday, March 25, 2020

Kubernetes in Ubuntu - horizontal pod autoscaler with microk8s

Let's take a look at how to create a horizontal pod autoscaler in Kubernetes. This type of setup is very useful when we want Kubernetes to automatically add and remove pods based on the application's workload.
(part of my Kubernetes course)

Here is the sample PHP application directly from the Kubernetes docs:
  <?php
    $x = 0.0001;
    for ($i = 0; $i <= 1000000; $i++) {
      $x += sqrt($x);
    }
    echo "Operation Complete! v1";
  ?>


What it does: it runs a loop in which the square root of x is calculated and added back to the accumulated value of x, over and over again, a simple way to generate CPU load for the autoscaler to react to.

We also have a Dockerfile for the creation of an image based on the PHP code:
FROM php:7.4.4-apache
ADD index.php /var/www/html/index.php

We use a php-apache image, and on the second line we place our index.php application code inside the image's web root (/var/www/html). This way, when we create a container from the image, the Apache web server will serve the content on port 80.

Let's now start our Kubernetes cluster with microk8s.start
then enable the dns and registry addons with microk8s.enable dns registry
Check that those two addons are running with microk8s.kubectl get all --all-namespaces
If the registry pod is not behaving properly, you can inspect it with microk8s.kubectl describe pod/name_of_registry_pod

The following also helps to get the registry running:
disable the registry addon with microk8s.disable registry
make sure that you have at least 20 GB of free disk space
disable IPv6 inside the hosts file (comment out the ::1 entries)
sudo ufw allow in on cbr0 && sudo ufw allow out on cbr0
sudo ufw default allow routed
sudo iptables -P FORWARD ACCEPT

Then re-enable the registry addon.
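To confirm it comes back healthy, a quick check (assuming the addon's default container-registry namespace):

microk8s.enable registry
microk8s.kubectl get pods -n container-registry
# the registry pod should reach the Running state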

Let's now build our image with Docker and push it into the Kubernetes registry:
docker build . -t localhost:32000/php-app:v1
Please notice that we prefix the name of our image with localhost:32000 because that is where the microk8s registry resides and waits for push/pull operations, exactly on this host:port combination.
Next, we can check with docker image list whether the image was built successfully. We can now push the image into the registry:
docker image push localhost:32000/php-app:v1
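To verify the push worked, we can query the registry's HTTP API (standard Docker registry v2 endpoints) straight from the host:

curl http://localhost:32000/v2/_catalog
# should list php-app among the repositories
curl http://localhost:32000/v2/php-app/tags/list
# should list the v1 tag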

It is time to run our deployment & service with the newly created image:
php-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-apache
spec:
  selector:
    matchLabels:
      run: php-apache
  replicas: 1
  template:
    metadata:
      labels:
        run: php-apache                                                                                                               
    spec:                                                                                                                             
      containers:
      - name: php-apache
        image: localhost:32000/php-app:v1
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 500m
          requests:
            cpu: 200m

---
apiVersion: v1
kind: Service
metadata:
  name: php-apache
  labels:
    run: php-apache
spec:
  ports:
  - port: 80
  selector:
    run: php-apache

Note the name of the deployment: php-apache, the image: localhost:32000/php-app:v1, and port 80, which we expose out of the created pod replicas. Currently, the deployment runs just 1 pod.
Port 80 is not exposed outside of the cluster; we can only reach the service from another container, thanks to the enabled dns addon. Another thing to notice is that we have also created a service with the label run=php-apache; this service will route traffic only to pods carrying the same label.
We are ready to apply the deployment and the service via:
microk8s.kubectl apply -f php-deployment.yaml

Let's run the horizontal pod autoscaler:
microk8s.kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10

so the load will be spread between 1 and 10 replicas.
microk8s.kubectl get hpa
will show us information about the autoscaler that we can follow further.

Note: At this point we will need to enable the metrics-server with: microk8s.enable metrics-server
in order to see actual usage information for our pods.
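Once the metrics-server pod is up (the first samples can take a minute to appear), we can verify it with kubectl's top command:

microk8s.kubectl top pods
# shows CPU and memory usage per pod, the data the hpa bases its decisions on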

Now we will create a temporary pod that will send multiple requests to the newly autoscaled service. At the same time, we will run a shell inside its container:
microk8s.kubectl run --generator=run-pod/v1 -it --rm load-generator --image=busybox /bin/sh
from inside, we will try to reach our service on port 80 with:
wget php-apache
This downloads the index.php output (interpreted by PHP/Apache) as index.html and is a sure sign that we have established a connection between the two pods.
Now let's create multiple requests to the hpa autoscaled application:
while true; do wget -q -O- php-apache; done
(-q suppresses wget's progress output and -O- writes the response to stdout instead of a file)

If we watch microk8s.kubectl get hpa, we will see the hpa in action, increasing and reducing the number of replicas based on the load.
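For example, keep a live view open in a second terminal while the load loop runs (the exact replica counts will vary with your hardware):

watch microk8s.kubectl get hpa
# or use kubectl's built-in watch flag:
microk8s.kubectl get hpa php-apache --watch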
We can also delete the hpa by knowing its name: microk8s.kubectl delete horizontalpodautoscaler.autoscaling/your_hpa_name

Note: if you delete the autoscaler, you will need to manually re-adjust the number of replicas with:
microk8s.kubectl scale deployment/php-apache --replicas=1

Congratulations!

Thursday, March 19, 2020

Kubernetes - Deployments, Services & Registry on ubuntu with microk8s

We will get to know deployments, services, and the registry inside Kubernetes with the help of microk8s. Part of my Kubernetes course. First, enable the registry and dns addons:
We need a registry because it will store our newly created application image.
And a DNS to ensure that the pods and services inside the cluster can communicate effectively.
microk8s.enable registry dns
then check with microk8s.status to see if they are enabled
and: microk8s.kubectl get all --all-namespaces
to see if the pods are running
Also, we can check all the messages emitted during the registry pod's creation. Using the specific pod id of the registry from the previous command, just type microk8s.kubectl -n container-registry describe pod registry-xxxxxxx-xxxx

in case of problems:
1) edit your /etc/hosts file and comment out the ipv6 entry: #::1 ip6-localhost ip6-loopback
2) check whether the service is listening on port 32000 with: sudo lsof -i:32000
3) get the CLUSTER-IP address of the registry service and access it on port 5000:
from the browser, try to access the catalog of images offered: http://xxx.xxx.xxx.xxx:5000/v2/_catalog
Note: The container registry addon is based on the standard Docker registry.

Now it is time to push our docker image inside the registry:
1) List the images: docker image ls
and find our application image there.
Note: In order to be able to use the local Docker/Kubernetes registry, the image has to be tagged as localhost:32000/image-name.
For this, just get the image_id from docker image ls and use the following command:
docker image tag image_id localhost:32000/image-name:v1
2) Push our application image into the registry:
docker push localhost:32000/image-name:v1
3) Check inside the browser the same:
http://xxx.xxx.xxx.xxx:5000/v2/_catalog
and also:
http://xxx.xxx.xxx.xxx:5000/v2/image-name/tags/list
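Instead of the browser, we can script the same checks with curl by looking up the registry's CLUSTER-IP via kubectl (assuming the addon's default service name registry in the container-registry namespace):

REGISTRY_IP=$(microk8s.kubectl get service registry -n container-registry -o jsonpath='{.spec.clusterIP}')
curl "http://${REGISTRY_IP}:5000/v2/_catalog"
curl "http://${REGISTRY_IP}:5000/v2/image-name/tags/list"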

We will now use the image from the registry and based on this image will create a deployment:
node-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-deployment
  labels:
    app: simple-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: simple-api
  template:
    metadata:
      labels:
        app: simple-api
    spec:
      containers:
      - name: simple-api
        image: localhost:32000/image-name:v1
        ports:
        - containerPort: 3000

As you can see, every container will have the name simple-api, and on this base new pods will be created with the app=simple-api label.
Finally, our deployment (also labeled app=simple-api) will create replicas of the pods that match the label app=simple-api.
We can also see that we are referencing the image localhost:32000/image-name:v1 straight from our registry, as well as exposing port 3000 of our node application so it is accessible outside of the container.

Let's now run the .yaml manifest with microk8s.kubectl apply -f node-deployment.yaml
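To confirm the rollout went through, two quick checks:

microk8s.kubectl rollout status deployment/node-deployment
microk8s.kubectl get pods -l app=simple-api
# both replicas should show a Running status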

Note: If you experience problems with the registry just enable these firewall rules:

sudo ufw allow in on cbr0 && sudo ufw allow out on cbr0
sudo ufw default allow routed
sudo iptables -P FORWARD ACCEPT

as well as restart the docker daemon:
sudo systemctl restart docker

Now, when the deployment is successful, we should have 2 pods created, which we can see with microk8s.kubectl get pods --all-namespaces.
We can now enter inside each of them by using:
microk8s.kubectl exec -it node-deployment-xxxxxxxxxx-xxxx -- /bin/bash
and here we can test the network connectivity as well as see the files.
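For example, a sketch (node:slim ships without curl or wget, so we lean on node itself to probe the app):

microk8s.kubectl exec -it node-deployment-xxxxxxxxxx-xxxx -- /bin/bash
# inside the container:
node -e "require('http').get('http://localhost:3000', r => console.log(r.statusCode))"
# a 200 status means the app answers on port 3000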

Ok, let's now expose our beautiful application outside of the deployment pods with the help of a service. Here is its yaml file:
node-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: node
spec:
  type: NodePort
  ports:
  - port: 30000
    targetPort: 3000
  selector:
    app: simple-api

Ok, as you can see, here we are matching all pods created by deployments having the label app=simple-api!
Then we are using the NodePort type of service, which means that we target port 3000 inside the container and expose it outside as port 30000 (on the service).
Ok, let's apply the service with microk8s.kubectl apply -f node-service.yaml
Let's now run microk8s.kubectl get services and we see that we've got a CLUSTER-IP assigned by Kubernetes - this is an IP assigned to our service, which we can use with port 30000. So go ahead and browse: http://your_CLUSTER-IP:30000

Since our cluster node is running on localhost, we also see in the output another, this time randomly assigned, port next to 30000 (shown as 30000:34467). We can use this NodePort to access our cluster with http://127.0.0.1:34467
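A quick check from the terminal (substitute the NodePort from your own output):

curl http://127.0.0.1:34467
# should return the app's response from one of the two replicas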

Let's see our endpoints (the several pods) which back and are responsible for our service: microk8s.kubectl get endpoints
We can access these endpoints from our browser.

The benefit we get: multiple pods (endpoints) can be accessed from a single location; we just have to know its name and point to it. And this is our service!

Congratulations!

Saturday, February 29, 2020

Kubernetes - NodeJS application

Let's see how we can create a NodeJS application and place it inside a Docker container. In the next stage, we will manage the application within Kubernetes.
(part of my Kubernetes course)

From the directory of our project, we can take a look at the files with Visual Studio Code. Alright, let's start with the application code:
server_app.js

const express = require('express');

const config = {
  name: 'sample-express-app',
  port: 3000,
  host: '0.0.0.0',
};

const app = express();

app.get('/', (req, res) => {
  res.status(200).send('hello world');
});

app.listen(config.port, config.host, (e) => {
  if (e) {
    throw new Error('Internal Server Error');
  }
  console.log(`${config.name} running on ${config.host}:${config.port}`);
});


The code starts by loading the Express framework, followed by a configuration object for our simple Express application, where we specify on which host and port it will run. Afterwards, we create the actual server application so that when its root URL is requested, it just sends "hello world" as output to the user. We set up the application to listen on the host and port specified inside our configuration, and if everything runs well, it logs to the Node.js console that the application is running on the specified host and port.
We also have a package.json file, where the application starts using the node command and the name of the .js file to run: server_app.js. Also, we are requiring the Express framework in the dev dependencies section. (A quick local sanity check follows the listing below.)
{
  "name": "kuber-node",
  "version": "1.0.0",
  "description": "",
  "main": "server_app.js",
  "scripts": {
    "start": "node server_app.js"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "express": "^4.17.1"
  }
}
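Before containerizing, a quick local sanity check of the app (assuming node and npm are installed):

npm install
npm start &
sleep 1 && curl http://localhost:3000   # should print: hello world
kill %1                                 # stop the background server again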



Now let's take a look at our Dockerfile
FROM node:slim
WORKDIR /app
COPY ./package.json /app
RUN npm install
COPY . /app
CMD node server_app.js


To keep things lightweight in the containerization process, we are using a tiny image variant of Node.js: slim. Next, inside the /app directory of the image, we copy our local version of package.json. This way npm will read package.json and install the packages needed for the functioning of our application, and Docker will then copy the resulting application files inside the newly created image. The last command just starts the application.
Note that at this stage we don't need to expose any ports, because we will do this later using Kubernetes.

Now let's build an image from this file. For the build process we use the current directory (local_app) as context, in order to include just our application files inside of the newly built image. We are also tagging the application image with the v1 tag:
docker build . -t localhost:32000/server-app:v1

Let's not forget the .dockerignore file, where we are ignoring:
.git
Dockerfile
docker-compose
node_modules

This is because we would like the image to build its own node_modules, and not reuse local build artifacts. Note also that the build is layered on top of an already existing base image from Docker Hub (node:slim), so the final image contains just that base plus the parts our application needs.
Let's check if our image works correctly:
docker run -p 3000:3000 localhost:32000/server-app:v1
Here we create a container from our image and map internal to external ports (3000 to 3000), so when we go to http://localhost:3000 we should see our application working within the Docker container.
With docker container ls we list the containers, noting their IDs, and we can stop one with docker container stop container_id

The next step is to export the image into an archive. We do this because we will later use the archive to populate the Kubernetes image registry.
docker save localhost:32000/server-app:v1 > myimage.tar

Ok now is the time to start the Kubernetes cluster:
microk8s.start
and to enable certain addons:
microk8s.enable dns registry
(DNS, in order to communicate between pods and containers)
(registry: the internal image registry where we will push our Docker image tar archive). To check if they are properly installed, type: microk8s.status

Checks
We can also check all the running pods and services inside of our node with:
microk8s.kubectl describe nodes
To check what the container registry has inside, we can use microk8s.ctr image list
Then let's import our archived image into the registry: microk8s.ctr image import myimage.tar
and check again whether the newly imported image is listed by microk8s.ctr image list
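For example, filtering the listing for our image name:

microk8s.ctr image list | grep server-app
# localhost:32000/server-app:v1 should appear in the output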

Creating the pod
We will use the nodepod.yaml file manifest in order to pull the image from the registry, create a container from it, and place all this inside of a pod:

apiVersion: v1
kind: Pod
metadata:
  name: simple-api-pod
  labels:
    name: simple-api
spec:
  containers:
  - name: simple-api
    image: localhost:32000/server-app:v1


so let's apply the file to our Kubernetes cluster with microk8s.kubectl apply -f nodepod.yaml

Monitoring
Now we will observe what is happening inside of the cluster with:
microk8s.kubectl get all --all-namespaces
from the produced output, find the id of your pod and just type this very useful command in order to track what is going on inside of the pod: microk8s.kubectl describe pod name_of_pod
You can also use the logs command, which will output all the logs from the creation of the container up to now: microk8s.kubectl logs --v=8 name_of_pod
For more fun we can even enter inside of the container:
microk8s.kubectl exec -it name_of_pod -- /bin/bash
and from there you can check the network connectivity or install packages.

Seeing the application
Let's now test the application. For this, we will need to expose the pod's protected network to the outside world. We will use simple port forwarding:
microk8s.kubectl port-forward name_of_pod 3000
which means that we are exposing the pod's internal port 3000 outside.
And now we can browse http://localhost:3000 and see our app running inside of Kubernetes.
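With the port-forward running in one terminal, we can test from another:

curl http://localhost:3000
# should print: hello world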

Congratulations!

Tuesday, February 25, 2020

Linux processes - attaching and inspecting

Inspecting processes is an interesting topic for me in general. Whether with gdb or with command-line tools, let's take a look at how we can inspect what is going on inside a Linux process:

Whenever you want, you can enjoy the full Ubuntu admin course.


For example, in one terminal tab we can start a process such as ping,
then in another we can see its process identifier (PID) with sudo ps -ax
and based on that information we can attach to the running process using strace: sudo strace -p PID (a nice, more verbose variant for process tracking is: sudo strace -p12345 -s9999 -e write)
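A condensed sketch of the whole flow (the ping target is just an example):

ping 8.8.8.8 &                                       # start a long-running process
pgrep -a ping                                        # find its PID
sudo strace -p "$(pgrep -n ping)" -s9999 -e write    # attach and trace its write() calls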

Another useful application is reptyr, which tries to attach to a running process and transfer its output to the current terminal we are using:
installation:
apt install reptyr

in order for reptyr to work, you need to expand the scope of ptrace:
# echo 0 > /proc/sys/kernel/yama/ptrace_scope
then, when you have the process ID, you may try the following options to attach to a process:
reptyr PID -T -L
-L enables capturing child processes
-T is for tty stealing

Keep in mind reptyr just attaches to the process and does not take ownership of it (i.e. become its parent), so when you close the original parent terminal, the captured process will halt. The solution in this case is to disown the process in question, which is done in two steps:
1. the process should be listed as a task, and a task is associated with a particular terminal (tty). So first we run the process as a task with: bg, Ctrl+z, or &.
2. then we can run disown
Alternatively, we can in the first place use: nohup command_name &
(
& will run the command as a child process to the current bash session. When you exit the session, all child processes will be killed.
nohup + &: when the session ends, the parent of the child process is changed to 1 (the "init" process), thus preserving the child from being killed.
)
3. Now you can capture the process to your terminal using reptyr, and even if you close the original terminal the process will not stop (a condensed sketch of the flow follows).
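A minimal sketch, using a hypothetical long download:

wget http://example.com/big.iso    # hypothetical long-running download
# press Ctrl+z to suspend it, then:
bg         # resume it in the background as a task
disown     # remove it from this terminal's job list
# or, from the start:
nohup wget http://example.com/big.iso &
# later, from another terminal, capture it:
reptyr "$(pgrep -n wget)"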

In the second example, let's say you have a download running in one session, it is taking too long, and you have to disconnect and go home. How do you save the situation? (A condensed sketch follows the steps.)
1) Just login from another session and run the screen command.
2) From the second session: get the download PID, and use reptyr to attach it to the current session.
3) Detach screen with ctrl+a+d or just type exit
4) Next time, just re-login using ssh and make the session active (attached) with: screen -Dr
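Condensed, the rescue looks like this (finding the download via pgrep is just one way to get its PID):

# from the second ssh session:
screen                         # start a screen session
reptyr "$(pgrep -n wget)"      # pull the running download into it
# detach with Ctrl+a d (or type exit), go home, then later:
screen -Dr                     # re-attach and check on the download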

Hints on screen:
When you run the command, it creates a new screen session/socket. Then you can use Ctrl+a+d to detach from the current screen
to attach to already existing session use: screen -Dr
and to re-attach to already attached screen: screen -x
To delete screen session, you need to reattach and then Ctrl+a+k / or just type: exit

Congratulations!

Friday, February 07, 2020

Docker, XDebug and VSCODE in Ubuntu

Installing XDebug under VSCode is not a very easy task, especially in a Docker environment. Here is my way of doing it. References: Docker for web developers course.



You can learn more about Docker for web development in the following course.

Here is the sample Dockerfile for creating an Apache/PHP host, enabling and then configuring xdebug:

FROM php:7.4.2-apache
RUN pecl install xdebug && docker-php-ext-enable xdebug \
# not yet in linux: xdebug.remote_host = host.docker.internal \n\
&& echo "\n\
xdebug.remote_host = 172.19.0.1 \n\
xdebug.default_enable = 1 \n\
xdebug.remote_autostart = 1 \n\
xdebug.remote_connect_back = 0 \n\
xdebug.remote_enable = 1 \n\
xdebug.remote_handler = dbgp \n\
xdebug.remote_port = 9000 \n\
xdebug.remote_log = /var/www/html/xdebug.log \n\
" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini

here is the service, allowing port 8000 to be accessed from outside:
docker-compose.yaml
version: "3.7"
services:
  apache_with_php:
    build: .
    volumes:
      - ./:/var/www/html/
    ports:
      - "8000:80"

and here is the configuration for debug (F5) inside of VSCode:
.vscode/launch.json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Listen for XDebug",
      "type": "php",
      "request": "launch",
      "port": 9000,
      "pathMappings": {
        "/var/www/html/": "${workspaceFolder}"
      }
    }
  ]
}

Don't forget the most important thing: to allow incoming connections to port 9000 on the host (where VSCode listens for xdebug), so that xdebug inside the container can connect back to this port:

sudo ufw allow in from 172.19.0.0/16 to any port 9000 comment xdebug

Then we just build and run the container
/usr/bin/docker-compose up

For debugging purposes we can enter inside the container using a bash shell:
docker exec -it php-debug_apache_with_php_1 /bin/bash
In the xdebug.log file we can find useful information about the initialization of xdebug: cat xdebug.log
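A quick way to confirm the extension is actually loaded (php -m lists the enabled PHP modules):

docker exec php-debug_apache_with_php_1 php -m | grep -i xdebug
# should print: xdebug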

then just browse: http://127.0.0.1:8000/index.php, place some breakpoint inside the index.php file, press F5 and refresh the web page to see the debugger (xdebug) running.

Congratulations and enjoy the course!

Wednesday, February 05, 2020

How to restore a deleted Ubuntu kernel / python

Rule no. 1: always keep at least 2 working kernels installed on your system!
For more information on the Ubuntu server administration, I would recommend taking the Practical Ubuntu Linux Server for beginners course.



This is a very painful exercise, but if you are careful during the steps, it will be beneficial.
First, boot into a liveCD or liveUSB and ensure you have networking.
Mount the partitions:
here /dev/sda2 is the partition of my hard drive where the kernel is broken.

you can check yours by typing: lsblk
sudo mount /dev/sda2 /mnt
sudo mount --bind /run /mnt/run
sudo mount --bind /dev /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys

enter the hard drive environment:
sudo chroot /mnt

mount -t devpts none /dev/pts
from now on you are working back as if you have booted from your hard drive.

Then run and wait for it to finish: apt update && apt dist-upgrade

Note:

if in this step you experience problems such as:

dpkg: error processing archive /var/cache/apt/archives/gcc-10-base_10.2.0-13ubuntu1_i386.deb (--unpack):
 trying to overwrite shared '/usr/share/doc/gcc-10-base/changelog.Debian.gz', which is different from other instances of package gcc-10-base:i386

just install the package with --force-overwrite from the cache:

sudo dpkg -i --force-overwrite /var/cache/apt/archives/gcc-10-base_10.2.0-13ubuntu1_i386.deb
and re-run: sudo apt dist-upgrade to complete the installation of the packages

(repeat this for every error until apt dist-upgrade completes; it may take some time while installing all the dependent packages that produce errors)

If you are only missing parts of the old system, e.g. having deleted Python libraries, just do sudo apt install ubuntu-desktop, exit the chrooted shell, and reboot the computer.

Now to the kernel restoration process:

We will skip the grub loader update, which may otherwise prevent some of the next installations or removals of packages:

sudo mv /etc/kernel/postrm.d/zz-update-grub /etc/kernel/postrm.d/zz-update-grub.bad
Check what version of the kernel you are running with uname -r
then type dpkg -l | grep linux-image
and find at least one kernel starting with ii -> which means it is correctly installed on your system and can be used to boot from later (see the annotated example after this list).
- if there is no kernel installed at all, go ahead and try to install one:
apt install linux-headers-generic
apt install linux-image-generic
- if you experience problems while installing this kernel, install a custom kernel of your preference. The trick is to have a kernel with ii in front!
- if you have problematic kernels (those marked with rH (half-installed) or other flags such as rc (removed)), you can remove them first with apt purge kernel_version...
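Here is an annotated sketch of what those flags look like (the kernel versions are hypothetical):

dpkg -l | grep linux-image
# ii  linux-image-5.4.0-14-generic  ...  <- 'ii': correctly installed, safe to boot from
# rc  linux-image-5.4.0-12-generic  ...  <- 'rc': removed, only config files remain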
restore the grub loader updates:
sudo mv /etc/kernel/postrm.d/zz-update-grub.bad /etc/kernel/postrm.d/zz-update-grub


lastly, get your new kernel settings inside of the grub loader:
update-grub

Then exit the chrooted shell and reboot.

One last thing: since we are using a USB/LiveCD, update-grub could also detect the OS inside those devices and bloat the whole boot menu.
So after rebooting, just re-create the grub configuration with: sudo grub-mkconfig -o /boot/grub/grub.cfg

Congratulations and happy learning!

Sunday, January 26, 2020

JavaScript form validation with promises and exceptions handling

It is good to know the basics of handling web forms. Client-entered data usually requires initial validation, handled by the browser's JavaScript interpreter. Here is how to approach validating and sending data, as well as parsing the returned server response, with the help of built-in JavaScript features such as promises and exceptions. You can learn more about them in this JavaScript course.


<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <style>
    #info {
      opacity: 1;
      transition: opacity 5s;
    }

    #info.hidden {
      opacity: 0;
    }
  </style>
</head>

<body>

  <form id="user-input">
    <div class="form-control">
      <label for="username">Username</label>
      <input type="text" id="username" />
    </div>
    <div class="form-control">
      <label for="password">Password</label>
      <input type="password" id="password" />
    </div>
    <button type="submit" id="submit" disabled>Create User</button>
  </form>

  <span id="info"></span>


<script>
  const REQUIRED = 'REQUIRED';
  const MIN_LENGTH = 'MIN_LENGTH';

  function isValid(value, attr, validatorValue) {
    if (attr === REQUIRED) { return value.trim().length > 0; }
    if (attr === MIN_LENGTH) { return value.trim().length > validatorValue; }
  }

  function createUser(userName, userPassword) {
    if (!isValid(userName, REQUIRED) || !isValid(userPassword, MIN_LENGTH, 5)) {
      throw new Error('Wrong username or password.'); // client-side check
    }
    // return the server-side check promise
    return new Promise(function (resolve, reject) {
      // db checking logic (stubbed here: assume success)
      const is_created = true;
      if (is_created) {
        resolve(
          {
            userName: userName,
            createdAt: +new Date,
            message: 'successfully created user'
          }
        )
      }
      else {
        reject(
          {
            message: 'not valid user or password'
          }
        );
      }
    });
  }

  function displayInfo(info) {
    const info_el = document.querySelector('#info');
    info_el.innerHTML = JSON.stringify(info);
    info_el.classList.toggle('hidden');
  }

  function signupHandler(event) {
    event.preventDefault();
    try {
      createUser(enteredUsername.value, enteredPassword.value)
        .then( // return from the server-side promise
          (info) => { displayInfo(info); },
          (error) => { displayInfo(error); }
        );
    } catch (err) { displayInfo(err.message); } // return from client-side error (throw)
    // .message is auto generated from the throw error object
  }

  const form = document.querySelector('#user-input');
  form.addEventListener('submit', signupHandler);

  // access the form controls
  const submitBtn = form.querySelector('#submit');
  const enteredPassword = form.querySelector('#password');

  const enteredUsername = form.querySelector('#username');
  enteredUsername.addEventListener('input', () => {
    // this.value points to window object, because => preserve the parent context
    submitBtn.disabled = !isValid(enteredUsername.value, REQUIRED);
  });

</script>

</body>
</html>

Congratulations and enjoy learning JavaScript!
