Saturday, February 29, 2020

Kubernetes - NodeJS application

Let's see how we can create a NodeJS application and place it inside a Docker container. In the next stage, we will manage the application within Kubernetes.
(part of my Kubernetes course)

From the directory of our project, we can take a look at the files with Visual Studio Code. Alright, let's start with the application code:
server_app.js

const express = require('express');

const config = {
  name: 'sample-express-app',
  port: 3000,
  host: '0.0.0.0',
};

const app = express();

app.get('/', (req, res) => {
  res.status(200).send('hello world');
});

app.listen(config.port, config.host, (e) => {
  if (e) {
    throw new Error('Internal Server Error');
  }
  console.log(`${config.name} running on ${config.host}:${config.port}`);
});


The code starts by requiring the Express framework and defining a configuration for our simple Express application, where we specify the host and port it will run on. Afterwards, we create the actual server application so that when its root URL is requested, it simply sends "hello world" back to the user. Finally, we set the application to listen on the host and port specified in our configuration; if everything goes well, it logs to the Node.js console that the application is running on the specified host and port.
We also have a package.json file, whose start script runs the application using the node command and the name of the .js file: server_app.js. We are also requiring the Express framework in the devDependencies section (strictly speaking, Express is a runtime dependency and belongs under dependencies, but since npm install also installs devDependencies by default, the build below still works).
{
  "name": "kuber-node",
  "version": "1.0.0",
  "description": "",
  "main": "server_app.js",
  "scripts": {
    "start": "node server_app.js"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "express": "^4.17.1"
  }
}
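Before containerizing anything, we can sanity-check the application locally (assuming Node.js and npm are installed):

npm install
npm start
curl http://localhost:3000    # should print: hello world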



Now let's take a look at our Dockerfile:
FROM node:slim
WORKDIR /app
COPY ./package.json /app
RUN npm install
COPY . /app
CMD node server_app.js


To keep things lightweight during containerization, we are using a tiny Node.js image variant: slim. Next, we copy our local version of package.json into the /app directory of the image. This way npm reads package.json and installs the packages our application needs, and Docker then copies our application files into the newly built image. The last command starts the application whenever a container is created from the image.
Note that at this stage we don't need to expose any ports, because we will do this later using Kubernetes.

Now let's build an image from this file. For the build process, we will use the current directory (local_app) as the context, in order to include just our application files inside of the newly built image. We are also tagging the image with the v1 tag (note the trailing dot, which specifies the build context):
docker build -t localhost:32000/server-app:v1 .
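If the build succeeds, the newly tagged image should show up in the local image list:

docker image ls localhost:32000/server-app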

Let's not forget the .dockerignore file, where we are ignoring:
.git
Dockerfile
docker-compose
node_modules

This is because we would like the image to build its own node_modules, and not to reuse artifacts left over from local development. Note also that the base image we build upon is itself assembled from already existing layers (pulled from Docker Hub), and our final image just adds our application's layers on top of those.
Let's check if our image works correctly:
docker run -p 3000:3000 localhost:32000/server-app:v1
Here we are creating a container from our image and mapping host port 3000 to container port 3000, so when we go to http://localhost:3000 we should see our application working inside the Docker container.
With docker container ls we can list the running containers and note their IDs, and we can stop one with docker container stop container_id
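Put together, a quick test cycle might look like this (the container ID will differ on your machine):

curl http://localhost:3000            # should print: hello world
docker container ls                   # note the CONTAINER ID
docker container stop container_id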

The next step is to extract the image into an archive. We do this because we will later use the archive to populate the Kubernetes image registry.
docker save localhost:32000/server-app:v1 > myimage.tar
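The result is a plain tar archive containing the image layers and manifest; we can confirm it was written with:

ls -lh myimage.tar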

OK, now it is time to start the Kubernetes cluster:
microk8s.start
and to enable certain addons:
microk8s.enable dns registry
(dns, so that pods and containers can communicate with each other)
(registry is the internal image registry where we will push our Docker image tar archive). To check whether they are properly installed, type: microk8s.status

Checks
We can also check the node, together with all the pods running on it, with:
microk8s.kubectl describe nodes
To check which images the node's container runtime has, we can use microk8s.ctr image list
Then let's import our archived image: microk8s.ctr image import myimage.tar
and check again with microk8s.ctr image list that the newly imported image is listed.
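For example (the grep is just a convenience filter here):

microk8s.ctr image import myimage.tar
microk8s.ctr image list | grep server-app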

Creating the pod
We will use the nodepod.yaml manifest file to take the image from the registry, create a container from it, and place all of this inside a pod:

apiVersion: v1
kind: Pod
metadata:
  name: simple-api-pod
  labels:
    name: simple-api
spec:
  containers:
  - name: simple-api
    image: localhost:32000/server-app:v1


So let's apply the file to our Kubernetes cluster with microk8s.kubectl apply -f nodepod.yaml
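Right after applying the manifest, we can watch the pod come up (the pod name comes from the manifest above; -w keeps watching for status changes):

microk8s.kubectl get pod simple-api-pod
microk8s.kubectl get pods -w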

Monitoring
Now we will observe what is happening inside of the cluster with:
microk8s.kubectl get all --all-namespaces
from the produced output, find the name of your pod, and then use this very useful command to track what is going on inside the pod: microk8s.kubectl describe pod name_of_pod
You can also use the logs command, which outputs all the logs from the creation of the container up to now: microk8s.kubectl logs --v=8 name_of_pod
For more fun we can even enter inside of the container:
microk8s.kubectl exec -it name_of_pod /bin/bash
and from there you can check the network connectivity or install packages.
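For example, since node:slim is Debian-based, a quick in-container check could look like this (curl is usually not preinstalled in slim images):

apt-get update && apt-get install -y curl
curl http://localhost:3000    # should print: hello world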

Seeing the application
Let's now test the application. For this, we will need to expose our pod's internal network to the outside world. We will use simple port forwarding:
microk8s.kubectl port-forward name_of_pod 3000
which forwards local port 3000 to port 3000 inside the pod.
And now we can browse http://localhost:3000 and see our app running inside of Kubernetes.
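Port forwarding only lasts while the command is running. As a sketch of a more durable alternative (not covered further in this post), the pod could be exposed through a NodePort service instead:

microk8s.kubectl expose pod simple-api-pod --type=NodePort --port=3000
microk8s.kubectl get service simple-api-pod   # note the assigned node port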

Congratulations!

Tuesday, February 25, 2020

Linux processes - attaching and inspecting

Inspecting processes is an interesting topic for me in general. Whether with gdb or with command-line tools, let's take a look at how we can inspect what is going on inside a Linux process:

At some point, when you want, you can enjoy the full Ubuntu admin course.


For example, in one terminal tab we can start a process such as ping;
then, in another, we can find its process identifier (PID) with sudo ps -ax,
and based on that information we can attach to the running process using strace: sudo strace -p PID (a nice, more verbose variant for tracking a process's writes: sudo strace -p12345 -s9999 -e write)
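A complete two-terminal session might look like this (the PID will differ on your machine):

ping localhost                        # terminal 1: something to inspect
sudo ps -ax | grep ping               # terminal 2: find the PID in the first column
sudo strace -p PID -s9999 -e write    # attach and show only write() calls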

Another useful application is reptyr, which tries to attach to a running process and transfer its output to the terminal we are currently using:
installation:
apt install reptyr

In order for reptyr to work, you need to widen the scope of ptrace:
# echo 0 > /proc/sys/kernel/yama/ptrace_scope
then, when you have the process ID, you can try the following options to attach to the process:
reptyr PID -T -L
-L enables capturing child processes
-T enables tty stealing

Keep in mind that reptyr just attaches to the process; it does not take ownership of it (i.e., become its parent), so when you close the original parent terminal the captured process will halt. The solution in this case is to disown the process in question, which is done in two steps:
1. The process should run as a job, and a job is associated with a particular terminal (tty). So first we send the process to the background as a job with Ctrl+z followed by bg, or by starting it with & in the first place.
2. Then we can run disown
Alternatively, we can from the start use: nohup command &
(
& will run the command as a child process of the current bash session. When you exit the session, all child processes will be killed.
nohup + &: when the session ends, the parent of the child process is changed to process 1 (init), preserving the child from being killed.
)
3. Now you can capture the process in your terminal using reptyr, and even if you close the original terminal the process will not stop.
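Putting the steps together, one possible sequence looks like this (long_task stands in for your actual command):

long_task                 # started in the original terminal
# press Ctrl+z to suspend it, then:
bg                        # resume it in the background as a job
disown                    # remove it from this shell's job table
# later, from a new terminal:
reptyr PID -T             # adopt the process into the new terminal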

In the second example, let's say you have a download running in one session, it is taking too long, and you have to disconnect and go home. How do you save the situation?
1) Just log in from another session and run the screen command.
2) Inside the screen session: get the download's PID and use reptyr to attach it to the current session.
3) Detach from the screen with Ctrl+a d.
4) Next time, just re-login using ssh and re-attach the session with: screen -Dr
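As a concrete sketch (assuming the download runs under wget; adjust the grep accordingly):

screen                    # start a screen session in the new login
ps -ax | grep wget        # find the download's PID
reptyr PID                # pull the download into this screen session
# detach with Ctrl+a d, go home, then later:
screen -Dr                # re-attach the session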

Hints on screen:
When you run the command, it creates a new screen session/socket. Then you can use Ctrl+a d to detach from the current screen.
To attach to an already existing session, use: screen -Dr
and to re-attach to an already attached screen: screen -x
To delete a screen session, re-attach to it and then press Ctrl+a k, or just type: exit

Congratulations!
