
Tuesday, July 28, 2020

Web app deployment inside of Kubernetes with microk8s

Based on the Kubernetes course.
1) install microk8s: sudo snap install microk8s
2) enable registry & dns: microk8s.enable registry dns

MONGODB deployment & service
3) configure the mongodb deployment
generate 2 secrets using md5sum from the shell and set them as the MongoDB environment variables (a shell sketch is given at the end of this section):
MONGO_INITDB_ROOT_USERNAME=--insert_here_encrypted_username--
MONGO_INITDB_ROOT_PASSWORD=--insert_here_encrypted_password--
MONGO_INITDB_DATABASE=admin

4) apply the MongoDB database deployment and service
microk8s.kubectl apply -f mongodb-deployment.yaml
5) check the environment variables inside the container
5.1) enter inside the deployment:
microk8s.kubectl exec -it deployment.apps/mongodb-deployment -- sh
5.2) env
6.1) get inside the mongodb container:
from Docker: docker exec -it mongo bash
from Kubernetes: microk8s.kubectl exec -it mongodb-deployment--insert_your_deployment_id -- /bin/sh
6.2) authenticate to the mongodb database container:
mongo -u insert_here_encrypted_username -p insert_here_encrypted_password --authenticationDatabase admin
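Going back to step 3: the manifest itself is not shown in the post, but a rough sketch of generating those credentials from the shell and storing them in a Kubernetes secret might look like this (the secret name mongodb-secret and the plain-text values are placeholders, not values from the course):
# hash the plain-text credentials with md5sum
echo -n 'my_user' | md5sum | awk '{print $1}'   # -> use as the encrypted username
echo -n 'my_pass' | md5sum | awk '{print $1}'   # -> use as the encrypted password
# store them in a Kubernetes secret that mongodb-deployment.yaml could reference
microk8s.kubectl create secret generic mongodb-secret \
  --from-literal=mongo-root-username=insert_here_encrypted_username \
  --from-literal=mongo-root-password=insert_here_encrypted_password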


Our application deployment & service
7) build the docker image of our application:
docker build . -t localhost:32000/mongo-app:v1
8) test the image using port forwarding:
docker run -p 3000:3000 localhost:32000/mongo-app:v1
or: docker run  -it --rm -p 3000:3000 localhost:32000/mongo-app:v1
9) push the image into the kubernetes registry
docker push localhost:32000/mongo-app:v1
10) apply our custom application: microk8s.kubectl apply -f mongo.yaml
11) check whether the IP addresses of the service endpoints match the IP addresses of the created pods. This means that the service endpoints are correctly set and point to the created pods:
microk8s.kubectl describe service
microk8s.kubectl get pod -o wide


Congratulations!

Friday, July 17, 2020

Permissions inside and outside of Docker containers

References: Docker for web developers course.


1) In Dockerfile, when building a container:
Inside the Dockerfile we can fix the container directory permissions: chown -R www-data:www-data /var/lib/nginx -> in order to let nginx function properly

volume & non-empty container dir -> the existing files are copied from the container dir into the (empty) volume
bind mount & non-empty container dir -> nothing is copied; the host directory's files are used and hide whatever the container dir contains

2) In docker-compose.yml

- volumes (volume:/var/lib/mysql) inherit the permissions and ownership from the user who created the image - usually root.

- bind mounts (/my/own/datadir:/var/lib/mysql) - the permissions and ownership are the same as the directory on your host.

Even if the Dockerfile has USER node, or docker-compose specifies user: "node:node", the local directory will be mounted preserving its UID:GID in the container, ignoring the USER directive.

Special case: a bind mount where the UID inside the container != the UID on the host:
The solution is to change the ownership of the local dir, before building the container and creating the bind mount, to the same user/group: chown -R www-data:www-data /var/lib/nginx
There is a catch: if the local UID still differs from the container UID, we will have mismatched permissions. We can solve this problem using UID/GID synchronization:
// optional
Check which user the dockerhub image runs the container as: look for the USER directive, or run inside the container:
id -u
Check which group the container user belongs to (and find its UID):
cat /etc/passwd | grep nevyan
id, groups, grep nevyan /etc/group
// end optional

1) Check which user runs the server inside the container
ps aux | grep apache (or the relevant server name)
2) Having the proper UID:GID, we again use chown, but this time not with user/group names but with numeric UID:GIDs

MySQL example: by default the MySQL image uses a non-root user with uid=1001. If we try to bind mount a local /var/lib/mysql (a MySQL data directory not owned by UID 1001) into a non-root docker container, this will fail, because user 1001 (from the container) needs to perform read/write operations on our local directory.
Solution: change the local directory ownership to the numeric UID/GID expected by the container: sudo chown -R 1001 /my/own/datadir
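A rough sketch of that UID/GID synchronization from the shell (the container name, user name and data directory are just examples):
docker exec mysql8 id mysql                # find the numeric UID of the service user inside the container
ps aux | grep mysql                        # see under which UID the server processes actually run on the host
sudo chown -R 1001 /my/own/datadir         # give that numeric UID ownership of the bind-mounted host directory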

Friday, April 17, 2020

Docker Basics + Security

Here are answers to common questions from the Docker for web developers course.
 

difference between image and build
using image: image_name - docker compose will run a container based on that pre-built image
using build: - docker compose will first build an image based on the Dockerfile found in the path specified after the build: option (or in its context: option), and then run a container based on the resulting image. Alongside build: we can also specify the image: option, which will name and tag the built image. Example:
build: ./
image: webapp:tag
This results in an image named webapp, tagged tag

why do we run apt-get clean or npm cache clean?
The apt cache makes Docker unaware of new apt installs inside the image. If we install packages with apt install, we should immediately (&&) run apt clean afterwards, or use: && rm -rf /var/lib/apt/lists/*. Reason: next time we add a new package to be installed in the container, Docker would otherwise reuse the cached apt layer and fail to detect the change and install the new package version. We can use the docker history command to see the different layers of the image.

optimizing image size: Docker images are structured as a series of additive layers, so cleanup needs to happen in the same RUN command that installed the packages. Otherwise, the deleted files will be gone in the latest layer, but will still be present in the previous layers.
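For example, a minimal sketch of installing and cleaning up in the same layer (curl is just an example package):
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*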

why do we copy package.json from our host directory to the container?
We first COPY the dependency lists (package.json, composer.json, requirements.txt, etc.) to the container in order for Docker to cache the results of the npm install that follows. This way when changing other parts of the container configuration and re-building it, docker will not rebuild all the dependencies, but use the cached ones. At the same time, if we change a line inside the dependencies list file, all the dependencies will be re-installed, because they now form another different cached layer inside of docker.

Then why do we copy just package.json and not all the source files of the project, saving them in one docker layer? Because if we made a change to just one of our source files, it would bust the docker cache, and even though the required packages had not changed, they would have to be re-installed (npm/composer install).
For this reason we:
1) copy the dependency list
2) install dependencies so they will be cached
3) copy our source files
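A minimal Dockerfile sketch of this three-step ordering for a Node project (the file names, e.g. server.js, are placeholders):
FROM node:lts
WORKDIR /app
# 1) copy only the dependency list
COPY package*.json ./
# 2) install dependencies - this layer stays cached until package.json changes
RUN npm install
# 3) copy the rest of the source files
COPY . .
CMD ["node", "server.js"]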


combining commands
We can combine multiple RUN and COPY commands into one: this will create only one layer, which will be cached for later lookup. Also, instead of ADD we can use COPY to transfer files from image to image (e.g. between build stages).

multiple builds
For having development, build and test stages we can use the build target option in the compose file, like:
target: dev
Then we can build a specific stage with: docker build -t app:prod --target prod .
This will build just the stage named prod from the Dockerfile and tag the resulting image as app:prod.
The same can be done for a development environment:
docker build -t app:dev --target dev .

mounts
- a named volume is created and managed by Docker and is suitable for storing persistent information for the container, such as database data.
- a bind mount (pointing outside of the container) is used for information residing on our local machine. When is it good to use bind mounts? - they allow us not to copy our source code into the container, but to use the local code files directly, e.g. for local development.

version: "3.8"
services:
  web:
    image: nginx:alpine
    volumes:
      - type: volume # named volume
        source: dbdata # link to created volume inside of container
        target: /data # inside of container
      - type: bind # bind mount
        source: ./static # local directory
        target: /app/static # inside of container
volumes:
  dbdata: # create volume inside of container
 
Note: anonymous volumes
They are the same as named volumes, but don't have a specified name.
During the build phase the container's directories are created and populated.
In the run phase a bind mount is mounted over the corresponding container path: the bind will place the local directory over the container's named/anonymous volume path, hiding its contents. In such cases anonymous volumes can be used to preserve certain container sub-directories from being overwritten at runtime by host directories:
volumes:
      - '.:/app' # bind mount - copy the local host dir into container at runtime
      - '/app/node_modules' # anonymous volume - preserve container built /node_modules at runtime


node_modules
Why would we like /node_modules to be rebuilt inside the container, and not copied directly from our host? Because the container libraries might be based on a different image distribution than our host. For example, if we run a project on Windows and create a container for it based on a Linux distribution image, the contents of /node_modules might not be the same for Linux and Windows. The solution in those cases is to place /node_modules inside the .dockerignore file. This way the libraries inside /node_modules will be rebuilt from scratch inside the container, and they will get their own proper versions, based on the Linux image, which can differ from the host installation.
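For reference, a minimal .dockerignore keeping the host's node_modules (and other build artifacts) out of the build context could look like:
node_modules
.git
*.log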

environment variables
In the docker-compose file, outside of the build phase, we can use pre-made published images and pass variables to the container using the environment: section. The second benefit of this technique is that there is no need to rebuild the container: we just change the variables and restart the container for it to pick up the changes. During the build phase, the image receives external variables through ARGs.

Example 1
docker-compose:
version: '3'
services:
  service1:
    build: # note: we are in build phase
      context: ./
      args:
        USER_VAR: 'USER1' # set up the USER_VAR variable
# note: if there is already a USER_VAR inside the alpine image (used in the Dockerfile)
# it will override our USER_VAR and its value will be shown instead

Dockerfile:
FROM alpine
# note accessing the USER_VAR after the FROM line !
ARG USER_VAR # access the docker-compose set USER_VAR
RUN echo "variable is: $USER_VAR" # echo on screen

Example 2
.env:
ENV_USER_VAR=USER1
docker-compose:
version: '3'
services:
  service1:
    build: # note: we are in build phase
      context: ./
      args:
        USER_VAR: ${ENV_USER_VAR} # set up the USER_VAR variable from the .env file

Dockerfile:
FROM alpine
ARG USER_VAR # access the docker-compose set USER_VAR
RUN echo "variable is: $USER_VAR" # echo on screen
 
Example secrets:

Optionally we can create named secrets from .txt files:
docker secret create mysql_root_password ./db_root_password.txt
docker secret create db_password ./db_password.txt 
docker secret ls
  
version: '3.1'

services:
   db:
     image: mysql:8
     volumes:
       - db_data:/var/lib/mysql # using a persistent volume inside the container
     environment:
       MYSQL_ROOT_PASSWORD_FILE: /run/secrets/mysql_root_password
       MYSQL_DATABASE: wordpress
       MYSQL_USER: wordpress
# read the password from memory and set the container environment variable 
       MYSQL_PASSWORD_FILE: /run/secrets/db_password 
     secrets:
       - mysql_root_password # enable access to the in-memory secrets 
       - db_password # enable access to the in-memory secrets

secrets:
   db_password:
#  Docker mounts the db_password.txt file under /run/secrets/db_password  
     file: db_password.txt #read the password from db_password.txt file in-memory filesystem
# note: if a container stops running, the secrets shared to it are
# unmounted from the in-memory filesystem and flushed from the node's memory.
 
   mysql_root_password:
     file: mysql_root_password.txt

volumes:
    db_data: # creating a persistent volume inside the container


non-root environment
Keep in mind that the Docker daemon starts with full root privileges in order to create networking, work with namespaces, open ports, etc.
Then, for each created service/container, the processes run with the service's UID, which is also visible outside of the container. This way worker/service UIDs inside of the container are mapped to non-root UIDs on the host.
The special UID 0 in the container can perform privileged operations inside the container. This means that if a container gets compromised and an attacker gains the root account inside the container, this is effectively equal to the host root account. So it is good to use a non-root account for the following reasons:
- a non-root cannot read or write to system files, create users, read memory secrets, etc.
- memory secrets could be only read by the user who created them.

web servers
Some software (Nginx, Apache) already has one master process running with maximum privileges (root) for administrative purposes, and worker processes for running user applications (web sites) with non-root privileges.
In the same way, applications developed in Node.js, Angular or Express run, as Linux processes, with the privileges of the calling user.

The Apache web server has one master process, owned by root,
which spawns child processes (workers) for serving web pages, configured to run as user 'www-data':
ps -aef --forest|grep apache2
root  /usr/sbin/apache2 -k start
www-data  /usr/sbin/apache2 -k start
Keep in mind that when running Apache as a non-root user (www-data), Apache will not be allowed to open the default port 80, because port 80, like all ports below 1024, can by default only be bound by root in Unix environments. So you'll need to open a port that is greater than 1024.

dockerhub images
One must note that the predefined official images from dockerhub use root permissions for their installation process. In a container context, valid usage of running commands with root privileges is when we would like to perform system administration activities such as:
- run npm for updating the npm version: RUN npm i npm@latest -g
- install software inside the image with apt and other package managers
- copy files from outside to the inside of the container
- create and set up a 'non-root' user
- set correct permissions for application project directories such as /var/www/html/app etc. using chown and chmod
- setup/change webserver configuration
Note: following the above-described process, when the official image installation completes (unless specified otherwise, e.g. via the USER directive in the Dockerfile or user: in docker-compose), the created container/service ends up running with root permissions.

In such cases, in order to create a non-root environment, we can divide the container configuration into 2 phases:
1) build-time dependencies:
prepare the project's main installation directory, set up a local 'non-root' user, and set proper project directory permissions with chown so that our 'non-root' user can access it -> ALL done with root permissions
2) run-time dependencies:
When the system environment is ready we can perform project-specific package installations and customizations. We switch to a 'non-root' user (example: USER node) and install project packages as that 'non-root' user. Example:
USER node
RUN npm install
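A rough Dockerfile sketch of these two phases (the image, paths and file names are assumptions, not from the course):
FROM node:lts
# 1) build-time steps, still running as root
RUN mkdir -p /home/node/app && chown -R node:node /home/node/app
WORKDIR /home/node/app
# 2) run-time steps, switched to the non-root user shipped with the node image
USER node
COPY --chown=node:node package*.json ./
RUN npm install
COPY --chown=node:node . .
CMD ["node", "server.js"]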


web development case
If we would like to develop locally on our host and then use our data inside the container via a bind mount:
1) we can first create a non-privileged user inside our container.
2) Then we need to match our local user UID to the container user UID. Reason: the freshly created container user might receive another UID from the OS, which would not match our local user ID and would prevent us from working correctly with the files.
Solution:
1) We can specify and pass the UID from .env file to the service/container in the docker-compose file
2) Then pass the UID via ARGs from the compose file to the Dockerfile in order to achieve the same UID inside and outside the container.
Details: to specify the user we want a service to run as, in docker-compose.yml we can directly set user: "uid:gid", or we can set variables in the .env file (UID=1000, GID=1000) and then use them in docker-compose: user: "${UID}:${GID}"
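A minimal sketch of that wiring (the service name is an assumption):
.env:
UID=1000
GID=1000

docker-compose.yml:
services:
  app:
    build: ./
    user: "${UID}:${GID}"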

more on security: if Apache runs under the www-data group, then the group www-data should be able to read and traverse user directories such as /var/www/html/user_dir and read their files.
So for the directories we set the following permissions: owner: rwx, group: rx (the group can traverse directories, while the developer/owner can also create and update files), and for the files: owner: rw, group: r (the developer reads and writes, Apache interprets PHP, i.e. reads the file). All other users are denied access:
0) set initial ownership of /var/www/html to the current user/developer
sudo chown -R $USER:www-data /var/www/html
 
1) user www-data(apache) can only read files(+r) and directories(+rx)
sudo find /var/www/html -type d -exec chmod g+rx {} +
sudo find /var/www/html -type f -exec chmod g+r {} +

2) the user/developer is able to read and create directories, as well as read/update/write files.
We prevent the user from executing files (such as .php) directly on the host (not on the web). When the .php files are requested over the web, Apache will handle them.
sudo chown -R $USER /var/www/html/
sudo find /var/www/html -type d -exec chmod u+rwx {} +
sudo find /var/www/html -type f -exec chmod u+rw {} +

3) revoke access for other users
 sudo chmod -R o-rwx /var/www/html/

4) set default permissions for newly created files & directories

chmod g+s .
set the group ID (setgid) on the current directory - all newly created files and subdirectories will inherit the directory's group ID, rather than the primary group of the creating user.


use a specific version of the image instead of :latest
It is better to pin a specific version of the image, so the newly created container stays immutable and does not pick up problematic changes when the image jumps between versions, for example from ver. 5 to ver. 7. If we use :latest, we cannot be sure that our code will run correctly on every vendor version. By setting a specific, known version of the source image, we ensure that our configuration/application/service will work on the chosen version.

networks
By default docker-compose automatically creates a network for the containers inside the compose project, and they can access all the listening ports of the other containers by using the service name as a DNS hostname. The default network driver is bridge; if containers span multiple hosts (e.g. in swarm mode), we need an overlay network to connect them together.
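For example, with two services in one compose file, the web container can reach the database simply via the service name "db" (a minimal sketch; the image choices are just examples):
services:
  web:
    image: php:8.1-apache
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example
# from inside the web container the database is reachable as hostname "db", e.g.:
# docker-compose exec web getent hosts db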

'depends_on' gives some control over the order in which containers are created and started.

RUN apt-get update vs RUN [ "apt-get", "update" ]
the 1st (shell form) uses /bin/sh -c to run the command, the 2nd (exec form) does not (useful for images without a shell)
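A trivial side-by-side illustration:
# shell form - executed as: /bin/sh -c "apt-get update"
RUN apt-get update
# exec form - executed directly, without a shell
RUN ["apt-get", "update"]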

Multi-stage builds

PRODUCTION: using local-dev project files and building dependencies inside the container
dockerfile
# 1st stage
FROM composer AS builder
# copy the local app dependency lists into the container's /app directory
COPY composer.* /app/
# build the project dependencies in the container's /vendor folder,
# so that the container builds its own dependencies, excluding dev-dependencies
RUN composer install --no-dev

# 2nd stage: start a new build stage with the php-apache image as its base
FROM php:7.4.5-apache as base
RUN docker-php-ext-install mysqli
# copy our local project files into the container
COPY ./ /var/www/html/
# Note: COPY --from copies just the built artifact from a previous stage into the new stage:
# here we copy the pre-built vendor folder from the composer stage into the container
COPY --from=builder /app/vendor /var/www/html/vendor/

docker-compose.yaml
version: '3.7'
services:
  app:
    build:
      context: .
      target: base # we run the build only up to the base stage
                   # i.e. no need to rebuild the 1st stage of the build (composer install)
    ports:
      - 80:80
    volumes:
      - ./:/var/www/html # bind mount the local development directory inside the container
      - /var/www/html/vendor # preserving the container's built dependencies from being overwritten by the bind mount


DEVELOPMENT: using both local-dev project files and dependencies (we need to manually install dependencies using composer install)
dockerfile
# start a new build stage with the php-apache image as its base
FROM php:7.4.5-apache as base
RUN docker-php-ext-install mysqli
# copy our local project files into the container
COPY ./ /var/www/html/
docker-compose.yaml
version: '3.7'
services:
  app:
   build: .
   ports:
     - 80:80
   volumes:
     - ./:/var/www/html # getting bind mount inside of the container to local development directory


Separating build and runtime dependencies using stages:

# 1st stage - build
FROM node AS build
# working directory (created / bind-mounted via the compose file)
WORKDIR /usr/src/app
COPY package.json .
# install the app package dependencies
RUN npm install
# copy the project code into the container
COPY . ./src

# 2nd stage - serve the generated .js & html files
FROM nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf
COPY --from=build /usr/src/app/build /usr/share/nginx/html

Production vs Development environment

FROM php:7.4-fpm-alpine as base

FROM base as development
# build the development environment

FROM base as production
# copy the generated source files into the container
COPY data /var/www/html

docker-compose.yaml

services:
  php-dev:
    build:
      context: .
      target: development
    ports:
      - "9000:9000"
    volumes:
      - ./:/var/www/html

  php-prod:
    build:
      context: .
      target: production
    ports:
      - "9000:9000"


docker build . -t app-dev --target=development
docker build . -t app-prod --target=production


FAQ:

How to inspect containers:
Here is how to inspect the open ports inside of both MySQL and Apache containers.
1) we need to get the running container's process id:
docker container ls (to get the container_id)
then:
docker inspect -f '{{.State.Pid}}' <container_id>
2) having the container process_id run netstat inside the container namespace:
sudo nsenter -t <container_process_id> -n netstat
which will show us which ports are open for connections from outside world to the container.
If needed, you can also start a temporary shell in the container: docker exec -it <container_id> /bin/bash and try to analyze what is happening, e.g. missing file/directory permissions with ls -la, checking the container logs, etc., just like when you are running the Apache server locally. For example, you can easily check on which port the Apache server is running with: sudo netstat -anp | grep apache2, sudo lsof -i -P | grep apache2, or cat /etc/apache2/ports.conf. Then, having the right port, update your docker container configuration: delete and rebuild the container.

Enable / disable PHP extensions:
It is possible with: RUN docker-php-ext-install name_of_extension
Note: some extensions require additional system libraries to be installed as well. For example, for the zip extension you need to run, in the same RUN command before docker-php-ext-install, apt-get install libzip-dev zlib1g-dev;
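A hedged Dockerfile sketch for the zip extension, using the packages mentioned above:
RUN apt-get update && apt-get install -y libzip-dev zlib1g-dev \
    && docker-php-ext-install zip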
 
 
How to import database from local drive into a mariadb/mysql database:
If the container is already present, execute the following commands: docker exec -i mysql8 mysql -udevuser -pdevpass mysql < db_backup.sql
or docker exec -i mysql8 sh -c 'exec mysql -udevuser  -pdevpass' <  db_backup.sql
Of course, you can also just mount a local database directory (bind mount) to be used within the container with: docker run -v /var/lib/mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=root mysql:8


How to create tables inside of a mysql container?
You can create a sample table with the help of php:
$sql = "CREATE TABLE MyTable (
id INT(6) UNSIGNED AUTO_INCREMENT PRIMARY KEY,
firstname VARCHAR(30) NOT NULL,
email VARCHAR(50),
reg_date TIMESTAMP
)";
if ($conn->query($sql)===TRUE) {   echo "Table MyTable created successfully"; }
 
 
How to copy a local folder from /var/www/html/folder_name to folder html inside a docker container?
COPY folder_name /var/www/html/folder_name # note: the COPY source path must be inside the Docker build context (it is resolved relative to it)
 

How to create a persistent volume and link the container to use it?
dockerfile:
FROM php:7.4-apache
COPY --chown=www-data:www-data  . /var/www/html # we copy the current directory contents into /var/www/html directory inside of the container

docker-compose.yaml
version: '3.8'
services:
  php:
    build: ./  # use the above dockerfile to create the image
    ports:
      - 8080:80
    volumes:
       - type: volume
         source: phpdata
         target: /var/www/html
volumes:
  phpdata:  
 

How can you dockerize a website and then run the containers on another server?
First create a backup of the current container images and their content:
1) commit the changes made so far in the container: docker commit container_id backup_image_name
2) save the image to your local(node) machine: docker save backup_image_name > backup_image.tar
On the second server restore the image via:
1) docker load < backup_image.tar
2) start a container/s using the backup_image
Please note that if you have bind mounts (or volumes that reside outside of the container), you need to back them up manually!
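Put together, the whole sequence might look like this (all names are placeholders):
# on the original server
docker commit container_id backup_image_name
docker save backup_image_name > backup_image.tar
# copy backup_image.tar to the second server, then:
docker load < backup_image.tar
docker run -d backup_image_name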


How to install phpmyadmin?
docker-compose.yml file:
 phpmyadmin:
     image: phpmyadmin/phpmyadmin:latest
     env_file: .env
     environment:
       PMA_HOST: db
       MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD
     ports:
       - 3333:80
.env file:
MYSQL_ROOT_PASSWORD=<the password from the MySQL installation>


How to use local domains to access the container like domain.local?
You can start a NEW container from an image specifying -h (hostname option): docker run -h domain.local

How to forward localhost:8000 to some_domain.com
You can create an Nginx reverse-proxy container, which will expose your service container when browsing the Nginx container at port 80. Let's suppose you have a "web" service defined inside a docker-compose.yaml file.
1) Nginx configuration
default.conf
server {
  listen 80;
  listen [::]:80; # listen for connections on port 80
  server_name web-app.localhost;
  location / {
    proxy_pass http://web:80; # web is the name of the service (container) you would like to expose, 80 is the port the service is listening on
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
}
2) Dockerfile image configuration:
FROM nginx
COPY default.conf /etc/nginx/conf.d/
3) create an entry in hosts file pointing to
127.0.0.1 web-app.localhost
You can now browse: http://web-app.localhost

Congratulations and enjoy the Docker course !

Wednesday, March 25, 2020

Kubernetes in Ubuntu - horizontal pod autoscaler with microk8s

Let's take a look at how to create a horizontal pod autoscaler in Kubernetes. This type of setup is very useful when we want Kubernetes to automatically add and remove pods based on the application's workload.
(part of my Kubernetes course):

Here is the sample PHP application directly from the Kubernetes docs:
<?php
  $x = 0.0001;
  for ($i = 0; $i <= 1000000; $i++) {
    $x += sqrt($x);
  }
  echo "Operation Complete! v1";
?>


What it does: it runs a loop in which the square root of x is calculated, the result is added to the accumulated value of x, and the sum is square-rooted again on the next iteration.

We also have a Dockerfile for the creation of an image based on the PHP code:
FROM php:7.4.4-apache
ADD index.php /var/www/html/index.php

We use a php-apache image and on the second line, we just place our index.php application code inside the image's root directory. This way, when we create a container out of the image the apache web server will start to serve the content when browsed on port 80.

Let's now start our Kubernetes cluster with microk8s.start
then we will enable dns and registry addons with microk8s.enable dns registry
Please check if those two addons are running with microk8s.kubectl get all --all-namespaces
If the registry pod is not behaving properly you can inspect it with microk8s.kubectl describe pod/name_of_registry_pod

The following also helps to get the registry running:
disable the registry addon with microk8s.disable registry
Make sure that you have at least 20 GB of free disk space
comment out the IPv6 entries inside the hosts file
sudo ufw allow in on cbr0 && sudo ufw allow out on cbr0
sudo ufw default allow routed
sudo iptables -P FORWARD ACCEPT

Then re-enable the registry addon.

Let's now build our image with Docker and push it into the Kubernetes registry:
docker build . -t localhost:32000/php-app:v1
Please notice we are beginning the name of our image as localhost:32000 - because that's where microk8s registry resides and waits for push/pull operations exactly on this host:port combination.
Next, we can check with docker image list to see if the image is being built successfully. We can now push the image into the registry:
docker image push localhost:32000/php-app:v1

It is time to run our deployment & service with the newly created image:
php-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-apache
spec:
  selector:
    matchLabels:
      run: php-apache
  replicas: 1
  template:
    metadata:
      labels:
        run: php-apache
    spec:
      containers:
      - name: php-apache
        image: localhost:32000/php-app:v1
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 500m
          requests:
            cpu: 200m

---
apiVersion: v1
kind: Service
metadata:
  name: php-apache
  labels:
    run: php-apache
spec:
  ports:
  - port: 80
  selector:
    run: php-apache

Note the name of the deployment: php-apache, the image: localhost:32000/php-app:v1 and the port 80 which we are exposing out of the created pod replicas. Currently, the deployment is running just 1 pod.
Port 80 is not exposed outside of the cluster; we can only reach our service from another container, thanks to the enabled dns addon. Another thing to notice is that we have also created a service with the label run=php-apache; this service will expose only the pods carrying the same label.
We are ready to apply the deployment and the service via:
microk8s.kubectl apply -f php-deployment.yaml

Let's run the horizontal pod autoscaler:
microk8s.kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10

so we will be spreading the load between 1 and 10 replicas.
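The same autoscaler can also be written as a manifest and applied with kubectl (a sketch using the autoscaling/v1 API):
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50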
microk8s.kubectl get hpa
will show us the information about the autoscaler that we can further follow.

Note: At this point, we will need to enable the metrics-server with: microk8s.enable metrics-server
in order to see actual information about the usage of our pods.

Now we will create a temporary pod that will attempt to send multiple requests to the newly created hpa service. At the same time we will run a shell inside the container:
microk8s.kubectl run --generator=run-pod/v1 -it --rm load-generator --image=busybox /bin/sh
from inside we will try to reach our service, exposing the port 80 with:
wget php-apache
This will download index.php, interpreted by PHP and Apache, as index.html, which is a sure sign that we have established a connection between the two pods.
Now let's create multiple requests to the hpa autoscaled application:
while true; do wget -q -O- php-apache; done
(-q -O- is just to suppress the progress output from wget)

If we watch microk8s.kubectl get hpa we will see the hpa in action by increasing and reducing the number of replicas based on the load.
We can also delete the hpa by knowing its name: microk8s.kubectl delete horizontalpodautoscaler.autoscaling/your_hpa_name

Note: if you delete the autoscaler, you will need to manually re-adjust the number of replicas with:
microk8s.kubectl scale deployment/php-apache --replicas=1

Congratulations!

Thursday, March 19, 2020

Kubernetes - Deployments, Services & Registry on ubuntu with microk8s

We will get to know of deployments, services, and registry inside Kubernetes with the help of microk8s. Part of my Kubernetes course. First, enable the registry and dns addons:
We need a registry because it will store our newly created application image.
And then a DNS to ensure that the pods and services inside of the cluster can communicate effectively.
microk8s.enable registry dns
then check with microk8s.status to see if they are enabled
and: microk8s.kubectl get all --all-namespaces
to see if the pods are running
Also, we can check all the messages emitted during the creation of the registry pod. Using the specific pod id of the registry from the previous command, just type microk8s.kubectl -n container-registry describe pod registry-xxxxxxx-xxxx

in case of problems:
1) edit your /etc/hosts file and comment the ipv6 entry #::1 ip6-localhost ip6-loopback
2) check if the service is listening on port 32000 with: sudo lsof -i:32000
get the CLUSTER-IP address from the registry and access it with port 5000
from the browser try to access the catalog of images offered: http://xxx.xxx.xxx.xxx:5000/v2/_catalog
Note: the container registry is Docker-compatible, so Docker can push to and pull from it.

Now it is time to push our docker image inside the registry:
1) List the images: docker image ls
and find our application image there.
Note: In order to be able to use the local Docker/Kubernetes registry the image has to be with tag: localhost:32000/image-name
for this just get the image_id from docker image ls and use the following command:
docker image tag image_id localhost:32000/image-name:v1
2) Push our application image into the registry
docker push localhost:32000/image-name:v1
3) Check inside the browser the same:
http://xxx.xxx.xxx.xxx:5000/v2/_catalog
and also:
http://xxx.xxx.xxx.xxx:5000/v2/image-name/tags/list

We will now use the image from the registry and based on this image will create a deployment:
node-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-deployment
  labels:
    app: simple-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: simple-api
  template:
    metadata:
      labels:
        app: simple-api
    spec:
      containers:
      - name: simple-api
        image: localhost:32000/image-name:v1
        ports:
        - containerPort: 3000

As you can see, every container will have the name simple-api, and on this basis new pods will be created with the app=simple-api label.
Finally, our deployment (also labeled with app=simple-api) will create replicas of the pods that match the label app=simple-api.
We can also see that we are using/referencing the image localhost:32000/image-name:v1 straight from our registry as well as exposing port 3000 of our node application to be accessible outside of the container.

Let's now run the .yaml manifest with microk8s.kubectl apply -f node-deployment.yaml

Note: If you experience problems with the registry just enable these firewall rules:

sudo ufw allow in on cbr0 && sudo ufw allow out on cbr0
sudo ufw default allow routed
sudo iptables -P FORWARD ACCEPT

as well as restart the docker daemon:
sudo systemctl restart docker

Now, when the deployment is successful, we should have 2 pods created, that we can see from microk8s.kubectl get pods --all-namespaces.
We can now enter inside of each by using:
microk8s.kubectl exec -it node-deployment-xxxxxxxxxx-xxxx -- /bin/bash
and here we can test the network connectivity as well as see the files.

Ok, let's now reveal our beautiful application outside of the deployment pods with the help of a service. And here is its yaml file:
node-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: node
spec:
  type: NodePort
  ports:
  - port: 30000
    targetPort: 3000
  selector:
    app: simple-api

Ok, as you can see here we are matching all pods which are created by deployments having labels of app=simple-api !
Then we are using the NodePort type of service, which means that we target port 3000 inside the container and expose it as port 30000 on the service.
Ok, let's apply the service with microk8s.kubectl apply -f node-service.yaml
Let's run again: microk8s.kubectl get all --all-namespaces and we see that our service has got a CLUSTER-IP assigned by Kubernetes - this is an IP assigned to our service, which we can use with the port 30000. So go ahead and browse: http://your_CLUSTER-IP:30000

Since our cluster node is running on localhost, we also see in the output another, this time randomly assigned, port next to 30000 (e.g. 34467). We can use this NodePort to access our cluster at http://127.0.0.1:34467

Let's see our endpoints (the several pods) which back and are responsible for our service: microk8s.kubectl get endpoints
We can access these endpoints from our browser.

The benefit we get: multiple pods (endpoints) can be accessed from a single location; we just have to know its name and point to it. And this is our service!

Congratulations!

Saturday, February 29, 2020

Kubernetes - NodeJS application

Let's see how we can create a NodeJS application and place it inside a Docker container. In the next stage, we will manage the application within Kubernetes.
(part of my Kubernetes course)

From the directory of our project, we can take a look at the files with Visual Studio Code. Alright, let's start with the application code:
server_app.js

const express = require('express');
const config = {
name: 'sample-express-app',
port: 3000,
host: '0.0.0.0',
};

const app = express();

app.get('/', (req, res) => {
res.status(200).send('hello world');
});

app.listen(config.port, config.host, (e)=> {
if(e) {
throw new Error('Internal Server Error');
}
console.log(`${config.name} running on ${config.host}:${config.port}`);
});


The code starts by using the Express framework, with a configuration for our simple Express application where we specify on which host and port it will run. Afterwards, we create the actual server application so that when its root URL is requested, it just sends "hello world" as output to the user. We set up the application to listen on the host and port specified in our configuration, and if everything runs well it logs to the Node.js console that the application is running on the configured host and port.
We also have a package.json file, where the start script runs the application using the node command and the name of the .js file to run: server_app.js. We also require the Express framework in the devDependencies section.
{
  "name": "kuber-node",
  "version": "1.0.0",
  "description": "",
  "main": "server_app.js",
  "scripts": {
    "start": "node server_app.js"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "express": "^4.17.1"
  }
}



Now let's take a look at our Dockerfile
FROM node:slim
WORKDIR /app
COPY ./package.json /app
RUN npm install
COPY . /app
CMD node server_app.js


To keep things lightweight during containerization, we are using a tiny Node.js image - slim. Next, inside the /app directory of the image, we copy our local version of package.json. This way npm will read package.json and install the packages needed for our application to function, and Docker will copy the resulting project files into the newly created image. The last command just starts the application.
Note that at this stage we don't need to expose any ports, because we will do this later using Kubernetes.

Now let's create/build an image from this file. For the build process, we will be using the current directory (local_app) context in order to include just our application files inside of the newly built image. We are also tagging the application with the v1 tag.
docker build . -t localhost:32000/server-app:v1

Let's not forget the .dockerignore file, where we are ignoring:
.git
Dockerfile
docker-compose
node_modules

This is because we would like the image to build its own node_modules, and not to use artifacts from the host that were only needed during the local build process. Note also that the image is assembled from layers: Docker reuses parts of already existing images (from Docker Hub) that our application needs, and the final image is built on top of those parts.
Let's check if our image works correctly:
docker run -p 3000:3000 localhost:32000/server-app:v1
Here we are creating a container where we load up our image and map internal to external ports: 3000 to 3000 so when we go to http://localhost:3000 we should see our application working within the docker container.
With docker container ls we list containers, noting their id's and we can stop them by using docker container stop container_id

The next step is to extract the image into an archive. We do this because we later will use the archive to populate the Kubernetes image repository.
docker save localhost:32000/server-app:v1 > myimage.tar

Ok now is the time to start the Kubernetes cluster:
microk8s.start
and to enable certain addons:
microk8s.enable dns registry
(DNS in order to communicate between pods and containers)
(the registry is the internal image registry where we will push our docker image tar archive). To check if they are properly installed, type: microk8s.status

Checks
We can also check all the running pods and services inside of our node with:
microk8s.kubectl describe nodes
To check what the container registry has inside we can use microk8s.ctr image list
Then let's import our archived image into the registry: microk8s.ctr image import myimage.tar
and check again with microk8s.ctr image list whether the newly imported image is listed

Creating the pod
We will use the nodepod.yaml file manifest in order to get the image from the registry, to create a container using it and to place all this inside of a pod:

apiVersion: v1
kind: Pod
metadata:
  name: simple-api-pod
  labels:
    name: simple-api
spec:
  containers:
  - name: simple-api
    image: localhost:32000/server-app:v1


so let's apply the file to our Kubernetes cluster with microk8s.kubectl apply -f nodepod.yaml

Monitoring
Now we will observe what is happening inside of the cluster with:
microk8s.kubectl get all --all-namespaces
from the produced output find the id of your pod and just type this very useful command in order to track what is going on inside of the pod: microk8s.kubectl describe pod name_of_pod
You can also use the logs command, which will output all the logs from the creation of the container up to now: microk8s.kubectl logs --v=8 name_of_pod
For more fun we can even enter inside of the container:
microk8s.kubectl exec -it name_of_pod /bin/bash
and from there you can check the network connectivity or install packages.

Seeing the application
Let's now test the application. For this, we will need to expose our pod protected network to the outside world. We will use simple port forwarding:
microk8s.kubectl port-forward name_of_pod 3000
which means that we are exposing internal port 3000 outside.
And now we can browse http://localhost:3000 and see our app running inside of Kubernetes.

Congratulations!

Friday, February 07, 2020

Docker, XDebug and VSCODE in Ubuntu

Installing XDebug under VSCode is not a very easy task, especially under the Docker environment. Here is my way of doing it. References: Docker for web developers course.



You can learn more about Docker for web development in the following course.

Here is the sample Dockerfile for creating an Apache/PHP host, enabling and then configuring xdebug:

FROM php:7.4.2-apache
RUN pecl install xdebug && docker-php-ext-enable xdebug \
# not yet in linux: xdebug.remote_host = host.docker.internal \n\
&& echo "\n\
xdebug.remote_host = 172.19.0.1 \n\
xdebug.default_enable = 1 \n\
xdebug.remote_autostart = 1 \n\
xdebug.remote_connect_back = 0 \n\
xdebug.remote_enable = 1 \n\
xdebug.remote_handler = "dbgp" \n\
xdebug.remote_port = 9000 \n\
xdebug.remote_log = /var/www/html/xdebug.log \n\
" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini

here is the service allowing port 8000 to be accessed from outside:
docker-compose.yaml
version: "3.7"
services:
apache_with_php:
build: .
volumes:
- ./:/var/www/html/
ports:
- "8000:80"

and here is the configuration for debug (F5) inside of VSCode:
.vscode/launch.json
{
"version": "0.2.0",
"configurations": [

{
"name": "Listen for XDebug",
"type": "php",
"request": "launch",
"port": 9000,
"pathMappings": {
"/var/www/html/": "${workspaceFolder}"
},
}
]
}

Don't forget the most important thing: allow incoming connections to port 9000 on the host, where VS Code (the xdebug client) is listening, so that the container can connect back to this port:

sudo ufw allow in from 172.19.0.0/16 to any port 9000 comment xdebug

Then we just build and run the container
/usr/bin/docker-compose up

For debugging purposes we can enter inside the container using bash shell:
docker exec -it php-debug_apache_with_php_1 /bin/bash
in the xdebug.log file we can find useful information for the initialization of xdebug: cat xdebug.log

then just browse: http://127.0.0.1:8000/index.php, place some breakpoint inside the index.php file, press F5 and refresh the web page to see the debugger (xdebug) running.

Congratulations and enjoy the course!

Tuesday, October 22, 2019

Laravel inside Docker as a non root user

Laravel installation under Docker seems a painful experience but at the same time, it is a rewarding learning experience. The following are the steps for achieving the development environment for Laravel. For more information you can take a look at the Docker for web developers course, and also watch the following video for further details:


Let's assume you've installed Docker on Ubuntu or Windows 10 WSL2 with:
# sudo apt install docker
# sudo apt install docker-compose

Initially, we will get the source files of Laravel from its GIT repository. First, inside a newly created directory, we will use: git clone https://github.com/laravel/laravel.git .

Let's now run the Laravel project deployment locally:
sudo apt install composer && sudo composer install
(because we would like to develop our code locally, so that the changes are reflected inside the docker container)

Then we will create our Dockerfile with the following content:
Be cautious when writing the yaml files: you will need to indent each element with spaces, increasing the indentation for each sub-element.

# we copy the existing database migration files into the docker container, then fetch and install the composer dependencies without user interaction and without processing the scripts defined in composer.json
FROM composer:1.9 as vendor
COPY database/ database/
COPY composer.json composer.json
COPY composer.lock composer.lock
RUN composer install --no-scripts --ansi --no-interaction

# we are installing node, creating inside our container /app/ directory and copying the requirements as well as the js, css file resources there
# Then we install all the requirements and run the CSS and JS preprocessors

FROM node:12.12 as frontend
RUN mkdir -p /app/public
COPY package.json webpack.mix.js  /app/
COPY resources/ /app/resources/
WORKDIR /app
RUN npm install && npm run production

# get php+apache image and install pdo extension for the laravel database
FROM php:7.3.10-apache-stretch
RUN docker-php-ext-install  pdo_mysql

# create new user www which will be running inside the container
# it will have www-data as a secondary group and will sync with the same 1000 id set inside our .env file

ARG uid
RUN useradd  -o -u ${uid} -g www-data -m -s /bin/bash www

#we copy all the processed laravel files inside /var/www/html
COPY --chown=www-data:www-data . /var/www/html
COPY --chown=www-data:www-data --from=vendor /app/vendor/ /var/www/html/vendor/
COPY --chown=www-data:www-data --from=frontend /app/public/js/ /var/www/html/public/js/
COPY --chown=www-data:www-data --from=frontend /app/public/css/ /var/www/html/public/css/
COPY --chown=www-data:www-data --from=frontend /app/mix-manifest.json /var/www/html/mix-manifest.json

# allow the storage as well as logs to be read/writable by the web server(apache)
RUN chown -R www-data:www-data /var/www/html/storage

# setting the initial load directory for apache to be laravel's /public
ENV APACHE_DOCUMENT_ROOT /var/www/html/public
RUN sed -ri -e 's!/var/www/html!${APACHE_DOCUMENT_ROOT}!g' /etc/apache2/sites-available/*.conf
RUN sed -ri -e 's!/var/www/!${APACHE_DOCUMENT_ROOT}!g' /etc/apache2/apache2.conf /etc/apache2/conf-available/*.conf

# changing 80 to port 8000 for our application inside the container, because as a regular user we cannot bind to system ports.
RUN sed -s -i -e "s/80/8000/" /etc/apache2/ports.conf /etc/apache2/sites-available/*.conf

RUN a2enmod rewrite

# run the container as www user
USER www

Here are the contents of the .env file, which contains all the environment variables we would like to set and keep configurable outside of the container when it is built and run.

DB_CONNECTION=mysql
DB_HOST=mysql-db
DB_PORT=3306
DB_DATABASE=laravel
DB_USERNAME=laravel
DB_PASSWORD=mysql
UID=1000

Keep in mind that we are creating a specific user inside MySQL, named laravel, as well as setting UID=1000 in order to have synchronized UIDs between our container user and our host user.

Next follows the docker-compose.yml file, where we make use of the multi-stage container build.

version: '3.5'

services:
  laravel-app:
    build:
      context: '.'
# first we set apache to be run under user www-data
      args:
        uid: ${UID}
    environment:
      - APACHE_RUN_USER=www-data
      - APACHE_RUN_GROUP=www-data

    volumes:
      - .:/var/www/html
# exposing port 8000 for our application inside the container, because run as a regular user apache cannot bind to system ports
    ports:
      - 8000:8000
    links:
      - mysql-db

  mysql-db:
    image: mysql:8.0
# use mysql_native authentication in order to be able to login to MySQL server using user and password
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    volumes:
      - dbdata:/var/lib/mysql
    env_file:
      - .env
# setup a newly created user with password and full database rights on the laravel database
    environment:
      - MYSQL_ROOT_PASSWORD=secure
      - MYSQL_USER=${DB_USERNAME}
      - MYSQL_DATABASE=${DB_DATABASE}
      - MYSQL_PASSWORD=${DB_PASSWORD}

# create persistent volume for the MySQL data storage
volumes:
  dbdata:


Let's not forget the .dockerignore file
.git/
vendor/
node_modules/
public/js/
public/css/
run/var/

Here we are just ensuring that those directories will not be copied from the host to the container.

Et voila!

You can now run:
docker-compose up
then run the database migrations inside the container: docker-compose exec laravel-app php artisan migrate
and start browsing your website at: 127.0.0.1:8000
Inside the container you can also invoke: php artisan key:generate

Congratulations, you have Laravel installed as a non-root user!

Wednesday, September 18, 2019

Wordpress and PhpMyAdmin inside Docker container

Let's see how to run WordPress and PHPMyAdmin inside a Docker container.
Part of the Docker for web developers course.

We will first create an environment file. Its purpose will be to save sensitive credentials data:
.env
MYSQL_ROOT_PASSWORD=mysql_root
MYSQL_USER=wp_user
MYSQL_PASSWORD=wp_password

Now it is time for the docker-compose.yml file where we will be describing our containers(services):
version: '3.3'

services:
  db:
      image: mysql:latest
      env_file: .env
      environment:
        - MYSQL_DATABASE=wordpress
      volumes:
          - dbdata:/var/lib/mysql
      command: --default-authentication-plugin=mysql_native_password

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    env_file: .env
    environment:
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_USER=$MYSQL_USER
      - WORDPRESS_DB_PASSWORD=$MYSQL_PASSWORD
      - WORDPRESS_DB_NAME=wordpress
    volumes:
      - wordpress:/var/www/html
    ports:
      - "80:80"
   
  phpmyadmin:
    depends_on:
      - db
    image: phpmyadmin/phpmyadmin:latest
    env_file: .env
    environment:
      PMA_HOST: db
      MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD
    ports:
       - 3333:80

volumes:
  wordpress:
  dbdata:

Then you can launch: docker-compose up. This will create the network between the containers and the volumes (to store persistent data), pull the images, and configure everything in order to create and run the containers.
This brings up the MySQL, WordPress, and PHPMyAdmin containers (services).

You'll need to wait a bit until everything is initialized and then you can browse:
http://127.0.0.1:80 for WordPress
as well as
http://127.0.0.1:3333 for PHPMyAdmin. Please note that for the PHPMyAdmin we need to use user: root, and password: mysql_root

Congratulations and enjoy learning !

Tuesday, September 10, 2019

Docker - Apache, PHP and MySQL setup

Here we will be doing an installation of the development/production environment of PHP and MySQL using Docker under Ubuntu. Here is a full video on the subject:


For more information, you can check the Docker for web developers course.

First, we will install docker and docker-compose:  

sudo apt install docker docker-compose

Then we will create a docker group and place our user inside (to be able to use the docker command without sudo):

sudo groupadd docker && sudo usermod -aG docker $USER

 
Now we either exit and re-run the terminal or type: 

newgrp docker

to switch our group.


Let's check the current images we have with docker image ls. We can test if docker's installation is successful with:

docker run hello-world

 
Keep in mind that we can check what images we have in our system via:  

docker image ls

and our containers via 

docker container ls
 

With 

docker rmi hello-world:latest

we will remove the just-installed image, but only if no container is using it.
 

Let's check once again  

docker container ls -a

which will list all the containers: running or not. We see our image is placed inside a container.


As a rule: if we want to remove an image, first we have to remove its container.
So we look up the container name. It is usually assigned by docker and in our case is: relaxed_cray and then we type 

docker container rm relaxed_cray

Now we can remove the image with 

docker rmi hello-world:latest


Setting up the PHP development environment
We will make a directory web_dev : 

mkdir web_dev

and will go inside:  

cd web_dev

Then we will create docker-compose file with: 

nano docker-compose.yml

Inside we will place the following config:
Keep in mind that for the indentations we are using 2 spaces! 

services:
  web:
    image: php:8.1-apache
    container_name: php81
    volumes:
      - ./php:/var/www/html/
    ports:
      - 8008:80

Short explanation: under volumes: we are specifying which local directory will be connected with the container directory. So whenever we are changing something local, the container will reflect and display the changes. For the ports: when we open/browse local port 8008 (127.0.0.1:8008) docker will redirect us to the 80 port of the docker container (achieved via port forwarding).
For the PHP image, you can choose whatever image you prefer from hub.docker.com
Next run: docker-compose up. This will read the docker-compose file, pull the provided PHP image layer by layer, create a default network for our container, and start the container.
The installation is ready, but if you want to display something practical, you have to create an index.php file inside the newly created local php directory (which stores our PHP files).
First, it is good to change the ownership of the directory with:
sudo chown your_user:your_user php/ -R

Then with
sudo nano index.php
we can type inside:
<?php 
echo "Hello from docker";
?>
Then again run docker-compose up. Also, try changing the .php file again and refresh the browser pointing to 127.0.0.1:8008

MYSQL support
let's create a new file Dockerfile, inside the PHP directory. Place inside:

FROM php:8.1-apache
RUN apt-get update && apt-get upgrade -y
RUN docker-php-ext-install mysqli
EXPOSE 80

This will base our image on the PHP image that we already have, update the system inside the image, install the mysqli extension (MySQL support from within PHP) and expose port 80.

Next, we will customize the docker-compose.yml :
We will replace the line: image: php:8.1-apache
with

build:
  context: ./php
  dockerfile: Dockerfile

This will read the Dockerfile we created previously and build the web service image from it.

Now we will be building the MySQL service:

db:
  container_name: mysql8
  image: mysql:latest
  command: --default-authentication-plugin=mysql_native_password
  restart: always
  environment:
    MYSQL_ROOT_PASSWORD: root
    MYSQL_DATABASE: test_db
    MYSQL_USER: devuser
    MYSQL_PASSWORD: devpass
  ports:
    - 6033:3306

Notes: we are using mysql_native_password authentication for MySQL in order to be able to log in to the MySQL database; the compose file hardcodes MYSQL_USER, MYSQL_PASSWORD and MYSQL_DATABASE to devuser, devpass, and test_db, and exposes port 6033 externally for the MySQL service.

One last change: we would like to first start the MySQL (db) service and then the web service, so we will add to the web service config:

depends_on:
  - db

To test the PHP-MySQL connection inside of our index.php file we can specify:

$host = 'db';  // the name of the mysql service inside the compose file
$user = 'devuser';
$password = 'devpass';
$db = 'test_db';
$conn = new mysqli($host, $user, $password, $db);
if ($conn->connect_error) {
  echo 'connection failed: ' . $conn->connect_error;
} else {
  echo 'successfully connected to MySQL';
}

If you experience problems you can remove the several images already created:

docker image ls -a 
docker rmi image ...fill here image names...
then run again docker-compose up and browse: 127.0.0.1:8008

Cheers and enjoy learning further.

Install Docker on Ubuntu 19.04 and 19.10

Here is how you can install Docker on Ubuntu 19.04

(part of the Docker course)


Video of the installation:



Steps:
1. Update the latest version of your repository and install additional packages for the secure transport of apt
sudo apt update && sudo apt install apt-transport-https ca-certificates curl gnupg-agent software-properties-common

2. Fetch and add docker's gpg key

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

3. Add the docker's repository (specific for our Ubuntu version) to our local repository
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Note: if you are running an unsupported Ubuntu release you can replace the string: $(lsb_release -cs) with the supported versions such as: disco


4. update again the local repository and install the latest docker version (community edition)
sudo apt update  && sudo apt install docker-ce docker-ce-cli containerd.io

5. Test the installation by fetching and running test image:
sudo docker run hello-world
For more information, you can visit: https://docs.docker.com/install/linux/docker-ce/ubuntu/

Notes:
1. When having problems with the socket docker binds to, you can make your user the owner of the socket: sudo chown $USER:docker /var/run/docker.sock

2. If you want to run docker without sudo, just add your user to the docker group with: sudo usermod -aG docker $USER

and change the current running group with: newgrp docker
or su ${USER}

3. If you get: ERROR: Couldn't connect to Docker daemon at http+docker://localhost - is it running?
check out the docker service status: sudo systemctl status docker
if it is stopped and masked: Loaded: masked (Reason: Unit docker.service is masked.) then you need to unmask the service: sudo systemctl unmask docker
then again start the docker service: sudo systemctl start docker 
until you see sudo systemctl status docker: Active: active (running)

Congratulations, and you can further explore the Docker for web developers course.
