
Tuesday, March 29, 2022

Ubuntu: how to restore packages after interrupted apt upgrade

Sometimes a running apt update && apt dist-upgrade gets interrupted, leaving packages half-configured.

Here is the one-line command that will resume reinstalling the unfinished or half-configured packages for you. It creates a list of packages which can be passed to apt install:

grep  "08:18:.* half-configured"  /var/log/dpkg.log.1 /var/log/dpkg.log |  awk '{printf "%s ", $5}'

The first part of the command grabs only the half-configured packages, while the second part extracts just the package name.

Here is the command in full:

sudo apt install --reinstall $(grep  "08:18:.* half-configured"  /var/log/dpkg.log.1 /var/log/dpkg.log |  awk '{printf "%s ", $5}')

You can replace 08:18 with the time at which you know the packages were interrupted from installing.
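For reference, a half-configured entry in /var/log/dpkg.log looks roughly like this (timestamp and package are illustrative); the package name is the fifth whitespace-separated field, which is exactly what the awk part prints:

2022-03-29 08:18:42 status half-configured libexample1:amd64 1.0-1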

Best of luck!

Monday, February 22, 2021

Debug Laravel / PHP applications with XDebug in VSCODE

We will set up debugging with Xdebug for PHP inside of Visual Studio Code.

Quick setup:

1) install php-xdebug:

sudo apt install php-xdebug

2) inside of php.ini at the end of the file set: 

[xdebug] 

xdebug.start_with_request = yes 

xdebug.mode = debug 

xdebug.discover_client_host = false 

3) install the PHP Debug extension in VS Code and set the extension's port to 9003.

Now you can press F5 and start debugging.
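If VS Code asks you to create a launch configuration, a minimal .vscode/launch.json for the PHP Debug extension could look like this (a sketch, assuming the default Xdebug 3 port 9003):

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Listen for Xdebug",
            "type": "php",
            "request": "launch",
            "port": 9003
        }
    ]
}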

 

 

 

Alternatively you can install xdebug using pecl. 

The setup is valid for Ubuntu both on bare-metal as well as under Windows 10 with WSL.

Enjoy !

Monday, August 24, 2020

Install phpmyadmin in Ubuntu 20.04

Here is how to install phpmyadmin on Ubuntu 20.04
References: Practical Ubuntu Linux Server for beginners

We first need to have mysql-server installed, where phpmyadmin will store its data. For this reason we will run:
sudo apt install mysql-server

Then some libraries for the functioning of phpmyadmin as well as the phpmyadmin package:
sudo apt install phpmyadmin php-mbstring php-zip php-gd php-json php-curl php libapache2-mod-php
Note: if there is a problem during the installation you can choose Ignore or Abort for the configuration of phpmyadmin.

Let's now go and login inside of MySQL as root:
sudo mysql -u root 


or, if you already have a user and password, log in with: sudo mysql -u user -p

Next we will adjust the MySQL root password, as well as its method of authentication:
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password';
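To double-check that the change took effect (optional), you can run the following inside the MySQL shell; the plugin column should now show mysql_native_password:

SELECT user, plugin FROM mysql.user WHERE user = 'root';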

Optional: 

Configure Apache in order to serve phpmyadmin (if not already done by the installation of phpmyadmin): inside of /etc/apache2/conf-available/ we create the following phpmyadmin.conf file:

Alias /phpmyadmin /usr/share/phpmyadmin
<Directory /usr/share/phpmyadmin/>
   AddDefaultCharset UTF-8
   <IfModule mod_authz_core.c>
      <RequireAny>
      Require all granted
     </RequireAny>
   </IfModule>
</Directory>
 
<Directory /usr/share/phpmyadmin/setup/>
   <IfModule mod_authz_core.c>
     <RequireAny>
       Require all granted
     </RequireAny>
   </IfModule>
</Directory>


Lastly, we need to activate the above configuration file with:
sudo a2enconf phpmyadmin.conf
and then restart the apache2 service to reload and accept the changed configuration:
sudo systemctl restart apache2.service

Now it is time to open http://127.0.0.1/phpmyadmin in the browser
and log in with the combination that we already set: root / password

Congratulations and enjoy learning !

Tuesday, July 28, 2020

Web app deployment inside of Kubernetes with microk8s

based on the Kubernetes course:
 
1) install microk8s: sudo snap install microk8s
2) enable registry & dns: microk8s.enable registry dns

MONGODB deployment & service
3) configure the mongodb deployment
generate 2 secrets using md5sum from shell
MONGO_INITDB_ROOT_USERNAME=--insert_here_encrypted_username-- -e MONGO_INITDB_ROOT_PASSWORD=--insert_here_encrypted_password-- -e MONGO_INITDB_DATABASE=admin

4) apply the MongoDB database deployment and service (a minimal mongodb-deployment.yaml sketch is shown after step 6.2 below)
microk8s.kubectl apply -f mongodb-deployment.yaml
5) check the environment variables inside the container
5.1) enter inside the deployment:
microk8s.kubectl exec -it deployment.apps/mongodb-deployment -- sh
5.2) env
6.1) get inside the mongodb container:
from Docker: docker exec -it mongo bash
from Kubernetes: microk8s.kubectl exec -it mongodb-deployment--insert_your_deployment_id -- /bin/sh
6.2) authenticate to the mongodb database container:
mongo -u insert_here_encrypted_username -p insert_here_encrypted_password --authenticationDatabase admin
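
For reference, a minimal mongodb-deployment.yaml could look roughly like the sketch below. The labels, the service name (mongodb-service) and passing the credentials as plain env values are assumptions made for brevity - in a real setup you would reference Kubernetes secrets instead:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          value: "--insert_here_encrypted_username--"
        - name: MONGO_INITDB_ROOT_PASSWORD
          value: "--insert_here_encrypted_password--"
        - name: MONGO_INITDB_DATABASE
          value: admin
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
  - port: 27017
    targetPort: 27017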


Our application deployment & service
7) build the docker image of our application:
docker build . -t localhost:32000/mongo-app:v1
8) test the image using port forwarding:
docker run -p 3000:3000 localhost:32000/mongo-app:v1
or: docker run  -it --rm -p 3000:3000 localhost:32000/mongo-app:v1
9) push the image into the kubernetes registry
docker push localhost:32000/mongo-app:v1
10) apply our custom application: microk8s.kubectl apply -f mongo.yaml
11) check whether the IP addresses of the service and pods match. This means that the service endpoints are correctly set and match the created pods:
microk8s.kubectl describe service
microk8s.kubectl get pod -o wide


Congratulations!

Tuesday, July 07, 2020

Install Wine & run Windows programs on Ubuntu 20.04 / 20.10

Wine is a popular compatibility layer for running native Windows applications on Linux. Here is how easy it is to install Wine on Ubuntu 20.04.
For more information on Linux, I recommend taking the Practical Ubuntu Linux Server for beginners course.
 

Just follow the steps:

1) install wine32 first in order to include the i386 libraries:

sudo apt install wine32 wine

2) install winetricks in order to easily install external windows libraries. If you want to know which libraries are required just run wine your_app.exe and check the produced log:

apt install winetricks

3) use winetrics dlls combined with the libraries required by your application:

winetricks dlls mfc42 vcrun2010

4) run wine somefile.exe

Congratulations, and if you would like, you can enjoy the full Ubuntu admin course !

Tuesday, May 19, 2020

Kubernetes in Ubuntu - Ingress

(part of the Kubernetes course):

microk8s.kubectl get all --all-namespaces
// enable registry
microk8s.enable registry
//check /etc/hosts
//enable usage of the local (insecure) registry in /etc/docker/daemon.json
// enable dns
microk8s.enable dns
// enable ingress service/controller
microk8s.enable ingress
// verify if it is running
microk8s.kubectl get pods --all-namespaces
// microk8s.kubectl describe  pod  nginx-ingress-microk8s-controller-pn82q  -n ingress
create 2 deployments with different names each pointing to different app version
docker build -t localhost:32000/php_app:v1 .
docker build -t localhost:32000/php_app:v2 .
push the images into registry
docker push localhost:32000/php_app:v1
docker push localhost:32000/php_app:v2
apply the 2 deployments
microk8s.kubectl apply -f deployment_web1.yaml
microk8s.kubectl apply -f deployment_web2.yaml
apply 2 services to expose the deployments
microk8s.kubectl apply -f service_web1.yaml
microk8s.kubectl apply -f service_web2.yaml
check if they have valid endpoints:
microk8s.kubectl get ep
microk8s.kubectl get pods -o wide
create the ingress resource (a minimal ingress.yaml sketch is shown after this list):
microk8s.kubectl apply -f ingress.yaml
check the ingress1: microk8s.kubectl get ingress
check the ingress2: microk8s.kubectl logs -n ingress daemonset.apps/nginx-ingress-microk8s-controller
set /etc/hosts to point localhost to the ingress address.
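
For reference, a minimal ingress.yaml routing two paths to the two deployments could look roughly like this sketch. The service names (web1-service, web2-service), the paths and port 80 are assumptions - adapt them to your own service manifests:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /web1
        pathType: Prefix
        backend:
          service:
            name: web1-service
            port:
              number: 80
      - path: /web2
        pathType: Prefix
        backend:
          service:
            name: web2-service
            port:
              number: 80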

Wednesday, March 25, 2020

Kubernetes in Ubuntu - horizontal pod autoscaler with microk8s

Let's take a look at how to create horizontal pod autoscaler in Kubernetes. Such type of setting is very useful when we want Kubernetes to automatically add & remove pods based on the application's workload.
(part of my Kubernetes course):

Here is the sample PHP application directly from the Kubernetes docs:
  $x = 0.0001;
  for ($i = 0; $i <= 1000000; $i++) {
    $x += sqrt($x);
  }
  echo "Operation Complete! v1";


What it does: it runs a loop in which the square root of x is calculated, and the result is added back to the accumulated value of x, which is then square-rooted again on the next iteration.

We also have a Dockerfile for the creation of an image based on the PHP code:
FROM php:7.4.4-apache
ADD index.php /var/www/html/index.php

We use a php-apache image and on the second line, we just place our index.php application code inside the image's root directory. This way, when we create a container out of the image the apache web server will start to serve the content when browsed on port 80.

Let's now start our Kubernetes cluster with microk8s.start
then we will enable dns and registry addons with microk8s.enable dns registry
Please check if those two addons are running with microk8s.kubectl get all --all-namespaces
If the image pod is not behaving properly you can inspect it with microk8s.kubectl describe pod/name_of_registry_pod

The following also helps to get the registry running:
disable the registry addon with microk8s.disable registry
Make sure that you have at least 20Gb of space
comment out the ipv6 entries inside the hosts file
sudo ufw allow in on cbr0 && sudo ufw allow out on cbr0
sudo ufw default allow routed
sudo iptables -P FORWARD ACCEPT

Then re-enable the registry addon.

Let's now build our image with Docker and push it into the Kubernetes registry:
docker build . -t localhost:32000/php-app:v1
Please notice we are beginning the name of our image with localhost:32000 - because that's where the microk8s registry resides and waits for push/pull operations, exactly on this host:port combination.
Next, we can check with docker image list to see if the image is being built successfully. We can now push the image into the registry:
docker image push localhost:32000/php-app:v1

It is time to run our deployment & service with the newly created image:
php-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-apache
spec:
  selector:
    matchLabels:
      run: php-apache
  replicas: 1
  template:
    metadata:
      labels:
        run: php-apache                                                                                                               
    spec:                                                                                                                             
      containers:
      - name: php-apache
        image: localhost:32000/php-app:v1
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 500m
          requests:
            cpu: 200m

---
apiVersion: v1
kind: Service
metadata:
  name: php-apache
  labels:
    run: php-apache
spec:
  ports:
  - port: 80
  selector:
    run: php-apache

Note the name of the deployment: php-apache, the image: localhost:32000/php-app:v1 and the port 80 which we are exposing out of the created pod replicas. Currently, the deployment is running just 1 pod.
Port 80 will not be exposed outside of the cluster; we can only reach our service from another container, thanks to the enabled dns addon. Another thing to notice is that we have also created a service with the label run=php-apache - this service will expose only deployments with the same label.
We are ready to apply the deployment and the service via:
microk8s.kubectl apply -f php-deployment.yaml

Let's run the horizontal pod autoscaler:
microk8s.kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10

so we will be spreading the load between 1 and 10 replicas.
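For reference, the autoscaler roughly computes the desired replica count as: desiredReplicas = ceil(currentReplicas * currentCPUUtilization / targetCPUUtilization), so with a target of 50% the number of replicas grows whenever the average CPU usage of the pods exceeds half of their requested CPU.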
microk8s.kubectl get hpa
will show us the information about the autoscaler that we can further follow.

Note: At this point, we will need to enable the metrics-server with: microk8s.enable metrics-server
in order to see actual information about the usage of our pods.

Now we will create a temporary pod that will send multiple requests to the newly created, autoscaled service. At the same time we will run a shell inside the container:
microk8s.kubectl run --generator=run-pod/v1 -it --rm load-generator --image=busybox /bin/sh
from inside we will try to reach our service, exposing the port 80 with:
wget php-apache
This will download index.php (as interpreted by PHP and Apache) and save it as index.html, which is a sure sign that we have established a connection between the two pods.
Now let's create multiple requests to the hpa autoscaled application:
while true; do wget -q -O- php-apache; done
(-q -O- just suppresses wget's status output and writes the response to stdout)

If we watch microk8s.kubectl get hpa we will see the hpa in action by increasing and reducing the number of replicas based on the load.
We can also delete the hpa by knowing its name: microk8s.kubectl delete horizontalpodautoscaler.autoscaling/your_hpa_name

Note: if you delete the autoscaler, you will need to manually re-adjust the number of replicas with:
microk8s.kubectl scale deployment/php-apache --replicas=1

Congratulations!

Thursday, March 19, 2020

Kubernetes - Deployments, Services & Registry on ubuntu with microk8s

We will get to know of deployments, services, and registry inside Kubernetes with the help of microk8s. Part of my Kubernetes course. First, enable the registry and dns addons:
We need a registry because it will store our newly created application image.
And then a DNS to ensure that the pods and services inside of the cluster can communicate effectively.
microk8s.enable registry dns
then check with microk8s.status to see if they are enabled
and: microk8s.kubectl get all --all-namespaces
to see if the pods are running
Also, we can check inside the registry pod all the messages emitted during the pod creation. Using the specific pod id of the registry from the previous command, just type: microk8s.kubectl -n container-registry describe pod registry-xxxxxxx-xxxx

in case of problems:
1) edit your /etc/hosts file and comment the ipv6 entry #::1 ip6-localhost ip6-loopback
2) check if the service is listening on port 32000 with: sudo lsof -i:32000
get the CLUSTER-IP address from the registry and access it with port 5000
from the browser try to access the catalog of images offered: http://xxx.xxx.xxx.xxx:5000/v2/_catalog
Note: the built-in container registry is Docker-compatible, so we can push to it with the regular docker commands.

Now it is time to push our docker image inside the registry:
1) List the images: docker image ls
and find our application image there.
Note: In order to be able to use the local Docker/Kubernetes registry, the image has to be tagged as: localhost:32000/image-name
for this just get the image_id from docker image ls and use the following command:
docker image tag image_id localhost:32000/image-name:v1
2) Push the name of our application container into the registry
docker push localhost:32000/image-name:v1
3) Check inside the browser the same:
http://xxx.xxx.xxx.xxx:5000/v2/_catalog
and also:
http://xxx.xxx.xxx.xxx:5000/v2/image-name/tags/list

We will now use the image from the registry and based on this image will create a deployment:
node-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-deployment
  labels:
    app: simple-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: simple-api
  template:
    metadata:
      labels:
        app: simple-api
    spec:
      containers:
      - name: simple-api
        image: localhost:32000/image-name:v1
        ports:
        - containerPort: 3000

As you can see, every container will have the name simple-api, and new pods will be created carrying the app=simple-api label.
Finally, our deployment (also labeled with app=simple-api) will keep replicas of the pods that match the label app=simple-api.
We can also see that we are using/referencing the image localhost:32000/image-name:v1 straight from our registry, as well as exposing port 3000 of our node application so it is accessible outside of the container.

Let's now run the .yaml manifest with microk8s.kubectl apply -f node-deployment.yaml

Note: If you experience problems with the registry just enable these firewall rules:

sudo ufw allow in on cbr0 && sudo ufw allow out on cbr0
sudo ufw default allow routed
sudo iptables -P FORWARD ACCEPT

as well as restart the docker daemon:
sudo systemctl restart docker

Now, when the deployment is successful, we should have 2 pods created, that we can see from microk8s.kubectl get pods --all-namespaces.
We can now enter inside of each by using:
microk8s.kubectl exec -it node-deployment-xxxxxxxxxx-xxxx -- /bin/bash
and here we can test the network connectivity as well as see the files.

Ok, let's now reveal our beautiful application outside of the deployment pods with the help of a service. And here is its yaml file:
node-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: node
spec:
  type: NodePort
  ports:
  - port: 30000
    targetPort: 3000
  selector:
    app: simple-api

Ok, as you can see here we are matching all pods which are created by deployments having labels of app=simple-api !
Then we are using the NodePort type of service, which receives traffic on the service's port 30000 and forwards it to targetPort 3000 inside the container; in addition, Kubernetes opens a port on the node itself (the NodePort) so the service can be reached from outside the cluster.
Ok, let's apply the service with microk8s.kubectl apply -f node-service.yaml
Let's now run: microk8s.kubectl get services and we see that we've got a CLUSTER-IP assigned from Kubernetes - this is an IP assigned to our service, which we can use together with port 30000. So go ahead and browse: http://your_CLUSTER-IP:30000

Since our cluster node is running on localhost, we also see in the output another (this time randomly assigned) port next to 30000 - e.g. 30000:34467. We can use this NodePort to access our cluster with http://127.0.0.1:34467
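Note: if you prefer a fixed NodePort instead of a randomly assigned one, you can pin it in the service manifest (a sketch - the value just has to be in the 30000-32767 range):
  ports:
  - port: 30000
    targetPort: 3000
    nodePort: 30080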

Let's see our endpoints (the several pods) which back our service: microk8s.kubectl get endpoints
We can access these endpoints from our browser.

The benefit we get is that multiple pods (endpoints) can be accessed from a single location; we just have to know and point to it by its name. And this is our service!

Congratulations!

Saturday, February 29, 2020

Kubernetes - NodeJS application

Let's see how we can create a NodeJS application and place it inside a Docker container. In the next stage, we will manage the application within Kubernetes.
(part of my Kubernetes course)

From the directory of our project, we can take a look at the files with Visual Studio Code. Alright, let's start with the application code:
server_app.js

const express = require('express');
const config = {
name: 'sample-express-app',
port: 3000,
host: '0.0.0.0',
};

const app = express();

app.get('/', (req, res) => {
res.status(200).send('hello world');
});

app.listen(config.port, config.host, (e)=> {
if(e) {
throw new Error('Internal Server Error');
}
console.log(`${config.name} running on ${config.host}:${config.port}`);
});


The code starts by requiring the Express framework and then defines a configuration for our simple Express application, where we specify on which host and port it will run. Afterwards, we create the actual server application so that when its root URL is requested, it just sends "hello world" as output to the user. We set the application to listen on the host and port specified inside our configuration, and if everything runs well it outputs to the Node.js console that the application is running on the configured host and port.
We also have a package.json file where the application's start script uses the node command and the name of the .js file to run: server_app.js. Also, we are requiring the Express framework in the devDependencies section.
{
  "name": "kuber-node",
  "version": "1.0.0",
  "description": "",
  "main": "server_app.js",
  "scripts": {
    "start": "node server_app.js"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "express": "^4.17.1"
  }
}



Now let's take a look at our Dockerfile
FROM node:slim
WORKDIR /app
COPY ./package.json /app
RUN npm install
COPY . /app
CMD node server_app.js


For keeping things lightweight in the process of containerization, we are using a tiny image version of Node.js - slim. Next, inside the /app/ directory of the image, we copy our local version of package.json. This way npm will read package.json and install the packages needed for the functioning of our application, and Docker will then copy our application files into the newly created image. The last command just starts the application.
Note that at this stage we don't need to expose any ports, because we will do this later using Kubernetes.

Now let's create/build an image from this file. For the build process, we will be using the current directory (local_app) context in order to include just our application files inside of the newly built image. We are also tagging the application with the v1 tag.
docker build -t localhost:32000/server-app:v1 .

Let's not forget the .dockerignore file, where we are ignoring:
.git
Dockerfile
docker-compose
node_modules

This is because we would like the image to build its own node_modules and not reuse artifacts left over from local development. Note also that the build starts from an already existing image on Docker Hub (node:slim), so only the layers our application needs are added on top of it.
Let's check if our image works correctly:
docker run -p 3000:3000 localhost:32000/server-app:v1
Here we are creating a container from our image and mapping the internal port to an external one - 3000 to 3000 - so when we go to http://localhost:3000 we should see our application working within the Docker container.
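A quick way to verify the response (assuming curl is installed on the host) is: curl http://localhost:3000 - it should print hello world, the response defined in server_app.js.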
With docker container ls we list containers, noting their id's and we can stop them by using docker container stop container_id

The next step is to extract the image into an archive. We do this because we later will use the archive to populate the Kubernetes image repository.
docker save localhost:32000/server-app:v1 > myimage.tar

Ok now is the time to start the Kubernetes cluster:
microk8s.start
and to enable certain addons:
microk8s.enable dns registry
(DNS in order to communicate between pods and containers)
(registry is the internal image registry where we will push our docker image tar archive). to check if they are properly installed type: microk8s.status

Checks
We can also check all the running pods and services inside of our node with:
microk8s.kubectl describe nodes
To check what the container registry has inside we can use microk8s.ctr image list
Then let's import our archived image into the registry: microk8s.ctr image import myimage.tar
and again to check whether the newly imported image is listed inside of microk8s.ctr

Creating the pod
We will use the nodepod.yaml file manifest in order to get the image from the registry, to create a container using it and to place all this inside of a pod:

apiVersion: v1
kind: Pod
metadata:
  name: simple-api-pod
  labels:
    name: simple-api
spec:
  containers:
  - name: simple-api
    image: localhost:32000/server-app:v1


so let's apply the file to our Kubernetes cluster with microk8s.kubectl apply -f nodepod.yaml

Monitoring
Now we will observe what is happening inside of the cluster with:
microk8s.kubectl get all --all-namespaces
from the produced output find the id of your pod and just type this very useful command in order to track what is going on inside of the pod: microk8s.kubectl describe pod name_of_pod
You can also use the logs command, which will output all the logs from the creation of the container up to now: microk8s.kubectl logs name_of_pod
For more fun we can even enter inside of the container:
microk8s.kubectl exec -it name_of_pod -- /bin/bash
and from there you can check the network connectivity or install packages.

Seeing the application
Let's now test the application. For this, we will need to expose our pod protected network to the outside world. We will use simple port forwarding:
microk8s.kubectl port-forward name_of_pod 3000
which means that we are exposing internal port 3000 outside.
And now we can browse http://localhost:3000 and see our app running inside of Kubernetes.

Congratulations!

Tuesday, February 25, 2020

Linux processes - attaching and inspecting

Inspecting processes is an interesting topic for me in general. Whether with gdb or with command-line tools, let's take a look at how we can inspect what is going on inside a Linux process:

At one point, when you want, you can enjoy the full Ubuntu admin course.


For example in one terminal tab we can start a process such as ping,
then in another we can see its process identifier (PID) with sudo ps -ax
and based on that information we can attach to the running process using strace: sudo strace -p PID (a nicer, more verbose variant for process tracking is: sudo strace -p12345 -s9999 -e write)

Another useful application is reptyr, which tries to attach to a running process and transfer its output to the current terminal we are using:
installation:
apt install reptyr

in order for reptyr to work you need to expand the scope of ptrace :
# echo 0 > /proc/sys/kernel/yama/ptrace_scope
then when you have the process ID you may try with the following options to attach to a process:
reptyr PID -T -L
-L is to enable capturing child processes
-T is for tty stealing

Keep in mind reptyr is just attaching to the process, not taking ownership of it (i.e. becoming its parent), so when you close the original parent terminal the captured process will halt. The solution in this case is to disown the process in question, which is done in two steps:
1. the process should be listed as a job, and a job is associated with a particular terminal (tty). So first we turn the process into a background job with Ctrl+z followed by bg, or by starting it with & in the first place.
2. then we can run disown
Alternatively, we can use from the start: nohup command_name &
(
& will run the command as a child process to the current bash session. When you exit the session, all child processes will be killed.
nohup + &: when the session ends, the parent of the child process will be changed to 1 (the "init") process, thus preserving the child from being killed.
)
3. Now you can capture the process in your terminal using reptyr, and even if you close the original terminal the process will not stop.
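
Putting steps 1 and 2 together for a job that is already running in the foreground, a minimal sequence looks like this:
Ctrl+z      (suspend the foreground process and get the prompt back)
bg          (resume it in the background as a job of the current shell)
disown      (detach it from the shell, so closing the terminal will not kill it)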

In the second example, let's say you have a long download running in one session and you have to disconnect and go home. How do you save the situation?
1) Just login from another session and run the screen command.
2) From the second session: get the download PID, and use reptyr to attach it to the current session.
3) Detach screen with ctrl+a+d or just type exit
4) Next time, just re-login using ssh and make the session active (attached) with: screen -Dr

Hints on screen:
When you run the command, it creates a new screen session/socket. Then you can use Ctrl+a+d to detach from the current screen
to attach to already existing session use: screen -Dr
and to re-attach to already attached screen: screen -x
To delete a screen session, you need to reattach and then press Ctrl+a+k, or just type: exit

Congratulations!

Sunday, January 19, 2020

Install Kubernetes on Ubuntu with microk8s

Here is my experience of installing Kubernetes under Ubuntu. In simple terms, most of the time getting a properly functioning Kubernetes is not an easy task. I tried minikube, which has lots of installation issues, and so far I am more impressed by the performance of microk8s. microk8s is available via snap and is small and suitable for local development. Just keep in mind to leave at least 5GB of hard drive space for the nodes to work.
(part of the Kubernetes course)

Installation:
sudo snap install microk8s --classic --edge

To start the Kubernetes just use: microk8s start
and to stop it: microk8s stop

Once it loads up you can check the status of the cluster with microk8s.inspect and make changes if prompted to.

Then let's peek inside the cluster:

microk8s.kubectl describe nodes
microk8s.kubectl get all --all-namespaces
If you see that some of the pods/services/deployments are not available you can debug by looking at the logs inside of  /var/log/pods/ directory to find out why.

We have access to addons:
microk8s.status
then we can enable some of them with:
microk8s.enable

You can change the default directory for storage of resources:
by editing the --root and --state values inside: /var/snap/microk8s/current/args/containerd

Now lets authenticate inside the cluster, check the URL from:
microk8s.kubectl get all --all-namespaces
then use username(admin) and password from the output of:
microk8s.config

Let's now see the Kubernetes dashboard, for it to function properly we will enable the following addons:

microk8s.enable dns dashboard metrics-server

In order to browse the dashboard, we will need to know its IP address. Notice the IP and port of the dashboard service:
watch microk8s.kubectl get all --all-namespaces
you might need to browse something like https://10.152.183.1/ (port 443)

For accessing the dashboard URL: get the username and password(token) from microk8s.kubectl config view
or just: microk8s.config
For authentication inside the dashboard, there are several methods, we will use a token:
microk8s.kubectl -n kube-system get secret then find kubernetes-dashboard-token-xxxx
and use:
microk8s.kubectl -n kube-system describe secrets kubernetes-dashboard-token-xxxx
to find the token:
And there you go, you can monitor your Kubernetes in a browser!

You can create and update deployments in a cluster using:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc1/aio/deploy/head.yaml

Last but not least, you can create an alias and use kubectl directly, instead of writing microk8s.kubectl all the time:
sudo snap alias microk8s.kubectl kubectl
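After that you can verify the alias works with, for example: kubectl get nodes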

If at any point you are not happy with the cluster you can reset it with microk8s.reset


Notes:
Namespaces are used to isolate resources (like certain pod can be in only one namespace) inside a single cluster:
microk8s.kubectl get namespaces
microk8s.kubectl config get-contexts is very useful to find the current cluster name and user name. Context includes a cluster, a user and a namespace and is used to distinguish between multiple clusters we work on. To set a context we use: kubectl config use-context my-context
A node is a worker machine(hardware or virtual). Each node contains the services necessary to run pods and is managed by the master component.
Deployment has inside replica sets, which characterize pods, where the containers reside.

Congratulations and enjoy the course!

Sunday, November 17, 2019

MongoDB introduction

Let's see how to work with the non-relational database MongoDB. If you are looking for more advanced usage of MongoDB inside of a real-life application you can check this course: Learn Node.js, Express and MongoDB + JWT

 


Install the database with sudo apt install mongodb
*optional install the graphical client MongoDB compass:
https://www.mongodb.com/download-center/compass
then run: sudo dpkg -i mongodb-compass*

you can then type: mongo to get inside the interactive shell
inside you can type
show dbs
to show all the available databases:
admin   0.000GB
config  0.000GB
local   0.000GB

In mongodb we have the following structures: databases->collections->documents

use db1 switches the internal db pointer to a different existing database, or creates a new one if no such database is found. In order to actually create the database, you will need to place at least 1 collection inside it.
So you can just type: db.createCollection('mycollection')
then show dbs will show your new database; the collections can be displayed using: show collections.
if we want to delete database we use: db.dropDatabase()

Since we are using unstructured data, we can just place JSON object literals (called documents in MongoDB) inside the collections.
db.mycollection.insert({
field1:'value1'
})


multiple insertions:
db.mycollection.insertMany(
[
{name:'John',color:'red',position:'programmer' },
{name:'Peter',color:'green', position:'craftsman' },
{name:'Maria',color:'blue',position:'gardener' }
]
)

To display all the documents inside of a collection we could type:
db.mycollection.find().pretty()
other examples include:
db.mycollection.find({field1:'value1'})
or just get 1 field:
db.mycollection.findOne({field1:'value1'})
we can also search using query operators; the second argument below is a projection that hides the _id field:
db.mycollection.find({field1: {$gt:5}},{_id:0})
or within a subset of values:
db.mycollection.find({_id: {$in: [1, 2]}})
we can also chain other methods to find, such as limit:
db.mycollection.find({field1:'value1'}).limit(5)

Update in MongoDB can be performed in two ways:
1. We search for a condition to be met, then we replace the whole found document with a new document; if the search condition doesn't match any document, the update fails. To update/replace entire matching documents we use:
update({filter_condition}, {fields_to_update})
inside filter_condition we just specify JSON document key/values to search for, which will be replaced by the fields_to_update JSON document, for example:
db.mycollection.update({field1:'value1'},{field1:'value2'})
if we use upsert, a new document will be inserted when the field1 search condition doesn't match anything:
db.mycollection.update({field1:'value1'},{field1:'value2'},{upsert:true})

2. The second way is again to search for a condition, but this time to update only specific fields of the found document - this is the familiar behaviour of the MySQL UPDATE statement. We use the $set operator in order to update just specific fields:
db.mycollection.update(
{ _id: 1 },
{ $inc: { quantity: 5 },
  $set: { field1: "value2" } }
)
An interesting thing to notice is that the documents have unique ids, so you can search by those ids in order to locate a specific document.
With $inc we can increment certain fields inside the document (here the quantity field).
Example: db.mycollection.find({ _id:ObjectId("5dd0fb5988cbe5bb79e7a0e2")  } )

Notes: as filtering conditions you can use $gt:3 (which means >3) or $lte:3 (<=3)
If you would like to rename certain keys inside the document, you can use the $rename operator inside update: {$rename: {field1: "new_field"}}
db.mycollection.update(
{ _id: 1 },
{ $rename: { field1: "new_field" } }
)
with remove you can remove a document:
db.mycollection.remove(
{ _id: 1 }
)
We can have nested elements inside of the same document such as:
{
"article":{
"title":"my article",
"comments":[
                    {'title':'first comment'},
                    {'title':'second comment'}
                   ]
}
}

like so:
db.mycollection.insert(
{
"title":"my article",
"comments":[{'title':'first comment'},{'title':'second comment'}]
});

In order to search inside the comments we can use:
db.mycollection.find(  
{  
comments:{  $elemMatch:{ title:'first comment' }  }  
}
)
Keep in mind that when searching this way you have to specify an exact match for the text fields. If you would like to perform a full-text search, you can place indexes on the fields you would like to search:
db.mycollection.createIndex({title:'text'})

Let's now insert multiple documents:
db.mycollection.insertMany(
[
{"_id":1,  "title":"my newest"},
{ "_id":2, "title":"my article 1"},
{ "_id":3, "title":"my article 2"},
{ "_id":4, "title":"my article 3"},
{ "_id":5, "title":"my article 4"},
{ "_id":6, "title":"my article 5"},  
]
);

*Optional:
db.mycollection.getIndexes() will display the indexes
from there you can find the name of the index and use: db.mycollection.dropIndex('title_text') to remove it.
 

then use db.mycollection.find(  {  $text:{  $search: "\"article \""  }  })
to search inside the indexed field. You should escape the quotes (\") around the search string if doing an exact phrase search.
Note: in order for the full-text search to work, you should aim to have words longer than 4 characters inside the database!

Congratulations, you now know some of the basics when working with MongoDB!

Monday, November 11, 2019

Laravel development environment under Ubuntu 19.10

This guide is on how to install the Laravel framework under Ubuntu and set up a local development environment. 

Reference: Practical Ubuntu Linux Server for beginners

First, we will install Apache & MariaDB:
sudo apt install apache2 mariadb-server mariadb-client
then we will setup default password for the MySQL installation:
sudo mysql_secure_installation
Next, we will log in inside MySQL with: sudo mysql -u root
to create our initial database:
create database laravel;
and create a user for the Laravel installation: laravel with password: password
CREATE USER 'laravel'@'%' IDENTIFIED BY 'password';
at the same time we will grant privileges to the user on all databases:
GRANT ALL PRIVILEGES ON *.* TO 'laravel'@'%' ;
Then we will restart the MariaDB server to activate the changes:
sudo systemctl restart mariadb.service
Now is time to install PHP support for Apache and extensions for Laravel with:
sudo apt install php libapache2-mod-php php-common php-mbstring php-xmlrpc php-soap php-gd php-xml php-mysql php-cli php-zip

optionally we can set limits inside php.ini
sudo nano /etc/php/7.3/apache2/php.ini
memory_limit = 256M
upload_max_filesize = 64M
cgi.fix_pathinfo=0


Next, we will install Curl for the composer to be able to run:
sudo apt install curl
Then to install composer we can use:
curl -sS https://getcomposer.org/installer | sudo php -- --install-dir=/usr/local/bin --filename=composer


/*
only if we want to use the laravel command:
let's update the local path to be able to access composer vendor binaries and particularly being able to run laravel:
export PATH="$HOME/.config/composer/vendor/bin:$PATH"
if you want the path change to be persistent just add the line into the .bashrc file.
*/

It is time to create our project:
cd /var/www/html/
sudo composer create-project laravel/laravel --prefer-dist 
In order for Laravel artisan, Apache and our user to be able to access, read and write to the framework, we will need to fix the ownership and the permissions of the installation:
we set www-data as owner and group inside the Laravel installation :

sudo chown www-data:www-data /var/www/html/laravel/ -R
Next, we will make sure that all the existing as well as the newly created files will have rwx permissions and will also belong to the www-data group:
sudo chmod 770 /var/www/html/laravel -R
sudo setfacl -d -m g:www-data:rwx /var/www/html/
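To verify the default ACL that was just set, you can run: getfacl /var/www/html/ - the output should contain a default:group:www-data:rwx entry, meaning newly created files will pick up rwx for the www-data group.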
All that is left is to add our current user to the www-data group:
sudo usermod -a -G www-data $USER
and will switch the current user context to the www-data group:
newgrp www-data

Let's set up the Apache web-server to serve Laravel:
disabling default Apache site configuration
sudo a2dissite 000-default.conf
enable nice URLs:
sudo a2enmod rewrite

create configuration for laravel:
sudo nano /etc/apache2/sites-available/laravel.conf
  
<VirtualHost *:80>
    ServerName laravel.local
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html/laravel/public

    <Directory /var/www/html/laravel/public>
        AllowOverride All
    </Directory>

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>


enable the laravel configuration site:
sudo a2ensite laravel.conf


Installing PHPMyAdmin
sudo apt install php-curl
sudo composer create-project phpmyadmin/phpmyadmin

create configuration for phpmyadmin:
sudo nano /etc/apache2/sites-available/phpmyadmin.conf
   
<VirtualHost *:80>
    ServerName phpmyadmin.local
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html/phpmyadmin

    <Directory /var/www/html/phpmyadmin>
        AllowOverride All
    </Directory>

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

enable the phpmyadmin configuration site:
sudo a2ensite phpmyadmin.conf

In order to activate changes we restart the Apache server:
systemctl restart apache2

Because both of the local domains will be accessible through the same IP 127.0.0.1, we will add them to the /etc/hosts file:
sudo nano /etc/hosts
127.0.0.1       laravel.local
127.0.0.1       phpmyadmin.local
(now we can browse those two host entries inside a browser)
http://laravel.local
http://phpmyadmin.local


Now let's fix some warnings. To run composer without the need of sudo, we will run:
sudo chown -R $USER ~/.config/composer/vendor/bin/
this will give our user ownership over the composer vendor binaries.

Database setup
open up .env and set our details taken from the mariadb setup:
DB_DATABASE=laravel
DB_USERNAME=laravel
DB_PASSWORD=password


Development helping utilities
We will help VSCode to recognize methods inside facades and models:

composer require --dev barryvdh/laravel-ide-helper
php artisan ide-helper:generate
php artisan ide-helper:meta
php artisan ide-helper:models --nowrite
(--nowrite places the generated phpdoc in a separate helper file instead of writing it into the model classes)
To have your IDE auto-regenerate these helper files whenever the versions of the models/packages/facades change, place the following inside the scripts section of composer.json:
"post-update-cmd": [
        "Illuminate\\Foundation\\ComposerScripts::postUpdate",
        "@php artisan ide-helper:generate",
        "@php artisan ide-helper:meta",

        "@php artisan ide-helper:models --nowrite"
]


Navigation inside Visual Studio Code
inside /var/www/html/laravel we start the editor:
code .
Inside VSCode:
Press alt + c to toggle case sensitive highlighting and install the following extensions:
Laravel blade snippets, Laravel extra IntelliSense, Laravel goto view, PHP debug, PHP intelephense, PHP namespace resolver. (and optional: Local History)
Go to "User Settings" > "Extensions" > "Blade Configuration" and enable formatting.
Notes on general navigation and debugging through code when developing:
With Ctrl+click we can navigate through locations such as controllers, classes, and blade templates. To return back to the previous location just press: Ctrl+Alt+-.
To search inside of all files inside the project we can use: CTRL+SHIFT+F
In order to list all available classes, methods and properties inside of a file just use:
CTRL+Shift+O
beforehand is needed from preferences to disable the integrated suggestions for php: "php.suggest.basic": false

We will install debugbar for in-browser debugging purposes:
sudo composer require barryvdh/laravel-debugbar --dev
Debugbar very nicely displays the queries, views and routes used by the current request while browsing. To debug inside the code, we can also use: dd('what we want to debug'); In case we have lots of variables, just place them inside of an array:
$test[]=$var1;
$test[]=$var2;
and then dd($test); will output them all. Try also dd(__METHOD__); It will give you the invocation method location and name.
For adding/removing multiple lines comments toggle: Ctrl + /
in case you would like to know all the available routes, you can use: php artisan route:list

Let's go to the frontend part:
Let's install npm; here is how to get its latest version:
curl https://www.npmjs.com/install.sh | sudo sh
Now let's install the latest version of nodejs from nodesource:
curl -sL https://deb.nodesource.com/setup_13.x | sudo -E bash -
sudo apt-get install -y nodejs

We will require the user interface library and will place it inside as a development dependency:
composer require laravel/ui --dev
Now we add VUE as user interface framework (if you don't want to create the default authentication just omit --auth)
php artisan ui vue --auth
We will install the required laravel UI package dependencies using npm, and when we have them we will compile a development version of all the front-end JavaScript and CSS styles:
npm install && npm run dev
In case of problems during the packages' installation we will do:
npm cache clean --force
and then repeat the last build with npm install && npm run dev
(if you look at the contents of webpack.mix.js, you will see that the paths point from the /resources/ to the /public/ directory, so when running npm run dev we actually grab and compile those resources and place the output inside the /public/css and /public/js directories)
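For reference, the default webpack.mix.js shipped with Laravel at that time looks roughly like this (a sketch - exact paths may differ between versions):

const mix = require('laravel-mix');

mix.js('resources/js/app.js', 'public/js')
    .sass('resources/sass/app.scss', 'public/css');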

After our DB connection is set up we can run php artisan migrate to create our first migrations. This is important because these migrations create the tables needed for authentication. When the migration completes, we can test the authentication by creating a user and logging into the Laravel system.

Now we will discuss the simple Laravel flow. In order to protect certain route such as /home, we can use:
$this->middleware('auth');
You can see the function inside the constructor of this controller: public function __construct(){$this->middleware('auth');}
At the same time, how do we know that when we type /home in our browser, exactly App/Http/Controllers/HomeController.php will be loaded?
Well, let's enter the /routes directory and open web.php
let's focus on the line:
Route::get('/home', 'HomeController@index')->name('home');
which says: if we request /home the HomeController with method index will be loaded.
Then if we follow HomeController.php's index method we see:
public function index(){return view('home');}
return view('home') simply means: load up home.blade.php - this is where our HTML template resides. Go ahead, make some changes inside and reload the /home URL to see them live:
Inside home.blade.php we can place:
@auth
You are logged in!
{{Auth::user()->name}}
@else
still not logged in.
@endauth
and inside welcome.blade.php:
@guest
Hi guest!
@endguest
Just save and see the change!

Congratulations!

Tuesday, November 05, 2019

Install PHP, MySql and Apache inside Ubuntu 19.10

This article shows how to install PHP, MySQL, and Apache on Ubuntu 19.10 in 6 easy steps. It can be useful when performing system administration or when starting to learn web development. Here is a video on the subject:

1. We will start with the Apache webserver:
sudo apt install apache2
which will install and start the Apache on port 80
test by pointing the browser to http://localhost

2. Next, let's fix some permissions and ownership:
go to /var/www/ (cd /var/www/) and if you type ls -la, you will see that all the files and directories are owned by root:root. Let's fix this in order to have access to the files inside this directory. First, we will put our current user into the www-data group:
sudo usermod -a -G www-data $USER
and then with sudo chown www-data:www-data /var/www -R
we will recursively set all the files inside /var/www to belong to www-data which our user just became a member of. Now check the result with ls -la.

After the ownership, we will take care of the files and directories permissions. We will set them with:
sudo chmod 0770 /var/www -R
With this line, we set read-write-execute permissions for the owner and the group on all the existing files and directories inside /var/www.

3. Editor
Install the Visual Studio Code:
sudo apt install code
then inside the /var/www directory type code .
Create a file index.php (for example with nano index.php) with the following content:
<?php
phpinfo();
?>

4. PHP
now it is time to add a way for Apache to interpret PHP code:
sudo apt install libapache2-mod-php7.3
then restart the Apache server with sudo systemctl restart apache2
Point your browser again to http://localhost/index.php and you should be able to see the information from the phpinfo() function;

5. MySQL server
sudo apt install mysql-server
sudo mysql_secure_installation where please set a root password!
mysql -uroot -p (enter the password generated in the previous step)
when ready just type: use mysql; select plugin from user where User = 'root';
In the resulting table you should see: mysql_native_password; If not please type:
Alter user 'root'@'localhost' identified with mysql_native_password by 'mysql'; Here we set 'mysql' as the password and mysql_native_password as the authentication method, in order to be able to use and log in to MySQL databases from inside our applications;
followed by: flush privileges; for the changes to be applied;

6. PHP-MySql connection
Exit the MySQL prompt and type:
sudo apt install php7.3-mysql
and again restart the Apache server with sudo systemctl restart apache2
Now paste the following code inside the index.php file and run again index.php in the browser:

<?php
$servername = "localhost";
$username = "root";
$password = "mysql";
try {
$conn = new PDO("mysql:host=$servername;dbname=mysql", $username, $password);
$conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
echo "Connected successfully";
}
catch(PDOException $e)
{
echo "Connection failed: " . $e->getMessage();
}
?>
You should be able to see: Connected successfully!
Congratulations!

Sunday, November 03, 2019

Install NodeJS and Angular on Windows 10 WSL2

Let's see how we can set up Node.js, npm and Angular under the Windows Subsystem for Linux (WSL 2), so that we can later do our web development projects or try examples from Angular courses. You can also watch the video on the installation.



We will first enable WSL 2 in order to be able to support and load Linux systems:
launch PowerShell by typing powershell and, with a right click, running it in administrator mode. Paste the following commands, which will enable WSL as well as the Virtual Machine Platform:
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux 
Enable-WindowsOptionalFeature -Online -FeatureName VirtualMachinePlatform
wsl --set-default-version 2
The last line sets the newer, more performant version 2 of WSL as the default when installing an OS such as Ubuntu.

Next, we will go to the Microsoft Store, download and launch the Ubuntu application. When launching the application, you might be prompted to restart your computer. Then try to launch the Ubuntu application again by just typing ubuntu. It will ask you to set up a default user and password so you can access the Ubuntu system.

Now it is time to update the local distribution packages with:
sudo apt update && sudo apt dist-upgrade

Installing Angular
Since the Ubuntu version provided in WSL does not ship the latest Node.js, we will go to https://github.com/nodesource/distributions
and then install the latest available node version
curl -sL https://deb.nodesource.com/setup_13.x | sudo -E bash -
sudo apt-get install -y nodejs


We are ready to install the Angular CLI:
sudo npm i -g @angular/cli
(we install the package globally, so to be able to execute ng command from anywhere inside our system)
Now we can type ng new new_project followed by cd new_project and ng serve
You can browse the newly created project under http://localhost:4200

Congratulations!

Saturday, November 02, 2019

Install VirtualBox under Ubuntu 19.10

You may have noticed that VirtualBox has problems installing via the usual apt install method on Ubuntu 19.10. The reason is that it relies on old libraries from Ubuntu 19.04 which conflict with the current ones. When having such compatibility problems, there are also interesting solutions.
Reference: Practical Ubuntu Linux Server for beginners
You can watch the video for more details:



First, uninstall any previous VirtualBox leftovers that you might have with sudo apt remove virtualbox
Just go to https://www.virtualbox.org/wiki/Testbuilds and download the test build for Linux64-bit
then do: chmod +x file.run (where the file is the downloaded file)
and just run: sudo ./file.run

And that's it, the installer will run and you'll have the newest version of VirtualBox under Ubuntu 19.10 running.

Notes:
- Please also check the version of your kernel (uname -r); for now, VirtualBox supports kernel 5.3, so running anything above this version will not allow the VirtualBox modules to be compiled into the kernel and run.
- Your further virtual machines will reside inside the /root/ directory
- In order to remove the VirtualBox, you can run ./file.run uninstall

Congratulations!

Wednesday, October 30, 2019

Optimize Ubuntu for speed and performance

Here are some ways to optimize your Ubuntu system to take fewer resources and to be more performant. If you are interested there is a complete course on ubuntu administration.

You can take a look at the video:



1. I advise you to first take a look at Conky as a hardware monitoring application
sudo apt install conky-all
and then run conky
From there just monitor which resources are fully utilized such as Disks, CPU, and Memory. This way you can really understand if you need to buy new hardware.

2. Use Lubuntu
sudo apt install lubuntu-desktop
you will be amazed by the performance gains.

3. Clean up your system using bleachbit
https://www.bleachbit.org/download

4. Tab Wrangler - this addon for Firefox or Chrome will automatically close inactive tabs, thus freeing up precious memory

5. Services:
systemd-analyze blame - will output all the services loaded at bootup, sorted by how much time they take. Feel free to disable, with systemctl disable service_name, those that you don't need.
You can inspect why certain service takes too long by typing:
systemctl status udisks2
and then
systemd-analyze critical-chain udisks2.service
(here we are inspecting the udisks2.service)
journalctl -b | grep udisks2
will show you even more detailed information about a particular service
Additional:
- You can also disable package indexing with sudo apt-get purge apt-xapian-index
- If you are not using thin clients or servers that need network access during boot/configuration, you can also do:
sudo systemctl disable NetworkManager-wait-online.service
- Do check that the UUIDs listed in blkid and /etc/fstab match up, and edit /etc/fstab accordingly.

Extra note: install a kernel modification such as Xanmod, which optimizes performance for desktop users:
echo 'deb http://deb.xanmod.org releases main' | sudo tee /etc/apt/sources.list.d/xanmod-kernel.list && wget -qO - https://dl.xanmod.org/gpg.key | sudo apt-key add - 
sudo apt update && sudo apt install linux-xanmod
I am really impressed by the performance of this kernel mod.


Congratulations and enjoy the course.

Monday, October 28, 2019

Install PHPMyAdmin under Ubuntu 19.10

Here is how to install PHPMyAdmin on Ubuntu 19.10. If you are interested in working within the Ubuntu environment, I would recommend taking a more comprehensive Ubuntu course.
You can watch the following video for reference:

The steps are as follows:

1. Apache server
sudo apt install apache2
you can type: http://localhost to see if apache works

2. Install the PHP interpreter, add PHP support for Apache
sudo apt install php libapache2-mod-php

then go to /var/www/html and set the correct read/write permissions for our current user:
sudo chown $USER:$USER /var/www -R

create a new file index.php with:

echo "hello from php";
?>
and test in the browser http://localhost - you should be able to see the output: hello from php

3. Mysql server
sudo apt install mysql-server php-mysql
this will install the MySQL server as well as enable PHP to run MySQL queries
sudo mysql_secure_installation
will set our initial root password
just set the password and answer Y to flush privileges to be able to apply the new password to MySQL.
sudo mysql
and run: ALTER USER 'root'@'%' IDENTIFIED WITH mysql_native_password BY 'password';
this will enable password authentication and set the MySQL root password to password.
Exit the MySQL client and let's test with mysql -uroot -p
and then enter the password: password

4. PHPMyAdmin
install composer via: sudo apt install composer
install minimum required libraries for PHPMyAdmin: sudo apt install php-xml php-zip
fetch and install PHPMyAdmin: composer create-project phpmyadmin/phpmyadmin

Congratulations and enjoy the course!

Tuesday, October 22, 2019

Laravel inside Docker as a non root user

Laravel installation under Docker may seem a painful experience, but at the same time it is a rewarding learning experience. The following are the steps for achieving a development environment for Laravel. For more information you can take a look at the Docker for web developers course, and also watch the following video for further details:


Let's assume you've installed Docker on Ubuntu or Windows 10 WSL2 with:
# sudo apt install docker
# sudo apt install docker-compose

Initially, we will get the source files of Laravel from its GIT repository. First, inside a newly created directory, we will use: git clone https://github.com/laravel/laravel.git .

Let's now run the Laravel project deployment locally:
sudo apt install composer && sudo composer install
(because we would like to develop our code locally, so that the changes are reflected inside the docker container)

Then we will create our Dockerfile with the following content:
Be cautious when writing the docker-compose.yml YAML file further below: you will need to indent each nested element with spaces, increasing the indentation for each sub-element.

#we are copying the existing database migration files inside the docker container and are fetching and installing the composer dependencies without user interaction and without running the scripts defined in composer.json
FROM composer:1.9 as vendor
COPY database/ database/
COPY composer.json composer.json
COPY composer.lock composer.lock
RUN composer install --no-scripts --ansi --no-interaction

# we are installing node, creating inside our container /app/ directory and copying the requirements as well as the js, css file resources there
# Then we install all the requirements and run the CSS and JS preprocessors

FROM node:12.12 as frontend
RUN mkdir -p /app/public
COPY package.json webpack.mix.js  /app/
COPY resources/ /app/resources/
WORKDIR /app
RUN npm install && npm run production

# get php+apache image and install pdo extension for the laravel database
FROM php:7.3.10-apache-stretch
RUN docker-php-ext-install  pdo_mysql

# create new user www which will be running inside the container
# it will have www-data as a secondary group and will use the same id 1000 set inside our .env file

ARG uid
RUN useradd  -o -u ${uid} -g www-data -m -s /bin/bash www

#we copy all the processed laravel files inside /var/www/html
COPY --chown=www-data:www-data . /var/www/html
COPY --chown=www-data:www-data --from=vendor /app/vendor/ /var/www/html/vendor/
COPY --chown=www-data:www-data --from=frontend /app/public/js/ /var/www/html/public/js/
COPY --chown=www-data:www-data --from=frontend /app/public/css/ /var/www/html/public/css/
COPY --chown=www-data:www-data --from=frontend /app/mix-manifest.json /var/www/html/mix-manifest.json

# allow the storage as well as logs to be read/writable by the web server(apache)
RUN chown -R www-data:www-data /var/www/html/storage

# setting the initial load directory for apache to be laravel's /public
ENV APACHE_DOCUMENT_ROOT /var/www/html/public
RUN sed -ri -e 's!/var/www/html!${APACHE_DOCUMENT_ROOT}!g' /etc/apache2/sites-available/*.conf
RUN sed -ri -e 's!/var/www/!${APACHE_DOCUMENT_ROOT}!g' /etc/apache2/apache2.conf /etc/apache2/conf-available/*.conf

# changing 80 to port 8000 for our application inside the container, because as a regular user we cannot bind to system ports.
RUN sed -s -i -e "s/80/8000/" /etc/apache2/ports.conf /etc/apache2/sites-available/*.conf

RUN a2enmod rewrite

# run the container as www user
USER www
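If you want to build the image on its own (normally the docker-compose file below passes the uid build argument for you), a minimal sketch would be (the image tag laravel-app is just an example name):
docker build --build-arg uid=$(id -u) -t laravel-app .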

Here are the contents of the .env file, which holds all the environment variables we would like to set and keep configurable outside of the container when it is built and run.

DB_CONNECTION=mysql
DB_HOST=mysql-db
DB_PORT=3306
DB_DATABASE=laravel
DB_USERNAME=laravel
DB_PASSWORD=mysql
UID=1000

Keep in mind that we are creating a dedicated MySQL user called laravel, and that we set UID=1000 so that the UID of the container user matches the UID of our host user.
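You can confirm the UID of your host user with:
id -u
and put that value into the .env file if it is not 1000.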

Next follows the docker-compose.yml file, which builds the multi-stage image defined above and wires the services together.

version: '3.5'

services:
  laravel-app:
    build:
      context: '.'
# first we set apache to be run under user www-data
      args:
        uid: ${UID}
    environment:
      - APACHE_RUN_USER=www-data
      - APACHE_RUN_GROUP=www-data

    volumes:
      - .:/var/www/html
# exposing port 8000 for our application inside the container, because apache, when run as a regular user, cannot bind to system ports
    ports:
      - 8000:8000
    links:
      - mysql-db

  mysql-db:
    image: mysql:8.0
# use mysql_native authentication in order to be able to login to MySQL server using user and password
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    volumes:
      - dbdata:/var/lib/mysql
    env_file:
      - .env
# setup a newly created user with password and full database rights on the laravel database
    environment:
      - MYSQL_ROOT_PASSWORD=secure
      - MYSQL_USER=${DB_USERNAME}
      - MYSQL_DATABASE=${DB_DATABASE}
      - MYSQL_PASSWORD=${DB_PASSWORD}

# create persistent volume for the MySQL data storage
volumes:
  dbdata:


Let's not forget the .dockerignore file:
.git/
vendor/
node_modules/
public/js/
public/css/
run/var/

Here we are just ensuring that those directories will not be copied from the host to the container.

Et voila!

You can now run:
docker-compose up
php artisan migrate
and start browsing your website on: 127.0.0.1:8000
Inside you can also invoke: php artisan key:generate
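Note that the mysql-db hostname only resolves inside the compose network, so if the migration cannot reach the database when run from your host, you can run artisan inside the container instead (laravel-app being the service name defined above):
docker-compose exec laravel-app php artisan migrate
docker-compose exec laravel-app php artisan key:generate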

Congratulations, you have Laravel installed as a non-root user!

Controlling CPU using cgroups in Ubuntu 19.10

Ubuntu has adopted the systemd way of controlling resources using cgroups. You can check what kind of resource controllers your system has by going into the virtual filesystem: cd /sys/fs/cgroup/. Keep in mind that most of those files are created dynamically when a service starts. These files (restriction parameters) also contain values that you can change.
For more information you can take a look at this course on Ubuntu administration here!
You can check the video for examples:
Since Linux has to manage shared resources, it keeps the common restrictions for a particular resource inside controllers, which are actually directories containing files (settings). CPU, memory and blkio are the main controllers, and each of them contains slice directories. To achieve more granular control, the slices represent system users, system services and virtual machines: control settings for user tasks live inside user.slice, system.slice is for services, while machine.slice is for running virtual machines. You can use the command systemd-cgtop to show the user, machine and system slices in real time, similar to top.
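For example, to look around (the exact list of controllers depends on your release and which services are running):
ls /sys/fs/cgroup/
systemd-cgtop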

For example:
If we go to /sys/fs/cgroup/cpu/user.slice we can see the settings for every user on the system, and we can get even more granular by exploring the user-1000.slice directory.
On Ubuntu, 1000 is the ID of the first created (current) user; you can check /etc/passwd for other user IDs.
The allowed CPU quota can be seen with: cat cpu.cfs_quota_us
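For example (a sketch, assuming the user-1000.slice directory exists, i.e. the user is logged in and the cpu controller is populated):
cd /sys/fs/cgroup/cpu/user.slice/user-1000.slice
cat cpu.cfs_quota_us cpu.cfs_period_us
A quota of -1 means no limit; otherwise the allowed CPU share is roughly the quota divided by the period.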

We can set hard and soft limits on the CPU:
Hard limit: by typing systemctl set-property user-1000.slice CPUQuota=50%
which will cut the allowed CPU usage in half.
You can use the stress command to test the change (sudo apt install stress). Then we type stress --cpu=3 (to load all 3 CPUs we currently have). In another terminal, we can check the CPU load with top; by pressing 1 (to show all CPUs) we will see that the system is not overloaded and is using only about 50% of its power.
Since we are changing a specific slice, the change will persist across reboots. We can reset the setting by using systemctl set-property user-1000.slice CPUQuota=""
We can set a soft limit using the CPUShares parameter: adding CPUShares=256 to the previous command will spread the load across multiple processes, with each of them receiving about 25% of the overall CPU power. If there is only one process running, CPUShares will still give it the full 100% of the CPU.
In other words, a soft limit only kicks in when there are competing programs or threads occupying the CPU: if we have 3 running copies of the same process, each of them won't be allowed to take more than 25% of the CPU load.
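A minimal sketch (assuming the same user-1000.slice as before):
sudo systemctl set-property user-1000.slice CPUShares=256
sudo systemctl set-property user-1000.slice CPUShares=""
The second command should reset the setting again, just like CPUQuota="" above.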

Here is another example:
systemd-run -p CPUQuota=25% stress --cpu=3
this will create a transient service that runs the stress program within the specified limits. The unit gets a random name, and we can stop it using: systemctl stop service_name.service.
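systemd-run prints the generated unit name when it starts; to find it later you can also list the transient run-* units (a quick sketch):
systemctl list-units 'run-*'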

Congratulations and enjoy the course!
