Ubuntu adopted systemd's way of controlling resources using cgroups. You can check what kind of resource controllers your system has by going into the virtual filesystem: cd /sys/fs/cgroup/. Keep in mind that most of those files are created dynamically when a service starts. These files (restriction parameters) also contain values that you can change.
Since Linux has to manage shared resources, it keeps the common restrictions for a particular resource inside controllers, which are actually directories containing files (settings). Cpu, memory, and blkio are the main controllers, and each of them has slice directories defined inside. To achieve more granular control over resources, the slices represent system users, system services, and virtual machines: control settings for user tasks are specified inside user.slice, system.slice is for the services, and machine.slice is for running virtual machines. You can use the command systemd-cgtop to show the user, machine, and system slices in real time, like top.
For example:
If we go to /sys/fs/cgroup/cpu/user.slice we can see the settings for every user on the system, and we can get even more granular by exploring the user-1000.slice directory.
On Ubuntu, 1000 is the user ID of the first created (current) user; we can check /etc/passwd for the other user IDs.
The allowed cpu quota can be seen with: cat cpu.cfs_quota_us
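For example, reading the quota together with the scheduling period shows the allowed percentage (a quick sketch; the user-1000.slice path follows the layout above):
cd /sys/fs/cgroup/cpu/user.slice/user-1000.slice
cat cpu.cfs_quota_us      # -1 means no limit is set
cat cpu.cfs_period_us     # usually 100000 microseconds (100 ms)
# allowed CPU percentage = quota / period * 100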
We can set hard and soft limits on the CPU:
Hard limit: by typing systemctl set-property user-1000.slice CPUQuota=50%
which will limit the user's CPU usage to half.
You can use the stress command to test the change (sudo apt install stress). Then type stress --cpu=3 (to overload all 3 CPUs we currently have). In another terminal, we can check the CPU load with top, and by pressing 1 (to show all the CPUs) we will see that the system is not overloaded and is using only about 50% of its power.
Since we are changing a specific slice, the change will persist across reboots. We can reset the setting by using systemctl set-property user-1000.slice CPUQuota=""
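Putting it together, a test session could look like this (a sketch, assuming your login session lives in user-1000.slice):
sudo systemctl set-property user-1000.slice CPUQuota=50%
stress --cpu=3 &      # load all 3 CPUs
top                   # press 1: the combined usage stays around 50%
kill %1               # stop the stress job
sudo systemctl set-property user-1000.slice CPUQuota=""   # remove the limit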
We can set a soft limit using the CPUShares parameter: just add CPUShares=256 to the previous command. The default weight is 1024, so 256 gives the slice about a 25% share of the overall CPU power when several processes compete. If there is only one process running, CPUShares will still let it use the full 100% of the CPU.
In this regard, the soft limit applies only when we have programs or threads competing for the CPU: in that case, with this weight, our processes won't be allowed to occupy more than about 25% of the CPU load.
Here is another example:
systemd-run -p CPUQuota=25% stress --cpu=3
this will create a transient service that runs the stress program within the specified limits. The command will create a unit with a random name, and we can stop the running service using: systemctl stop service_name.service.
Let's start with the following facts: there are three areas in Git - working directory, staging (index), and history (commits). All of them contain snapshot versions of your working files.
From then on a sample Git workflow is as follows:
1. We need to set up a repository with git init. This will turn our local folder into a Git repository. If we want to start from an already existing project, we can copy the remote project to our local machine with: git clone git_address
From then on, Git will track all the new files/directories, file changes, and deletions inside our directory space. Note: if we don't want certain files to be tracked by Git, we can list them inside a .gitignore file (files such as the /node_modules/ directory).
2. Then, when we have completed our work on files (created new ones or updated code functionality) we can add them inside the staging area using git add filename.ext
3. As the last step on our local machine, we save all the staged files as one commit inside the history area: git commit -m "message of the commit"
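For instance, a first session in a fresh repository could look like this (the file name is illustrative):
git init
echo "hello" > readme.txt
git add readme.txt
git commit -m "initial commit"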
Why use branches
Branches are useful because they enable multiple developers to work on several features, fix bugs independently, etc., all on a single repository.
To create a branch you can use: git branch [branch_name]. Since you are currently checked out on some branch, its last commit will be used as the starting point of the newly created branch. To go to the new branch you use: git checkout [branch_name]. Note: if you would like to start developing from a certain commit onwards, first check out that commit and then type git checkout -b [branch_name]. This will automatically create a new branch whose initial commit is the last checked-out commit.
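As a quick sketch (the hash and the branch name are illustrative):
git log --oneline        # find the commit, e.g. 3ab12cd
git checkout 3ab12cd     # detached HEAD at that commit
git checkout -b hotfix   # new branch starting from it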
Must know: Git uses both a HEAD pointer (to a branch reference or a commit) and a branch pointer (to a commit). By default they go together and point to the same place.
We can view all the saved commits we made with: git log. Please pay attention to the commit IDs: they are unique, and we can use them to navigate between the commits using git checkout commit_id -- file_to_restore.ext or git reset --hard commit_id. These commands will change the contents of the files, create new ones, and even delete some.
The differences between the two commands are:
- with git reset we revert all the files to a previous commit, while with git checkout we can also choose which specific files to revert
- git checkout: detaches the HEAD pointer from the currently checked-out branch (reference) and points it at the target commit. Checkout is used as a temporary switch to a previous commit. Detached HEAD means we are on a commit but not checked out on any branch. From there we have 2 options: create a new branch from the current commit, or return to (check out) an existing branch. To create a new branch and work from this commit onward we use: git checkout -b new_branch. To return back while checking out a particular branch we use: git checkout master or git checkout @{-1}
- git reset: moves both HEAD and the branch ref to the target commit (while keeping the HEAD pointer attached to the currently checked-out branch). Reset is the preferred way of undoing actions (while programming) and returning to previous code states (commits).
Errors can occur while playing with those commands: for example, when we try to check out a different branch while we still have uncommitted changes on the current one. In such cases we either commit the work and switch to the other branch (checkout), or, if we are not ready to commit, we can temporarily save the work by typing: git stash. Then we can reset/checkout to another branch/commit. Later, when we want to reapply our changes to the code we are exploring, we can just get them out of the stash with: git stash pop.
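A typical stash round-trip looks like this (the branch name is illustrative):
git stash                  # shelve the uncommitted changes
git checkout other_branch  # the working tree is clean now
git checkout @{-1}         # come back to the previous branch
git stash pop              # reapply the changes and drop the stash entry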
Remotes
In order to share work with other programmers, we want to send changes to and pull (fetch + merge) changes from remote repositories, called remotes. From then on we can create a local branch that tracks the equivalent branch on the remote repository (a remote-tracking branch). The easiest way to create such a branch is to use the same name for the remote and the local branch when creating the local one.
Steps:
1. create & check out a branch with: git checkout -b new_branch
2. set the tracking for the local branch (to track the remote branch): git branch --set-upstream-to=origin/remote_branch_name local_branch_name
2a. check the tracking of the branches with: git remote show origin
3. use git pull to get the information and merge it into our branch, or just git fetch (to get the data) and git reset --hard origin/remote_branch_name to synchronize the working directory and staging area with the fetched information from the remote repository.
4. An editor may appear asking for a merge commit message, so just write "synchronizing branches", save, and exit the editor.
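The whole sequence could look like this (the branch name is illustrative):
git checkout -b feature_x
git branch --set-upstream-to=origin/feature_x feature_x
git remote show origin     # verify the tracking
git pull                   # fetch + merge the remote branch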
Push code to remote
At one point we would like other developers to see our code and probably to incorporate it into the main master/development branch of the original repository we cloned. For that we can create a pull request. Advice: beforehand, pull all the changes from the remote master and merge them with the local branch. Why? In order not to overwrite others' code: if in the meantime, while we were working, someone else made a change to the master branch, our codebase is now older than theirs. So first we do: git pull origin master, and then we push our local branch remotely via: git push --set-upstream origin our_local_branch_name
To initiate the pull request (requesting that the remote server accept our changes and later merge our code into the main branch (master/develop)) we use: git request-pull [address of the remote repository] [our_local_branch_name]. Of course, you can do the last step using a GUI such as GitHub or Bitbucket.
Resolving merge conflicts
While merging branches, it is possible that others have rewritten (made changes to) the same lines of code as you. In order to resolve such conflicts while making a pull request, your favorite editor will show the code with the differences illustrated, so you can choose what to keep and what to discard. Next you will have to create a merge commit: git add the conflicted files, then git commit -m "conflict resolution", and git push will push the resolved (non-conflicting) version of the code so you can restart the pull request successfully.
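For illustration, a conflicted file contains markers like these (the contents are made up); after editing it to keep one version, you stage, commit, and push:
<<<<<<< HEAD
color = "blue";
=======
color = "red";
>>>>>>> feature_x
git add conflicted_file.js
git commit -m "conflict resolution"
git push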
Keep in mind that when dealing with remote branches, if we introduce mistakes to a remote branch we can use git revert. It will create an additional new commit which undoes our last commit.
Some more practical tips on working locally with GIT:
Undo changes before a commit is made: If we made changes on a wrong branch and would like to apply them to a different branch, we can save our current project state temporarily, just like using copy & paste to another file, with git stash. This will remove all the changes from the working tree and the staging area. Later we can switch to a branch (git checkout branch_name) or create a new one (git branch branch_name), and to apply the stashed changes we can use: git stash apply. If we would like to see which stash to apply, we can use: git stash list.
Undo changes after a commit is made: This time we will be working with the repository's HEAD pointer. First we get the hash of the commit we are interested in with: git log, for later use. Next we clean up the branch, reverting it to its previous commit (working dir, stage, and local repository) with: git reset --hard HEAD^. Now it is time to create and check out / switch to the preferred branch: git branch branch_name, git checkout branch_name (otherwise the HEAD pointer will stay detached). When on the branch, we point the working directory, staging area, and repository files to the commit we are interested in (taken from git log): git reset --hard hash_of_commit
alternative:
# Create a backup of the master branch
git branch backup_master
# Point master to '56e05fce' and
# make the working directory the same as '56e05fce'
git reset --hard 56e05fce
# Point master back to 'backup_master' and
# leave the working directory the same as '56e05fce'
git reset --soft backup_master
# Now the working directory is the same as '56e05fce' and
# master points to the original revision. Then we create a commit.
git commit -a -m "Revert to 56e05fce"
# Delete the unused branch
git branch -d backup_master
Forgetting to add files to a commit
Just add them with: git add file.txt and then use: git commit --amend --no-edit
Examples of git reset with the HEAD~n pointer notation
git reset --soft
- combine several commits into one:
1) move the branch pointer (together with HEAD) back 3 commits, keeping all changes staged: git reset --soft HEAD~3
2) since all the changed files are in the stage, we are ready to make one unifying commit
- reverse just the last commit made (without touching the stage and local files) with: git reset --soft HEAD~1
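For example, squashing the last three commits into one (the message is illustrative):
git reset --soft HEAD~3
git commit -m "one commit instead of three"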
git reset --hard
- reset the entire project to previous commit state: git reset --hard
git reset --mixed (the default mode)
- undo a previous commit:
git reset HEAD^ - if it is still not pushed to a remote repository
git revert HEAD - if it is pushed and public; this reverts it by creating a new (reverting) commit.
- unstage and uncommit, but keep the local file changes for editing
- create multiple commits from one, little by little with git add and git commit.
When to use cherry-picking?
For example, if you have fixed a bug in a branch or in master and would like to apply this commit to another branch, you just cherry-pick the particular commit and apply it where you would like.
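A sketch (the hash and the branch name are illustrative):
git log --oneline          # find the bug-fix commit, e.g. 3ab12cd
git checkout release_branch
git cherry-pick 3ab12cd    # apply just that commit here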
Also good use cases of Git, when to use:
rebase - to update a local branch with master's changes
merge - to merge local changes into master
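For example (the branch names are illustrative):
git checkout feature_x
git rebase master          # replay our commits on top of master
git checkout master
git merge feature_x        # merge the updated branch into master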
Forgot to create new branch and already made changes?
Just create a new branch with: git switch -c new_branch_name - your uncommitted changes will be carried over to the new branch.
phpmyadmin:
  depends_on:
    - db
  image: phpmyadmin/phpmyadmin:latest
  env_file: .env
  environment:
    PMA_HOST: db
    MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD
  ports:
    - 3333:80

volumes:
  wordpress:
  dbdata:
Then you can launch: docker-compose up. This will create networks between the containers and volumes (to store persistent data), pull the images, and configure everything in order to create and run the containers.
We will be bringing up MySQL, PHPMyAdmin, and WordPress containers (services).
You'll need to wait a bit until everything is initialized, and then you can browse http://127.0.0.1:80 for WordPress, as well as http://127.0.0.1:3333 for PHPMyAdmin. Please note that for PHPMyAdmin we need to use user: root and password: mysql_root
A bundler like webpack can help you:
- auto-minimize, compile, and transpile your modern (ES6 and above) JavaScript
- auto-reload your browser
- auto-include and merge external JavaScript files into one
and many more...
In this tutorial, we will focus on those three. First, download and install NodeJS, because we will need npm (node package manager) included in the NodeJS installation.
Prerequisites:
Have two files: index.html and main.js. Inside index.html, include main.js via a script tag: <script src="main.js"></script>
Inside main.js you can place any JavaScript code.
We will start by creating our first project. Why? Again, to have a portable version of our code which other developers can run on their machines. The project information will be stored inside the package.json file.
Just type: npm init and follow the questions. Next, we will install webpack with: npm install --save-dev webpack
Open package.json and you will see the devDependencies section with webpack inside. All upcoming package installations, or so-called dependencies, will have their place in our package.json file. As you can see, the --save-dev option installs the packages for development mode in the devDependencies section. This means that we can have production and development dependencies, and they can differ - which is nice, because you would like to include only the needed libraries in your production/live application.
You can also see that there is now a /node_modules/ directory - that is where all downloaded and installed packages, along with their dependent packages, are placed.
Upon compilation/transpiling, only parts of those packages (certain functions) will be used and have their place in the final development or production project.
By the way, if other users have your files (excluding /node_modules/, which is heavy in size), they just need to run npm install and they will have all the modules installed automatically, based on your package.json file.
We need to install webpack-cli as well. Please do so with npm: npm install --save-dev webpack-cli
And now we need to modify the "scripts" section of package.json to: "scripts": { "build": "webpack" }
Then just type npm run build and this will start webpack.
Create a folder /src and place index.html and main.js inside.
Inside package.json, replace main.js with index.js.
Rename main.js to index.js -> this is needed because webpack uses index.js as its default entry point.
Now run npm run build again and you will see a new directory: /dist. It is newly created by webpack and is where the production code resides, so you can open the files from /dist directly in your browser.
Next, we need a plugin that will minify and load/import our JavaScript directly into the HTML. We will use: html-webpack-plugin and html-loader. Install them with: npm install --save-dev html-webpack-plugin html-loader
Now it is time to create the webpack.config.js file with the following content:
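A minimal config wiring up these two plugins could look like this (the template path and output filename are assumptions):
const HtmlWebpackPlugin = require("html-webpack-plugin");

module.exports = {
  module: {
    rules: [
      {
        // run .html files through html-loader so the plugin can process them
        test: /\.html$/,
        use: [{ loader: "html-loader", options: { minimize: true } }]
      }
    ]
  },
  plugins: [
    // generate dist/index.html from our template and auto-inject the bundle
    new HtmlWebpackPlugin({
      template: "./src/index.html",
      filename: "./index.html"
    })
  ]
};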
Then npm run build.
And you can see that index.html is modified (inside the /dist folder). From /src/index.html you can now remove the <script> line (webpack will auto-include it for us).
You can test the code from the /dist directory - now everything works!
Let's apply Babel to be able to use ES6 on a wide range of browsers! npm i --save-dev babel-loader @babel/core @babel/preset-env
create .babelrc file and place there:
{
"presets":[ "@babel/present-env" ]
}
Now add a babel rule to the module.rules section of webpack.config.js:
{
  test: /\.js$/,
  exclude: /node_modules/,
  use: [{ loader: "babel-loader" }]
}
Type npm run build again and you can see that the project now transpiles ES6 into ES5!
Let's do some live code reloading: npm i --save-dev webpack-dev-server
then in package.json:
place new line inside of scripts: "dev":"webpack-dev-server"
Now: npm run dev
Load up your website and then modify something inside of your JavaScript file.
You will see how the browser auto-reloads and you can see your changes!
Here we will be doing an installation of a development/production environment for PHP and MySQL using Docker under Ubuntu.
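If Docker is not installed yet, on Ubuntu the setup typically boils down to something like this (a sketch; package names can vary between releases):
sudo apt update
sudo apt install docker.io docker-compose   # Ubuntu's packaged versions
sudo usermod -aG docker $USER               # let our user run docker without sudo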
Now we either exit and re-run the terminal or type:
newgrp docker
to switch our group.
We can test if Docker's installation is successful with:
docker run hello-world
Keep in mind that we can check what images we have in our system via:
docker image ls
and our containers via
docker container ls
With
docker rmi hello-world:latest
we will remove the just-installed image - but this works only if no container is using it.
Let's check once again
docker container ls -a
which will list all the containers: running or not. We see our image is placed inside a container.
As a rule: if we want to remove an image, first we have to remove its container.
So we look up the container name. It is usually auto-assigned by Docker and in our case is: relaxed_cray, and then we type
docker container rm relaxed_cray
Now we can remove the image with
docker rmi hello-world:latest
Setting up the PHP development environment
We will make a directory web_dev :
mkdir web_dev
and will go inside:
cd web_dev
Then we will create docker-compose file with:
nano docker-compose.yml
Inside we will place the following config. Keep in mind that for the indentation we are using 2 spaces!
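A minimal docker-compose.yml for this setup could look like the following sketch (the service name and the image tag are assumptions - pick whatever image you prefer):
version: "3"
services:
  web:
    image: php:8.1-apache      # assumed image; see hub.docker.com
    ports:
      - 8008:80                # local 8008 -> container 80
    volumes:
      - ./php:/var/www/html    # local php/ dir mapped into the container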
Short explanation: under volumes: we specify which local directory will be connected with the container directory. Whenever we change something locally, the container will reflect and display the changes. For the ports: when we open/browse local port 8008 (127.0.0.1:8008), Docker will redirect us to port 80 of the container (achieved via port forwarding).
For the PHP image, you can choose whatever image you prefer from hub.docker.com
Next run: docker-compose up. This will read the docker-compose file and create the container by pulling the different parts/layers of the packages; later it will create a default network for our container and load up the PHP image we specified.
The installation is ready, but if you want to display something practical, you have to create an index.php file inside the newly created local php directory (which stores our PHP files).
First, it is good to change the ownership of the directory with:
sudo chown your_user:your_user php/ -R
Then with
sudo nano index.php
we can type inside:
<?php
echo "Hello from docker";
?>
Then again run docker-compose up. Also, try changing the .php file again and refresh the browser pointing to 127.0.0.1:8008
MySQL support
Let's create a new file, Dockerfile, inside the php directory. Place inside:
FROM php:8.1-apache
RUN apt-get update && apt-get upgrade -y
RUN docker-php-ext-install mysqli
EXPOSE 80
This will base our image on the PHP image that we already have, update the container's packages, install the PHP extension that provides MySQL support (mysqli), and expose port 80.
Next, we will customize the docker-compose.yml :
We will replace the line: image: php
with
build:
  context: ./php
  dockerfile: Dockerfile
This will read the Dockerfile which we set up previously and create the web service from it.
Notes: We are using the default authentication plugin for MySQL in order to be able to log in to the MySQL database; Docker hardcodes mysql_user, mysql_password, and mysql_database to devuser, devpass, and test_db, and exposes port 6033 externally for the MySQL service.
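Based on these notes, the db service inside docker-compose.yml could look roughly like this (a sketch; the image tag and the root password value are assumptions):
  db:
    image: mysql:8.0
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_ROOT_PASSWORD: root   # assumed value
      MYSQL_USER: devuser
      MYSQL_PASSWORD: devpass
      MYSQL_DATABASE: test_db
    ports:
      - 6033:3306                 # external 6033 -> MySQL's 3306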
One last change: we would like the MySQL (db) service to start before the web service, so we will add to the web service config:
depends_on:
  - db
To test the PHP-MySQL connection inside of our index.php file we can specify:
$host = 'db'; // the name of the MySQL service inside the docker-compose file
$user = 'devuser';
$password = 'devpass';
$db = 'test_db';
$conn = new mysqli($host, $user, $password, $db);
if ($conn->connect_error) {
    echo 'connection failed: ' . $conn->connect_error;
} else {
    echo 'successfully connected to MySQL';
}
If you are experiencing problems, you can remove the images already created:
docker image ls -a
docker rmi ...fill here image names...
then run docker-compose up again and browse: 127.0.0.1:8008