You may have noticed that VirtualBox has problems installing via the usual apt install method on Ubuntu 19.10. The reason is that it relies on old libraries from Ubuntu 19.04 which conflict with the current ones. When you hit such compatibility problems, there are also some interesting solutions. Reference: Practical Ubuntu Linux Server for beginners
You can watch the video for more details:
First, uninstall any previous VirtualBox leftovers that you might have with sudo apt remove virtualbox
Just go to https://www.virtualbox.org/wiki/Testbuilds and download the test build for Linux 64-bit,
then do: chmod +x file.run (where file.run is the downloaded file)
and just run: sudo ./file.run
And that's it, the installer will run and you'll have the newest version of VirtualBox running under Ubuntu 19.10.
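Putting the steps together in one place (file.run stands for whatever test-build installer you downloaded):

# remove any previous installation first
sudo apt remove virtualbox
# make the downloaded test build executable and run it
chmod +x file.run
sudo ./file.run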
Notes:
- Please also check your kernel version (uname -r). At the time of writing, VirtualBox supports kernels up to 5.3, so running anything above that version will prevent the VirtualBox modules from being compiled and loaded into the kernel.
- Any further virtual machines you create will reside inside the /root/ directory
- To remove VirtualBox later, you can run sudo ./file.run uninstall
Here are some ways to optimize your Ubuntu system so it consumes fewer resources and performs better. If you are interested, there is a complete course on Ubuntu administration.
You can take a look at the video:
1. Monitor your resources. I advise you to first take a look at Conky as a hardware monitoring application: sudo apt install conky-all
and then run conky
From there, just monitor which resources are fully utilized, such as disks, CPU, and memory. This way you can really understand whether you need to buy new hardware.
2. Use Lubuntu: sudo apt install lubuntu-desktop
You will be amazed by the performance gains.
3. Clean up your system using BleachBit:
https://www.bleachbit.org/download
4. Tab Wrangler: this add-on for Firefox or Chrome will close inactive tabs, freeing up precious memory.
5. Services: systemd-analyze blame will list all the services loading at boot, ordered by how much time they take. Feel free to disable the ones you don't need with systemctl disable service_name.
You can inspect why a certain service takes too long by typing: systemctl status udisks2
and then systemd-analyze critical-chain udisks2.service
(here we are inspecting udisks2.service), while journalctl -b | grep udisks2
will show you even more detailed information about a particular service.
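The whole inspection workflow in one place (udisks2 is just the example used above; substitute any slow service):

# list services by how long they took at boot
systemd-analyze blame
# inspect one slow service in detail
systemctl status udisks2
systemd-analyze critical-chain udisks2.service
journalctl -b | grep udisks2
# disable a service you decided you don't need
sudo systemctl disable service_name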
Additional:
- You can also disable package indexing with sudo apt-get purge apt-xapian-index
- If you are not using thin clients or servers that need network access during boot/configuration, you can also do: sudo systemctl disable NetworkManager-wait-online.service
- Do check that the UUIDs listed in blkid and /etc/fstab match up, and edit /etc/fstab accordingly.
Extra note: Install a kernel modification such as Xanmod, which optimizes performance for desktop users:
echo 'deb http://deb.xanmod.org releases main' | sudo tee /etc/apt/sources.list.d/xanmod-kernel.list
wget -qO - https://dl.xanmod.org/gpg.key | sudo apt-key add -
sudo apt update && sudo apt install linux-xanmod
I am really impressed by the performance of this kernel mod.
Here is how to install phpMyAdmin on Ubuntu 19.10. If you are interested in working within the Ubuntu environment, I would recommend taking a more comprehensive Ubuntu course.
You can watch the following video for reference:
The steps are as follows:
1. Apache server: sudo apt install apache2
You can browse to http://localhost to see if Apache works.
2. Install the PHP interpreter and add PHP support for Apache: sudo apt install php libapache2-mod-php
Then go to /var/www/html and set the correct read/write permissions for our current user: sudo chown $USER:$USER /var/www -R
Create a new file index.php with:
<?php echo "hello from php"; ?>
and test in the browser at http://localhost/index.php - you should see the output: hello from php
3. MySQL server: sudo apt install mysql-server php-mysql
This will install the MySQL server as well as enable PHP to run MySQL queries. sudo mysql_secure_installation
will set our initial root password;
just set the password and answer Y to flush privileges so the new password is applied to MySQL. Then run sudo mysql
and execute: ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password';
This will enable password authentication and set the MySQL root password to password (pick a stronger one in practice).
Exit the MySQL client and let's test with mysql -uroot -p
and then enter the password: password
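The steps above stop before installing phpMyAdmin itself; presumably the final step is the standard package install (a minimal sketch, assuming Ubuntu's phpmyadmin package and its usual debconf prompts):

sudo apt install phpmyadmin
# when prompted, select apache2 as the web server and let dbconfig-common set up the phpmyadmin database
# then browse to http://localhost/phpmyadmin and log in as root with the password set above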
Installing Laravel under Docker can seem painful, but at the same time it is a rewarding learning experience. The following are the steps for setting up a Laravel development environment. For more information you can take a look at the Docker for web developers course, and also watch the following video for further details:
Let's assume you've installed Docker on Ubuntu or Windows 10 WSL2 with:
# sudo apt install docker.io
# sudo apt install docker-compose
Initially, we will get the Laravel source files from its Git repository. First, inside a newly created directory, we will run: git clone https://github.com/laravel/laravel.git .
Let's now install the project's dependencies locally:
sudo apt install composer && composer install
(because we would like to develop our code locally, so that the changes are reflected inside the Docker container)
Then we will create our Dockerfile with the following content:
Be cautious when writing the YAML files: you will need to indent each element with spaces, incrementing the indentation for each sub-element.
# we copy the existing database migration files into the container, then fetch and install the Composer dependencies without user interaction and without running the scripts defined in composer.json
FROM composer:1.9 as vendor
COPY database/ database/
COPY composer.json composer.json
COPY composer.lock composer.lock
RUN composer install --no-scripts --ansi --no-interaction
# we install node, create an /app/ directory inside our container, and copy the requirements as well as the JS and CSS resource files there
# Then we install all the requirements and run the CSS and JS preprocessors
FROM node:12.12 as frontend
RUN mkdir -p /app/public
COPY package.json webpack.mix.js /app/
COPY resources/ /app/resources/
WORKDIR /app
RUN npm install && npm run production
# get php+apache image and install pdo extension for the laravel database
FROM php:7.3.10-apache-stretch
RUN docker-php-ext-install pdo_mysql
# create a new user www which will run inside the container;
# it will have www-data as a secondary group and the same id 1000 set inside our .env file,
# passed in as the uid build argument from docker-compose.yml
ARG uid
RUN useradd --no-log-init -u $uid -G www-data -m www
# allow the storage as well as the logs to be read/writable by the web server (Apache)
RUN chown -R www-data:www-data /var/www/html/storage
# setting the initial load directory for apache to be laravel's /public
ENV APACHE_DOCUMENT_ROOT /var/www/html/public
RUN sed -ri -e 's!/var/www/html!${APACHE_DOCUMENT_ROOT}!g' /etc/apache2/sites-available/*.conf
RUN sed -ri -e 's!/var/www/!${APACHE_DOCUMENT_ROOT}!g' /etc/apache2/apache2.conf /etc/apache2/conf-available/*.conf
# changing port 80 to port 8000 for our application inside the container, because as a regular user we cannot bind to system ports
RUN sed -s -i -e "s/80/8000/" /etc/apache2/ports.conf /etc/apache2/sites-available/*.conf
RUN a2enmod rewrite
# run the container as www user
USER www
Here are the contents of the .env file, which contains all the environment variables we would like to set and keep configurable outside of the container once it has been built and run.
Keep in mind that we are creating a specific MySQL user named laravel, as well as setting its UID=1000 in order to keep the UIDs synchronized between our container user and our outside user.
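The .env contents are not shown here, but judging from the variables referenced by the docker-compose.yml below, a minimal sketch would look like this (the values are illustrative, and DB_HOST pointing at the mysql-db service name is an assumption):

# UID shared between the container user and the host user
UID=1000
# database settings consumed by both Laravel and the mysql-db service
DB_HOST=mysql-db
DB_DATABASE=laravel
DB_USERNAME=laravel
DB_PASSWORD=secret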
Next follows the docker-compose.yml file (the multi-stage container build itself lives in the Dockerfile above).
version: '3.5'
services:
  laravel-app:
    build:
      context: '.'
      args:
        uid: ${UID}
    # set apache to run under user www-data
    environment:
      - APACHE_RUN_USER=www-data
      - APACHE_RUN_GROUP=www-data
    volumes:
      - .:/var/www/html
    # exposing port 8000 for our application inside the container, because when run as a regular user apache cannot bind to system ports
    ports:
      - 8000:8000
    links:
      - mysql-db
  mysql-db:
    image: mysql:8.0
    # use mysql_native_password authentication in order to be able to log in to the MySQL server with user and password
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    volumes:
      - dbdata:/var/lib/mysql
    env_file:
      - .env
    # set up a newly created user with password and full rights on the laravel database
    environment:
      - MYSQL_ROOT_PASSWORD=secure
      - MYSQL_USER=${DB_USERNAME}
      - MYSQL_DATABASE=${DB_DATABASE}
      - MYSQL_PASSWORD=${DB_PASSWORD}
# create a persistent volume for the MySQL data storage
volumes:
  dbdata:
Let's not forget the .dockerignore file:
.git/
vendor/
node_modules/
public/js/
public/css/
run/var/
Here we are just ensuring that those directories will not be copied from the host to the container.
Et voila!
You can now run:
docker-compose up
and then, in another terminal, run the database migrations inside the app container: docker-compose exec laravel-app php artisan migrate
and start browsing your website on: 127.0.0.1:8000
Inside the container you can also invoke: docker-compose exec laravel-app php artisan key:generate
Congratulations, you have Laravel installed as a non-root user!
Ubuntu adopted systemd's way of controlling resources using cgroups. You can check what kind of resource controllers your system has by going into the virtual filesystem: cd /sys/fs/cgroup/. Keep in mind that most of those files are created dynamically when a service starts. These files (restriction parameters) also contain values that you can change.
For more information you can take a look at this course on Ubuntu administration here!
You can check the video for examples:
Since Linux has to arbitrate shared resources, it keeps common restrictions over a particular resource inside controllers, which are actually directories containing files (settings). cpu, memory, and blkio are the main controllers, and they have slice directories defined inside. To achieve more granular control over resources, the slices represent system users, system services, and virtual machines: for user tasks the control settings are specified inside the user.slice directory, system.slice is for the services, while machine.slice is for running virtual machines. You can use the command systemd-cgtop to show the user, machine, and system slices in real time, like top.
For example:
If we go to /sys/fs/cgroup/cpu/user.slice we can see the settings for every user on the system, and we can get even more granular by exploring the user-1000.slice directory.
On Ubuntu, 1000 is the ID of the first created (current) user; we can check /etc/passwd for other user IDs.
The allowed cpu quota can be seen with: cat cpu.cfs_quota_us
We can set hard and soft limits on the CPU:
Hard limit: by typing systemctl set-property user-1000.slice CPUQuota=50%
we cap this user's CPU usage at half of one CPU's time.
You can use the stress command to test the change (sudo apt install stress). Then type stress --cpu=3 (to load all 3 CPUs we currently have). In another terminal we can check the CPU load with top; after pressing 1 (to show every CPU) we will see that the system is not overloaded: the stress workers together only get about 50% of one CPU's time.
Since we are changing a specific slice, the change persists across reboots. We can reset the setting with systemctl set-property user-1000.slice CPUQuota=""
We can set a soft limit using the CPUShares parameter, by just adding CPUShares=256 to the previous command. Shares spread the load between competing processes, each receiving CPU time proportional to its shares (256 is a quarter of the default 1024). If there is only one process running, CPUShares will give it the full 100% of the CPU.
In this regard, a soft limit applies only when there are programs or threads competing for the CPU; only then is the slice held to its proportional part of the CPU load.
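Putting the hard-limit experiment above together in one place (assuming the first user with UID 1000 and a 3-core machine, as in the example):

sudo systemctl set-property user-1000.slice CPUQuota=50%   # hard-cap the slice at half of one CPU
stress --cpu=3   # in one terminal: load all three cores
top              # in another terminal: press 1; the stress workers share ~50% of one CPU
sudo systemctl set-property user-1000.slice CPUQuota=""    # remove the limit afterwards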
Here is another example:
systemd-run -p CPUQuota=25% stress --cpu=3
this will create a transient service that runs the stress program within the specified limits. The command will create a unit with a random name, and we can stop the running service using: systemctl stop service_name.service.
Let's start with the following fact: there are three areas in Git: working directory - staging (index) - history (commits). All of them contain snapshot versions of your working files. Here are two videos on the topic:
From then on a sample Git workflow is as follows:
1. We need to set up a repository with git init . This will turn our local folder into a Git repository. If we want to start from an already existing project instead, we can copy the remote project onto our local machine with git clone [git_address] .
From then on, Git will track all the new files/directories, file changes, and deletions inside our directory space. Note: if we don't want certain files to be tracked by Git, we can list them inside a .gitignore file (files such as the /node_modules/ directory).
2. Then, when we have completed our work on the files (created new ones or updated code functionality), we add them to the staging area using git add filename.ext
3. As the last step on our local machine, we save all the staged files as one commit inside the history area: git commit -m "message of the commit"
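The basic workflow in one place (filename.ext is just a placeholder):

git init .                              # or: git clone [git_address] .
# ...create or edit files...
git add filename.ext                    # stage the changes
git commit -m "message of the commit"   # save them to history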
Why use branches
Branches are useful because they enable multiple developers to work on several features, fix bugs, and so on, independently - all of this on a single repository.
To create a branch you can use: git branch [branch_name]. Since you are currently checked out on a branch, its last commit will be used as the starting point of the newly created branch. To go to the branch (its first commit) you use: git checkout [branch_name]. Note: if you would like to start developing from a certain commit onwards, just check out the commit first and then type git checkout -b [new_branch_name]. This will automatically create a new branch whose initial commit is the last checked-out commit.
Must know: Git uses both a HEAD pointer (to a branch reference or a commit) and a branch pointer (to a commit). By default, they go together and point to the same place.
We can view all the saved commits we made with git log. Please pay attention to the commit IDs: they are unique, and we can use them to navigate between commits using git checkout commit_id -- file_to_restore.ext or git reset --hard commit_id. These commands will change the contents of files, create new ones, and even delete them.
The differences between the two commands are:
- with git reset we revert all the files to the previous commit, while with git checkout we can choose which specific files to revert
- git checkout detaches only the HEAD pointer from the currently checked-out branch (reference) and moves it to the given commit. Checkout is used as a temporary switch to a previous commit.
A detached HEAD means we are on a commit but not checked out on any branch.
So we have 2 options: create a new branch from the current commit, or return to (check out) an existing branch. To create a new branch and work from this commit onward we use: git checkout -b new_branch. To return while checking out a particular branch we use: git checkout master or git checkout @{-1}
- git reset moves both HEAD and the branch ref to the reset commit (while keeping the HEAD pointer attached to the currently checked-out branch). Reset is the preferred way of undoing actions (while programming) and returning to previous code states (commits).
Errors can occur while playing with these commands: for example, when we try to check out a different branch while we still have uncommitted changes on the current one. In such cases we either commit the work and switch to the other branch (checkout), or, if we are not ready to commit, we temporarily save the work by typing git stash; then we can reset/checkout to another branch/commit. Later, when we want to reapply our changes to the code we are exploring, we just take them out of the stash with: git stash pop.
Remotes
In order to share work with other programmers, we push changes to and pull (fetch + merge) changes from remote repositories, called remotes. From then on we can create a local branch that tracks the equivalent branch on the remote repository (a remote-tracking branch). The easiest way to create such a branch is to use the same name for the remote and the local branch when creating the local one.
Steps:
1. create&checkout to branch with: git checkout -b new_branch
2. set the tracking for the local branch (to track the remote branch): git branch --set-upstream-to=origin/remote_branch_name local_branch_name
2a. check the tracking of the branches with: git remote show origin
3. use git pull to get the information and merge it into our branch. Or just git fetch (to get the data) and git reset --hard origin/remote_branch_name
to synchronize the working directory and staging area with the fetched information from the remote repository.
4. An editor may open asking for a merge commit message; just write "synchronizing branches", save, and exit the editor.
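Putting steps 1-3 together (assuming the branch is named new_branch both locally and on the remote):

git checkout -b new_branch
git branch --set-upstream-to=origin/new_branch new_branch
git remote show origin        # verify the tracking
git pull                      # fetch + merge the remote changes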
Push code to remote
At one point we would like other developers to see our code, and probably to incorporate it into the main master/development branch of the original repository we cloned. So we can create a pull request. Advice: beforehand, pull all the changes from the remote master and merge them into the local branch. Why? In order not to overwrite others' code: while we were working, someone else may have changed the master branch, making our codebase older than theirs. So first we do git pull origin master and then we push our local branch remotely via: git push --set-upstream origin our_local_branch_name
To initiate the pull request (requesting the remote server to accept our changes and later merge our code into the main branch (master/develop)) we use: git request-pull [start_commit] [remote_repository_address] [our_local_branch_name]. Of course, you can do this last step using a GUI such as GitHub or Bitbucket.
Resolving merge conflicts
While merging branches it is possible that others have rewritten (made changes to) the same lines of code as you. In order to resolve such conflicts while making the pull request, your editor will show the code with the differences marked, so you can choose what to keep and what to discard. Next you have to create a merge commit, as sketched below: git add the conflicted files, git commit -m "conflict resolution", and git push will push the resolved (non-conflicting) version of the code so you can restart the pull request successfully.
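Spelled out, those conflict-resolution steps look like this (conflicted_file.ext is a placeholder):

# after choosing what to keep in each conflicted file:
git add conflicted_file.ext
git commit -m "conflict resolution"
git push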
Keep in mind that when dealing with remote branches, if we introduce mistakes to a remote branch we can use git revert. It will create an additional new commit which undoes our last commit.
Some more practical tips on working locally with GIT:
Undo changes before a commit is made
If we made changes on the wrong branch and would like to apply them to a different one, we can temporarily save our current project state (much like copy&paste to another file) with git stash. This will remove the changes from the staging area and the working directory. Later we can switch to a branch (git checkout branch_name) or create a new one (git branch branch_name), and to apply the stashed changes we use git stash apply, as in the sketch below. If we would like to see which stash to apply, we can use git stash list.
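A minimal sketch of that stash workflow (branch_name is a placeholder):

git stash                 # shelve the uncommitted changes
git checkout branch_name  # or create it first: git branch branch_name
git stash list            # see which stash to apply
git stash apply           # reapply the shelved changes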
Undo changes after a commit is made
This time we will be working with the repository's HEAD pointer. First we get the hash of the commit we are interested in with git log, for later use. Next we clean up the branch, reverting it to its previous commit (working dir, stage, and local repository) with git reset --hard HEAD^. Now it is time to create and checkout/switch to the preferred branch: git branch, git checkout branch_name (otherwise the HEAD pointer will be detached). Once on the branch, we point the local dir, staging, and repository files to the commit we are interested in (taken from git log): git reset --hard hash_of_commit
An alternative:
# Create a backup of the master branch
git branch backup_master
# Point master to '56e05fce' and
# make the working directory the same as '56e05fce'
git reset --hard 56e05fce
# Point master back to 'backup_master' and
# leave the working directory the same as '56e05fce'
git reset --soft backup_master
# Now the working directory is the same as '56e05fce' and
# master points to the original revision. Then we create a commit.
git commit -a -m "Revert to 56e05fce"
# Delete the unused branch
git branch -d backup_master
Forgetting to add files to a commit
Just add them with git add file.txt and then use: git commit --amend --no-edit
Examples of git reset with the HEAD~n pointer notation
git reset --soft
- combine several commits into one (see the sketch below):
1) reset just the HEAD pointer: git reset --soft HEAD~3
2) since all the changed files are in the stage, we are ready to make a unifying commit
- reverse just the last commit (without touching the stage and the local files) with: git reset --soft HEAD~1
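For instance, squashing the last three commits into one looks like this:

git reset --soft HEAD~3            # move HEAD back 3 commits; the changes stay staged
git commit -m "one unifying commit"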
git reset --hard
- throw away all uncommitted changes, resetting the entire project to the last committed state: git reset --hard
git reset --mixed (the default mode)
- to undo a previous commit we can use:
git reset HEAD^ - if it has not yet been pushed to a remote repository
git revert HEAD - if it has been pushed and is public; this undoes it by creating a new (reverting) commit.
- unstage and uncommit, but keep the local file changes for editing
- create multiple commits from one, little by little with git add and git commit.
When to use cherry-picking?
For example, if you have fixed a bug in a branch or in master and would like to apply this commit to another branch, you just cherry-pick the particular commit and apply it where you would like (see below).
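A minimal sketch (commit_id and target_branch are placeholders):

git checkout target_branch
git cherry-pick commit_id    # apply the bug-fix commit onto this branch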
Also, good use cases of Git - when to use:
- rebase: to update a local branch with changes from master
- merge: to merge local changes into master
Forgot to create a new branch and already made changes?
Just create a new branch with git switch -c new_branch_name
(This phpmyadmin service definition belongs under the services: key of a docker-compose.yml, alongside the WordPress and MySQL (db) services, which are not shown here.)

phpmyadmin:
  depends_on:
    - db
  image: phpmyadmin/phpmyadmin:latest
  env_file: .env
  environment:
    PMA_HOST: db
    MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD
  ports:
    - 3333:80
volumes:
  wordpress:
  dbdata:
Then you can launch: docker-compose up. This will create networks between the containers and volumes (to store persistent data), pull the images, and configure them in order to create and run the containers.
We will be bringing up the MySQL, PHPMyAdmin, and WordPress containers (services).
You'll need to wait a bit until everything is initialized, and then you can browse http://127.0.0.1:80 for WordPress
as well as http://127.0.0.1:3333 for PHPMyAdmin. Please note that for PHPMyAdmin we need to log in as user root with the MYSQL_ROOT_PASSWORD value from the .env file (mysql_root in this example).