Friday, September 13, 2019

Install and configure Webpack

Why use Webpack?
part of the course: JavaScript for beginners - learn by doing

  • to automatically minify, compile and transpile your modern (ES6 and above) JavaScript
  • to auto-reload your browser
  • to auto-include and merge external JavaScript files into one
  • and many more...
In this tutorial, we will focus on those three. First, download and install NodeJS, because we will need npm (node package manager) included in the NodeJS installation.
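Once NodeJS is installed, you can verify that both node and npm are available, for example with:

node -v
npm -v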
If you would like, you can watch the video:


Prerequisites:
Have two files: index.html and main.js. Inside index.html, include main.js with a script tag.
Inside main.js you can place any JavaScript code.

We will start by creating our first project. Why? To have a portable version of our code that other developers can run on their machines. The project information will be stored inside the package.json file.
Just type: npm init and follow the questions. Next, we will install webpack with:
npm install --save-dev webpack
Open package.json and you will see a devDependencies section with webpack inside. All upcoming package installations, or so-called dependencies, will have their place in our package.json file. As you can see, the --save-dev option installs packages for development only, into the devDependencies section - this means that production and development dependencies can differ, which is nice because you want your production/live application to include only the libraries it actually needs.
You can also see that there is now a /node_modules/ directory - this is where all downloaded packages and their own dependencies are placed.
Upon compilation/transpiling, only part of those packages (certain functions) will be used and end up in the final development or production build.
By the way, if other users have your files (excluding /node_modules/, which is heavy in size), they just need to run npm install and all the modules will be installed automatically based on your package.json file.
We also need to install webpack-cli. Please do so with npm.
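For example, as a development dependency:

npm install --save-dev webpack-cli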
And now we need to modify the "scripts" section of package.json to:
"scripts":{
    "build":"webpack"
}
Then just type npm run build and this will start webpack.
Create a folder /src and place index.html and main.js inside.
Inside package.json, change the "main" entry from main.js to index.js.
Rename main.js to index.js - this is needed because webpack uses src/index.js as its default entry point.
Now run npm run build again and you will see a new directory: /dist. It is created by webpack and is where the production code resides, so you can open the /dist directory directly in your browser.

Next, we need a plugin that will minify our HTML and load/import our JavaScript directly into it. We will use html-webpack-plugin and html-loader. Install them with npm.
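For example, both can be installed as development dependencies in one step:

npm install --save-dev html-webpack-plugin html-loader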
Now it is time to create the webpack.config.js file with the following content:

const htmlplugin = require("html-webpack-plugin");

module.exports = {
  module: {
    rules: [
      {
        test: /\.html$/,
        use: [
          { loader: "html-loader", options: { minimize: true } }
        ]
      }
    ]
  },
  plugins: [
    new htmlplugin({ template: "./src/index.html", filename: "./index.html" })
  ]
};
Then npm run build.
And you can see that index.html is modified (inside the /dist folder). From /src/index.html you can now remove the script tag that references index.js (the plugin will auto-include the bundle for us).
You can test the code from the /dist directory - now everything works!

Let's apply Babel to be able to use ES6 on a wide range of browsers!
npm i --save-dev babel-loader @babel/core @babel/preset-env
Create a .babelrc file and place inside:
{
  "presets": [ "@babel/preset-env" ]
}
Now add a Babel rule to the rules section of webpack.config.js:
{
  test: /\.js$/,
  exclude: /node_modules/,
  use: [ { loader: "babel-loader" } ]
}

Run npm run build again and you will see that the project now transpiles ES6 into ES5!

Let's do some live code reloading:
npm i --save-dev webpack-dev-server
then in package.json:
place new line inside of scripts:
"dev":"webpack-dev-server"
Now: npm run dev
Load up your website and then modify something inside of your JavaScript file.
You will see how the browser auto-reloads and you can see your changes!

Congratulations and enjoy learning!

Tuesday, September 10, 2019

Docker - Apache, PHP and MySQL setup

Here we will set up a development/production environment for PHP and MySQL using Docker under Ubuntu. Here is a full video on the subject:


For more information, you can check the Docker for web developers course.

First, we will install docker and docker-compose:  

sudo apt install docker docker-compose

Then we will create a docker group and place our user inside (to be able to use the docker command without sudo): 

sudo groupadd docker && sudo usermod -aG docker $USER

 
Now we either exit and re-run the terminal or type: 

newgrp docker

to switch our group.


We can test whether Docker's installation is successful with: 

docker run hello-world

 
Keep in mind that we can check what images we have in our system via:  

docker image ls

and our containers via 

docker container ls
 

With 

docker rmi hello-world:latest

we can remove the just-installed image, but only if no container is using it.
 

Let's check once again  

docker container ls -a

which will list all the containers: running or stopped. We see that our hello-world image is placed inside a container.


As a rule: if we want to remove an image, first we have to remove its container.
So we look up the container name. It is usually auto-assigned by Docker, and in our case it is relaxed_cray, so we type 

docker container rm relaxed_cray

Now we can remove the image with 

docker rmi hello-world:latest


Setting up the PHP development environment
We will make a directory web_dev : 

mkdir web_dev

and will go inside:  

cd web_dev

Then we will create docker-compose file with: 

nano docker-compose.yml

Inside we will place the following config:
Keep in mind that for the indentations we are using 2 spaces! 

services:
  web:
    image: php:8.1-apache
    container_name: php81
    volumes:
      - ./php:/var/www/html/
    ports:
      - 8008:80

Short explanation: under volumes: we are specifying which local directory will be connected with the container directory. So whenever we change something locally, the container will reflect and display the changes. For the ports: when we open/browse local port 8008 (127.0.0.1:8008), docker will redirect us to port 80 of the docker container (achieved via port forwarding).
For the PHP image, you can choose whatever image you prefer from hub.docker.com
Next run: docker-compose up. This will read the docker-compose file, pull the different parts/layers of the provided PHP image, create a default network for our container and start the container.
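To quickly check that the mapped port responds, you can, for example, send a request from another terminal (assuming curl is installed):

curl -I http://127.0.0.1:8008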
The installation is ready, but if you want to display something practical, you have to create an index.php file inside the newly created local php/ directory (which stores our PHP files).
First, it is good to change the ownership of the directory with:
sudo chown your_user:your_user php/ -R

Then with
sudo nano index.php
we can type inside:
<?php 
echo "Hello from docker";
?>
Then again run docker-compose up. Also, try changing the .php file again and refresh the browser pointing to 127.0.0.1:8008

MySQL support
Let's create a new file named Dockerfile inside the php/ directory. Place inside:

FROM php:8.1-apache
RUN apt-get update && apt-get upgrade -y
RUN docker-php-ext-install mysqli
EXPOSE 80

This will base our custom image on the PHP image we already have, update the container's packages, install the mysqli extension so PHP can talk to MySQL, and expose port 80.

Next, we will customize the docker-compose.yml :
We will replace the line: image: php:8.1-apache
with

build:
  context: ./php
  dockerfile: Dockerfile

This will read the Dockerfile we created previously and build the web service image from it.

Now we will be building the MySQL service:

db:
  container_name: mysql8
  image: mysql:latest
  command: --default-authentication-plugin=mysql_native_password
  restart: always
  environment:
    MYSQL_ROOT_PASSWORD: root
    MYSQL_DATABASE: test_db
    MYSQL_USER: devuser
    MYSQL_PASSWORD: devpass
  ports:
    - 6033:3306

Notes: we are using the native password authentication plugin for MySQL in order to be able to log in to the MySQL database; docker-compose hardcodes MYSQL_USER, MYSQL_PASSWORD and MYSQL_DATABASE to devuser, devpass and test_db, and port 6033 is exposed externally for the MySQL service.
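If the mysql client is installed on your host machine, you can, for example, test the exposed port with (enter devpass when prompted):

mysql -h 127.0.0.1 -P 6033 -u devuser -p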

One last change: we would like the MySQL service to start before the web service, so we will add to the web service config:

depends_on:
  - db
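Note that after changing the Dockerfile or docker-compose.yml the image may need to be rebuilt, for example with:

docker-compose up --build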

To test the PHP-MySQL connection inside of our index.php file we can specify:

$host = 'db';  // the name of the MySQL service in docker-compose.yml
$user = 'devuser';
$password = 'devpass';
$db = 'test_db';
$conn = new mysqli($host, $user, $password, $db);
if ($conn->connect_error) {
  echo 'Connection failed: ' . $conn->connect_error;
} else {
  echo 'Successfully connected to MySQL';
}

If you are experiencing problems, you can remove the images already created:

docker image ls -a 
docker rmi ...fill here image names...
then run again docker-compose up and browse: 127.0.0.1:8008

Cheers and enjoy learning further.

Install Docker on Ubuntu 19.04 and 19.10

Here is how you can install Docker on Ubuntu 19.04

(part of the Docker course)


Video of the installation:



Steps:
1. Update your package repository index and install the additional packages apt needs for secure (HTTPS) transport
sudo apt update && sudo apt install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

2. Fetch and add Docker's GPG key

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

3. Add Docker's repository (specific to our Ubuntu version) to our local package sources
sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
Note: if you are running an unsupported Ubuntu release you can replace the string: $(lsb_release -cs) with the supported versions such as: disco


4. Update the local repository again and install the latest Docker version (community edition)
sudo apt update  && sudo apt install docker-ce docker-ce-cli containerd.io

5. Test the installation by fetching and running a test image:
sudo docker run hello-world
For more information, you can visit: https://docs.docker.com/install/linux/docker-ce/ubuntu/

Notes:
1. When having problems with the socket Docker binds to, you can make your user the owner of the socket: sudo chown $USER:docker /var/run/docker.sock

2. If you want to run docker without sudo, just add your user to the docker group with: sudo usermod -aG docker $USER

and change the current running group with: newgrp docker
or su ${USER}

3. If you get: ERROR: Couldn't connect to Docker daemon at http+docker://localhost - is it running?
check out the docker service status: sudo systemctl status docker
if it is stopped and masked: Loaded: masked (Reason: Unit docker.service is masked.) then you need to unmask the service: sudo systemctl unmask docker
then again start the docker service: sudo systemctl start docker 
until sudo systemctl status docker shows: Active: active (running)

Congratulations, and you can further explore the Docker for web developers course.

Wednesday, March 06, 2019

Ubuntu 19.04 & 19.10 firewall examples


In order to offer more than one service from a single IP address, Linux uses the notion of PORTS. For example, port 80 is commonly used for HTTP, 21 for FTP, 443 for HTTPS, and 22 for SSH. Users can connect to services by typing the service IP address and its specific port number. Firewalls help us to open and close ports for specific IP addresses or whole network subnets. This is suitable when we want certain IP addresses to have access to our server while access for everyone else stays restricted.
For video information on firewalls, you can check this course.

In Ubuntu, the firewall wrapper is named uncomplicated firewall, or "ufw" for short. It is installed in Ubuntu by default but is not active. The commands below are given as arguments to sudo ufw. We can check the status of ufw with: "status". If we want to completely isolate access to our machine we could use: "default deny incoming", while "default allow outgoing" will allow packets to exit our machine. Keep in mind that in order for them to work, we have to activate those rules by typing: "enable". Next, we can explicitly: "reject out ssh" and then "delete reject out ssh".
Hint: If we are unsure of our actions we can always type: "reset" to empty the firewall rules. To deactivate the firewall please use "disable".

Since we are already connected over SSH, if we enable the firewall we could lose our connection, so let's first allow port 22/tcp by typing: sudo ufw allow 22/tcp and then run: sudo ufw enable.
We can actually show rules with numbers: status numbered, and to delete a rule we use: "delete rule_number". In order to insert a rule at a particular place, we use: "insert 1 your_rule".
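For example, the full command form looks like this (the rule number 2 and the address 10.0.0.5 are just example values):

sudo ufw status numbered
sudo ufw delete 2
sudo ufw insert 1 allow from 10.0.0.5 to any port 22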

Here are some commonly used service names: secure shell: ssh, mail: smtp, web server: http, https, SAMBA/file sharing: 139/tcp, 445/tcp. Less common service names are hard to remember, so here is a trick: ufw reads service names from /etc/services. Also, when a certain service is installed, it may add its own rules to the firewall so it can communicate with the outside world. In such cases "app list" shows all the installed service/application profiles, and "app info 'SSH'" will dig deeper into which ports a certain application profile allows.

Let's see the following examples of practical firewall usage (each rule is passed to sudo ufw):

//deny all incoming connections from 10.0.0.1 to interface eth0
deny in on eth0 from 10.0.0.1

// limit ssh access of specific IP address
deny proto tcp from 10.0.0.1 to any port 22

// limit a whole subnet
allow from 10.0.0.0/24 to any port 22

// allow ssh access only to IP: 10.0.0.1
allow proto tcp from any to 10.0.0.1 port 22

// deny outgoing SMTP traffic
deny out 25

// allow connections on eth1 interface to MySQL
allow in on eth1 to any port 3306

If we want to filter by MAC address we can add: -A ufw-before-input -m mac --mac-source 00:00:00:00:00:AA -j DROP in /etc/ufw/before.rules
(these rules are read and applied before the regular firewall rules)

To monitor the usage of the firewall we can use: sudo tail -f /var/log/ufw.log, but only if logging is enabled ("sudo ufw logging on").
While experimenting with the firewall you can use an external network scanner such as Nmap (sudo apt install nmap) to check which ports are open on the machine.
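For example, a basic scan of the lower ports could look like this (192.168.1.10 being a placeholder for your server's address):

sudo nmap -p 1-1024 192.168.1.10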

More useful examples you can find with: man ufw
as well as on the Ubuntu firewall page: https://help.ubuntu.com/lts/serverguide/firewall.html. More information you can find in this course.

Wednesday, February 27, 2019

Ubuntu 19.04 - files, directories, users and groups

The following are working examples for managing files, directories, users and groups in an Ubuntu server environment.
For more information you can take a look at this course on ubuntu linux server:


Work with the console:
By typing "History" without the quotes, you can take a look at your previous commands and with "history | grep name_of_command " you can search for specific command that you already used. If you want to return the previously used command just press the "up" arrow. And if you have problems remembering filenames you can start typing the first letter of the name and then press several times "TAB" button - this will give you autocomplete for the rest of the word. 
You can clear the console by using "clear".

Directories:
All the directories are contained in the root directory /. Pay attention: Linux file and directory names are case sensitive, so "Linux.txt" is different from "linux.txt".
If you want to see your current directory just type: "pwd". With "cd .." you go up to the parent directory, with "cd /" you will go to the root directory. Using the command "ls" you can see the current objects (files, directories and symbolic links) residing in the file system.
You can create directories using mkdir new_dir_name
To delete a directory use: rmdir dir_name. Before removing a directory, please make sure that it is empty.
"ls -la" will give you more information like permissions, owner, group, size and modification date of the files and directories.
If you go to the root directory with "cd /", then "ls -la | grep bin" will show you not only the /bin directory but any entries whose names contain "bin". We can also use * to show all files starting with "linux": "ls linux*", or to list all .html files: "ls *.html"

If you want to know more about the "ls" command just append "--help" next to it:
"ls --help"
and if you want to list the resulting information in pages you can type:
"ls --help | more"
"Enter" and "Space" keys can be used for navigation
For more information on the command try: "man ls". You can exit the help screen with the key "q"



Files:
You can create an empty file with the command "touch file_name.txt". To see the contents of the file just type: cat file_name.txt
In just one line we can actually filter the contents of a file based on condition (all the lines containing "Example"), and output them in a new file(output.txt): "cat examples.desktop | grep "Example" > output.txt "
To remove the file we type: "rm output.txt"
We can edit files with the "nano" editor: "nano output.txt". Inside, we can use Ctrl + O to write the changes and Ctrl + X to exit the editor.
Hint: if there are spaces or special symbols in the filename, wrap the name in quotes or put \ before the special symbol in order to work with it:
"123-!.txt"
123-\!.txt
"file name!"
file\ name\!
Hint: a convenient way of displaying text files is with "more /var/log/syslog" - this will give us the content paginated. And if we use "watch tail /var/log/syslog" we can watch for changes to the last few lines of the file.
"watch -n 5 tail -n 15 logfile.txt" will grab last 15 lines of "logfile.txt" and will watch them every 5 seconds.
Moving and copying files:
To move a file we use: "mv source_filename /directory/destination_filename"
The same goes for copying files: "cp filename /directory/"
We can delete files using: "rm file_name" or just all files within a directory with "rm *"

Permissions
When we run: ls -la we see bunch of information about the objects within a directory. For example:
 -rwxr-xr-x   1 nevyan nevyan   4096 nov 25 20:34  file
drwxr-xr-x   2 nevyan nevyan   4096 nov 25 20:34  Public

Let's start with the first row, representing a file. We know it is a file because the first character of the first column is not d (d marks a directory).
Then we can see its permissions: they are divided into groups of 3 (rwx - read, write and execute).
The first group is for the owner (creator) of the file, the second for the group it belongs to, and the last one for every other user.
We use groups to set permissions once for multiple users: every user belonging to the group automatically gets those permissions.
For example:
just to be able to "cd" into a directory we need to have execute (+x) rights over the directory;
to be able to list file contents with cat we need read (+r) rights over the file (see the example commands below).
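A minimal illustration with hypothetical names (my_dir and notes.txt):

chmod +x my_dir      # allows entering the directory with cd
chmod +r notes.txt   # allows reading the file with cat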

Linux applies the principle of least privilege, which means that a user is given no more than the privileges needed to complete a certain task. Let's now take a look at what is inside /etc/shadow with the cat command. We will see: "Permission denied".
The reason is that only the root user can read this file. You can check this by issuing: ls -la /etc/shadow and looking at the others group (the 3rd set of permissions): it is empty, which means no one else except the owner has rights over this file. And the owner can be seen in the listing (root).
If we run the same command, appending "sudo" in front:
"sudo ls -la /etc/shadow"
(and type your password)
we will see that we have rights to look at the file. This is because for this particular command we have gained temporary "root" rights
Next we can take a look at /etc/passwd - there we can see all the users registered on the server: their login names, a placeholder for the password, user id, group id, home directory as well as their working shell (command prompt).

Changing permissions:
"chmod ugo+rwx file_name.txt"
this will give maximum privileges (rwx) to all (user, group and others) for file_name.txt
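The same permissions can also be written in octal notation; for example, this has the same effect:

chmod 777 file_name.txt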

Changing ownership:
to change the ownership we can type:
"chown user1:user1 file_name.txt"
will set user=user1 and group=user1 to the file.

Groups:
Each user can belong to one primary and several secondary groups.
"id" shows the current groups our user belongs to. We can issue: groups and user_name to find out which groups specific user belongs to:
"groups user_name" will list the information in format: (primary, secondary) groups
The same information can be gained from the file: "/etc/group" and we can list information about particular group with: "cat /etc/group | grep mygroup"
"addgroup mygroup" will add new "mygroup" to our currenly existing groups
"usermod -G mygroup nevyan" will remove all secondary groups and will add secondary group "mygroup" to user "nevyan"
With the -g flag we can change the user's primary group, and if we just want to append another secondary group to our user we can use the -aG flag (see the example after these commands).
In order to remove user from a group: "deluser nevyan mygroup"
and finally to remove a group we do: "sudo delgroup mygroup"
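For example, to append the secondary group while keeping the existing ones (using the names from above):

sudo usermod -aG mygroup nevyan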
Notice that we need to "logout" in order to see the effect of those group changes. Congratulations, and enjoy the ubuntu linux server course.

Sunday, February 03, 2019

SEO Crawling and Indexing basics

Our journey into SEO starts with the terms of crawling and indexing. So let’s point out the main differences between them.
For more information, you can take a look at my search engine optimization course



 
In order for a specific URL to show up when searching in Google's index, it first has to be discovered. This is the main purpose of the crawling process. So when a particular web site is submitted to Google for indexing, the crawler first fetches its main entry point, which is usually the index.html file, and then tries to discover as many internal pages as possible by following the web page links. Next, for each discovered page the crawler makes an HTTP request, just like you do in a browser, and parses all the found content so it can gather readable information.
The process of parsing includes removing all the HTML tags, scripts, styles and the so-called "stop words". Stop words usually represent commonly used words, and because they only bring noise to the information they are discarded. After the cleanup, machine learning algorithms try to understand the topic of the content based on information they have learned from previous websites.
You may ask yourself: what does the Google index look like?
Although the real index and all the factors which Google takes into account before ranking a web page for a specific keyword remain a secret, we could represent the index as a table-like distributed database structure which holds the following columns: a term or keyword, and the path to a document (from the website's documents) where the keyword is found. In other words, this table maps the relevance of a keyword to a particular web document, so in the case of a search the engine can easily determine where (in which document) a specific word can be found. Now that we have a grasp on the index structure, let's take a look at the actual process of indexing:
The search engine performs two main tasks: first is to find meaningful words from the parsed content related to its topic and the second is to associate them together with the path to the document in its existing index.
You may ask how the engine knows if a particular keyword is relevant?
The local relevancy of a word or a combination of words for a particular document is calculated using techniques such as Inverted index, Latent Semantic Indexing, and others:
  • The Inverted index is a data structure where all the unique words found in a document are mapped to a list of documents where they also appear.
  • In LSI the mapping additionally considers relations between keywords and concepts contained in a collection of text documents.
When a meaningful keyword is found, it is then linked to its source document or multiple documents forming a new data entry in the table structure. Now the second step of the indexing process is when the search engine tries to fit the data entry within the existing index. The comparison process takes into account more than 200 factors that feed Machine Learning algorithms. Additionally, human evaluators are being used to determine the relevancy of a particular keyword for a particular document.
One more thing: in this process, you have to know that all the cached copies of the documents are archived and stored in another place.

Crawling and indexing continued
There are ways to control the crawler’s access to a website as well as to choose which pages to be taken into account for indexing. This can be done by placing a meta tag in the head section of a particular web page as well as by creating and using a robots.txt file inside the website’s root directory.
There are two properties: index and follow which are related to indexing and crawling processes that we will discuss:
Meta: index says that the page should be added to the index database.
Meta: follow controls crawling of the inner links inside the web page.
If we would like to restrict the crawler’s access to content we would use “nofollow” attribute.
If we would like the web page not to be a part of the search engine index, and to be excluded from the search results we use the “noindex” attribute.
The robots.txt file is primarily used for excluding crawling of a web page. It is a text file containing information on which pages/domains/directories to be included/or excluded from a particular website. It is one per website and resides in its root directory.
Robots.txt example:
# Rule 1
User-agent: Googlebot
Disallow: /nogooglebot/

# Rule 2
User-agent: *
Allow: /


Now let's see the difference between using robots.txt and meta tags with an example of crawl blocking.
The next example shows how we can have a website where some of its web pages display: "A description for this result is not available because of this site's robots.txt file."

First, let's discuss why we are having this situation. Apparently, this website has the tag meta robots="index" in its pages, so it has been indexed appropriately,
but in the robots.txt file we have disallowed this web page, so crawling is denied and the description of the result cannot be displayed. In this way, the web page is added to the index (it has been discovered), but has not been crawled.
Before proceeding with the next examples, let's first be clear on what link juice is.
It is actually the value passed from one page or site to another through hyperlinks. Search engines see it as a vote or a signal by other pages that the page they are linking to is valuable.
In the figure on the left side, you can see how one page can pass link juice to another.

(figure: page-rank slide)
We will discuss the other four cases which prevent the flow of link juice:
  • If the main page returns 404 or not found, its link juice will not be calculated and transferred to the other page.
  • If the page we are linking to cannot be found, the link juice also remains in the source page
    For the next two cases, we have to see what a robots.txt file does. It is a plain text document, where we can describe directories and files to be allowed or disallowed from crawling when a particular search engine visits our website.
  • When a page is disallowed in the robots.txt file its internal links would not be passing link juice to the destination page
  • And finally, if a link has the "nofollow" attribute, it will not pass link juice to the destination page.
Penguin update
This update's main goal is to prevent the exchange of bad linking practices between websites. Such schemes are used by SEO 'specialists' in order to inflate a particular website's reputation, as well as in the reverse direction, to negatively affect a specific website by pointing lots of low-quality website links at it. Here is how to clean up such situations. We can go to the search console and from there see who is linking to us. Then we check all the listed domains by hand for issues. Other free websites that are also very helpful for finding back-linking sites are NeilPatel's website as well as backlinkwatch. After obtaining the list of spammy websites, we just create a plain text file disavow.txt where we place all the links following the format: domain: spammy.com. The last step is to upload the file to Google's disavow tool. You will have to wait some days before the penalty imposed by this kind of negative SEO is released.

Panda/topical/quality update
This penalty affects the entire website and even a single webpage could cause it.
Here are a few ways on how to remedy the situation if your website is being targeted by the Panda update:
If you have very similar articles, just merge them or add more relevant content to the shorter ones. For articles, aim for about 1000 words of content.
In order to identify what might be the source of the problem, especially if you have lots of pages, you can group your categories into subdomains. Then the search console will allow you to inspect them by domain, so you can gain insight into which categories perform better and decide whether to correct the weak pages or just disallow the whole category. You can become even more granular by using sitemaps of all the site's pages. After submitting the sitemap in the search console, you will have information on which pages are being fully indexed and which are having problems. The benefit of this technique is that it will show you which categories are not performing well, down to the level of individual URLs.
In case you have an article which spans multiple pages, you can add rel="next" and rel="prev" inside your HTML markup, so Google can treat those pages as a group.
More techniques:
First, identify the top 5 pages receiving the most impressions while at the same time having a very low CTR (clicks). The improvement action in such a case is simply to correct their meta description and title in order to make them more attractive to visitors.
Transfer power from higher to lower-ranking pages, or just analyze the good rank pages and how they differ from the lower-ranking pages. When done you can either delete the weak pages or merge them into the powerful pages. 
For comments: choose to display them only after a user performs an action such as clicking on a button. On one hand this improves the UI, and on the other it prevents SPAM. In many cases webmasters choose to disallow commenting on pages altogether.
The most effective yet longest technique for dealing with the Panda update is to allow only high-quality pages inside the Google index. Start with a few ones which already have good click-through rate and impressions, and then gradually allow more pages to the quality indexed list by rewriting, updating or adding new information inside.

Top-heavy update
The update targets websites that show too much advertisement in the first screen a user sees. You might have great content, but if the first screen is occupied by advertisements, the penalty will be triggered.
The suggestions in such cases are to have only 1 ad above the fold (before reaching the vertical scroll height), sized no more than 300x250px, and only 1 per page for mobile devices. The alternative is to use the auto ad placement available from Google Adsense.

Monday, August 20, 2018

Enabling grayscale in Gnome, KDE, LxQT and Windows

Windows 7
Negative Screen
https://zerowidthjoiner.net/Uploads/negativescreen/Binary.zip

Windows 10
a grayscale mode is already present: use the "Turn color filters on or off" setting

Gnome:
install an extension called: Desaturate_all
Note: to run the script successfully, just comment out these two lines inside extension.js:

//  x_fill: true,
//  y_fill: false,
 

https://extensions.gnome.org/



KDE:
https://github.com/ugjka/kwin-effect-grayscale
Ubuntu/Debian:
install: sudo apt install kwin-dev libkf5xmlgui-dev libkf5service-dev libkf5globalaccel-dev libkf5configwidgets-dev qt5-default
git clone https://github.com/ugjka/kwin-effect-grayscale.git
cd kwin-effect-grayscale
mkdir build && cd build
cmake .. -DCMAKE_C_FLAGS:STRING="" -DCMAKE_CXX_FLAGS:STRING="" -DCMAKE_EXE_LINKER_FLAGS:STRING="" -DCMAKE_SHARED_LINKER_FLAGS:STRING="" -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_INSTALL_LIBDIR=lib
make
sudo make install
Then enable the grayscale color filter from the Desktop Effects menu.

LxQt:
create a filter file: grayscale.glsl with the following contents:
uniform float opacity;
uniform bool invert_color;
uniform sampler2D tex;

void main() {
vec4 c = texture2D(tex, gl_TexCoord[0].xy);
float y = dot(c.rgb, vec3(0.299, 0.587, 0.114));
// y = 1.0 -y;
gl_FragColor = vec4(y, y, y, 1.0);
}
then just apply the filter with:
compton --backend glx --glx-fshader-win "$(cat ./grayscale.glsl)"

Please enjoy a less distracting / addictive world.
