
Thursday, January 16, 2020

JSON web tokens (JWT) with NodeJS REST API

Here is how to deal with JWT inside REST API routes:
Note: if you want to learn more about JSON Web Tokens and REST, you can check out this course: Learn Node.js, Express and MongoDB + JWT

 


So let's begin by creating a new project directory:
mkdir auth
then initialize the project with default settings:
npm init -f
Then we will install the following libraries:
for setting up the webserver
npm i express
for the database connection
npm i mongoose
for reading .env files
npm i dotenv
for automatically restarting the Node.js application (web server) on code changes
npm i --save-dev nodemon
for using ES6 syntax inside Node.js
npm i --save-dev @babel/preset-env @babel/core @babel/node
setting up the transpiling inside Babel
nano .babelrc
{"presets": ["@babel/preset-env"]}

install eslint
npm install eslint --save-dev
install the other packages used for password hashing and tokens: npm i bcryptjs jsonwebtoken
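ESLint can then be initialized with npx eslint --init; a minimal .eslintrc that plays well with the ES module syntax used here could look like this (this exact config is an assumption, adjust to taste):
{
  "env": { "node": true, "es2017": true },
  "parserOptions": { "ecmaVersion": 2018, "sourceType": "module" },
  "extends": "eslint:recommended"
}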

to be able to run the code, change the start script in package.json:
"start": "nodemon --exec babel-node index.js"
start developing by opening the current directory in Visual Studio Code:
code .

.env file
DB_CONNECT ="mongodb://127.0.0.1/users"
TOKEN_SECRET = "onetwothreefourfive"
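
Based on the import paths used below, the project layout ends up like this:
auth/
  .babelrc
  .env
  package.json
  index.js
  routes/routes.js
  controllers/controller.js
  controllers/info.js
  controllers/verifyToken.js
  models/user.js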

our index.js file:
import express from "express";
import mongoose from "mongoose";
import dotenv from "dotenv";
// import the routes
import routes from "./routes/routes";

// create an express instance
const app = express();

// setup the middleware routes
routes(app);

// load the database credentials and other config from the .env file
dotenv.config();

// connect to the database
mongoose.connect(
  process.env.DB_CONNECT,
  { useNewUrlParser: true, useUnifiedTopology: true },
  () => console.log("connected to mongoDB")
);
// listen for errors
mongoose.connection.on("error", console.error.bind(console, "MongoDB connection error:"));
// listen on port 3000
app.listen(3000, () => console.log("server is running"));


controller.js:
import mongoose from "mongoose";
import { userSchema } from "../models/user.js";
import * as bcrypt from "bcryptjs";
import * as jwt from "jsonwebtoken";

mongoose.set("useCreateIndex", true);
const User = mongoose.model("users", userSchema); // users is the name of our collection!!!

export const addNewUser = (req, res) => {
  User.init(() => {
    // init() resolves when the indexes have finished building successfully,
    // which is needed for the unique check on email to work
    let newUser = new User(req.query); // just creating w/o saving
    newUser.password = bcrypt.hashSync(req.query.password, 10); // hashing the password synchronously
    newUser.save((err, user) => { // now saving
      if (err) {
        return res.send(err.message); // return so we don't try to respond twice
      }
      res.json(user);
    });
  });
};

export const loginUser = (req, res) => {
  User.init(() => {
    User.findOne({ email: req.query.email }, (err, user) => {
      if (err) {
        return res.send(err);
      }
      if (user == null) {
        return res.status(400).send("Non existing user");
      }

      // we have the user record from db, now check the password
      const validPassword = bcrypt.compareSync(
        req.query.password,
        user.password
      );
      if (!validPassword) return res.status(400).send("Not valid password");

      // create and send a token to be able to use it in further requests
      const token = jwt.sign({ _id: user._id }, process.env.TOKEN_SECRET);
      res.header("auth-token", token) // set the token in the header of the response
        .send(token); // display the token
    });
  });
};
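
Note that both controllers read the credentials from req.query, so the register and login routes are called with plain query strings, for example (the sample values are just placeholders):
http://localhost:3000/user/register?name=johnsmith&email=john@example.com&password=secret123
http://localhost:3000/user/login?email=john@example.com&password=secret123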


routes.js: // our main routes file
import { addNewUser, loginUser } from "../controllers/controller.js";
import { info } from "../controllers/info.js"; // the protected route

import { auth } from "../controllers/verifyToken"; // middleware for validating the token

const routes = app => {
  app.route("/user/register").get((req, res) => addNewUser(req, res)); // we capture req and res
  app.route("/user/login").get((req, res) => loginUser(req, res)); // we capture req and res
  app.route("/info").get(auth, (req, res) => info(req, res)); // we capture req and res,
  // and insert the auth middleware to process the token
};
export default routes;
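
The protected info controller imported above (controllers/info.js) can be as simple as the following sketch - the response text here is just a placeholder:
// controllers/info.js
export const info = (req, res) => {
  // by the time we get here, the auth middleware has already validated the token
  res.send("some protected information");
};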


verifyToken.js: // referenced above as ../controllers/verifyToken
import * as jwt from "jsonwebtoken";

export const auth = (req, res, next) => {
  const token = req.header("Bearer"); // the client sends the token in a header named "Bearer"
  if (!token) return res.status(401).send("access denied");
  try {
    // jwt.verify() throws if the token is invalid or signed with a different secret
    jwt.verify(token, process.env.TOKEN_SECRET);
  } catch (err) {
    return res.status(400).send("Invalid token");
  }
  // continue from the middleware to the next processing middleware :)
  next();
};

// user mongoDB schema:
user.js:
import mongoose from "mongoose";

export const userSchema = new mongoose.Schema(
  {
    name: { type: String, required: "Enter username", minlength: 5, maxlength: 20 },
    email: { type: String, required: "Enter email", maxlength: 50, unique: true },
    password: { type: String, required: "Enter password", maxlength: 65 }
  },
  {
    timestamps: true
  }
);
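
To try everything out, run npm start and call the three routes. Here is a quick test sketch; it assumes Node 18+ (so the built-in fetch is available) and uses made-up sample user data:
// test.mjs - register, log in, then call the protected route
const base = "http://localhost:3000";

(async () => {
  // 1. register a new user (the server replies with the saved user as JSON, or an error message)
  const reg = await fetch(`${base}/user/register?name=johnsmith&email=john@example.com&password=secret123`);
  console.log(await reg.text());

  // 2. log in; the token is returned in the response body (and in the auth-token header)
  const login = await fetch(`${base}/user/login?email=john@example.com&password=secret123`);
  const token = await login.text();

  // 3. call the protected route, passing the token in a header named "Bearer"
  //    (that is the header verifyToken.js reads)
  const infoRes = await fetch(`${base}/info`, { headers: { Bearer: token } });
  console.log(await infoRes.text());
})();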

Congratulations and enjoy learning!

Thursday, January 09, 2020

Ubuntu server - my list of useful commands

Here is a collection of commands I find very useful when doing daily work on Ubuntu Linux. For more information, you can reference this course.

  1. How to check the exact name of the package that contains a command or file with a certain name? With apt-file we can search inside the contents of packages: first run sudo apt-file update, then apt-file search gimp
    or, for already installed packages: dpkg -S filename
  2. How about listing all the files modified in the last 30 minutes? find . -mmin -30 will do the job (use -cmin -30 to match on metadata/status change time instead)
  3. Or, to just see the recently installed packages: cat /var/log/dpkg.log | grep installed is the command.
  4. Sometimes I need to check what is happening to my disk space. One very good program for the command prompt is ncdu. Just try it, and you won't be disappointed: ncdu
  5. Sometimes I don't know which applications are using the internet or whether a certain download/update has stalled: nethogs is best for those cases.
  6. And what if I would like to run a certain program without constantly typing sudo in front? Adding our current user $USER to the program's group is all that's needed: sudo usermod -aG docker $USER (this example allows running docker without sudo; log out and back in for the group change to take effect)
  7. Permissions: you may encounter additional fine-grained permissions apart from the default user/group/other ones. Welcome the ACLs. If you see a + next to the ls -la listing of a file or directory, just type getfacl /dir_name/ to see them. To add a group ACL: setfacl -m g:www-data:rwx /var/www/, to remove a group ACL: setfacl -x g:www-data /var/www. ACLs also have defaults, set with -d: this way a new directory/file will inherit the default permissions of the parent directory: setfacl -d -m g:www-data:rwx /var/www Another example: setfacl -Rd -m u:$USER:rwx /var/www/ will set rwx permissions on /var/www for a specific user - here we also use the recursive -R flag. Note that sometimes we need to do 2 passes to set permissions correctly: one with the defaults -d for any newly created files/dirs, and one without -d for the current files and directories!
  8. Packages dependency clashes:
    I. We can have different versions of the same package inside the apt repository, for example for the bionic and focal releases. First, check and fix every URL to come from the same distribution inside /etc/apt/sources.list and the files under /etc/apt/sources.list.d/. Then run apt update again to refresh the list of packages and use dpkg --configure -a to configure the packages.
    II. An interrupted apt dist-upgrade while installing the packages, then later trying again with apt update && apt dist-upgrade, but in between a new version of some package has been released. In this case, you have unmet dependencies because an old version is about to be installed (still sitting in the apt cache list), then a new version comes and the installation cannot continue, because the old one is still not installed successfully. Circular dependencies may also happen, or the package version installed on the system may differ from the one required by another package. For example:
    libqtermwidget5-0:amd64 depends on qtermwidget5-data (= 0.14.1-2); however:
    The version of qtermwidget5-data on the system is 0.14.1-0ubuntu3
    1. remove the half-installed package that causes the problems, and its configuration, from your system:
    sudo dpkg -P --force-depends qtermwidget5-data
    2. when we do apt update, the command saves a cached list of packages that will be downloaded from the Ubuntu servers in /var/lib/apt/lists/, so remove all apt caches: sudo find /var/lib/apt/lists -type f  |xargs rm -f >/dev/null and run apt update
    3. configure the rest of the packages to be installed and configured, without checking their dependency requirements (ignore): sudo dpkg --configure -a --force-depends
    4. continue installing the packages with their correct dependencies: sudo apt-get -f install
    In another variant of the problem, a new package cannot be installed because it tries to overwrite a file belonging to an old one: dpkg is trying to overwrite 'file' which is also in package '...'. In this case issue: sudo dpkg -i --force-overwrite /var/cache/apt/archives/libffi8ubuntu1_3.4~20200819gead65ca871-0ubuntu3_amd64.deb (where you place the name of the new package archive)
    or a safer option is to remove the problem-causing archive from /var/cache/apt/archives/
  9. Network tools:
    list the processes using the most memory (RSS): ps -e --format=pid,rss,args | sort --numeric-sort --key=2
    check the network connections of a specific process: sudo lsof -aPi -p 3258 or trace them: sudo strace -p 3258 -e trace=network
    list all listening(open) ports on the current machine: sudo lsof -i | grep -i LISTEN
    list all the network connections of user 'nevyan': sudo lsof -aPi -u nevyan
    listening connections inside of a process:
    sudo lsof -ai -p 730
  10. Working with text files:
    - replacing strings inside of a file:
    sudo sed -i 's/focal/groovy/g' /etc/apt/sources.list
    // s -substitute, g - global, apply to all matches
    - downloading a file and replacing part of its content on the fly:
    curl -s https://raw.githubusercontent.com/istio/istio/release-1.6/samples/bookinfo/platform/kube/bookinfo.yaml | sed 's/app: reviews/app: reviews_test/'
    - adding information to text file (-a = append / the default action is overwrite):
    echo "deb https://download.sublimetext.com/ apt/stable/" | sudo tee -a /etc/apt/sources.list.d/sublime-text.list
  11. Working with variables:
    - how to grep webpage tags:
    MYHOST="http://www.google.com";
    curl -s $MYHOST | grep -o "<title>.*</title>";

    // -o prints only the matching part of the line
    - how to save an output of command:
    look for pattern line and display column 3: ip route | awk '/default/ { print $3 }'
    save output into variable: MYHOST=$(ip route | awk '/default/ { print $3 }')
    ping $MYHOST
    - complex example for purging unused kernels:
    dpkg --list linux-{headers,image,modules}-* | awk '{ if ($1=="ii") print $2 }' | grep -v -e "$(uname -r | cut -d "-" -f 1,2)" | sudo xargs apt purge -y

    // -e specifies the pattern, -v inverts the match
    If any needed packages get removed, they can be reinstalled with: apt install --reinstall linux-image-X.Y-ARCH5
  12. Finding and deleting files recursively based on pattern: find ./mail -depth -path '*gmai*' -delete
  13. How to shorten the default prompt when it gets too long (this variant drops the current working directory from the visible prompt): export PS1="\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]\$ "
  14. Ever wanted to be able to resume downloads? Here is the curl option: curl -C - -O http://url_to_download
    Congratulations and enjoy the course!
