Tuesday, September 15, 2020

Deploy Angular app to Vercel and Firebase for free

Here is how to do it very quickly:


 

For Firebase you'll need to install the following schematics:
ng add @angular/fire

then just do:
ng deploy

You'll probably be asked to authenticate in the browser, and then your project will be live on the Internet.

If you would like to use serverless functions to run NodeJS code, here is the way:
sudo npm install -g firebase-tools 

firebase init functions

This will install and initialize the functions. Then go to the newly created /functions directory and install your packages, for example: npm install nodemailer cors

And now it is time to edit the auto-generated index.js file.
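
For example, here is a minimal sketch of an HTTPS function that sends mail through the nodemailer and cors packages installed above. The function name, SMTP credentials and request fields are assumptions for illustration, not anything Firebase generates for you:

// functions/index.js - a sketch, assuming nodemailer and cors are installed
const functions = require('firebase-functions');
const nodemailer = require('nodemailer');
const cors = require('cors')({ origin: true });

// hypothetical transport - replace with your own SMTP account
const transporter = nodemailer.createTransport({
  service: 'gmail',
  auth: { user: 'you@gmail.com', pass: 'your-app-password' },
});

// HTTPS endpoint expecting a POST body like { to, subject, text }
exports.sendMail = functions.https.onRequest((req, res) => {
  cors(req, res, () => {
    transporter.sendMail(
      { from: 'you@gmail.com', to: req.body.to, subject: req.body.subject, text: req.body.text },
      (err) => {
        if (err) return res.status(500).json({ error: err.toString() });
        res.json({ status: 'sent' });
      }
    );
  });
});

After deploying, the function gets its own https://...cloudfunctions.net URL that you can call from your Angular app.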

When you are happy with the generated function you can deploy it, just run from the same directory: 

firebase deploy

For Vercel, after registration just link your GitHub repository to Vercel. You can see/edit your current local git configuration with:

git config --local -e

To link the remote origin of your repository to the local git repo use: 

git remote add origin https://github.com/your_username/project.git

If there is something on the remote side, you can overwrite it with the local version using:
git push --set-upstream origin master -f

 
or just pull and merge the remote version: git pull origin master

Then just do your commits, and when you push you'll have a new version synchronized on the Internet.

Congratulations, and enjoy the Angular for beginners - modern TypeScript and RxJS course!

Sunday, September 13, 2020

Web development in LXC / LXD containers on Ubuntu

Here is how to do web development using the very fast Ubuntu native LXC/LXD containers. Part of the Practical Ubuntu Linux Server for beginners course.



First, let's install LXD on our system:
sudo snap install lxd

Then initialize the basic environment:
sudo lxd init

We will also fetch an image from the repository: linuxcontainers.org

and we will start a container based on it:
sudo lxc launch images:alpine/3.10/amd64 webdevelopment

Let's see what we have in the system:
sudo lxc ls
sudo lxc storage ls
sudo lxc network ls

Now it is time to access the container with: sudo lxc exec webdevelopment sh
and then we will use apk to install the OpenSSH server:
apk add openssh-server
Let's also add an unprivileged user in order to access the container over SSH:
adduser webdev

we will also start the server: 

service sshd start

OK, let's check the address of the container with: ip a

Now we exit the shell (sh) and can connect to the container using our new user: ssh webdev@container_ip_address

Alright, now let's go back inside the container and add the Apache service:
apk add apache2
service apache2 restart

Optional:

If we need to get rid of the container, we need to stop it first:
sudo lxc stop webdevelopment
sudo lxc delete webdevelopment

If we need to get rid of the created storage pool, we run the following:
printf 'config: {}\ndevices: {}' | lxc profile edit default
lxc storage delete default

If we need to remove the created network bridge we can run:
sudo lxc network delete lxdbr0

Congratulations and happy learning !

Tuesday, September 01, 2020

Skaffold on microk8s Kubernetes

Here is how to install, configure and use microk8s with Skaffold, step by step. Based on the Kubernetes course:

installation:

curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64 && sudo install skaffold /usr/local/bin/

create the initial project skaffold configuration:

skaffold init 



create an alias to kubectl for Skaffold to be able to use it:

sudo snap alias microk8s.kubectl kubectl

provide microk8s config to skaffold:

microk8s.kubectl config view --raw > $HOME/.kube/config

update the pod configuration to use the image from the microk8s registry (that is, prefix the image name with localhost:32000):

image: localhost:32000/php-app
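
For reference, a minimal sketch of what the pod manifest could look like after this change. The pod name and container port are taken from the port-forward command further below; everything else is an assumption:

apiVersion: v1
kind: Pod
metadata:
  name: skaffold-pod            # matches the pod used in kubectl port-forward below
spec:
  containers:
    - name: php-app
      image: localhost:32000/php-app   # image served from the microk8s registry
      ports:
        - containerPort: 4000          # the app port forwarded below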

enable microk8s registry addon: 

microk8s.enable registry
then test whether the registry works: http://localhost:32000/v2/

run Skaffold in watch mode, providing the insecure microk8s registry as the default repo:

skaffold dev --default-repo=localhost:32000
Check if the pod is running:

kubectl get pods

Expose the pod ports to be browsable:

kubectl port-forward pod/skaffold-pod 8080:4000

Optional: In case we need to debug inside the container:   

docker run -ti localhost:32000/php-app:latest /bin/bash


Congratulations and enjoy the course !

Monday, August 24, 2020

Install phpmyadmin in Ubuntu 20.04

Here is how to install phpmyadmin on Ubuntu 20.04
References: Practical Ubuntu Linux Server for beginners

We first need to have mysql-server installed, where phpmyadmin will store its data. For this reason we will run:
sudo apt install mysql-server

Then install some libraries needed for phpmyadmin to function, as well as the phpmyadmin package itself:
sudo apt install phpmyadmin php-mbstring php-zip php-gd php-json php-curl php libapache2-mod-php
Note: if there is a problem during the installation, you can choose Ignore or Abort for the configuration of phpmyadmin.

Let's now go and login inside of MySQL as root:
sudo mysql -u root 


or, if you already have a user/password, log in with: sudo mysql -u user -p

Next we will adjust the MySQL root password, as well as its method of authentication:
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password';

Optional: 

Configure Apache in order to serve phpmyadmin (if not already done by the installation of phpmyadmin): inside /etc/apache2/conf-available/ we create the following phpmyadmin.conf file:

Alias /phpmyadmin /usr/share/phpmyadmin
<Directory /usr/share/phpmyadmin/>
   AddDefaultCharset UTF-8
   <IfModule mod_authz_core.c>
      <RequireAny>
      Require all granted
     </RequireAny>
   </IfModule>
</Directory>
 
<Directory /usr/share/phpmyadmin/setup/>
   <IfModule mod_authz_core.c>
     <RequireAny>
       Require all granted
     </RequireAny>
   </IfModule>
</Directory>


Lastly, we need to activate the above configuration file with:
sudo a2enconf phpmyadmin.conf
and then restart the apache2 service to reload and accept the changed configuration:
sudo systemctl restart apache2.service

Now it is time to open http://127.0.0.1/phpmyadmin in the browser
and use the combination that we already set: root / password

Congratulations and enjoy learning !

Friday, August 07, 2020

NodeJS JSON API + MongoDB + JWT + ES6 forms

Here is how to create a real-life NodeJS API together with a login form.

Resources:

JavaScript for beginners - learn by doing

Learn Node.js, Express and MongoDB + JWT

We will start with the HTML representing the form as well as its JavaScript functionality:

formLogin.html


<html>
<body>
  <form id="myform">
    <div>
      <label for="email">Email:</label>
      <input type="text" id="email" name="email" />
    </div>
    <div>
      <label for="password">Password:</label>
      <input type="password" id="password" name="password" />
    </div>
    <div class="button">
      <button type="submit" id="loginUser">Send</button>
    </div>
  </form>

  <div id="result"></div>

  <script type="text/javascript">
    // generic helper around fetch(): serializes the body and parses the JSON response
    async function fetchData(url = '', data = {}, method, headers = {}) {
      const response = await fetch(url, {
        method,
        headers: { 'Content-Type': 'application/json', ...headers },
        ...data && { body: JSON.stringify(data) },
      });
      return response.json();
    }

    let form = document.querySelector('#myform');
    if (form) {
      form.addEventListener('submit', (e) => {
        e.preventDefault();
        fetchData(
          '/user/login',
          // inside an arrow function `this` is not the form, so we read the fields via the form reference
          { email: form.email.value, password: form.password.value },
          'POST'
        ).then((result) => {
          if (result.token) {
            // request the protected url with the token
            // (a custom 'Bearer' header, matching req.header("Bearer") on the server)
            fetchData('/info', null, 'GET', { Bearer: result.token })
              .then((result) => { console.log(result); });
            return;
          }
          document.querySelector('#result').innerHTML = `message: ${result.message}`;
        })
        .catch(error => console.log('error:', error));
      });
    }
  </script>
</body>
</html>



our main node server: index.js

import express from "express";
import mongoose from "mongoose";
import dotenv from "dotenv";

// import the routes
import routes from "./routes/routes.js";

// create an express instance
const app = express();

app.use(express.json());

// setup the middleware routes
routes(app);

// config the database credentials
dotenv.config();

// connect to the database
mongoose.connect(
  process.env.DB_CONNECT,
  { useNewUrlParser: true, useUnifiedTopology: true },
  () => console.log("connected to mongoDB")
);

// listen for errors
mongoose.connection.on('error', console.error.bind(console, 'MongoDB connection error:'));

// listen on port 3000
app.listen(3000, () => console.log("server is running"));
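
The DB_CONNECT connection string used above and the TOKEN_SECRET used further below come from a .env file in the project root, loaded by dotenv. A minimal sketch, with hypothetical values you should replace:

# .env
DB_CONNECT=mongodb://localhost:27017/usersdb
TOKEN_SECRET=some-long-random-secret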

application routes: routes.js

import { loginUser } from "../controllers/controller.js";
import { info } from "../controllers/info.js"; // the protected route
import { auth } from "../controllers/verifyToken.js"; // middleware for validating the token

import * as path from 'path';
import { fileURLToPath } from 'url';
const __filename = fileURLToPath(import.meta.url); // the absolute path of the current file
const __dirname = path.dirname(__filename); // parse just the directory

const routes = app => {
  app.route("/user/login").get((req, res) => {
    res.sendFile('formLogin.html', { root: path.join(__dirname, "../views") });
  });
  app.route("/user/login").post((req, res) => loginUser(req, res)); // we capture inside req, and res

  app.route("/info").get(auth, (req, res) => info(req, res)); // we capture inside req, and res
  // and insert the auth middleware to process the token
};

export default routes;
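
Note that controller.js below also exports an addNewUser handler which is not wired up in these routes. If you want a registration endpoint, a sketch of the extra route (the /user/register path is an assumption) would be:

// import { addNewUser } from "../controllers/controller.js";
app.route("/user/register").post((req, res) => addNewUser(req, res));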


our main controller: controller.js

import mongoose from "mongoose";
mongoose.set("useCreateIndex", true);
import { userSchema } from "../models/user.js";
import jwt from "jsonwebtoken";

const User = mongoose.model("users", userSchema); // users is the name of our collection!

export const addNewUser = (req, res) => {
  User.init(() => {
    // init() resolves when the indexes have finished building successfully,
    // in order for the unique check to work

    let newUser = new User(req.body); // just creating w/o saving
    newUser.password = newUser.encryptPassword(req.body.password);

    newUser.save((err, user) => { // now saving
      if (err) {
        res.json({ 'message': 'duplicate email' });
        return; // don't send a second response
      }
      res.json(user);
    });
  });
};

export const loginUser = (req, res) => {
  if (req.body.password == null || req.body.email == null) {
    res.status(400).json({ 'message': 'Please provide email / password' });
    return;
  }

  User.init(() => {
    User.findOne({ email: req.body.email }, (err, user) => {
      if (err) {
        res.json(err);
        return;
      }
      if (user == null) {
        res.status(400).json({ 'message': 'Non existing user' });
        return;
      }

      // here user is the fetched user
      const validPassword = user.validatePassword(req.body.password, user.password);
      if (!validPassword) {
        res.status(400).json({ 'message': 'Not valid password' });
        return;
      }

      // create and send a token to be able to use it in further requests
      const token = jwt.sign({ _id: user._id }, process.env.TOKEN_SECRET);
      res.header("auth-token", token) // set the token in the header of the response
        .json({ 'token': token }); // display the token
    });
  });
};



js helper middleware for working with JWT tokens: verifyToken.js

import jwt from "jsonwebtoken";

export const auth = (req, res, next) => {
  const token = req.header("Bearer");
  if (!token) return res.status(401).json({ 'message': 'access denied' });
  try {
    // jwt.verify throws if the token is invalid or expired
    jwt.verify(token, process.env.TOKEN_SECRET);
  } catch (err) {
    return res.status(400).json({ 'message': 'Invalid token' });
  }
  // continue from the middleware to the next processing middleware :)
  next();
};


user database model: user.js

import mongoose from 'mongoose';
import bcrypt from 'bcryptjs';

let userSchema = new mongoose.Schema(
  {
    email: {
      type: String,
      required: "Enter email",
      maxlength: 50,
      unique: true
    },
    password: {
      type: String,
      required: "Enter password",
      maxlength: 65
    }
  },
  {
    timestamps: true
  }
);

userSchema.method({
  encryptPassword: (password) => {
    return bcrypt.hashSync(password, bcrypt.genSaltSync(5));
  },
  validatePassword: (pass1, pass2) => {
    return bcrypt.compareSync(pass1, pass2);
  }
});

export { userSchema };

Congratulations !

Tuesday, July 28, 2020

Web app deployment inside of Kubernetes with microk8s

based on the Kubernetes course:
 
1) install microk8s: sudo snap install microk8s
2) enable registry & dns: microk8s.enable registry dns

MONGODB deployment & service
3) configure the mongodb deployment
generate 2 secrets using md5sum from the shell (e.g. echo -n 'myuser' | md5sum) and pass them as environment variables:
MONGO_INITDB_ROOT_USERNAME=--insert_here_encrypted_username-- -e MONGO_INITDB_ROOT_PASSWORD=--insert_here_encrypted_password-- -e MONGO_INITDB_DATABASE=admin
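
The mongodb-deployment.yaml itself is not shown in this post; here is a minimal sketch of what such a deployment could look like, assuming the stock mongo image and the placeholder secrets above (the matching service is omitted):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongo
          image: mongo
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              value: "--insert_here_encrypted_username--"
            - name: MONGO_INITDB_ROOT_PASSWORD
              value: "--insert_here_encrypted_password--"
            - name: MONGO_INITDB_DATABASE
              value: admin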

4) apply the MongoDB database deployment and service
microk8s.kubectl apply -f mongodb-deployment.yaml
5) check the environment variables inside the container
5.1) enter inside the deployment:
microk8s.kubectl exec -it deployment.apps/mongodb-deployment sh
5.2) env
6.1) get inside the mongodb container:
from Docker: docker exec -it mongo bash
from Kubernetes: microk8s.kubectl exec -it mongodb-deployment--insert_your_deployment_id -- /bin/sh
6.2) authenticate to the mongodb database container:
mongo -u insert_here_encrypted_username -p insert_here_encrypted_password --authenticationDatabase admin


Our application deployment & service
7) build the docker image of our application:
docker build . -t localhost:32000/mongo-app:v1
8) test the image using port forwarding:
docker run -p 3000:3000 localhost:32000/mongo-app:v1
or: docker run  -it --rm -p 3000:3000 localhost:32000/mongo-app:v1
9) push the image into the kubernetes registry
docker push localhost:32000/mongo-app:v1
10) apply our custom application: microk8s.kubectl apply -f mongo.yaml
11) check whether the IP addresses of the service and the pods match. This means that the service endpoints are correctly set and match the created pods:
microk8s.kubectl describe service
microk8s.kubectl get pod -o wide


Congratulations!

Friday, July 17, 2020

Permissions inside and outside of Docker containers

References: Docker for web developers course.


1) In Dockerfile, when building a container:
Inside the Dockerfile we can fix the container directory permissions (chown -R www-data:www-data /var/lib/nginx) in order to let nginx function properly.

volumes & a non-empty container dir -> the existing files are copied from the dir into the volume
bind mount & a non-empty container dir -> if there are files in the host dir they stay; nothing is copied into the bind mount point

2) In docker-compose.yml

- volumes (volume:/var/lib/mysql) inherit the permissions and ownership from the user who created the image - usually root.

- bind mounts (/my/own/datadir:/var/lib/mysql) - the permissions and ownership are the same as the directory on your host.

Even if the Dockerfile has USER node, or docker-compose specifies user: "node:node", the local directory will be mounted preserving its UID:GID in the container, ignoring the USER directive.

Special case: a bind mount where the UID in the container != the UID on the host.
The solution is to change the ownership of the local dir before building the container, and to create the bind with the same user/group: chown -R www-data:www-data /var/lib/nginx
There is a catch: when the local UID differs from the container UID, we will still have mismatched permissions inside the container. We can solve this problem using UID/GID synchronization:
// optional
Check the user running the container (see the USER directive of the dockerhub image):
id -u
Check to which group the container user belongs (find its UID/GID):
cat /etc/passwd | grep nevyan
id, groups, grep nevyan /etc/group
// end optional

1) Check the user which runs the server inside the container:
ps aux | grep apache (or your server's process name)
2) When we have the proper UID:GID, we again use chown, but this time not with user/group names but with numeric UID:GIDs.

MySQL example: by default the MySQL image uses a non-root user with uid=1001. If we try to bind mount a local /var/lib/mysql (a MySQL data directory not owned by UID 1001) into a non-root docker container, this will fail, since user 1001 (from the container) needs to perform read/write operations on our local directory.
Solution: change the local directory permissions to the numeric UID/GID expected by the container: sudo chown -R 1001 /my/own/datadir

Subscribe To My Channel for updates