We first need to have mysql-server installed, where phpmyadmin will store its data. For this reason we will run: sudo apt install mysql-server
Then we install some libraries needed for the functioning of phpmyadmin, as well as the phpmyadmin package itself: sudo apt install phpmyadmin php-mbstring php-zip php-gd php-json php-curl php libapache2-mod-php Note: if there is a problem during the installation, you can choose Ignore or Abort in the phpmyadmin configuration dialog.
Let's now log in to MySQL as root: sudo mysql -u root
or, if you already have a user and password, log in with: sudo mysql -u user -p
Next we will adjust the MySQL root password, as well as its method of authentication: ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password';
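To check that the new credentials work, exit the MySQL shell and log in again with: mysql -u root -p (entering the password you set above)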
Optional:
Configure Apache in order to serve phpmyadmin (if not already done by the installation of phpmyadmin): inside /etc/apache2/conf-available/ we create the following phpmyadmin.conf file:
Alias /phpmyadmin /usr/share/phpmyadmin
<Directory /usr/share/phpmyadmin/>
    AddDefaultCharset UTF-8
    <IfModule mod_authz_core.c>
        <RequireAny>
            Require all granted
        </RequireAny>
    </IfModule>
</Directory>
Lastly, we need to activate the above configuration file with: sudo a2enconf phpmyadmin and then restart the apache2 service to reload and accept the changed configuration: sudo systemctl restart apache2.service
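You can also verify the configuration syntax before restarting with: sudo apache2ctl configtest (it should print Syntax OK)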
Now it is time to open http://127.0.0.1/phpmyadmin in the browser and log in with the combination we already set: root / password
our main application file: server.js
import express from "express";
import mongoose from "mongoose";
import dotenv from "dotenv";
// import the routes
import routes from "./routes/routes.js";
// create an express instance
const app = express();
app.use(express.json());
// load the database credentials from .env (before they are needed)
dotenv.config();
// setup the middleware routes
routes(app);
// connect to the database
mongoose.connect(
process.env.DB_CONNECT,
{ useNewUrlParser: true, useUnifiedTopology: true },
() => console.log("connected to mongoDB")
);
// listen for errors
mongoose.connection.on('error', console.error.bind(console, 'MongoDB connection error:'));
// listen on port 3000
app.listen(3000, () => console.log("server is running"));
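dotenv reads DB_CONNECT and TOKEN_SECRET from a .env file in the project root; a minimal sketch (both values are placeholders, not real credentials):
DB_CONNECT=mongodb://localhost:27017/mydb
TOKEN_SECRET=some-long-random-secret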
application routes: routes.js
import { loginUser } from "../controllers/controller.js";
import { info } from "../controllers/info.js"; // the protected route
import { auth } from "../controllers/verifyToken.js"; // middleware for validating the token
import * as path from 'path';
import { fileURLToPath } from 'url';
const __filename = fileURLToPath(import.meta.url); // the absolute path of the current file
const __dirname = path.dirname(__filename); // parse just the directory
const routes = app => {
    app.route("/user/login").get((req, res) => {
        res.sendFile('formLogin.html', { root: path.join(__dirname, "../views") });
    });
    app.route("/user/login").post((req, res) => loginUser(req, res)); // we capture inside req, and res
    app.route("/info").get(auth, (req, res) => info(req, res)); // we capture inside req, and res
    // and insert the auth middleware to process the token
};
export default routes;
our main controller: controller.js
import mongoose from "mongoose";
mongoose.set("useCreateIndex", true);
import { userSchema } from "../models/user.js";
import jwt from "jsonwebtoken";
const User = mongoose.model("users", userSchema); // users is the name of our collection!
export const addNewUser = (req, res) => {
    User.init(() => {
        // init() resolves when the indexes have finished building successfully,
        // in order for the unique check to work
        let newUser = new User(req.body); // just creating w/o saving
        newUser.password = newUser.encryptPassword(req.body.password);
        newUser.save((err, user) => { // now saving
            if (err) {
                return res.json({ 'message': 'duplicate email' }); // return so we don't respond twice
            }
            res.json(user);
        });
    });
};
export const loginUser = (req, res) => {
    if (req.body.password == null || req.body.email == null) {
        res.status(400).json({ 'message': 'Please provide email / password' });
        return;
    }
    // look up the user and check the supplied password (validPassword is a schema method, see user.js)
    User.findOne({ email: req.body.email }, (err, user) => {
        if (err || !user || !user.validPassword(req.body.password)) {
            return res.status(400).json({ 'message': 'invalid email or password' });
        }
        // create and send a token to be able to use it in further requests
        const token = jwt.sign({ _id: user._id }, process.env.TOKEN_SECRET);
        res.header("auth-token", token) // set the token in the header of the response
            .json({ 'token': token }); // display the token
    });
};
js helper middleware for working with JWT tokens: verifyToken.js
import jwt from "jsonwebtoken";
export const auth = (req, res, next) => {
    const token = req.header("auth-token"); // the same header the login response sets
    if (!token) return res.status(401).json({ 'message': 'access denied' });
    try {
        req.user = jwt.verify(token, process.env.TOKEN_SECRET); // throws if the token is invalid
    } catch (err) {
        return res.status(400).json({ 'message': 'Invalid token' });
    }
    // continue from the middleware to the next processing middleware :)
    next();
};
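To try the whole flow from a terminal, here is a sketch (it assumes the server above is running on port 3000 and that a user with these example credentials already exists):
curl -X POST http://localhost:3000/user/login -H "Content-Type: application/json" -d '{"email":"test@example.com","password":"secret"}'
# then call the protected route, passing the token from the previous response:
curl http://localhost:3000/info -H "auth-token: <token from the previous response>"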
user database model: user.js
import mongoose from 'mongoose';
import bcrypt from 'bcryptjs';
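The model file is cut short here; below is a minimal sketch of what user.js needs to contain for controller.js above to work. The field names and the encryptPassword / validPassword method bodies are assumptions inferred from how the controller uses them:
export const userSchema = new mongoose.Schema({
    email: { type: String, required: true, unique: true }, // the unique index triggers the 'duplicate email' error above
    password: { type: String, required: true },
});
// assumed helper: hash the plain-text password (used by addNewUser)
userSchema.methods.encryptPassword = function (password) {
    return bcrypt.hashSync(password, bcrypt.genSaltSync(10));
};
// assumed helper: compare a candidate password with the stored hash (used by loginUser)
userSchema.methods.validPassword = function (password) {
    return bcrypt.compareSync(password, this.password);
};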
7) build the docker image of our application (see the Dockerfile sketch after this list): docker build . -t localhost:32000/mongo-app:v1
8) test the image using port forwarding: docker run -p 3000:3000 localhost:32000/mongo-app:v1
or: docker run -it --rm -p 3000:3000 localhost:32000/mongo-app:v1
9) push the image into the Kubernetes registry: docker push localhost:32000/mongo-app:v1
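The Dockerfile built in step 7 could look like this minimal sketch (the node base image version and the server.js entry point are assumptions):
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]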
Inside the Dockerfile we can also fix the container directory permissions, in order to let nginx function properly: RUN chown -R www-data:www-data /var/lib/nginx
1) When mounting over a non-empty container directory:
- volumes -> the existing files are copied from the container directory into the volume
- bind mounts -> the host files stay as they are; nothing is copied, and the container directory's original contents are hidden by the mount
2) In docker-compose.yml
- volumes (volume:/var/lib/mysql) inherit the permissions and ownership from the user who created the image - usually root.
- bind mounts (/my/own/datadir:/var/lib/mysql) - the permissions and ownership are the same as the directory on your host.
Even if the Dockerfile has USER node, or docker-compose.yml specifies user: "node:node", the local directory will be mounted preserving its UID:GID in the container, ignoring the USER directive.
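For reference, a minimal docker-compose.yml sketch showing both mount styles (the service and volume names are illustrative):
services:
  db:
    image: mysql
    volumes:
      - dbdata:/var/lib/mysql # named volume: ownership inherited from the image (usually root)
      # - /my/own/datadir:/var/lib/mysql # bind mount: keeps the host directory's UID:GID
volumes:
  dbdata: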
Special case: when doing a bind mount and the UID inside the container != the UID on the host.
The solution is to change the ownership of the local dir before building the container, creating the bind with the same user/group: chown -R www-data:www-data /var/lib/nginx
There is a catch: when that name maps to a different UID on the host than in the container, we will still have mismatched permissions. We can solve this problem using UID/GID synchronization:
// optional
Check which user the container runs as, from the Docker Hub image's USER directive, or with: id -u
Check which group the container user belongs to (and find its UID): cat /etc/passwd | grep nevyan
id, groups, grep nevyan /etc/group
// end optional
1) Check which user runs the server inside the container:
ps aux | grep apache (replace apache with your server's name)
2) Once we have the proper UID:GID, we again use chown, but this time not with user/group names but with numeric UID:GIDs.
MySQL example: by default the MySQL image uses a non-root user with uid=1001. If we try to bind mount a local /var/lib/mysql (a MySQL data directory not owned by UID 1001) into the non-root docker container, this will fail, since user 1001 (from the container) needs to perform read/write operations on our local directory.
Solution: change the local directory permissions to the numeric UID/GID expected by the container: sudo chown -R 1001 /my/own/datadir
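For example, assuming such a non-root MySQL image (the image name here is a placeholder):
sudo chown -R 1001 /my/own/datadir
docker run -d -v /my/own/datadir:/var/lib/mysql some-mysql-image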
1) install wine32 first, in order to include the i386 libraries:
sudo apt install wine wine32
2) install winetricks in order to easily install external windows libraries. If you want to know which libraries are required just run wine your_app.exe and check the produced log:
apt install winetricks
3) use winetricks dlls combined with the libraries required by your application:
winetricks dlls mfc42 vcrun2010
4) run wine somefile.exe
Congratulations, and if you would like, you can enjoy the full Ubuntu admin course!
Here is an example of Angular component using a template decorator in TypeScript:
@Component({
    template: '<div>Woo a component!</div>',
})
export class ExampleComponent {
    constructor() {
        console.log('Hey I am a component!');
    }
}
In JavaScript a decorator can be viewed as a composite with only one component and it isn’t intended for object aggregation. Here is the Decorator pattern in action:
const setTemplate = obj => { obj.template = '<div>decorated!</div>'; }; // the decorator (this definition is assumed; the original snippet used setTemplate without showing it)
const component = { template: "<div>hello</div>" };
setTemplate(component); // pass the whole object to the setTemplate function
console.log(component.template); // '<div>decorated!</div>'
Enter mixins: they work well for object aggregation, as well as for sharing behavior across multiple components, but at the same time they have some drawbacks:
const externalLib = {
    // ... other functions we use
    setTemplate: () => { console.log('overriding...'); } // overriding function
};
Introducing partial composition using an inheritance mixin. Properties of the target object are overwritten by properties of the source object if they have the same key; this way later sources' properties overwrite earlier ones:
const myComponent = Object.assign(
    {
        setTemplate: () => { console.log('original'); } // initially in our object; will be overridden by externalLib
    },
    externalLib
);
myComponent.setTemplate(); // 'overriding...'
We can update the mixin code, but this solves the problem only halfway, as this time our function will overwrite the externalLib functionality:
const setTemplate = (state = {}) => { // create a state on the first run
    return { // return a copy of the state object (immutability!)
        ...state,
        change: inputTemplate => { // the template is an input parameter
            state.template = inputTemplate;
            return state;
        },
    };
};
const setLogin = (state = {}) => { // receive the state as a parameter, or create an empty one if not
    return {
        ...state,
        login: () => { console.log('Logged in!'); },
    };
};
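A usage sketch combining the two functional mixins above (the composition itself is illustrative, not part of the original snippets):
const component = setLogin(setTemplate());
component.login(); // 'Logged in!'
const state = component.change('<div>new</div>'); // change() updates the closed-over state and returns it
console.log(state.template); // '<div>new</div>'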