
Thursday, December 25, 2025

Things to do after installing Fedora 43

---
##### SYSTEM UPDATES & FIRMWARE  
sudo dnf upgrade --refresh -y  
##### Check Firmware (Only if supported hardware is found)  
fwupdmgr refresh  
fwupdmgr get-updates  
fwupdmgr update -y  

---
# INSTALL DRIVERS & CODECS 

##### Install RPM Fusion (Free & Nonfree)  
sudo dnf install -y https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
###### Install basic multimedia packages and libraries  
sudo dnf install -y libfreeaptx libldac fdk-aac ffmpeg-libs libva libva-utils openh264 gstreamer1-plugin-openh264 mozilla-openh264 intel-gpu-tools  
##### Swap restricted codecs
sudo dnf swap -y ffmpeg-free ffmpeg --allowerasing  
sudo dnf swap -y mesa-va-drivers mesa-va-drivers-freeworld  
sudo dnf swap -y mesa-vdpau-drivers mesa-vdpau-drivers-freeworld 
##### INTEL DRIVERS  
#### 'libva-intel-driver' is the legacy i965 driver (roughly Sandy Bridge to Skylake).  
#### Newer Intel GPUs (Broadwell and later) use 'intel-media-driver' instead.  
sudo dnf install -y libva-intel-driver  
#### or, on newer hardware:  
# sudo dnf install -y intel-media-driver  
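If you are unsure which generation you have, here is a rough sketch that guesses from the CPU model string (the helper function, its name, and the generation cut-offs are assumptions for illustration; real detection should go by the GPU PCI ID):

```shell
#!/bin/sh
# Heuristic: suggest a VA-API driver package from the CPU model string.
# The cut-offs are approximate; this is a rough guide, not authoritative.
suggest_intel_va_driver() {
  model="$1"
  case "$model" in
    *i[357]-[2-5]*)                           echo "libva-intel-driver" ;;  # Sandy Bridge..Broadwell era
    *i[3579]-[6-9]*|*i[3579]-1[0-9]*|*Ultra*) echo "intel-media-driver" ;;  # Skylake and newer
    *)                                        echo "unknown - check your GPU generation" ;;
  esac
}
# Try it against the running machine, if /proc/cpuinfo is available:
if [ -r /proc/cpuinfo ]; then
  suggest_intel_va_driver "$(grep -m1 'model name' /proc/cpuinfo | cut -d: -f2)"
fi
```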
    
---
# INSTALL APPLICATIONS & TOOLS  
## Archive Tools, System Info, Monitoring  

sudo dnf install -y unzip 7zip 7zip-plugins unrar inxi btop lm_sensors git make gcc mono-devel vlc

## Viber  
wget https://download.cdn.viber.com/cdn/desktop/Linux/viber.rpm  
sudo dnf install -y ./viber.rpm  
## Caprine
sudo dnf copr enable dusansimic/caprine
sudo dnf install -y caprine
## Notejot 
## Notes-xfce  
sudo dnf install xfce4-notes-plugin 

---
# KERNEL CONFIGURATION  

# Enable Vanilla Kernel Copr  
sudo dnf -y copr enable @kernel-vanilla/stable  

sudo dnf upgrade -y 'kernel*'  
# Update metadata expiration for Copr  
sudo sed -i 's!baseurl=https://download.copr.fedorainfracloud.org/results/@kernel-vanilla/\(mainline\|stable-rc\|next\).*!&\nmetadata_expire=1h!g; s!baseurl=https://download.copr.fedorainfracloud.org/results/@kernel-vanilla/\(stable\|fedora\)/.*!&\nmetadata_expire=3h!g;' /etc/yum.repos.d/_copr:copr.fedorainfracloud.org:group_kernel-vanilla:*.repo  
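That sed one-liner is dense; here is what it does, demonstrated on a scratch copy of a repo file (the baseurl below is a fabricated example):

```shell
#!/bin/sh
# Demonstrate the metadata_expire rewrite on a scratch copy of a repo file.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
[copr:copr.fedorainfracloud.org:group_kernel-vanilla:stable]
baseurl=https://download.copr.fedorainfracloud.org/results/@kernel-vanilla/stable/fedora-$releasever-$basearch/
EOF
# The second s-command from the one-liner: append metadata_expire=3h after the baseurl line.
sed -i 's!baseurl=https://download.copr.fedorainfracloud.org/results/@kernel-vanilla/\(stable\|fedora\)/.*!&\nmetadata_expire=3h!g' "$tmp"
out=$(cat "$tmp")
printf '%s\n' "$out"
rm -f "$tmp"
```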
# Secure Boot State  
mokutil --sb-state  

---
# POWER & THERMAL MANAGEMENT  

# Detect sensors  
sudo sensors-detect --auto  
# Install Thermald  
sudo dnf install -y thermald  
sudo systemctl enable --now thermald  

# --- OPTION A: Power Profiles Daemon (Default/Recommended for GNOME/KDE) ---

sudo dnf install -y power-profiles-daemon  
sudo systemctl enable --now power-profiles-daemon  
powerprofilesctl set performance  
  
# --- OPTION B: TLP (Advanced battery life) ---  
sudo systemctl mask power-profiles-daemon

sudo dnf install -y tlp tlp-rdw  
sudo systemctl enable --now tlp  

## --- OPTION C: auto-cpufreq --- 
git clone https://github.com/AdnanHodzic/auto-cpufreq.git  
cd auto-cpufreq && sudo ./auto-cpufreq-installer  
sudo auto-cpufreq --install  
auto-cpufreq --stats  

### When the fans are not working use NBFC-LINUX 
git clone https://github.com/nbfc-linux/nbfc-linux.git  
cd nbfc-linux  
make  
sudo make install  

---
# Boot speedup
### Disable NetworkManager wait online
sudo systemctl disable NetworkManager-wait-online.service  
## disable autoupdates
sudo systemctl stop packagekit  
sudo systemctl mask packagekit 
### FILESYSTEM MAINTENANCE  
sudo fstrim -av  
sudo systemctl enable --now fstrim.timer

### MANUAL CONFIGURATION 
### /etc/dnf/dnf.conf    
max_parallel_downloads=10
### /etc/default/grub   
GRUB_CMDLINE_LINUX="rhgb quiet acpi_enforce_resources=lax mitigations=off"
### note: mitigations=off trades CPU security mitigations for speed
sudo grub2-mkconfig -o /boot/grub2/grub.cfg

sudo grubby --update-kernel=ALL --args="preempt=full mitigations=off"

## save ram 
Tune ZRAM for performance.
sudo nano /etc/systemd/zram-generator.conf   
    [zram0]  
    zram-size = ram  
    compression-algorithm = zstd  
Restart the service:  
    sudo systemctl restart systemd-zram-setup@zram0.service
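After the restart, zramctl and swapon --show confirm the active device. As a pre-reboot sanity check, here is a small sketch that parses the config fragment above (the awk patterns are assumptions tied to this exact file layout):

```shell
#!/bin/sh
# Quick sanity check of a zram-generator.conf fragment before rebooting.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[zram0]
zram-size = ram
compression-algorithm = zstd
EOF
# Pull out the two values; the field separator eats "=" plus trailing spaces.
algo=$(awk -F'= *' '/^compression-algorithm/ {print $2}' "$cfg")
size=$(awk -F'= *' '/^zram-size/ {print $2}' "$cfg")
echo "zram0 will use $algo compression, sized at: $size"
rm -f "$cfg"
```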


# Fedora Kde specific:
sudo systemctl disable --now switcheroo-control.service ModemManager.service pcscd.service abrtd.service abrt-journal-core.service abrt-oops.service abrt-xorg.service rsyslog.service atd.service gssproxy.service rpc-statd-notify.service irqbalance.service 
balooctl disable   # on newer Plasma 6 systems the command is balooctl6 disable

---
# CHECKS  
systemd-analyze blame | head -n 100 
vainfo  
sudo intel_gpu_top 
inxi
cpupower frequency-info  
systemctl list-units --type=service  


Monday, April 07, 2025

Modernizing an old PHP project with the help of AI

0. Keep Docker running in a separate terminal outside of VS Code
1. The importance of Git for version control - create a modernization branch
2. Add rules:
Prefer simple solutions
Only make requested changes that are well understood and related to the request
Think about what other methods and areas of code might be affected by code changes
Always look for existing code to iterate on instead of creating new code
Avoid code duplication: check whether other areas of the codebase already have similar code and functionality
Do not touch code that is unrelated to the task
Focus on the code areas that are relevant to the task
Keep the codebase clean and organized
Generate code for specific, defined transformations (e.g., adding a namespace to one file, updating callers for one file); do not perform project-wide automated changes

3. Have two different chats: one for fixes, one for enhancements

1. Refactoring:
Start refactoring with Cursor using agents, since they gather the best context: use a thinking model for a better understanding of the prompt and of deep code relations.
Initial prompt: Analyze the include and require statements in /src. Suggest and create a PSR-4 namespace structure (e.g., App\), then move, update and refactor the existing files based on the new structure. @src
Context: If the project is large, instead of providing the entire src directory, provide more specific context (e.g., @src/Models or @UserModel.php @Database.php) when the task is focused on a particular area.
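The PSR-4 move usually ends with mapping the namespace in composer.json and regenerating the autoloader; a minimal sketch on a scratch directory (the App\ prefix and the src/ path are assumptions matching the prompt above):

```shell
#!/bin/sh
# Map the App\ namespace to src/ so Composer can autoload the moved files.
tmp=$(mktemp -d)
cat > "$tmp/composer.json" <<'EOF'
{
    "autoload": {
        "psr-4": {
            "App\\": "src/"
        }
    }
}
EOF
json=$(cat "$tmp/composer.json")
printf '%s\n' "$json"
# composer dump-autoload   # run this in the real project afterwards
rm -rf "$tmp"
```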

2. Fixes:


Prompt 1: could you fix the following: ... Keep in mind that the project path /var/www/html (inside Docker) points to the location /real_path/src

Prompt 2: I am receiving the following warnings, could you try to fix them? Add the whole @src directory as context.
(repeat multiple times until the code is running)

If there is a persistent bug/error, use the Gemini Code extension, but carefully provide the context of related files!

Remember: on success, commit to Git and then continue. Don't overflow the chat buffer - start a new chat.

Second prompt: Could you suggest an optimization of the PHP class structure that could encapsulate the logic and data handled in this code. Please use PDO and make it compatible with the rest of the code.

3. Advanced prompts:
3.1. Refactor the PHP code to improve readability and maintainability. Consider breaking down long functions, using modern PHP syntax (e.g., PHP 7.x/8.x features like the null coalescing operator, short array syntax, type hints if applicable), and adhering to PSR-12 coding standards.
3.2. Look at the application as a whole and tell if there are architectural patterns that can be used to improve the project.

4. Security
Analyze this PHP code snippet for common security vulnerabilities like SQL Injection, Cross-Site Scripting (XSS), and insecure variable handling. Suggest safer alternatives using modern practices (e.g., prepared statements, output escaping). Add rate limiting if possible.
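Before prompting, it can help to locate the hot spots yourself. Below is a crude first-pass grep for superglobals appearing on the same line as a query call, demonstrated on a fabricated file (the pattern is a heuristic; it will both miss and over-match cases):

```shell
#!/bin/sh
# Crude first pass: flag lines where a query call and a raw superglobal meet.
tmp=$(mktemp -d)
cat > "$tmp/legacy.php" <<'EOF'
<?php
$r = mysqli_query($db, "SELECT * FROM users WHERE id=" . $_GET['id']);
$safe = $pdo->prepare("SELECT * FROM users WHERE id = ?");
EOF
hits=$(grep -rn 'query.*\$_\(GET\|POST\|REQUEST\)' "$tmp")
printf '%s\n' "$hits"    # line 2 is flagged; the prepared statement is not
rm -rf "$tmp"
```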

Tuesday, October 22, 2024

Integrating AI code helpers into Visual Studio Code

In this guide, we’ll walk through setting up a local AI-powered coding assistant within Visual Studio Code (VS Code). By leveraging tools such as Ollama, CodeStral, and the Continue extension, we can enhance our development workflow, get intelligent code suggestions, and even automate parts of the coding process.

Installation:

Configuring the Environment:

  • Step 1: Installing the Required Extensions

    1. Install CodeStral: Open the Extensions panel in VS Code, search for “CodeStral,” and click “Install.” This extension helps with managing your local AI models for code assistance.
    2. Configure CodeStral: Once installed, follow the extension’s configuration guide. You will need to install additional components, such as Ollama, which sets up a local server to run AI models.

    Step 2: Setting Up Ollama for Local AI Models

    • Install Ollama: Download and install Ollama, a server that allows your system to host AI models locally. Once installed, you should be able to run models with the ollama run <model> command from your terminal.
    • Use Granite 8B Model: For code suggestions, use the Granite 8B model from IBM, which is optimized for code-related tasks. Note that loading the model may take some time, as it is about 5 GB in size.

    Step 3: Working with the Continue Extension

    • Installing Continue: The Continue extension integrates with models running on Ollama and helps provide code assistance based on context.
    • Configure Privacy Settings: If you want to work offline or avoid telemetry, open your configuration file (config.json) and set the allowTelemetry parameter to false.
    • Configure Continue extension: Open Continue's config.json (via the extension's settings, or Ctrl+P and search for config.json) and add the following configuration:
    JSON
    {
      "models": [
        {
          "provider": "ollama",
          "name": "granite_8b",
          "model": "granite_8b"
        }
      ],
      "allowTelemetry": false
    }
    • Using the Extension: You can highlight specific parts of your code, press Ctrl+L, and interact with the AI. For instance, if you highlight a few lines of code and ask, “How can I improve this code?” the AI will analyze the snippet and suggest improvements.
    • Accepting Code Changes: Once the AI provides suggestions, you can either accept the changes directly or refine them by asking follow-up questions.

    Step 4: Advanced Features

    • Context Detection: Continue supports context detection across files, repositories, and even web URLs. It can analyze your entire project and provide suggestions based on your overall code structure.
    • Working Offline with Privacy: If privacy is important, Continue can be configured to keep everything offline, unlike some extensions that send data for research purposes.

    Step 5: Hardware Considerations and Speed Optimization for Ollama

    • Optimizing GPU Usage: If you have multiple Nvidia GPUs, use the nvidia-smi -L command to identify the unique ID of each card. You can then set the CUDA_VISIBLE_DEVICES environment variable to ensure the AI model utilizes the right GPU for faster performance.
    • Check Logs: Periodically check Ollama’s logs to troubleshoot any issues, such as problems initializing the server or GPU.
    • Hardware Recommendations: If possible, use more powerful GPUs like RTX 4070 or 3090 for faster model performance, especially when running large models.

    Use Cases of Cursor editor, or Cody VSCode extension

    • Highlight the relevant code and use the AI tool to suggest changes. While having the code highlighted, by pressing Ctrl +K in Cursor you can even ask the assistant to rewrite the code, which will automatically update the code in place.
    • By using the option to scan the entire codebase from the chat menu option, the AI can add it to the context and suggest improvements.
    • You can switch between different AI models, such as Claude or Gemini, depending on your coding needs. Each model has strengths in areas like code generation or identifying code smells.
    • If the model generates incorrect suggestions, you can refine your query or switch models to get a better response. Always test and review changes before fully integrating them into your codebase.


Monday, February 26, 2024

Burnout or toxic culture ?

Outsourcing companies are hell for an experienced programmer, because managers are allowed to make mistakes, which are then covered up, putting the rest of the workers in an unfavourable position.

So it is very important to keep track of your health, and please do not try to compensate for the toxic workplace effect by taking it out on other people, overeating, not sleeping, etc.

Restorative activities are running and weight lifting.

From the herbs I recommend thyme, hawthorn (glog) and passiflora tea to help with sleep, as well as taking ashwagandha.

Take your time and enjoy!

Thursday, April 21, 2022

Simple Laravel REST API


1) Create the User model in app/Models/User.php 

php artisan make:model User

<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Foundation\Auth\User as Authenticatable;
use Illuminate\Notifications\Notifiable;
use Laravel\Sanctum\HasApiTokens;

class User extends Authenticatable
{
    use HasApiTokens, HasFactory, Notifiable;

    /**
     * The attributes that are mass assignable.
     *
     * @var array<int, string>
     */
    protected $fillable = [
        'name',
        'email',
        'password',
    ];

    /**
     * The attributes that should be hidden for serialization.
     *
     * @var array<int, string>
     */
    protected $hidden = [
        'password',
        'remember_token',
    ];

    /**
     * The attributes that should be cast.
     *
     * @var array<string, string>
     */
    protected $casts = [
        'email_verified_at' => 'datetime',
    ];
}

 

2) 

create validation for the update requests:  php artisan make:request UserUpdateRequest

<?php

namespace App\Http\Requests;

use Illuminate\Foundation\Http\FormRequest;

class UserUpdateRequest extends FormRequest
{
    /**
     * Determine if the user is authorized to make this request.
     *
     * @return bool
     */
    public function authorize()
    {
        return true;
    }

    /**
     * Get the validation rules that apply to the request.
     *
     * @return array
     */
    public function rules()
    {
        return [
            'name' => 'required',
            'email' => 'required|email',
            // 'password' => ''
        ];
    }
}


 and for the post request:

php artisan make:request UserPostRequest

<?php

namespace App\Http\Requests;

use Illuminate\Foundation\Http\FormRequest;

class UserPostRequest extends FormRequest
{
    /**
     * Determine if the user is authorized to make this request.
     *
     * @return bool
     */
    public function authorize()
    {
        return true;
    }

    /**
     * Get the validation rules that apply to the request.
     *
     * @return array
     */
    public function rules()
    {
        return [
            'name' => 'required',
            'email' => 'required|email',
            'password' => 'required'
        ];
    }
}


 

 

create user controller based on the user model: php artisan make:controller UserController --model=User --resource

<?php

namespace App\Http\Controllers;

use App\Http\Controllers\Controller;
use App\Http\Requests\UserPostRequest;
use App\Http\Requests\UserUpdateRequest;
use App\Http\Resources\UserCollection;
use App\Http\Resources\UserResource;
use App\Models\User;
use Illuminate\Support\Facades\Hash;

class UserController extends Controller
{
    public function index()
    {
        return new UserCollection(User::paginate(5));
    }

    public function store(UserPostRequest $request)
    {
        $userData = $request->validated();
        $userData['password'] = Hash::make($userData['password']);
        $userData['email_verified_at'] = now();

        $user = User::forceCreate($userData);

        return new UserResource($user);
    }

    public function show(User $user) // route model binding
    {
        return new UserResource($user);
    }

    public function update(UserUpdateRequest $request, User $user)
    {
        $user->update($request->validated());

        return new UserResource($user);
    }

    public function destroy(User $user)
    {
        $user->delete();

        return response()->noContent();
    }
}



3) create resources/UserCollection:  php artisan make:resource UserCollection

to return user collection and user resource, when required by the user controller.

<?php

namespace App\Http\Resources;

use Illuminate\Http\Resources\Json\ResourceCollection;

class UserCollection extends ResourceCollection
{
    /**
     * Transform the resource collection into an array.
     *
     * @param \Illuminate\Http\Request $request
     * @return array|\Illuminate\Contracts\Support\Arrayable|\JsonSerializable
     */
    public function toArray($request)
    {
        return [
            'data' => $this->collection,
            'total_count' => $this->total()
        ];
    }
}

 

create UserResource, which exposes the fields to be returned in the JSON response:

<?php

namespace App\Http\Resources;

use Illuminate\Http\Resources\Json\JsonResource;

class UserResource extends JsonResource
{
    /**
     * Transform the resource into an array.
     *
     * @param \Illuminate\Http\Request $request
     * @return array|\Illuminate\Contracts\Support\Arrayable|\JsonSerializable
     */
    public function toArray($request)
    {
        return [
            'id' => $this->id,
            'name' => $this->name,
            'email' => $this->email,
        ];
    }
}

 

4) The request classes created in step 2 already enable the requests to be performed and validate the posted and updated information; no further changes are needed there.

5) add a route in routes/api.php in order to direct /users requests to the UserController.
<?php

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;

use App\Http\Resources\UserCollection;
use App\Models\User;
use App\Http\Controllers\UserController;

Route::apiResource('users', UserController::class);
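Route::apiResource registers five routes mapped to the controller methods above. Assuming the app serves at localhost:8000 (adjust to your setup), the calls look like this; the sketch only prints them:

```shell
#!/bin/sh
# The five endpoints Route::apiResource('users', ...) registers, as curl calls.
# The base URL is an assumption; adjust it to your environment.
routes=$(cat <<'EOF'
curl http://localhost:8000/api/users                       # index   -> paginated UserCollection
curl http://localhost:8000/api/users/1                     # show    -> single UserResource
curl -X POST http://localhost:8000/api/users -d 'name=John&email=j@example.com&password=secret'   # store
curl -X PUT http://localhost:8000/api/users/1 -d 'name=Johnny&email=j@example.com'                # update
curl -X DELETE http://localhost:8000/api/users/1           # destroy -> 204 No Content
EOF
)
printf '%s\n' "$routes"
```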

Cheers!

Wednesday, April 20, 2022

WSL2 - how to make it accessible from other machines

1) make the app listen on the wildcard address 0.0.0.0

ss -tlnp will show you which address/port the app is listening on.

2) use powershell to setup portproxy to forward all the outside requests to the windows machine to land in the WSL2 system:

netsh interface portproxy add v4tov4 listenport=3000 listenaddress=0.0.0.0 connectport=3000 connectaddress=localhost

listenport and listenaddress are on the Windows side.

connectport and connectaddress are on the WSL2 side.

(for a Node app the listening port (connectport) is usually 3000; check your app's listening port in step 1)

verify with:  netsh interface portproxy show all

3) open port 3000 on the firewall with:

netsh advfirewall firewall add rule name="WSL2 app" dir=in action=allow protocol=TCP localport=3000

verify from the windows defender firewall, advanced settings, inbound rules.

Cheers !

Thursday, April 14, 2022

NativeScript - IOS Xcode build and development settings

Instructions for MAC M1 instance:

1) Update your package.json with the latest tns-ios version!

2) run from /platforms directory: tns platform remove ios, tns platform install ios 

3) tns prepare ios, and follow the settings for:

Xcode 12

build:

and development(emulator):

Keep in mind to change VALID_ARCHS to x86_64 for builds, and

to arm64 for development, respectively.

 

for Xcode 13 build just change the VALID_ARCHS to:

Thursday, April 07, 2022

Install Laravel Sail on Windows


 

10 Steps to install Laravel Sail and start developing web applications under WSL:

1. from Turn Windows features on and off:

choose Windows subsystem for Linux (WSL) -> and restart the system

2. update the kernel of WSL from https://wslstorestorage.blob.core.windows.net/wslblob/wsl_update_x64.msi

3. set the default version to 2: wsl --set-default-version 2

4. install from Microsoft Store: Ubuntu

open Command prompt, and type ubuntu

5. Update the ubuntu system:

sudo apt update && sudo apt dist-upgrade -y

6. Setup Docker: Install Docker Desktop

Go to Settings(icon) then check: General->Use the WSL2 based engine, as well as

Resources->WSL INTEGRATION-> enable integration with my default WSL distro, check also Ubuntu and restart the Docker Desktop app.

7. run inside Ubuntu: curl -s https://laravel.build/example-app | bash

8. start the containers with: ./vendor/bin/sail up

9. you can browse: 127.0.0.1:80

10. in another Ubuntu terminal run: code .

so that you can edit your files inside Visual Studio Code.
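Typing ./vendor/bin/sail gets old quickly; the Laravel docs suggest a shell alias, sketched here as a function so it also works in scripts (the error message is my own addition):

```shell
#!/bin/sh
# Convenience wrapper so `sail up`, `sail artisan ...` work from the project root.
sail() {
  if [ -f ./vendor/bin/sail ]; then
    sh ./vendor/bin/sail "$@"
  else
    echo "run this from a Laravel project root (vendor/bin/sail not found)" >&2
    return 1
  fi
}
# example usage: sail up -d && sail artisan migrate
```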

Cheers!

Wednesday, April 06, 2022

Install WIFI on Ubuntu linux via terminal


 

Steps:

with lsusb we can first see if the device is recognised correctly.

then type: iwconfig then use the Tab key to get to your device name

then edit /etc/wpa_supplicant/wpa.conf

and place there:

network={
    ssid="network_id"
    psk="encoded_password"
}

(you need to supply your own network_id and encoded_password;

you can get the encoded_password by typing:

wpa_passphrase your_ssid

then type a password,

and you'll get a sample config file with the encoded password, which you can overwrite the original file with.)

Next: start the wpa supplicant with:

sudo wpa_supplicant -Dwext -iwxl...(wifi interface id) -c/etc/wpa_supplicant/wpa.conf
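Putting the steps together as a safe-to-run sketch (the interface name is a placeholder, the config is written to a scratch path, and the privileged commands are left commented):

```shell
#!/bin/sh
# End-to-end sketch: generate the config, then (commented) bring the link up.
# IFACE is a placeholder; find yours with `iwconfig` or `ip link`.
IFACE=wlx000000000000
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
network={
    ssid="network_id"
    psk="encoded_password"
}
EOF
conf_out=$(cat "$cfg")
printf '%s\n' "$conf_out"
# In a real run, replace the placeholders (wpa_passphrase generates the psk) and:
# sudo cp "$cfg" /etc/wpa_supplicant/wpa.conf
# sudo wpa_supplicant -B -Dnl80211,wext -i "$IFACE" -c /etc/wpa_supplicant/wpa.conf
# sudo dhclient "$IFACE"   # don't forget to request an IP address
rm -f "$cfg"
```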

 

Enjoy!

Laravel RabbitMQ queues


 

In order to connect Laravel with RabbitMQ we will need the following library:

composer require vladimir-yuldashev/laravel-queue-rabbitmq
then
in config/queue.php add the following configuration:
'connections' => [
    // ...

    'rabbitmq' => [
    
       'driver' => 'rabbitmq',
       'queue' => env('RABBITMQ_QUEUE', 'default'),
       'connection' => PhpAmqpLib\Connection\AMQPLazyConnection::class,
   
       'hosts' => [
           [
               'host' => env('RABBITMQ_HOST', '127.0.0.1'),
               'port' => env('RABBITMQ_PORT', 5672),
               'user' => env('RABBITMQ_USER', 'guest'),
               'password' => env('RABBITMQ_PASSWORD', 'guest'),
               'vhost' => env('RABBITMQ_VHOST', '/'),
           ],
       ],
   
       'options' => [
           'ssl_options' => [
               'cafile' => env('RABBITMQ_SSL_CAFILE', null),
               'local_cert' => env('RABBITMQ_SSL_LOCALCERT', null),
               'local_key' => env('RABBITMQ_SSL_LOCALKEY', null),
               'verify_peer' => env('RABBITMQ_SSL_VERIFY_PEER', true),
               'passphrase' => env('RABBITMQ_SSL_PASSPHRASE', null),
           ],
           'queue' => [
               'job' => VladimirYuldashev\LaravelQueueRabbitMQ\Queue\Jobs\RabbitMQJob::class,
           ],
       ],
   
       /*
        * Set to "horizon" if you wish to use Laravel Horizon.
        */
       'worker' => env('RABBITMQ_WORKER', 'default'),
        
    ],

    // ...    
], 
 
then you need to edit the .env file, supplying your settings under the rabbitMQ section:
RABBITMQ_HOST, RABBITMQ_PORT, RABBITMQ_USER, RABBITMQ_PASSWORD, RABBITMQ_VHOST 

also for the QUEUE_CONNECTION you should supply: rabbitmq

Now lets create a job in the terminal with:

php artisan make:job TestJob
it will handle all the incoming queue events. Its contents under /app/Jobs:
private $data;
    /**
     * Create a new job instance.
     *
     * @return void
     */
    public function __construct($data)
    {
        //
         $this->data = $data;
    }

    /**
     * Execute the job.
     *
     * @return void
     */
    public function handle()
    {
        print_r($this->data);
    } 
 
Finally we connect and run the job handler created above in order to handle events. Inside EventServiceProvider.php,
inside the boot() function add:
$this->app->bind(
    TestJob::class."@handle",
    fn($job) => $job->handle() // this will run the handle() function from above.
);
Then inside of a controller you can run:
use App\Jobs\TestJob;
TestJob::dispatch('hello'); 
you can process the queue with: php artisan queue:work

Cheers!

Install Angular Material on Ubuntu

Here is how to install Angular Material on Ubuntu:

 

1. Install NODEJS/NPM

inside of a terminal type: sudo apt install nodejs

as an alternative you can use nvm:

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash 

then just type: nvm install --lts

this will download install and use latest long-term supported version of node.

 

2. Install the angular CLI

with npm i -g @angular/cli

 

3. Create new project: ng new myproject

 

4. Add Material Design: ng add @angular/material

 

5. Restart ng serve if running and enjoy your Material enabled project!

Tuesday, March 29, 2022

Ubuntu: how to restore packages after interrupted apt upgrade

Often you might interrupt a running apt update && apt dist-upgrade.

Here is a one-line command that will resume reinstalling the unfinished (half-configured) packages for you. It creates a list of packages which can be passed to apt install:

grep  "08:18:.* half-configured"  /var/log/dpkg.log.1 /var/log/dpkg.log |  awk '{printf "%s ", $5}'

the first part of the command grabs only the half-configured packages, while the second part grabs just the package names.

Here is the command in full:

sudo apt install --reinstall $(grep  "08:18:.* half-configured"  /var/log/dpkg.log.1 /var/log/dpkg.log |  awk '{printf "%s ", $5}')
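Here is the grep/awk pipeline exercised on a fabricated log excerpt, so you can see exactly what it extracts (timestamps and package names are made up):

```shell
#!/bin/sh
# Demo of the grep|awk extraction on a fake dpkg.log excerpt.
log=$(mktemp)
cat > "$log" <<'EOF'
2022-03-28 08:18:01 status half-configured libfoo:amd64 1.0-1
2022-03-28 08:18:02 status installed libbar:amd64 2.0-1
2022-03-28 08:18:03 status half-configured libbaz:amd64 3.0-1
EOF
# $5 is the package name in dpkg's "date time status state package version" format.
pkgs=$(grep "08:18:.* half-configured" "$log" | awk '{printf "%s ", $5}')
echo "would reinstall: $pkgs"
rm -f "$log"
```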

You can replace 08:18 with the time you know the packages were interrupted from installing.

Best luck!

Monday, March 28, 2022

Wordpress customizations inside functions.php

Here are few tips on how to customize your Wordpress, without having to resort to plugins, just insert the following php code inside your functions.php file. I will be adding more.

Redirect inner page to outer domain:
add_action('template_redirect','redirect_from_to');
function redirect_from_to(){
  if (is_page('mypage')){
    wp_redirect('http://www.google.com',301);
    exit;
  }
}
Note: mypage must be created in order for the redirect to work.


Allow svg files to be uploaded:

function cc_mime_types($mimes){

$mimes['svg']='image/svg+xml';

return $mimes;

}

add_filter('upload_mimes','cc_mime_types');


Cheers!

Monday, February 22, 2021

Debug Laravel / PHP applications with XDebug in VSCODE

We will setup debugging using xdebug with PHP inside of visual studio code. 

Quick setup:

1) install php-xdebug:

sudo apt install php-xdebug

2) inside of php.ini at the end of the file set: 

[xdebug] 

xdebug.start_with_request = yes 

xdebug.mode = debug 

xdebug.discover_client_host = false 

3) install php debug extension in VSCODE and set the port of the vscode php debug extension to 9003.

Now you can press F5 and start debugging.

 

 

 

Alternatively you can install xdebug using pecl. 

The setup is valid for Ubuntu both on bare-metal as well as under Windows 10 with WSL.

Enjoy !

Sunday, September 27, 2020

JWT - JSON WEB TOKENS security

Refresh tokens are a helpful stateless technology: they have a longer expiry time than the secure (access) tokens and can be used to send requests back to the server for reissuing normal secure tokens.

The primary aim of a refresh token is to regenerate the authentication for the user in such a way that the user doesn't need to manually re-login into the system.

The flow of using refresh together with secure tokens is the following: initially, we make a request containing a valid user/password combination to a server. After performing checks, the server generates and returns to us a pair of secure and refresh tokens. It sends the refresh token as an http-only cookie, which cannot be read or modified by the browser. Later, when the secure token is about to expire, we use the cookie containing the refresh token to make a request to the server. The server checks its validity in its database and sends back to the client a new pair of refresh and secure tokens.

In summary, we use refresh tokens when our access token has expired and we would like to renew it, as well as to renew the refresh token itself. That is why it has a longer expiration time than the access token. Keep in mind that when the refresh token expires, we need to manually re-login the user. For the technical implementation it is very good if you manage to place the refresh token inside an http-only cookie, because on the client side JavaScript and other techniques cannot be exploited to modify this type of cookie. Even if attackers send a refresh request to the server, they cannot read the newly issued secure token. If you would like to increase the security of the generated tokens, you can also include browser and OS fingerprinting inside the token payload.

It is good if the authentication server can perform the following specific actions: generate access and refresh tokens, and revoke tokens (delete the refresh token). When a refresh token is generated, it usually goes through the following process: check whether there is a user id in the internal database with a token, check the validity of the token, and check the number of tokens for this user (one user can generate tokens until they overflow our database, which is also a type of attack). When everything is checked, we can save the newly generated token into our database.


 

Access token is used when performing service requests

the secret key is stored on the server and used to sign the JWT payload:
const Token = jwt.sign(
  { user: myUser }, // payload
  jwtSecretKey,
  { expiresIn: '30d' }
);
on the client side the access token resides in local storage

 

1) Client side authentication - POST request to get the token:
payload: {
  username: req.body.user,
  password: req.body.password
}

Response
Bearer: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwMSIsIm5hbWUiOiJKb2huIERvZSIsImlhdC
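A bearer token is just three base64url segments (header.payload.signature), so you can peek at the claims without any library. Here is a debugging sketch with a fabricated token; decoding is not verification, so never trust a payload whose signature you haven't checked:

```shell
#!/bin/sh
# Decode a JWT payload for debugging; decoding is NOT verification.
b64url_decode() {
  s=$(printf '%s' "$1" | tr '_-' '/+')
  # restore the padding that base64url strips
  case $(( ${#s} % 4 )) in
    2) s="$s==" ;;
    3) s="$s=" ;;
  esac
  printf '%s' "$s" | base64 -d
}
# Fabricated token: real ones come from the auth server's response.
header=$(printf '{"alg":"HS256","typ":"JWT"}' | base64 | tr -d '=\n' | tr '/+' '_-')
claims=$(printf '{"sub":"1234567890","name":"John Doe"}' | base64 | tr -d '=\n' | tr '/+' '_-')
jwt="$header.$claims.fake-signature"
decoded=$(b64url_decode "$(printf '%s' "$jwt" | cut -d. -f2)")
echo "$decoded"
```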

2) Client side: request + Authorization header
fetch(url, {
        method: 'GET',
        withCredentials: true,
        credentials: 'include',
        headers: {
            'Authorization': bearer,
        }
    })

3) Server side authorization - request a service with the token:
// const token = req.get('Authorization');
// token = token.slice(7, token.length);

app.route('/secret_url').post(
  jwtVerifier,
  (req, res) => res.send('info')); // secret information

 


Refresh token is used when access token is expired in order to produce new access and refresh tokens.

  • has longer expire time than the access token, if expires the user is logged out.
  • on client side resides in httponly cookie, so client cannot modify it (attacker cannot get the new JWT refresh token)
  • includes browser fingerprint for extra security


The auth server can perform specific actions:

  • generate new access and refresh tokens
  • refresh tokens:
    •  check the user_id from the http transmitted refresh token cookie against internal refresh tokens list in order to regenerate new access & refresh tokens:
      • check refresh token validity (by comparing user_id inside the issued token list for the requested user)
      • prune the number of generated refresh tokens (because the user can be logged in from different devices)
      • save in a db the generated refresh tokens
  • revoke token (delete refresh token)

The practical implementation of both JWT secure and refresh tokens can be seen in these 2 courses:

Angular with PHP and JSON web tokens (JWT)

JavaScript User Authentication Login Script (JWT)

 

Congratulations !

 

Sunday, September 20, 2020

Starting with React


Here is how to create a simple application with the React front-end framework:


 

Setup the project:

sudo npm i -g create-react-app // install the dependencies
npx create-react-app my-react-app // create the initial application
npm start //start the live development server


App.js
import React from 'react';
import './App.css';
import Login from './loginComponent'; // default import
import { UsersList } from './usersList'; // named import

const users = [ // data for the nested/presentational components
  { name: 'John', occupation: 'student', age: 23 },
  { name: 'Pete', occupation: 'teacher', age: 30 },
  { name: 'Anna', occupation: 'programmer', age: 35 }
];

const App = () => {
  // double curly braces: the outer pair embeds an expression in JSX,
  // the inner pair is an object literal
  return (
    <div className="App">
      <Login user={{ name: "John", uid: 1000 }} />
      <header className="App-header">
        <UsersList users={users} />
      </header>
    </div>
  );
}

export default App;


usersList.js
import React from 'react';
import { UsersListItem } from './usersListItem'; // named import

// destructure the users array out of the props object
export const UsersList = ({ users }) =>
( // with () the arrow function returns implicitly, no {} and return needed
  <>
    { users.map(user => <UsersListItem user={user} key={user.name} />) }
  </>
);
// the React import is required because we return a JSX fragment


usersListItem.js
import React from 'react';

const sayMyName = (name) => {
  alert(name);
}

export const UsersListItem = ({ user }) =>
(
  <div>
    {user.name} -
    {user.occupation} -
    {user.age}
    <button onClick={ () => sayMyName(user.name) }>Display name</button>
  </div>
);


loginComponent.js
import React from 'react';
// shows imports, conditionals, DOM elements and a default export,
// and how to display props by destructuring them in the parameter list

const Login = ({ user }) => {
  let isAdmin = user.uid === 1000;
  let logged_in = true;
  // nested conditional rendering
  return logged_in ? (
    <>
      hello mr.{user.name}
      { isAdmin ? `you are admin (conditional)` : null }
    </>
  )
  : (<>please login</>)
}

export default Login;

Tuesday, September 15, 2020

Deploy Angular app to Vercel and Firebase for free

Here is how to do it very quickly:


 

For Firebase you'll need to install the following schematics:
ng add @angular/fire

then just do:
ng deploy

you'll probably be asked to authenticate in the browser, and then your project will be live on the Internet.

If you would like to use serverless Node.js functions, here is the way:
sudo npm install -g firebase-tools 

firebase init functions

This will install and initialize the functions. Then go to the newly created /functions directory and install your packages, for example: npm install nodemailer cors, etc.

And now is time to edit the auto-generated index.js file.

When you are happy with the generated function you can deploy it; just run from the same directory:

firebase deploy

For Vercel, after registration just link your GitHub repository to Vercel. You can see/edit your current local git configuration with:

git config --local -e

To link the remote origin of your repository to the local git repo use: 

git remote add origin https://github.com/your_username/project.git

if there is something on the remote side, you can overwrite it with the local version using:
git push --set-upstream origin master -f

 
or just pull and merge the remote version: git pull origin master

Then just do your commits, and when pushing you'll have the new version synchronized on the Internet.

Congratulations and enjoy the: Angular for beginners - modern TypeScript and RxJS course!

Sunday, September 13, 2020

Web development in LXC / LXD containers on Ubuntu

Here is how to do web development using the very fast Ubuntu-native LXC/LXD containers. Part of the Practical Ubuntu Linux Server for beginners course.



First let's install LXD on our system:
sudo snap install lxd

Then initialize the basic environment:
sudo lxd init

We will also fetch an image from a repository: linuxcontainers.org

and will start a container based on it:
sudo lxc launch images:alpine/3.10/amd64 webdevelopment

Let's see what we have in the system:
sudo lxc ls
sudo lxc storage ls
sudo lxc network ls

Now it is time to access the container with: sudo lxc exec webdevelopment sh
and then we will use apk to install the OpenSSH server:
apk add openssh-server
let's also add an unprivileged user in order to log in over SSH:
adduser webdev

we will also start the server: 

service sshd start

OK, let's check the address of the container with: ip a

Now we exit the shell (sh) and we can connect to the container using our new user: ssh webdev@<container-ip>

Alright, now go back inside the container and we will add the Apache service:
apk add apache2
service apache2 restart

Optional:

If we need to get rid of the container, we need to stop it first:
sudo lxc stop webdevelopment
sudo lxc delete webdevelopment

If we need to get rid of the created storage pool, we run the following:
printf 'config: {}\ndevices: {}' | lxc profile edit default
lxc storage delete default

If we need to remove the created network bridge we can run:
sudo lxc network delete lxdbr0

Congratulations and happy learning !

Tuesday, September 01, 2020

Skaffold on microk8s Kubernetes

Here is how to install, configure and use microk8s with skaffold, step by step. Based on the Kubernetes course:

installation:

curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64 && sudo install skaffold /usr/local/bin/

create the initial project skaffold configuration:

skaffold init 
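skaffold init writes a skaffold.yaml into the project. For this registry-based setup it might look roughly like the sketch below — the apiVersion depends on your skaffold release, and the artifact and manifest names are assumptions:

```yaml
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
    - image: php-app        # expanded to localhost:32000/php-app by --default-repo
deploy:
  kubectl:
    manifests:
      - k8s-pod.yaml        # the pod manifest referencing the same image
```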



create a kubectl alias so skaffold can use it:

sudo snap alias microk8s.kubectl kubectl

provide microk8s config to skaffold:

microk8s.kubectl config view --raw > $HOME/.kube/config

update the pod configuration to use the image from microk8s:

image: localhost:32000/php-app 

(add localhost:32000...)
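As a concrete illustration, a pod manifest using that image might look like the sketch below — the pod and container names are assumptions, and the container port matches the 8080:4000 port-forward command used later:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: skaffold-pod
spec:
  containers:
    - name: php-app
      image: localhost:32000/php-app
      ports:
        - containerPort: 4000
```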

enable microk8s registry addon: 

microk8s.enable registry
then test whether the registry works by opening: http://localhost:32000/v2/

run skaffold in watch mode, pointing it at the insecure microk8s registry:

skaffold dev --default-repo=localhost:32000
Check if the pod is running:

kubectl get pods

Expose the pod ports to be browsable:

kubectl port-forward pod/skaffold-pod 8080:4000

Optional: In case we need to debug inside the container:   

docker run -ti localhost:32000/php-app:latest /bin/bash


Congratulations and enjoy the course !
