Update formats and languages for prismjs

This commit is contained in:
RealStickman 2023-02-23 14:48:51 +01:00
parent b3ee22b2b5
commit 9026dca5be
47 changed files with 1004 additions and 670 deletions

View File

@ -4,17 +4,24 @@ visible: true
---
[toc]
## Other drives
Find uuid with `sudo blkid`
`UUID=(uuid) (mountpath) (filesystem) defaults,noatime 0 2`
## Samba shares
`//(ip)/(path)/ (mountpath) cifs uid=0,credentials=(path to credentials file),iocharset=utf8,noperm,nofail 0 0`
```sh
//(ip)/(path)/ (mountpath) cifs uid=0,credentials=(path to credentials file),iocharset=utf8,noperm,nofail 0 0
```
Example credentials file:
```
user=(user)
password=(password)
domain=WORKGROUP
```
Make sure to set the permissions of the credentials file to something like 700.
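For example, reusing the placeholder from the fstab line above:
```sh
chmod 700 (path to credentials file)
```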

View File

@ -7,14 +7,19 @@ visible: true
## Pre-Setup
Create a gitea user
`# useradd -m git`
`# mkdir /etc/gitea`
`# chown git:git -R /etc/gitea`
```sh
useradd -m git
mkdir /etc/gitea
chown git:git -R /etc/gitea
```
Create the .ssh directory for the git user
`$ sudo -u git mkdir -p /home/git/.ssh`
```sh
sudo -u git mkdir -p /home/git/.ssh
```
Get the user id of git with `id git`
@ -22,8 +27,10 @@ Get the user id of git with `id git`
### Network and Pod
`# podman network create net_gitea`
`# podman pod create --name pod_gitea --network net_gitea -p 127.0.0.1:5432:5432 -p 3000:3000 -p 127.0.0.1:2222:22`
```sh
podman network create net_gitea
podman pod create --name pod_gitea --network net_gitea -p 127.0.0.1:5432:5432 -p 3000:3000 -p 127.0.0.1:2222:22
```
#### Port Mappings
@ -66,25 +73,26 @@ Get the user id of git with `id git`
```
**NOTE:** gitea's /data directory must not have overly open permissions. Otherwise the SSH redirection set up below will fail.
`0750` for directories and `0640` for files is known to work.
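A sketch of applying this, assuming the data directory is bind-mounted at `/mnt/gitea/data` (a hypothetical path, use your own mount):
```sh
find /mnt/gitea/data -type d -exec chmod 0750 {} +
find /mnt/gitea/data -type f -exec chmod 0640 {} +
```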
The next few lines are used to set up ssh-redirection to gitea if it is used to clone a repo.
> See also the [official documentation](https://docs.gitea.io/en-us/install-with-docker/#sshing-shim-with-authorized_keys)
Create SSH Keys for gitea
`$ sudo -u git ssh-keygen -t rsa -b 4096 -C "Gitea Host Key"`
`$ sudo -u git cat /home/git/.ssh/id_rsa.pub | sudo -u git tee -a /home/git/.ssh/authorized_keys`
`$ sudo -u git chmod 600 /home/git/.ssh/authorized_keys`
```sh
sudo -u git ssh-keygen -t rsa -b 4096 -C "Gitea Host Key"
sudo -u git cat /home/git/.ssh/id_rsa.pub | sudo -u git tee -a /home/git/.ssh/authorized_keys
sudo -u git chmod 600 /home/git/.ssh/authorized_keys
cat <<"EOF" | sudo tee /usr/local/bin/gitea
#!/bin/sh
ssh -p 2222 -o StrictHostKeyChecking=no git@127.0.0.1 "SSH_ORIGINAL_COMMAND=\"$SSH_ORIGINAL_COMMAND\" $0 $@"
EOF
chmod +x /usr/local/bin/gitea
```
We've now finished setting up the ssh-redirection.
After that, connect to the Server on port 3000 to finish the installation
@ -92,17 +100,26 @@ The first registered user will be made admin
## Management CLI
Gitea comes with a management cli. To access it, change into the Container first and su into the user "git".
`# podman exec -it gitea bash`
`# su git`
```sh
podman exec -it gitea bash
su git
```
### User Management
List users:
`$ gitea admin user list`
```sh
gitea admin user list
```
Change user password:
`$ gitea admin user change-password -u (user) -p (password)`
```sh
gitea admin user change-password -u (user) -p (password)
```
## Package Management
@ -112,8 +129,12 @@ Gitea comes with a built-in container registry.
#### Login
`$ podman login gitea.exu.li`
```sh
podman login gitea.exu.li
```
#### Push image
`$ podman push <IMAGE ID> docker://gitea.exu.li/<OWNER>/<IMAGE>:<TAG>`
```sh
podman push <IMAGE ID> docker://gitea.exu.li/<OWNER>/<IMAGE>:<TAG>
```

View File

@ -157,22 +157,22 @@ TODO
#### (BTRFS) Swapfile in btrfs
_Does not work with snapper_
_Use a separate subvolume in that case (see the sketch after the commands below)_
`truncate -s 0 /mnt/swapfile`
`chattr +C /mnt/swapfile`
`btrfs property set /mnt/swapfile compression none`
`fallocate -l (size)M /mnt/swapfile`
```sh
truncate -s 0 /mnt/swapfile
chattr +C /mnt/swapfile
btrfs property set /mnt/swapfile compression none
fallocate -l (size)M /mnt/swapfile
```
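If snapper is in use, a minimal sketch of the separate-subvolume variant (assuming the btrfs root is mounted at `/`; the subvolume name is an assumption). The same swapfile steps then apply inside it:
```sh
btrfs subvolume create /swap
```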
#### Initialising swapfile
`chmod 600 /mnt/swapfile`
`mkswap /mnt/swapfile`
`swapon /mnt/swapfile`
```sh
chmod 600 /mnt/swapfile
mkswap /mnt/swapfile
swapon /mnt/swapfile
```
## Essential packages

View File

@ -9,8 +9,10 @@ visible: true
### Network and Pod
`# podman network create net_hedgedoc`
`# podman pod create --name pod_hedgedoc --network net_hedgedoc -p 127.0.0.1:5432:5432 -p 3005:3000`
```sh
podman network create net_hedgedoc
podman pod create --name pod_hedgedoc --network net_hedgedoc -p 127.0.0.1:5432:5432 -p 3005:3000
```
### Database
@ -23,11 +25,16 @@ visible: true
-d docker.io/postgres:14
```
`# podman exec -it hedgedocdb bash`
`# psql -U postgres`
```sh
podman exec -it hedgedocdb bash
psql -U postgres
```
Create database used by hedgedoc
`=# CREATE DATABASE hedgedocdb;`
```sql
CREATE DATABASE hedgedocdb;
```
### Application
@ -49,8 +56,10 @@ Create database used by hedgedoc
Because `CMD_ALLOW_EMAIL_REGISTER` is set to `false`, registration of new users has to be done through the CLI interface using `bin/manage_users` in the container.
`# podman exec -it hedgedocdb bash`
`# bin/manage_users --add (email)`
```sh
podman exec -it hedgedocdb bash
bin/manage_users --add (email)
```
## Nginx config

View File

@ -17,15 +17,13 @@ visible: true
## Apt Package
`# apt install nginx apt-transport-https`
`# wget -O - https://repo.jellyfin.org/jellyfin_team.gpg.key | apt-key add -`
`# echo "deb [arch=$( dpkg --print-architecture )] https://repo.jellyfin.org/$( awk -F'=' '/^ID=/{ print $NF }' /etc/os-release ) $( awk -F'=' '/^VERSION_CODENAME=/{ print $NF }' /etc/os-release ) main" | tee /etc/apt/sources.list.d/jellyfin.list`
`# apt update`
`# apt install jellyfin`
```sh
apt install nginx apt-transport-https
wget -O - https://repo.jellyfin.org/jellyfin_team.gpg.key | apt-key add -
echo "deb [arch=$( dpkg --print-architecture )] https://repo.jellyfin.org/$( awk -F'=' '/^ID=/{ print $NF }' /etc/os-release ) $( awk -F'=' '/^VERSION_CODENAME=/{ print $NF }' /etc/os-release ) main" | tee /etc/apt/sources.list.d/jellyfin.list
apt update
apt install jellyfin
```
## Nginx
@ -110,8 +108,9 @@ server {
}
```
Enable the config and restart nginx
`$ ln -s /etc/nginx/sites-available/(config) /etc/nginx/sites-enabled/`
`# systemctl restart nginx`
```sh
ln -s /etc/nginx/sites-available/(config) /etc/nginx/sites-enabled/
systemctl restart nginx
```

View File

@ -1,55 +0,0 @@
---
title: Kaizoku
visible: false
---
[toc]
## Podman
### Network and Pod
`# podman network create net_kaizoku`
`# podman pod create --name pod_kaizoku --network net_kaizoku -p 3000:3000`
#### Port Mappings
```
3000: Kaizoku WebUI
```
### Database
```sh
# podman run --name kaizoku-db \
-e POSTGRES_USER=kaizoku \
-e POSTGRES_PASSWORD=kaizoku \
-e POSTGRES_DB=kaizoku \
-v /mnt/kaizuko_db:/var/lib/postgresql/data \
--pod pod_kaizoku \
-d docker.io/postgres:15
```
### Redis
```sh
# podman run --name kaizoku-redis \
-v /mnt/kaizoku_redis:/data \
--pod pod_kaizoku \
-d docker.io/redis:7-alpine
```
### Application
```sh
# podman run --name kaizoku-app \
-e DATABASE_URL=postgresql://kaizoku:kaizoku@kaizoku-db:5432/kaizoku \
-e KAIZOKU_PORT=3000 \
-e REDIS_HOST=kaizoku-redis \
-e REDIS_PORT=6379 \
-v /mnt/kaizoku_app/data:/data \
-v /mnt/kaizoku_app/config:/config \
-v /mnt/kaizoku_app/logs:/logs \
--pod pod_kaizoku \
-d ghcr.io/oae/kaizoku:latest
```

View File

@ -7,8 +7,10 @@ visible: true
## Create directories
`# mkdir -p /var/kavita/{config,content}`
`# mkdir -p /var/kavita/content/{manga,books,tech}`
```sh
mkdir -p /var/kavita/{config,content}
mkdir -p /var/kavita/content/{manga,books,tech}
```
## Run Kavita

View File

@ -9,8 +9,10 @@ visible: true
## Create directories
`# mkdir -p /var/komga/{config,content}`
`# mkdir -p /var/komga/content/{manga,books,tech}`
```sh
mkdir -p /var/komga/{config,content}
mkdir -p /var/komga/content/{manga,books,tech}
```
## Run Komga

View File

@ -4,18 +4,26 @@ visible: true
---
[toc]
## Installation
### Debian
`curl -s https://kopia.io/signing-key | sudo apt-key add -`
`echo "deb http://packages.kopia.io/apt/ stable main" | sudo tee /etc/apt/sources.list.d/kopia.list`
`sudo apt update`
`sudo apt install kopia`
```sh
curl -s https://kopia.io/signing-key | sudo apt-key add -
echo "deb http://packages.kopia.io/apt/ stable main" | sudo tee /etc/apt/sources.list.d/kopia.list
sudo apt update
sudo apt install kopia
```
## Connect Repository
To create a new repo, replace "connect" with "create"
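For example, the B2 command below becomes the following when creating a new repository instead of connecting to an existing one:
```sh
kopia repository create b2 \
    --bucket=(bucket name) \
    --key-id=(api key id) \
    --key=(api key)
```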
### B2
```sh
kopia repository connect b2 \
--bucket=(bucket name) \
--key-id=(api key id) \
--key=(api key)
@ -24,19 +32,21 @@ To create a new repo, replace "connect" with "create"
> [Official Documentation](https://kopia.io/docs/reference/command-line/common/repository-connect-b2/)
## Policy
Get global policy
`# kopia policy get --global`
Change global retention
Options are `latest, hourly, daily, weekly, monthly, annual`
`# kopia policy set --keep-(option) (number)`
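For example, keeping the 14 most recent daily snapshots in the global policy (the number is illustrative):
```sh
kopia policy set --keep-daily 14 --global
```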
Change compression
`# kopia policy set --compression zstd-best-compression --global`
## Snapshots
`# kopia snapshot create (path)`
`# kopia snapshot list (path)`
> [Check the "Getting Started" Page for more options](https://kopia.io/docs/getting-started/)

View File

@ -1,52 +1,65 @@
---
title: "MariaDB Replication"
visible: true
---
[toc]
## Master Slave Setup
### Master Configuration
The MariaDB Server has to be accessible from outside. For Debian, one has to comment `bind-address=127.0.0.1` in the file `/etc/mysql/mariadb.conf.d/50-server.cnf`.
If you have any firewall enabled, make sure to allow port 3306/TCP.
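With ufw, for example (an assumption, adapt this to whatever firewall you run):
```sh
sudo ufw allow 3306/tcp
```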
Add this segment at the end of `/etc/mysql/my.cnf`
```ini
[mariadb]
log-bin
server_id=1
log-basename=master
binlog-format=mixed
```
**Restart mariadb** now
Create a replication user
```sql
CREATE USER 'replication'@'%' IDENTIFIED BY '<password>';
GRANT REPLICATION SLAVE ON *.* TO 'replication'@'%';
```
Next we have to get the data necessary so the slave knows where to start replicating.
`FLUSH TABLES WITH READ LOCK;`
`SHOW MASTER STATUS;`
```sql
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
```
**Do not close this session, keep it running until you have made the backup from the next step**
`# mysqldump -u root -p (db name) > db_name.sql`
You can unlock the database again.
`UNLOCK TABLES;`
### Slave Configuration
Edit your `/etc/mysql/my.cnf` file
Make sure to choose different IDs for every host
```ini
[mysqld]
server-id = 2
```
Create the database and restore the sql dumps made earlier.
`# mysql -u root -p (db name) < db_name.sql`
Set the database master now
```sql
CHANGE MASTER TO
MASTER_HOST='<domain>',
MASTER_USER='replication',
@ -61,9 +74,10 @@ CHANGE MASTER TO
Start slave now
`START SLAVE;`
And check the status
`SHOW SLAVE STATUS \G`
If both of the following options say yes, everything is working as intended
```
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

View File

@ -23,9 +23,12 @@ Put your folder here
Install java
`# apt install openjdk-17-jre`
Add a `minecraft` user.
`# useradd minecraft`
`# chown minecraft:minecraft -R /etc/minecraft/`
```sh
useradd minecraft
chown minecraft:minecraft -R /etc/minecraft/
```
Start the server for the first time.
`sudo -u minecraft /etc/minecraft/forge-(version)/run.sh`
@ -40,7 +43,7 @@ Accept the EULA by editing `/etc/minecraft/forge-(version)/eula.txt`
`/etc/systemd/system/minecraft.service`
```systemd
[Unit]
Description=Minecraft Server
After=network.target
@ -73,7 +76,7 @@ WantedBy=multi-user.target
`/etc/systemd/system/minecraft.socket`
```systemd
[Unit]
PartOf=minecraft.service
@ -83,7 +86,7 @@ ListenFIFO=%t/minecraft.stdin
`/etc/systemd/system/minecraft.service`
```systemd
[Unit]
Description=Minecraft Server
After=network.target
@ -114,7 +117,7 @@ To run commands, redirect commands into your socket.
**No safety at all!!**
```sh
#!/usr/bin/env bash
echo "$@" > /run/minecraft.stdin
```

View File

@ -5,14 +5,17 @@ media_order: content-encoding-type.png
---
[toc]
Interesting options, configurations and information about nginx.
## Compression
_NOTE: The most reliable way to check whether content is compressed is by using the debug tools in the web browser. Look for the "content-encoding" header_
![Picture shows parts of the response headers in the network tab of the firefox debug tool](content-encoding-type.png)
These are the settings used by this website to compress with gzip.
These will suffice for most websites.
```nginx
# Compression
gzip on;
gzip_vary on;
@ -21,7 +24,8 @@ These will suffice for most websites.
gzip_types text/plain text/html text/css application/json application/javascript application/x-javascript text/javascript text/xml application/xml application/rss+xml application/atom+xml application/rdf+xml;
```
> All configuration options can be found in the [official documentation](https://nginx.org/en/docs/http/ngx_http_gzip_module.html)
## Website Performance
> Google's [PageSpeed Insights](https://pagespeed.web.dev/) tool can be used to measure website performance.

View File

@ -4,10 +4,13 @@ visibility: false
---
[toc]
## Application
_NOTE: Openproject does not provide a default "latest" tag. Specifying the tag is required!_
```sh
podman run -p 8080:80 --name openproject \
-e OPENPROJECT_HOST__NAME=openproject.exu.li \
-e OPENPROJECT_SECRET_KEY_BASE=<secret> \
-v /mnt/openproject/pgdata:/var/openproject/pgdata \

View File

@ -4,6 +4,7 @@ visible: true
---
[toc]
- [CPU](./cpu)
- [GPU](./gpu)
- [RAM](./ram)

View File

@ -4,46 +4,60 @@ visible: true
---
[toc]
## Monitoring
### Sensors
The `lm_sensors` package shows temperatures, fan pwm and other sensors for your CPU, GPU and motherboard.
Run `$ sensors` to get the output.
#### Support for motherboard ITE LPC chips
Support for this type of chip does not come built in to `lm_sensors`.
In the AUR the package `it87-dkms-git` provides a kernel module with support for a variety of ITE chips. It pulls from [this](https://github.com/frankcrawford/it87) git repo. You can find a list of supported chips there. See [this issue on lm_sensors git repo](https://github.com/lm-sensors/lm-sensors/issues/134) for background info.
The kernel driver can be automatically loaded on boot by putting `it87` into `/etc/modules-load.d/(filename).conf`
The option `acpi_enforce_resources=lax` also needs to be added to `GRUB_CMDLINE_LINUX_DEFAULT` in `/etc/default/grub` or your bootloader equivalent.
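A sketch of both steps, assuming the file is called `it87.conf` and GRUB is the bootloader:
```sh
echo it87 | sudo tee /etc/modules-load.d/it87.conf
# after appending acpi_enforce_resources=lax to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub:
sudo grub-mkconfig -o /boot/grub/grub.cfg
```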
### CoreFreq
[CoreFreq](https://github.com/cyring/CoreFreq) can display a lot of information about the CPU and the memory controller.
To run, the systemd service `corefreqd` needs to be enabled.
CoreFreq also depends on a kernel driver. Simply put `corefreqk` into `/etc/modules-load.d/(filename).conf` to load it automatically on boot.
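A minimal sketch of those two steps (the filename is an assumption):
```sh
echo corefreqk | sudo tee /etc/modules-load.d/corefreq.conf
sudo systemctl enable --now corefreqd
```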
Access the TUI using `$ corefreq-cli`
A few interesting views:
`Shift + C` shows per thread frequency, voltage and power, as well as overall power and temperature.
`Shift + M` shows the memory timings, frequency and DIMM layout.
### Zenmonitor
[Zenmonitor](https://github.com/ocerman/zenmonitor) is, as the name suggests, monitoring software specifically for AMD Zen CPUs.
### CoreCtrl
CoreCtrl displays a range of information for AMD GPUs.
### Error monitoring
Some applications have hardware error reporting built-in.
#### Kernel log
For others, try checking the kernel log.
`$ journalctl -k --grep=mce`
#### Rasdaemon
You can also install `aur/rasdaemon` and enable its two services.
`# systemctl enable --now ras-mc-ctl.service`
`# systemctl enable --now rasdaemon.service`
```sh
systemctl enable --now ras-mc-ctl.service
systemctl enable --now rasdaemon.service
```
`$ ras-mc-ctl --summary` shows all historic errors
`$ ras-mc-ctl --error-count` shows memory errors of the current session

View File

@ -4,12 +4,19 @@ visible: true
---
[toc]
## Overclocking
_I'm unaware of any platform supporting online-editing of RAM timings_
## Testing
> [More Testing Tools can be found on the ArchWiki](https://wiki.archlinux.org/title/Stress_testing?useskinversion=1)
#### Stressapptest
**NOTE**: Produces heavy load on the CPU as well. A stable CPU OC before running this is recommended.
`$ stressapptest -M (RAM MiB) -s (time in s) -m (CPU threads)`
```sh
stressapptest -M (RAM MiB) -s (time in s) -m (CPU threads)
```

View File

@ -4,45 +4,60 @@ visible: true
---
[toc]
## Generate systemd service
Create a container the normal way
Using this container as a reference, you can generate a systemd service file
`# podman generate systemd --new --name --files (container)`
```sh
podman generate systemd --new --name --files (container)
```
Remove your old container
`# podman container rm (container)`
`# cp container-(container).service /etc/systemd/system/`
`# systemctl daemon-reload`
`# systemctl enable --now container-(container)`
```sh
podman container rm (container)
cp container-(container).service /etc/systemd/system/
systemctl daemon-reload
systemctl enable --now container-(container)
```
The container should now be running just as before
## Auto-Update container
The command to update containers configured for auto-update is `# podman auto-update`
Add `--label "io.containers.autoupdate=image"` to the `ExecStart=/usr/bin/podman ...` line in the service file you generated
Make sure to use, for example, `docker.io/` instead of `docker://` as the source of the image
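For illustration, the relevant part of a generated unit might then look roughly like this (name and image are placeholders, other generated flags omitted):
```sh
# excerpt from container-(container).service
ExecStart=/usr/bin/podman run --name (container) --label "io.containers.autoupdate=image" -d docker.io/(image)
```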
Reload and restart
`# systemctl daemon-reload`
`# systemctl enable --now container-(container)`
```sh
systemctl daemon-reload
systemctl enable --now container-(container)
```
If you want to manually run updates for the configured containers, use this command:
`# podman auto-update`
### Auto-Update timer
To truly automate your updates, enable the included timer
`# systemctl enable --now podman-auto-update.timer`
### Check update log
The update logs are kept in the `podman-auto-update` service
`$ journalctl -eu podman-auto-update`
## Prune images service and timer
`/etc/systemd/system/podman-image-prune.service`
```systemd
[Unit]
Description=Podman image-prune service
@ -54,8 +69,9 @@ ExecStart=/usr/bin/podman image prune -f
WantedBy=multi-user.target
```
`/etc/systemd/system/podman-image-prune.timer`
```systemd
[Unit]
Description=Podman image-prune timer
@ -67,7 +83,9 @@ Persistent=true
WantedBy=timers.target
```
`# systemctl daemon-reload`
`# systemctl enable --now podman-image-prune.timer`
```sh
systemctl daemon-reload
systemctl enable --now podman-image-prune.timer
```
> [Documentation](https://docs.podman.io/en/latest/markdown/podman-image-prune.1.html)

View File

@ -5,24 +5,30 @@ media_order: powerdns-admin-api-settings.png
---
[toc]
## Installation
For the authoritative server install this package
`# apt install pdns-server`
This is the PowerDNS resolver package
`# apt install pdns-recursor`
### Different Backends can be installed on Debian
Mysql Backend
`# apt install pdns-backend-mysql mariadb-server`
## Configuration Authoritative Server
Set the backend you chose in the `launch=` option of PowerDNS' configuration file.
The config can be found under `/etc/powerdns/pdns.conf`
For MySQL I chose `launch=gmysql`
> A [list of backends can be found here](https://doc.powerdns.com/authoritative/backends/index.html)
Add the following parameters below `launch=gmysql`
```
gmysql-host=127.0.0.1
gmysql-socket=/run/mysqld/mysqld.sock
@ -33,50 +39,63 @@ gmysql-dbname=pdns
gmysql-dnssec=yes
```
Prepare database
`# mariadb -u root -p`
`CREATE DATABASE pdns;`
```sh
mariadb -u root -p
```
`GRANT ALL ON pdns.* TO 'pdns'@'localhost' IDENTIFIED BY '<password>';`
```sql
CREATE DATABASE pdns;
GRANT ALL ON pdns.* TO 'pdns'@'localhost' IDENTIFIED BY '<password>';
```
Import the schema utilised by PowerDNS. This can be done with the user you just created
`$ mysql -u pdns -p pdns < /usr/share/doc/pdns-backend-mysql/schema.mysql.sql`
`# systemctl restart pdns`
```sh
mysql -u pdns -p pdns < /usr/share/doc/pdns-backend-mysql/schema.mysql.sql
```
```sh
systemctl restart pdns
```
### Zones
Create Zone and add a name server
`# pdnsutil create-zone (domain) ns1.(domain)`
Add "A"-Record. **Mind the (.) after the domain**
"Name" is the hostname you wish to assign.
`# pdnsutil add-record (domain). (name) A (ip address)`
### Dynamic DNS
`# apt install bind9utils`
Generate key
`# dnssec-keygen -a hmac-md5 -b 128 -n USER (keyname)`
Edit the configuration file and change `dnsupdate=no` to `dnsupdate=yes` and set `allow-dnsupdate-from=` to empty.
Allow updates from your DHCP server
`# pdnsutil set-meta (domain) ALLOW-DNSUPDATE-FROM (dhcp server ip)`
If you set up a reverse-zone, also allow that
`# pdnsutil set-meta (reverse ip).in-addr.arpa ALLOW-DNSUPDATE-FROM (dhcp server ip)`
Import the key
`# pdnsutil import-tsig-key (keyname) hmac-md5 (key)`
Enable for domain
`# pdnsutil set-meta (domain) TSIG-ALLOW-DNSUPDATE (keyname)`
And for reverse-zone
`# pdnsutil set-meta (reverse ip).in-addr.arpa TSIG-ALLOW-DNSUPDATE (keyname)`
You also have to configure the DHCP server to provide updates, see [the DHCP article](https://wiki.realstickman.net/en/linux/services/dhcp-server)
#### Testing with nsupdate
`# nsupdate -k Kdhcpdupdate.+157+12673.key`
```
> server 127.0.0.1 5300
> zone testpdns
@ -85,38 +104,48 @@ You also have to configure the DHCP server to provide updates, see [the DHCP art
```
## Configuration Recursive Resolver
The config file can be found under `/etc/powerdns/recursor.conf`
In `/etc/powerdns/pdns.conf` set `local-address=127.0.0.1` and `local-port=5300` to allow the recursor to run on port 53
In `/etc/powerdns/recursor.conf` set `forward-zones=(domain)=127.0.0.1:5300` to forward queries for that domain to the authoritative DNS
Also set `local-address` and `allow-from`
To bind to all interfaces, use `local-address=::,0.0.0.0`
### Wipe Cache
`# rec_control wipe-cache $`
## DNSSEC
### Authoritative Server
> _TODO_
> https://doc.powerdns.com/authoritative/dnssec/index.html
### Recursor Server
To fully enable DNSSEC, set `dnssec=process-no-validate` to `dnssec=validate`
To allow a domain without DNSSEC, modify `/etc/powerdns/recursor.lua`
Add `addNTA('(domain)')` to disable DNSSEC for the selected domain.
Show domains with disabled DNSSEC
`# rec_control get-ntas`
> [DNSSEC Testing](https://wiki.debian.org/DNSSEC#Test_DNSSEC)
## WebGUI
### PowerDNS-Admin
`# mkdir /etc/pda-data`
`# chmod 777 -R /etc/pda-data`
```sh
mkdir /etc/pda-data
chmod 777 -R /etc/pda-data
```
```sh
podman run -d \
--name powerdns-admin \
-e SECRET_KEY='q5dNwUVzbdn6gc7of6DvO0syIhTHVq1t' \
-v /etc/pda-data:/data \
@ -125,21 +154,25 @@ Show domains with disabled DNSSEC
```
#### Enabling API
A few settings in `/etc/powerdns/pdns.conf` need to be changed.
```
api=yes
api-key=(random key)
webserver=yes
```
Following this, the API access can be configured in the webgui
![Configuration options in PowerDNS Admin](powerdns-admin-api-settings.png)
Now you should see all your configured Domains and be able to modify records
#### Systemd Service
`/etc/systemd/system/powerdns-admin.service`
```systemd
[Unit]
Description=Powerdns Admin Podman container
[Service]
@ -150,5 +183,7 @@ ExecStop=/usr/bin/podman stop -t 10 powerdns-admin
WantedBy=multi-user.target
```
`# systemctl daemon-reload`
`# systemctl enable --now powerdns-admin`
```sh
systemctl daemon-reload
systemctl enable --now powerdns-admin
```

View File

@ -3,12 +3,14 @@ title: Prowlarr
visible: false
---
_NOTE: This application is still in beta. No stable release is available_
## Application
`lscr.io/linuxserver/prowlarr:develop`
```sh
podman run -d \
--name=prowlarr \
-p 9696:9696 \
-v /mnt/prowlarr/config:/config \

View File

@ -1,14 +1,17 @@
---
title: "SSH Agent"
visible: true
---
[toc]
Autostarting an ssh-agent service
## Systemd Service
A local service works for this. For example `~/.config/systemd/user/ssh-agent.service`
```systemd
[Unit]
Description=SSH key agent
@ -22,14 +25,17 @@ WantedBy=default.target
```
Enable the systemd service
`systemctl --user enable --now ssh-agent`
## Shell environment variable
The shell needs to know about the ssh-agent. In the case of fish, add this snippet to your config.
`set SSH_AUTH_SOCK /run/user/1000/ssh-agent.socket; export SSH_AUTH_SOCK`
## SSH config
Modify the `~/.ssh/config` to add new keys automatically.
```
AddKeysToAgent yes
```

View File

@ -4,37 +4,44 @@ visible: true
---
[toc]
## Server
```sh
podman run -d --name step-ca \
-v step:/home/step \
-p 9000:9000 \
-e "DOCKER_STEPCA_INIT_NAME=Demiurge" \
-e "DOCKER_STEPCA_INIT_DNS_NAMES=(hostname),(hostname2)" \
docker.io/smallstep/step-ca
```
Get the root ca fingerprint
`# podman run -v step:/home/step smallstep/step-ca step certificate fingerprint certs/root_ca.crt`
To view your ca password, run this command
`# podman run -v step:/home/step smallstep/step-ca cat secrets/password`
### ACME Server
Enable ACME. Restart the server afterwards.
`$ step ca provisioner add acme --type ACME`
## Client
Initialize the step-cli client
`step-cli ca bootstrap --ca-url https://(domain/ip):9000 --fingerprint (root_ca fingerprint)`
## Create Certificates
> [Official documentation](https://smallstep.com/docs/step-cli/basic-crypto-operations)
Enter the container
`# podman exec -it step-ca bash`
### Client Certificate
```sh
step certificate create (cert name) client-certs/(cert name).crt client-certs/(cert name).key \
--profile leaf --not-after=8760h \
--ca certs/intermediate_ca.crt \
@ -42,16 +49,19 @@ step certificate create (cert name) client-certs/(cert name).crt client-certs/(c
--bundle
```
Add SANs with the `--san=`-flag. Add multiple flags for multiple SANs.
### ACME
Point your ACME client to `https://(domain/ip):9000/acme/(provisioner-name)/directory`
## Device Truststore
### Arch Linux
> [Archwiki Article on TLS](https://wiki.archlinux.org/title/Transport_Layer_Security#Add_a_certificate_to_a_trust_store)
Add new trust anchor
`# trust anchor (root ca.crt)`
List trust anchors
`$ trust list`

View File

@ -1,18 +1,20 @@
---
title: "Systemd Automount"
visible: true
---
[toc]
Systemd can be used to mount filesystems not only on boot (simple `.mount` file), but also on request by any process. (`.automount` file)
## Mount file
The `.mount` file should be placed in `/etc/systemd/system`
**NOTE: The filename must be (mountpoint).mount with slashes `/` being replaced with dashes `-`**
Example: `/mnt/target` --> `mnt-target.mount`
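The escaped unit name can also be generated with `systemd-escape` instead of writing it by hand:
```sh
systemd-escape -p --suffix=mount /mnt/target
# prints: mnt-target.mount
```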
Here's an example `.mount` file for a CIFS share
```systemd
[Unit]
Description=cifs mount
@ -28,10 +30,11 @@ WantedBy=multi-user.target
```
## Automount file
The corresponding `.automount` file needs to have the same name as its `.mount` file
Example: `mnt-target.mount` and `mnt-target.automount`
```systemd
[Unit]
Description=cifs automount
@ -43,5 +46,4 @@ WantedBy=multi-user.target
```
Enable the `.automount` file to mount the filesystem when necessary
`# systemctl enable (target-mount).automount`

View File

@ -4,17 +4,22 @@ visible: true
---
[toc]
## Installation
`# apt install unattended-upgrades`
## Configuration
**NOTE: This configuration is tailored to my personal preferences. Feel free to do something else if you don't want what I'm doing**
### Enable automatic reboots
If necessary, the server will automatically reboot.
An example would be kernel updates.
Edit `/etc/apt/apt.conf.d/50unattended-upgrades`
```
...
Unattended-Upgrade::Automatic-Reboot "true";
@ -22,24 +27,27 @@ Unattended-Upgrade::Automatic-Reboot "true";
```
### Repo update time
Create an override file for `apt-daily.timer` using this command
`$ sudo systemctl edit apt-daily.timer`
Add these lines between the two comments
```systemd
[Timer]
OnCalendar=*-*-* 2:00
RandomizedDelaySec=0
```
### Host upgrade time
Create an override file for `apt-daily-upgrade.timer` using this command
`$ sudo systemctl edit apt-daily-upgrade.timer`
Add these lines between the two comments
```systemd
[Timer]
OnCalendar=*-*-* 4:00
RandomizedDelaySec=0
```

View File

@ -1,18 +1,21 @@
---
title: "Users and Groups"
visible: true
---
[toc]
## Users
Check users by looking at `/etc/passwd`
### Add users
Basic usage:
`# useradd -m (user)`
Important options:
```
login name -> by default
group -> -G //separate multiple by commas: group1,group2
@ -22,31 +25,38 @@ full name -> -c
```
Example more complicated usage:
`# useradd -m -c "Bruno Huber" -s /bin/bash -G sudo,systemd-journal bruhub`
### Remove user
The command `userdel` can be used to remove users from a system.
Using it with the `-r` additionally deletes the user home directory and mail spool.
`# userdel -r (user)`
### Add user to groups
Add user to more groups:
`# usermod -a -G (group1),(group2) (user)`
Alternative command:
`# gpasswd -a (user) (group)`
### Remove user from group
`# gpasswd -d (user) (group)`
## Groups
Check a user's groups with `id (user)`
### Create group
`# groupadd (group)`
### Rename group
`# groupmod -n (new_group) (old_group)`
### Delete group
`# groupdel (group)`

View File

@ -5,43 +5,54 @@ visible: true
[toc]
> I'm not using WikiJS anymore. This article might be out of date
`# apt install nginx podman nodejs`
## Preparation
Create a new network for the database and wikijs
`$ podman network create wikijs`
## Database setup
`# podman pull docker://postgres`
```sh
podman run -p 127.0.0.1:5432:5432 --name wikijsdb \
-e POSTGRES_PASSWORD=wikijs \
-e PGDATA=/var/lib/postgresql/data/pgdata \
-v /mnt/postgres/wikijsdb:/var/lib/postgresql/data \
-d docker.io/postgres:15
```
`# podman exec -it wikijsdb bash`
`# psql -U postgres`
Create database used by wikijs
`=# CREATE DATABASE wikijs;`
```sql
CREATE DATABASE wikijs;
```
### Systemd Service
Generate the systemd service file following the [podman guide](/linux/services/podman)
## Wiki.JS Setup
`$ cd /var`
`# wget https://github.com/Requarks/wiki/releases/download/(version)/wiki-js.tar.gz`
`# mkdir wiki`
`# tar xzf wiki-js.tar.gz -C ./wiki`
`$ cd ./wiki`
```sh
cd /var
wget https://github.com/Requarks/wiki/releases/download/(version)/wiki-js.tar.gz
mkdir wiki
tar xzf wiki-js.tar.gz -C ./wiki
cd ./wiki
```
Move default config
`# mv config.sample.yml config.yml`
```
#######################################################################
# Wiki.js - CONFIGURATION #
@ -174,16 +185,21 @@ ha: false
dataPath: ./data
```
Don't forget to open permissions so the systemd service can run the server
`# useradd -m wiki`
`# chown wiki:wiki -R /var/wiki`
```sh
useradd -m wiki
chown wiki:wiki -R /var/wiki
```
Run server directly:
`$ node server`
## Systemd service
Put this under `/etc/systemd/system/wiki.service`
```systemd
[Unit]
Description=Wiki.js
After=network.target
@ -203,12 +219,16 @@ WorkingDirectory=/var/wiki
WantedBy=multi-user.target
```
`# systemctl daemon-reload`
`# systemctl enable --now wiki`
```sh
systemctl daemon-reload
systemctl enable --now wiki
```
## Nginx config
_Replace "IPV4" and "IPV6"_
```nginx
server {
server_name DOMAIN_NAME;
@ -258,49 +278,63 @@ Enable config
`# ln -s /etc/nginx/sites-available/(config) /etc/nginx/sites-enabled`
Restart nginx
`# systemctl restart nginx`
## Wiki Settings
### Storage with git
Create a home directory for the wiki user if you haven't used "-m" when creating the user.
**Make sure not to have a "/" after the directory you want for your user**
```sh
mkdir /home/wiki
chown wiki:wiki -R /home/wiki
usermod -d /home/wiki wiki
```
Create ssh key as wiki user
`$ ssh-keygen -t ed25519 -C wiki`
- DB - PostgreSQL used as Search Engine
## Update Wiki
Download and install the latest release with these steps
`# systemctl stop wiki`
`$ cd /var`
`# wget https://github.com/Requarks/wiki/releases/download/(version)/wiki-js.tar.gz`
This is to ensure we have a known good version to go back to in case something goes wrong
`# mv wiki wiki-old`
`# mkdir wiki`
`# tar xzf wiki-js.tar.gz -C ./wiki`
`# cp wiki-old/config.yml wiki/`
`# chown wiki:wiki -R /var/wiki`
`# systemctl start wiki`
```sh
systemctl stop wiki
cd /var
wget https://github.com/Requarks/wiki/releases/download/(version)/wiki-js.tar.gz
```
```sh
mv wiki wiki-old
mkdir wiki
tar xzf wiki-js.tar.gz -C ./wiki
cp wiki-old/config.yml wiki/
chown wiki:wiki -R /var/wiki
systemctl start wiki
```
## Database Backup
`# podman exec (container name) pg_dump (database name) -U (database user) -F c > wikibackup.dump`
## Database Restore
**The wiki has to be installed fully, but not yet configured**
_Also works for transferring the wiki from one server to another_
Stop the database and wiki
Drop the existing database and restore it from the backup dump
`# podman exec -it (container name) dropdb -U (database user) (database name)`
`# podman exec -it (container name) createdb -U (database user) (database name)`
`cat ~/wikibackup.dump | docker exec -i (container name) pg_restore -U (database user) -d (database name)`
```sh
podman exec -it (container name) dropdb -U (database user) (database name)
podman exec -it (container name) createdb -U (database user) (database name)
cat ~/wikibackup.dump | docker exec -i (container name) pg_restore -U (database user) -d (database name)
```
Start the database and wiki again

View File

@ -4,35 +4,40 @@ visible: true
---
[toc]
## Installation
`# pacman -S wireguard-tools`
_Enable backports for buster and older_
`# apt install wireguard`
## Configuration
This command creates a private key and also a matching public key
`$ wg genkey | tee (name).key | wg pubkey > (name).pub`
The network we will be using for wireguard will be 172.16.1.0/24
To activate a wireguard tunnel on boot use the following command
`# systemctl enable --now wg-quick@wg0.service`
### VPN "Server" configuration
_Illustration only, don't share your private keys_
Private key: `oFlgQ3uq4tjgRILDV3Lbqdx0mVZv2VCWWRkhJA3gcX4=`
Public key: `/0LMRaQCx1oMIh+eU/v4T3YQ8gAb/Qf7ulYl0zzFAkQ=`
This server needs to have a public IP.
All traffic between the different nodes will be routed through here.
Kernel forwarding has to be enabled
SystemD only loads settings specified in the `/etc/sysctl.d/` directory
`# echo "net.ipv4.ip_forward=1" >> /etc/sysctl.d/80-forwarding.conf`
`# sysctl -p /etc/sysctl.d/80-forwarding.conf`
Note how the first peer has two allowed IPs.
`/etc/wireguard/wg0.conf`
```
[Interface]
Address = 172.16.1.10/24
@ -55,8 +60,9 @@ PublicKey = 0jDtfR5GlZAHWtwxVEpukjneVj/Ace40VVdHh/eZnwU=
AllowedIPs = 172.16.1.200/32
```
`/etc/wireguard/wg0-postup.sh`
```sh
WIREGUARD_INTERFACE=wg0
WIREGUARD_LAN=172.16.1.0/24
MASQUERADE_INTERFACE=ens33
@ -87,8 +93,9 @@ iptables -A $CHAIN_NAME -i $WIREGUARD_INTERFACE -j DROP
iptables -A $CHAIN_NAME -j RETURN
```
`/etc/wireguard/wg0-postdown.sh`
```sh
WIREGUARD_INTERFACE=wg0
WIREGUARD_LAN=172.16.1.0/24
MASQUERADE_INTERFACE=ens33
@ -104,12 +111,14 @@ iptables -X $CHAIN_NAME
```
### VPN "Client" configuration
_Illustration only, don't share your private keys_
Private key: `kAgCeU6l+RWlFxfpnGj19tzEDyYz3I4HuqHkaUmHX1Q=`
Public key: `r+TAbAN1hGh4MaIk/J5I5L3ZSAn+kCo1MJJq5YxHrl0=`
Here we have two different interfaces configured under the same wireguard config
`/etc/wireguard/wg0.conf`
```
[Interface]
Address = 172.16.1.100/24
@ -132,6 +141,6 @@ PersistentKeepalive = 5
```
## Iptables no local access ssh user
> [Block outgoing network access for single user](https://www.cyberciti.biz/tips/block-outgoing-network-access-for-a-single-user-from-my-server-using-iptables.html)
> [Restrict internet access for user](https://unix.stackexchange.com/questions/21650/how-to-restrict-internet-access-for-a-particular-user-on-the-lan-using-iptables)

View File

@ -1,23 +1,30 @@
---
title: "Woodpecker CI"
visible: true
---
[toc]
## Podman
### Network and Pod
`# podman network create net_woodpecker`
`# podman pod create --name pod_woodpecker --network net_woodpecker -p 8000:8000 -p 9000:9000`
```sh
podman network create net_woodpecker
podman pod create --name pod_woodpecker --network net_woodpecker -p 8000:8000 -p 9000:9000
```
#### Port Mappings
```
8000: Woodpecker HTTP listener, Configurable with "WOODPECKER_SERVER_ADDR"
9000: Woodpecker gRPC listener, Configurable with "WOODPECKER_GRPC_ADDR"
```
### Database
```sh
podman run --name woodpeckerdb \
-e PGDATA=/var/lib/postgresql/data/pgdata \
-e POSTGRES_USER=woodpecker \
-e POSTGRES_PASSWORD=woodpecker \
@ -28,10 +35,11 @@ visible: true
```
### Application server
> [Official Documentation](https://woodpecker-ci.org/docs/administration/server-config)
```sh
podman run --name woodpecker-server -t \
-e WOODPECKER_HOST=https://(hostname/ip address) \
-e WOODPECKER_ADMIN=RealStickman \
-e WOODPECKER_OPEN=false \
@ -44,19 +52,22 @@ visible: true
```
If `WOODPECKER_OPEN` is set to `true`, any user present on the connected git server could log in to woodpecker.
If one wanted to add a user manually: `$ woodpecker-cli user add`
Generate `WOODPECKER_AGENT_SECRET` with this command:
`$ openssl rand -hex 32`
#### GitHub
_TODO_
#### Gitea
> [Documentation](https://woodpecker-ci.org/docs/administration/vcs/gitea)
Add these environment variables to enable Woodpecker for a gitea server.
```sh
-e WOODPECKER_GITEA=true \
-e WOODPECKER_GITEA_URL=https://(gitea url) \
-e WOODPECKER_GITEA_CLIENT='(oauth client id)' \
@ -65,13 +76,15 @@ Add these environment variables to enable Woodpecker for a gitea server.
```
I run gitea and woodpecker behind an OPNsense firewall. The default NAT configuration alerts due to a suspected DNS rebind attack.
Therefore I added an override rule for my gitea url in OPNsense (Services > Unbound DNS > Overrides)
> [Reddit post I used as guidance](https://www.reddit.com/r/OPNsenseFirewall/comments/lrmtsz/a_potential_dns_rebind_attack/)
#### GitLab
Add these environment variables to enable GitLab in Woodpecker.
```sh
-e WOODPECKER_GITLAB=true \
-e WOODPECKER_GITLAB_URL=https://(gitlab url) \
-e WOODPECKER_GITLAB_CLIENT=(oauth client id) \
@ -79,10 +92,11 @@ Add these environment variables to enable GitLab in Woodpecker.
```
### Application agent
> [Official Documentation](https://woodpecker-ci.org/docs/administration/agent-config)
```sh
docker run --name woodpecker-agent -t \
-e WOODPECKER_SERVER=(url/ip):(grpc port) \
-e WOODPECKER_AGENT_SECRET=(shared secret for server and agents) \
-e WOODPECKER_HOSTNAME=(agent hostname, def: empty) \
@ -94,13 +108,13 @@ Add these environment variables to enable GitLab in Woodpecker.
```
The Woodpecker agent needs access to the docker socket to spawn new container processes on the host.
For now I'll be using docker to run my agents.
Podman has support for using sockets since version 3.4.0.
_TODO: try out socket access once Podman 3.4.0 is on my servers_
_Recommended by Woodpecker is at least Podman 4.0_
[Podman socket activation](https://github.com/containers/podman/blob/main/docs/tutorials/socket_activation.md)
[Woodpecker note on using Podman](https://github.com/woodpecker-ci/woodpecker/blob/master/docs/docs/30-administration/22-backends/10-docker.md#podman-support)
[Woodpecker issue about Podman](https://github.com/woodpecker-ci/woodpecker/issues/85)
[Woodpecker PR for Podman backend](https://github.com/woodpecker-ci/woodpecker/pull/305)

View File

@ -4,17 +4,21 @@ visible: true
---
[toc]
## Firewall
The firewall configuration can be changed with an already included package.
Call the TUI version with `system-config-firewall-tui`
The only open port will be 22/tcp for SSH Access
## SSH Access
Disable password authentication. See [ssh](/remote/ssh)
## Local ISO Storage
Using ISO Storage on "/" or subdirectories on the same partition is not really viable, as only 18GiB are assigned to this mountpoint by default.
Instead use the local EXT mapper device. This is mounted under `/run/sr-mount/(id)`
Create a new "ISO" directory.
If you want to still use an easier to remember path, create a symbolic link. For example `ln -s /run/sr-mount/69d19d8e-f0dd-92d8-41bc-3d974b20f4f8/ISO/ /root/ISO`. You'll be able to use the path `/root/ISO` in the webinterface as local ISO storage.

View File

@ -24,8 +24,8 @@ Run `# xo-vm-import.sh` to import that VM.
You need to explicitly allow host loopback for the container, or it won't be able to access the local ssh tunnel we'll create later
We'll need to enter the server on 10.0.2.2 with the local port we gave our ssh tunnel
```sh
podman run -itd --name xen-orchestra \
--net slirp4netns:allow_host_loopback=true \
-p 8080:80 \
docker.io/ronivay/xen-orchestra
@ -47,7 +47,7 @@ To start and stop the tunnel automatically a systemd service is used. It is a sp
`/etc/systemd/system/local-tunnel@.service`
```systemd
[Unit]
Description=Setup a local tunnel to %I
After=network.target

View File

@ -4,15 +4,19 @@ visible: true
---
[toc]
## Zabbix Server
### Pod
```sh
podman pod create --name zabbix -p 127.0.0.1:8080:8080 -p 10051:10051
```
### Database
```sh
podman run --name zabbix-mysql -t \
-e MYSQL_DATABASE="zabbix" \
-e MYSQL_USER="zabbix" \
-e MYSQL_PASSWORD="zabbix" \
@ -26,10 +30,12 @@ visible: true
```
### Application
Zabbix consists of multiple containers that need to be running.
First is the server itself.
```sh
podman run --name zabbix-server -t \
-e DB_SERVER_HOST="127.0.0.1" \
-e MYSQL_DATABASE="zabbix" \
-e MYSQL_USER="zabbix" \
@ -39,9 +45,10 @@ First is the server itself.
-d docker.io/zabbix/zabbix-server-mysql:latest
```
Next, we need the webserver
```sh
podman run --name zabbix-web -t \
-e ZBX_SERVER_HOST="127.0.0.1" \
-e DB_SERVER_HOST="127.0.0.1" \
-e MYSQL_DATABASE="zabbix" \
@ -53,34 +60,39 @@ Next, we need the webserver
-d docker.io/zabbix/zabbix-web-nginx-mysql:latest
```
Finally, we will also install the agent as a container
```sh
podman run --name zabbix-agent \
-e ZBX_SERVER_HOST="127.0.0.1,localhost" \
--restart=always \
--pod=zabbix \
-d docker.io/zabbix/zabbix-agent:latest
```
The default user is `Admin` with password `zabbix`
### Updating Server
Updating the server might fail for various reasons. Those I already encountered will be documented below.
_NOTE: The server and proxy need to run the same version of zabbix to talk with one another_
#### MARIADB: Missing permissions (log_bin_trust_function_creators)
From what I could find, this error is thrown when the specified user lacks super user privileges.
A workaround is enabling `log_bin_trust_function_creators` temporarily.
`# podman exec -it zabbix-mysql bash`
`# mysql -u root -p` and enter the root password
`mysql> set global log_bin_trust_function_creators=1;`
The setting will be reset to default after a restart of the database container.
## Zabbix Proxy
`ZBX_HOSTNAME` has to be the same as the value configured on the zabbix server as the proxy name.
```sh
podman run --name zabbix-proxy \
-p 10051:10051 \
-e ZBX_SERVER_HOST="178.18.243.82" \
-e ZBX_HOSTNAME="he1prx1" \
@ -93,15 +105,17 @@ The setting will be reset to default after a restart of the database container.
```
### Updating Proxy
Updating the proxy will always fail when using the SQLite database, as upgrading is not supported for SQLite.
_NOTE: The server and proxy need to run the same version of zabbix to talk with one another_
Simply deleting/moving the old SQLite database and restarting the proxy is enough.
*NOTE: History stored on the proxy will obviously be lost*
_NOTE: History stored on the proxy will obviously be lost_
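A rough sketch of the procedure, assuming the SQLite database sits on a mounted volume (the path is a placeholder):
```sh
podman stop zabbix-proxy
# keep the old database around just in case
mv (path to proxy db) (path to proxy db).bak
podman start zabbix-proxy
```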
## Zabbix Agent
```
# podman run --name zabbix-agent \
```sh
podman run --name zabbix-agent \
-p 10050:10050 \
-e ZBX_HOSTNAME="(hostname)" \
-e ZBX_SERVER_HOST="(zabbix server/proxy)" \
@ -109,12 +123,14 @@ Simply deleting/moving the old SQLite database and restarting the proxy is enoug
```
### XCP-ng
Use zabbix package from EPEL.
Zabbix server can handle the older agent fine. [See the Documentation on Compatibility](https://www.zabbix.com/documentation/current/en/manual/appendix/compatibility)
`# yum install zabbix50-agent --enablerepo=epel`
`# yum install zabbix50-agent --enablerepo=epel`
Edit `/etc/zabbix_agentd.conf`
*haven't managed to make encryption work yet*
_haven't managed to make encryption work yet_
```
Server=(Zabbix server ip)
ServerActive=(Zabbix server ip)
@ -125,16 +141,19 @@ Hostname=he1xcp1
#TLSPSKFile=/mnt/zabbix/zabbix_agentd.psk
```
Create the .psk file. Set the user and group to `zabbix`
Create the .psk file. Set the user and group to `zabbix`
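A possible way to create it, assuming the path from the config above (run as root):
```sh
# 64 hex characters is well above zabbix's minimum PSK length
openssl rand -hex 32 > /mnt/zabbix/zabbix_agentd.psk
chown zabbix:zabbix /mnt/zabbix/zabbix_agentd.psk
chmod 640 /mnt/zabbix/zabbix_agentd.psk
```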
Allow 10050/TCP on the firewall
Allow 10050/TCP on the firewall
*nope*
`# yum install openssl11 --enablerepo=epel`
_nope_
`# yum install openssl11 --enablerepo=epel`
## TODO
### Encryption with PSK
> [Official Documentation](https://www.zabbix.com/documentation/6.0/en/manual/encryption/using_pre_shared_keys)
### Force refresh Proxy
> [Zabbix Forum Post](https://www.zabbix.com/forum/zabbix-troubleshooting-and-problems/363196-cannot-send-list-of-active-checks-to-ip-address-host-ip-address-hostnames-match?p=363205#post363205)

View File

@ -1,10 +1,11 @@
---
title: 'Non-Standard Shell'
title: "Non-Standard Shell"
visible: true
---
[toc]
When trying to use a non-standard shell, `chsh` will throw the following error:
`chsh: /usr/local/bin/zsh: non-standard shell`
To fix this, add the shell's path you want to use to `/etc/shells`
When trying to use a non-standard shell, `chsh` will throw the following error:
`chsh: /usr/local/bin/zsh: non-standard shell`
To fix this, add the shell's path you want to use to `/etc/shells`
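One way to do this, using the path from the error above:
```sh
# register the shell, then switch to it
echo "/usr/local/bin/zsh" | sudo tee -a /etc/shells
chsh -s /usr/local/bin/zsh
```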

View File

@ -4,16 +4,19 @@ visible: true
---
[toc]
## Returning exit status
`exit 1`
Code | Meaning
--- | ---
0 | Success
1 | Error
| Code | Meaning |
| ---- | ------- |
| 0 | Success |
| 1 | Error |
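A quick sketch of how the caller sees the status (`grep -q` exits 0 on a match, 1 otherwise):
```sh
grep -q root /etc/passwd
echo "$?"   # prints the exit status of grep
```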
## Check for Arguments given
```
```sh
if [ $# -eq 0 ]; then
echo "Please supply one argument"
$(exit 1); echo "$?"
@ -22,15 +25,19 @@ elif [ $# -ge 2 ]; then
$(exit 1); echo "$?"
fi
```
## Multiline output
```
```sh
cat << EOF
Line 1
Line 2
Line 3
EOF
```
Will output:
Will output:
```
Line 1
Line 2

View File

@ -4,13 +4,17 @@ visible: true
---
[toc]
## Utils
`# pacman -S btrfs-progs`
`# pacman -S btrfs-progs`
## Fstab example
`UUID=2dc70a6e-b4cf-4d94-b326-0ba9f886cf49 /mnt/tmp btrfs defaults,noatime,compress-force=zstd,space_cache=v2,subvol=@ 0 0`
Options:
Options:
```
defaults -> Use whatever defaults
noatime -> Reading access to a file is not recorded
@ -20,8 +24,9 @@ subvol -> Subvolume to mount
```
## Filesystem usage
Show storage allocated, used and free
`# btrfs fi usage (mountpoint)`
`# btrfs fi usage (mountpoint)`
```
allocated: space used
@ -31,22 +36,25 @@ Free: free storage based on "Used"
```
Start rebalance of datachunks filled less than 70%
`# btrfs balance start --b -dusage=70 -musage=70 (mountpoint)`
`# btrfs balance start --b -dusage=70 -musage=70 (mountpoint)`
Check status of rebalance
`# btrfs balance status -v (mountpoint)`
`# btrfs balance status -v (mountpoint)`
## Disable CoW
Disable copy on write for folders (Only works on new files)
`$ chattr +C (path)`
`$ chattr +C (path)`
## Device errors
Error counts for a given mountpoint
`# btrfs dev stat (mountpoint)`
`# btrfs dev stat (mountpoint)`
## Compression
### Algorithms
```
zlib: Slow, but strong compression, level 1-9
lzo : Fastest, weak compression
@ -55,79 +63,88 @@ zstd: [Recommended] Medium, newer compression standard than the others, only wor
Enable compression for existing files
`# btrfs filesystem defragment -r -v -c(alg) (path)`
*It is impossible to specify the level of compression wanted.*
_It is impossible to specify the level of compression wanted._
Add `compress=(alg)` to `/etc/fstab`
Add `compress=(alg)` to `/etc/fstab`
To specify a level of compression (zlib and zstd) use `compress=(alg):(level)` in fstab.
For zstd compression it is recommended to use `compress-force=zstd:(level)`
For zstd compression it is recommended to use `compress-force=zstd:(level)`
## Subvolumes
List
`# btrfs subv list (path)`
`# btrfs subv list (path)`
Create
`# btrfs subv create (path)`
`# btrfs subv create (path)`
Mount a subvolume
`# mount -o subvol=@(subvolname) /dev/sdXX /(mountpoint)`
`# mount -o subvol=@(subvolname) /dev/sdXX /(mountpoint)`
## Snapshots
TODO
## RAID
An array can be mounted by specifying one of its members.
`# mount /dev/sdXX /mnt`
All members of an array have the same UUID, which can be mounted through fstab.
An array can be mounted by specifying one of its members.
`# mount /dev/sdXX /mnt`
All members of an array have the same UUID, which can be mounted through fstab.
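A minimal fstab sketch using the shared UUID (placeholders as elsewhere on this page):
```
UUID=(uuid) (mountpoint) btrfs defaults,noatime 0 0
```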
### RAID 1
On filesystem creation
`# mkfs.btrfs -m raid1 -d raid1 /dev/sdXX /dev/sdYY`
`# mkfs.btrfs -m raid1 -d raid1 /dev/sdXX /dev/sdYY`
### RAID 5
On filesystem creation
*It is recommended not to use raid5/6 for metadata yet*
`# mkfs.btrfs -m raid1 -d raid5 /dev/sdXX /dev/sdYY /dev/sdZZ`
_It is recommended not to use raid5/6 for metadata yet_
`# mkfs.btrfs -m raid1 -d raid5 /dev/sdXX /dev/sdYY /dev/sdZZ`
### RAID 10
On filesystem creation
`# mkfs.btrfs -m raid10 -d raid10 /dev/sdXX /dev/sdYY /dev/sdZZ /dev/sdQQ`
`# mkfs.btrfs -m raid10 -d raid10 /dev/sdXX /dev/sdYY /dev/sdZZ /dev/sdQQ`
### Convert to single device
First, the files have to be collected on one device.
*DUP on system and metadata should only be used on HDDs. Use single on SSDs*
`# btrfs balance start -f -sconvert=dup,devid=(id) -mconvert=dup,devid=(id) -dconvert=single,devid=(id) /(mountpoint)`
_DUP on system and metadata should only be used on HDDs. Use single on SSDs_
`# btrfs balance start -f -sconvert=dup,devid=(id) -mconvert=dup,devid=(id) -dconvert=single,devid=(id) /(mountpoint)`
Now unused devices can be removed
`# btrfs device delete /dev/sdYY /(mountpoint)`
`# btrfs device delete /dev/sdYY /(mountpoint)`
### Replace dying/dead device in RAID array
Show arrays that are available
`btrfs fi show`
`btrfs fi show`
From my testing the log has to be dropped before btrfs will mount the incomplete array
`btrfs rescue zero-log /dev/sdXX`
`btrfs rescue zero-log /dev/sdXX`
Mount with these options to be able to fix it
`mount -o rw,degraded /(mountpoint)`
`mount -o rw,degraded /(mountpoint)`
The ID has to be replaced with the ID of the **missing** device!
`btrfs replace start -B (id) /dev/sdYY /(mountpoint)`
`btrfs replace start -B (id) /dev/sdYY /(mountpoint)`
Query the status of the replace
`btrfs replace status /(mountpoint)`
`btrfs replace status /(mountpoint)`
Balance the filesystem at the end
`btrfs balance /(mountpoint)`
`btrfs balance /(mountpoint)`
## Issues
### 100% CPU Usage
`btrfs-transaction` and `btrfs-cleaner` will run on a single cpu core, maxing it out with 100% load.
*TODO: Check what enabled quotas in the first place. A likely candidate is snapper*
_TODO: Check what enabled quotas in the first place. A likely candidate is snapper_
The issue is apparently caused by using quotas in btrfs.
Check if quotas are enabled:
`# btrfs qgroup show (path)`
Disable quotas:
`# btrfs quota disable (path)`
`# btrfs quota disable (path)`

View File

@ -1,119 +1,137 @@
---
title: 'Doom Emacs'
title: "Doom Emacs"
visible: true
---
[toc]
## Keybindings
### Minimap
`SPC t m`
`SPC t m`
### Dired
Provides directory view
Provides directory view
Create new directory within the current directory
`Shift +`
`Shift +`
Create new file in current directory
`SPC . <enter new file name>`
`SPC . <enter new file name>`
Delete files or directories
`d`, `x`
`d`, `x`
Unselect
`u`
`u`
### Treemacs
Toggle view of directory structure of the current project on the side.
`SPC o p`
`SPC o p`
### Term
Open terminal
`SPC o t`
`SPC o t`
### Window management
Open window right of current window
`SPC w v`
`SPC w v`
Open window below current window
`SPC w s`
`SPC w s`
Move to other windows
`SPC h/j/k/l`
`SPC h/j/k/l`
### Buffers
Open recent buffers within the same project
`SPC b b`
`SPC ,`
`SPC ,`
Remove buffers
`SPC b k`
`SPC b k`
Open new empty buffer
`SPC b N`
`SPC b N`
Save buffer
`SPC b s`
`SPC b s`
### Quickly move to start/end of a document
Start of document
`gg`
`gg`
End of document
`G`
`G`
### Evil Snipe
Move to next occurrence of one letter
`f (letter)`
`f (letter)`
Move to previous occurrence of one letter
`F (letter)`
`F (letter)`
`;` continue in that direction
`,` go in the opposite direction
`,` go in the opposite direction
`s (letter)` or `S (letter)` for occurrences of two letters
`s (letter)` or `S (letter)` for occurrences of two letters
### Indent selection
Press `CTRL x` followed by `TAB` and use h/l to indent text
Press `CTRL x` followed by `TAB` and use h/l to indent text
### SSH Editing
`SPC f f`
Enter `/ssh:`
Press `TAB` to show available options
Enter new options with the following syntax: `/ssh:root@albedo.realstickman.net:/`
Enter new options with the following syntax: `/ssh:root@albedo.realstickman.net:/`
#### Privilege elevation
Execute sudo after establishing the connection
`/ssh:nonroot@albedo.realstickman.net|sudo:nonroot@albedo.realstickman.net:/`
`/ssh:nonroot@albedo.realstickman.net|sudo:nonroot@albedo.realstickman.net:/`
## Windows installation
### git
Go to the [git homepage](https://git-scm.com/) and install it.
Go to the [git homepage](https://git-scm.com/) and install it.
### emacs
Go to the [emacs homepage](https://www.gnu.org/software/emacs/) and install it.
Add the `(location)\emacs\x86_64\bin` directory to your PATH in the environment variables.
Add the `(location)\emacs\x86_64\bin` directory to your PATH in the environment variables.
#### Shortcut
Create a shortcut to `(location)\emacs\x86_64\bin\runemacs.exe`
Edit the shortcut to execute in your home directory `C:\Users\(user)`
Edit the shortcut to execute in your home directory `C:\Users\(user)`
### HOME
Add the path to your home to the environment variables.
New variable -> HOME -> `C:\Users\(user)`
Add the path to your home to the environment variables.
New variable -> HOME -> `C:\Users\(user)`
### doom-emacs
Open git bash
```bash
Open git bash
```sh
git clone --depth 1 https://github.com/hlissner/doom-emacs ~/.emacs.d
```
```bash
~/.emacs.d/bin/doom install
```
Add `C:\Users\(user)\.emacs.d\bin` to your PATH.
Add `C:\Users\(user)\.emacs.d\bin` to your PATH.
*Currently doesn't show emotes*
*Missing ripgrep and fd*
_Currently doesn't show emotes_
_Missing ripgrep and fd_

View File

@ -4,35 +4,42 @@ visible: true
---
[toc]
## List supported codecs and formats
`$ ffmpeg -codecs`
`$ ffmpeg -formats`
## List supported codecs and formats
`$ ffmpeg -codecs`
`$ ffmpeg -formats`
## Video Encoding
### H.264
> [H.264 Encoding Guide](https://trac.ffmpeg.org/wiki/Encode/H.264)
### H.264
> [H.264 Encoding Guide](https://trac.ffmpeg.org/wiki/Encode/H.264)
### AV1
> [AV1 Encoding Guide](https://trac.ffmpeg.org/wiki/Encode/AV1)
> [AV1 Encoding Guide](https://trac.ffmpeg.org/wiki/Encode/AV1)
#### libaom
```
$ ffmpeg -i "/mnt/storage/MediaLibrary/input/Joker/test.mkv" -metadata title="Joker" -disposition 0 \
```sh
ffmpeg -i "/mnt/storage/MediaLibrary/input/Joker/test.mkv" -metadata title="Joker" -disposition 0 \
-c:v libaom-av1 -crf 23 -b:v 0 -cpu-used 6 -row-mt 1 -map 0:v:0 -metadata:s:v:0 title="Video" \
-c:a libopus -b:a 768k -ac:a 8 -map 0:a:0 -map 0:a:3 -metadata:s:a:0 title="English [7.1ch]" -metadata:s:a:0 language=eng -metadata:s:a:1 title="German [7.1ch]" -metadata:s:a:1 language=ger -disposition:a:0 default \
-c:s copy -map 0:s:0 -map 0:s:1 -metadata:s:s:0 title="English [PGS]" -metadata:s:s:0 language=eng -metadata:s:s:1 title="German [PGS]" -metadata:s:s:1 language=ger -disposition:s:0 default \
/mnt/storage/MediaLibrary/output/Joker/test-libaom-av1.mkv
```
Additional settings for increased speed and cpu usage:
Additional settings for increased speed and cpu usage:
```
-g 239: keyframes every ~10s (fps * 10)
-tiles 2x2: multiple parallel encoding tiles to speed up performance (4 in total here)
```
```
$ ffmpeg -i "/mnt/storage/MediaLibrary/input/Joker/test.mkv" -metadata title="Joker" -disposition 0 \
```sh
ffmpeg -i "/mnt/storage/MediaLibrary/input/Joker/test.mkv" -metadata title="Joker" -disposition 0 \
-c:v libaom-av1 -crf 23 -b:v 0 -cpu-used 6 -row-mt 1 -g 239 -tiles 2x2 -map 0:v:0 -metadata:s:v:0 title="Video" \
-c:a libopus -b:a 768k -ac:a 8 -map 0:a:0 -map 0:a:3 -metadata:s:a:0 title="English [7.1ch]" -metadata:s:a:0 language=eng -metadata:s:a:1 title="German [7.1ch]" -metadata:s:a:1 language=ger -disposition:a:0 default \
-c:s copy -map 0:s:0 -map 0:s:1 -metadata:s:s:0 title="English [PGS]" -metadata:s:s:0 language=eng -metadata:s:s:1 title="German [PGS]" -metadata:s:s:1 language=ger -disposition:s:0 default \
@ -40,8 +47,9 @@ $ ffmpeg -i "/mnt/storage/MediaLibrary/input/Joker/test.mkv" -metadata title="Jo
```
#### SVT-AV1
```
$ ffmpeg -i "/mnt/storage/MediaLibrary/input/Joker/test.mkv" -metadata title="Joker" -disposition 0 \
```sh
ffmpeg -i "/mnt/storage/MediaLibrary/input/Joker/test.mkv" -metadata title="Joker" -disposition 0 \
-c:v libsvtav1 -crf 23 -preset 8 -g 239 -map 0:v:0 -metadata:s:v:0 title="Video" \
-c:a libopus -b:a 768k -ac:a 8 -map 0:a:0 -map 0:a:3 -metadata:s:a:0 title="English [7.1ch]" -metadata:s:a:0 language=eng -metadata:s:a:1 title="German [7.1ch]" -metadata:s:a:1 language=ger -disposition:a:0 default \
-c:s copy -map 0:s:0 -map 0:s:1 -metadata:s:s:0 title="English [PGS]" -metadata:s:s:0 language=eng -metadata:s:s:1 title="German [PGS]" -metadata:s:s:1 language=ger -disposition:s:0 default \
@ -49,11 +57,17 @@ $ ffmpeg -i "/mnt/storage/MediaLibrary/input/Joker/test.mkv" -metadata title="Jo
```
## Audio Encoding
> [High Quality Audio Encoding Guide](https://trac.ffmpeg.org/wiki/Encode/HighQualityAudio)
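A minimal sketch of re-encoding only the audio to Opus (file names are placeholders):
```sh
# -vn drops the video stream, leaving just the first audio track
ffmpeg -i input.mkv -vn -c:a libopus -b:a 192k output.opus
```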
## Video Quality
### VMAF
> [A practical guide for VMAF](https://medium.com/a-practical-guide-for-vmaf-481b4d420d9c)
*Note: The order of the input videos is important. Make sure to place the distorted video first*
`$ ffmpeg -i (distorted) -i (original) -filter_complex libvmaf -f null -`
### VMAF
> [A practical guide for VMAF](https://medium.com/a-practical-guide-for-vmaf-481b4d420d9c)
_Note: The order of the input videos is important. Make sure to place the distorted video first_
```sh
ffmpeg -i (distorted) -i (original) -filter_complex libvmaf -f null -
```

View File

@ -4,10 +4,14 @@ visible: true
---
[toc]
## For loop
### Iterating over number sequence
`for i in (seq 1 10); echo $i; end`
Output:
Output:
```
1
2
@ -23,7 +27,8 @@ Output:
If you want all numbers to be padded to equal length, use the `-w` flag with `seq`
`for i in (seq -w 1 10); echo $i; end`
Output:
Output:
```
01
02

View File

@ -4,25 +4,31 @@ visible: true
---
[toc]
## Reset everything to selected branch
Useful for getting to the same state as upstream
`git reset --hard (upstream)/(branch)`
`git pull (upstream) (branch)`
Finally force push all of this into your own repo
## Reset everything to selected branch
Useful for getting to the same state as upstream
```sh
git reset --hard (upstream)/(branch)
git pull (upstream) (branch)
```
Finally force push all of this into your own repo
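A sketch of the force push, assuming your own repo is the `origin` remote:
```sh
git push --force origin (branch)
```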
## Get Pull Request from foreign repo
*Example with neofetch*
_Example with neofetch_
Add remote if you haven't already done that
`git remote add dylanaraps https://github.com/dylanaraps/neofetch.git`
`git remote add dylanaraps https://github.com/dylanaraps/neofetch.git`
Remotes can be shown with `git remote show`
Remotes can be shown with `git remote show`
Fetch desired commits
`git fetch dylanaraps a0221c51ff4c8ce834d7e3431f2770b6879de009`
`git fetch dylanaraps a0221c51ff4c8ce834d7e3431f2770b6879de009`
Cherry pick commits
`git cherry-pick -m 1 a0221c51ff4c8ce834d7e3431f2770b6879de009`
`git cherry-pick -m 1 a0221c51ff4c8ce834d7e3431f2770b6879de009`
Resolve whatever conflicts arise
Resolve whatever conflicts arise

View File

@ -4,13 +4,16 @@ visible: true
---
[toc]
## Linux Server
`# apt install nfs-kernel-server`
`# apt install nfs-kernel-server`
Shares can be configured in `/etc/exports`
`(mountpoint) (allowed_ip)(options) (allowed_ip2)(options)`
`(mountpoint) (allowed_ip)(options) (allowed_ip2)(options)`
### Options
```
ro: specifies that the directory may only be mounted as read only
rw: grants both read and write permissions on the directory
@ -21,31 +24,33 @@ sync: this just ensures that the host keeps any changes uploaded to the shared d
async: ignores synchronization checks in favor of increased speed
```
*Example single host:*
`/mnt/nfs 192.168.1.123(rw,sync,no_subtree_check)`
_Example single host:_
`/mnt/nfs 192.168.1.123(rw,sync,no_subtree_check)`
*Example whole subnet:*
`/mnt/nfs 192.168.1.0/24(rw,sync,no_subtree_check)`
_Example whole subnet:_
`/mnt/nfs 192.168.1.0/24(rw,sync,no_subtree_check)`
Apply new config by restarting the service.
`# systemctl restart nfs-kernel-server`
`# systemctl restart nfs-kernel-server`
Show configured shares
`$ cat /var/lib/nfs/etab`
`$ cat /var/lib/nfs/etab`
## Linux Client
`# pacman -S nfs-utils`
`# apt install nfs-common`
`# apt install nfs-common`
Mount through terminal
`# mount -t nfs4 (ip):(mountpoint) (local mountpoint)`
`# mount -t nfs4 (ip):(mountpoint) (local mountpoint)`
Can also be mounted with fstab
Can also be mounted with fstab
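A minimal fstab sketch with the same placeholders:
```
(ip):(mountpoint) (local mountpoint) nfs4 defaults,nofail 0 0
```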
## Windows Client
Search for `Turn Windows features on or off`
Check everything under `Services for NFS` and click "OK"
Search for `Turn Windows features on or off`
Check everything under `Services for NFS` and click "OK"
Mount as mapped network drive
`mount -o anon \\(ip)\(mountpoint) (letter):`
`mount -o anon \\(ip)\(mountpoint) (letter):`

View File

@ -4,8 +4,10 @@ visible: true
---
[toc]
## Exit on Keyboard Interrupt
```
```python
try:
<put your code here>
except KeyboardInterrupt:

View File

@ -4,82 +4,96 @@ visible: true
---
[toc]
## Linux Server
`# apt install xrdp`
`# systemctl enable xrdp`
`# apt install xrdp`
Put the desktop environment you want to start in `.xsession`
`# systemctl enable xrdp`
*Example*
`xfce4-session`
Put the desktop environment you want to start in `.xsession`
`# systemctl restart xrdp`
_Example_
`xfce4-session`
`# systemctl restart xrdp`
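One way to write the example above into `.xsession` (assuming xfce is installed):
```sh
echo "xfce4-session" > ~/.xsession
```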
### Change port
Edit `/etc/xrdp/xrdp.ini`
Change the value of `port` to what you want
Edit `/etc/xrdp/xrdp.ini`
`# systemctl restart xrdp`
Change the value of `port` to what you want
`# systemctl restart xrdp`
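For reference, the relevant part of `/etc/xrdp/xrdp.ini` might then look like this (3390 is just an example value):
```
[Globals]
port=3390
```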
## Windows Server
### Windows Server Edition
Go to `Local Server` in the Server manager.
There should be an option called `Remote Desktop`. Click on it and allow remote connections.
If you refresh the view now, `Remote Desktop` should show as enabled.
If you refresh the view now, `Remote Desktop` should show as enabled.
#### Allow unlimited RDP sessions
Enter `gpedit` in the search bar
Go to `Administrative Templates>Windows Components>Remote Desktop Services>Remote Desktop Session Host>Connections`
Enter `gpedit` in the search bar
Disable `Limit number of connections`
Go to `Administrative Templates>Windows Components>Remote Desktop Services>Remote Desktop Session Host>Connections`
Disable `Restrict Remote Desktop Services users to a single Remote Desktop Services session`
Disable `Limit number of connections`
Reboot the Server
Disable `Restrict Remote Desktop Services users to a single Remote Desktop Services session`
Reboot the Server
### Windows Pro Edition
Go to `Remotedesktop` in the settings under `System`
Go to `Remotedesktop` in the settings under `System`
#### Change port
*PowerShell as admin*
_PowerShell as admin_
Check port in use currently:
`Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' -name "PortNumber"`
`Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' -name "PortNumber"`
Change port:
`Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' -name "PortNumber" -Value (port)`
`Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' -name "PortNumber" -Value (port)`
Firewall exception:
`New-NetFirewallRule -DisplayName 'RDPPORTLatest' -Profile 'Public' -Direction Inbound -Action Allow -Protocol TCP -LocalPort (port)`
`New-NetFirewallRule -DisplayName 'RDPPORTLatest' -Profile 'Public' -Direction Inbound -Action Allow -Protocol TCP -LocalPort (port)`
Reboot the PC
Reboot the PC
## Linux Client
### Installation
Use Remmina as client and install freerdp to get support for RDP.
`# pacman -S remmina freerdp`
### Installation
Use Remmina as client and install freerdp to get support for RDP.
`# pacman -S remmina freerdp`
### Configuration
Example configuration:
![rdp-linux-client-pic1-example.png](/rdp-linux-client-pic1-example.png)
#### Set different port
![rdp-linux-client-pic2-port.png](/rdp-linux-client-pic2-port.png)
## Windows Client
Enter `Remote Desktop Connection` in Windows search.
The target computer can be specified by IP or name
After clicking on `connect` the user will be asked to insert the username and password.
After clicking on `connect` the user will be asked to insert the username and password.
### Use different port
![rdp-winpro-client-pic1-example-port.png](/rdp-winpro-client-pic1-example-port.png)
## References
- [ArchWiki Remmina](https://wiki.archlinux.org/index.php/Remmina)
- [Azure RDP configuration](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/use-remote-desktop)
- [ArchWiki xrdp](https://wiki.archlinux.org/index.php/Xrdp)
- [ArchWiki Remmina](https://wiki.archlinux.org/index.php/Remmina)
- [Azure RDP configuration](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/use-remote-desktop)
- [ArchWiki xrdp](https://wiki.archlinux.org/index.php/Xrdp)

View File

@ -4,20 +4,23 @@ visible: true
---
[toc]
## Linux Server
`sudo apt install samba smbclient`
`sudo apt install samba smbclient`
samba conf backup
`sudo cp /etc/samba/smb.conf /etc/samba/smb.conf_backup`
`sudo cp /etc/samba/smb.conf /etc/samba/smb.conf_backup`
*Samba users have to exist on the system as well before they are added to samba's user management system.*
_Samba users have to exist on the system as well before they are added to samba's user management system._
Add user to samba and create a password for it
`sudo smbpasswd -a (user)`
`sudo smbpasswd -a (user)`
Directories can be shared with groups or users.
Make sure to [set the owner and group](/content/linux-other/files.html) for the directories you want to share.
Make sure to [set the owner and group](/content/linux-other/files.html) for the directories you want to share.
### Sharing with users
```
[sharename]
path = (absolute path)
@ -30,10 +33,12 @@ Make sure to [set the owner and group](/content/linux-other/files.html) for the
```
### Sharing with groups
Make sure to add all users to the group
The "@" signals samba that this is a group
The "@" signals samba that this is a group
```
[sharename]
[sharename]
path = (absolute path)
read only = no
writeable = yes
@ -44,4 +49,4 @@ The "@" signals samba that this is a group
```
Finally, restart the samba service.
`sudo systemctl restart smbd`
`sudo systemctl restart smbd`

View File

@ -1,15 +1,18 @@
---
title: 'Regenerate SSH Keys'
title: "Regenerate SSH Keys"
visible: true
---
[toc]
## Remove from known_hosts
`$ ssh-keygen -R (server name)`
`$ ssh-keygen -R (server name)`
## Debian
Remove the old Hostkeys
`# rm -v /etc/ssh/ssh_host_*`
`# rm -v /etc/ssh/ssh_host_*`
Generate new Hostkeys
`# dpkg-reconfigure openssh-server`
`# dpkg-reconfigure openssh-server`

View File

@ -4,38 +4,45 @@ visible: true
---
[toc]
## Linux Client
`# apt install sshfs`
`# pacman -S sshfs`
## Linux Client
`# apt install sshfs`
`# pacman -S sshfs`
Mount remote filesystem
`sshfs (user)@(ip/domain):(remotepath) (mountpoint)`
`sshfs (user)@(ip/domain):(remotepath) (mountpoint)`
*Example with Windows host:*
`sshfs admin@192.168.1.123:/ /mnt/tmp`
_Example with Windows host:_
`sshfs admin@192.168.1.123:/ /mnt/tmp`
## Windows Client
Install [WinFSP](https://github.com/billziss-gh/winfsp)
Install [sshfs-win](https://github.com/billziss-gh/sshfs-win)
Install [sshfs-win](https://github.com/billziss-gh/sshfs-win)
### Usage
*No path = start in remote user's home directory*
_No path = start in remote user's home directory_
#### GUI
Map a new network drive in Windows Explorer
`\\sshfs\(user)@(ip/domain)\(path)`
`\\sshfs\(user)@(ip/domain)\(path)`
#### Terminal
Mount drive
`net use (letter): \\sshfs\(user)@(ip/domain)\(path)`
`net use (letter): \\sshfs\(user)@(ip/domain)\(path)`
Show mounted drives
`net use`
`net use`
Remove mounted drive
`net use (letter): /delete`
`net use (letter): /delete`
## References
- [sshfs](https://github.com/libfuse/sshfs)
- [sshfs-win](https://github.com/billziss-gh/sshfs-win)
- [sshfs](https://github.com/libfuse/sshfs)
- [sshfs-win](https://github.com/billziss-gh/sshfs-win)

View File

@ -1,5 +1,5 @@
---
title: 'Useful Commands'
title: "Useful Commands"
visible: true
---
@ -9,7 +9,7 @@ visible: true
### Splitting PDF files
```bash
```sh
convert -density 600 {INPUT.PDF} -crop 50x100% +repage {OUT.PDF}
```
@ -26,7 +26,7 @@ Using find with its `exec` switch one can set different permissions based on the
One example would be only changing file or directory permissions.
```sh
$ find (directory) -type f -exec chmod 744 {} +
find (directory) -type f -exec chmod 744 {} +
```
Replacing `-type f` with `-type d` would execute the `chmod` for directories instead.
@ -39,7 +39,7 @@ Using openssl on CPUs with AES acceleration one can create pseudorandom data wit
Much faster than `/dev/urandom` at least
```sh
# openssl enc -aes-128-ctr -md sha512 -pbkdf2 -nosalt -pass file:/dev/urandom < /dev/zero | pv > {TARGET DISK}
openssl enc -aes-128-ctr -md sha512 -pbkdf2 -nosalt -pass file:/dev/urandom < /dev/zero | pv > {TARGET DISK}
```
Around 2GiB/s on my Ryzen 7 1700x if output to `/dev/null`
@ -49,7 +49,7 @@ Around 2GiB/s on my Ryzen 7 1700x if output to `/dev/null`
> [From Pretty CSV viewing on the Command Line](https://www.stefaanlippens.net/pretty-csv.html)
```sh
$ column -t -s, < {FILE.CSV}
column -t -s, < {FILE.CSV}
```
### Download directory from webdav
@ -57,5 +57,5 @@ $ column -t -s, < {FILE.CSV}
Using `wget`, it's possible to download directories recursively from WebDAV.
```sh
$ wget -r -nH -np --cut-dirs=1 --user={USERNAME} --password={PASSWORD} https://WEBDAVHOST/DIR/DIR
wget -r -nH -np --cut-dirs=1 --user={USERNAME} --password={PASSWORD} https://WEBDAVHOST/DIR/DIR
```

View File

@ -4,18 +4,22 @@ visible: true
---
[toc]
## Get output from command
`:r!(command)`
*Example to get UUID for a disk*
`:r!blkid /dev/(partition) -sUUID -ovalue`
## Get output from command
`:r!(command)`
_Example to get UUID for a disk_
`:r!blkid /dev/(partition) -sUUID -ovalue`
## Write as sudo user
`:w !sudo tee %`
`:w !sudo tee %`
## Replace strings
Globally replace strings
`:%s/foo/bar/g`
`:%s/foo/bar/g`
Replace strings in line 6 to 10
`:6,10s/foo/bar/g`
`:6,10s/foo/bar/g`

View File

@ -4,33 +4,41 @@ visible: true
---
[toc]
## Host
### Networking for nested VMs
To pass through the network connection to nested VMs, the first VM has to put the network adapter into promiscuous mode.
By default, only root is allowed to do that; however, the permissions can also be granted to others.
Grant permission to group:
```
# chgrp (group) /dev/vmnetX
# chmod g+rw /dev/vmnetX
## Host
### Networking for nested VMs
To pass through the network connection to nested VMs, the first VM has to put the network adapter into promiscuous mode.
By default, only root is allowed to do that; however, the permissions can also be granted to others.
Grant permission to group:
```sh
chgrp (group) /dev/vmnetX
chmod g+rw /dev/vmnetX
```
Grant permission to everyone:
`# chmod a+rw /dev/vmnetX`
`# chmod a+rw /dev/vmnetX`
### Allow nested VMs
Enable the following two settings under "Processor" in the settings of the VM.
`Virtualize Intel VT-x/EPT or AMD-V/RVI`
`Virtualize CPU performance counters`
`Virtualize CPU performance counters`
### Fix MSRS bug on Ryzen CPUs
Add `kvm.ignore_msrs=1` in `/etc/default/grub` to `GRUB_CMDLINE_LINUX_DEFAULT=`
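The resulting line in `/etc/default/grub` might look like this (the existing parameters, here `quiet`, differ per system):
```
GRUB_CMDLINE_LINUX_DEFAULT="quiet kvm.ignore_msrs=1"
```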
Update the Grub configuration
`# grub-mkconfig -o /boot/grub/grub.cfg`
`# grub-mkconfig -o /boot/grub/grub.cfg`
## Guest
### VMWare Tools
**Debian**
`# apt install open-vm-tools`
**Arch**
`# pacman -S open-vm-tools`
`# pacman -S open-vm-tools`

View File

@ -5,23 +5,26 @@ media_order: vnc-linux-pic1-example.png
---
[toc]
## Linux Server
For the VNC Server we will be using tightVNC.
`# apt install tightvncserver`
`# apt install tightvncserver`
Initial setup and starting VNC server
`vncserver`
`vncserver`
You will have to enter a password
Optionally, a view-only password can be created as well.
Optionally, a view-only password can be created as well.
Kill VNC server
`vncserver -kill :1`
`vncserver -kill :1`
Edit the `xstartup` file in `.vnc` to your liking.
Edit the `xstartup` file in `.vnc` to your liking.
*Example with xfce*
```
_Example with xfce_
```sh
#!/bin/sh
xrdb $HOME/.Xresources
@ -31,21 +34,24 @@ exec startxfce4
```
### Change password
`vncpasswd`
You can also add a view-only password
`vncpasswd`
You can also add a view-only password
## Windows Server
Install tightVNC to get a VNC Client and also a VNC Server in one package for Windows.
The server will be started automatically.
One important setting is `Require VNC authentication`, which allows you to define a password for viewing and interacting with the remote pc.
Install tightVNC to get a VNC Client and also a VNC Server in one package for Windows.
The server will be started automatically.
One important setting is `Require VNC authentication`, which allows you to define a password for viewing and interacting with the remote pc.
## Linux Client
Install Remmina with libvncserver to get client functionality.
`# pacman -S remmina libvncserver`
![Picture showing the usage of VNC with Remmina](vnc-linux-pic1-example.png)
## Windows Client
Install tightVNC to get a VNC Client and also a VNC Server in one package for Windows.
Install tightVNC to get a VNC Client and also a VNC Server in one package for Windows.