Update formats and languages for prismjs

This commit is contained in:
RealStickman 2023-02-23 14:48:51 +01:00
parent b3ee22b2b5
commit 9026dca5be
47 changed files with 1004 additions and 670 deletions

View File

@ -4,17 +4,24 @@ visible: true
---
[toc]
## Other drives
Find the UUID with `sudo blkid`
`UUID=(uuid) (mountpath) (filesystem) defaults,noatime 0 2`
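For illustration, a filled-in entry might look like this (UUID and paths are made up):
```
UUID=0a3407de-014b-458b-b5c1-848e92a327a3 /mnt/data ext4 defaults,noatime 0 2
```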
## Samba shares
`//(ip)/(path)/ (mountpath) cifs uid=0,credentials=(path to credentials file),iocharset=utf8,noperm,nofail 0 0`
```sh
//(ip)/(path)/ (mountpath) cifs uid=0,credentials=(path to credentials file),iocharset=utf8,noperm,nofail 0 0
```
Example credentials file:
```
user=(user)
password=(password)
domain=WORKGROUP
```
Make sure to set the permissions of the credentials file to something restrictive like 700.
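As a sketch (using a throwaway file in place of the real credentials file):
```sh
# stand-in for the credentials file
f=$(mktemp)
# owner-only access, as suggested above
chmod 700 "$f"
stat -c %a "$f"   # → 700
```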

View File

@ -8,13 +8,18 @@ visible: true
## Pre-Setup
Create a gitea user
`# useradd -m git`
`# mkdir /etc/gitea`
`# chown git:git -R /etc/gitea`
```sh
useradd -m git
mkdir /etc/gitea
chown git:git -R /etc/gitea
```
Create the .ssh directory for the git user
`$ sudo -u git mkdir -p /home/git/.ssh`
```sh
sudo -u git mkdir -p /home/git/.ssh
```
Get the user id of git with `id git`
@ -22,8 +27,10 @@ Get the user id of git with `id git`
### Network and Pod
`# podman network create net_gitea`
`# podman pod create --name pod_gitea --network net_gitea -p 127.0.0.1:5432:5432 -p 3000:3000 -p 127.0.0.1:2222:22`
```sh
podman network create net_gitea
podman pod create --name pod_gitea --network net_gitea -p 127.0.0.1:5432:5432 -p 3000:3000 -p 127.0.0.1:2222:22
```
#### Port Mappings
@ -66,25 +73,26 @@ Get the user id of git with `id git`
```
**NOTE:** gitea's /data directory must not have overly open permissions. Otherwise the SSH redirection set up below will fail.
`0750` for directories and `0640` has been shown to work
`0750` for directories and `0640` is known to work.
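These permissions can be applied in one go; a sketch using a scratch directory in place of the real data volume:
```sh
# scratch directory standing in for gitea's /data volume (adjust the path)
data=$(mktemp -d)
mkdir -p "$data/git/repositories"
touch "$data/git/repositories/demo.git"
# 0750 for directories, 0640 for files
find "$data" -type d -exec chmod 0750 {} +
find "$data" -type f -exec chmod 0640 {} +
stat -c '%a %n' "$data/git/repositories" "$data/git/repositories/demo.git"
```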
The next few commands set up SSH redirection to gitea, used when cloning a repo over SSH.
> See also the [official documentation](https://docs.gitea.io/en-us/install-with-docker/#sshing-shim-with-authorized_keys)
Create SSH Keys for gitea
`$ sudo -u git ssh-keygen -t rsa -b 4096 -C "Gitea Host Key"`
`$ sudo -u git cat /home/git/.ssh/id_rsa.pub | sudo -u git tee -a /home/git/.ssh/authorized_keys`
`$ sudo -u git chmod 600 /home/git/.ssh/authorized_keys`
```sh
$ cat <<"EOF" | sudo tee /usr/local/bin/gitea
sudo -u git ssh-keygen -t rsa -b 4096 -C "Gitea Host Key"
sudo -u git cat /home/git/.ssh/id_rsa.pub | sudo -u git tee -a /home/git/.ssh/authorized_keys
sudo -u git chmod 600 /home/git/.ssh/authorized_keys
cat <<"EOF" | sudo tee /usr/local/bin/gitea
#!/bin/sh
ssh -p 2222 -o StrictHostKeyChecking=no git@127.0.0.1 "SSH_ORIGINAL_COMMAND=\"$SSH_ORIGINAL_COMMAND\" $0 $@"
EOF
```
`# chmod +x /usr/local/bin/gitea`
chmod +x /usr/local/bin/gitea
```
We've now finished setting up the ssh-redirection.
After that, connect to the server on port 3000 to finish the installation
@ -93,16 +101,25 @@ The first registered user will be made admin
## Management CLI
Gitea comes with a management CLI. To access it, enter the container first and switch to the user "git".
`# podman exec -it gitea bash`
`# su git`
```sh
podman exec -it gitea bash
su git
```
### User Management
List users:
`$ gitea admin user list`
```sh
gitea admin user list
```
Change user password:
`$ gitea admin user change-password -u (user) -p (password)`
```sh
gitea admin user change-password -u (user) -p (password)
```
## Package Management
@ -112,8 +129,12 @@ Gitea comes with a built-in container registry.
#### Login
`$ podman login gitea.exu.li`
```sh
podman login gitea.exu.li
```
#### Push image
`$ podman push <IMAGE ID> docker://gitea.exu.li/<OWNER>/<IMAGE>:<TAG>`
```sh
podman push <IMAGE ID> docker://gitea.exu.li/<OWNER>/<IMAGE>:<TAG>
```

View File

@ -158,21 +158,21 @@ TODO
_Does not work with snapper_
_Use a separate subvolume in that case_
`truncate -s 0 /mnt/swapfile`
`chattr +C /mnt/swapfile`
`btrfs property set /mnt/swapfile compression none`
`fallocate -l (size)M /mnt/swapfile`
```sh
truncate -s 0 /mnt/swapfile
chattr +C /mnt/swapfile
btrfs property set /mnt/swapfile compression none
fallocate -l (size)M /mnt/swapfile
```
#### Initialising swapfile
`chmod 600 /mnt/swapfile`
`mkswap /mnt/swapfile`
`swapon /mnt/swapfile`
```sh
chmod 600 /mnt/swapfile
mkswap /mnt/swapfile
swapon /mnt/swapfile
```
## Essential packages

View File

@ -9,8 +9,10 @@ visible: true
### Network and Pod
`# podman network create net_hedgedoc`
`# podman pod create --name pod_hedgedoc --network net_hedgedoc -p 127.0.0.1:5432:5432 -p 3005:3000`
```sh
podman network create net_hedgedoc
podman pod create --name pod_hedgedoc --network net_hedgedoc -p 127.0.0.1:5432:5432 -p 3005:3000
```
### Database
@ -23,11 +25,16 @@ visible: true
-d docker.io/postgres:14
```
`# podman exec -it hedgedocdb bash`
`# psql -U postgres`
```sh
podman exec -it hedgedocdb bash
psql -U postgres
```
Create database used by hedgedoc
`=# CREATE DATABASE hedgedocdb;`
```sql
CREATE DATABASE hedgedocdb;
```
### Application
@ -49,8 +56,10 @@ Create database used by hedgedoc
Because `CMD_ALLOW_EMAIL_REGISTER` is set to `false`, registration of new users has to be done through the CLI interface using `bin/manage_users` in the container.
`# podman exec -it hedgedocdb bash`
`# bin/manage_users --add (email)`
```sh
podman exec -it hedgedocdb bash
bin/manage_users --add (email)
```
## Nginx config

View File

@ -17,15 +17,13 @@ visible: true
## Apt Package
`# apt install nginx apt-transport-https`
`# wget -O - https://repo.jellyfin.org/jellyfin_team.gpg.key | apt-key add -`
`# echo "deb [arch=$( dpkg --print-architecture )] https://repo.jellyfin.org/$( awk -F'=' '/^ID=/{ print $NF }' /etc/os-release ) $( awk -F'=' '/^VERSION_CODENAME=/{ print $NF }' /etc/os-release ) main" | tee /etc/apt/sources.list.d/jellyfin.list`
`# apt update`
`# apt install jellyfin`
```sh
apt install nginx apt-transport-https
wget -O - https://repo.jellyfin.org/jellyfin_team.gpg.key | apt-key add -
echo "deb [arch=$( dpkg --print-architecture )] https://repo.jellyfin.org/$( awk -F'=' '/^ID=/{ print $NF }' /etc/os-release ) $( awk -F'=' '/^VERSION_CODENAME=/{ print $NF }' /etc/os-release ) main" | tee /etc/apt/sources.list.d/jellyfin.list
apt update
apt install jellyfin
```
## Nginx
@ -110,8 +108,9 @@ server {
}
```
Enable the config
`$ ln -s /etc/nginx/sites-available/(config) /etc/nginx/sites-enabled/`
Enable the config and restart nginx
Restart nginx
`# systemctl restart nginx`
```sh
ln -s /etc/nginx/sites-available/(config) /etc/nginx/sites-enabled/
systemctl restart nginx
```

View File

@ -1,55 +0,0 @@
---
title: Kaizoku
visible: false
---
[toc]
## Podman
### Network and Pod
`# podman network create net_kaizoku`
`# podman pod create --name pod_kaizoku --network net_kaizoku -p 3000:3000`
#### Port Mappings
```
3000: Kaizoku WebUI
```
### Database
```sh
# podman run --name kaizoku-db \
-e POSTGRES_USER=kaizoku \
-e POSTGRES_PASSWORD=kaizoku \
-e POSTGRES_DB=kaizoku \
-v /mnt/kaizuko_db:/var/lib/postgresql/data \
--pod pod_kaizoku \
-d docker.io/postgres:15
```
### Redis
```sh
# podman run --name kaizoku-redis \
-v /mnt/kaizoku_redis:/data \
--pod pod_kaizoku \
-d docker.io/redis:7-alpine
```
### Application
```sh
# podman run --name kaizoku-app \
-e DATABASE_URL=postgresql://kaizoku:kaizoku@kaizoku-db:5432/kaizoku \
-e KAIZOKU_PORT=3000 \
-e REDIS_HOST=kaizoku-redis \
-e REDIS_PORT=6379 \
-v /mnt/kaizoku_app/data:/data \
-v /mnt/kaizoku_app/config:/config \
-v /mnt/kaizoku_app/logs:/logs \
--pod pod_kaizoku \
-d ghcr.io/oae/kaizoku:latest
```

View File

@ -7,8 +7,10 @@ visible: true
## Create directories
`# mkdir -p /var/kavita/{config,content}`
`# mkdir -p /var/kavita/content/{manga,books,tech}`
```sh
mkdir -p /var/kavita/{config,content}
mkdir -p /var/kavita/content/{manga,books,tech}
```
## Run Kavita

View File

@ -9,8 +9,10 @@ visible: true
## Create directories
`# mkdir -p /var/komga/{config,content}`
`# mkdir -p /var/komga/content/{manga,books,tech}`
```sh
mkdir -p /var/komga/{config,content}
mkdir -p /var/komga/content/{manga,books,tech}
```
## Run Komga

View File

@ -4,18 +4,26 @@ visible: true
---
[toc]
## Installation
### Debian
`curl -s https://kopia.io/signing-key | sudo apt-key add -`
`echo "deb http://packages.kopia.io/apt/ stable main" | sudo tee /etc/apt/sources.list.d/kopia.list`
`sudo apt update`
`sudo apt install kopia`
```sh
curl -s https://kopia.io/signing-key | sudo apt-key add -
echo "deb http://packages.kopia.io/apt/ stable main" | sudo tee /etc/apt/sources.list.d/kopia.list
sudo apt update
sudo apt install kopia
```
## Connect Repository
To create a new repo, replace "connect" with "create"
### B2
```
# kopia repository connect b2 \
```sh
kopia repository connect b2 \
--bucket=(bucket name) \
--key-id=(api key id) \
--key=(api key)
@ -24,6 +32,7 @@ To create a new repo, replace "connect" with "create"
> [Official Documentation](https://kopia.io/docs/reference/command-line/common/repository-connect-b2/)
## Policy
Get global policy
`# kopia policy get --global`
@ -35,6 +44,7 @@ Change compression
`# kopia policy set --compression zstd-best-compression --global`
## Snapshots
`# kopia snapshot create (path)`
`# kopia snapshot list (path)`

View File

@ -1,33 +1,43 @@
---
title: 'MariaDB Replication'
title: "MariaDB Replication"
visible: true
---
[toc]
## Master Slave Setup
### Master Configuration
The MariaDB server has to be accessible from outside. For Debian, comment out `bind-address=127.0.0.1` in the file `/etc/mysql/mariadb.conf.d/50-server.cnf`.
If you have any firewall enabled, make sure to allow port 3306/TCP.
Add this segment at the end of `/etc/mysql/my.cnf`
```
```ini
[mariadb]
log-bin
server_id=1
log-basename=master
binlog-format=mixed
```
**Restart mariadb** now
Create a replication user
```
```sql
CREATE USER 'replication'@'%' IDENTIFIED BY '<password>';
GRANT REPLICATION SLAVE ON *.* TO 'replication'@'%';
```
Next we have to get the data necessary so the slave knows where to start replicating.
`FLUSH TABLES WITH READ LOCK;`
`SHOW MASTER STATUS;`
```sql
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
```
**Do not close this session, keep it running until you have made the backup from the next step**
`# mysqldump -u root -p (db name) > db_name.sql`
@ -35,9 +45,11 @@ You can unlock the database again.
`UNLOCK TABLES;`
### Slave Configuration
Edit your `/etc/mysql/my.cnf` file
Make sure to choose different IDs for every host
```
```ini
[mysqld]
server-id = 2
```
@ -46,7 +58,8 @@ Create the database and restore the sql dumps made earlier.
`# mysql -u root -p (db name) < db_name.sql`
Set the database master now
```
```sql
CHANGE MASTER TO
MASTER_HOST='<domain>',
MASTER_USER='replication',
@ -64,6 +77,7 @@ And check the status
`SHOW SLAVE STATUS \G`
If both of the following options say yes, everything is working as intended
```
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

View File

@ -24,8 +24,11 @@ Install java
`# apt install openjdk-17-jre`
Add a `minecraft` user.
`# useradd minecraft`
`# chown minecraft:minecraft -R /etc/minecraft/`
```sh
useradd minecraft
chown minecraft:minecraft -R /etc/minecraft/
```
Start the server for the first time.
`sudo -u minecraft /etc/minecraft/forge-(version)/run.sh`
@ -40,7 +43,7 @@ Accept the EULA by editing `/etc/minecraft/forge-(version)/eula.txt`
`/etc/systemd/system/minecraft.service`
```
```systemd
[Unit]
Description=Minecraft Server
After=network.target
@ -73,7 +76,7 @@ WantedBy=multi-user.target
`/etc/systemd/system/minecraft.socket`
```
```systemd
[Unit]
PartOf=minecraft.service
@ -83,7 +86,7 @@ ListenFIFO=%t/minecraft.stdin
`/etc/systemd/system/minecraft.service`
```
```systemd
[Unit]
Description=Minecraft Server
After=network.target
@ -114,7 +117,7 @@ To run commands, redirect commands into your socket.
**No safety at all!!**
```
```sh
#!/usr/bin/env bash
echo "$@" > /run/minecraft.stdin
```

View File

@ -6,13 +6,16 @@ media_order: content-encoding-type.png
[toc]
Interesting options, configurations and information about nginx.
## Compression
*NOTE: The most reliable way to check whether content is compressed, is by using the debug tools in the webbrowser. Look for the "content-encoding" header*
_NOTE: The most reliable way to check whether content is compressed is to use the debug tools in the web browser. Look for the "content-encoding" header_
![Picture shows parts of the response headers in the network tab of the firefox debug tool](content-encoding-type.png)
These are the settings used by this website to compress with gzip.
These will suffice for most websites.
```
```nginx
# Compression
gzip on;
gzip_vary on;
@ -24,4 +27,5 @@ These will suffice for most websites.
> All configuration options can be found in the [official documentation](https://nginx.org/en/docs/http/ngx_http_gzip_module.html)
## Website Performance
> Google's [PageSpeed Insights](https://pagespeed.web.dev/) tool can be used to measure website performance.

View File

@ -4,10 +4,13 @@ visibility: false
---
[toc]
## Application
*NOTE: Openproject does not provide a default "latest" tag. Specifying the tag is required!*
```
$ podman run -p 8080:80 --name openproject \
_NOTE: Openproject does not provide a default "latest" tag. Specifying the tag is required!_
```sh
podman run -p 8080:80 --name openproject \
-e OPENPROJECT_HOST__NAME=openproject.exu.li \
-e OPENPROJECT_SECRET_KEY_BASE=<secret> \
-v /mnt/openproject/pgdata:/var/openproject/pgdata \

View File

@ -4,6 +4,7 @@ visible: true
---
[toc]
- [CPU](./cpu)
- [GPU](./gpu)
- [RAM](./ram)

View File

@ -4,12 +4,16 @@ visible: true
---
[toc]
## Monitoring
### Sensors
The `lm_sensors` package shows temperatures, fan PWM and other sensors for your CPU, GPU and motherboard.
Run `$ sensors` to get the output.
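Illustrative output for an AMD CPU sensor (names and values vary by hardware):
```
k10temp-pci-00c3
Adapter: PCI adapter
Tctl:         +43.5°C
Tdie:         +43.5°C
```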
#### Support for motherboard ITE LPC chips
Support for this type of chip does not come built into `lm_sensors`.
In the AUR the package `it87-dkms-git` provides a kernel module with support for a variety of ITE chips. It pulls from [this](https://github.com/frankcrawford/it87) git repo. You can find a list of supported chips there. See [this issue on lm_sensors git repo](https://github.com/lm-sensors/lm-sensors/issues/134) for background info.
@ -17,6 +21,7 @@ The kernel driver can be automatically loaded on boot by putting `it87` into `/e
The option `acpi_enforce_resources=lax` also needs to be added to `GRUB_CMDLINE_LINUX_DEFAULT` in `/etc/default/grub` or your bootloader equivalent.
### CoreFreq
[CoreFreq](https://github.com/cyring/CoreFreq) can display a lot of information about the CPU and the memory controller.
To run, the systemd service `corefreqd` needs to be enabled.
@ -29,21 +34,30 @@ A few interesting views:
`Shift + M` shows the memory timings, frequency and DIMM layout.
### Zenmonitor
[Zenmonitor](https://github.com/ocerman/zenmonitor) is, as the name suggests, monitoring software specifically for AMD Zen CPUs.
### CoreCtrl
CoreCtrl displays a range of information for AMD GPUs.
### Error monitoring
Some applications have hardware error reporting built-in.
#### Kernel log
For others, try checking the kernel log.
`$ journalctl -k --grep=mce`
#### Rasdaemon
You can also install `aur/rasdaemon` and enable its two services.
`# systemctl enable --now ras-mc-ctl.service`
`# systemctl enable --now rasdaemon.service`
```sh
systemctl enable --now ras-mc-ctl.service
systemctl enable --now rasdaemon.service
```
`$ ras-mc-ctl --summary` shows all historic errors
`$ ras-mc-ctl --error-count` shows memory errors of the current session

View File

@ -4,12 +4,19 @@ visible: true
---
[toc]
## Overclocking
*I'm unaware of any platform supporting online-editing of RAM timings*
_I'm unaware of any platform supporting online-editing of RAM timings_
## Testing
> [More Testing Tools can be found on the ArchWiki](https://wiki.archlinux.org/title/Stress_testing?useskinversion=1)
#### Stressapptest
**NOTE**: Produces heavy load on the CPU as well. A stable CPU OC before running this is recommended.
`$ stressapptest -M (RAM MiB) -s (time in s) -m (CPU threads)`
```sh
stressapptest -M (RAM MiB) -s (time in s) -m (CPU threads)
```

View File

@ -4,45 +4,60 @@ visible: true
---
[toc]
## Generate systemd service
Create a container the normal way
Using this container as a reference, you can generate a systemd service file
`# podman generate systemd --new --name --files (container)`
```sh
podman generate systemd --new --name --files (container)
```
Remove your old container
`# podman container rm (container)`
`# cp container-(container).service /etc/systemd/system/`
```sh
podman container rm (container)
cp container-(container).service /etc/systemd/system/
systemctl daemon-reload
systemctl enable --now container-(container)
```
`# systemctl daemon-reload`
`# systemctl enable --now container-(container)`
The container should now be running just as before
## Auto-Update container
The command to update containers configured for auto-update is `# podman auto-update`
Add `--label "io.containers.autoupdate=image"` to the `ExecStart=/usr/bin/podman ...` line in the service file you generated
Make sure to use, for example, `docker.io/` instead of `docker://` as the source of the image
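For illustration, the relevant line might end up looking like this (container name and image are placeholders; the generated file carries more flags):
```systemd
ExecStart=/usr/bin/podman run --name (container) --label "io.containers.autoupdate=image" -d docker.io/(owner)/(image):latest
```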
Reload and restart
`# systemctl daemon-reload`
`# systemctl enable --now container-(container)`
```sh
systemctl daemon-reload
systemctl enable --now container-(container)
```
If you want to manually run updates for the configured containers, use this command:
`# podman auto-update`
### Auto-Update timer
To truly automate your updates, enable the included timer
`# systemctl enable --now podman-auto-update.timer`
### Check update log
The update logs are kept in the `podman-auto-update` service
`$ journalctl -eu podman-auto-update`
## Prune images service and timer
`/etc/systemd/system/podman-image-prune.service`
```
```systemd
[Unit]
Description=Podman image-prune service
@ -55,7 +70,8 @@ WantedBy=multi-user.target
```
`/etc/systemd/system/podman-image-prune.timer`
```
```systemd
[Unit]
Description=Podman image-prune timer
@ -67,7 +83,9 @@ Persistent=true
WantedBy=timers.target
```
`# systemctl daemon-reload`
`# systemctl enable --now podman-image-prune.timer`
```sh
systemctl daemon-reload
systemctl enable --now podman-image-prune.timer
```
> [Documentation](https://docs.podman.io/en/latest/markdown/podman-image-prune.1.html)

View File

@ -5,24 +5,30 @@ media_order: powerdns-admin-api-settings.png
---
[toc]
## Installation
For the authoritative server, install this package
`# apt install pdns-server`
This is the PowerDNS resolver package
`# apt install pdns-recursor`
### Different Backends can be installed on Debian
MySQL backend
`# apt install pdns-backend-mysql mariadb-server`
## Configuration Authoritative Server
Set the backend you chose in the `launch=` option of PowerDNS' configuration file.
The config can be found under `/etc/powerdns/pdns.conf`
For MySQL I chose `launch=gmysql`
> A [list of backends can be found here](https://doc.powerdns.com/authoritative/backends/index.html)
Add the following parameters below `launch=gmysql`
```
gmysql-host=127.0.0.1
gmysql-socket=/run/mysqld/mysqld.sock
@ -34,18 +40,28 @@ gmysql-dnssec=yes
```
Prepare database
`# mariadb -u root -p`
`CREATE DATABASE pdns;`
```sh
mariadb -u root -p
```
`GRANT ALL ON pdns.* TO 'pdns'@'localhost' IDENTIFIED BY '<password>';`
```sql
CREATE DATABASE pdns;
GRANT ALL ON pdns.* TO 'pdns'@'localhost' IDENTIFIED BY '<password>';
```
Import the schema utilised by PowerDNS. This can be done with the user you just created
`$ mysql -u pdns -p pdns < /usr/share/doc/pdns-backend-mysql/schema.mysql.sql`
`# systemctl restart pdns`
```sh
mysql -u pdns -p pdns < /usr/share/doc/pdns-backend-mysql/schema.mysql.sql
```
```sh
systemctl restart pdns
```
### Zones
Create Zone and add a name server
`# pdnsutil create-zone (domain) ns1.(domain)`
@ -54,6 +70,7 @@ Add "A"-Record. **Mind the (.) after the domain**
`# pdnsutil add-record (domain). (name) A (ip address)`
### Dynamic DNS
`# apt install bind9utils`
Generate key
@ -76,7 +93,9 @@ And for reverse-zone
You also have to configure the DHCP server to provide updates, see [the DHCP article](https://wiki.realstickman.net/en/linux/services/dhcp-server)
#### Testing with nsupdate
`# nsupdate -k Kdhcpdupdate.+157+12673.key`
```
> server 127.0.0.1 5300
> zone testpdns
@ -85,6 +104,7 @@ You also have to configure the DHCP server to provide updates, see [the DHCP art
```
## Configuration Recursive Resolver
The config file can be found under `/etc/powerdns/recursor.conf`
In `/etc/powerdns/pdns.conf` set `local-address=127.0.0.1` and `local-port=5300` to allow the recursor to run on port 53
In `/etc/powerdns/recursor.conf` set `forward-zones=(domain)=127.0.0.1:5300` to forward queries for that domain to the authoritative DNS
@ -92,15 +112,19 @@ Also set `local-address` and `allow-from`
To bind to all interfaces, use `local-address=::,0.0.0.0`
### Wipe Cache
`# rec_control wipe-cache $`
## DNSSEC
### Authoritative Server
> *TODO*
> _TODO_
> https://doc.powerdns.com/authoritative/dnssec/index.html
### Recursor Server
To fully enable DNSSEC, set `dnssec=process-no-validate` to `dnssec=validate`
To allow a domain without DNSSEC, modify `/etc/powerdns/recursor.lua`
@ -112,11 +136,16 @@ Show domains with disabled DNSSEC
> [DNSSEC Testing](https://wiki.debian.org/DNSSEC#Test_DNSSEC)
## WebGUI
### PowerDNS-Admin
`# mkdir /etc/pda-data`
`# chmod 777 -R /etc/pda-data`
```sh
mkdir /etc/pda-data
chmod 777 -R /etc/pda-data
```
# podman run -d \
```sh
podman run -d \
--name powerdns-admin \
-e SECRET_KEY='q5dNwUVzbdn6gc7of6DvO0syIhTHVq1t' \
-v /etc/pda-data:/data \
@ -125,7 +154,9 @@ Show domains with disabled DNSSEC
```
#### Enabling API
A few settings in `/etc/powerdns/pdns.conf` need to be changed.
```
api=yes
api-key=(random key)
@ -138,8 +169,10 @@ Following this, the API access can be configured in the webgui
Now you should see all your configured Domains and be able to modify records
#### Systemd Service
`/etc/systemd/system/powerdns-admin.service`
```
```systemd
[Unit]
Description=Powerdns Admin Podman container
[Service]
@ -150,5 +183,7 @@ ExecStop=/usr/bin/podman stop -t 10 powerdns-admin
WantedBy=multi-user.target
```
`# systemctl daemon-reload`
`# systemctl enable --now powerdns-admin`
```sh
systemctl daemon-reload
systemctl enable --now powerdns-admin
```

View File

@ -3,12 +3,14 @@ title: Prowlarr
visible: false
---
*NOTE: This application is still in beta. No stable release is available*
_NOTE: This application is still in beta. No stable release is available_
## Application
`lscr.io/linuxserver/prowlarr:develop`
```
# podman run -d \
```sh
podman run -d \
--name=prowlarr \
-p 9696:9696 \
-v /mnt/prowlarr/config:/config \

View File

@ -1,14 +1,17 @@
---
title: 'SSH Agent'
title: "SSH Agent"
visible: true
---
[toc]
Autostarting an ssh-agent service
## Systemd Service
A local service works for this. For example `~/.config/systemd/user/ssh-agent.service`
```
```systemd
[Unit]
Description=SSH key agent
@ -25,11 +28,14 @@ Enable the systemd service
`systemctl --user enable --now ssh-agent`
## Shell environment variable
The shell needs to know about the ssh-agent. In the case of fish, add this snippet to your config.
`set SSH_AUTH_SOCK /run/user/1000/ssh-agent.socket; export SSH_AUTH_SOCK`
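For bash or zsh, the equivalent would be something like this (uid 1000 assumed):
```sh
# point ssh clients at the user ssh-agent socket (bash/zsh; uid 1000 assumed)
export SSH_AUTH_SOCK=/run/user/1000/ssh-agent.socket
echo "$SSH_AUTH_SOCK"
```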
## SSH config
Modify the `~/.ssh/config` to add new keys automatically.
```
AddKeysToAgent yes
```

View File

@ -4,15 +4,18 @@ visible: true
---
[toc]
## Server
```
# podman run -d --name step-ca \
```sh
podman run -d --name step-ca \
-v step:/home/step \
-p 9000:9000 \
-e "DOCKER_STEPCA_INIT_NAME=Demiurge" \
-e "DOCKER_STEPCA_INIT_DNS_NAMES=(hostname),(hostname2)" \
docker.io/smallstep/step-ca
```
Get the root ca fingerprint
`# podman run -v step:/home/step smallstep/step-ca step certificate fingerprint certs/root_ca.crt`
@ -20,21 +23,25 @@ To view your ca password, run this command
`# podman run -v step:/home/step smallstep/step-ca cat secrets/password`
### ACME Server
Enable ACME. Restart the server afterwards.
`$ step ca provisioner add acme --type ACME`
## Client
Initialize the step-cli client
`step-cli ca bootstrap --ca-url https://(domain/ip):9000 --fingerprint (root_ca fingerprint)`
## Create Certificates
> [Official documentation](https://smallstep.com/docs/step-cli/basic-crypto-operations)
Enter the container
`# podman exec -it step-ca bash`
### Client Certificate
```
```sh
step certificate create (cert name) client-certs/(cert name).crt client-certs/(cert name).key \
--profile leaf --not-after=8760h \
--ca certs/intermediate_ca.crt \
@ -45,10 +52,13 @@ step certificate create (cert name) client-certs/(cert name).crt client-certs/(c
Add SANs with the `--san=`-flag. Add multiple flags for multiple SANs.
### ACME
Point your ACME client to `https://(domain/ip):9000/acme/(provisioner-name)/directory`
## Device Truststore
### Arch Linux
> [Archwiki Article on TLS](https://wiki.archlinux.org/title/Transport_Layer_Security#Add_a_certificate_to_a_trust_store)
Add new trust anchor

View File

@ -1,5 +1,5 @@
---
title: 'Systemd Automount'
title: "Systemd Automount"
visible: true
---
@ -8,11 +8,13 @@ visible: true
Systemd can be used to mount filesystems not only on boot (simple `.mount` file), but also on request by any process. (`.automount` file)
## Mount file
The `.mount` file should be placed in `/etc/systemd/system`
**NOTE: The filename must be (mountpoint).mount with slashes `/` being replaced with dashes `-`**
Example: `/mnt/target` --> `mnt-target.mount`
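The escaping rule can be sketched in plain shell (on a system with systemd, `systemd-escape -p --suffix=mount /mnt/target` produces the proper name, including escapes this sketch skips):
```sh
# strip the leading slash, then turn the remaining slashes into dashes
unit="$(echo /mnt/target | sed 's|^/||; s|/|-|g').mount"
echo "$unit"   # → mnt-target.mount
```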
Here's an example `.mount` file for a CIFS share
```systemd
[Unit]
Description=cifs mount
@ -28,10 +30,11 @@ WantedBy=multi-user.target
```
## Automount file
The corresponding `.automount` file needs to have the same name as its `.mount` file
Example: `mnt-target.mount` and `mnt-target.automount`
```
```systemd
[Unit]
Description=cifs automount
@ -44,4 +47,3 @@ WantedBy=multi-user.target
Enable the `.automount` file to mount the filesystem when necessary
`# systemctl enable (target-mount).automount`

View File

@ -4,17 +4,22 @@ visible: true
---
[toc]
## Installation
`# apt install unattended-upgrades`
## Configuration
**NOTE: This configuration is tailored to my personal preferences. Feel free to do something else if you don't want what I'm doing**
### Enable automatic reboots
If necessary, the server will automatically reboot.
An example would be kernel updates.
Edit `/etc/apt/apt.conf.d/50unattended-upgrades`
```
...
Unattended-Upgrade::Automatic-Reboot "true";
@ -22,24 +27,27 @@ Unattended-Upgrade::Automatic-Reboot "true";
```
### Repo update time
Create an override file for `apt-daily.timer` using this command
`$ sudo systemctl edit apt-daily.timer`
Add these lines between the two comments
```
```systemd
[Timer]
OnCalendar=*-*-* 2:00
RandomizedDelaySec=0
```
### Host upgrade time
Create an override file for `apt-daily-upgrade.timer` using this command
`$ sudo systemctl edit apt-daily-upgrade.timer`
Add these lines between the two comments
```
```systemd
[Timer]
OnCalendar=*-*-* 4:00
RandomizedDelaySec=0
```

View File

@ -1,18 +1,21 @@
---
title: 'Users and Groups'
title: "Users and Groups"
visible: true
---
[toc]
## Users
Check users by looking at `/etc/passwd`
### Add users
Basic usage:
`# useradd -m (user)`
Important options:
```
login name -> by default
group -> -G //separate multiple by commas: group1,group2
@ -25,11 +28,13 @@ Example more complicated usage:
`# useradd -m -c "Bruno Huber" -s /bin/bash -G sudo,systemd-journal bruhub`
### Remove user
The command `userdel` can be used to remove users from a system.
Using it with the `-r` flag additionally deletes the user's home directory and mail spool.
`# userdel -r (user)`
### Add user to groups
Add user to more groups:
`# usermod -a -G (group1),(group2) (user)`
@ -37,16 +42,21 @@ Alternative command:
`# gpasswd -a (user) (group)`
### Remove user from group
`# gpasswd -d (user) (group)`
## Groups
Check a user's groups with `id (user)`
### Create group
`# groupadd (group)`
### Rename group
`# groupmod -n (new_group) (old_group)`
### Delete group
`# groupdel (group)`

View File

@ -8,15 +8,18 @@ visible: true
> I'm not using WikiJS anymore. This article might be out of date
`# apt install nginx podman nodejs`
## Preparation
Create a new network for the database and wikijs
`$ podman network create wikijs`
## Database setup
`# podman pull docker://postgres`
```
# podman run -p 127.0.0.1:5432:5432 --name wikijsdb \
```sh
podman run -p 127.0.0.1:5432:5432 --name wikijsdb \
-e POSTGRES_PASSWORD=wikijs \
-e PGDATA=/var/lib/postgresql/data/pgdata \
-v /mnt/postgres/wikijsdb:/var/lib/postgresql/data \
@ -28,20 +31,28 @@ Create a new network for the database and wikijs
`# psql -U postgres`
Create database used by wikijs
`=# CREATE DATABASE wikijs;`
```sql
CREATE DATABASE wikijs;
```
### Systemd Service
Generate the systems service file following the [podman guide](/linux/services/podman)
## Wiki.JS Setup
`$ cd /var`
`# wget https://github.com/Requarks/wiki/releases/download/(version)/wiki-js.tar.gz`
`# mkdir wiki`
`# tar xzf wiki-js.tar.gz -C ./wiki`
`$ cd ./wiki`
```sh
cd /var
wget https://github.com/Requarks/wiki/releases/download/(version)/wiki-js.tar.gz
mkdir wiki
tar xzf wiki-js.tar.gz -C ./wiki
cd ./wiki
```
Move default config
`# mv config.sample.yml config.yml`
```
#######################################################################
# Wiki.js - CONFIGURATION #
@ -175,15 +186,20 @@ dataPath: ./data
```
Don't forget to adjust permissions so the systemd service can run the server
`# useradd -m wiki`
`# chown wiki:wiki -R /var/wiki`
```sh
useradd -m wiki
chown wiki:wiki -R /var/wiki
```
Run server directly:
`$ node server`
## Systemd service
Put this under `/etc/systemd/system/wiki.service`
```
```systemd
[Unit]
Description=Wiki.js
After=network.target
@ -203,12 +219,16 @@ WorkingDirectory=/var/wiki
WantedBy=multi-user.target
```
`# systemctl daemon-reload`
`# systemctl enable --now wiki`
```sh
systemctl daemon-reload
systemctl enable --now wiki
```
## Nginx config
*Replace "IPV4" and "IPV6"*
```
_Replace "IPV4" and "IPV6"_
```nginx
server {
server_name DOMAIN_NAME;
@ -263,12 +283,14 @@ Restart nginx
## Wiki Settings
### Storage with git
Create a home directory for the wiki user if you haven't used "-m" when creating the user.
**Make sure not to have a "/" after the directory you want for your user**
```
# mkdir /home/wiki
# chown wiki:wiki -R /home/wiki
# usermod -d /home/wiki wiki
```sh
mkdir /home/wiki
chown wiki:wiki -R /home/wiki
usermod -d /home/wiki wiki
```
Create ssh key as wiki user
@ -277,30 +299,42 @@ Create ssh key as wiki user
- DB - PostgreSQL used as Search Engine
## Update Wiki
Download and install the latest release with these steps
`# systemctl stop wiki`
`$ cd /var`
`# wget https://github.com/Requarks/wiki/releases/download/(version)/wiki-js.tar.gz`
```sh
systemctl stop wiki
cd /var
wget https://github.com/Requarks/wiki/releases/download/(version)/wiki-js.tar.gz
```
This is to ensure we have a known good version to go back to in case something goes wrong
`# mv wiki wiki-old`
`# mkdir wiki`
`# tar xzf wiki-js.tar.gz -C ./wiki`
`# cp wiki-old/config.yml wiki/`
`# chown wiki:wiki -R /var/wiki`
`# systemctl start wiki`
```sh
mv wiki wiki-old
mkdir wiki
tar xzf wiki-js.tar.gz -C ./wiki
cp wiki-old/config.yml wiki/
chown wiki:wiki -R /var/wiki
systemctl start wiki
```
## Database Backup
`# podman exec (container name) pg_dump (database name) -U (database user) -F c > wikibackup.dump`
## Database Restore
**The wiki has to be installed fully, but not yet configured**
*Also works for transfering wiki from one server to another*
_Also works for transferring a wiki from one server to another_
Stop the database and wiki
Drop the existing database and restore from the database
`# podman exec -it (container name) dropdb -U (database user) (database name)`
`# podman exec -it (container name) createdb -U (database user) (database name)`
`cat ~/wikibackup.dump | docker exec -i (container name) pg_restore -U (database user) -d (database name)`
```sh
podman exec -it (container name) dropdb -U (database user) (database name)
podman exec -it (container name) createdb -U (database user) (database name)
cat ~/wikibackup.dump | podman exec -i (container name) pg_restore -U (database user) -d (database name)
```
Start the database and wiki again

View File

@ -4,12 +4,15 @@ visible: true
---
[toc]
## Installation
`# pacman -S wireguard-tools`
*Enable backports for buster and older*
_Enable backports for buster and older_
`# apt install wireguard`
## Configuration
This command creates a private key and also a matching public key
`$ wg genkey | tee (name).key | wg pubkey > (name).pub`
@ -19,7 +22,8 @@ To activate a wireguard tunnel on boot use the following command
`# systemctl enable --now wg-quick@wg0.service`
### VPN "Server" configuration
*Illustration only, don't share your private keys*
_Illustration only, don't share your private keys_
Private key: `oFlgQ3uq4tjgRILDV3Lbqdx0mVZv2VCWWRkhJA3gcX4=`
Public key: `/0LMRaQCx1oMIh+eU/v4T3YQ8gAb/Qf7ulYl0zzFAkQ=`
@ -33,6 +37,7 @@ SystemD only loads settings specified in the `/etc/sysctl.d/` directory
Note how the first peer has two allowed IPs.
`/etc/wireguard/wg0.conf`
```
[Interface]
Address = 172.16.1.10/24
@ -56,7 +61,8 @@ AllowedIPs = 172.16.1.200/32
```
`/etc/wireguard/wg0-postup.sh`
```
```sh
WIREGUARD_INTERFACE=wg0
WIREGUARD_LAN=172.16.1.0/24
MASQUERADE_INTERFACE=ens33
@ -88,7 +94,8 @@ iptables -A $CHAIN_NAME -j RETURN
```
`/etc/wireguard/wg0-postdown.sh`
```
```sh
WIREGUARD_INTERFACE=wg0
WIREGUARD_LAN=172.16.1.0/24
MASQUERADE_INTERFACE=ens33
@ -104,12 +111,14 @@ iptables -X $CHAIN_NAME
```
### VPN "Client" configuration
*Illustration only, don't share your private keys*
_Illustration only, don't share your private keys_
Private key: `kAgCeU6l+RWlFxfpnGj19tzEDyYz3I4HuqHkaUmHX1Q=`
Public key: `r+TAbAN1hGh4MaIk/J5I5L3ZSAn+kCo1MJJq5YxHrl0=`
Here we have two different interfaces configured under the same wireguard config
`/etc/wireguard/wg0.conf`
```
[Interface]
Address = 172.16.1.100/24
@ -132,6 +141,6 @@ PersistentKeepalive = 5
```
## Iptables no local access ssh user
> [Block outgoing network access for single user](https://www.cyberciti.biz/tips/block-outgoing-network-access-for-a-single-user-from-my-server-using-iptables.html)
> [Restrict internet access for user](https://unix.stackexchange.com/questions/21650/how-to-restrict-internet-access-for-a-particular-user-on-the-lan-using-iptables)

View File

@ -1,23 +1,30 @@
---
title: 'Woodpecker CI'
title: "Woodpecker CI"
visible: true
---
[toc]
## Podman
### Network and Pod
`# podman network create net_woodpecker`
`# podman pod create --name pod_woodpecker --network net_woodpecker -p 8000:8000 -p 9000:9000`
```sh
podman network create net_woodpecker
podman pod create --name pod_woodpecker --network net_woodpecker -p 8000:8000 -p 9000:9000
```
#### Port Mappings
```
8000: Woodpecker HTTP listener, Configurable with "WOODPECKER_SERVER_ADDR"
9000: Woodpecker gRPC listener, Configurable with "WOODPECKER_GRPC_ADDR"
```
### Database
```
# podman run --name woodpeckerdb \
```sh
podman run --name woodpeckerdb \
-e PGDATA=/var/lib/postgresql/data/pgdata \
-e POSTGRES_USER=woodpecker \
-e POSTGRES_PASSWORD=woodpecker \
@ -28,10 +35,11 @@ visible: true
```
### Application server
> [Official Documentation](https://woodpecker-ci.org/docs/administration/server-config)
```
# podman run --name woodpecker-server -t \
```sh
podman run --name woodpecker-server -t \
-e WOODPECKER_HOST=https://(hostname/ip address) \
-e WOODPECKER_ADMIN=RealStickman \
-e WOODPECKER_OPEN=false \
@ -50,13 +58,16 @@ Generate `WOODPECKER_AGENT_SECRET` with this command:
`$ openssl rand -hex 32`
#### GitHub
*TODO*
_TODO_
#### Gitea
> [Documentation](https://woodpecker-ci.org/docs/administration/vcs/gitea)
Add these environment variables to enable Woodpecker for a gitea server.
```
```sh
-e WOODPECKER_GITEA=true \
-e WOODPECKER_GITEA_URL=https://(gitea url) \
-e WOODPECKER_GITEA_CLIENT='(oauth client id)' \
@ -70,8 +81,10 @@ Therefore I added an override rule for my gitea url in OPNsense (Services > U
> [Reddit post I used as guidance](https://www.reddit.com/r/OPNsenseFirewall/comments/lrmtsz/a_potential_dns_rebind_attack/)
#### GitLab
Add these environment variables to enable GitLab in Woodpecker.
```
```sh
-e WOODPECKER_GITLAB=true \
-e WOODPECKER_GITLAB_URL=https://(gitlab url) \
-e WOODPECKER_GITLAB_CLIENT=(oauth client id) \
@ -79,10 +92,11 @@ Add these environment variables to enable GitLab in Woodpecker.
```
### Application agent
> [Official Documentation](https://woodpecker-ci.org/docs/administration/agent-config)
```
# docker run --name woodpecker-agent -t \
```sh
docker run --name woodpecker-agent -t \
-e WOODPECKER_SERVER=(url/ip):(grpc port) \
-e WOODPECKER_AGENT_SECRET=(shared secret for server and agents) \
-e WOODPECKER_HOSTNAME=(agent hostname, def: empty) \
@ -97,8 +111,8 @@ The Woodpecker agent needs access to the docker socket to spawn new container pr
For now I'll be using docker to run my agents.
Podman has support for using sockets since version 3.4.0.
*TODO: try out socket access once Podman 3.4.0 is on my servers*
*Recommended by Woodpecker is at least Podman 4.0*
_TODO: try out socket access once Podman 3.4.0 is on my servers_
_Recommended by Woodpecker is at least Podman 4.0_
[Podman socket activation](https://github.com/containers/podman/blob/main/docs/tutorials/socket_activation.md)
[Woodpecker note on using Podman](https://github.com/woodpecker-ci/woodpecker/blob/master/docs/docs/30-administration/22-backends/10-docker.md#podman-support)

View File

@ -4,16 +4,20 @@ visible: true
---
[toc]
## Firewall
The firewall configuration can be changed with a preinstalled package.
Call the TUI version with `system-config-firewall-tui`
The only open port will be 22/tcp for SSH Access
## SSH Access
Disable password authentication. See [ssh](/remote/ssh)
## Local ISO Storage
Using ISO Storage on "/" or subdirectories on the same partition is not really viable, as only 18GiB are assigned to this mountpoint by default.
Instead use the local EXT mapper device. This is mounted under `/run/sr-mount/(id)`
Create a new "ISO" directory.

View File

@ -24,8 +24,8 @@ Run `# xo-vm-import.sh` to import that VM.
You need to explicitly allow host loopback for the container, or it won't be able to access the local ssh tunnel we'll create later.
We'll reach the server on 10.0.2.2, using the local port we gave our ssh tunnel.
```
# podman run -itd --name xen-orchestra \
```sh
podman run -itd --name xen-orchestra \
--net slirp4netns:allow_host_loopback=true \
-p 8080:80 \
docker.io/ronivay/xen-orchestra
@ -47,7 +47,7 @@ To start and stop the tunnel automatically a systemd service is used. It is a sp
`/etc/systemd/system/local-tunnel@.service`
```
```systemd
[Unit]
Description=Setup a local tunnel to %I
After=network.target

View File

@ -4,15 +4,19 @@ visible: true
---
[toc]
## Zabbix Server
### Pod
```
# podman pod create --name zabbix -p 127.0.0.1:8080:8080 -p 10051:10051
```sh
podman pod create --name zabbix -p 127.0.0.1:8080:8080 -p 10051:10051
```
### Database
```
# podman run --name zabbix-mysql -t \
```sh
podman run --name zabbix-mysql -t \
-e MYSQL_DATABASE="zabbix" \
-e MYSQL_USER="zabbix" \
-e MYSQL_PASSWORD="zabbix" \
@ -26,10 +30,12 @@ visible: true
```
### Application
Zabbix consists of multiple containers that need to be running.
First is the server itself.
```
# podman run --name zabbix-server -t \
```sh
podman run --name zabbix-server -t \
-e DB_SERVER_HOST="127.0.0.1" \
-e MYSQL_DATABASE="zabbix" \
-e MYSQL_USER="zabbix" \
@ -40,8 +46,9 @@ First is the server itself.
```
Next, we need the webserver
```
# podman run --name zabbix-web -t \
```sh
podman run --name zabbix-web -t \
-e ZBX_SERVER_HOST="127.0.0.1" \
-e DB_SERVER_HOST="127.0.0.1" \
-e MYSQL_DATABASE="zabbix" \
@ -54,8 +61,9 @@ Next, we need the webserver
```
Finally, we will also install the agent as a container
```
# podman run --name zabbix-agent \
```sh
podman run --name zabbix-agent \
-e ZBX_SERVER_HOST="127.0.0.1,localhost" \
--restart=always \
--pod=zabbix \
@ -65,10 +73,12 @@ Finally, we will also install the agent as a container
The default user is `Admin` with password `zabbix`
### Updating Server
Updating the server might fail for various reasons. Those I have already encountered are documented below.
*NOTE: The server and proxy need to run the same version of zabbix to talk with one another*
_NOTE: The server and proxy need to run the same version of zabbix to talk with one another_
#### MARIADB: Missing permissions (log_bin_trust_function_creators)
From what I could find, this error is thrown when the specified user lacks superuser privileges.
A workaround is enabling `log_bin_trust_function_creators` temporarily.
`# podman exec -it zabbix-mysql bash`
@ -78,9 +88,11 @@ A workaround is enabling `log_bin_trust_function_creators` temporarily.
The setting will be reset to default after a restart of the database container.
## Zabbix Proxy
`ZBX_HOSTNAME` has to match the proxy name configured on the zabbix server.
```
# podman run --name zabbix-proxy \
```sh
podman run --name zabbix-proxy \
-p 10051:10051 \
-e ZBX_SERVER_HOST="178.18.243.82" \
-e ZBX_HOSTNAME="he1prx1" \
@ -93,15 +105,17 @@ The setting will be reset to default after a restart of the database container.
```
### Updating Proxy
Updating the proxy will always fail when using the SQLite database, as upgrading is not supported for SQLite.
*NOTE: The server and proxy need to run the same version of zabbix to talk with one another*
_NOTE: The server and proxy need to run the same version of zabbix to talk with one another_
Simply deleting/moving the old SQLite database and restarting the proxy is enough.
*NOTE: History stored on the proxy will obviously be lost*
_NOTE: History stored on the proxy will obviously be lost_
## Zabbix Agent
```
# podman run --name zabbix-agent \
```sh
podman run --name zabbix-agent \
-p 10050:10050 \
-e ZBX_HOSTNAME="(hostname)" \
-e ZBX_SERVER_HOST="(zabbix server/proxy)" \
@ -109,12 +123,14 @@ Simply deleting/moving the old SQLite database and restarting the proxy is enoug
```
### XCP-ng
Use the zabbix package from EPEL.
Zabbix server can handle the older agent fine [See the Documentation on Compatibility](https://www.zabbix.com/documentation/current/en/manual/appendix/compatibility)
`# yum install zabbix50-agent --enablerepo=epel`
Edit `/etc/zabbix_agentd.conf`
*haven't managed to make encryption work yet*
_haven't managed to make encryption work yet_
```
Server=(Zabbix server ip)
ServerActive=(Zabbix server ip)
@ -129,12 +145,15 @@ Create the .psk file. Set the user and group to `zabbix`
Allow 10050/TCP on the firewall
*nope*
_nope_
`# yum install openssl11 --enablerepo=epel`
## TODO
### Encryption with PSK
> [Official Documentation](https://www.zabbix.com/documentation/6.0/en/manual/encryption/using_pre_shared_keys)
### Force refresh Proxy
> [Zabbix Forum Post](https://www.zabbix.com/forum/zabbix-troubleshooting-and-problems/363196-cannot-send-list-of-active-checks-to-ip-address-host-ip-address-hostnames-match?p=363205#post363205)

View File

@ -1,9 +1,10 @@
---
title: 'Non-Standard Shell'
title: "Non-Standard Shell"
visible: true
---
[toc]
When trying to use a non-standard shell, `chsh` will throw the following error:
`chsh: /usr/local/bin/zsh: non-standard shell`

View File

@ -4,16 +4,19 @@ visible: true
---
[toc]
## Returning exit status
`exit 1`
Code | Meaning
--- | ---
0 | Success
1 | Error
| Code | Meaning |
| ---- | ------- |
| 0 | Success |
| 1 | Error |
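A minimal sketch of the convention, using a hypothetical `is_even` helper that reports its result through the exit status instead of printing it:

```sh
# return 0 (success) or 1 (error) instead of echoing a result
is_even() {
    if [ $(( $1 % 2 )) -eq 0 ]; then
        return 0
    else
        return 1
    fi
}

is_even 4; echo "4 -> $?"   # 4 -> 0
is_even 7; echo "7 -> $?"   # 7 -> 1
```

`$?` always holds the status of the most recent command, so it has to be read immediately.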
## Check for Arguments given
```
```sh
if [ $# -eq 0 ]; then
echo "Please supply one argument"
$(exit 1); echo "$?"
@ -22,15 +25,19 @@ elif [ $# -ge 2 ]; then
$(exit 1); echo "$?"
fi
```
## Multiline output
```
```sh
cat << EOF
Line 1
Line 2
Line 3
EOF
```
Will output:
```
Line 1
Line 2

View File

@ -4,13 +4,17 @@ visible: true
---
[toc]
## Utils
`# pacman -S btrfs-progs`
## Fstab example
`UUID=2dc70a6e-b4cf-4d94-b326-0ba9f886cf49 /mnt/tmp btrfs defaults,noatime,compress-force=zstd,space_cache=v2,subvol=@ 0 0`
Options:
```
defaults -> Use whatever defaults
noatime -> Reading access to a file is not recorded
@ -20,6 +24,7 @@ subvol -> Subvolume to mount
```
## Filesystem usage
Show storage allocated, used and free
`# btrfs fi usage (mountpoint)`
@ -37,16 +42,19 @@ Check status of rebalance
`# btrfs balance status -v (mountpoint)`
## Disable CoW
Disable copy on write for folders (Only works on new files)
`$ chattr +C (path)`
## Device errors
Error counts for a given mountpoint
`# btrfs dev stat (mountpoint)`
## Compression
### Algorithms
```
zlib: Slow, but strong compression, level 1-9
lzo : Fastest, weak compression
@ -55,7 +63,7 @@ zstd: [Recommended] Medium, newer compression standard than the others, only wor
Enable compression for existing files
`# btrfs filesystem defragment -r -v -c(alg) (path)`
*It is impossible to specify the level of compression wanted.*
_It is impossible to specify the level of compression wanted._
Add `compress=(alg)` to `/etc/fstab`
@ -63,6 +71,7 @@ To specify a level of compression (zlib and zstd) use `compress=(alg):(level)` i
For zstd compression it is recommended to use `compress-force=zstd:(level)`
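As an illustration, a hypothetical fstab entry forcing zstd level 3 (UUID and mountpoint are placeholders):

```
UUID=(uuid) /mnt/data btrfs defaults,noatime,compress-force=zstd:3,space_cache=v2,subvol=@ 0 0
```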
## Subvolumes
List
`# btrfs subv list (path)`
@ -73,36 +82,43 @@ Mount a subvolume
`# mount -o subvol=@(subvolname) /dev/sdXX /(mountpoint)`
## Snapshots
TODO
## RAID
An array can be mounted by specifying one of its members.
`# mount /dev/sdXX /mnt`
All members of an array have the same UUID, which can be mounted through fstab.
### RAID 1
On filesystem creation
`# mkfs.btrfs -m raid1 -d raid1 /dev/sdXX /dev/sdYY`
### RAID 5
On filesystem creation
*It is recommended not to use raid5/6 for metadata yet*
_It is recommended not to use raid5/6 for metadata yet_
`# mkfs.btrfs -m raid1 -d raid5 /dev/sdXX /dev/sdYY /dev/sdZZ`
### RAID 10
On filesystem creation
`# mkfs.btrfs -m raid10 -d raid10 /dev/sdXX /dev/sdYY /dev/sdZZ /dev/sdQQ`
### Convert to single device
First, the files have to be collected on one device.
*DUP on system and metadata should only be used on HDDs. Use single on SSDs*
_DUP on system and metadata should only be used on HDDs. Use single on SSDs_
`# btrfs balance start -f -sconvert=dup,devid=(id) -mconvert=dup,devid=(id) -dconvert=single,devid=(id) /(mountpoint)`
Now unused devices can be removed
`# btrfs device delete /dev/sdYY /(mountpoint)`
### Replace dying/dead device in RAID array
Show arrays that are available
`btrfs fi show`
@ -124,8 +140,9 @@ Balance the filesystem at the end
## Issues
### 100% CPU Usage
`btrfs-transaction` and `btrfs-cleaner` will run on a single cpu core, maxing it out with 100% load.
*TODO: Check what enabled quotas in the first place. A likely candidate is snapper*
_TODO: Check what enabled quotas in the first place. A likely candidate is snapper_
The issue is apparently caused by using quotas in btrfs.
Check if quotas are enabled:
`# btrfs qgroup show (path)`

View File

@ -1,14 +1,18 @@
---
title: 'Doom Emacs'
title: "Doom Emacs"
visible: true
---
[toc]
## Keybindings
### Minimap
`SPC t m`
### Dired
Provides directory view
Create new directory within the current directory
@ -24,14 +28,17 @@ Unselect
`u`
### Treemacs
Toggle view of directory structure of the current project on the side.
`SPC o p`
### Term
Open terminal
`SPC o t`
### Window management
Open window right of current window
`SPC w v`
@ -42,6 +49,7 @@ Move to other windows
`SPC h/j/k/l`
### Buffers
Open recent buffers within the same project
`SPC b b`
`SPC ,`
@ -56,6 +64,7 @@ Save buffer
`SPC b s`
### Quickly move to start/end of a document
Start of document
`gg`
@ -63,6 +72,7 @@ End of document
`G`
### Evil Snipe
Move to next occurrence of one letter
`f (letter)`
@ -75,45 +85,53 @@ Move to previous occurrence of one letter
`s (letter)` or `S (letter)` for occurrences of two letters
### Indent selection
Press `CTRL x` followed by `TAB` and use h/l to indent text
### SSH Editing
`SPC f f`
Enter `/ssh:`
Press `TAB` to show available options
Enter new options with the following syntax: `/ssh:root@albedo.realstickman.net:/`
#### Privilege elevation
Execute sudo after establishing the connection
`/ssh:nonroot@albedo.realstickman.net|sudo:nonroot@albedo.realstickman.net:/`
## Windows installation
### git
Go to the [git homepage](https://git-scm.com/) and install it.
### emacs
Go to the [emacs homepage](https://www.gnu.org/software/emacs/) and install it.
Add the `(location)\emacs\x86_64\bin` directory to your PATH in the environment variables.
#### Shortcut
Create a shortcut to `(location)\emacs\x86_64\bin\runemacs.exe`
Edit the shortcut to execute in your home directory `C:\Users\(user)`
### HOME
Add the path to your home to the environment variables.
New variable -> HOME -> `C:\Users\(user)`
### doom-emacs
Open git bash
```bash
```sh
git clone --depth 1 https://github.com/hlissner/doom-emacs ~/.emacs.d
```
```bash
~/.emacs.d/bin/doom install
```
Add `C:\Users\(user)\.emacs.d\bin` to your PATH.
*Currently doesn't show emotes*
*Missing ripgrep and fd*
_Currently doesn't show emotes_
_Missing ripgrep and fd_

View File

@ -4,22 +4,27 @@ visible: true
---
[toc]
## List supported codecs and formats
`$ ffmpeg -codecs`
`$ ffmpeg -formats`
## Video Encoding
### H.264
> [H.264 Encoding Guide](https://trac.ffmpeg.org/wiki/Encode/H.264)
### AV1
> [AV1 Encoding Guide](https://trac.ffmpeg.org/wiki/Encode/AV1)
#### libaom
```
$ ffmpeg -i "/mnt/storage/MediaLibrary/input/Joker/test.mkv" -metadata title="Joker" -disposition 0 \
```sh
ffmpeg -i "/mnt/storage/MediaLibrary/input/Joker/test.mkv" -metadata title="Joker" -disposition 0 \
-c:v libaom-av1 -crf 23 -b:v 0 -cpu-used 6 -row-mt 1 -map 0:v:0 -metadata:s:v:0 title="Video" \
-c:a libopus -b:a 768k -ac:a 8 -map 0:a:0 -map 0:a:3 -metadata:s:a:0 title="English [7.1ch]" -metadata:s:a:0 language=eng -metadata:s:a:1 title="German [7.1ch]" -metadata:s:a:1 language=ger -disposition:a:0 default \
-c:s copy -map 0:s:0 -map 0:s:1 -metadata:s:s:0 title="English [PGS]" -metadata:s:s:0 language=eng -metadata:s:s:1 title="German [PGS]" -metadata:s:s:1 language=ger -disposition:s:0 default \
@ -27,12 +32,14 @@ $ ffmpeg -i "/mnt/storage/MediaLibrary/input/Joker/test.mkv" -metadata title="Jo
```
Additional settings for increased speed and cpu usage:
```
-g 239: keyframes every ~10s (fps * 10)
-tiles 2x2: multiple parallel encoding tiles to speed up performance (4 in total here)
```
```
$ ffmpeg -i "/mnt/storage/MediaLibrary/input/Joker/test.mkv" -metadata title="Joker" -disposition 0 \
```sh
ffmpeg -i "/mnt/storage/MediaLibrary/input/Joker/test.mkv" -metadata title="Joker" -disposition 0 \
-c:v libaom-av1 -crf 23 -b:v 0 -cpu-used 6 -row-mt 1 -g 239 -tiles 2x2 -map 0:v:0 -metadata:s:v:0 title="Video" \
-c:a libopus -b:a 768k -ac:a 8 -map 0:a:0 -map 0:a:3 -metadata:s:a:0 title="English [7.1ch]" -metadata:s:a:0 language=eng -metadata:s:a:1 title="German [7.1ch]" -metadata:s:a:1 language=ger -disposition:a:0 default \
-c:s copy -map 0:s:0 -map 0:s:1 -metadata:s:s:0 title="English [PGS]" -metadata:s:s:0 language=eng -metadata:s:s:1 title="German [PGS]" -metadata:s:s:1 language=ger -disposition:s:0 default \
@ -40,8 +47,9 @@ $ ffmpeg -i "/mnt/storage/MediaLibrary/input/Joker/test.mkv" -metadata title="Jo
```
#### SVT-AV1
```
$ ffmpeg -i "/mnt/storage/MediaLibrary/input/Joker/test.mkv" -metadata title="Joker" -disposition 0 \
```sh
ffmpeg -i "/mnt/storage/MediaLibrary/input/Joker/test.mkv" -metadata title="Joker" -disposition 0 \
-c:v libsvtav1 -crf 23 -preset 8 -g 239 -map 0:v:0 -metadata:s:v:0 title="Video" \
-c:a libopus -b:a 768k -ac:a 8 -map 0:a:0 -map 0:a:3 -metadata:s:a:0 title="English [7.1ch]" -metadata:s:a:0 language=eng -metadata:s:a:1 title="German [7.1ch]" -metadata:s:a:1 language=ger -disposition:a:0 default \
-c:s copy -map 0:s:0 -map 0:s:1 -metadata:s:s:0 title="English [PGS]" -metadata:s:s:0 language=eng -metadata:s:s:1 title="German [PGS]" -metadata:s:s:1 language=ger -disposition:s:0 default \
@ -49,11 +57,17 @@ $ ffmpeg -i "/mnt/storage/MediaLibrary/input/Joker/test.mkv" -metadata title="Jo
```
## Audio Encoding
> [High Quality Audio Encoding Guide](https://trac.ffmpeg.org/wiki/Encode/HighQualityAudio)
## Video Quality
### VMAF
> [A practical guide for VMAF](https://medium.com/a-practical-guide-for-vmaf-481b4d420d9c)
*Note: The order of the input videos is important. Make sure to place the distorted video first*
`$ ffmpeg -i (distorted) -i (original) -filter_complex libvmaf -f null -`
_Note: The order of the input videos is important. Make sure to place the distorted video first_
```sh
ffmpeg -i (distorted) -i (original) -filter_complex libvmaf -f null -
```

View File

@ -4,10 +4,14 @@ visible: true
---
[toc]
## For loop
### Iterating over number sequence
`for i in (seq 1 10); echo $i; end`
Output:
```
1
2
@ -24,6 +28,7 @@ Output:
If you want all numbers to be padded to equal lengths use the `-w` flag with `seq`
`for i in (seq -w 1 10); echo $i; end`
Output:
```
01
02

View File

@ -4,15 +4,21 @@ visible: true
---
[toc]
## Reset everything to selected branch
Useful for getting to the same state as upstream
`git reset --hard (upstream)/(branch)`
`git pull (upstream) (branch)`
```sh
git reset --hard (upstream)/(branch)
git pull (upstream) (branch)
```
Finally force push all of this into your own repo
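A self-contained illustration of the reset with throwaway local repositories (all names and paths here are made up for the demo):

```sh
set -e
tmp=$(mktemp -d) && cd "$tmp"

# create a fake "upstream" with one commit
git init -q -b main upstream
git -C upstream -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "upstream state"

# clone it and diverge locally with an extra commit
git clone -q upstream local
cd local
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "local divergence"

# throw the local commit away and match upstream exactly
git fetch -q origin
git reset --hard origin/main
```

After this, a force push to your own remote would publish the reset state.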
## Get Pull Request from foreign repo
*Example with neofetch*
_Example with neofetch_
Add remote if you haven't already done that
`git remote add dylanaraps https://github.com/dylanaraps/neofetch.git`

View File

@ -4,13 +4,16 @@ visible: true
---
[toc]
## Linux Server
`# apt install nfs-kernel-server`
Shares can be configured in `/etc/exports`
`(mountpoint) (allowed_ip)(options) (allowed_ip2)(options)`
### Options
```
ro: specifies that the directory may only be mounted as read only
rw: grants both read and write permissions on the directory
@ -21,10 +24,10 @@ sync: this just ensures that the host keeps any changes uploaded to the shared d
async: ignores synchronization checks in favor of increased speed
```
*Example single host:*
_Example single host:_
`/mnt/nfs 192.168.1.123(rw,sync,no_subtree_check)`
*Example whole subnet:*
_Example whole subnet:_
`/mnt/nfs 192.168.1.0/24(rw,sync,no_subtree_check)`
Apply new config by restarting the service.
@ -34,6 +37,7 @@ Show configured shares
`$ cat /var/lib/nfs/etab`
## Linux Client
`# pacman -S nfs-utils`
`# apt install nfs-common`
@ -43,6 +47,7 @@ Mount through terminal
Can also be mounted with fstab
## Windows Client
Search for `Turn Windows features on or off`
Check everything under `Services for NFS` and click "OK"

View File

@ -4,8 +4,10 @@ visible: true
---
[toc]
## Exit on Keyboard Interrupt
```
```python
try:
<put your code here>
except KeyboardInterrupt:

View File

@ -4,19 +4,22 @@ visible: true
---
[toc]
## Linux Server
`# apt install xrdp`
`# systemctl enable xrdp`
Put the desktop environment you want to start in `.xsession`
*Example*
_Example_
`xfce4-session`
`# systemctl restart xrdp`
### Change port
Edit `/etc/xrdp/xrdp.ini`
Change the value of `port` to what you want
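For illustration, the relevant part of the file might look like this (3390 is an arbitrary example port; the section name assumes a stock xrdp.ini):

```
[Globals]
port=3390
```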
@ -24,12 +27,15 @@ Change the value of `port` to what you want
`# systemctl restart xrdp`
## Windows Server
### Windows Server Edition
Go to `Local Server` in the Server manager.
There should be an option called `Remote Desktop`. Click on it and allow remote connections.
If you refresh the view now, `Remote Desktop` should show as enabled.
#### Allow unlimited RDP sessions
Enter `gpedit` in the search bar
Go to `Administrative Templates>Windows Components>Remote Desktop Services>Remote Desktop Session Host>Connections`
@ -41,10 +47,12 @@ Disable `Restrict Remote Desktop Services users to a single Remote Desktop Servi
Reboot the Server
### Windows Pro Edition
Go to `Remote Desktop` in the settings under `System`
#### Change port
*PowerShell as admin*
_PowerShell as admin_
Check the port currently in use:
`Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' -name "PortNumber"`
@ -58,28 +66,34 @@ Firewall exception:
Reboot the PC
## Linux Client
### Installation
Use Remmina as client and install freerdp to get support for RDP.
`# pacman -S remmina freerdp`
### Configuration
Example configuration:
![rdp-linux-client-pic1-example.png](/rdp-linux-client-pic1-example.png)
#### Set different port
![rdp-linux-client-pic2-port.png](/rdp-linux-client-pic2-port.png)
## Windows Client
Enter `Remote Desktop Connection` in Windows search.
The target computer can be specified by IP or name
After clicking on `connect` the user will be asked to insert the username and password.
### Use different port
![rdp-winpro-client-pic1-example-port.png](/rdp-winpro-client-pic1-example-port.png)
## References
- [ArchWiki Remmina](https://wiki.archlinux.org/index.php/Remmina)
- [Azure RDP configuration](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/use-remote-desktop)
- [ArchWiki xrdp](https://wiki.archlinux.org/index.php/Xrdp)

View File

@ -4,13 +4,15 @@ visible: true
---
[toc]
## Linux Server
`sudo apt install samba smbclient`
samba conf backup
`sudo cp /etc/samba/smb.conf /etc/samba/smb.conf_backup`
*Samba users have to exist on the system as well before they are added to samba's user management system.*
_Samba users have to exist on the system as well before they are added to samba's user management system._
Add user to samba and create a password for it
`sudo smbpasswd -a (user)`
@ -18,6 +20,7 @@ Directories can be shared with groups or users.
Make sure to [set the owner and group](/content/linux-other/files.html) for the directories you want to share.
### Sharing with users
```
[sharename]
path = (absolute path)
@ -30,8 +33,10 @@ Make sure to [set the owner and group](/content/linux-other/files.html) for the
```
### Sharing with groups
Make sure to add all users to the group
The "@" signals samba that this is a group
```
[sharename]
path = (absolute path)

View File

@ -1,13 +1,16 @@
---
title: 'Regenerate SSH Keys'
title: "Regenerate SSH Keys"
visible: true
---
[toc]
## Remove from known_hosts
`$ ssh-keygen -R (server name)`
## Debian
Remove the old Hostkeys
`# rm -v /etc/ssh/ssh_host_*`

View File

@ -4,7 +4,9 @@ visible: true
---
[toc]
## Linux Client
`# apt install sshfs`
`# pacman -S sshfs`
@ -12,21 +14,25 @@ visible: true
Mount remote filesystem
`sshfs (user)@(ip/domain):(remotepath) (mountpoint)`
*Example with Windows host:*
_Example with Windows host:_
`sshfs admin@192.168.1.123:/ /mnt/tmp`
## Windows Client
Install [WinFSP](https://github.com/billziss-gh/winfsp)
Install [sshfs-win](https://github.com/billziss-gh/sshfs-win)
### Usage
*No path = start in remote user's home directory*
_No path = start in remote user's home directory_
#### GUI
Map a new network drive in Windows Explorer
`\\sshfs\(user)@(ip/domain)\(path)`
#### Terminal
Mount drive
`net use (letter): \\sshfs\(user)@(ip/domain)\(path)`
@ -37,5 +43,6 @@ Remove mounted drive
`net use (letter): /delete`
## References
- [sshfs](https://github.com/libfuse/sshfs)
- [sshfs-win](https://github.com/billziss-gh/sshfs-win)

View File

@ -1,5 +1,5 @@
---
title: 'Useful Commands'
title: "Useful Commands"
visible: true
---
@ -9,7 +9,7 @@ visible: true
### Splitting PDF files
```bash
```sh
convert -density 600 {INPUT.PDF} -crop 50x100% +repage {OUT.PDF}
```
@ -26,7 +26,7 @@ Using find with its `exec` switch one can set different permissions based on the
One example would be only changing file or directory permissions.
```sh
$ find (directory) -type f -exec chmod 744 {} +
find (directory) -type f -exec chmod 744 {} +
```
Replacing `-type f` with `-type d` would execute the `chmod` for directories instead.
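A throwaway demonstration of that split (the paths are scratch examples, and 755/644 are arbitrary target modes):

```sh
set -e
demo=$(mktemp -d)
mkdir -p "$demo/sub"
touch "$demo/sub/file.txt"

# directories get 755, files get 644
find "$demo" -type d -exec chmod 755 {} +
find "$demo" -type f -exec chmod 644 {} +

# show the resulting octal modes
stat -c '%a %n' "$demo/sub" "$demo/sub/file.txt"
```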
@ -39,7 +39,7 @@ Using openssl on CPUs with AES acceleration one can create pseudorandom data wit
Much faster than `/dev/urandom` at least
```sh
# openssl enc -aes-128-ctr -md sha512 -pbkdf2 -nosalt -pass file:/dev/urandom < /dev/zero | pv > {TARGET DISK}
openssl enc -aes-128-ctr -md sha512 -pbkdf2 -nosalt -pass file:/dev/urandom < /dev/zero | pv > {TARGET DISK}
```
Around 2GiB/s on my Ryzen 7 1700x if output to `/dev/null`
@ -49,7 +49,7 @@ Around 2GiB/s on my Ryzen 7 1700x if output to `/dev/null`
> [From Pretty CSV viewing on the Command Line](https://www.stefaanlippens.net/pretty-csv.html)
```sh
$ column -t -s, < {FILE.CSV}
column -t -s, < {FILE.CSV}
```
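A quick self-contained check with inline CSV data instead of a file:

```sh
# align comma-separated fields into padded columns
printf 'name,qty\napples,3\npears,12\n' | column -t -s,
```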
### Download directory from webdav
@ -57,5 +57,5 @@ $ column -t -s, < {FILE.CSV}
Using `wget`, it's possible to download directories recursively from WebDAV.
```sh
$ wget -r -nH -np --cut-dirs=1 --user={USERNAME} --password={PASSWORD} https://WEBDAVHOST/DIR/DIR
wget -r -nH -np --cut-dirs=1 --user={USERNAME} --password={PASSWORD} https://WEBDAVHOST/DIR/DIR
```

View File

@ -4,16 +4,20 @@ visible: true
---
[toc]
## Get output from command
`:r!(command)`
*Example to get UUID for a disk*
_Example to get UUID for a disk_
`:r!blkid /dev/(partition) -sUUID -ovalue`
## Write as sudo user
`:w !sudo tee %`
## Replace strings
Globally replace strings
`:%s/foo/bar/g`

View File

@ -4,32 +4,40 @@ visible: true
---
[toc]
## Host
### Networking for nested VMs
To pass through the network connection to nested VMs, the first VM has to put the network adapter into promiscuous mode.
By default only root is allowed to do that, but the permissions can also be granted to others.
Grant permission to group:
```
# chgrp (group) /dev/vmnetX
# chmod g+rw /dev/vmnetX
```sh
chgrp (group) /dev/vmnetX
chmod g+rw /dev/vmnetX
```
Grant permission to everyone:
`# chmod a+rw /dev/vmnetX`
### Allow nested VMs
Enable the following two settings under "Processor" in the settings of the VM.
`Virtualize Intel VT-x/EPT or AMD-V/RVI`
`Virtualize CPU performance counters`
### Fix MSRS bug on Ryzen CPUs
Add `kvm.ignore_msrs=1` in `/etc/default/grub` to `GRUB_CMDLINE_LINUX_DEFAULT=`
Update the Grub configuration
`# grub-mkconfig -o /boot/grub/grub.cfg`
## Guest
### VMWare Tools
**Debian**
`# apt install open-vm-tools`
**Arch**

View File

@ -5,7 +5,9 @@ media_order: vnc-linux-pic1-example.png
---
[toc]
## Linux Server
For the VNC Server we will be using tightVNC.
`# apt install tightvncserver`
@ -20,8 +22,9 @@ Kill VNC server
Edit the `xstartup` file in `.vnc` to your liking.
*Example with xfce*
```
_Example with xfce_
```sh
#!/bin/sh
xrdb $HOME/.Xresources
@ -31,21 +34,24 @@ exec startxfce4
```
### Change password
`vncpasswd`
You can also add a view-only password
## Windows Server
Install tightVNC to get a VNC Client and also a VNC Server in one package for windows.
The server will be started automatically.
One important setting is `Require VNC authentication`, which allows you to define a password for viewing and interacting with the remote PC.
## Linux Client
Install Remmina with libvncserver to get client functionality.
`# pacman -S remmina libvncserver`
![Picture showing the usage of VNC with Remmina](vnc-linux-pic1-example.png)
## Windows Client
Install tightVNC to get a VNC Client and also a VNC Server in one package for windows.