---
title: BTRFS
visible: true
---

[toc]

## Utils

`# pacman -S btrfs-progs`

## Fstab example

`UUID=2dc70a6e-b4cf-4d94-b326-0ba9f886cf49 /mnt/tmp btrfs defaults,noatime,compress-force=zstd,space_cache=v2,subvol=@ 0 0`

Options:

```
defaults       -> Use the default mount options
noatime        -> Do not update file access times on reads
compress-force -> Send all files through the compression algorithm, even data that looks incompressible
space_cache    -> Cache free space information to speed up block allocation (v2 is the newer implementation)
subvol         -> Subvolume to mount
```
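
The same options can be tested with a one-off mount before committing them to fstab (a sketch; device and mountpoint are placeholders):  
`# mount -o noatime,compress-force=zstd,space_cache=v2,subvol=@ /dev/sdXX /mnt/tmp`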

## Filesystem usage

Show storage allocated, used and free  
`# btrfs fi usage (mountpoint)`

```
allocated  : space reserved in chunks on the devices
unallocated: space not yet reserved in chunks (the actual free space)
Used       : amount of data actually stored
Free       : estimated free space based on "Used" and the unallocated space
```

Start a rebalance (in the background) of data and metadata chunks that are filled less than 70%  
`# btrfs balance start --bg -dusage=70 -musage=70 (mountpoint)`

Check status of rebalance  
`# btrfs balance status -v (mountpoint)`

## Disable CoW

Disable copy-on-write for a directory (only affects files created after the flag is set)  
`$ chattr +C (path)`
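
Since the flag only affects newly created files, existing data has to be rewritten into a directory that already carries it. A minimal sketch, using a hypothetical `vm-images` directory:

```
$ mkdir vm-images.nocow
$ chattr +C vm-images.nocow
$ cp --reflink=never vm-images/* vm-images.nocow/
$ rm -r vm-images && mv vm-images.nocow vm-images
```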

## Device errors

Error counts for a given mountpoint  
`# btrfs dev stat (mountpoint)`
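
To check whether new errors keep appearing, the counters can be printed and reset in one go (`-z` zeroes them after reading):  
`# btrfs dev stat -z (mountpoint)`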

## Compression

### Algorithms

```
zlib: Slow, but strong compression, level 1-9
lzo : Fastest, weak compression
zstd: [Recommended] Good ratio at high speed, newer than the others, needs a recent kernel, level 1-15
```

Enable compression for existing files  
`# btrfs filesystem defragment -r -v -c(alg) (path)`  
_The compression level cannot be specified when defragmenting._

Add `compress=(alg)` to the mount options in `/etc/fstab` to compress files written from then on.

To specify a compression level (zlib and zstd only), use `compress=(alg):(level)` in fstab.  
For zstd compression it is recommended to use `compress-force=zstd:(level)`
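
Putting it together, a complete fstab line with a fixed zstd level might look like this (a sketch; UUID and level are placeholders):  
`UUID=(uuid) / btrfs defaults,noatime,compress-force=zstd:3,subvol=@ 0 0`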

## Subvolumes

List  
`# btrfs subv list (path)`

Create  
`# btrfs subv create (path)`

Mount a subvolume  
`# mount -o subvol=@(subvolname) /dev/sdXX /(mountpoint)`
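
A minimal sketch that creates a hypothetical `@home` subvolume on a filesystem mounted at `/mnt` and then mounts it at `/home`:

```
# btrfs subv create /mnt/@home
# mount -o subvol=@home /dev/sdXX /home
```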

## Snapshots

TODO

## RAID

An array can be mounted by specifying one of its members.  
`# mount /dev/sdXX /mnt`

All members of an array share the same filesystem UUID, so the array can also be mounted through fstab by UUID.
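
An example fstab entry for such an array (UUID and mountpoint are placeholders):  
`UUID=(uuid) /mnt/array btrfs defaults,noatime 0 0`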

### RAID 1

On filesystem creation  
`# mkfs.btrfs -m raid1 -d raid1 /dev/sdXX /dev/sdYY`
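
An existing single-device filesystem can also be converted to RAID 1 after the fact. A sketch, assuming it is already mounted:

```
# btrfs device add /dev/sdYY (mountpoint)
# btrfs balance start -dconvert=raid1 -mconvert=raid1 (mountpoint)
```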

### RAID 5

On filesystem creation  
_It is recommended not to use raid5/6 for metadata yet_  
`# mkfs.btrfs -m raid1 -d raid5 /dev/sdXX /dev/sdYY /dev/sdZZ`

### RAID 10

On filesystem creation  
`# mkfs.btrfs -m raid10 -d raid10 /dev/sdXX /dev/sdYY /dev/sdZZ /dev/sdQQ`

### Convert to single device

First, all data and metadata have to be collected on one device.  
_DUP on system and metadata should only be used on HDDs. Use single on SSDs_  
`# btrfs balance start -f -sconvert=dup,devid=(id) -mconvert=dup,devid=(id) -dconvert=single,devid=(id) /(mountpoint)`

Now unused devices can be removed  
`# btrfs device delete /dev/sdYY /(mountpoint)`

### Replace dying/dead device in RAID array

Show arrays that are available  
`# btrfs fi show`

From my testing, the log has to be dropped before btrfs will mount the incomplete array  
`# btrfs rescue zero-log /dev/sdXX`

Mount with these options so the degraded array can be repaired  
`# mount -o rw,degraded /(mountpoint)`

`(id)` has to be the devid of the **missing** device; `/dev/sdYY` is the new device that replaces it.  
`# btrfs replace start -B (id) /dev/sdYY /(mountpoint)`

Query the status of the replace  
`# btrfs replace status /(mountpoint)`

Balance the filesystem at the end  
`# btrfs balance start /(mountpoint)`

## Issues

### 100% CPU Usage

`btrfs-transaction` and `btrfs-cleaner` run on a single CPU core, maxing it out at 100% load.  
The issue is apparently caused by using quotas in btrfs.  
_TODO: Check what enabled quotas in the first place. A likely candidate is snapper._  
Check if quotas are enabled:  
`# btrfs qgroup show (path)`  
Disable quotas:  
`# btrfs quota disable (path)`
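
A worked sketch of the whole check, assuming the affected filesystem is mounted at `/`:

```
# btrfs qgroup show /      # lists qgroups if quotas are enabled, errors otherwise
# btrfs quota disable /
```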