UseCases


There have been quite a few questions about how to accomplish ZFS tasks in Btrfs, as well as generic "How do I do X in Btrfs?" questions. This page aims to answer them. Simply add your question to the appropriate section below.

RAID

See also the Using Btrfs with Multiple Devices page for more detailed information on RAID under Btrfs.

How do I create a RAID1 mirror in Btrfs?

mkfs.btrfs -m raid1 -d raid1 /dev/sda1 /dev/sdb1

How do I create a RAID10 striped mirror in Btrfs?

mkfs.btrfs -m raid10 -d raid10 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

How can I create a RAID-1 filesystem in "degraded mode"?

Create a filesystem with a small, empty device, and then remove it:

# Create a sparse 4 GiB file to stand in for the missing device
dd if=/dev/zero of=/tmp/empty bs=1 count=0 seek=4G
# Attach it to a loop device
losetup /dev/loop1 /tmp/empty
# Create the RAID-1 filesystem across the real device and the loop device
mkfs.btrfs -m raid1 -d raid1 /dev/sda1 /dev/loop1
# Detach and delete the placeholder
losetup -d /dev/loop1
rm /tmp/empty

You can add in the real mirror device(s) later:

mount /dev/sda1 /mnt
btrfs dev add /dev/sdb1 /mnt
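Note that data already on /dev/sda1 is not mirrored onto the new device automatically; a rebalance rewrites the existing block groups across both devices. A minimal sketch, assuming the filesystem is mounted at /mnt:

btrfs filesystem balance /mnt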

(kernel 2.6.37) Simply creating the filesystem with too few devices will result in a RAID-0 filesystem. (This is probably a bug).

How do I determine the raid level currently in use?

On a 2.6.37 or later kernel, use

btrfs fi df /mountpoint

The required support was broken accidentally in earlier kernels, but has now been fixed.
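The output lists each block group type together with its RAID profile. A representative example (the values are illustrative and the exact formatting varies between versions):

Data, RAID1: total=2.00GB, used=1.20GB
System, RAID1: total=8.00MB, used=4.00KB
Metadata, RAID1: total=1.00GB, used=256.00MB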

How do I change RAID levels?

At the moment, you can't change the RAID level after a filesystem is created. The filesystem design doesn't rule this feature out, but it hasn't been implemented yet.

See the Project Ideas page.

Snapshots and subvolumes

I want to be able to do rollbacks with Btrfs

If you use yum you can install the yum-plugin-fs-snapshot plugin and it will take a snapshot before doing any yum transaction. You can also take a snapshot manually by running

btrfs subvolume snapshot /path/to/subvolume /path/to/subvolume/snapshot_name

Then, if something goes wrong, you can mount using either the subvol=<snapshot name> mount option or the subvolid=<id> option. You can determine the ID of a snapshot by running

btrfs subvolume list /path/to/subvolume
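For example, to mount a snapshot by its ID (a sketch; the device, mount point and ID 256 are illustrative):

mount -o subvolid=256 /dev/sda1 /mnt/rollback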

If you wish to mount the rollback snapshot by default, you can run either

btrfs subvolume set-default <id>

using the subvolume ID, or, if you already have the snapshot mounted, you can run

btrfs subvolume set-default /path/to/snapshot

How do I mount the real root of the filesystem once I've made another subvolume the default?

mount -o subvolid=0 <filesystem> <mount-point>

Can a snapshot be replaced atomically with another snapshot?

btrfs subvolume snapshot first second

creates a snapshot of the subvolume first. After changing second in some way, I'd like to replace first with second, e.g. using

btrfs subvolume snapshot second first

This isn't currently allowed. I would need to delete first before snapshotting, but this leaves a period of time when there is no directory by that name present, hence the need for atomic replacement à la rename(2).

Is this possible with current btrfs?

  • No, and it's going to be pretty much impossible to achieve. You would have to ensure that all users of the volume being replaced have stopped using it before you replaced it. If you're going to do that, you might as well do an unmount/mount.

Are there commands in BTRFS similar to ZFS send/receive?

You can use

btrfs subvolume find-new <path> <last_gen>

to find the files modified since the last generation. This uses the searching ioctl, which is much faster than running find, and is roughly equivalent to ZFS send. There is nothing equivalent to ZFS receive yet.

How can I know how much space is used by a volume?

This is a complex question. Since snapshots are subvolumes, storage can be shared between subvolumes, so how do you assign ownership of that storage to any specific subvolume?

It does make sense to ask "how much space would be freed if I deleted this subvolume?", but this is expensive to compute (it will take at least as long as "du" to run), and currently there is no code that will do the calculation for you.
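As a rough approximation you can run du on the subvolume yourself, keeping in mind that it counts data shared with other snapshots in full, so it does not tell you how much would really be freed (the path is illustrative):

du -sh /path/to/subvolume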

Are there commands in BTRFS similar to ZFS export/import?

Your answer here....

Can we create a virtual block device in BTRFS?

No. Btrfs doesn't support the creation of virtual block devices or other non-btrfs filesystems within its managed pool of space.
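A common workaround is to back a loop device with a file stored on Btrfs. A minimal sketch (the size and paths are illustrative):

# create a sparse backing file on the btrfs filesystem
truncate -s 10G /mnt/btrfs/vm1.img
# expose the file as a block device
losetup /dev/loop0 /mnt/btrfs/vm1.img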

How do we implement quota in BTRFS?

A proposal being discussed (2011/06): http://comments.gmane.org/gmane.comp.file-systems.btrfs/11095

Disk structure proposal (2011/07): http://permalink.gmane.org/gmane.comp.file-systems.btrfs/11845

Other questions

How do I label a filesystem?

The label on a filesystem can only be set at creation time, using the -L option to mkfs.btrfs.
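For example (the device name is illustrative):

mkfs.btrfs -L mydata /dev/sda1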

There are patches that have been developed to set the label after creation, but they haven't been merged yet: https://patchwork.kernel.org/patch/175392/

How do I resize a partition? (shrink/grow)


In order to demonstrate and test the back references, the Btrfs development team has added an online resizer, which can both grow and shrink the filesystem via the btrfsctl or btrfs commands.

First, ensure that your filesystem is mounted. See elsewhere for the full list of btrfs-specific mount options.

mount -t btrfs /dev/xxx /mnt

Growing

Enlarge the filesystem by 2 GiB:

btrfs filesystem resize +2G /mnt

or

btrfsctl -r +2g /mnt

The parameter "max" can be used instead of (e.g.) "+2G", to grow the filesystem to fill the whole block device.

Shrinking

To shrink the filesystem by 4 GiB:

btrfs filesystem resize -4g /mnt

or

btrfsctl -r -4g /mnt

Set the FS size

To set the filesystem to a specific size, omit the leading + or - from the size. For example, to change the filesystem size to 20 GiB:

btrfs filesystem resize 20g /mnt

or

btrfsctl -r 20g /mnt

ZFS does not need file system entries in /etc/vfstab. Is it mandatory in BTRFS to add file system entries in /etc/fstab?

This needs some clarification, since no filesystem in Linux requires entries in /etc/fstab; you can mount anything manually. What advantage is there to not having entries in /etc/vfstab?


In fact, ZFS automatically mounts pools and filesystem hierarchies. I think the question was: do we need to use /etc/fstab, as with older filesystems, for automatic mounting?

For BTRFS, until we put a mount point entry in /etc/fstab, the mount is not going to persist across reboots. In ZFS, making a mount point entry in /etc/vfstab is only a legacy option.

So, if you have 50 Btrfs filesystems mounted on different mount points, you will need 50 entries in /etc/fstab to make sure they persist across reboots, whereas in ZFS this is not mandatory. In a scenario with 100+ mounted filesystems, making 100 entries in /etc/fstab makes the fstab file become very large.

Yes, as mentioned above, the question should have been: "do we need to use /etc/fstab, as with older filesystems, for automatic mounting?"

Please correct me if I am wrong...
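For reference, a typical /etc/fstab entry for a Btrfs subvolume looks like this (the device, mount point and subvolume name are illustrative):

/dev/sda2  /home  btrfs  defaults,subvol=home  0  0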

How can I use btrfs for backups/time-machine?

You can take snapshots frequently. See the SnapBtr script.
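A minimal sketch of a periodic snapshot, e.g. run from cron (the paths and naming scheme are illustrative):

# the /home/.snapshots directory must already exist
btrfs subvolume snapshot /home /home/.snapshots/home-$(date +%Y%m%d-%H%M)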