LVM support

Issue #19 resolved
Krystle created an issue

Hi Stephan,

This is more a question than a bug. I want to put the persistent storage inside an LVM logical volume, and the same for the swap space. All the LVM tools/support are now included, but I don’t know whether your scripts detect and use LVM volumes. I ask because the “blkid” tool that I suspect you’re using only detects the physical partitions, not the logical volumes.

Any idea how to use them?
Thank you.
K.

Comments (18)

  1. Stephan Henningsen

    Hi Krystle,

    No idea at this point, sorry. LVM sort of goes beyond the intended use-case of Lightwhale, because in order to have Lightwhale search LVM partitions, you must create them first, and the whole idea is to not prepare anything and have Lightwhale manage the disk for you.

    For reference, this is where the script scans for candidate devices: https://bitbucket.org/asklandd/lightwhale/src/486b93ab823f7f0f7e1956bc4a6631958270b83d/custom/rootfs-overlay/lib/lightwhale/setup-persistence#lines-86

    On the other hand, Lightwhale should be able to detect any formatted partition labelled lightwhale-data and use it. And swapon -a will add all swap devices listed in /etc/fstab. So if you lay out your LVM volumes, format them with the correct file systems (ext4 and linux-swap), and use e.g. e2label to give the intended persistence partition the known label, then things might work out for you.
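
    I haven’t tried this myself, but in rough strokes it might look something like this (assuming a volume group vg0 with logical volumes data and swap; the names are just examples):

    vgchange -ay                              # make the logical volumes visible
    mkfs.ext4 /dev/vg0/data                   # format the persistence volume
    e2label /dev/vg0/data lightwhale-data     # give it the label Lightwhale looks for
    mkswap -L lightwhale-swap /dev/vg0/swap   # format the swap volume with a label
    # swapon -a then picks up whatever swap entries are listed in /etc/fstab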

    But I do understand the desire for better control. There’s also another user request about making Lightwhale put the swap partition on a different disk, e.g. using a magic header lightwhale-please-use-as-swap.

    I’ll rethink this disk-detection and partitioning strategy at some point, but for now it’s not supported.

  2. Krystle reporter

    Hi Stephan,

    IMHO, LVM support is a must-have. We need it to be able to “move” the persistent space between disks online, for example when migrating from one small disk to a new one. With LVM this is quite easy and can be done without any reboot.
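
    Just to illustrate (device and volume group names are made up; assume the persistence volume lives in vg0 on /dev/sda2 and the new disk has a partition /dev/sdb1):

    pvcreate /dev/sdb1            # prepare the new disk's partition as a physical volume
    vgextend vg0 /dev/sdb1        # add it to the existing volume group
    pvmove /dev/sda2 /dev/sdb1    # move all data off the old disk while everything stays mounted
    vgreduce vg0 /dev/sda2        # finally drop the old disk from the volume group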

    After a quick review of your scripts and some testing, I have the following suggestions for adding support for it:

    • Creating the PV/VG/LV layout is already possible: you can boot the ISO and run all the commands you need to prepare the logical volumes (see the sketch after this list). One recommendation is to use GPT for the partitions.
    • Then you can format the persistent storage and the swap using:
    mkfs.ext4 -F -i 8192 -j -L lightwhale-data /dev/vg0/data
    mkswap -L lightwhale-swap /dev/vg0/swap
    
    and verify that the labels are detected with:
    blkid -l -t LABEL="lightwhale-data" -o device
    blkid -l -t LABEL="lightwhale-swap" -o device
    
    • The only problem is when you reboot, because the volumes aren’t activated at boot. Fortunately, only one command is needed: /usr/sbin/vgchange -ay. It has to be called before any device search so that all LVM volumes are activated; after that, any call to blkid works as expected, so nothing else needs to change. Therefore, if you add this line to the inittab, LVM support is added automatically:
    ::sysinit:/lib/lightwhale/rescue-shell
    ::sysinit:/usr/sbin/vgchange -ay        # Activate LVM partitions
    ::sysinit:/lib/lightwhale/setup-persistence
    
    • Finally, to let Lightwhale format the LVM volumes automatically, another change is recommended. In the setup-persistence script you can replace the command that builds the list of devices to search for the magic label. Here is the simplified patch:
    - local devices=$(lsblk -npd -I 3,8,259,179 --output NAME) || fail "FAIL: lsblk"
    + local devices=$(lsblk -npr -I 3,8,259,179 --output NAME) || fail "FAIL: lsblk"
    
    • The difference is to drop the “-d” parameter and use “-r” instead. That way not only the whole disks are listed, but also partitions and other sub-devices, so the user can write the magic label either to a partition or to a logical volume (activated at boot by the previous command). This doesn’t break anything, and it adds basic support for partitions as well.
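
    For reference, the preparation itself can be done from the booted ISO with something like this (device, volume group and volume names are only examples; assume /dev/sda1 is a GPT partition reserved for LVM):

    pvcreate /dev/sda1                  # physical volume on the GPT partition
    vgcreate vg0 /dev/sda1              # volume group
    lvcreate -n data -l 90%FREE vg0     # logical volume for the persistent storage
    lvcreate -n swap -l 100%FREE vg0    # the rest for swap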

    I hope you want to test these suggestions and merge them.

    Regards.
    K.

  3. Krystle reporter

    Hi Stephan,

    I want to confirm that this has been working like a charm for two days, without any side effects:

    ::sysinit:/lib/lightwhale/rescue-shell
    ::sysinit:/usr/sbin/lvm vgchange -aay  # activate LVM volumes.
    ::sysinit:/lib/lightwhale/setup-persistence 
    

    Please consider merging it.
    K.

  4. Stephan Henningsen

    Hi Krystle,

    This is a very nice observation and suggestion, and I think LVM would be a great thing to add! GPT too. But I don’t have experience with it, as in I haven’t done any manual management of LVM volumes myself.

    I already have support for RAID devices using Linux Software RAID / Multiple Devices (MD) here. I wonder if MD will conflict with LVM? Should they be initialized in a specific order?

  5. Krystle reporter

    Hi Stephan,

    I have a lot of experience with LVM, and I can guarantee that with just this one line added to the inittab everything works, without side effects. So please consider adding it now.

    And if you want to add more support, simply replacing "-d" with "-r" in your detection script is enough to search all block devices: raw disks, partitions and LVM logical volumes. No further effort is needed.

    K.

  6. Stephan Henningsen

    FYI, I’m looking at this, but unfortunately it’s not easy, and certainly not just a matter of removing the -d option from lsblk. I’ll push this to a future release and ship what’s done so far.

  7. Krystle reporter

    Hi Stephan,

    Thank you for your effort on the LVM support. For the time being I’m using the latest version, 2.1.2, with a modified inittab containing just the suggested change (the line that activates the LVM volumes).

    Any comments on the new changes you're working on now?

  8. Stephan Henningsen

    I’m basically adding your line from inittab to setup-persistence, together with the RAID assembly. And I’ll lose the -I filters and -d, and scan any drive and partition.
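
    A rough sketch of the intent (not the final code):

    # Early in setup-persistence, next to the existing RAID assembly:
    lvm vgchange -aay          # activate all LVM volumes, as suggested

    # And the widened device scan, without the -I type filter and without -d:
    local devices=$(lsblk -npr --output NAME) || fail "FAIL: lsblk"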

    But for now I won’t change how Lightwhale sets up the partitions for persistence automatically. At least you’ll have the freedom to do as you please, including using LVM.

    In the future I’ll most likely move to btrfs instead of ext4.

  9. Krystle reporter

    Hi Stephan,

    Thank you for the beta version. I’m testing it now. However, I have some questions:

    • In the new version of the “/lib/lightwhale/setup-persistence” script I see that you first activate LVM and then assemble the RAIDs. But it’s more common to do it in the reverse order, because MD RAID devices are typically used as LVM physical volumes (see the sketch after this list).
    • I don’t understand the new changes in the persistence handling; I see several of them. Can you please add some comments about them?
    • Finally, I don’t see the commits for these changes in the repository. Is there a reason for that? It’s much easier to follow the changes by reviewing the repo.
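
    By the reverse order I mean roughly this (the exact commands depend on your script, of course):

    mdadm --assemble --scan    # assemble MD arrays first; they may be LVM physical volumes
    lvm vgchange -aay          # then activate the volume groups that sit on top of them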

    Regards.
    K.

  10. Krystle reporter

    One last comment: I recommend adding some aliases to the “rescue-shell” script to activate the logical volumes and/or RAIDs. It’s good not to do it automatically (because you’re doing a rescue task), but some users will probably forget how to call them, so a few helper aliases, together with a hint printed at the start of the shell, could be useful.
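
    For example (the alias names are just a suggestion):

    alias assemble-raid='mdadm --assemble --scan'
    alias activate-lvm='lvm vgchange -aay'
    echo "Hint: run assemble-raid and/or activate-lvm if your data is on MD/LVM."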

    You agree?

  11. Stephan Henningsen

    Hi,

    1. Hmm, that can’t be right; it looks fine here, and I deliberately assemble RAID before activating LVM, for the reason you stated. However, I did find a bug: I was doing vgchange -ay instead of vgchange -aay. I’ve fixed that now, and it will be available in 2.1.3-dev2. Here’s proof of the fix: https://bitbucket.org/asklandd/lightwhale/src/bd451bd3aadb94e1a7f92134f035b2e267e7ec20/custom/rootfs-overlay/lib/lightwhale/setup-persistence?at=2.1.3%2Fdev#lines-65
    2. I’ve added comments in the script and in the git commits. But in essence, I changed the scan to lsblk -npr --output NAME and run vgchange -ay early in setup-persistence, before scanning for partitions.
    3. You’re right, that was a mistake; I was working in a branch that wasn’t pushed. Everything is on 2.1.3/dev now.
    4. I don’t think I’ll add any aliases, but it’s a good idea to display a message with such a reminder. This is not included in dev2.
