- marked as enhancement
Improve setup-persistence: Load needed modules
Hi,
I'm opening a separate issue for the topic of improving the `setup-persistence` script. Here we can discuss how to load the needed drivers.
The problem is:
- The execution of the `setup-persistence` script happens too early for all modules to be loaded automatically. For example, virtual device drivers are loaded after the persistence is initialized; as a consequence, perhaps not all disks are connected when this script is executed.
The solution could be:
- Inside the script, add some simple detection of missing drivers and load them at start. This will not cause any trouble; we only need to take care to load only the drivers that are really needed.
The proposal:
- If we detect LVM and/or MD-RAID partitions, then load the `dm-mirror` and `dm-cache` modules.
- If we detect a PVSCSI device, then load the `vmw_pvscsi` module.
- And other disk controller devices (we need to prepare the list, e.g. LSI, SAS, virtual, etc.)
This will work because this script is executed very early, so any required action can be done without problems.
Example for LVM/RAID detection:

```sh
info "Activating any LVMs ..."
# blkid exits non-zero when nothing matches, so don't abort on that;
# note that `local var=$(cmd)` would mask the exit status of cmd anyway.
local typelvm typeraid
typelvm=$(blkid -o device -t TYPE=LVM2_member) || true
info "  blkid lvm: $typelvm"
typeraid=$(blkid -o device -t TYPE=linux_raid_member) || true
info "  blkid raid: $typeraid"
if [ -n "$typelvm" ] || [ -n "$typeraid" ]; then
    info "Loading RAID modules"
    modprobe -s dm-raid >&2 || true
    modprobe -s dm-cache >&2 || true
    modprobe -s dm-thin-pool >&2 || true
    sleep 1  # necessary to give module dependencies time to load
else
    info "No modules needed for RAID/LVM"
fi
vgchange -aay >&2 || true
```
To overcome the `sleep 1`, I suggest (after the code is incorporated) capturing the list of modules that end up loaded, and trying to load them in reverse order at start. Then, if any of them is needed in the future, it will be loaded automatically but very fast, as the others will already be loaded. However, if some module is removed or added in the future, this list will need to be revised.
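To make the capture-and-replay idea concrete, here is a sketch of what it could look like. The helper names and the list-file location are hypothetical; the only assumptions are that `lsmod` prints the most recently loaded module first (so reversing the list restores load order) and that `modprobe` is available.

```shell
#!/bin/sh
# Sketch only; helper names and the list location are hypothetical.

# Extract module names from `lsmod` output (skip the header line).
module_names() {
    awk 'NR > 1 { print $1 }'
}

# Replay a saved module list. `lsmod` prints the most recently
# loaded module first, so the list is reversed with `tac` to get
# back the original load order.
preload_modules() {
    list="$1"   # file with one module name per line, newest first
    tac "$list" | while read -r mod; do
        modprobe -s "$mod" >&2 || true
    done
}

# On a fully booted reference system, save the list once:
#   lsmod | module_names > /etc/preload-modules.conf
# Early at the next boot, before setup-persistence:
#   preload_modules /etc/preload-modules.conf
```

Since `modprobe` resolves dependencies itself, the ordering mainly matters for speed, not correctness, which matches the intent above.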
Example for PVSCSI controller detection:

```sh
info "Activating controller drivers ..."
# lspci with -d exits zero even when nothing matches, so a failure
# here is a real error; `local var=$(cmd)` would mask it, though.
local pvscsi
pvscsi=$(lspci -mm -d 15ad:07c0) || fail "FAILED to get lspci"
info "  lspci pvscsi: $pvscsi"
if [ -n "$pvscsi" ]; then
    info "Loading PVSCSI module"
    modprobe -s vmw_pvscsi >&2 || true
fi
vgchange -aay >&2 || true
```
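Taken together, the two detection snippets could be folded into small helper functions along these lines. This is only a sketch reusing the commands and module names from the examples above; the function names are illustrative and error handling is kept minimal:

```shell
#!/bin/sh
# Sketch combining the two detection examples above into helpers.

load_raid_drivers() {
    # blkid exits non-zero when nothing matches, so the checks
    # double as "is there anything to do?".
    if blkid -o device -t TYPE=LVM2_member >/dev/null 2>&1 \
       || blkid -o device -t TYPE=linux_raid_member >/dev/null 2>&1; then
        for mod in dm-raid dm-cache dm-thin-pool; do
            modprobe -s "$mod" >&2 || true
        done
    fi
}

load_disk_controllers() {
    # The VMware PVSCSI controller is PCI vendor 15ad, device 07c0.
    if [ -n "$(lspci -mm -d 15ad:07c0 2>/dev/null)" ]; then
        modprobe -s vmw_pvscsi >&2 || true
    fi
}
```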
I don't think it's necessary here to check whether the driver has already been loaded: just check the vendor and device IDs and trigger loading of the module. This also simplifies the kernel configuration, because all of these drivers can be built as modules.
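As an alternative to maintaining a vendor/device ID list by hand: the kernel exposes a `modalias` string for every PCI device in sysfs, and `modprobe` can resolve such an alias directly against the modules' alias tables (this is essentially what udev does during coldplug). A sketch, assuming sysfs is mounted; the function name is hypothetical:

```shell
#!/bin/sh
# Sketch: load a driver for every PCI device via its modalias.
# Assumes sysfs is mounted; unknown aliases are silently skipped.

load_pci_modules() {
    dir="${1:-/sys/bus/pci/devices}"   # overridable for testing
    for alias_file in "$dir"/*/modalias; do
        [ -e "$alias_file" ] || continue
        # modprobe matches the alias against each module's alias
        # table, so no hand-maintained ID list is needed.
        modprobe -s "$(cat "$alias_file")" 2>/dev/null || true
    done
}
```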
Do you agree with these suggestions?
Best.
Comments (6)
reporter Hi Stephan,
What do you think about adding two functions to `setup-persistence`, like `load_disk_controllers()` and `load_raid_drivers()`?
-
Hi Daw Quan,
Nice work. I’m focusing on another task and can’t go into much detail on this right now. But I’m beginning to think that I should consider starting udev much earlier. This could eliminate the need for hand-written auto-detection of devices, which is almost certain to fail, i.e. be incomplete and require a lot of iterations. It would also allow some of the modules that are currently baked into the kernel to be extracted as loadable modules.
-
reporter Great idea! If you can do that, I think a lot of problems will be overcome, and memory consumption will be reduced as well. However, I'm not sure, but maybe you need to introduce some wait to ensure that all devices/modules are initialised.
-
Hi there,
I’ve refactored the startup process in the development build of Lightwhale 2.1.5. In relation to this issue, udev is now started before setup-persistence, so all necessary drivers should be detected and loaded. This removes a great burden from the setup-persistence script. For this reason, I’ve also removed some drivers built into the kernel and built them as modules instead; they were built-in for exactly this reason.
-
- changed status to open