Support to run QEMU workloads

Issue #15 resolved
Krystle created an issue

Hi,

This project looks very interesting. I work in IT at a small company, and I want to migrate our container workloads to a server running Lightwhale. However, some of those containers need QEMU inside Docker, so we need KVM support enabled in the kernel. Using the "kvm-ok" script I've detected that the module is missing:

op@test:~$ sudo ./kvm-ok
Password:
INFO: /dev/kvm does not exist
HINT:   sudo modprobe kvm_intel
INFO: Your CPU supports KVM extensions
KVM acceleration can be used
op@test:~$ sudo modprobe kvm_intel
modprobe: FATAL: Module kvm_intel not found in directory /lib/modules/6.1.64

So, could you please be so kind as to add support for it?
Thank you.
K.

Comments (4)

  1. Krystle reporter

    Hi,

    From my research, to add support for KVM it's sufficient to add this to the kernel config file (linux.config):

    CONFIG_KVM=m
    CONFIG_KVM_INTEL=m
    CONFIG_KVM_AMD=m
    
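    If Lightwhale is built with Buildroot, I guess these lines could also go in a kernel config fragment instead of editing linux.config directly. An untested sketch; BR2_LINUX_KERNEL_CONFIG_FRAGMENT_FILES is Buildroot's standard option, but the fragment file name is my own:

    # kvm.fragment -- the three KVM options from above
    CONFIG_KVM=m
    CONFIG_KVM_INTEL=m
    CONFIG_KVM_AMD=m

    # in the Buildroot .config, point the kernel build at the fragment
    BR2_LINUX_KERNEL_CONFIG_FRAGMENT_FILES="kvm.fragment"
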

    But in addition, I'd like to know if we can "test" another kernel with a dev (custom) version of Lightwhale. Any guidelines for booting (temporarily) with a different kernel?

    Regards.
    K.

  2. Stephan Henningsen

    Hi Krystle,

    So, I’ve done a little side project to Lightwhale. It’s basically a copy of Lightwhale, but with KVM modules and the QEMU x86 system emulator added. I call it Clusterfox because I use it for experimenting with multiple Lightwhales in a cluster setting. On my workstation I use QEMU to boot Clusterfox into a clean Linux system: no firewall rules, no existing Docker networking rules, none of the noise there is on my workstation. There I can use the nested KVM to run virtualized Lightwhales, and also DHCP, PXE, and whatnot without wreaking havoc on the physical LAN.
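    A typical invocation looks roughly like this (the image name and memory size are just examples from my setup, adjust to taste):

    # boot Clusterfox with KVM acceleration; -cpu host passes the CPU's
    # virtualization extensions through, so nested KVM can work inside the
    # guest (assuming the host's kvm module has nesting enabled)
    qemu-system-x86_64 -enable-kvm -cpu host -m 4096 -cdrom clusterfox.iso
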

    I didn’t want to contaminate Lightwhale with anything from this rather disturbing setup; that’s why I made the extra OS image. But now you’re telling me it’s useful to others besides me? This is great news =)

    I’ll add the KVM modules, but I won’t add QEMU itself. That should be done as a container, just like you said.

    Regarding booting a different kernel/rootfs mix, the way I sometimes do it is to use QEMU and specify the individual kernel and initramfs files (example below). I’m actually building those images separately, but I’m not distributing them right now because I’m afraid they’ll confuse people when seen in the download list, they’re quite large, and most won’t find them useful. Also, they can be extracted from the ISO.

    I’ve put the latest stable kernel and rootfs here: https://lightwhale.asklandd.dk/dev/
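    Booting them then looks something like this (the file names below are placeholders, use whatever is actually in that directory):

    # fetch the separately built kernel and rootfs, then boot them directly;
    # the rootfs image doubles as the initramfs
    wget https://lightwhale.asklandd.dk/dev/lightwhale-kernel
    wget https://lightwhale.asklandd.dk/dev/lightwhale-rootfs
    qemu-system-x86_64 -m 1024 -kernel lightwhale-kernel -initrd lightwhale-rootfs
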

  3. Krystle reporter

    Hi Stephan,

    Thank you for adding the KVM support. QEMU itself really isn’t necessary in this project. I agree with all of your comments.

    Only one question: if I need to compile a new kernel to test something for Lightwhale and boot it, is there any guide available? I know how to extract the kernel and initramfs from the ISO. I also know how to replace files inside the ISO with xorriso. For example, to replace the kernel:

    xorriso -indev ./lightwhale-2.1.1-x86.iso -outdev ./lightwhale-2.1.1-x86-custom.iso -boot_image any replay -update lightwhale-new-kernel /boot/lightwhale-kernel
    
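    Extraction works the same way in the other direction, e.g. to pull the current kernel out of the ISO:

    xorriso -osirrox on -indev ./lightwhale-2.1.1-x86.iso -extract /boot/lightwhale-kernel ./lightwhale-kernel
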

    I know how to set up Buildroot, and I’ve done a dirty compilation of Lightwhale’s kernel. But I’d prefer it if you could share a simple config (or a script to patch the current source) so that only the kernel needs recompiling. And certainly not to do anything more than test some changes.

    Thank you.
    K.

  4. Stephan Henningsen

    I don’t know any other way than to use Buildroot. That’s how I do it, anyway. Getting there isn’t effortless. However, once you have that setup, simply run make linux-menuconfig to configure and make linux to rebuild the kernel. Use qemu-system-x86_64 -kernel bzImage -initrd lightwhale-2.1.1-rootfs to boot them (see the sketch below). Rolling an ISO takes more effort, but you seem familiar with that.
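
    Roughly, the whole loop looks like this (assuming a Buildroot tree already configured for Lightwhale; the output path is Buildroot’s standard one):

    # inside the configured Buildroot tree
    make linux-menuconfig    # enable CONFIG_KVM, CONFIG_KVM_INTEL, CONFIG_KVM_AMD
    make linux-rebuild       # force a rebuild of just the kernel package
    # boot the fresh kernel against an existing rootfs
    qemu-system-x86_64 -m 1024 \
        -kernel output/images/bzImage \
        -initrd lightwhale-2.1.1-rootfs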
