Alexi Storage Manager

Small but nifty storage manager intended for small to medium sized setups (fewer than 10 boxes). It saves you from SSHing into each box and running the same repetitive, boring tasks over and over again, and focuses on keeping your datacenter running without you having to memorize command-line parameters.


Here are some screen shots of the dashboard:

Here's the share editor:

And we even have a coupl'a safeguards in place already:


Intended features ([x] means it's already implemented, though some features are only available in /admin until the Web App is ready):

  • LVM:

    • [x] Recognition of existing VGs and LVs
    • [x] Creating and deleting LVs
    • [_] Resizing LVs
    • [x] File System support for Ext4 and XFS
    • [x] Optimizing filesystems for the RAID layout (Stripe Alignment).
    • [x] Optional Volume Flags: ro, sync
    • [x] Disable Volumes, causing them to be unexported and unmounted
  • NFS:

    • [x] Export Volumes via NFS
    • [x] Recognize existing exports
  • Samba:

  • Ceph:

    • [x] Show general health and performance indicators.
    • [x] Creating and deleting pools.
    • [x] Configuring repsize and minrepsize on pools.
    • [_] Listing and deleting images in pools.
    • [_] Keeping track of CephX authentication entities, updating their privileges as pools are created or removed.
    • [x] Creating OSDs on volumes.
  • Libvirt:

    • [_] Register Ceph pools as Storage pools.
    • [_] Handle libvirt secrets to access VM images on Ceph pools.
  • Distributed:

    • [x] Central engine controlling daemons distributed across a bunch'a nodes.
    • [x] Everything works even when the Engine is down.
  • Web app:

    • [x] Nice and usable web app built with AngularJS and Material Design that makes all the cool stuff easily accessible.
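To give a feel for what the LVM features automate, here is a sketch of the plumbing commands Alexi effectively wraps. The VG/LV names, size, and mount point are made-up assumptions, and the commands are only printed, not executed, so this is safe to run anywhere:

```shell
# Sketch of the low-level steps behind "create an LV, format it, mount it
# with volume flags". All names and sizes below are illustrative assumptions.
VG="vgdata"
LV="myshare"
SIZE="100G"

CMD_CREATE="lvcreate -L $SIZE -n $LV $VG"
CMD_FORMAT="mkfs.ext4 /dev/$VG/$LV"
CMD_MOUNT="mount -o ro,sync /dev/$VG/$LV /media/$LV"   # the ro/sync volume flags

# Print rather than run:
echo "$CMD_CREATE"
echo "$CMD_FORMAT"
echo "$CMD_MOUNT"
```

Disabling a volume then amounts to undoing the last two steps (unexport and unmount) without touching the LV itself.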

Features we won't support:

  • Block-oriented storage (iSCSI, FC).
  • Configuring RAID controllers through Alexi.
  • LVM Snapshots. In most cases, the application itself can achieve the same thing more easily.
  • ZFS, Btrfs. Their sense of a "volume" is completely different.
  • Adding tasks to Alexi that can be done better through Ceph-Deploy.
  • RBD Image creation. That's the clients' job.
  • Engine HA. The system needs to be able to tolerate the Engine being down.

System architecture

Storage architecture managed by Alexi:

               +-----------+     +------------+     +----------+
               | Smb Share |     | NFS Export |     | Ceph OSD |
               +-----------+     +------------+     +----------+
                     |                |                  |
    +-----+      +------+          +-----+            +-----+
    | KVM |      | Ext4 |          | XFS |            | XFS |
    +-----+      +------+          +-----+            +-----+
       |             |                |                  |
     +----+       +----+           +----+             +----+
     | LV |       | LV |           | LV |             | LV |
     +----+       +----+           +----+             +----+
       |             |                |                  |
    +-------------------------------------------------------+
    |                        LVM VG                         |
    +-------------------------------------------------------+
    |                      Disk arrays                      |
    +-------------------------------------------------------+

Architecture of Alexi itself:

                         +---------+
                         | Web App |
                         +---------+
                              |
       +-----------------------------------------------+
       |                    Engine                     |
       +-----------------------------------------------+
              |                |                |
       +-------------+  +-------------+  +-------------+
       |   LLD       |  |   LLD       |  |   LLD       |
       |             |  |             |  |             |
       |   Node 1    |  |   Node 2    |  |   Node 3    |
       |             |  |             |  |             |
       |   LVM       |  |   LVM       |  |   LVM       |
       |   Ceph      |  |   Ceph      |  |   Ceph      |
       |   FS        |  |   FS        |  |   FS        |
       |             |  |             |  |             |
       +-------------+  +-------------+  +-------------+


To be ready for the installation of Alexi, you should have done a basic setup of your system. That means:

  1. Install Ubuntu >= 16.04 (Xenial) or Debian >= 8 (Jessie).

  2. Create at least one LVM Volume Group. The underlying storage arrays should (but do not need to) have a proper Disk alignment.

    You can find an in-depth introduction to that topic here:

    It boils down to this: every mistake you make here will cost you an order of magnitude of performance. Your systems will still run fine most of the time, but they won't be able to handle load spikes as easily as they would with a properly-aligned disk layout.

  3. Install Alexi, make the host known to the engine, and have fun.
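To illustrate the alignment point from step 2: for ext4 on top of a striped RAID array, the numbers mkfs.ext4 expects can be derived from the per-disk chunk size and the number of data disks. The values below are assumed example numbers; plug in your controller's actual configuration:

```shell
# Stripe alignment math for mkfs.ext4. Example values are assumptions:
CHUNK_KB=256      # RAID chunk size per disk, in KiB
DATA_DISKS=8      # e.g. a 10-disk RAID6 has 8 data disks
BLOCK_KB=4        # ext4 block size, in KiB

STRIDE=$((CHUNK_KB / BLOCK_KB))          # filesystem blocks per chunk
STRIPE_WIDTH=$((STRIDE * DATA_DISKS))    # blocks per full stripe

# The resulting mkfs invocation (device path is a placeholder):
echo "mkfs.ext4 -E stride=$STRIDE,stripe-width=$STRIPE_WIDTH /dev/vgdata/mylv"
```

Alexi performs this calculation for you when it knows the RAID layout; the sketch just shows where the numbers come from.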


Packages are available from a PPA:

Although the PPA is for Ubuntu Xenial, the packages also work on Debian Jessie. Just add it like you would add any other Apt repository.

  1. Basic installation:

    add-apt-repository ppa:svedrin/alexi
    apt-get update
    apt-get install alexi
    alexi-config install
  2. Log in to the engine (you'll create the credentials during the "alexi-config install" step) and add your local LLD as a host.

    Name:     some name useful for you
    Base URL:
  3. Go to the LVM->VolumeGroups section and add the Volume Groups you'd like to manage through Alexi.

  4. Now you can create LVs and NFS/Samba shares.
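Under the hood, an NFS share boils down to a line in /etc/exports. Here's a minimal sketch of what such a line looks like; the path, client network, and options are illustrative assumptions, not Alexi's actual output:

```shell
# Build an /etc/exports line for a volume mounted at $SHARE_PATH.
# All values are illustrative assumptions.
SHARE_PATH="/media/myshare"
CLIENTS="192.168.0.0/24"
OPTS="rw,sync,no_subtree_check"

LINE="$SHARE_PATH $CLIENTS($OPTS)"
echo "$LINE"
# To apply for real (as root):
#   echo "$LINE" >> /etc/exports && exportfs -ra
```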

Installing a distributed setup

Install alexi on one node, and alexi-lld on the others. Edit /etc/alexi/lld.yaml to allow the engine to connect to all LLDs. Then add the other LLDs as hosts to the engine.
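Conceptually, lld.yaml needs to tell the LLD where to listen and which engine may connect. The following is a purely hypothetical sketch -- the key names are invented for illustration, so check the default file shipped in /etc/alexi for the real ones:

```yaml
# Hypothetical example -- key names are invented for illustration.
listen: 0.0.0.0:4242        # where this LLD accepts engine connections
allowed_engines:
  - 192.168.0.10            # address of the engine host
```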

Using Ceph

If you intend to use Ceph along with Alexi, the basic steps are these:

  1. See Preparation. :)

  2. See Installation. :P

  3. Create LVs for your OSDs. I like to name them "osd1" on all nodes (yes, the name won't match the OSD ID that way, but I don't care), format them using XFS, and go with fewer, larger OSDs on RAID arrays. That way I can make use of the RAID controller's cache, and I can place other stuff besides the Ceph OSD on that node and have it all work in the same fashion.

  4. Follow the Ceph Pre-flight Check list found here:

  5. Follow the Ceph Cluster Quick Start guide found here:

    Follow the guide up until your mons are running (that is, until you have run the "ceph-deploy mon create-initial" command).

  6. Configure OSDs, pools and client authentication through Alexi.

    When adding new pools, you'll need to specify the number of placement groups you want in that pool. 64 or 128 are sane defaults for setups built the way Alexi is supposed to be used, so you should be able to go with that. For more information, check out the Ceph docs:

  7. Enjoy your Ceph cluster.
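The usual rule of thumb from the Ceph docs for picking a placement group count is (number of OSDs × 100) / replica count, rounded up to the next power of two -- which for setups of this size indeed lands at 64 or 128. A quick sketch of that calculation, with assumed example values for the OSD and replica counts:

```shell
# Rule-of-thumb PG count: (OSDs * 100) / pool size, rounded up to a power of 2.
# OSDS and REPLICAS are assumed example values.
OSDS=3
REPLICAS=3

RAW=$(( (OSDS * 100) / REPLICAS ))   # 100 for 3 OSDs at size 3
PG_NUM=1
while [ "$PG_NUM" -lt "$RAW" ]; do
    PG_NUM=$((PG_NUM * 2))           # round up to the next power of two
done
echo "$PG_NUM"
```

Note that pg_num can be increased later, but never decreased, so it pays to start conservatively.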

Hacking Alexi

To get a dev system for Alexi, install the packages, and then run:

rm -rf /usr/share/alexi                     # replace the packaged code with a checkout
apt-get install mercurial npm
ln -s /usr/bin/nodejs /usr/local/bin/node   # npm tooling expects "node"
npm install -g bower
hg clone /srv/alexi
ln -s /srv/alexi /usr/share/alexi           # run straight from the checkout
cd /srv/alexi/webui
bower install                               # fetch the web UI dependencies

Building Debian packages:

cd /srv/alexi
apt-get install devscripts
hg commit -m "did sum cool stuff, yo"
debuild -us -uc   # build unsigned packages; the .debs land in the parent dir
ls /srv


In general, pull requests are welcome -- and here's the "but":

I'd like to keep the code base of Alexi as lean as possible, so please check the "Features we won't support" list above before you start working on something. If a feature is on that list, I really, really mean it, and I'll feel free to not merge pull requests that go in that direction.

If you're unsure about whether I'll merge or object to your pull request (or you just want to know why I hate on specific features and how I'd build systems differently), please feel free to open an issue or talk to me (Svedrin) in #alexi on Freenode IRC.