provide compiled ET code on tutorial machine

Issue #2260 resolved
Roland Haas created an issue

Currently each user of the ET tutorial has to compile the code from scratch, which takes more than an hour. Since, at least on the tutorial server, we know exactly what code will be compiled, it would make sense to provide precompiled code.

It is not clear, though, how to craft a notebook that works both with precompiled code on the tutorial server and as a standalone notebook on a user’s laptop.
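One possible approach (just a sketch, not an agreed design) would be for the notebook’s build cell to check for an already-built executable and only check out and compile when none is found. The paths, thornlist name and executable name below are placeholders, and obtaining GetComponents itself is omitted:

    # Hypothetical notebook cell (bash): reuse a precompiled tree if one is
    # present, otherwise fall back to the full checkout-and-build workflow.
    if [ -x "$HOME/Cactus/exe/cactus_sim" ]; then
        echo "Found precompiled Einstein Toolkit; skipping checkout and build."
    else
        # Standalone case (user's laptop): fetch the sources and build.
        ./GetComponents --parallel einsteintoolkit.th
        cd Cactus
        ./simfactory/bin/sim build --thornlist ../einsteintoolkit.th
    fi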

Comments (8)

  1. Erik Schnetter

    One option is to use Docker. It’s reasonably easy to install, and a Docker image can contain precompiled code. People can start a shell in Docker where they see the Cactus source tree and a precompiled executable.
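    A rough sketch of the publishing side, assuming a Dockerfile that performs the GetComponents checkout and simfactory build already exists; the image name "einsteintoolkit/tutorial" is purely a placeholder:

        # Build the image (the Dockerfile compiles Cactus at image-build time,
        # so the published image contains source tree plus executable) ...
        docker build -t einsteintoolkit/tutorial:latest .
        # ... and publish it to a registry for tutorial users to pull.
        docker push einsteintoolkit/tutorial:latest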

  2. Roland Haas reporter

    The tutorial server uses docker and each user gets their own docker container, so we are fine there. The tricky part is that the tutorial notebook is supposed to work both in the tutorial docker container and standalone, and it is also meant to show the basic workflow for building the Einstein Toolkit. We would therefore want to showcase how to use GetComponents (which behaves differently if its output directory already exists; a possible guard is sketched after this comment) and simfactory (which I think would work fine even within an existing build).

    As for having new users install docker on their laptop and then run inside it: this does not seem so easy to me, since docker is actually not so easy to get up and running and its command line interface is neither stable nor small (nor, as far as I can tell, obvious).
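    A hypothetical guard around GetComponents along those lines (the flag names should be checked against the GetComponents version actually shipped; the thornlist name is a placeholder):

        # In the tutorial container the checkout already exists, so run
        # GetComponents in update mode instead of doing a fresh checkout.
        if [ -d Cactus ]; then
            ./GetComponents --parallel --update einsteintoolkit.th
        else
            ./GetComponents --parallel einsteintoolkit.th
        fi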

  3. Ian Hinder

    Is Docker hard to install? On Mac OS, you just install it like any other application from a dmg (https://download.docker.com/mac/stable/Docker.dmg). Granted, they don’t make that link very easy to find, trying to get you to sign up etc. Docker for Mac no longer uses VirtualBox and is much easier to use: you just run docker commands from a terminal window, no more boot2docker etc. On Linux, my understanding was that you can just install the package from your distribution repository and similarly just run docker commands. I don’t know about Windows. Regarding the command line interface: for this purpose, one would just need to type a couple of simple commands which we provide, probably “docker run -ti <image>” (see the sketch after this comment).

    I’m not saying this is the way we should go, just commenting on the specific issues that you raised.
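    For reference, the user-facing commands could be as short as the following; the image name is again a placeholder:

        # Fetch the (hypothetical) tutorial image and start an interactive shell in it.
        docker pull einsteintoolkit/tutorial
        docker run -ti einsteintoolkit/tutorial /bin/bash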

  4. Roland Haas reporter

    Well, I am admittedly not a big fan of docker on desktop machines (servers are a different story, with their own benefits and issues), so please treat the following with caution. Here’s my rant.

    I can speak from my own experience on Linux: each distro will come with its own version of docker (since docker evolves faster than the slower-moving Linux distros), with “docker” likely split into multiple packages (not necessarily called “docker”, see e.g. https://packages.debian.org/buster/docker or https://packages.ubuntu.com/disco/docker). So you may have to provide docker commands for either multiple versions of docker or for the oldest one you care about (likely CentOS or Debian), and accept that the docker docs on the web will be “all wrong” because they refer to a newer version.

    You also need to have the proper kernel modules installed for docker to use overlay file systems (this should happen automatically, but I know of at least one version of CentOS with an XFS file system where the overlay driver fails and one has to resort to the device-mapper based storage driver).

    On OSX and Windows, Docker still relies on a virtual machine (since it needs a Linux kernel to interact with; just no longer VirtualBox but something less heavyweight), so there is for example the issue of having to provision a large enough disk image (or fill up your laptop disk) and enough memory (unless the VM can balloon, which it might; I have not used Docker on a Mac or Windows). It also means that files are “hidden” in the docker container and not easily accessible from the outside (one needs docker cp, or the jupyter web interface, to download them from the container).

    Docker also seems to be quite bad at actually cleaning up after itself and leaves containers and images around that are no longer used (docker ps -a shows them; see the commands sketched after this comment).
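    For reference, the copy-out and cleanup steps mentioned above look roughly like this (container and path names are placeholders; the prune commands need a reasonably recent docker):

        # Copy a result file out of a (possibly stopped) container to the host.
        docker cp tutorial-container:/home/et/simulations/output.h5 .

        # List all containers, including exited ones that were never removed,
        docker ps -a
        # then remove stopped containers and dangling images.
        docker container prune
        docker image prune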

  5. Ian Hinder

    Yup, agree with all. And you seem to have had more experience than me with having to run docker on multiple Linux systems. I think I’ve only used it on Mac OS and Ubuntu.

  6. Roland Haas reporter

    To take away a bit of the sting of my rant: I am not actually against having such a docker image for download, as long as it is not the primary option and it is made clear on the website that this is provided without warranty or any guarantee of functionality.
