opennebula-openvz / Home

Overview

This page describes the procedures to install and use OpenVZ with OpenNebula.

Branch "current" contains the driver for the current stable Opennebula release and is not guaranteed to work properly with older releases. Other branches are kept to support older releases.

Supported features in the current driver

  • ploop: deploy, suspend, poweroff, stop*, shutdown, undeploy, migrate*, migrate live, VM snapshots
  • simfs is not tested but may work - use at your own risk.

Features marked with * require the datastore location on the hosts to be the same as on the frontend. If they differ, you can create a symlink on the frontend that matches the hosts' datastore location and points to the actual frontend datastore, as shown below.
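
For example, assuming the hosts keep datastores under /vz/one/datastores (as in the oned.conf example later on this page) while the frontend uses the default /var/lib/one/datastores, a symlink like the following on the frontend makes the paths match (the paths here are illustrative):

[root@FN]$ mkdir -p /vz/one
[root@FN]$ ln -s /var/lib/one/datastores /vz/one/datastores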

OpenVZ-specific template parameters

OSTEMPLATE corresponds to OpenVZ "ostemplate" parameter. Make sure the value of the OSTEMPLATE parameter is written in the format <OS name>-<version>-<architecture>, e.g. OSTEMPLATE="sl-6-x86_64".

The VE_LAYOUT parameter sets the filesystem layout of the VM. It can be ploop or simfs; if not specified, ploop is used by default.

OVZ_SIZE sets the required disk size for the VM. If it is not specified, the value from DEFAULT_CT_CONF is used. Example:

DISK=[
  IMAGE_ID="1",
  OVZ_SIZE="20480" ]

Installing current driver

Contextualization

In this version of the driver, contextualization is performed by copying the contents of the ISO file to a specified location in the VM file tree; the default location is configured by the CT_CONTEXT_DIR variable in the file remotes/vmm/ovz/ovzrc.
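
For illustration, the variable in remotes/vmm/ovz/ovzrc looks like the following (the path value is only an example, not necessarily the shipped default):

CT_CONTEXT_DIR=/mnt/context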

Frontend node installation and configuration

[root@FN]$ yum install mercurial patch genisoimage
[root@FN]$ yum install http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-7.noarch.rpm 

or

[root@FN]$ yum install yum-conf-epel

Download and install OpenNebula according to http://opennebula.org/documentation:rel4.2:ignc, i.e. download the OpenNebula tarball for CentOS-6.x from http://downloads.opennebula.org ("OpenNebula 4.2 CentOS 6.4 tarball"), unpack it and install the needed RPMs on the FN. Alternatively, it can be installed from the ONE repository:

[root@FN]$ cat << EOT > /etc/yum.repos.d/opennebula.repo
[opennebula]
name=opennebula
baseurl=http://opennebula.org/repo/CentOS/6/stable/\$basearch
enabled=1
gpgcheck=0
EOT

[root@FN]$ yum install <packages>

Installation of the opennebula-* RPMs may create the oneadmin user with UID 498 and a group with GID 499, which are reserved for the cgred group (it comes with the libcgroup library, which ploop depends on). In that case it is easier to change the oneadmin UID and GID on the FN instead of changing the GID on all CNs (in any case, make sure that the UID and GID of the oneadmin user are the same on the FN and the CNs).

[root@FN]$ groupmod -g 1000 oneadmin

[root@FN]$ usermod -u 1000 -g 1000 oneadmin

[root@FN]$ chown oneadmin:oneadmin /var/run/one /var/lock/one /var/log/one

[root@FN]$ chgrp oneadmin -R /etc/one/

[root@FN]$ yum install ruby-devel

[root@FN]$ /usr/share/one/install_gems

[root@FN]$ hg clone https://bitbucket.org/hpcc_kpi/opennebula-openvz

[root@FN]$ cd opennebula-openvz

Switch to the current branch and install the driver:

[root@FN]$ hg update current

[root@FN]$ cd src/

[root@FN]$ bash install.sh

Now make sure that all permissions are correct and generate ssh keys:

[root@FN]$ chown oneadmin:oneadmin -R /var/lib/one/

[root@FN]$ cd ~

[root@FN]$ ssh-keygen -t rsa

Put id_rsa.pub in root@CN:~/.ssh/authorized_keys, as well as id_rsa* in root@CN:~/.ssh/:

[root@FN]$ cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys

[root@FN]$ scp -r ~/.ssh/ <CN>:~/

StrictHostKeyChecking needs to be disabled in /etc/ssh/ssh_config file on FN and CNs:

Host *
    	StrictHostKeyChecking no

Don't forget to restart sshd on the host where the /etc/ssh/ssh_config file was modified.

[root@FN]$ service sshd restart

Make sure that root is able to log in on the CNs without being asked for a password.

Sunstone GUI

[root@FN]$ yum install opennebula-sunstone-4.2.x86_64.rpm
[root@FN]$ bash /usr/share/one/install_novnc.sh

MySQL

If MySQL is going to be used as OpenNebula DB backend then the following steps need to be performed.

[root@FN]$  yum install mysql-server

[root@FN]$ /etc/init.d/mysqld start

[root@FN]$ chkconfig mysqld on

[root@FN]$ mysql
mysql> USE mysql;
mysql> UPDATE user SET Password=PASSWORD('<password>') WHERE user='root';
mysql> FLUSH PRIVILEGES;
mysql> CREATE DATABASE opennebula;
mysql> GRANT ALL PRIVILEGES ON opennebula.* TO 'one_db_user'@'localhost' IDENTIFIED BY 'one_db_user' WITH GRANT OPTION;
mysql> UPDATE user SET Password=PASSWORD('<password>') WHERE user='one_db_user';
mysql> FLUSH PRIVILEGES;

where <password> can either be taken from the ~oneadmin/.one_auth file or set to any other value.

Passwordless access across nodes for oneadmin user

OpenNebula generates DSA keys in ~oneadmin/.ssh/. If necessary, you can generate your own keys:

[root@FN]$ su - oneadmin
[oneadmin@FN]$ ssh-keygen -t rsa
[oneadmin@FN]$ cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys

Put id_rsa.pub in oneadmin@CN:~/.ssh/authorized_keys file as well as id_rsa* in oneadmin@CN:~/.ssh/ folder.

StrictHostKeyChecking needs to be disabled in /etc/ssh/ssh_config file on FN and CNs:

Host *
    	StrictHostKeyChecking no

Remember to restart sshd on the host where the /etc/ssh/ssh_config file was modified.

[root]$ service sshd restart

oned.conf

Edit /etc/one/oned.conf according to your cloud configuration. E.g.:

HOST_MONITORING_INTERVAL = 60
VM_POLLING_INTERVAL      = 60
SCRIPTS_REMOTE_DIR = /vz/one/scripts
PORT = 2633
DB = [ backend = "mysql",
       server  = "localhost",
       port    = 0,
       user    = "one_db_user",
       passwd  = "<password>",
       db_name = "opennebula" ]
VNC_BASE_PORT = 5900
DEBUG_LEVEL = 3
NETWORK_SIZE = 254
MAC_PREFIX   = "02:00"
DATASTORE_LOCATION = /vz/one/datastores
DEFAULT_IMAGE_TYPE    = "OS"
DEFAULT_DEVICE_PREFIX = "sd"
IM_MAD = [
    name       = "im_ovz",
    executable = "one_im_ssh",
    arguments  = "-r 0 -t 15 ovz" ]
VM_MAD = [
    name       = "vmm_ovz",
    executable = "one_vmm_exec",
    arguments  = "-t 15 -r 0 ovz",
    default    = "vmm_exec/vmm_exec_ovz.conf",
    type       = "xml" ]
TM_MAD = [
    executable = "one_tm",
    arguments  = "-t 15 -d dummy,shared,ssh" ]
DATASTORE_MAD = [
    executable = "one_datastore",
    arguments  = "-t 15 -d fs" ]
HM_MAD = [
    executable = "one_hm" ]
AUTH_MAD = [
    executable = "one_auth_mad",
    authn = "ssh,x509,ldap,server_cipher,server_x509" ]
SESSION_EXPIRATION_TIME = 900
VM_RESTRICTED_ATTR = "CONTEXT/FILES"
VM_RESTRICTED_ATTR = "NIC/MAC"
VM_RESTRICTED_ATTR = "NIC/VLAN_ID"
VM_RESTRICTED_ATTR = "RANK"
IMAGE_RESTRICTED_ATTR = "SOURCE"

DEFAULT_CT_CONF

Set DEFAULT_CT_CONF in /var/lib/one/remotes/vmm/ovz/ovzrc file to the needed value (e.g. /etc/vz/conf/ve-vswap-1g.conf-sample).
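
For example (assuming the usual shell-style assignment used in ovzrc):

DEFAULT_CT_CONF=/etc/vz/conf/ve-vswap-1g.conf-sample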

Setting oneadmin password

If you installed from packages, you should have the ~/.one/one_auth file created with a randomly generated password. Otherwise, set oneadmin's OpenNebula credentials (username and password) by adding the following to ~/.one/one_auth (replace <password> with the desired password):

[oneadmin@FN]$ mkdir ~/.one
[oneadmin@FN]$ echo "oneadmin:<password>" > ~/.one/one_auth
[oneadmin@FN]$ chmod 600 ~/.one/one_auth

This will set the oneadmin password on the first boot. From that point, you must use the 'oneuser passwd' command to change oneadmin's password.
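
For example (oneadmin is normally user ID 0; the new password placeholder is yours to choose):

[oneadmin@FN]$ oneuser passwd 0 <new_password>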

Starting OpenNebula daemons

[oneadmin@FN]$ one start

Check logs (/var/log/one/oned.log) for any errors.

Datastore (on OpenNebula)

Currently only the ssh transfer manager driver is supported by the OVZ driver, so you need to change all datastores to use it. To change a datastore's transfer manager driver (e.g. from shared to ssh), run the following command:

[oneadmin@FN]$ env EDITOR=vim onedatastore update 1

and set the value of the TM_MAD parameter accordingly (e.g. TM_MAD="ssh").

In addition, NO_DECOMPRESS="yes" must be set in the datastore configuration; otherwise OpenNebula will try to decompress OpenVZ template archives and fail.
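
After the update, the datastore template should contain at least these two attributes (a sketch; the rest of the datastore template depends on your setup):

TM_MAD="ssh"
NO_DECOMPRESS="yes"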

Cluster nodes configuration (OpenVZ)

Install the OS on the CN in a minimal configuration and remove unnecessary RPMs. E.g. on SL 6.x the following RPMs can be removed:

[root@CN]$ yum remove qpid* matahari*
[root@CN]$ userdel -rf qpidd
[root@CN]$ groupdel qpidd

or by just one command

[root@CN]$ yum remove qpid* matahari* && userdel -rf qpidd && groupdel qpidd

Disable SELinux in /etc/selinux/config:
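
That is, the config file should contain the following line:

SELINUX=disabled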

[root@CN]$ setenforce 0
[root@CN]$ sestatus
[root@CN]$ wget -P /etc/yum.repos.d/ http://download.openvz.org/openvz.repo
[root@CN]$ rpm --import  http://download.openvz.org/RPM-GPG-Key-OpenVZ

[root@CN]$ yum install vzkernel vzkernel-firmware

[root@CN]$ mv /etc/sysctl.conf{,.orig}

[root@CN]$ scp <configured CN>:/etc/sysctl.conf /etc/

[root@CN]$ chkconfig ntpd on

[root@CN]$ chkconfig apcupsd on

[root@CN]$ yum install vzctl vzquota ploop

Edit /etc/vz/vz.conf according to the desired configuration. For example, on the CNs:

$ diff /etc/vz/vz.conf.orig /etc/vz/vz.conf
45c45
< IPTABLES="ipt_REJECT ipt_tos ipt_limit ipt_multiport iptable_filter iptable_mangle ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_length"
---
> IPTABLES="ipt_REJECT ipt_tos ipt_limit ipt_multiport iptable_filter iptable_mangle ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_length ipt_state"
50c50
< IPV6="yes"
< IP6TABLES="ip6_tables ip6table_filter ip6table_mangle ip6t_REJECT"
---
> IPV6="no"
> #IP6TABLES="ip6_tables ip6table_filter ip6table_mangle ip6t_REJECT"

Make sure that modules xt_state and nf_conntrack are loaded.
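
A quick way to check and, if needed, load them (module names as above):

[root@CN]$ lsmod | grep -E 'xt_state|nf_conntrack'
[root@CN]$ modprobe xt_state
[root@CN]$ modprobe nf_conntrack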

Reboot. Make sure the vz and vzeventd daemons are running. If they are not, check whether they are set to start at boot. They can be started with the following commands:

[root@CN]$ /etc/init.d/vz start

[root@CN]$ /etc/init.d/vzeventd start
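
To enable them at boot, the standard SysV tools can be used (service names as above):

[root@CN]$ chkconfig vz on
[root@CN]$ chkconfig vzeventd on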

Default CT conf

Make sure to set proper values in the file $DEFAULT_CT_CONF (e.g. /etc/vz/conf/ve-vswap-1g.conf-sample) that correspond to sufficient resources (e.g. disk space); otherwise VM deployment may fail with errors like "Disk quota exceeded".
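
A sketch of the kind of parameters to check in that file (the values below are only an illustration, in standard OpenVZ barrier:limit syntax; DISKSPACE is in 1 KB blocks, the vswap parameters are in pages):

DISKSPACE="20971520:23068672"
PHYSPAGES="0:262144"
SWAPPAGES="0:262144"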

iptables

Copy the iptables rules from a configured CN and restart the iptables service:

[root@CN]$ /etc/init.d/iptables restart

On CNs execute the following commands:

[root@CN]$ iptables -P FORWARD ACCEPT && iptables -F FORWARD

iptables config example on VMs:

# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp -s <IP/mask trusted_network> --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

Install required RPMs on CN

[root@CN]$ yum install ruby rubygems file bind-utils

oneadmin user

Create the oneadmin group and user on the CNs with the same UID and GID as on the FN:

[root@CN]$ groupadd --gid 1000 oneadmin

[root@CN]$ useradd --uid 1000 -g oneadmin -d /vz/one oneadmin

Edit /etc/sudoers file:

# Defaults	requiretty
%oneadmin  ALL=(ALL)       	NOPASSWD: ALL
Defaults:%oneadmin secure_path="/bin:/sbin:/usr/bin:/usr/sbin"

[root@CN]$ su - oneadmin

[oneadmin@CN]$ mkdir /vz/one/datastores

Make sure /vz/one/datastores is writable by the group; if it is not, make it so:

[oneadmin@CN]$ chmod g+w /vz/one/datastores

[oneadmin@FN]$ scp -r .ssh/ root@CN:~oneadmin/

[root@CN]$ chown oneadmin:oneadmin -R ~oneadmin/.ssh/

Again, StrictHostKeyChecking needs to be disabled in the /etc/ssh/ssh_config file on the FN and CNs:

Host *
    	StrictHostKeyChecking no

Remember to restart sshd on the host where the /etc/ssh/ssh_config file was modified.

[root]$ service sshd restart

Make sure that the oneadmin user is able to log in on a CN from the FN without being asked for a password:

[oneadmin@FN]$ ssh <CN hostname>

[root@FN]$ ssh-copy-id <CN hostname>

[root@FN]$ scp -r .ssh/id_rsa* root@CN:~/.ssh/

Make sure that root is able to log in on a CN from the FN without being asked for a password:

[root@FN]$ ssh <CN hostname>

Some VM operation examples

Network

[oneadmin@FN]$ cat public.net
NAME = "Public"
TYPE = FIXED

BRIDGE = venet0

LEASES = [IP=<IP1>]
LEASES = [IP=<IP2>]

GATEWAY = <gateway_IP>
DNS = <DNS_IP>
[oneadmin@FN]$ onevnet list

[oneadmin@FN]$ onevnet create public.net

One can add, remove, hold, or release leases to/from a FIXED network:

[oneadmin@FN]$ onevnet addleases <network_id> <new_IP_address>
[oneadmin@FN]$ onevnet rmleases <network_id> <IP_address>
[oneadmin@FN]$ onevnet hold <network_id> <IP_address>

Cluster

[oneadmin@FN]$ onecluster create ovz_x64

[oneadmin@FN]$ onecluster addvnet 100 0

[oneadmin@FN]$ onecluster adddatastore 100 1

[oneadmin@FN]$ onehost create <CN hostname> --im im_ovz --vm vmm_ovz --cluster ovz_x64 --net dummy


[oneadmin@FN]$ oneimage create -d default --name "SL 6.3 x86_64 persistent" --path /tmp/sl-6-x86_64.tar.gz --prefix sd --type OS --description "Scientific linux 6.3 custom"

[oneadmin@FN]$ oneimage list

To make image persistent execute the following command:

[oneadmin@FN]$ oneimage persistent <IMAGE_ID>

Create template for VMs:

$ cat sl-6.3-x86_64.one.vm.tmpl
CONTEXT=[
  FILES="/var/lib/one/vm_files/rc.local /var/lib/one/vm_files/id_rsa.pub",
  NAMESERVER="$NETWORK[DNS, NETWORK_ID=0 ]" ]
CPU="0.01"
DISK=[
  IMAGE_ID="1",
  SIZE="20480" ]
DISK=[
  SIZE="2048",
  TYPE="swap" ]
LOOKUP_HOSTNAME="true"
MEMORY="4096"
NAME="SL6 x86_64"
NIC=[
  NETWORK_ID="0" ]
OS=[
  ARCH="x86_64",
  BOOT="sd" ]
OSTEMPLATE="sl-6-x86_64"
VE_LAYOUT="ploop"
RCLOCAL="rc.local"

Make sure the value of the OSTEMPLATE parameter is written in the format <OS name>-<version>-<architecture>, e.g. OSTEMPLATE="sl-6-x86_64".

VE_LAYOUT parameter is used to set filesystem type of the VM. It can be ploop or simfs. If it is not specified, ploop is used.

Due to the new datastore model in OpenNebula 4.4, you can't use the disk SIZE attribute anymore; instead, specify the OVZ_SIZE attribute for the disk containing the VM image. If it is not specified, the value from DEFAULT_CT_CONF is used. Example:

DISK=[
  IMAGE_ID="1",
  OVZ_SIZE="20480" ]

You can also pass OpenVZ native parameters directly to the hypervisor using the RAW attribute. For example:

RAW = [
    FEATURES = "nfs:on",
    QUOTATIME = "0",
    ....
]

One can update a created template with the command "env EDITOR=vim onetemplate update <TEMPLATE ID>".

Create VM template in ONE:

[oneadmin@FN]$ onetemplate create sl-6.3-x86_64.one.vm.tmpl

Instantiate VM from existing template:

[oneadmin@FN]$ onetemplate instantiate 0 -n vps103

OpenNebula 3.x

These instructions were tested against OpenNebula 3.4 and 3.6, available as public downloads at http://downloads.opennebula.org/

Pre-installation configuration changes

  • Install OpenVZ as usual.
  • Moving /vz to other directories is supported; just edit /etc/vz/vz.conf:
LOCKDIR=/home/vz/lock
DUMPDIR=/home/vz/dump

TEMPLATE=/home/vz/template

VE_ROOT=/home/vz/root/$VEID
VE_PRIVATE=/home/vz/private/$VEID
  • Do the usual edits to /etc/sysctl.conf to match OpenVZ requirements:
# Controls IP packet forwarding
net.ipv4.ip_forward = 1
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1 
net.ipv4.conf.default.proxy_arp = 0

# We do not want all our interfaces to send redirects
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.send_redirects = 0
  • As required for ONE, you need to create ~oneadmin/.one/one_auth;
  • Edit sudoers to grant oneadmin root rights to control OpenVZ machines:
Defaults:oneadmin !requiretty
oneadmin  ALL=(ALL) NOPASSWD: ALL
  • Setup passwordless (key-based) ssh access for oneadmin user between all hosts and frontend.
  • If you want to use live OpenVZ migration, setup passwordless (key-based) ssh access for root user between all hosts.

Installing and reinstalling ONE

Install vanilla ONE:

user@cloudX$ cd ~/one
... compile ...
user@cloudX$ sudo -u oneadmin ./install.sh -d /home/oneadmin/one-bin

Checkout and install OpenVZ scripts on top of that:

user@cloudX$ hg clone https://bitbucket.org/hpcc_kpi/opennebula-openvz
user@cloudX$ cd ~/opennebula-openvz/src

If you are using ONE 4.2 then switch to 4.2 branch:

user@cloudX$ hg update main-4.2

Then run the install script:

user@cloudX$ sudo -u oneadmin ./install.sh /home/oneadmin/one-bin

Don't forget to remove cached ONE scripts after any change to ~oneadmin/one-bin/var (not only after reinstalling). This should be done on all hosts:

root@cloudX# rm -rf /var/tmp/one/

Starting ONE

Start ONE only:

oneadmin@cloudX$ oned

Start ONE and scheduler that deploys VMs automatically if they meet resource requirements:

oneadmin@cloudX$ one start

Post-installation configuration

Create host:

oneadmin@cloudX$ onehost create cloud2 -i im_ovz -v vmm_ovz -n dummy

Create network:

oneadmin@cloudX$ onevnet create red-net

where file red-net contains:

NAME    = "Red LAN"
TYPE    = RANGED

# Now we'll use the host private network (physical)
BRIDGE  = venet0

NETWORK_ADDRESS = 192.168.0.0/24
IP_START        = 192.168.0.3

# Custom Attributes to be used in Context
GATEWAY = 192.168.0.1
DNS     = 192.168.0.1

LOAD_BALANCER = 192.168.0.2

Change system datastore settings to use TM ssh:

oneadmin@cloudX$ onedatastore update 0

set:

TM_MAD="ssh"

Using ONE

Create image in default datastore (the image is copied there):

oneadmin@cloudX$ oneimage create deb-t-img --datastore default

where deb-t-img file contains:

NAME = "Debian Testing"
PATH = "/home/vz/debian-6.0-x86_64.tar.gz"
DRIVER = "raw"
BUS = "virtio"

Create VM:

oneadmin@cloudX$ onevm create deb-t

where deb-t file contains:

NAME="deb-t"
MEMORY=512
CPU=2
OS = [ BOOT = hd,
       ARCH = "x86_64" ]
DISK = [ IMAGE_ID = 0 ]
NIC = [ NETWORK="Red LAN", IP="192.168.0.42" ]
DISK = [ TYPE     = swap,
         SIZE     = 1024 ]
OSTEMPLATE = "debian-6.0"

Edit IMAGE_ID according to the results of the image creation command.

Please note that the image is nonpersistent by default. This means that changes are not saved back to the datastore upon shutdown. To make the image persistent, run:

oneadmin@cloudX$ oneimage persistent 0

where 0 is image ID.

Nonpersistent images can be attached to one or more VMs. Persistent images can be attached to at most one VM.

Contextualization

Contextualization is implemented in two ways:

  • traditional OpenNebula approach that makes an ISO image available as a device inside the VM; this way may be difficult since it requires additional OpenVZ configuration for enabling it to mount ISO images inside VMs via fuse (http://wiki.openvz.org/Mount_ISO_image_in_a_container).
  • copying ISO file contents to the specified location in VM file tree; default location for copying files is configured in the file remotes/vmm/ovz/ovzrc by changing CT_CONTEXT_DIR variable.

The former approach is implemented in the default branch, while the latter is in the context-copy branch. As of now the latter is preferable; the 4.2 version of the driver uses the latter approach only.

Our script also extracts the network-related parameters hostname, nameserver and searchdomain from the CONTEXT section and passes them directly to OpenVZ. Important: the OSTEMPLATE option should be set appropriately so that OpenVZ can set up these parameters.
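
A hedged sketch of a CONTEXT section carrying these parameters (the exact attribute names are inferred from the description above, and the values are made up):

CONTEXT = [
    HOSTNAME = "vps103.example.org",
    NAMESERVER = "192.168.0.1",
    SEARCHDOMAIN = "example.org" ]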

Contextualization at start-up

Our implementation also features the possibility to execute contextualization scripts during the start-up sequence by overriding the /etc/rc.local configuration file inside the virtual machine.

The special option RCLOCAL was introduced into the VM template format in order to specify the file that will be used instead of the default rc.local. This file should be included in the context file list. Important: consider using an edited copy of the rc.local file supplied with your distribution in order to prevent unwanted behavior.

Usage example:

CONTEXT = [
    files = "/tmp/rc.txt /tmp/ttt.sh /tmp/tttt.sh"
]
RCLOCAL = "rc.txt"

Note that RCLOCAL takes only the filename, without a path, since the contextualization engine puts all the files into a single directory on the device available inside the VM.

Determining hostnames automatically

This driver allows you to use DNS for setting up the hostname of a virtual machine. To enable it, put the option LOOKUP_HOSTNAME="true" in the virtual machine template file. Note that any other value of this parameter, as well as its absence, is treated as false, i.e. disables this functionality. If enabled, during deployment the driver performs a reverse DNS query using the UNIX host utility, providing it with the IP address specified in the template or obtained from OpenNebula, and sets the virtual machine hostname to the domain name from the DNS response. In case of DNS errors the hostname is not explicitly passed to OpenVZ. If the DNS replies with an FQDN, the trailing dot is removed.
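
For illustration only (the IP address and hostname below are made up), with LOOKUP_HOSTNAME="true" the driver's behavior corresponds to a reverse lookup such as:

[oneadmin@FN]$ host 192.168.0.42
42.0.168.192.in-addr.arpa domain name pointer vps42.example.org.

and the container hostname would then be set to vps42.example.org.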

Hacks in our implementation

Fix file permissions in TM

When ssh'ing to a remote host, TM does not preserve the umask of the original (that is, frontend) system. As a result, permissions may be set incorrectly if the umask after ssh'ing differs.

We fix this by patching shared/clone and shared/mkimage scripts.

Fix race condition in onevm resubmit

It seems like onevm resubmit starts the TM delete script in parallel with the VMM cancel script. That does not work for OpenVZ because the deployment directory contains a private directory with files owned by root (and these can't be deleted by the oneadmin user). Another problem is that the VM is being shut down while its files are being deleted.

OpenVZ is missing cancel semantics

It seems like OpenVZ does not provide a forced shutdown command, so we use the normal shutdown procedure for the VMM cancel script.

We did not really test the cancel script. That is, cancel is useful in situations when a normal shutdown fails (for example, some process is stuck in disk-sleep state), but we have not encountered such situations.

Datastore manager tries to unarchive tarballs

Since 3.6, the datastore's downloader.sh has a new feature of compressing data while transferring, and it therefore decompresses any tarball that is going to be saved on the filesystem. Because OpenVZ uses tarballs as virtual machine images, this behavior is unwanted.

We applied a quick-and-dirty workaround as described in the mailing list: http://lists.opennebula.org/pipermail/users-opennebula.org/2012-July/009527.html. As soon as issue http://dev.opennebula.org/issues/1352 is closed, this hack will be removed.

Other pages

Our test servers @ HPCC: HPCC-test-servers
