
mobilityfirst / SoftwareRelease / GENI_Deployment

Pre-requisites

  • GENI account: To create a slice and reserve resources on GENI, you must first get an experimenter account, e.g., from the GENI Experimenter Portal. The account is normally associated with a project, so you can either create a new project or join an existing one, such as 'MobilityFirst'.

The following steps assume experiments will be set up and run on the GENI slice from a separate control node. The controller must be able to reach at least one interface on each reserved host so that it can issue commands over SSH.

1. Prepare the Controller

Any platform with a Unix-like environment will do, including Cygwin.

  • Install gcf (GENI Control Framework); requires Python. This includes the Omni tools required to reserve resources.
  • Download the MobilityFirst software release to get the GENI-specific control scripts found under eval/geni/scripts.
  • Install a recent version of Perl, which is required by some helper scripts.
  • Set environment variables as below:
    #!bash
    export PYTHONPATH=path/to/gcf/src
    
    export PATH=${PATH}:path/to/gcf/src:path/to/gcf/examples
    

2. Create GENI Slice and Reserve Resources

The Omni command-line tool is one way (and the one we prefer) to create slices and reserve resources. One can also use the graphical tools noted on the GENI site and experimenter portal (e.g., Flack) to achieve identical results. The following instructions assume Omni.

2.a. Create a Slice

A slice binds together all slivers (resource allocations) created across one or more resource aggregates (RA). One or more public keys (effectively, users) can be associated with a slice, and these determine who can access its resources.

#!bash

omni.py createslice <slicename>

Note that the Omni tool authenticates the user before executing the operation. Authentication is handled through a clearinghouse (CH) that manages project and user identities; the CH also registers the slices for each user. A '-a URL' option can be used to specify a particular CH, which usually defaults to the CH associated with the GENI portal.

2.b. Create RSPECs

An RSPEC is an XML-based definition of the resources you want to request from a particular RA. These can describe hosts, VMs, links (tunnels or VLANs), or any other resource type supported by an aggregate. You can either write RSPECs from scratch or extract them from the resource advertisements returned by an RA when queried.
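As a rough illustration, a minimal request RSPEC for a single VM might look like the following. This is a sketch against the GENI RSpec v3 schema; the client_id, sliver type, and addresses are placeholders, not values from this deployment:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal illustrative request RSPEC (GENI RSpec v3). Names and
     addresses are placeholders to be adapted to your deployment. -->
<rspec type="request" xmlns="http://www.geni.net/resources/rspec/3">
  <node client_id="router1" exclusive="false">
    <sliver_type name="emulab-xen"/>
    <interface client_id="router1:if0">
      <ip address="10.44.0.1" netmask="255.255.255.0" type="ipv4"/>
    </interface>
  </node>
</rspec>
```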

2.c. Reserve Resources

The collection of resources reserved within an RA under a particular slice constitutes a sliver. Due to limitations in current GENI RA implementations, only one sliver can exist at an RA per slice, and a sliver is immutable once created.

Resources described in an RSPEC can be reserved either using Omni directly or using 'createsliver.sh' in the MF scripts directory:

#!bash
omni.py -a <RA-URL|RA-nickname> createsliver <slicename> <rspec-file>

Or, using the MF helper script to handle any number of RSPEC files:

#!bash

createsliver.sh <slicename> <rspec_file1> <rspec_file2> ...

This helper script is handy when all RSPECs are in a single folder and simple shell expansion can be used to specify them all. For this simpler form to work, we cheat a little by adding an AM-nickname comment to each RSPEC file at creation time; it can be placed anywhere in the file. The format of the comment is shown in the example below, taken from the RSPEC for the Rutgers InstaGENI AR:

#!xml

<!-- 
AM nickname: ru-ig
--> 
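The nickname can then be recovered from the file with a simple text match. The following is a sketch assuming the comment format above; the file name and parsing here are illustrative, not createsliver.sh's actual code:

```shell
#!/bin/sh
# Sketch: extract the AM nickname comment from an RSPEC file.
# File name and parsing are illustrative only.
rspec=ru-ig.rspec

cat > "$rspec" <<'EOF'
<!--
AM nickname: ru-ig
-->
EOF

# Print whatever follows 'AM nickname: ' on its line
am=$(sed -n 's/^AM nickname: *//p' "$rspec")
echo "am=$am"
```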

3. Setup Layer-2 Connectivity Between Routers

Since MF introduces a new non-IP layer-3 protocol, the router deployment requires layer-2 connectivity between neighbor nodes. This can be achieved in one of several ways supported within GENI - refer to GENI's Connectivity Overview page.

A separate 'fia-mobilityfirst' VLAN is currently set aside for MF experimentation. This currently connects 7 InstaGENI rack sites - Rutgers, NYU, BBN/GPO, NYSERNet, UIUC, UWisc, and U.Utah - and provides a single layer-2 broadcast domain across these locations. Since this is a LAN, a network topology will need to be enforced to achieve the required neighbor connectivity between routers. This is supported within the Click-based MF software router by passing a simple topology file that specifies the adjacency.

This VLAN can be easily shared across multiple experimenters with a little coordination by using distinct Ethertype values when framing layer-2 packets. This can be specified in the Click router configuration. If you decide to use this VLAN, please get in touch with Ivan Seskar or Kiran Nagaraja at WINLAB to get an Ethertype value assigned for your experiments so we can avoid conflicts.
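A hypothetical Click fragment illustrating the idea (the element names are standard Click, but the MF router's actual configuration is considerably more involved): each experiment tags its frames with its assigned Ethertype, and a Classifier matches the 2-byte type field at offset 12 to separate its traffic from other experiments sharing the VLAN.

```
// Sketch only: pull frames with Ethertype 0x27c0 off the shared VLAN
// and drop everything else.
FromDevice(eth1)
    -> c :: Classifier(12/27c0,   // our assigned Ethertype
                       -);        // everything else
c[0] -> Print(mf) -> Discard;     // frames for this experiment
c[1] -> Discard;                  // other experiments' traffic
```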

Current Ethertype Usage:

  Ethertype   Group/Univ.   Contact Person   Experiment Description
  0x27c0      WINLAB        Kiran Nagaraja   long-term deployment

4. Configure MobilityFirst Deployment

The MF helper scripts rely on the configuration specified in a single file named 'config'. There's a template 'sample-config' provided with the scripts that can be customized to specify particulars of the deployment.

The key properties to change in the 'config' file are:

4.a. GENI account

#!bash
# -----------------------
# GENI account properties
# -----------------------

key="/path/to/geni/private_key"
username="mygeniusername"

4.b. Network

MF deployment assumes a two-level topology of core and edge networks. The helper scripts identify interfaces on the router and host nodes as either edge- or core-facing based on whether the assigned IP belongs to a core- or edge-designated subnet. This is used, for instance, by the router control script to bring up either an edge router (which has additional host services) or a core router on a particular experiment node.

#!bash
# ---------------------
# network configuration
# ---------------------

#the IP subnet that will be used for GNRS service plane
Netfilter="10.44.0.0/16"

#the IP subnet that will be used for end-host access
Edgefilter="10.43.0.0/16"
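As an illustration of the classification rule (a sketch, not the helper scripts' actual code), an interface IP can be checked against the two /16 prefixes from the config file:

```shell
#!/bin/sh
# Sketch: classify an interface IP as core- or edge-facing by whether
# it falls under the Netfilter or Edgefilter subnet from 'config'.
# Matching is simplified here to /16 string prefixes.
Netfilter="10.44.0.0/16"
Edgefilter="10.43.0.0/16"

classify() {
    case "$1" in
        "${Netfilter%.0.0/16}".*)  echo core ;;
        "${Edgefilter%.0.0/16}".*) echo edge ;;
        *)                         echo unknown ;;
    esac
}

classify 10.44.2.1      # core-facing interface
classify 10.43.0.128    # edge-facing interface
```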

4.c. Source code

Details of which MF repository to use as the origin during install, and the specific branch or tag to be installed.

#!bash
# --------------------
# code base properties
# --------------------

#mf git repo; bitbucket  
repo_username="myrepousername"
mfgitorigin="https://${repo_username}@bitbucket.org/nkiran/mobilityfirst.git"

#mf branch to install
mfbranch="master"

#click release to install
clickversion="v2.0.1"

5. Build a nodes-file (a handy list of all reserved nodes)

Since we'll be using SSH to issue commands at each node, often the same command across several or all nodes, we need a list of the hostnames or control IPs to address each node. Remember that RSPECs only specify data-plane interfaces; the control interfaces are determined once the resources are allocated and active. We therefore provide a helper script that uses the RSPECs to query the RAs for the control interfaces and build a list. The script also gathers a few more details from the nodes, such as the names of the data-plane interfaces (e.g., eth0, eth1), since these are not uniformly assigned across RAs, and derives the GUID of each node by passing its OpenSSH host key through a hash function. All of these details can be obtained using the 'identifynodes.sh' helper script:

#!bash
identifynodes.sh <slicename> <rspec-file1> [<rspec-file2>] ... > <nodes-file>

By passing all RSPEC files to the script, we can build the list of all deployed nodes at once and capture the output in a nodes-file. Here's a sample nodes-file from the MF long-running deployment across 7 InstaGENI sites:

#!bash

#hostname,interface,hwaddr,ipv4addr,guid
pcvm5-44.instageni.gpolab.bbn.com,eth0,02:c8:9a:b2:f9:0c,192.1.242.158,343275562
pcvm5-44.instageni.gpolab.bbn.com,eth1,02:5a:68:40:8a:d7,10.44.2.1,343275562
pcvm5-45.instageni.gpolab.bbn.com,eth0,02:b1:fe:5a:30:12,192.1.242.159,1326973177
pcvm5-45.instageni.gpolab.bbn.com,eth1,02:9b:09:39:61:27,10.44.2.128,1326973177
pcvm3-30.instageni.illinois.edu,eth0,02:27:98:d4:e7:ad,72.36.65.65,1105395882
pcvm3-30.instageni.illinois.edu,eth1,02:06:aa:9e:17:37,10.44.9.1,1105395882
pcvm3-31.instageni.illinois.edu,eth0,02:ad:81:00:57:1b,72.36.65.68,1087418188
pcvm3-31.instageni.illinois.edu,eth1,02:61:12:f6:c7:17,10.44.9.128,1087418188
pcvm3-6.instageni.nysernet.org,eth0,02:2c:68:ee:70:6e,199.109.64.50,1864282817
pcvm3-6.instageni.nysernet.org,eth1,02:1c:54:e4:58:f8,10.44.18.1,1864282817
pcvm3-7.instageni.nysernet.org,eth0,02:a2:63:55:83:31,199.109.64.52,1008227076
pcvm3-7.instageni.nysernet.org,eth1,02:d9:73:3e:30:0b,10.44.18.128,1008227076
pcvm3-1.genirack.nyu.edu,eth0,02:af:aa:76:0f:74,192.86.139.64,743633713
pcvm3-1.genirack.nyu.edu,eth1,02:b9:93:28:39:08,10.43.4.1,743633713
pcvm3-1.genirack.nyu.edu,eth2,02:64:14:d8:86:98,10.44.4.1,743633713
pcvm3-3.genirack.nyu.edu,eth0,02:1a:0b:1f:01:fa,192.86.139.65,1650457279
pcvm3-3.genirack.nyu.edu,eth1,02:64:1a:41:39:0e,10.44.4.128,1650457279
pcvm3-3.genirack.nyu.edu,eth2,02:c5:ea:b3:66:ec,10.43.4.128,1650457279
pcvm3-3.instageni.rutgers.edu,eth0,02:6c:9a:f2:39:99,165.230.161.230,1455426667
pcvm3-3.instageni.rutgers.edu,eth1,02:a9:01:f8:f4:78,10.43.0.1,1455426667
pcvm3-3.instageni.rutgers.edu,eth2,02:c4:01:9e:d9:9f,10.44.0.1,1455426667
pcvm3-4.instageni.rutgers.edu,eth0,02:a5:c7:4d:6d:5b,165.230.161.231,1394255251
pcvm3-4.instageni.rutgers.edu,eth1,02:d4:ce:eb:6c:04,10.43.0.128,1394255251
pcvm3-4.instageni.rutgers.edu,eth2,02:2b:39:c5:4b:9c,10.44.0.128,1394255251
pcvm3-1.utah.geniracks.net,eth0,02:9f:d8:b1:37:34,155.98.34.130,603169490
pcvm3-1.utah.geniracks.net,eth1,02:37:37:70:fc:37,10.44.14.1,603169490
pcvm3-2.utah.geniracks.net,eth0,02:ee:0d:90:2f:df,155.98.34.131,646019851
pcvm3-2.utah.geniracks.net,eth1,02:17:a0:cb:59:c0,10.44.14.128,646019851
pcvm3-24.instageni.wisc.edu,eth0,02:93:6c:e5:bf:39,128.104.159.129,1852534458
pcvm3-24.instageni.wisc.edu,eth1,02:ee:22:cc:d9:3e,10.44.8.1,1852534458
pcvm3-24.instageni.wisc.edu,eth2,02:38:f4:c9:f3:a1,10.43.8.1,1852534458
pcvm3-25.instageni.wisc.edu,eth0,02:9f:f7:47:c3:a8,128.104.159.131,336385182
pcvm3-25.instageni.wisc.edu,eth1,02:2e:ad:b2:a0:df,10.44.8.128,336385182
pcvm3-25.instageni.wisc.edu,eth2,02:a2:e0:55:d7:87,10.43.8.128,336385182

Note that there is one line per interface on each node. Some nodes have 'core' interfaces, some 'edge', and some both; those with both will run edge-router configurations.
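Because the nodes-file has one line per interface, scripts that address each host once need to de-duplicate by hostname. A sketch of that step (assumed field layout from the sample above; the helper scripts' internals may differ):

```shell
#!/bin/sh
# Sketch: derive one SSH target per host from a nodes-file by taking
# the first (hostname) field of each non-comment line, de-duplicated.
cat > nodes.sample <<'EOF'
#hostname,interface,hwaddr,ipv4addr,guid
host-a,eth0,02:00:00:00:00:01,192.1.242.158,111
host-a,eth1,02:00:00:00:00:02,10.44.2.1,111
host-b,eth0,02:00:00:00:00:03,192.1.242.159,222
EOF

hosts=$(awk -F, '!/^#/ && !seen[$1]++ {print $1}' nodes.sample)
echo "$hosts"
```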

6. Install MobilityFirst on GENI Nodes

The router, naming service (GNRS), host stack, and network API libraries can be installed individually according to the role of a node. Alternatively, the provided helper script can install all of these on all of the nodes in parallel. The 'installmf.sh' script uses the nodes-file assembled in the previous step.

#!bash
installmf.sh <nodes-file>

The install script first copies the configuration file and a local installation script, 'localinstallmf.sh', to each node, then executes the local script simultaneously across all nodes. Note that the script handles duplicate host entries in the nodes-file (unless they are DNS aliases), and nodes may be excluded by commenting out ('#') the corresponding nodes-file entries.
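The fan-out can be pictured roughly as below. This is an assumed sketch, not the real installmf.sh: 'remote_install' here is a hypothetical stand-in for the scp/ssh calls that copy and execute localinstallmf.sh on each host.

```shell
#!/bin/sh
# Sketch of installmf.sh's parallel fan-out (assumed, not the real
# script): run a per-node install step on every unique host in the
# nodes-file in the background, then wait for all of them.
remote_install() { echo "installing on $1"; }   # stand-in for scp+ssh

cat > nodes.sample <<'EOF'
#hostname,interface,hwaddr,ipv4addr,guid
host-a,eth0,mac,ip,1
host-b,eth0,mac,ip,2
EOF

for h in $(awk -F, '!/^#/ && !seen[$1]++ {print $1}' nodes.sample); do
    remote_install "$h" &
done
wait
```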

Note on Git Authentication:

To access non-public git repos (e.g., the current MF repo) during installation on remote nodes, the username/password can be entered in more than one way. For automation, however, it's simplest to install a '.netrc' file in the home directory on each node, containing the following single line:

#!plain-text
machine <hostname> login <username> password <password>

For the wary experimenter who prefers interactive password entry, look into the git property 'core.askpass'. You would have to modify the 'localinstallmf.sh' script, where this property can be set as shown below:

#!bash
git config --global core.askpass /usr/lib/git-core/git-gui--askpass

7. Bring up Routers and GNRS

Helper scripts simplify bringing up and controlling router and GNRS instances on the GENI nodes. For instance, 'routerctl.sh' automates running core and edge routers with the appropriate configurations on the designated GENI nodes. Whether a node runs a core or edge router is presently determined by implicit rules on the availability of interfaces on the edge network (i.e., an interface is assigned an IP with the edge-net prefix). A topology file that establishes GUID-based adjacency for each router is a required input. Look at the section on Topology Control for details on how to compose a deployment topology.

#!bash

> routerctl.sh
usage: routerctl.sh <nodes-file> <cmd=list|start|stop> <topologyfile>

> routerctl.sh mynodes start mytopo

The following brings up a GNRS server instance on each of the router nodes. The provided configuration is customized per instance where needed (e.g., the server listen interface):

#!bash
> gnrsctl.sh 
Usage: ./gnrsctl.sh <nodes-file> {config|start|stop|clean} [options]
        options:
            if 'config': <template-config-dir> 

> gnrsctl.sh nodes.all config mygnrsconfdir

> gnrsctl.sh nodes.all start

Template configuration files for the GNRS servers can be found under eval/geni/conf/gnrs-srvr.

8. Bring up Host Stacks

The following brings up the host stack on nodes determined to be clients. This is currently decided by the implicit rule that client nodes have core interfaces whose last octet is greater than 128:

#!bash

> hostctl.sh 
Usage: hostctl.sh <nodes-file> <cmd=start|stop>

hostctl.sh nodes.all start
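The last-octet rule can be sketched as follows. This is illustrative only, mirroring the implicit rule stated above rather than hostctl.sh's actual code:

```shell
#!/bin/sh
# Sketch: treat a node as a client when the last octet of its
# core-network IP is greater than 128 (mirrors the rule in the text,
# not the actual hostctl.sh logic).
is_client() {
    last=${1##*.}        # everything after the final dot
    [ "$last" -gt 128 ]
}

is_client 10.44.2.200 && echo "10.44.2.200: client"
is_client 10.44.2.1   || echo "10.44.2.1: not a client"
```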
