Despite what its name suggests, the Robot Operating System (ROS) is not a complete operating system, i.e. it does not seek to replace the function of Linux. It is in fact a meta operating system, which provides many useful patterns that help speed up the development of networked cyber-physical systems. Notably:
- A framework for developing distributed components (nodes)
- Prevents failures in non-critical code from affecting critical code
- Forces separation between distinct modules of code
- Promotes intra- and inter-project reusability
- A blackboard system for passing messages between nodes
- Visualization frameworks for rapid GUI development
- A complete end-to-end physics simulation stack
- An overlay-based build system (catkin) for linking and building projects
- An advanced logging and playback system
ROS is a large project with many functions, and so there is a reasonable learning curve. The purpose of this tutorial is to bootstrap this process, with a particular emphasis on applying ROS to the ROSELINE project.
STEP 1 - Install ROS on the host PC
Follow these instructions to install ros-indigo-desktop-full on your host PC: http://wiki.ros.org/indigo/Installation/Ubuntu
Don't forget to run
sudo rosdep init and
rosdep update afterwards.
The installation will likely take around an hour, as ROS is a large system.
STEP 2 - Familiarize yourself with ROS
ROS relies heavily on the notion of
overlays. An overlay links a working environment to a base set of tools without actually polluting the base environment with all the files (this nested setup preserves the dependencies between separate toolchains).
By default, ROS installs its base system to the location
/opt/ros/indigo, and provides several useful tools. In the first instance the Linux system can't see these tools, so you have to tell your system about them. To do this, you type the following in a bash prompt:
source /opt/ros/indigo/setup.bash. You can add this command to the bottom of the file
~/.bashrc to have it run every time bash starts.
The command effectively adds directories from ROS to your binary and library search path, and sets a few environment variables to tell ROS applications where important things are. You can inspect what it's done by typing the following command:
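In essence, sourcing the setup file does something like the following. This is a simplified sketch for illustration only; the real generated script also chains overlay workspaces and sets further variables:

```shell
# Simplified sketch of what `source /opt/ros/indigo/setup.bash` does.
# Tell ROS tools where the base installation lives:
export ROS_ROOT=/opt/ros/indigo/share/ros
export ROS_PACKAGE_PATH=/opt/ros/indigo/share:/opt/ros/indigo/stacks
export ROS_MASTER_URI=http://localhost:11311
# Prepend the ROS binaries and libraries to the search paths:
export PATH="/opt/ros/indigo/bin:$PATH"
export LD_LIBRARY_PATH="/opt/ros/indigo/lib:${LD_LIBRARY_PATH:-}"
```

Because the ROS directories are prepended, they take precedence over system defaults, which is exactly how an overlay workspace later shadows the base install.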
env | grep ROS. You will see something like this:
ROS_ROOT=/opt/ros/indigo/share/ros
ROS_PACKAGE_PATH=/opt/ros/indigo/share:/opt/ros/indigo/stacks
ROS_MASTER_URI=http://localhost:11311
ROS_DISTRO=indigo
ROS_ETC_DIR=/opt/ros/indigo/etc/ros
The most important variable above is
ROS_MASTER_URI. At the heart of ROS is a central application called
roscore, which presents itself on a TCP port (allowing nodes to run remotely, which is extremely useful in a networked system). The
ROS_MASTER_URI environment variable tells any ROS node that launches itself where to find its
master. Another really important environment variable that is not set by default is
ROS_IP. This variable tells any ROS node what its current IP is, which is really important if you are not running a DNS server on your network.
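The way a node consumes these two variables can be sketched as follows. This is a simplification for illustration, not the actual ROS client library code:

```python
import os

def resolve_master_uri(env):
    # A node reads ROS_MASTER_URI from its environment; if it is unset,
    # ROS falls back to the local default of http://localhost:11311.
    return env.get("ROS_MASTER_URI", "http://localhost:11311")

def advertised_ip(env):
    # ROS_IP, when set, overrides hostname-based resolution so that peers
    # on a network without DNS can still reach this node. When unset, the
    # node falls back to advertising its hostname.
    return env.get("ROS_IP") or None

# Example: a node on a DNS-less network pointing at a remote master.
env = {"ROS_MASTER_URI": "http://10.42.0.1:11311", "ROS_IP": "10.42.0.100"}
print(resolve_master_uri(env))  # http://10.42.0.1:11311
print(advertised_ip(env))       # 10.42.0.100
```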
A ROS project may contain many packages, each of which may provide many applications (also known as
nodes). To run a node you'll need to first start the master. So, in one terminal type:
roscore. You should see the server start with a bit of debug output.
In another terminal type:
rosrun gazebo_ros gazebo world:=worlds/empty_world.world. This command instructs ROS to find the package
gazebo_ros (a simulation framework) and then run the node
gazebo (a simulator) with a single argument
world:=worlds/empty_world.world, which is the default world that should be loaded by the simulator.
You should see something that looks like an empty world...
Starting applications in this way is extremely tedious, as you need to first start master and then all the applications one-by-one. For this reason, launch files were created. Quit the simulator and cancel the ROS master by pressing ctrl+c in both terminals. Now, just run this command:
roslaunch gazebo_ros empty_world.launch
You'll see it does exactly the same thing, but much more concisely.
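A launch file is just an XML description of the nodes to start; roslaunch spins up a master automatically if one is not already running. A minimal hypothetical example might look like this (my_pkg, talker and listener are placeholder names, not real packages):

```xml
<launch>
  <!-- Hypothetical example: both nodes come from a placeholder package. -->
  <node name="talker" pkg="my_pkg" type="talker" output="screen"/>
  <node name="listener" pkg="my_pkg" type="listener" output="screen"/>
</launch>
```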
It is often useful to be able to see what messages are being exchanged between nodes. Messages are defined by structures and sent over
topics, to which
subscribers bind. You can examine messages with the
rostopic tool. With the master launched, type the following:
rostopic pub /hello std_msgs/String "hello world"
This command encodes the phrase
hello world into a
std_msgs/String type, and publishes it on the
/hello topic. The forward slash indicates that the message is relative to the global root; nodes typically publish messages relative to their local namespace, for example
~/hello (which translates to
/<node name>/hello). Now, in a separate terminal, subscribe to the topic in the following way:
rostopic echo /hello
You should see something like this:
data: hello world
---
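The name-resolution rule described above can be sketched as a small function. This is a simplified illustration of the convention, not the actual ROS client library implementation:

```python
def resolve_name(name, node_name, namespace="/"):
    """Sketch of ROS graph name resolution (simplified).

    '/...'  -> global name, used verbatim
    '~...'  -> private name, resolved under the node's own name
    other   -> relative name, resolved against the current namespace
    """
    if name.startswith("/"):
        return name
    if name.startswith("~"):
        return node_name.rstrip("/") + "/" + name.lstrip("~/")
    return namespace.rstrip("/") + "/" + name

# A node called /talker publishing on its private 'hello' topic:
print(resolve_name("~hello", "/talker"))  # /talker/hello
```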
STEP 3 - Compile the ROS-based ROSELINE applications
Recall that ROS is driven by overlays. To compile your own project you need to set it up as an overlay above the base system. This is how you do it:
cd <roseline>/applications/nsf-localization/src
catkin_init_workspace
This creates a symbolic link in the current directory that points to a base CMakeLists.txt file (build script) provided by the ROS system. You can now build the ROS project using
catkin_make. This is really just a convenience wrapper around CMake and make:
cd <roseline>/applications/nsf-localization
catkin_make
During the compilation process catkin puts intermediate build products, such as the header files generated from message definitions, in the ./build directory. The compiled products are then written to the ./devel directory. In order to run the compiled applications you'll need to show bash where to find them by sourcing the overlay's setup file:
source <roseline>/applications/nsf-localization/devel/setup.bash
Now, try running the user interface:
roslaunch interface experiment.launch
The best way to learn how to develop your own applications is to follow the tutorials on the ROS wiki, and look at the example code I have written as part of the ROSELINE project. Here are some useful tips and tricks:
- If you create a file called
CATKIN_IGNORE in any package root, then
catkin_make will skip building that package. This is very useful when you need to isolate build problems.
- Within each package, create standardised folders (such as
share) for your shared headers, source code, message definitions, launch files and shared resources. Keeping to this nomenclature helps other developers navigate your code.
- Use the
find_package CMake macro to help find dependencies. Hard-coding include and library locations makes it difficult for others to use your code.
- ROS is all about code reuse. Have a look at
rqt for visual debugging tools, and
control_msgs for standard message types.
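As an illustration of the find_package advice above, a minimal package build script might look like the following sketch (the package name and dependencies are placeholders):

```cmake
# Hypothetical CMakeLists.txt fragment: dependencies are located via
# find_package rather than hard-coded paths.
cmake_minimum_required(VERSION 2.8.3)
project(my_pkg)
find_package(catkin REQUIRED COMPONENTS roscpp std_msgs)
catkin_package()
include_directories(${catkin_INCLUDE_DIRS})
add_executable(talker src/talker.cpp)
target_link_libraries(talker ${catkin_LIBRARIES})
```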
STEP 4 - Debugging and logging
Hardware experiments are tricky to run, and so it is useful to be able to replay an experiment. ROS provides a tool called
rosbag to help you with this. The crux of this tool is that it binds to the
master and records messages into a *.bag file. You can then replay these messages at a later time, effectively simulating the experiment.
To record an experiment, type the following with the ROS master started (the
-a flag instructs
rosbag to record every message that passes through the master):
rosbag record -a
Some time later, press ctrl+c and you will see that a timestamped bag file has been added to the directory from which you ran the command. Using rosbag you can compress the data (rosbag compress), get some info about its contents (rosbag info) and replay the stream into a running ROS master (rosbag play). Very useful.
STEP 5 - Deploy across multiple machines
There are a few quirks when dealing with networked systems which you should be aware of. Let's assume that we have three devices: a desktop
controller (on which the master runs) and two slave BeagleBone Black devices,
alpha and bravo. What we would like to do is
roslaunch our application on the central controller, to which the remote slave nodes are bound.
Part of this process involves starting nodes on remote machines. The way ROS achieves this is through passwordless SSH. For this to work, you'll need to add your controller ssh public key (contents of
~/.ssh/id_rsa.pub on your central controller) to the
/root/.ssh/authorized_keys on each of your slave nodes.
Another important part of the process is making sure that the compiled ROS is available to the nodes. You could copy it to each node, but it's generally easier to just mount an NFS share from the central controller onto the /root/shared directory of each node.
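For example, the share could be declared in each slave's /etc/fstab along these lines (the export path and mount options here are assumptions; replace aaa.bbb.ccc.ddd with the controller's actual IP):

```
aaa.bbb.ccc.ddd:/export/roseline  /root/shared  nfs  defaults  0  0
```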
Finally, you need to make sure that each node knows its own IP address, as well as the IP and port of the master to which it must connect. The easiest way to do this is to just add the following environment variables to the end of
/root/.bashrc (replace aaa.bbb.ccc.ddd with the actual IPs).
export ROS_MASTER_URI=http://aaa.bbb.ccc.ddd:11311
export ROS_IP=aaa.bbb.ccc.ddd
We are now in a position to launch the project. Here is a sample launch file in which the two remote machines are defined, along with the env-loader that should be run before nodes are launched (this just bootstraps bash remotely with paths to libraries and binaries).
<launch>
  <machine name="alpha" address="10.42.0.100" user="root" env-loader="/root/shared/roseline/applications/nsf-localization/devel/env.sh" timeout="30"/>
  <machine name="bravo" address="10.42.0.101" user="root" env-loader="/root/shared/roseline/applications/nsf-localization/devel/env.sh" timeout="30"/>
  <node name="localization" pkg="interface" type="localization" respawn="false" output="screen">
    <param name="threshold" type="double" value="0.5" />
    <param name="minimum" type="int" value="6" />
  </node>
  <node machine="alpha" name="a_anchor" pkg="tdoa" type="receiver" respawn="false" output="screen">
    <param name="n" type="string" value="alpha" />
    <param name="x" type="double" value="1.03505" />
    <param name="y" type="double" value="0.08255" />
    <param name="z" type="double" value="0.0" />
  </node>
  <node machine="bravo" name="b_anchor" pkg="tdoa" type="receiver" respawn="false" output="screen">
    <param name="n" type="string" value="bravo" />
    <param name="x" type="double" value="0.83185" />
    <param name="y" type="double" value="0.08255" />
    <param name="z" type="double" value="0.0" />
  </node>
</launch>
Finally, on a side note, it is sometimes necessary to run more than one ROS master. A good case for this is when wireless backbones are used and connectivity is not guaranteed. In a typical ROS setup, the disconnection of a node from the master causes problems. However, if a master is run locally on each platform, then critical nodes always stay online. A special node running on each platform forwards messages between the masters. This is referred to as a
multimaster setup, and I would suggest multimaster_fkie if you choose to go this route.