
If you have issues starting Docker containers, or you cannot reach them over the network after starting them, and you are on centos7u2, chances are there is a conflict with shorewall. Shorewall clobbers some of the iptables configuration that docker creates. This happens when the docker service is started first; you can fix it by simply restarting the docker service, which recreates the necessary iptables configuration.

<pre>
[root@rb-centos7u2trunk4 cloudera]# docker run --net=bridge --name=cloudera -h cloudera.bridge -p 7180:7180 --privileged -t -i -v /sys/fs/cgroup:/sys/fs/cgroup rayburgemeestre/cloudera-master:3
docker: Error response from daemon: driver failed programming external connectivity on endpoint cloudera (9bddcfbe80f5b6871193e680d422c4aed9212cd3da523889e2fdad95d9bf3752): iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 7180 -j DNAT --to-destination 172.17.0.2:7180 ! -i docker0: iptables: No chain/target/match by that name.
</pre>
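
A minimal recovery sequence, assuming systemd manages the docker service (the container name and run options are the ones from the failing command above):

<pre>
# Restarting the docker service recreates the DOCKER chain in the nat table
systemctl restart docker

# Remove the half-created container and try again
docker rm -f cloudera
docker run --net=bridge --name=cloudera -h cloudera.bridge -p 7180:7180 --privileged -t -i \
    -v /sys/fs/cgroup:/sys/fs/cgroup rayburgemeestre/cloudera-master:3
</pre>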

h1. Requirements

h2. Get docker 1.8.x from:

h2. Known issues:

  • I've had problems with the aufs storage backend for Docker not working properly; in that case change the Docker daemon settings to use a different storage driver such as devicemapper or btrfs (see the sketch below).
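
A minimal sketch of switching the storage driver, assuming CentOS 7 with the docker package's /etc/sysconfig/docker file (the option name and file location may differ for your installation):

<pre>
# /etc/sysconfig/docker -- pass the storage driver to the docker daemon
OPTIONS="--storage-driver=devicemapper"
</pre>

After restarting the daemon (systemctl restart docker), docker info should report the new Storage Driver. Note that images built under the old driver are no longer visible and need to be rebuilt.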

h2. Regarding firewall

You may want to edit the shorewall rules to open port 7180 (for Cloudera Manager), and possibly some more ports. Then go to the Krusty interface and make sure the port is open there as well. Then reload the shorewall rules; note that this clobbers some of the entries the docker daemon made in iptables, so you need to restart docker afterwards (a sketch of the sequence follows below). Now docker should work and you can start the instances. The start bash scripts already forward port 7180 from the docker container to the host, so CLUSTER_IP:7180 should work if you followed these steps.
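
A sketch of that sequence, assuming shorewall's rules live in /etc/shorewall/rules and both services are managed by systemd (adjust the zone names to your setup):

<pre>
# /etc/shorewall/rules -- allow Cloudera Manager traffic to the host
ACCEPT    net    $FW    tcp    7180
</pre>

<pre>
# reload shorewall, then restart docker so it re-adds its own iptables entries
shorewall reload
systemctl restart docker
</pre>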

h1. Usage

Available scripts:

In the cloudera folder:

  • bash build_all.sh - build all cloudera docker images
  • bash start_all.sh - start all cloudera docker images

In the hortonworks folder:

  • bash build_all.sh - build all hortonworks docker images
  • bash start_all.sh - start all hortonworks docker images

Use docker ps to see if everything started.
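
For example, assuming the folders sit at the repository root, building and starting the Cloudera images looks like this:

<pre>
cd cloudera
bash build_all.sh
bash start_all.sh

# verify that all containers are up
docker ps
</pre>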

h2. More info

The Cloudera Dockerfiles are based on the instructions from http://www.cloudera.com/content/www/en-us/documentation/enterprise/latest/topics/cm_ig_install_path_b.html
The Hortonworks Dockerfiles are based on the Hortonworks manuals as well, plus some manual investigation.

h1. TODO

You should end up with something like this:

<pre>
root@FIREFLY:/projects/docker-hadoop# docker ps
CONTAINER ID   IMAGE                                COMMAND            CREATED         STATUS         PORTS                                                       NAMES
307d148ee120   rayburgemeestre/hortonworks-slave    "/usr/sbin/init"   3 minutes ago   Up 3 minutes                                                               node105
0aa7fbc7683c   rayburgemeestre/hortonworks-slave    "/usr/sbin/init"   3 minutes ago   Up 3 minutes                                                               node104
21a3de49e999   rayburgemeestre/hortonworks-slave    "/usr/sbin/init"   3 minutes ago   Up 3 minutes                                                               node103
3565b18caee2   rayburgemeestre/hortonworks-slave    "/usr/sbin/init"   3 minutes ago   Up 3 minutes   0.0.0.0:8088->8088/tcp                                      node102
f6b6039d4d90   rayburgemeestre/hortonworks-slave    "/usr/sbin/init"   3 minutes ago   Up 3 minutes   0.0.0.0:16010->16010/tcp, 0.0.0.0:50070->50070/tcp          node101
10a5ade72a83   rayburgemeestre/hortonworks-master   "/usr/sbin/init"   4 minutes ago   Up 4 minutes   0.0.0.0:8080->8080/tcp, 0.0.0.0:8440-8441->8440-8441/tcp   horton
9953b53a449d   rayburgemeestre/cloudera-slave       "/usr/sbin/init"   4 minutes ago   Up 4 minutes                                                               node005
ec8069001ef0   rayburgemeestre/cloudera-slave       "/usr/sbin/init"   4 minutes ago   Up 4 minutes                                                               node004
f4e8351b713d   rayburgemeestre/cloudera-slave       "/usr/sbin/init"   4 minutes ago   Up 4 minutes                                                               node003
f19d692df848   rayburgemeestre/cloudera-slave       "/usr/sbin/init"   4 minutes ago   Up 4 minutes                                                               node002
00607743ae67   rayburgemeestre/cloudera-slave       "/usr/sbin/init"   4 minutes ago   Up 4 minutes   0.0.0.0:8888->8888/tcp                                      node001
265a2e2a10fc   rayburgemeestre/cloudera-master      "/usr/sbin/init"   5 minutes ago   Up 5 minutes   0.0.0.0:7180->7180/tcp                                      cloudera
</pre>

The hosts file inside the cloudera master should look like this:

<pre>
172.17.0.48 cloudera
172.17.0.48 cloudera.bridge
172.17.0.49 node001
172.17.0.49 node001.bridge
172.17.0.50 node002
172.17.0.50 node002.bridge
172.17.0.51 node003
172.17.0.51 node003.bridge
172.17.0.52 node004
172.17.0.52 node004.bridge
172.17.0.53 node005
172.17.0.53 node005.bridge
172.17.0.54 horton
172.17.0.54 horton.bridge
172.17.0.55 node101
172.17.0.55 node101.bridge
172.17.0.56 node102
172.17.0.56 node102.bridge
172.17.0.57 node103
172.17.0.57 node103.bridge
172.17.0.58 node104
172.17.0.58 node104.bridge
172.17.0.59 node105
172.17.0.59 node105.bridge
</pre>

If the .bridge alias comes first, Cloudera Manager will get confused. Because the instances are created with a five-second delay between them, the hosts file should end up ordered like the output above.
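
Purely as an illustration of that delay (this is not the actual start_all.sh, and the run options of the real scripts differ), staggering the docker run calls keeps the IP assignment, and therefore the hosts file ordering, predictable:

<pre>
# hypothetical excerpt: start the slaves one by one so docker hands out
# the 172.17.0.x addresses in a predictable order
for node in node001 node002 node003 node004 node005; do
    docker run -d --name=$node -h $node.bridge --privileged \
        -v /sys/fs/cgroup:/sys/fs/cgroup rayburgemeestre/cloudera-slave
    sleep 5
done
</pre>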