[Docker] Backup / Restore Images

Images should be a kind of "throw-away" format: You have your Dockerfile, and as soon as you need to run a specific container, you should just build it from scratch - all the latest security updates and software versions, great. However, sometimes you have a bit of specialized stuff which you want to back up for later use.

I found a good answer to that topic: http://stackoverflow.com/questions/26707542/how-to-backup-restore-docker-image-for-deployment


Backup:
docker save myusername/myproject:latest | gzip -c > myproject_img_bak20141103.tgz

Restore:
gunzip -c myproject_img_bak20141103.tgz | docker load
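
If you do this regularly, a small wrapper script might help - a minimal sketch, assuming the same image name as above:

# Hypothetical helper: dated backup plus integrity check
IMAGE="myusername/myproject:latest"
BACKUP="myproject_img_bak$(date +%Y%m%d).tgz"
docker save "$IMAGE" | gzip -c > "$BACKUP"
# gzip -t only tests the archive, it does not extract anything
gzip -t "$BACKUP" && echo "backup ok: $BACKUP"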

[Docker] Sonarqube with Gogs/Drone to check software quality

Sonarqube is a really useful tool to check for different kinds of errors in all kinds of programming languages. If you want to check out Sonarqube in a live demo, just go over to https://sonarqube.com/. In the last two parts of my Docker series we created a Gogs Git repo as a Docker container and then went over to integrating Gogs with Drone (version 0.4), a CI build server - also as a Docker container. Today, we want to install Sonarqube as a tool to check upon our software quality, so that - as soon as we push an update to our Gogs project - Drone will execute an analysis using Sonarqube, which will give us information about different kinds of errors and code quality.

# We will start by creating the needed folders:
sudo mkdir /var/sonarqube
sudo mkdir /var/sonarqube/data
sudo mkdir /var/sonarqube/extensions
sudo chown -R yourusername:yourusername /var/sonarqube
# Then we change our drone docker-compose.yml, so that drone and sonarqube will be started at the same time

sonarqube:
  restart: unless-stopped
  image: sonarqube:lts-alpine
  volumes:
    - /var/sonarqube/data:/opt/sonarqube/data
    - /var/sonarqube/extensions:/opt/sonarqube/extensions
  environment:
    - SONARQUBE_HOME=/opt/sonarqube
    - SONARQUBE_JDBC_USERNAME=sonar
    - SONARQUBE_JDBC_PASSWORD=sonar
    - SONARQUBE_JDBC_URL=
  ports:
    - "9000:9000"
    - "9092:9092"
drone:
  restart: unless-stopped
  image: drone/drone:0.4.2
  volumes:
    - /var/drone:/var/lib/drone
    - /var/run/docker.sock:/var/run/docker.sock
  env_file:
    - ./dronerc
  ports:
    - "8000:8000"
  links:
    - sonarqube

After that, we can start the services with docker-compose up -d and see Sonarqube on http://IPADDRESS:9000 (it needs some time to load...).
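
If you want to script the wait instead of refreshing the browser, here is a small sketch (IPADDRESS is a placeholder, as above):

# Poll until Sonarqube answers on its HTTP port
until curl -sf http://IPADDRESS:9000 > /dev/null; do
  echo "waiting for Sonarqube..."
  sleep 5
done
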
To run a check of, e.g., a Java project, we need to write a new pom.xml file:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.example.appexample</groupId>
  <artifactId>appexample</artifactId>
  <version>1.0</version>

  <name>phpTest</name>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <sonar.language>java</sonar.language>
    <sonar.sources>src</sonar.sources>
    <sonar.exclusions>src/test/test.php, src/test/test/*</sonar.exclusions>
  </properties>

</project>

and we need a new .drone.yml

cache:
  mount:
    - /drone/.m2
build:
  main:
    image: maven:3-jdk-8-onbuild
    commands:
      - mvn sonar:sonar -Dsonar.host.url=http://IPOFTHESONARQUBESERVER:9000 -Dmaven.repo.local=/drone/.m2
      - echo "Sonarqube has been completed."
      - mvn clean install -Pcoverage -Dmaven.repo.local=/drone/.m2
      - mvn package -Dmaven.repo.local=/drone/.m2
      - mvn test -Dmaven.repo.local=/drone/.m2
      - echo "Build has been completed."
debug: true

And that's it :). Log into Drone, activate your repo as a valid CI repo, and after that - with every push to that Gogs repo - a new Sonarqube analysis should be performed.
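
If you want to test the trigger without changing any code, an empty commit works - a hypothetical example:

# Push an empty commit to kick off a new Drone build / Sonarqube analysis
git commit --allow-empty -m "trigger CI"
git push origin master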

[Docker] CI with Drone and Gogs

Now that we have a running Gogs installation for our source code (see here) - we can use Drone to build working software from those repos. Drone is a Continuous Integration system that comes in two flavors: Drone.io - the hosted service you could use - or Drone - the self-hosted service. We want to use the latter one. To install Drone in our existing Docker setup (with Gogs already installed) we need to complete the following steps:

# Create docker-compose.yml in ~/drone/
cd ~
mkdir drone
cd drone
vi docker-compose.yml
# Copy this content into your docker-compose.yml file

drone:
  restart: unless-stopped
  image: drone/drone:0.4.2
  volumes:
    - /var/drone:/var/lib/drone
    - /var/run/docker.sock:/var/run/docker.sock
  env_file:
    - ./dronerc
  ports:
    - "8000:8000"

# Create the dronerc file
vi dronerc
# Copy this content into your dronerc file
# replace yourserverip with the ip or dns name of your server, i.e. example.com
# replace the gogsport with the port of the http port of the gogs installation, i.e. was 3000 in our example

REMOTE_DRIVER=gogs
REMOTE_CONFIG=https://yourserverip:gogsport?open=false
DEBUG=true
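
# As a concrete (made-up) example: for a Gogs instance reachable at
# example.com with HTTP port 3000, the finished dronerc would read:
#   REMOTE_DRIVER=gogs
#   REMOTE_CONFIG=https://example.com:3000?open=false
#   DEBUG=true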

# Create the needed folders
sudo mkdir /var/drone
sudo chown -R yourusername:yourusername /var/drone/

After that, docker-compose up will start the service; you can end it via CTRL + C and really start it in detached mode with docker-compose up -d.
You can then go to http://yourserverip:8000, log into Drone with your Gogs login and allow access.

The current readme for drone can be found on http://readme.drone.io/usage/overview/ and you'll need to include a .drone.yml in your repos to really build something.

In my example, I used this .drone.yml

cache:
  mount:
    - /drone/.m2

build:
  main:
    image: maven:3-jdk-8-onbuild
    commands:
      - mvn clean install -Pcoverage -Dmaven.repo.local=/drone/.m2
      - mvn package -Dmaven.repo.local=/drone/.m2
      - mvn test -Dmaven.repo.local=/drone/.m2
      - echo "Build has been completed."

debug: true

and this pom.xml

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.mycompany.app</groupId> 
  <artifactId>WhereIsMyPi</artifactId>
  <version>1.0</version>
  <packaging>jar</packaging>
 
  <name>Where is my Pi</name>
  <url>http://www.nico-maas.de</url>

  <build>
    <sourceDirectory>src</sourceDirectory>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.5</version>
        <configuration>
          <source>1.8</source>
          <target>1.8</target>
        </configuration>
      </plugin>
      <plugin>
       <!-- Build an executable JAR -->
       <groupId>org.apache.maven.plugins</groupId>
       <artifactId>maven-jar-plugin</artifactId>
       <version>2.4</version>
       <configuration>
        <archive>
          <manifest>
            <addClasspath>true</addClasspath>
            <classpathPrefix>lib/</classpathPrefix>
            <mainClass>WMP</mainClass>
          </manifest>
        </archive>
       </configuration>
      </plugin>
    </plugins>
  </build> 

</project>

to build my "WhereIsMyPi" project :).
Happy building!

[Docker] Keep Docker Container up-to-date with Watchtower

If you're using Docker, you know you will need to update your containers from time to time by hand - mostly with a docker pull repo/DockerContainerName and a docker-compose up -d. If you want to automate this, you can now use Watchtower: https://github.com/CenturyLinkLabs/watchtower

Using it is very easy. Just run the following command to let Watchtower automatically update all your containers:

docker run -d \
--name watchtower \
-v /var/run/docker.sock:/var/run/docker.sock \
centurylink/watchtower

If you only want chosen containers to be updated, include the names of the running containers as arguments, i.e.:


docker run -d \
--name watchtower \
-v /var/run/docker.sock:/var/run/docker.sock \
centurylink/watchtower gogs_gogs_1 drone_drone_1
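
Watchtower itself is just another container, so you can follow what it is doing with the standard Docker commands:

# Watch the update checks live
docker logs -f watchtower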

This Info was brought to you by Christopher Perrin @ https://blog.screebo.net/ ;).

[Docker] Install Gogs as Docker container

Gogs (Go Git Service) is an awesome Github/Gitlab-like solution, completely written in Go(lang) - which makes it blazing fast - and lightweight.
And as there is even a Docker container of Gogs available - I thought - why not use this to finally move from my SSH-only Git to a "real" Git service :).

We are going to use docker-compose in this example - so I assume you have installed this as shown in my last guide on Docker.

# Create docker-compose.yml in ~/gogs/
cd ~
mkdir gogs
cd gogs
vi docker-compose.yml
# Copy this content into your docker-compose.yml file

gogs:
  restart: unless-stopped
  image: gogs/gogs
  volumes:
    - /var/gogs:/data
  ports:
    - "10022:22"
    - "3000:3000"

After that, save the file and issue docker-compose up in your terminal. Docker will start to pull the Gogs image from the hub and launch Gogs. All your Gogs files will be saved on your local drive in /var/gogs. You can find the overview of the file structure here.
After Docker is ready - launch your favorite browser and go to http://127.0.0.1:3000.

Now it's time to configure Gogs. Please bear in mind the important information of the Gogs guide regarding settings in a Docker installation.

Regarding this, we will use following settings:
As database type, we choose SQLite3. The path is already set to data/gogs.db, which is important: a Docker container is non-persistent - so if you restart that container, all files not saved in a mounted directory (like your /var/gogs, which is mounted as /data in the Docker session of Gogs...) will be lost.
As name, we choose something catchy like PiGit or so - as you wish.
We won't touch the repo path (/data/git/gogs-repositories), nor the user (git). As domain, we could choose our http://mydnshost.com.
As SSH port, we choose 10022. The HTTP port remains 3000, and the application URL should be something like http://mydnshost.com:3000 - 3000 being the exposed HTTP port - so someone can actually access your Gogs service.

You can also configure your mail server - as you wish.

Regarding the server and additional settings, I chose the Offline Mode, enabled Gravatar, disabled user registration and captchas, and enabled "Need to be registered to see contents" - as I want my Gogs server to be actually reachable from the net, but I only want to create user accounts by hand and not have my server filled with stuff from strangers.

The last step is to create an admin user account. I would recommend doing this right now. After that, click "Install Gogs" - and then you can log in :)!

The cool thing about Gogs: you can even migrate from other Git services to your new Gogs server via the "Migration" tab (the + next to your user icon after you have logged in). Please bear in mind that only http/https and local paths work for that.

If you create a new repo, you should always check "Init the repo", so that you can directly clone and use it.

Regarding cloning: You can access your repo via the webpage (ip:3000) or SSH (ip:10022). To use SSH, you need to insert your public key in the "SSH Key" tab in your Gogs settings.
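
For example, cloning could then look like this (server IP, user and repo name are placeholders):

# Via HTTP
git clone http://yourserverip:3000/yourusername/yourrepo.git
# Via SSH on the mapped port
git clone ssh://git@yourserverip:10022/yourusername/yourrepo.git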

If you expose the 3000 and 10022 ports via your Firewall/Router, you can access gogs from everywhere - or you just use VPN to get into your network.

Bonus: Making Gogs secure with Let's Encrypt
If you already have a Let's Encrypt certificate for your server / PC, you can easily get Gogs to use it: just go to /var/gogs/gogs/data, copy your fullchain.pem and privkey.pem from your Let's Encrypt folder ( /etc/letsencrypt/live/[yourdomain]/ ) there, and give your user access to them via chown.
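
A minimal sketch of those copy steps, assuming your domain and your login user:

sudo cp /etc/letsencrypt/live/yourdomain/fullchain.pem /var/gogs/gogs/data/
sudo cp /etc/letsencrypt/live/yourdomain/privkey.pem /var/gogs/gogs/data/
sudo chown yourusername:yourusername /var/gogs/gogs/data/fullchain.pem /var/gogs/gogs/data/privkey.pem
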
After that, go to /var/gogs/gogs/conf, open app.ini and add the following settings under [server]:
PROTOCOL = https
CERT_FILE = data/fullchain.pem
KEY_FILE = data/privkey.pem

If there is another entry like PROTOCOL = http, just delete it. Save that file, go back to your open Docker terminal with Gogs running, press CTRL + C and enter docker-compose up -d. With that, it will restart in detached mode. And more importantly: your service will automatically start on every reboot of your system.

If you ever need to stop Gogs, just go to your docker-compose file location again, i.e. cd ~/gogs/, and enter docker-compose stop.
You can also check on your running Docker containers with the command docker ps.

Happy coding!

[Docker] OpenWRT Images for x86, x64, Raspberry Pi and Raspberry Pi 2

As some of you know, I am trying to learn to use Docker.
I love the simplicity of this tool and the fact that a lot of my appliances could be built and maintained more efficiently with it.
So I thought "Well, I should at least try to create some useful images for the Docker Registry / Hub" - and so I came across the Github repo of x-drum, which I could not help but fork ;). x-drum showed an easy way to build x86 images for 14.07 and 15.05 OpenWRT.
And I thought "well, let's extend that". So now we also got 12.09 OpenWRT x86, as well as trunk x86... and while I was doing some research, I slapped the x64 versions for trunk and 15.05 on as well :).

But wait - something's missing - yeah: we need some Raspberry Pi stuff ;):
The guys over at Hypriot did a really awesome job with creating Hypriot OS - basically a bootable image for the RPi 1/2 to use a recent version of Docker :). But - truth be told - they already created some special RPi (ARM) images on the Docker Hub - but... it would be nice to get some more base images to play with...

So I basically used Hypriot OS on an RPi 1 to create the OpenWRT 12.09, 14.07, 15.05 and trunk Docker images - and on an RPi 2 to create OpenWRT 15.05 and trunk Docker images.
The RPi 1 images are also usable on an RPi 2 - so I recommend using them. RPi 2 images are only usable on an RPi 2 :).

Everything can be grabbed from my Github Repo: https://github.com/nmaas87/docker-openwrt or directly on Docker Hub.
The x86/x64 Images can be found on https://hub.docker.com/r/nmaas87/docker-openwrt/, while the RPi 1 and RPi 2 Images are here: https://hub.docker.com/r/nmaas87/rpi-openwrt/.
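
To give one of them a quick spin - a hypothetical example, the exact tag names are listed on the Hub pages:

# Start a throw-away OpenWRT container with a shell (tag name is an assumption)
docker run -it --rm nmaas87/docker-openwrt:15.05 /bin/sh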

Have fun :)!

[Ubuntu] Install Docker

This is a short guide to install the recent Docker version from the official repository on Ubuntu 14.04 LTS - along with some other great tools like docker-compose.
Please bear in mind that Docker needs a 64-bit system to work :)! So no i686 platforms from here on.

# Add Docker Key
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
# Add Repo
echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" | sudo tee /etc/apt/sources.list.d/docker.list
# Update and Install
sudo apt-get update
# Install Recommended Package
sudo apt-get install linux-image-extra-$(uname -r)
# Install Docker itself
sudo apt-get install docker-engine
# Useradd - so you can use docker with your own user without sudo
sudo usermod -aG docker ${USER}
# Install pip
sudo apt-get install python-pip
# Install docker-compose
sudo pip install docker-compose
# Test Docker Install
sudo docker run hello-world
# After an additional reboot, you will be able to use docker with your own user (the reboot is also recommended because of the new linux-image-extra package :) )
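# To double check that everything is in place (after logging out and back
# in, docker should work without sudo):
docker --version
docker-compose --version
docker run hello-world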

[RPi] The cheapest Raspberry Pi Cluster Ever Made

As soon as the Pi Zero came out, I started thinking about clusters again. I wanted to create a big - but at the same time cheap - cluster.
Yes, a Pi Zero is not nearly as fast as an RPi 2. And yes, there are some problems with this idea - especially the fact that the Pi Zero is not as common as - say - an RPi Model B v2.0 - but as this is more about science and trying to just "do it", I tried it anyway.

TLDR; Yes, it works - and better than you might think :)!

So, the basic problem with the Pi Zero is its interfaces: yes, we got USB, but no Ethernet port. So the basic approach would be to buy a 5€ Pi as well as an about 8€ AX88772 Ethernet interface and some USB OTG adapters - and we would end up with about 15€+ for each member of the cluster. Well, that is reasonable - but bulky and "not sexy".

0. Building mpich
I used an old appliance image I created from a Minibian Wheezy image (https://minibianpi.wordpress.com/) earlier this year - for section 1.) on the RPi Model B pre 2.0 and RPi Model A+. For section 2.), I used a special appliance image I made from a Minibian Jessie image. However, I will document the needed changes here, to get it running from any source. I recommend the Minibian Jessie image as starting point, with these changes:


apt-get update
apt-get install -y raspi-config keyboard-configuration
raspi-config
# Default Configuration and Expand Filesystem using raspi-config
# Enter Finish and press Yes on Reboot the Device

apt-get install -y rpi-update sudo
apt-get -y dist-upgrade
reboot

rpi-update
reboot

# Create Default User pi
adduser pi
# Enter Password as wanted, i.e. raspberry
# Add user to default groups
usermod -a -G pi,adm,dialout,cdrom,audio,video,plugdev,games,users pi
# Add sbin Paths to pi
echo 'export PATH="$PATH:/sbin:/usr/sbin:/usr/local/sbin"' >> /home/pi/.bashrc
# Add user to sudo
visudo
# Add under
# # User privilege specification
# root ALL=(ALL:ALL) ALL
pi ALL=(ALL:ALL) ALL
# Save and Exit
reboot

# Disable root login
sudo passwd -l root

or use a default RPi Jessie image instead.

Building MPICH 3 was quite easy:


# Update and Install Dependencies, then reboot
sudo apt-get update
sudo apt-get -y dist-upgrade
sudo apt-get install -y build-essential
sudo reboot

# Make MPICH 3.2
cd ~
wget http://www.mpich.org/static/downloads/3.2/mpich-3.2.tar.gz
tar -xvzf mpich-3.2.tar.gz
cd mpich-3.2
# This will take some time
sudo ./configure --disable-fortran
# This will take several cups of tea ;)
sudo make
sudo make install

# Create SSH on Master, distribute to Slaves

cd ~
ssh-keygen -t rsa -C "raspberrypi"

The default location should be set to /home/pi/.ssh/id_rsa if you're using the standard user pi. Then use this command to distribute the key to all your "slave machines":
cat ~/.ssh/id_rsa.pub | ssh pi@IP_OF_SLAVES "mkdir .ssh;cat >> .ssh/authorized_keys"
( Taken from http://www.southampton.ac.uk/~sjc/raspberrypi/ - he was the original father of the RPi clusters and his work inspired me already years ago - so please read and support his work :)! Additional info can be found at http://westcoastlabs.blogspot.co.uk/2012/06/parallel-processing-on-pi-bramble.html )
You could also just
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
into your own authorized_keys file, shut down your Master Pi after that and clone the card several times for all your clients. This way, you would only need to do the work once - however, you should then remove the private keys in ~/.ssh/ on the clones, so that only your Master Pi can command the slaves.
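
A sketch of that cleanup on each cloned slave, assuming the standard user pi:

# Remove the key pair so the clone can be commanded, but cannot command others
rm /home/pi/.ssh/id_rsa /home/pi/.ssh/id_rsa.pub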

1. First Try: Serial (did work, but was not chosen)

[Image: ppplink]

The first idea was to use the serial line of the Pi Zero for IP communication:
I wanted to have a Master Pi (Model B) with an Ethernet port for network connectivity and connect up to 4 Pi Zeros to it via serial. And as the RPis only have one serial port, I would use a Serial-to-SPI converter to keep it small and simple. But as it turns out, I could not find the MAX14830 (https://www.maximintegrated.com/en/products/interface/controllers-expanders/MAX14830.html) for sale on the net, so I got two 2-port MAX3109 Serial-to-SPI converters (https://www.maximintegrated.com/en/products/interface/controllers-expanders/MAX3109.html).
They were on the way to my home, so I wanted to already test some basic stuff by using an RPi Model B pre 2.0 and an RPi Model A+.

As we only had one serial port, we could only drive the one RPi Model A+ with that. I used the B pre 2.0 as Master and the A+ as Slave. First, we connected both RPis via serial (both shut down and unplugged!). Just connect the serial TX of the RPi B to the serial RX of the RPi A+ and vice versa. Then connect a ground pin of the RPi B to the RPi A+. That's it.
Then we powered on the RPis and made some changes:
(Some ideas taken from MagPi 41: https://www.raspberrypi.org/magpi/)

Guest:

# I actually prepared the whole SD card of the RPi A+ guest while having it in the RPi B - because of the network connection :).
# Install ppp
sudo apt-get install ppp -y
# Change Hostname
sudo vi /etc/hostname
sudo vi /etc/hosts
# Add DNS Server
echo 'nameserver 8.8.8.8' | sudo tee --append /etc/resolv.conf
# Remove Serial Console
cd /boot
sudo cp cmdline.txt old_cmdline
sudo vi cmdline.txt
# Normal should read about
dwc_otg.lpm_enable=0 console=ttyAMA0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline fsc
# Remove console=ttyAMA0,115200 from that line, save and quit

# To be sure, also disable the serial console via the corresponding option in sudo raspi-config

# Increase the clock of the serial line, to drive it not at only 115200 baud, but 1 MBit/s (taken from: http://www.thedevilonholiday.co.uk/raspberry-pi-increase-uart-speed/)
echo 'init_uart_clock=64000000' | sudo tee --append /boot/config.txt

# Add the following line to /etc/rc.local before exit 0
pppd /dev/ttyAMA0 1000000 10.0.5.2:10.0.5.1 noauth local defaultroute nocrtscts xonxoff persist maxfail 0 holdoff 1 &

# and shutdown
sudo shutdown -h now

After that, insert that SD Card into the A+.

Host:

# Now insert the real SD Card for the B into the B and power it on.
# Install ppp
sudo apt-get install ppp -y
# Enable ipv4 Forward for networking access
echo 'net.ipv4.ip_forward=1' | sudo tee --append /etc/sysctl.conf
# Remove Serial Console
cd /boot
sudo cp cmdline.txt old_cmdline
sudo vi cmdline.txt
# Normal should read about
dwc_otg.lpm_enable=0 console=ttyAMA0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline fsc
# Remove console=ttyAMA0,115200 from that line, save and quit

# To be sure, also disable the serial console via the corresponding option in sudo raspi-config

# Increase the clock of the serial line, to drive it not at only 115200 baud, but 1 MBit/s (taken from: http://www.thedevilonholiday.co.uk/raspberry-pi-increase-uart-speed/)
echo 'init_uart_clock=64000000' | sudo tee --append /boot/config.txt

# Reboot
reboot

# After having rebooted the RPi B, boot up the RPi A+ as well.
# Wait a little bit, then...

# Start PPP Connection
sudo pppd /dev/ttyAMA0 1000000 10.0.5.1:10.0.5.2 proxyarp local noauth nodetach nocrtscts xonxoff passive persist maxfail 0 holdoff 1

In a new PuTTY window, you can now ping 10.0.5.2 - your RPi A+ - and SSH into it.

After that, I could use MPI with both machines, B and A+ - by entering their IP addresses into the machinefile and executing the cpi example to crunch some digits of Pi.
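
For reference, a minimal sketch of such a run (the machinefile name is my choice; cpi gets built in the mpich examples directory):

# List both machines, then let MPI spread 2 processes across them
printf '10.0.5.1\n10.0.5.2\n' > ~/machinefile
mpiexec -f ~/machinefile -n 2 ~/mpich-3.2/examples/cpi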

But after all - it was quite ineffective and slow. So I tried to think of something better... And then LadyAda came with her Christmas present for me:
https://learn.adafruit.com/turning-your-raspberry-pi-zero-into-a-usb-gadget/ethernet-gadget - that was the moment my jaw dropped and I thought "YES! That's it!".

2. Second Try: PiZero on Virtual Ethernet (Solution)

[Image: usblink]

Now my preferred solution: as the USB port of the Pi Zero is a real OTG port, you can repurpose it as a serial or even virtual Ethernet port. And hell, Lady Ada struck again :)! So to sum it up shortly:
I built my MPICH as mentioned in 0) on a Minibian Jessie image (SD card running on an RPi B). Then I installed her special kernel:


cd ~
wget http://adafruit-download.s3.amazonaws.com/gadgetmodulekernel_151224a.tgz -O gadgetkernel.tgz
tar -xvzf gadgetkernel.tgz

sudo mv /boot/kernel.img /boot/kernelbackup.img
sudo mv tmp/boot/kernel.img /boot

sudo mv tmp/boot/overlays/* /boot/overlays
sudo mv tmp/boot/*dtb /boot
sudo cp -R tmp/boot/modules/lib/* /lib

# Load virtual ethernet module on boot
echo 'g_ether' | sudo tee --append /etc/modules

# Add settings to network interfaces
echo '
allow-hotplug usb0
iface usb0 inet static
address 192.168.7.2
netmask 255.255.255.0
network 192.168.7.0
broadcast 192.168.7.255
gateway 192.168.7.1' | sudo tee --append /etc/network/interfaces

# Add DNS Server
echo 'nameserver 8.8.8.8' | sudo tee --append /etc/resolv.conf

# Some additional tweaks:
Add

# Turn HDMI Off
/usr/bin/tvservice -o
# Turn HDMI Back On
#/usr/bin/tvservice -p

# Turn ACT LED Off on Pi Zero
echo none | sudo tee /sys/class/leds/led0/trigger
echo 1 | sudo tee /sys/class/leds/led0/brightness

to your /etc/rc.local before exit 0 to turn off the HDMI Interface on boot,
as well as the LED of the Pi Zero to use less energy. Found on:
http://www.midwesternmac.com/blogs/jeff-geerling/raspberry-pi-zero-conserve-energy and http://www.midwesternmac.com/blogs/jeff-geerling/controlling-pwr-act-leds-raspberry-pi

This was enough to create a Pi Zero slave image.
Shut down the RPi now with

sudo shutdown -h now

remove the power and insert the SD card into your Pi Zero.

On the Master machine, I made the following changes:

# Add settings to network interfaces
echo '
allow-hotplug usb0
iface usb0 inet static
address 192.168.7.1
netmask 255.255.255.0
network 192.168.7.0
broadcast 192.168.7.255' | sudo tee --append /etc/network/interfaces

# Allow Ipv4 Forward
echo 'net.ipv4.ip_forward=1' | sudo tee --append /etc/sysctl.conf

# Install iptables
sudo apt-get install -y iptables

# Define NATing rules
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
sudo iptables -A FORWARD -i eth0 -o usb0 -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -i usb0 -o eth0 -j ACCEPT

# Save NAT rules / load iptables on interface up
sudo touch /etc/iptables_masq.rules
sudo chown pi:pi /etc/iptables_masq.rules
sudo iptables-save > /etc/iptables_masq.rules

Add
pre-up iptables-restore < /etc/iptables_masq.rules
under the eth0 section of /etc/network/interfaces (sudo vi /etc/network/interfaces), i.e.:

auto eth0
iface eth0 inet dhcp
pre-up iptables-restore < /etc/iptables_masq.rules

( Info taken from: http://serverfault.com/questions/405628/routing-traffic-on-ubuntu-to-give-raspberry-pi-internet-access )

# After that, I shut down the RPi via
sudo shutdown -h now
# and removed power from it.

Then I attached the Pi Zero to the USB hub of the Pi B with a micro USB cable, using the micro USB OTG slot on the Pi Zero. Next, I powered up the Pi B - and both booted.

I pinged 192.168.7.2 - which was the IP of the Pi Zero - and it answered. Now I only had to use cat ~/.ssh/id_rsa.pub | ssh pi@192.168.7.2 "mkdir .ssh;cat >> .ssh/authorized_keys" from section 0 to get the SSH key created in step 0 onto the Pi Zero, and could use it to automatically log in into the Pi Zero.
With the new IPs of the RPi B and Pi Zero in the machinefile of mpich I could then use both my RPis with higher speed and nearly zero costs for cabling and power :)!
The best part: I don't need an additional power supply for the Pi Zero - nor network adapters, RJ45 cabling or a switch - only one USB A to USB Micro cable per Pi Zero - and maybe a big, active USB hub ;)!

Now I need to get more Pi Zeros - I plan on using a Model B as Master with a big active USB hub to support 4 Pi Zeros - or a Model B+ with a REALLY BEEFY USB supply to run them all from the same RPi - but that would come down to trying this... And I got only one Pi Zero - so I need some more time (or some sponsors?) to get me more RPi Zeros to try and see whether this approach does scale ;)!

Best thing: this can also be used to try the awesome work of http://blog.hypriot.com/ and build a Docker cluster from it - cool, ain't it?

Merry Christmas :)!

Yours,

Nico