[Docker] Sonarqube with Gogs/Drone to check software quality

Sonarqube is a really useful tool to check for different kinds of errors in all kinds of programming languages. If you want to check out Sonarqube in a live demo, just go over to https://sonarqube.com/. In the last two parts of my Docker series we created a Gogs Git repo as a Docker container and then went over to integrating Gogs with Drone (version 0.4), a CI build server - also as a Docker container. Today, we want to install Sonarqube as a tool to check our software quality, so that - as soon as we push an update to our Gogs project - Drone will execute an analysis using Sonarqube, which will give us information about different kinds of errors and overall code quality.

# We will start by creating the needed folders:
sudo mkdir /var/sonarqube
sudo mkdir /var/sonarqube/data
sudo mkdir /var/sonarqube/extensions
sudo chown -R yourusername:yourusername /var/sonarqube
# Then we change our drone docker-compose.yml, so that drone and sonarqube will be started at the same time

sonarqube:
  restart: unless-stopped
  image: sonarqube:lts-alpine
  volumes:
    - /var/sonarqube/data:/opt/sonarqube/data
    - /var/sonarqube/extensions:/opt/sonarqube/extensions
  environment:
    - SONARQUBE_HOME=/opt/sonarqube
    - SONARQUBE_JDBC_USERNAME=sonar
    - SONARQUBE_JDBC_PASSWORD=sonar
    - SONARQUBE_JDBC_URL=
  ports:
    - "9000:9000"
    - "9092:9092"
drone:
  restart: unless-stopped
  image: drone/drone:0.4.2
  volumes:
    - /var/drone:/var/lib/drone
    - /var/run/docker.sock:/var/run/docker.sock
  env_file:
    - ./dronerc
  ports:
    - "8000:8000"
  links:
    - sonarqube

After that, we can start the service with docker-compose up -d and reach Sonarqube at http://IPADDRESS:9000 (it needs some time to start up...).
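
If you want to check from the shell whether Sonarqube has finished booting, you can poll its status endpoint - a small sketch; the /api/system/status route is an assumption based on the Sonarqube web API, so adjust it if your version differs:

# Wait until Sonarqube reports itself as up (assumed endpoint: /api/system/status)
until curl -s http://IPADDRESS:9000/api/system/status | grep -q '"status":"UP"'; do
  echo "Sonarqube is still starting..."
  sleep 10
done
echo "Sonarqube is up."
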
To run a check of e.g. a Java project, we need to write a new pom.xml file:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.example.appexample</groupId>
  <artifactId>appexample</artifactId>
  <version>1.0</version>

  <name>phpTest</name>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <sonar.language>java</sonar.language>
    <sonar.sources>src</sonar.sources>
    <sonar.exclusions>src/test/test.php, src/test/test/*</sonar.exclusions>
  </properties>

</project>

and we need a new .drone.yml

cache:
  mount:
    - /drone/.m2
build:
  main:
    image: maven:3-jdk-8-onbuild
    commands:
      - mvn sonar:sonar -Dsonar.host.url=http://IPOFTHESONARQUBESERVER:9000 -Dmaven.repo.local=/drone/.m2
      - echo "Sonarqube has been completed."
      - mvn clean install -Pcoverage -Dmaven.repo.local=/drone/.m2
      - mvn package -Dmaven.repo.local=/drone/.m2
      - mvn test -Dmaven.repo.local=/drone/.m2
      - echo "Build has been completed."
debug: true

And that's it :). Log into Drone, activate your repo as a valid CI repo, and from then on - with every push to that Gogs repo - a new Sonarqube analysis should be performed.
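
If you want to verify the analysis outside of Drone first, you can run the same Maven goal locally from the project directory (assuming Maven and a JDK are installed on your machine):

# One-off Sonarqube analysis against your server
mvn sonar:sonar -Dsonar.host.url=http://IPOFTHESONARQUBESERVER:9000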

[Ubuntu] Use Molly-Guard to stop shooting your own leg

If you're working on dozens of Linux servers (or even more than 100, as in my case), you end up doing administration via SSH - which is the way to go. And chances are that you'll have dozens of SSH connections open in dozens of tabs, you've just done some updates on some of those servers, and you want to restart one of them with a quick sudo reboot now...
I won't lie: it happened more than once that I accidentally rebooted the wrong server - at least that was the case more than a year ago.
For the last year, since I have been using Molly-Guard, that has not happened once. Why? Because Molly-Guard stops the reboot command if it detects that you're issuing it from an SSH session - and asks for the server's hostname. If you enter it correctly, it will reboot. If you're in a frenzy, type your "sudo reboot now" and enter serverB while you're on serverA - yep, Molly-Guard will stop you from shooting yourself in the leg. Neat, ain't it?

Oh - and the best part? Ease of use: sudo apt-get install molly-guard
That's it, you're set, bye.

Nope. Really. No configuration needed. Just install that baby and be safe :)!
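
One optional tweak, if you want it anyway: as far as I know, the Debian/Ubuntu package ships a small config file in which you can force the hostname question even for local (non-SSH) sessions - treat the path and variable below as an assumption and double-check on your system:

# /etc/molly-guard/rc (assumed location of the package's config file)
# Ask for the hostname on every shutdown/reboot, not only in SSH sessions:
ALWAYS_QUERY_HOSTNAME=true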

[Docker] CI with Drone and Gogs

Now that we have a running Gogs installation for our source code (see here), we can use Drone to build working software from those repos. Drone is a Continuous Integration system that comes in two flavors: Drone.io - the hosted service you could use - or Drone as a self-hosted service. We want to use the latter. To install Drone in our existing Docker setup (with Gogs already installed), we need to complete the following steps:

# Create docker-compose.yml in ~/drone/
cd ~
mkdir drone
cd drone
vi docker-compose.yml
# Copy this content into your docker-compose.yml file

drone:
  restart: unless-stopped
  image: drone/drone:0.4.2
  volumes:
    - /var/drone:/var/lib/drone
    - /var/run/docker.sock:/var/run/docker.sock
  env_file:
    - ./dronerc
  ports:
    - "8000:8000"

# Create the dronerc file
vi dronerc
# Copy this content into your dronerc file
# replace yourserverip with the IP or DNS name of your server, e.g. example.com
# replace gogsport with the HTTP port of the Gogs installation, e.g. 3000 in our example

REMOTE_DRIVER=gogs
REMOTE_CONFIG=https://yourserverip:gogsport?open=false
DEBUG=true

# Create the needed folders
sudo mkdir /var/drone
sudo chown -R yourusername:yourusername /var/drone/

After that, a docker-compose up will start the service; you can end it via CTRL + C and then really start it in detached mode with docker-compose up -d.
You can then go to http://yourserverip:8000, log into Drone with your Gogs login and allow access.
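
A quick sanity check from the shell never hurts - is the container up and does the web UI answer? (Adjust the hostname to your setup.)

# Is the drone container running?
docker ps --filter name=drone
# Does the web UI answer on port 8000?
curl -sI http://yourserverip:8000 | head -n 1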

The current readme for drone can be found on http://readme.drone.io/usage/overview/ and you'll need to include a .drone.yml in your repos to really build something.

In my example, I used this .drone.yml

cache:
  mount:
    - /drone/.m2

build:
  main:
    image: maven:3-jdk-8-onbuild
    commands:
      - mvn clean install -Pcoverage -Dmaven.repo.local=/drone/.m2
      - mvn package -Dmaven.repo.local=/drone/.m2
      - mvn test -Dmaven.repo.local=/drone/.m2
      - echo "Build has been completed."

debug: true

and this pom.xml

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.mycompany.app</groupId> 
  <artifactId>WhereIsMyPi</artifactId>
  <version>1.0</version>
  <packaging>jar</packaging>
 
  <name>Where is my Pi</name>
  <url>http://www.nico-maas.de</url>

  <build>
    <sourceDirectory>src</sourceDirectory>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.5</version>
        <configuration>
          <source>1.8</source>
          <target>1.8</target>
        </configuration>
      </plugin>
      <plugin>
       <!-- Build an executable JAR -->
       <groupId>org.apache.maven.plugins</groupId>
       <artifactId>maven-jar-plugin</artifactId>
       <version>2.4</version>
       <configuration>
        <archive>
          <manifest>
            <addClasspath>true</addClasspath>
            <classpathPrefix>lib/</classpathPrefix>
            <mainClass>WMP</mainClass>
          </manifest>
        </archive>
       </configuration>
      </plugin>
    </plugins>
  </build> 

</project>

to build my "WhereIsMyPi" project :).
Happy building!

[Ubuntu] Secure your Apache 2 Reverse Proxy

We have an Apache 2 working as a reverse proxy to some Docker instances (we won't talk about the nginx vs. Apache stuff here, for the same reasons we won't talk about vi vs. emacs vs. xyz ;)) - and at some point we realized that our apps are a little bit too sensitive to allow access from any IP.

First, we want to activate the needed modules. Normally that should not be necessary, but for the sake of completeness:

sudo a2enmod authz_core
sudo a2enmod authz_host
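
Since we're going to use ProxyPass / ProxyPassReverse and SSL below, those modules have to be enabled as well - here is a small sketch (Debian/Ubuntu module names), including a check of what is actually loaded:

# Proxy modules needed for ProxyPass / ProxyPassReverse (if not already enabled)
sudo a2enmod proxy proxy_http
# SSL for the 443 vhost
sudo a2enmod ssl
# Check which authz/proxy/ssl modules are loaded
apache2ctl -M | grep -E 'authz|proxy|ssl'
# Apply the changes
sudo service apache2 restart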

Secondly, we want to allow access only from trusted IPs. We already forward requests to the Docker instances via ProxyPass - but we need to create a Location / "catcher", otherwise we could not use mod_authz to deny other IPs :).

<VirtualHost *:80>
ServerAdmin johndoe@example.com
ServerName hex.example.com
ServerAlias hex

RedirectMatch ^/$ https://example.com

<Location / >
<RequireAll>
Require ip 192.168.1.0/24 192.168.2.0/24 192.168.3.0/24
</RequireAll>
</Location>

ProxyPass "/" "http://127.0.0.1:8020/"
ProxyPassReverse "/" "http://127.0.0.1:8020/"

</VirtualHost>

<VirtualHost *:443>
ServerAdmin johndoe@example.com
ServerName hex.example.com
ServerAlias hex

<Location / >
<RequireAll>
Require ip 192.168.1.0/24 192.168.2.0/24 192.168.3.0/24
</RequireAll>
</Location>

ProxyPass "/" "http://127.0.0.1:8020/"
ProxyPassReverse "/" "http://127.0.0.1:8020/"

# Alias /static /srv/example_sw/sw/public_html/

SSLEngine on
SSLCertificateFile /etc/ssl/certs/hex.example.com.pem
SSLCertificateKeyFile /etc/ssl/private/hex.example.com.key
SSLCertificateChainFile /etc/ssl/chains/example-ca-chain.pem

</VirtualHost>

That way, hosts from subnets other than 192.168.1.0/24, 192.168.2.0/24 and 192.168.3.0/24 won't be able to access the proxy and therefore our app :)!
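
A quick way to check that the rule actually bites: request the vhost once from an allowed subnet and once from outside of it (hex.example.com is the example name from the vhosts above):

# From an allowed subnet: expect the redirect / the proxied app (2xx or 3xx)
curl -sI http://hex.example.com/ | head -n 1
# From a host outside 192.168.1.0/24, .2.0/24 and .3.0/24: expect HTTP/1.1 403 Forbidden
curl -sI http://hex.example.com/ | head -n 1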

[Ubuntu] Radsecproxy for secure Radius over WAN

Chances are you're going to need RADIUS auth over WAN - because your RADIUS and identity management are hosted in the security of your corp's local datacenter... but the client (e.g. a network switch) is somewhere over the rainbow WAN. You *could* just pipe the RADIUS traffic over the internet - but there be dragons: RADIUS communication is unencrypted. So... just no.

Enter radsecproxy: radsecproxy is - as the name implies - a RADIUS proxy which needs to be installed on both servers (the local one in your company, now called SERVER, and the remote one with the switch attached, now called CLIENT) and which encrypts the communication between both ends (i.e. over the WAN) via TLS.

1.) Install radsecproxy on Server ( sudo apt-get install radsecproxy )
2.) Create CA with generate-CA.sh (in /etc/radsecproxy/) [ https://github.com/owntracks/tools/blob/master/TLS/generate-CA.sh - please change keybits to 4096 bits, thanks! ]
3.) Create Certs (Server, Client) with generate-client.sh (in /etc/radsecproxy/) [ at the end of this post, http://rockingdlabs.dunmire.org/exercises-experiments/ssl-client-certs-to-secure-mqtt - please change keybits to 4096 bits as well! 🙂 ]
4.) Configure /etc/radsecproxy.conf [UPPERCASE names are placeholders which you need to change]

# Master config file for radsecproxy
sourceTLS IPADDR_OF_SERVER
listenTLS IPADDR_OF_SERVER:2083

LogLevel 3
LogDestination file:///var/log/radsecproxy/radsecproxy.log

LoopPrevention on

tls default {
CACertificateFile /etc/radsecproxy/ca.crt
CertificateFile /etc/radsecproxy/SERVER_NAME_FQDN.crt
CertificateKeyFile /etc/radsecproxy/SERVER_NAME_FQDN.key
}

client CLIENT_NAME {
host IPADDR_OF_CLIENT
type tls
certificatenamecheck off
secret PW_OF_CLIENT_RADSEC
}

server SERVER_NAME_auth {
host IPADDR_OF_SERVER:1812
type udp
StatusServer on
secret PW_OF_SERVER_FOR_RADIUS
}

server SERVER_NAME_acct {
host IPADDR_OF_SERVER:1813
type udp
StatusServer on
secret PW_OF_SERVER_FOR_RADIUS
}

realm * {
server SERVER_NAME_auth
accountingserver SERVER_NAME_acct
}

# example config for localhost, rejecting all users
client 127.0.0.1 {
type udp
secret TEST_SECRET
}

realm * {
replymessage "User unknown"
}

5.) sudo service radsecproxy restart

6.) Install radsecproxy on Client ( sudo apt-get install radsecproxy )
7.) Copy client cert and ca.crt to Client /etc/radsecproxy
8.) Configure /etc/radsecproxy.conf [UPPERCASE names are placeholders which you need to change]

#sourceUDP 127.0.0.1
sourceUDP IPADDR_OF_CLIENT
listenUDP *:1812
listenUDP *:1813

LogLevel 3
LogDestination file:///var/log/radsecproxy/radsecproxy.log

LoopPrevention on

tls default {
CACertificateFile /etc/radsecproxy/ca.crt
CertificateFile /etc/radsecproxy/CLIENT_NAME_FQDN.crt
CertificateKeyFile /etc/radsecproxy/CLIENT_NAME_FQDN.key
}

client CLIENT_NAME {
#host 127.0.0.1
host IPADDR_OF_CLIENT
type udp
secret CLIENT_RADIUS_SECRET
}

client SWITCH_NAME {
host SWITCH_IP
type udp
secret SWITCH_RADIUS_SECRET
}

server SERVER_NAME {
certificatenamecheck off
host IPADDR_OF_SERVER
type tls
StatusServer on
secret PW_OF_CLIENT_RADSEC
}

realm * {
server SERVER_NAME
accountingserver SERVER_NAME
}

# example config for localhost, rejecting all users
client 127.0.0.1 {
type udp
secret TEST_SECRET
}

realm * {
replymessage "User unknown"
}

9.) sudo service radsecproxy restart
10.) If you now point your switches to the CLIENT_IP with the correct credentials, the requests should go via radsecproxy to your main RADIUS server and the connection should work. Please make sure that no radiusd daemon is running on your CLIENT side, as it would block the ports needed for radsecproxy / RADIUS. Make use of the radsecproxy log files to see whether the two radsecproxy instances connect and talk to each other :).
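
To test the whole chain without touching a switch, you can send a test request to the CLIENT proxy with radtest from the freeradius-utils package. This is only a sketch with assumptions: TESTUSER/TESTPASSWORD exist in your central RADIUS server, and you run it from an IP that matches one of the client blocks above, using that block's secret:

# On a machine that matches one of the client blocks (e.g. the switch's IP)
sudo apt-get install freeradius-utils
radtest TESTUSER TESTPASSWORD CLIENT_IP 0 SWITCH_RADIUS_SECRET
# Watch the proxies while testing
tail -f /var/log/radsecproxy/radsecproxy.log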

[Ubuntu] Freeradius: Improve Uptime

As a network admin, you're going to have at least one FreeRADIUS server running, mostly for 802.1X authentication. At my place the problem arose that the service was down too often - for different reasons.

1.) Logrotate
If you're using logrotate, you should check out /etc/logrotate.d/freeradius:

/var/log/freeradius/*.log {
weekly
rotate 52
compress
delaycompress
notifempty
missingok
postrotate
invoke-rc.d freeradius reload >/dev/null 2>&1 || true
endscript
}

Logrotate restarts freeradius with reload after it has swapped the logs, which often results in a crash or race condition (freeradius does not shut down fast enough, the restarting process thinks there is already a running instance - and both terminate). To change that, you should stop the process, wait, and start it again.

/var/log/freeradius/*.log {
weekly
rotate 52
compress
delaycompress
notifempty
missingok
postrotate
invoke-rc.d freeradius stop >/dev/null 2>&1 || true
sleep 5
invoke-rc.d freeradius start >/dev/null 2>&1 || true
endscript
}
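
To make sure the changed snippet is fine, you can let logrotate do a dry run (debug mode, nothing is rotated) and then force one real rotation to watch the stop/sleep/start sequence:

# Dry run: show what would happen, without touching any files
sudo logrotate -d /etc/logrotate.d/freeradius
# Force one real rotation
sudo logrotate -f /etc/logrotate.d/freeradius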

2.) Monit
monit is a monitoring program which checks whether a service is still running.
Install via: sudo apt-get install monit
Configure:

vi /etc/monit/conf.d/freeradius

check process freeradius with pidfile "/var/run/freeradius/freeradius.pid"
start program "/etc/init.d/freeradius start"
stop program "/etc/init.d/freeradius stop"
if failed host 127.0.0.1 port 1812 type udp protocol radius secret RADIUSSECRET then alert
if failed host 127.0.0.1 port 1813 type udp protocol radius secret RADIUSSECRET then alert
if 5 restarts within 5 cycles then timeout

sudo service monit restart

You should change RADIUSSECRET to the shared secret of your FreeRADIUS server.
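
Afterwards you can verify that monit accepted the configuration and picked up the new check:

# Validate the monit control file syntax
sudo monit -t
# Show the state of all monitored services
sudo monit summary
# Detailed status output
sudo monit status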

[Ubuntu] Letsencrypt with Apache and Freeradius

This little tutorial describes how to use Letsencrypt with Apache, Freeradius and Auto-Renewal of the Certificates.

#Install Letsencrypt
sudo apt-get update
sudo apt-get install git
cd /opt
sudo git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt
cd /opt/letsencrypt

#Become root
sudo su

#"Order" certificates (replace SERVERDOMAIN.COM with the DNS of your Server!)
./letsencrypt-auto --apache -d SERVERDOMAIN.COM --rsa-key-size 4096
Enter Contact Mail: mail@SERVERDOMAIN.COM
Configuration Type: Secure # Secure is best, as it redirects insecure HTTP to HTTPS

#Read PATH variable
echo $PATH

#Cronjob for certificate renewal
#you should under all circumstances replace the string following PATH= with your own, as read with the command above.
#Separate it with ; from the rest of the command as shown in the example
crontab -e

#letsencrypt
30 2 * * 1 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games;/opt/letsencrypt/letsencrypt-auto renew >> /var/log/le-renew.log
35 2 * * 1 /etc/init.d/freeradius restart
35 2 * * 1 /etc/init.d/apache2 restart

#Configure Freeradius
cp -r /etc/freeradius/certs/ /etc/freeradius/certs_bkp
rm /etc/freeradius/certs/*.pem
cp /etc/freeradius/eap.conf /etc/freeradius/eap.conf_bkp

vi /etc/freeradius/eap.conf

#certdir = ${confdir}/certs
#cadir = ${confdir}/certs
certdir = /etc/letsencrypt/live/SERVERDOMAIN.COM
cadir = /etc/letsencrypt/live/SERVERDOMAIN.COM
#dh_file = ${certdir}/dh
dh_file = ${confdir}/certs/dh
#private_key_password = whatever
private_key_file = ${certdir}/privkey.pem
certificate_file = ${certdir}/cert.pem
CA_file = ${cadir}/fullchain.pem

#Configure access rights on /etc/letsencrypt
cd /etc/letsencrypt/
chgrp -R ssl-cert archive csr keys live options-ssl-apache.conf renewal # set group of cert/key dirs to ssl-cert
find . -type d -exec chmod g+xs {} \; # directories executable and setguid (set group ssl-cert for new files/dirs)
find . -type f -exec chmod g+r {} \; # files readable

#Restart Freeradius
service freeradius stop
service freeradius start
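
To double-check that the new certificates are really in place, you can look at their validity dates - and if FreeRADIUS refuses to come up, run it once in the foreground in debug mode (paths as configured above; stop the service first, otherwise the ports are already taken):

# Show validity dates of the certificate FreeRADIUS now uses
openssl x509 -in /etc/letsencrypt/live/SERVERDOMAIN.COM/cert.pem -noout -dates
# Troubleshooting: run FreeRADIUS in the foreground with debug output (CTRL+C to stop)
service freeradius stop
freeradius -X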

Additional infos: https://www.digitalocean.com/community/tutorials/how-to-secure-apache-with-let-s-encrypt-on-ubuntu-14-04

[Ubuntu] Networked UPS with apcupsd, APC 750 and Windows

Due to some serious power outages, I had to install a UPS at the office of one client. It is a rather small setup: one low-power Ubuntu server, one laptop with one TFT screen, one i3 desktop with two TFT screens, one network switch. All in all, about 400 VA. I had an old APC Smart-UPS 750 VA at hand and used it.
The idea was to connect the UPS via USB directly to the server and hook the laptop and desktop up to that server via the network. As soon as the server detects that the power grid went offline, all PCs should shut down automatically: enter apcupsd.

Power installation:
Connect the UPS input to the power grid and connect the UPS outputs to your PCs. NEVER CONNECT ANY LASER PRINTER TO THAT OUTPUT!
Power up the UPS.

Server installation:
Connect the UPS USB Port to the Server.
Install apcupsd:
sudo apt-get install apcupsd
Configure apcupsd:
sudo vi /etc/apcupsd/apcupsd.conf
In my case I configured these settings:

UPSNAME blaUPS # How you want to name your ups
UPSCABLE smart # in my case, it is a smart cable
UPSTYPE usb # on usb
POLLTIME 60 # poll ups every 60 seconds
ONBATTERYDELAY 10 # delay alarm for 10 seconds
BATTERYLEVEL 10 # on less than 10 percent battery level shutdown server
MINUTES 3 # on less than 3 minutes battery runtime shutdown server
NETSERVER on # activate network server
NISIP 0.0.0.0 # allow access from all nics
NISPORT 3551 # default port for network server

Allow port 3551 (TCP) through iptables!

Restart apcupsd:
sudo service apcupsd restart

Give status of current apcupsd session:
sudo service apcupsd status
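
The apcupsd package also ships apcaccess, which queries the live UPS data from the NIS port - handy to see exactly the values the network clients will get:

# Query the running apcupsd instance (defaults to 127.0.0.1:3551)
apcaccess status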

Client installation on Windows:
Download the latest version for Windows (e.g. winapcupsd-3.14.13.exe); you only need the apcupsd service and the tray applet.
Leave everything at its default during setup and configure apcupsd.conf:

UPSNAME blaUPS # How you want to name your ups
UPSCABLE ether # network to server
UPSTYPE net # on network
DEVICE IP:3551 # for IP, enter the IP of the server
POLLTIME 15 # poll ups every 15 seconds
ONBATTERYDELAY 10 # delay alarm for 10 seconds
BATTERYLEVEL 20 # on less than 20 percent battery level shutdown client
MINUTES 3 # on less than 3 minutes battery runtime shutdown client
NETSERVER on # activate network server
NISIP 127.0.0.1 # allow access only from localhost

And that's it 🙂

[Docker] Install Gogs as Docker container

Gogs (Go Git Service) is an awesome GitHub/GitLab-like solution, completely written in Go(lang) - which makes it blazing fast - and lightweight.
And as there is even a Docker container of Gogs available, I thought: why not use it to finally move from my SSH-only Git to a "real" Git service :).

We are going to use docker-compose in this example - so I assume you have installed this as shown in my last guide on Docker.

# Create docker-compose.yml in ~/gogs/
cd ~
mkdir gogs
cd gogs
vi docker-compose.yml
# Copy this content into your docker-compose.yml file

gogs:
  restart: unless-stopped
  image: gogs/gogs
  volumes:
    - /var/gogs:/data
  ports:
    - "10022:22"
    - "3000:3000"

After that, save the file and issue docker-compose up in your terminal. Docker will start pulling the Gogs image from the hub and launch Gogs. All your Gogs files will be saved on your local drive in /var/gogs. You can find an overview of the file structure here.
After Docker is ready, launch your favorite browser and go to http://127.0.0.1:3000.

Now it's time to configure Gogs. Please bear in mind the important information in the Gogs guide regarding settings in a Docker installation.

Regarding this, we will use the following settings:
As database type, we choose SQLite3. As path, data/gogs.db is already chosen, which is important: a Docker container is non-persistent - so if you remove and recreate that container, all files not saved in a mounted directory (like your /var/gogs, which is mounted as /data inside the Gogs container...) will be lost.
As application name, we choose something catchy like PiGit - as you wish.
We won't touch the repository path (/data/git/gogs-repositories), nor the run user (git). As domain, we could choose our mydnshost.com.
As SSH port, we choose 10022. The HTTP port remains 3000 and the application URL should be something like http://mydnshost.com:3000 - where 3000 is the exposed HTTP port - so someone can actually access your Gogs service.

You can also configure your mail server - as you wish.

Regarding the server and additional settings, I chose Offline Mode, but enabled Gravatar, disabled user registration and captchas, and enabled "Need to be registered to see contents" - as I want my Gogs server to be reachable from the net, but I only want to create user accounts by hand, and not have my server filled with stuff from strangers.

The last step is to create an admin user account. I would recommend doing this "now". After that, click "Install Gogs". And then - you can log in :)!

The cool thing about Gogs: you can even migrate from other Git services to your new Gogs server via the "Migration" tab (the + next to your user icon after you have logged in). Please bear in mind that only http/https and local paths work for that.

If you create a new repo, you should always check "Init the repo", so that you can directly clone and use it.

Regarding cloning: you can access your repo via the web page (ip:3000) or SSH (ip:10022). To use SSH, you need to insert your public key in the "SSH Key" tab of your Gogs settings.
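
As an example (user and repo names are made up - adjust them to yours), cloning the same repo over both transports looks like this:

# Clone via HTTP (port 3000)
git clone http://mydnshost.com:3000/yourusername/yourrepo.git
# Clone via SSH on the mapped port 10022
git clone ssh://git@mydnshost.com:10022/yourusername/yourrepo.git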

If you expose the 3000 and 10022 ports via your Firewall/Router, you can access gogs from everywhere - or you just use VPN to get into your network.

Bonus: Making Gogs Secure with Letsencrypt
If you already have a Letsencrypt certificate for your server / PC, you can easily get Gogs to use it: just go to /var/gogs/gogs/data, copy your fullchain.pem and privkey.pem from your Letsencrypt folder ( /etc/letsencrypt/live/[yourdomain]/ ) there and give your user access to them via chown.
After that, go to /var/gogs/gogs/conf open app.ini and add following settings under [server]:
PROTOCOL = https
CERT_FILE = data/fullchain.pem
KEY_FILE = data/privkey.pem

If there is already another entry like PROTOCOL = http, just delete it. Save that file, go back to your open Docker terminal with Gogs running, press CTRL + C and enter docker-compose up -d. With that, it will restart in detached mode. And more importantly: your service will automatically start on every reboot of your system.

If you ever need to stop Gogs, just go to your docker-compose file again, i.e. cd ~/gogs/, and enter docker-compose stop.
You can also check what your Docker containers are doing with the command docker ps.

Happy coding!

[Ubuntu] Install Docker

This is a short guide to installing the recent Docker version from the official repository on Ubuntu 14.04 LTS - along with some other great tools like docker-compose.
Please bear in mind that Docker needs a 64-bit system to work :)! So no i686 platforms from here on.

# Add Docker Key
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
# Add Repo
echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" | sudo tee /etc/apt/sources.list.d/docker.list
# Update and Install
sudo apt-get update
# Install Recommended Package
sudo apt-get install linux-image-extra-$(uname -r)
# Install Docker itself
sudo apt-get install docker-engine
# Useradd - so you can use docker with your own user without sudo
sudo usermod -aG docker ${USER}
# Install pip
sudo apt-get install python-pip
# Install docker-compose
sudo pip install docker-compose
# Test Docker Install
sudo docker run hello-world
# After an additional reboot, you will be able to use docker with your own user (also recommended because of the new linux-image-extra package 🙂)
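
After that reboot, a short check that everything works for your normal user (standard Docker / docker-compose commands):

# Versions of the installed tools
docker --version
docker-compose --version
# Run the test container again, this time without sudo
docker run hello-world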