DockerCon Europe 2017 - A recap

Welcome 🙂

Being a Docker Campus Ambassador, I got the opportunity to visit DockerCon Europe 2017 - an awesome experience which I want to share here. As it has been quite some time since I've been to a bigger conference - and this trip did not only include visiting DockerCon - I am going to separate this blog post into two sections.

First, I am going to cover the main take-aways, which have probably been posted all over the net already. Secondly, I'm going to go through the whole story and add some pictures of the beautiful city of Copenhagen.

So let's get started 🙂

 

1.) Take-aways

Modernize Traditional Apps (MTA)

Docker has found itself a new use case: using Docker to deploy legacy apps in your DevOps-enabled workflow. Docker did present tools for that during its keynote, like the Docker Application Converter. However, these tools are not given out to users and only work in the specific field of Tomcat Java web apps or IIS web apps with .NET. The only way to get your apps converted in a professional way is to buy Docker Enterprise Edition and get a Docker Partner like Avanade or Amazic over to your premises to do the work for you. So it is not magic, but hard work to convert your old apps. More info here.

Docker with Swarm and Kubernetes

Docker is going to include Swarm and Kubernetes in the future - side by side - which is awesome. However, the reason behind that might not be that Docker wants to do something for its users, but rather for Google: in the past it looked like Google was going to separate from Docker and do its own thing. So embracing Google and Kubernetes might be the thing that keeps Google from running away - and leading the pack away from Docker. I personally think that after some iterations, Swarm and Kubernetes might disappear and give way to another tool, which consists of parts of Swarm and Kubernetes. You might want to buy KubeSwarm.com today? 😉 Oh, and if you want to join the adventure early: beta.docker.com will get you started.

Docker Certified Associate

There is an official Docker certification available now, which can be found here. Being a beta tester, I was already in that program - however, the first experience was kind of rough, which should by now have been corrected to a more pleasant one. To get you started, we prepared a little DCA Prep Guide on GitHub. PRs are very welcome!

ARM (IoT) - Resin

Finally I got to meet up with the nice guys and girl of resin.io - if you're a regular on my blog, you might have seen a load of different articles and videos about their infrastructure platform for IoT, as well as their OS, resinOS, for IoT applications. Basically: they get Docker running on the Raspberry Pi and similar platforms. And they also created balena, which I already talked about in this post.

ARM (Server) - Qualcomm

DockerCon is a software convention by design, so vendors like Cisco had a hard job getting to people. But the hardest job of all went to Chandni Chandran and Elsie Wahlig from Qualcomm. They actually showcased the coolest piece of hardware of the whole convention (only rivaled by the bleeding-edge new specialized prototype Raspberry Pi from resin ;)) - the Qualcomm Centriq 2400 - a 48-core ARMv8 CPU, ready for datacenter usage - and yes, it does run LinuxKit! Meeting Chandni, the Product Manager for this server series, and Elsie, the Director of Product Management for Datacenter Technologies, was a huge honor, as well as a blast for an ARM fanboy like me ^^'. The cool thing about their technology is that it might soon come to packet.net (which I reviewed some time ago) - so let's keep our fingers crossed that this beautiful and awesome machine finds its way into the racks of every major hosting provider - and maybe onto my table 😉 [Hey, it is cool that you could *really use it* without building a server closet around it - and.. for the other, insane path - dreaming is allowed, ain't it? :3]

Monitoring

Monitoring felt like the "hot-*hit" of this DockerCon: nearly everyone was holding up a product in that area, be it DataDog, sysdig or Instana. However, as some booth visitors pointed out, some of these products, like DataDog, only exist as a SaaS solution and cannot be used on-prem. Quite the security concern you've got there... I would go with Instana.

Security Solutions

CyberArk, BlackDuck, Aqua Security, Twistlock and cilium - among others. I would vote for cilium, as they do Open Source.

Storage

StorageOS, Virtuozzo Storage, Elastifile, NetApp and Zenko CloudServer were the main players. However, next to the raw storage, storage adapters were also on show, like the Zenko Multi Cloud Connector or the ever-famous RexRay - it seems to be quite a trend to move more and more towards Amazon S3-compatible interfaces. For reference, I linked only vendors which had Community Editions or open source software available ;).

Virtualization

VMware, Cisco, IBM, Nutanix. Well, that was a surprise. While VMware stays on track with its vSphere Integrated Containers 1.2, Cisco tries to wrap up Docker in its UCS and FlexPod series via Red Hat. IBM starts up with its IBM Cloud Private - which even comes in a Community Edition - and seems interesting. Same goes for the Nutanix Community Edition, which can be checked out here.

Misc

Somehow everyone seems to go for an Enterprise Edition now: Crate.io, Redis Enterprise and nginx+ (load balancer). But some of these corporations deserve special mention: nginx+ is all about trust, as you get your binaries delivered without any "call home" - which is a nice thing and I would love to see this become the "norm" again. Other than that, jFrog was there with awesome coffee and lovely designed shirts, as well as Atlassian and other DevOps tool makers like Puppet, Chef and Rancher. Also, Microsoft, Amazon and Google had their booths as well, but... well, that was kind of a must, so ;).

 

2.) The whole story

Well, now that we got through all that, let's add some images to complete the overall picture ;)!

Flight from Luxembourg to Zurich

Shortly before Zurich

Zurich Airport - here I got the news that resin's new Moby clone balena had been released. So I just grabbed the next WiFi connection, downloaded all the files - and went back into the air - next stop: Copenhagen!

Zurich from above

balena Experiments in Flight

... and working!

Arrived in the heart of Copenhagen

My sleeping and dev place for the days to come 😉

Skt Alban Church

Kastellet

Kastellet

Kastellet

Kastellet

Kastellet

Harbour

The Little Mermaid

Amalienborg Palace

Nyhavn

Nyhavn

Nyhavn

North Atlantic House

Folketheatret

Bicycles

Soylent Green - anyone?

Food - some assembly required

Chili from Berlin, Wine from Trier and Water from Copenhagen - does taste awesome

Working on the JCTixx v2 Portable Scanner Type M / Munin

Crew on board!

Bella Center on Monday, first Docker day!

Already a cool start

On Monday, we got to attend the Community Leaders Summit. We finally got to meet a big part of the Community Leaders family, Meetup organizers from all over the world - and other Campus Ambassadors, which was awesome. I finally got to meet Karen Bajza, Bret Fisher, Jean-Marc Meessen and Jens Doose, which was really cool :).

The obligatory picture with the floating Moby on Day 2 was even an event of its own - taking it earned you a special pin.

Breakfast was awesome, I loved those pancakes...

... and did I mention we got a load of coffee everywhere around the Bella Center? During all that time, I could meet up with Xinity, Gildas Cuisinier, Oliver Robert and Xiaowen - waiting eagerly for the keynote.

The first Keynote was one of the biggest events

... and Modernize Traditional Apps (MTA) was the big buzzword for the days to come...

... I could not help but start hacking around with image2docker during the keynote... The results were mixed.

#dinoselfie - with Michael Irwin!

resin.io's beautiful new custom Raspberry Pi board...

..with some awesome features!

Cisco is also committed to containers

... and jFrog had some awesome designs

OpenFaaS with Alex Ellis

Qualcomm Centriq 2400

Qualcomm Centriq 2400 - that's what I call hands-on!

Docker Party at the second evening...

...in an old train hall...

...with lots of space...

... and games!

Tivoli park 🙂

Bella Center in the morning of Day 3

The famous jFrog Universal Coffee Registry

... and the even more famous resin.io Demo -
with a Raspberry Pi - running resin.io - in resin 😉

Working on balena demos at the resin booth

Working on balena demos at the resin booth

Lunch was awesome, too 😉

I finally met Chanwit Kaewkasi during a Hallway Track, and as he is a fellow ARM fan, I could not help but bring him to the Qualcomm booth to get him some demo time - I think he liked it ;).

Elsie and Chandni demoing LinuxKit on their new Qualcomm system

A last selfie with Karen 🙂

And I needed to pay the container bath a visit - on the Hallway Track

Leaving Copenhagen on Day 4 - and the last thing I see...

... is a container ship. Well. Ain't going to get more Dockerized than that ;)!

I was very lucky and attended the MTA Pin Challenge - during which I bumped into Mano Marks - finished the challenge first, and got a custom WASD keyboard with a nice finish. Nick Harvey found out about that - and about my new job - and completely went bananas when he figured out that this keyboard might soon receive code which could go to space... Well... Let's hope so ;).

And with that, I conclude my little DockerCon recap. I hope you enjoyed the ride as much as I did - there are certainly interesting times ahead ;)!

PS: I heard the recorded videos are online now :3

State of Decay: M.Sc. / Docker Campus Ambassador / Work

Hi there,

the last year was quite a ride, so I wanted to share some more or less private stuff about what is going on in my life currently:

a) Though not yet mentioned here on the blog, I did my Bachelor's degree in Applied Computer Science at the University of Applied Sciences (htw saar) in Saarbrücken. After that, I started my Master's studies in Computer Science at the University of Trier. I concluded my studies with the degree Master of Science in July of this year :).

b) As of July, I also started working as a Docker Campus Ambassador. The main idea of said program is to give students access to a fellow peer who is involved with Docker and can act as a kind of mentor for all Docker-related questions - which was something I was already doing back in early 2016, e.g. at Saarcamp 2016 - so I decided to do said work in a more official way 😉

Sadly, I am going to have to resign from said position soon, as I am going to leave the University of Trier at the end of the year. Which brings me to c):

c) I am going to move to another city at the end of the year and start working for a yet-to-be-disclosed corporation in the aerospace sector as an IT Engineer with some additional workloads - which I am very excited about.

d) I am still working on my ticketing / convention management system called JCTixx, and that system just got a big upgrade in terms of new ticket scanners, which come in two different types: the big appliance version and a smaller, portable version:

Also, I had the opportunity to visit DockerCon Europe 2017 - which I am going to write about in an upcoming post.

So to sum it all up: this year was quite a ride and I am very happy with how it turned out so far - and I am hoping for some vacation as soon as possible - and more time to produce content for this blog :)! Thanks a lot for sticking with me. I just realized that this blog is going to be 10 years old next year - which is quite awesome - and I hope to continue this work with interesting articles and insights, maybe from a higher, earth-viewing perspective, quite soon ;).

All the best,

Nico

Running resin balena on Raspberry Pi 3

Just two days ago, resin.io announced balena, their new Moby-based container engine. Basically, it is a Docker drop-in replacement for IoT devices: it is compatible with Docker and Docker Hub, gains a lot of stability with atomic pulls and more conservative flash memory use - as well as smaller updates due to true container delta pulls. Also, it comes bundled as a single binary, is smaller in size and as easy to use as Docker. So - a very good bundle.

However, this comes with the disadvantage of losing plugin support, Swarm, cloud logging, overlay networking and non-boltdb-backed stores - which is a small price to pay, as none of these features are really needed in an IoT scenario.

balena is going to replace Docker in resin.io and resinOS in the near future - but I wanted to test-drive it right now, which ended up in me plugging in my Raspberry Pi 3 in flight from Zurich to Copenhagen and getting it "flying" ;).

To get it working, little is needed :):

1.) Download and install the latest Raspbian image (Stretch Lite should do the trick)

2.) Log in to the RPi and run the installer: curl -sfL https://balena.io/install.sh | sh

(Always check such a script before piping it to a shell, to be sure nothing bad happens!)

3.) Start the balena daemon in the background:

sudo balenad &

4.) Now you can use balena just like docker, with the command sudo balena
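As a quick smoke test - assuming balena really keeps Docker's CLI behavior, and using the public arm32v6/hello-world image from Docker Hub as an example (any ARM-compatible image should do) - something like this should confirm that pulling and running works:

# pull and run a minimal ARM test image, removing the container afterwards
sudo balena run --rm arm32v6/hello-world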

The whole thing works pretty well - this short scribble is just to get it working in a hackish way - a real tutorial will come as soon as I get the time to make it really persistent and auto-starting... But.. well ;). Living on the edge comes with sacrifices :).

If you're at DockerCon and want to meet up, just send me a message via Twitter, e-mail or the Hallway Track - see you soon ;)!

CUDA and TensorFlow in Docker

In this howto we will get CUDA working in Docker. And - as a bonus - add TensorFlow on top! However, please note that you'll need the following prerequisites:

GNU/Linux x86_64 with kernel version > 3.10
Docker >= 1.9 (official docker-engine, docker-ce or docker-ee only)
NVIDIA GPU with Architecture > Fermi (2.1)
NVIDIA drivers >= 340.29 with binary nvidia-modprobe
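Before you begin, a quick sanity check of the first two requirements can't hurt:

uname -r            # kernel version, should be above 3.10
docker --version    # should report version 1.9 or newer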

We will install the NVIDIA drivers in this tutorial, so you only need the right kernel and Docker version already installed; we're using an Ubuntu 15.05 x64 machine here. For CUDA, you'll need a Fermi 2.1 CUDA card (or better); for TensorFlow, a card with compute capability >= 3.0...

Which graphics card model do I own?
lspci | grep VGA
sudo lshw -C video

Example output:

product: GF108 [GeForce GT 430]
vendor: NVIDIA Corporation

You should look up whether it works with CUDA / Fermi 2.1, e.g. on https://developer.nvidia.com/cuda-gpus

GeForce GT 430 - Compute: 2.1

Ok, that one works!

I got additional info from: https://www.geforce.com/hardware/desktop-gpus/geforce-gt-430/specifications

CUDA and Docker?

You can find out more about that topic on https://github.com/NVIDIA/nvidia-docker

Getting it to work will be the next step:

Download the right CUDA / NVIDIA driver

from http://www.nvidia.com/object/unix.html
I chose Linux x86_64/AMD64/EM64T, Latest Long Lived Branch version: 375.66, but please check in the description of the file whether your graphics card is supported!

After the download, install the driver:
chmod +x NVIDIA-Linux-x86_64-375.66.run
sudo ./NVIDIA-Linux-x86_64-375.66.run

It will ask for permission; accept it. If it tells you that the nouveau driver needs to be disabled, accept that as well - the setup will then generate a blacklist file and exit. Afterwards, run

sudo update-initramfs -u

and reboot your server. Then, rerun the setup with

sudo ./NVIDIA-Linux-x86_64-375.66.run

You can check the installation with

nvidia-smi

and get an output similar to this one:

Mon Jul 24 09:03:47 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.66                 Driver Version: 375.66                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 430      Off  | 0000:01:00.0     N/A |                  N/A |
| N/A   40C    P0    N/A /  N/A |      0MiB /   963MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
+-----------------------------------------------------------------------------+

which means that it worked!

Install nvidia-docker and nvidia-docker-plugin
wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
sudo dpkg -i /tmp/nvidia-docker*.deb && rm /tmp/nvidia-docker*.deb
Test nvidia-smi from Docker
nvidia-docker run --rm nvidia/cuda nvidia-smi

should output:

Using default tag: latest
latest: Pulling from nvidia/cuda
e0a742c2abfd: Pull complete
486cb8339a27: Pull complete
dc6f0d824617: Pull complete
4f7a5649a30e: Pull complete
672363445ad2: Pull complete
ba1240a1e18b: Pull complete
e875cd2ab63c: Pull complete
e87b2e3b4b38: Pull complete
17f7df84dc83: Pull complete
6c05bfef6324: Pull complete
Digest: sha256:c8c492ec656ecd4472891cd01d61ed3628d195459d967f833d83ffc3770a9d80
Status: Downloaded newer image for nvidia/cuda:latest
Mon Jul 24 07:07:12 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.66                 Driver Version: 375.66                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 430      Off  | 0000:01:00.0     N/A |                  N/A |
| N/A   40C    P8    N/A /  N/A |      0MiB /   963MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
+-----------------------------------------------------------------------------+

Yep, you got it working in Docker!

Running an interactive CUDA session isolating the first GPU
NV_GPU=0 nvidia-docker run -ti --rm nvidia/cuda
Input our first Hello World program
echo '#include <stdio.h>

// Kernel definition - __global__ marks a function that runs on the GPU (empty at this point)
__global__ void kernel(void) {
    // printf("Hello, Cuda!\n");
}

int main(void) {
    // Kernel launch with <<<1,1>>>: 1 block, 1 thread
    kernel<<<1,1>>>();
    printf("Hello, World!\n");
    return 0;
}' > helloWorld.cu
Compile it within the Docker container
nvcc helloWorld.cu -o helloWorld
Execute it...
./helloWorld
and you get...
Hello, World!

Congrats, you got it working!

Encore: TensorFlow
Getting TensorFlow to work is straightforward:
nvidia-docker run -it -p 8888:8888 tensorflow/tensorflow:latest-gpu

It will output something like:

Copy/paste this URL into your browser when you connect for the first time, to login with a token:
http://localhost:8888/?token=d747247b33023883c1a929bc97d9a115e8b2dd0db9437620

you should do that 🙂

Then open the 1_hello_tensorflow notebook and run the first sample:

from __future__ import print_function
import tensorflow as tf
with tf.Session():
    input1 = tf.constant([1.0, 1.0, 1.0, 1.0])
    input2 = tf.constant([2.0, 2.0, 2.0, 2.0])
    output = tf.add(input1, input2)
    result = output.eval()
    print("result: ", result)

by selecting it and clicking the >| (run cell, select below) button.
This worked for me:

result: [ 3. 3. 3. 3.]

however... sadly, it was not the GPU calculating the results, as shown by the Docker CLI:

Kernel started: 2bc4c3b0-61f3-4ec8-b95b-88ed06379d85
[I 07:31:45.544 NotebookApp] Adapting to protocol v5.1 for kernel 2bc4c3b0-61f3-4ec8-b95b-88ed06379d85
2017-07-24 07:32:17.780122: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-07-24 07:32:17.837112: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:893] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2017-07-24 07:32:17.837440: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 0 with properties:
name: GeForce GT 430
major: 2 minor: 1 memoryClockRate (GHz) 1.4
pciBusID 0000:01:00.0
Total memory: 963.19MiB
Free memory: 954.56MiB
2017-07-24 07:32:17.837498: I tensorflow/core/common_runtime/gpu/gpu_device.cc:961] DMA: 0
2017-07-24 07:32:17.837522: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0:   Y
2017-07-24 07:32:17.837549: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] Ignoring visible gpu device (device: 0, name: GeForce GT 430, pci bus id: 0000:01:00.0) with Cuda compute capability 2.1. The minimum required Cuda capability is 3.0.

So, it's CUDA compute capability >= 3.0 devices only for TensorFlow 🙁 - but it still works, as it falls back to the CPU (however, not as fast as it could be :/)
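If you want to double-check which devices TensorFlow actually picks up, the device_lib helper from TensorFlow 1.x can list them - a quick one-liner using the same image:

# lists the CPU (and, if usable, GPU) devices as seen by TensorFlow
nvidia-docker run --rm tensorflow/tensorflow:latest-gpu python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"

On a compute capability 2.1 card like the GT 430, only the CPU device shows up - matching the log above.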

Info taken from:

https://github.com/NVIDIA/nvidia-docker
https://developer.nvidia.com/cuda-gpus
https://hub.docker.com/r/tensorflow/tensorflow/

[resinOS] Build resinOS from scratch

At the time of writing, resinOS is available for download as version 2.0.6+rev3.dev for the Raspberry Pi 3. This build, however, is nearly two weeks old, and in the meantime something great happened: Docker was finally updated to version 17.03.1 - up from the old 1.10-ish version, which was not that cool (and came without Swarm ;)). So it is a good idea to learn how to build your own resinOS, in case you really want to live on the bleeding edge ;).

Install Dependencies (Ubuntu 16.04 LTS)

sudo apt-get install gawk wget git-core diffstat unzip texinfo gcc-multilib \
     build-essential chrpath socat cpio python python3 python3-pip python3-pexpect \
     xz-utils debianutils iputils-ping libsdl1.2-dev xterm

go to /, because this build will create very long filenames

cd /

clone the repo, maybe some root power is needed here 😉

git clone https://github.com/resin-os/resin-raspberrypi
cd resin-raspberrypi
git submodule update --init --recursive

you would be done here and could build your own resinOS with the build command below - however, if you really want to pull the latest upgrades...

cd layers/meta-resin
git checkout master
git pull
cd ../..

finally build resinOS for Raspberry Pi 3

./resin-yocto-scripts/build/barys -r --shared-downloads $(pwd)/shared-downloads/ --shared-sstate $(pwd)/shared-sstate/ -m raspberrypi3                       

after quite some time, you'll find the image in

build/tmp/deploy/images/raspberrypi3/resin-image-raspberrypi3.resinos-img
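To get the freshly built image onto an SD card, Etcher works - or plain dd, as in this sketch (here /dev/sdX is a placeholder for your SD card device; double-check with lsblk before writing!):

# WARNING: dd overwrites the target device completely - verify /dev/sdX first!
sudo dd if=build/tmp/deploy/images/raspberrypi3/resin-image-raspberrypi3.resinos-img of=/dev/sdX bs=4M status=progress conv=fsync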

 

There is quite a lot of stuff you can change in your resinOS build, so be sure to check out https://resinos.io/docs/custombuild/ for more documentation on that topic. Have fun :)!

[resinOS] Dockerize your own Flask GUI with resinOS

I have been working with Docker and resin.io as well as resinOS for quite some time now and am actively using those services for different projects: the possibility to just throw away a container and build it anew in record-breaking time is just awesome, as is the fact that I can now just ship images to some device, deploy them there - and they will just work. As some of you know, I volunteer in different NPOs, e.g. the RepairCafé Trier, in which I repair computers and mobile phones for free, the PiAndMore / CMD e.V., at which I teach interested guys and gals about the Raspberry Pi and Linux (and even did two presentations on resin.io and resinOS, both linked on this blog - and available as video :)) - and also one other NPO which is trying to share information about Japanese culture. For those events I already created a ticketing system called JCTixx, which uses QR codes to sell and check tickets for said festivals. However, I created the scanner hardware in 2010, just starting the second year of my apprenticeship as an IT systems electronics technician, without anything like a Raspberry Pi available. Being a bit creative, I just took some old Linux routers, threw the good old OpenWRT on them and soldered directly to their test points / CPU to get the GPIO pins I needed ;). That worked very well - however, nowadays TLS 1.2 and the like eat away at those small MIPS cores - and in 2015 I could, with a lot of work, just barely get them working with said crypto. Being 7 years old now, I need new scanners - and a more flexible way of delivering software as well. So resinOS it is! 🙂

Full disclosure: as I am currently developing the hardware for the scanners and am always looking for some Raspberry Pis and other hardware, I stumbled upon resin.io's #blog4swag program and decided to write this tutorial for two reasons: 1.) I actually need a dockerized GUI for the scanners and am therefore working on a good solution to the problem - and, as usual, I want my dear readers to get some interesting stuff to read. 2.) Maybe I can acquire some hardware support from the resin guys, which would then help me to help my NPO :). So in the end, everyone wins and no one dies - to cite Doctor Who :).

So, let's get started!

1.) Getting some GUI working at all
I saw some users experimenting with Docker and Chromium on x86 systems - however, the first resin project I stumbled upon which really made me think about putting the GUI within a container was resin.io's ElectronJS example. This thing really rocked, but I saw that I needed to work on something a bit easier which would just deploy some sort of GUI - while being as simple as it gets. So I started to look around and stumbled across different projects, namely RPiBrowser-resin.io and resin-electron-app. Those two were a great starting point, and also using resin.io's ElectronJS example, I started hacking the code together, which I finally submitted to this Git repo: docker-raspbian_xinit. With the great help of Gergely Imreh from the resin.io team (thanks a lot :)!) I got the project working and pushed it onto Docker Hub. To walk you through the Git repo: I made two versions, one pinned to the Raspbian release of 26.04.2017 and one made from the latest version. I basically grabbed resin.io's Raspbian from Docker Hub, injected the qemu files (which they showed in one of their tutorials) and used this as a starting point. It enables Docker Hub to build the Raspberry Pi images, so that you can use them on your Pi.

If you want to build the docker-raspbian_xinit image yourself, you need to remove the RUN [ "cross-build-start" ] and RUN [ "cross-build-end" ] commands from the Dockerfile :).
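If you're building directly on the Pi, a quick way to strip those lines - a sketch using GNU sed:

# delete the cross-build helper lines from the Dockerfile before a local build
sed -i -E '/cross-build-(start|end)/d' Dockerfile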

The Dockerfile itself builds a basic system with xinit / xorg, ALSA and even touchscreen support (xinput-calibrator). It generates German locales and a user called pi (both are not strictly needed, but I included them as a matter of personal preference), copies the needed files and enables systemd. Systemd will start the xinit-docker.service, which invokes start.sh. That script prepares the system: it adds the container name to /etc/hostname so that sudo will work, activates its own SSH server on port 22, includes a touchscreen calibration file - if available, creates a much-needed config folder, removes files from old X11 sessions, sets the volume to 100% on speakers and headphones, and then starts the xinit process with launch_app.sh :).
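(The hostname trick mentioned above is the usual cure for sudo's "unable to resolve host" complaint inside containers; as a rough sketch - not necessarily the exact code from the repo - it boils down to:)

# sketch: make the container's hostname resolvable so sudo stops complaining
echo "127.0.0.1 $(hostname)" >> /etc/hosts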

launch_app.sh then works in the xinit context and does xinit-specific stuff: it disables screen blanking, sets the keyboard to the generated German locale (you can comment this out as well if you want, or overwrite the file ;)) - and then - finally - starts the matchbox-window-manager without a titlebar, but WITH keyboard and mouse support, and launches....
... gedit.
Yeah. Right. I know that is a real disappointment. But I needed a small tool which would not add too much file size and would show that keyboard and mouse are working - so I just went for gedit ^^'. Sorry if you were hoping to find something really awesome and cool at this point. But nonetheless, it works, and this image as well as the Git files should serve as a starting point for your own xinit adventure - so, that's the reason :).

After I finally got it working, I thought about my personal use case: I will be using a lot of Python and thought about using Flask for the GUI. However, Flask is just a web framework and does not have the ability to show something "GUI-like" by itself - thus needing some kind of web browser - and this part could be found in the shape of pywebview. Pywebview just wraps a website, app or similar into a small GUI frame with a WebKit browser in it. Cool! Exactly what I needed. However, I did not have time to work on my own UI - and wanted to jumpstart the project by getting the Docker container to work - so I decided to grab a cool small Flask web GUI project and use it to showcase how easy it is to build a self-starting Docker container GUI - on resinOS. And with that in mind, I went for the very cool helloflask calculator by Grey Li.

2.) Getting a Flask GUI working - using 1. 😉
Ok, as soon as I got my xinit project working, I decided to use it as a base image, just overwriting changes in the system and injecting the files needed to run the pywebview'd calculator.
With that in mind, the Dockerfile became quite small (you'll find the source files on GitHub and the image on Docker Hub as well :))

FROM nmaas87/docker-raspbian_xinit:jessie-20170426
MAINTAINER Nico Maas <mail@nico-maas.de>
ENV DEBIAN_FRONTEND noninteractive
RUN [ "cross-build-start" ]

RUN apt-get update \
    && apt-get install -yq --no-install-recommends \
        python3-pip python3-pyqt5 python-gi gir1.2-webkit-3.0=2.4.9-1~deb8u1+rpi1 gir1.2-javascriptcoregtk-3.0=2.4.9-1~deb8u1+rpi1 libjavascriptcoregtk-3.0-0=2.4.9-1~deb8u1+rpi1 libwebkitgtk-3.0-0=2.4.9-1~deb8u1+rpi1 \
    && apt-get autoremove -qqy \
    && apt-get autoclean -y \
    && apt-get clean && rm -rf /var/lib/apt/lists/* && mkdir /var/lib/apt/lists/partial \
    && pip3 install Flask pywebview \
    && mkdir /usr/src/app/templates /usr/src/app/static

# copy program
COPY src /usr/src/app

# start init system
ENV INITSYSTEM on

RUN [ "cross-build-end" ]

I just needed to include the cross-build-start and -end for Docker Hub again, install the python3-pip and PyQt5 dependencies with WebKit (in a specific version, as it otherwise did not work...) and then install Flask and pywebview. I then proceeded to inject the pywebview-erized calculator:

import sys
import threading

import webview

# `app` is the Flask app of the helloflask calculator, defined earlier in app.py

def start_server():
    # bind to all interfaces on port 80 so the local webview can reach it
    app.run(host="0.0.0.0", port=80)

if __name__ == '__main__':
    # run Flask in a daemon thread so it does not block the GUI
    t = threading.Thread(target=start_server)
    t.daemon = True
    t.start()

    # open a WebKit window pointing at the local Flask server, then go fullscreen
    webview.create_window("Calculator", "http://127.0.0.1:80/")
    webview.toggle_fullscreen()

    sys.exit()

I had to create a start_server function to let Flask run in its own thread, while pywebview shows the GUI in fullscreen mode and connects to the Flask server.

As a last step, I needed to rewrite launch_app.sh:

#!/bin/bash

# Disable DPMS / Screen blanking
xset -dpms
xset s off
xset s noblank

# Change Keyboard Layout from US to German
setxkbmap de

# Debug Tools
#xinput --list
#evtest

# Start Window Manager
sudo matchbox-window-manager -use_cursor yes -use_titlebar no & 
#sudo matchbox-window-manager -use_cursor no -use_titlebar no &

# Start Payload App
#gedit
python3 /app/app.py

As you can see, I only changed the "Start Payload App" line, which now starts Python 3 with the pywebview/Flask/calculator app.

And that's it :).

 

To use this app with resinOS, just go to resinos.io, download the latest 2.0.3 release for the RPi, flash the image to your SD card using e.g. Etcher, boot your RPi and SSH to your system's IP, using user root and port 22222. From then on, you can just run the app via

docker run --name pywebview --privileged --restart=unless-stopped nmaas87/docker-raspbian_pywebview:jessie-20170426

or you can upload the Git src folder to /mnt/data and build your own version of this pywebview image using

docker build -t pywebview .

Please do NOT forget to comment out the RUN [ "cross-build-start" ] and RUN [ "cross-build-end" ] commands!

After that worked, you can start the app via

 

docker run --name pywebview --privileged --restart=unless-stopped pywebview

 

You can also use the app with resin.io by creating a resin.io account and a new RPi app, then pushing either the latest or jessie-20170426 tag to resin - and it should build and work :). However, I am more of a fan of the flexibility resinOS offers in terms of developing Docker apps - so I decided to describe this way here.

And with that said, you can now start working on your own GUI apps - running on resinOS / resin.io on your RPi or a similar device :)! Have fun - and if you'll excuse me, I now have 4 JCTixx ticket scanners to build ^^'.

[Docker] Autobuild Docker images from GitHub repos you don't own

With Docker Hub it is dead simple to link your GitHub account to Docker, choose a GitHub repo which contains your Dockerfiles and apps - and build them as soon as you push new code. But as soon as you're not the owner of the repo (i.e. you're a fan and want to provide Docker builds on Docker Hub), things get messy. Actually, though, it works quite easily: just apply some IFTTT - and you're set ;).

Ok, a little bit more detail: If This Then That is a nice website which lets you create small but powerful applets. E.g.: if the weather is cloudy, send me an email. Things like that. Just sign up on https://ifttt.com, it's free! 🙂

After that, just go over to https://hub.docker.com. Link your GitHub account with Docker Hub and create an Auto Build repo from the GitHub repo with the Dockerfile of the project you want to build. In my case, the program I'd like to build is developed on GitHub itself, so my Dockerfile contains some instructions to just check out that repo, build the program and.. done :)! After this is working, go to the "Build Settings" dialog of your Docker Hub repo and then to the "Build Triggers" area. Activate the build triggers and note down the trigger URL. Every time that URL is called, your repo is going to be rebuilt :).

The only thing we now need is to know when a new commit is made. And GitHub makes that very easy: go to your chosen GitHub repo (I will use pklaus' awesome brother_ql as an example: https://github.com/pklaus/brother_ql), click on the commits of the branch you'd like to use (i.e. master in my case) - and you will be redirected to the page showing all commits: https://github.com/pklaus/brother_ql/commits/master. Just append .atom to that URL, and voilà: you've got yourself an RSS feed of that commit stream ;)! ( https://github.com/pklaus/brother_ql/commits/master.atom )

Copy that link as well; we're going to combine both links via the "magic" of IFTTT ;): click on "My Applets", "New Applet" and on the "+ this" part. Select the Feed service and "New feed item". Paste the collected Atom/RSS feed URL into the Feed URL field and click "Create Trigger" (https://github.com/pklaus/brother_ql/commits/master.atom).

After that, click on the "+ that" part and search for "Maker" - choose it as the action service - the action should be "Make a web request". Now enter the build trigger URL from Docker Hub you noted down, choose the "POST" method, "application/json" as the content type and enter {"build": true} as the body. After that, click save.
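For reference, that web request is the same thing you could fire by hand with curl - <your-trigger-url> below is a placeholder for the trigger URL you copied from Docker Hub:

# manually trigger a Docker Hub rebuild, just like the IFTTT applet does
curl -s -X POST -H "Content-Type: application/json" -d '{"build": true}' <your-trigger-url>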

Activate your newly made applet - and you're done: as soon as someone pushes a new commit, IFTTT will see it and trigger a rebuild of your Docker Hub repo :).

(Yeah, that is a really hacked-together part, but it works ;))