WD My Cloud Mirror Gen2 with Debian 11 and Linux Kernel 5.15 LTS

Intro

Since 2017 I have been using a Western Digital My Cloud Mirror Gen 2, which I bought during Amazon's Black Friday (or a similar sale) - because the included 2x 8 TB WD Red drives were even cheaper with the NAS than standalone. Using the NAS had been quite OK; especially the included Docker engine and Plex support were nice to have - the backdoor included in older firmware versions, not so much. Recently WD replaced the old My Cloud OS 3 with their new "My Cloud OS 5" - and made things worse for a lot of people. As I don't want any more surprises - and do want more control over my hardware - I decided to finally go down that road and get Debian 11 with an LTS (5.15) kernel running on the hardware. This is how it went.

Warning

Warning: these are just my notes on how to convert a My Cloud OS 3 / My Cloud Mirror Gen 2 device into a "real" Debian system. You will need to take your device fully apart, solder wires, and void the warranty. You will also lose all your data, and you may even brick the hardware if something goes wrong. I take no responsibility whatsoever, nor can I give support. You're on your own now.

Step 0: Get Serial Console Access

Without a serial console, you will not be able to do anything here. You will need to completely disassemble the NAS and will lose all warranty. The bare motherboard will look like this. On the far right side you will see the pins for the UART interface you will need to solder to.

When you're done with that, connect your 3v3 TTL UART USB device like this:

... and connect to it at 115200 baud with PuTTY, Tera Term Pro or any other terminal software (do not connect the 3v3 pin :)). It would be wise to start without the hard drives installed.
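From a Linux host, the connection can be sketched like this (the device path is an assumption - your adapter may enumerate differently, check dmesg after plugging it in):

```shell
# /dev/ttyUSB0 is an assumption; verify with dmesg | tail after plugging in.
screen /dev/ttyUSB0 115200
# or, alternatively:
picocom -b 115200 /dev/ttyUSB0
```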

Step 1: Flashing U-Boot

The current U-Boot on the NAS is flawed, so you need to replace it. I will be quoting CyberPK here, who did an awesome job explaining everything:

We have to prepare a USB drive formatted as FAT32, extract the U-Boot image from the link onto it and connect it to USB port #2.

Connect the device to the serial adapter, power on the device and keep pressing '1' (one) during boot until you can see the 'Marvell>>' command prompt.
Press Ctrl+C,
then:

We will now start to change stuff and break stuff. But if I can give you one tip before you start: please execute printenv once. Copy and paste all env variables and everything U-Boot spits out. It could save your hardware one day. Thanks, Nico out!

usb start
bubt u-boot-a38x-GrandTeton_2014T3_PQ-nand.bin nand usb
reset

This will reboot the device. Access the command prompt again and add the following environment variables, a modified version of the ones provided by bodhi in this post:

setenv set_bootargs_stock 'setenv bootargs root=/dev/ram console=ttyS0,115200'

setenv bootcmd_stock 'echo Booting from stock ... ; run set_bootargs_stock; printenv bootargs; nand read.e 0xa00000 0x500000 0x500000;nand read.e 0xf00000 0xa00000 0x500000;bootm 0xa00000 0xf00000'

setenv bootdev 'usb'

setenv device '0:1'

setenv load_image_addr '0x02000020'

setenv load_initrd_addr '0x2900000'

setenv load_image 'echo loading Image ...; fatload $bootdev $device $load_image_addr /boot/uImage'

setenv load_initrd 'echo loading uInitrd ...; fatload $bootdev $device $load_initrd_addr /boot/uInitrd'

setenv usb_set_bootargs 'setenv bootargs "console=ttyS0,115200 root=LABEL=rootfs rootdelay=10 $mtdparts earlyprintk=serial init=/bin/systemd"'

setenv bootcmd_usb 'echo Booting from USB ...; usb start; run usb_set_bootargs; if run load_image; then if run load_initrd; then bootm $load_image_addr $load_initrd_addr; else bootm $load_image_addr; fi; fi; usb stop'

setenv bootcmd 'setenv fdt_skip_update yes; setenv usbActive 0; run bootcmd_usb; setenv usbActive 1; run bootcmd_usb; setenv fdt_skip_update no; run bootcmd_stock; reset'

saveenv

reset

(This code was also modified by me to use fatload instead of ext2load.)

With this, our NAS is ready.

Step 2: Build a kernel and rootfs

  • On your current Linux machine, get yourself a copy / git clone of Heisath's wdmc2-kernel repo
  • Get all dependencies installed according to that repo; I did this on a Debian 11 machine
  • Replace the file content of wdmc2-kernel/dts/armada-375-wdmc-gen2.dts with the content of the real and improved dts for the WDMCMG2 (original from this link, copy available here) - but keep the file name armada-375-wdmc-gen2.dts
  • Replace the file content of wdmc2-kernel/config/linux-5.15.y.config with the file from here (please know this config isn't perfect, but it will get you running. You can always file a PR and help me out ;))
  • Start the build process in wdmc2-kernel with ./build.sh
  • Mark: Linux Kernel, Clean Kernel sources, Debian Rootfs, Enable ZRAM on rootfs
  • Kernel -> Kernel 5.15 LTS
  • Build initramfs -> Yes
  • Debian -> Bullseye
  • Fstab -> usb
  • Rootpw -> whateverYouWant
  • Hostname -> whateverYouWant
  • Locales -> no changes, accept (or whatever you want)
  • Default locale for system -> en_US.UTF-8 (or whatever you want)
  • Tzdata -> Your region
  • Now your kernel and rootfs will be built

While this is on-going, get yourself a nice USB 2.0 or USB 3.0 stick prepared with

  • partition table: msdos
  • 1st partition: 192 MB, FAT32, label set to boot, boot flag enabled
  • 2nd partition: rest, ext4, label set to rootfs
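The layout above can be written down as an sfdisk script (a sketch, under the assumption that your stick shows up as /dev/sdX - double-check the device name before writing to it):

```
# layout.sfdisk - msdos table, 192 MB FAT32 boot partition (bootable), ext4 rest
label: dos
start=2048, size=192MiB, type=c, bootable
type=83
```

Apply it with `sfdisk /dev/sdX < layout.sfdisk`, then create the filesystems with `mkfs.vfat -F 32 -n BOOT /dev/sdX1` and `mkfs.ext4 -L rootfs /dev/sdX2` - the rootfs label is the one that matters, since the bootargs above use root=LABEL=rootfs.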

When the kernel is done compiling and your USB stick is prepared, copy all the files (sda is the name of my USB stick):

  • mkdir /mnt/boot /mnt/root
  • mount /dev/sda1 /mnt/boot
  • mount /dev/sda2 /mnt/root
  • mkdir /mnt/boot/boot
  • cp wdmc2-kernel/output/boot/uImage-5.15.* /mnt/boot/boot/uImage
  • cp wdmc2-kernel/output/boot/uRamdisk /mnt/boot/boot/uInitrd
  • tar -xvzf wdmc2-kernel/output/bullseye-rootfs.tar.gz --directory=/mnt/root/
  • rm -rf /mnt/root/etc/fstab
  • cp /mnt/root/etc/fstab.usb /mnt/root/etc/fstab
    // within /mnt/root/etc/fstab:
    // change all /dev/sdb to /dev/sdc if both drive slots of the NAS are used <- this!
    // change all /dev/sdb to /dev/sda if no drive slots of the NAS are used
  • umount /mnt/boot /mnt/root

Step 3: First boot and getting things running

Insert the USB stick into the second USB slot of the NAS. Leave the drives out for now and boot it up for the first time, watching it via the serial terminal. Log in at the end with root and your chosen password.

If it boots, you can shut it down again with shutdown -P now, unplug power, insert the drives and reboot.

First thing after the first boot with drives: build your own initramfs / ramdisk for your current setup:

  • cd /root/
  • ./build_initramfs.sh
  • cp initramfs/uRamdisk /boot/boot/uInitrd

Second, install mdadm for the RAID:

  • apt update
  • apt install mdadm
  • mkdir /mnt/HD
  • edit your /etc/fstab and add a mount point for your md/RAID. I used the old drives with my old data on them like this (depending on which mdX it comes up as...):
/dev/md0        /mnt/HD         ext4    defaults,noatime,nodiratime,commit=600,errors=remount-ro        0       1
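To not depend on which mdX the array happens to launch as, it may help to pin it in mdadm.conf - a sketch on my part, not something from the original notes:

```shell
# Record the running array in mdadm's config so it assembles under a
# stable name on every boot, then rebuild the initramfs (on this setup
# via the build_initramfs.sh script from the first-boot step) so early
# boot knows about the array as well.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```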
 

A lot of good knowledge about RAM drives can be found here.

I would advise doing steps 1 (folder2ram), 2 (kernel options) and 4 (logrotate) - option 3 did not work out for me.

To get the drives to sleep at some point, we need to reconfigure mdadm:

dpkg-reconfigure mdadm
// monthly check: ok
// daily degraded check: ok
// monitoring: disable

... and get hdparm working

apt install hdparm hd-idle

# hdparm config
# add the following to /etc/hdparm.conf:

/dev/sda {
#        apm = 127
#        acoustic_management = 127
        spindown_time = 120
#       spindown_time = 4
        write_cache = on
}

/dev/sdb {
#        apm = 127
#        acoustic_management = 127
        spindown_time = 120
#       spindown_time = 4
        write_cache = on
}

# Spindown Time means: 120 * 5 sec = 600 sec / 60 sec = 10 min
# apply it after saving the file with:
/usr/lib/pm-utils/power.d/95hdparm-apm resume
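The arithmetic in the comment above can be double-checked with a tiny helper (hedged: per hdparm(8), -S values from 1 to 240 mean n × 5 seconds; the function name is mine):

```shell
# Convert an hdparm spindown_time value (1..240) into seconds.
# Values in this range mean "n * 5 seconds" per hdparm(8).
spindown_seconds() {
  echo $(( $1 * 5 ))
}

spindown_seconds 120   # prints 600 -> 600 s / 60 = 10 min, as used above
spindown_seconds 4     # prints 20  -> the commented-out value 4 would be 20 s
```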

We can check the status of the drives with smartctl

smartctl -i -n standby /dev/sda
smartctl -i -n standby /dev/sdb

To get fan control working

apt install wget
wget -O mcm-fancontrol-master.tar.gz https://github.com/nmaas87/mcm-fancontrol/archive/refs/heads/master.tar.gz
tar -xvzf mcm-fancontrol-master.tar.gz
cd mcm-fancontrol-master/
cp fan-daemon.py /usr/sbin/
chmod +x /usr/sbin/fan-daemon.py
cp fan-daemon.service /etc/systemd/system/
systemctl enable fan-daemon
systemctl start fan-daemon

(You can change the low and high temperature values in /usr/sbin/fan-daemon.py to make the fan kick in later, and also set DEBUG = True if you want to see some details in systemctl status fan-daemon.)

mtd-utils can be useful, just mentioning it here:

apt install -y mtd-utils
cat /proc/cmdline
cat /proc/mtd

Samba ...

apt install samba --no-install-recommends
# change /etc/samba/smb.conf to your liking and setup your SMB

Plex ...

# Plex 
apt update
apt install apt-transport-https ca-certificates curl gnupg2
curl https://downloads.plex.tv/plex-keys/PlexSign.key | apt-key add -
echo deb https://downloads.plex.tv/repo/deb public main | tee /etc/apt/sources.list.d/plexmediaserver.list
apt update
apt install plexmediaserver
systemctl status plexmediaserver

Well, that's it.

Thanks a lot to all the awesome contributors on the net:

Companion repo with files: https://github.com/nmaas87/WDMCMG2

gpsTime

I think there is nothing more pleasing than having extremely precise measurements at your fingertips. Like time. While in the past it was quite problematic to measure time accurately (not talking about sundials, but... why not? ;)) - mankind has created one precise time source as a byproduct (read: "waste product") of accurate navigation: GNSS, in its different flavors like GPS, GLONASS, Galileo, BeiDou and others.

Tapping into this time source and providing it to your local computer network via NTP has been done by countless people and is an extremely rewarding task. Is it necessary? Maybe not. Is it really cool? Yes. And now it is even easier, as you don't need to configure it yourself, but can use balenaHub and the preconfigured gpsTime project.

We do not waste time on fancy logos 😉

Basically you just need an RPi B+ (2/3/4), a micro SD card, a power supply and a 3v3 TTL level GPS module with PPS output. The rest is done by going to the balenaHub entry shown above, creating a free account, flashing balenaOS onto your SD card, booting the RPi with internet access for the first time and letting it pull the needed containers. Afterwards you can use the RPi offline and still enjoy your precise time source.
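Under the hood, setups like this typically feed gpsd's NMEA and PPS data to an NTP daemon via shared memory; a minimal chrony sketch (segment numbers, offsets and device paths are assumptions on my side - the actual gpsTime project config may differ):

```
# /etc/chrony/chrony.conf (sketch; assumes gpsd -n /dev/ttyAMA0 /dev/pps0)
refclock SHM 0 refid NMEA offset 0.2 delay 0.2   # coarse NMEA time-of-day
refclock SHM 1 refid PPS precision 1e-7 prefer   # PPS-disciplined, sub-microsecond
allow 192.168.0.0/16                             # serve time to the local network
```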

A Watterott CQM-M8Q breakout and a good old RPi 2B+ are more than powerful enough

More details can be found in the GitHub repo, and you can work on and improve the project to your heart's content. I am probably going to give a PiAndMore talk about it - and use the project myself as a building block for precise timing in some support equipment.

BeagleLogic

To get your BeagleBone Black (BBB) to work as a logic analyzer, you can use BeagleLogic:

1.) Download latest version of BeagleLogic
from: https://beaglelogic.readthedocs.io/en/latest/beaglelogic_system_image.html
Released on 2017-07-13, sha256sum = be67e3b8a21c054cd6dcae7c50e9e518492d5d1ddaa83619878afeffe59c99bd
https://goo.gl/RiXGBs

2.) Burn it onto sdcard with Etcher

3.) Put SDCard into BeagleBone Black

4.) Hold "Boot" Button, plug in BBB with Mini USB Cable to your PC

5.) After 10 seconds or so, release "Boot" Button (this will just boot from SDcard with this trick)

6.) Wait some time and login to 192.168.7.2 via SSH
Username: debian
password: temppwd

7.) Test if BeagleLogic works
ls -l /dev/beaglelogic
There should be an entry /dev/beaglelogic as the answer; if not, check the BeagleLogic website
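From the shell, a raw capture can be sketched roughly like this (the sysfs attribute names are taken from the BeagleLogic docs as I remember them - verify them on your image before relying on this):

```
# Sketch: configure via sysfs, then read raw samples from the device node.
echo 10000000 > /sys/devices/virtual/misc/beaglelogic/samplerate  # 10 MHz
echo 1 > /sys/devices/virtual/misc/beaglelogic/sampleunit         # 8-bit samples
dd if=/dev/beaglelogic of=capture.bin bs=1M count=8               # grab 8 MB
```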

8.) Change uEnv.txt
sudo vi /boot/uEnv.txt
#go to following lines:
##enable Generic eMMC Flasher:
##make sure, these tools are installed: dosfstools rsync
#cmdline=init=/opt/scripts/tools/eMMC/init-eMMC-flasher-v3.sh

remove # from #cmdline=init=/opt/scripts/tools/eMMC/init-eMMC-flasher-v3.sh

#save and exit

9.) Shutdown
sudo shutdown now

10.) Boot again, this time with flasher
Hold "Boot" Button, plug in BBB with Mini USB Cable to your PC, after 10 seconds, release the button.
During the flash process of the eMMC, the 4 LEDs will put on a "Knight Rider"-like light show.
This will take some minutes. The BBB will shut down afterwards (all LEDs off).

11.) Remove SDCard, replug Mini USB and boot without holding any buttons

12.) Wait some time and login to 192.168.7.2 via SSH
Username: debian
password: temppwd

13.) Open your webbrowser and go to: http://192.168.7.2:4000/#

Next:
- Upgrade
- this enables you to also use the BeagleLogic as an "external analyzer" and control it from e.g. a Windows system running a special version of PulseView 🙂
- More info: https://goo.gl/L9QUrt
Connect to Ethernet / Internet
cd /opt/BeagleLogic/
git config http.sslVerify false
git pull
sudo apt update
sudo ./install.sh --upgrade
sudo reboot now

Useful infos:
https://beagleboard.org/getting-started#update
http://beagleboard.org/latest-images
https://beaglelogic.readthedocs.io/en/latest/beaglelogic_system_image.html
https://beaglelogic.readthedocs.io/en/latest/install.html

Rogue One: My first field pentesting

Earlier that year - much earlier - a technician at an international company called me for some advice. She/he had a problem with the local networking staff and their "modus operandi" regarding network security. The company had a big assembly line and very powerful, automated machinery - which made the leaky security all the more troublesome. My job was to exploit one of those security holes and demonstrate the problems as clearly and simply as possible - so that they would finally get fixed.

The first stage of the whole engagement was the usual: reconnaissance. In this case, though, this was very easily achieved, as my contact handed me parts of the firewall ruleset as well as access to their office LAN. The first thing that lit up like a Christmas tree: they actually had the production and office networks separated by a firewall - which is good. The bad part: they dropped everything. Everything except every kind of ICMP packet. Well. Damn.

The second stage was to create an exploit for that happy little mishap: my contact wanted to be able to bridge the office and production networks and access them through the - according to the networking department - watertight firewall. The exploit needed to be able to run on a Windows 7 machine as well. With that in mind, I went through different ICMP tunnels: HANS and Dhaval Kapil's icmptunnel were the first to be dropped from the list, as they did not satisfy all constraints. In the end, I chose icmptunnel, or ptunnel for short. With a bit of manual patching, I could get it to compile and work again on Windows, thanks to the efforts of Mike Miller.
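For reference, a ptunnel session looks roughly like this (flags per the ptunnel man page; host names and the password are placeholders, not from the actual engagement):

```
# On the proxy inside the production network (answers ICMP echo requests):
ptunnel -x secret
# On the client in the office LAN: tunnel local port 8000 over ICMP
# to the proxy, which forwards to the target's SSH port:
ptunnel -p proxy.prod.example -lp 8000 -da target.prod.example -dp 22 -x secret
ssh -p 8000 user@localhost
```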

For testing I recreated the network and firewall using a Cisco 1841 router and a Cisco 3560 switch. As I needed to integrate ptunnel into the production network, I wanted it to look as innocent and inconspicuous as possible: so I used a Raspberry Pi 3 and dumped it into a DIN rail case - then I outfitted it with a Power over Ethernet adapter so I could serve it network as well as power over a single network connection.

The tests worked flawlessly, and I even squeezed enough speed out of the ICMP tunnel to get remote desktop working.

 

On to stage three: Attack.

This stage turned out to be way cooler than expected: due to certain circumstances, we met at night - zero dark thirty, you could say - and sneaked through the production line, past workers who did not take much notice of my presence. I inserted the "Rogue Pi" into a closet next to a Siemens human-machine interface and plugged it into the network switch.

Then we left again. Back in the office, I tried to connect to my little helper and was immediately rewarded with a working ICMP tunnel - now transferring an SSH connection as payload. From that moment on, I could connect to a dozen different systems from different vendors in that production network. Last but not least, as a "visual" demo, we created a little batch script to start the connection and connect to the remote desktop interface / human-machine interface of a very heavy and very unsecured press - leaving it under our control.

At this point, said connection was only opened in a "read/view only" mode so that we could not harm anyone, even by accident. We had to bear in mind that this multi-hundred-ton press was now at the mercy of our fingertips, and we wanted to avoid wreaking havoc at all costs - so, if you're conducting field exercises with real "heavy hardware", find a way to interact with it safely before you engage any connection to it.

With this preparation, the technician was able to run the demo in front of the higher-ups and finally got the attention, permission and support needed to bring security up to a higher standard.

So that effort paid off in the end for the production security of that company - and rewarded me with my first - and hopefully not last - field pentest :).

 

Odroid U3 Kernel Upgrade + Docker

I wrote this back in January 2017. Since then I have not had much time to work on the Odroid - however, user hexdump just came up with a new repo supporting the Odroid U3 with kernel 5.4.x - you can find an overview of his awesome work here and the repo with complete releases (e.g. Ubuntu Bionic or Debian Buster, e.g. odroid_u3-armv7l-debian.img.gz) here

I am using a trusty old Odroid U3 which I acquired years ago. With its Samsung Exynos 4412 Cortex-A9 quad core at 1.7 GHz, 1 MB L2 cache and 2 GB of RAM, this little puppy was a real beast - compared to the Raspberry Pi 1 at that time. However, Hardkernel dropped the support - again - which left users with very old kernel versions. But thanks to some users, and the fact that all needed support for the Exynos is now included in the current kernel - well, we can build our own. This write-up is the distilled result of days of work, a lot of research - and the work of other people which I found on the net (to whom I will try to give proper credit at the right locations :)).

EDIT: I updated the kernel configuration gist for my kernel config + Docker on 10.02.2017. Thanks to an e-mail from Tobias Jakobi I found the pieces I had missed about adding the kernel-internal fan service to the config. This works now; however, I still like my tweaked program a bit better, as it cools the system more aggressively - the kernel default one is a lot quieter, but runs in the 80s °C under load, while mine will stay at 70 °C on max load.

It is important to note that these instructions - especially when it comes to installing stuff - are written for the usage of eMMC memory, NOT THE SD CARD! Also, there be dragons and something could go wrong - so please, as usual, advance at your own pace and risk! 🙂

0.) Get a serial interface for 1.8V
Important: the UART is 1.8V LVTTL ONLY! If you connect 3.3V or 5V, you'll blow the U3! I used a regular 5V TTL USB adapter together with a SparkFun BiDirectional Level Converter: https://www.sparkfun.com/products/12009 With that set to 1.8V from the UART of the U3, it worked flawlessly at the usual 115200 baud.

Pinout:
http://odroid.com/dokuwiki/doku.php?id=en:u3_hardware

_____UART____
|Pin 4 - GND|
|Pin 3 - RXD|
|Pin 2 - TXD|
|Pin 1 - VCC|
___________|
1.8V LVTTL

1.) Build U-Boot
A lot of stuff is taken from here, thanks a lot for your great work, SnakeBite!
We assume you're working as root, as all this stuff will need root rights :).

# update your packages
apt-get update
# needed for building u-boot
apt-get install device-tree-compiler
# get ODROID signed u-boot
wget http://odroid.in/guides/ubuntu-lfs/boot.tar.gz
tar xzf boot.tar.gz
# get patched u-boot & build for the U3
git clone https://github.com/tobiasjakobi/u-boot
cd u-boot
make odroid_config
make
#copy fresh u-boot to ODROID directory
cp u-boot-dtb.bin ../boot/u-boot.bin
cd ../boot
## install on SDCard - not what we want, just as a remark for me
#bash sd_fusing.sh /dev/mmcblk0

Copy the needed files (u-boot.bin, E4412_S.bl1.HardKernel.bin, bl2.signed.bin, E4412_S.tzsw.signed.bin) to your PC, then reboot your Odroid U3 into fastboot by connecting the UART to the U3 and aborting the boot. After that, you can issue the fastboot command on the UART. The U3 will now wait for a file transfer over the micro USB port, which you'll need to connect to your PC. Also, for the sake of an easy upgrade, use a Linux PC (more info here: http://odroid.com/dokuwiki/doku.php?id=en:u3_building_u-boot ).

# install needed programs
sudo apt-get update
sudo apt-get install android-tools-adb android-tools-fastboot
# and - being in the right folder, start the transfer
# u-boot.bin install
sudo fastboot flash bootloader u-boot.bin
# bl1.bin install
sudo fastboot flash fwbl1 bl1.HardKernel
# bl2.bin install
sudo fastboot flash bl2 bl2.HardKernel
# tzsw.bin install
sudo fastboot flash tzsw tzsw.HardKernel
# If installation is done, you can reboot your ODROID-U3 with fastboot.
sudo fastboot reboot

You should now have a more recent U-Boot install.

Old: U-Boot 2010.12-svn (May 12 2014 - 15:05:46) for Exynox4412
New: U-Boot 2016.11-rc3-g8a65327 (Jan 07 2017 - 23:00:56 +0100)

By the way, we needed to download this boot.tar.gz because it contains the keys needed to sign our new U-Boot install. More info about U-Boot and keys: https://github.com/dsd/u-boot/blob/master/doc/README.odroid

The installation of a more recent U-Boot version was necessary to facilitate booting the to-be-built new kernel zImage with bootz.

1b.) eMMC recovery in case something goes wrong:
http://forum.odroid.com/viewtopic.php?f=53&t=969
DL the tool [ exynos4412_emmc_recovery_from_sd_20140629.zip ]

  1. Prepare a microSD card and flash the attached image.
  2. Insert microSD into U2/U3, disconnect eMMC
  3. Turn on U2/U3 and wait for a few seconds and blue LED will blink.
  4. Plug your eMMC module into U2/U3
    4b - wait 10 seconds!
  5. Plug micro-USB cable into U2/U3 and connect other side to your PC USB host or ODROID's USB host port. (This is a trigger to start the recovery)
  6. After recovery process (only a few seconds), the blue LED will turn off automatically.
  7. Finish. Install the OS on your eMMC as usual.

2.) Building Next Kernel for Odroid U3 with eMMC
And now to start the real work:

apt-get update
apt-get install live-boot u-boot-tools
cd ~
git clone --depth 1 git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git linux_odroid
cd linux_odroid
# we could make a default config, but this is not needed, we take rglinuxtech's config in the next step
# make exynos_defconfig
# Odroid Config Kernel 4.4 from http://rglinuxtech.com/?p=1656
curl -o .config http://pastebin.com/raw/NveRajaZ
# Or you can use my Config which enables Docker as well (Gist at the End of the page)
curl -o .config https://gist.githubusercontent.com/nmaas87/81818c1db9dc292a4c21125bd2602658/raw/7e4e14fa15d7c68b177f31b9e2348d62c52cf83c/u3_docker_config
make menuconfig
make prepare modules_prepare
make -j4 zImage modules dtbs
make modules_install
cp arch/arm/boot/dts/exynos4412-odroidu3.dtb /media/boot/exynos4412-odroidu3_next.dtb
cp arch/arm/boot/zImage /media/boot/zImage_next
cp .config /boot/config-`cat include/config/kernel.release`
update-initramfs -c -k `cat include/config/kernel.release`
mkimage -A arm -O linux -T ramdisk -C none -a 0 -e 0 -n uInitrd -d /boot/initrd.img-`cat include/config/kernel.release` /boot/uInitrd-`cat include/config/kernel.release`
cp /boot/uInitrd-`cat include/config/kernel.release` /media/boot/
cd /media/boot/
vi boot.txt
# now we have to rework the boot.txt / config
# comment out the old values and set in the new ones
# please do NOT copy blindly, you need to adjust the zImage, uInitrd and eyxnos4412***.dtb file names according to your system!
setenv initrd_high "0xffffffff"
setenv fdt_high "0xffffffff"
#setenv bootcmd "fatload mmc 0:1 0x40008000 zImage; fatload mmc 0:1 0x42000000 uInitrd; bootm 0x40008000 0x42000000"
setenv bootcmd "fatload mmc 0:1 0x40008000 zImage_next; fatload mmc 0:1 0x42000000 uInitrd-4.10.0-rc2-next-20170106-v7; fatload mmc 0:1 0x44000000 exynos4412-odroidu3_next.dtb; bootz 0x40008000 0x42000000 0x44000000"
#setenv bootargs "console=tty1 console=ttySAC1,115200n8 root=/dev/mmcblk0p2 rootwait ro mem=2047M"
setenv bootargs "console=tty1 console=ttySAC1,115200n8 root=/dev/mmcblk1p2 rootwait ro mem=2047M"
boot

#After you have done that, write the commands to the boot.scr file
mkimage -C none -A arm -T script -d boot.txt boot.scr
# sync and reboot and it should work
sync
reboot now

With this done, I really upgraded my system from kernel 3.8.13 from 2015 - to the most recent 4.10.0-rc2 next kernel 🙂

Old: Linux odroid 3.8.13.30 #1 SMP PREEMPT Fri Sep 4 23:45:57 BRT 2015 armv7l armv7l armv7l GNU/Linux
New: Linux odroid 4.10.0-rc2-next-20170106-v7 #3 SMP PREEMPT Mon Jan 9 19:17:32 CET 2017 armv7l armv7l armv7l GNU/Linux

2b.) FAN does not work, warning!
The CPU fan somehow does not work right out of the box, so we will now enable it manually. [EDIT: with the new kernel config it works out of the box, but you can still decide to use this software to have more aggressive cooling :)]

# Fan to full speed
echo 255 > /sys/devices/platform/pwm-fan/hwmon/hwmon0/pwm1
# Read out current temperature in °C
cat /sys/devices/virtual/thermal/thermal_zone0/temp

To get things working again, I forked and updated the odroidu2 fan tool. Install it via:

git clone --depth 1 https://github.com/nmaas87/odroidu2-fan-service.git
cd odroidu2-fan-service
make
# install it as upstart service, i.e. < Ubuntu 16.04
make usi
# install it as systemd, i.e. Ubuntu 16.04 / Xenial
make systemd
reboot

Useful Commands:

# Read max CPU Speed:
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
# Get current CPU Speed:
cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq
# Torture Test:
openssl speed -multi 4

2c.) Upgrade to Xenial
As I upgraded to Xenial with do-release-upgrade, I had some problems:

Authentication Problem:
It was not possible to authenticate some packages. This may be a transient network problem. You may want to try again later. See below for a list of unauthenticated packages.

Create /etc/update-manager/release-upgrades.d/unauth.cfg with:

[Distro]
AllowUnauthenticated=yes

After the upgrade, remove this file.
from: http://askubuntu.com/questions/425355/error-authenticating-some-packages-while-upgrade

After that, apt-get clean did not work:
apt-get clean
W: Problem unlinking the file apt-fast - Clean (21: Is a directory)

Solution was:

rm -rf /var/cache/apt/archives/apt-fast

from: http://askubuntu.com/questions/765274/error-problem-unlinking-in-apt-get-clean

2d.) MAC address changes every reboot:
One solution, which did not work for me, was the following:

rm /etc/smsc95xx_mac_addr

from: http://forum.odroid.com/viewtopic.php?f=7&t=1070

What worked better was to really set the MAC address to a static value:
add in /etc/network/interfaces:

auto eth0
iface eth0 inet dhcp
hwaddress ether bb:aa:ee:cc:dd:ff

from: http://forum.odroid.com/viewtopic.php?f=111&t=8198

2e.) Control the CPU speeds via cpufrequtils:

apt-get install cpufrequtils
vi /etc/default/cpufrequtils

ENABLE="true"
GOVERNOR="ondemand"
MAX_SPEED=1704000
MIN_SPEED=200000

However, I chose "performance" as the GOVERNOR and MIN_SPEED=800000.

from: http://forum.odroid.com/viewtopic.php?f=65&t=2795

2f.) Install Docker
If you chose my .config with Docker enabled, you can install Docker with a quick
curl -sSL https://get.docker.com/ | sh
Thanks a lot to the guys over at Hypriot - I took their RPi kernel configs as an example and merged them with the U3 configs to get to these results. And yes, AUFS is still missing, but... it is OK 😉
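A quick smoke test after the install (hello-world images for armhf exist, e.g. armhf/hello-world - using that image here is an assumption on my side, not from the original notes):

```shell
docker info                         # daemon reachable? storage driver listed here
docker run --rm armhf/hello-world   # pull and run a minimal armhf container
```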

Additional stuff:
- Gist of my Kernel Config: https://gist.github.com/nmaas87/81818c1db9dc292a4c21125bd2602658

Following sites helped:
- https://blogs.s-osg.org/install-ubuntu-run-mainline-kernel-odroid-xu4/
- http://rtisat.blogspot.de/search/label/odroid-u3
- https://github.com/umiddelb/armhf/wiki/How-To-compile-a-custom-Linux-kernel-for-your-ARM-device
- http://rglinuxtech.com/?p=1622
- http://rglinuxtech.com/?p=1656
- http://forum.odroid.com/viewtopic.php?f=81&t=9342

Restoring an Apple //c / 2c and Monitor

Normally, I tend to work on freelance projects in the art sector, e.g. for exhibitions (like the one seen here) or props for different cosplays / costumes. Another job I am really into is the repair of different electronics and computer-related things at the RepairCafé in Trier, which I do on a voluntary basis / for free. This time, however, a friend approached me to do a commission work / repair an old computer he had found - and wanted to see working again :). So, I ended up doing just that.

"State of Decay":
The Apple 2c and monitor came in quite good condition, however without any accessories - not even the power supply "brick" needed to operate the 2c. So my first order of business was to create a power supply from scratch, then to test the monitor.

1.) Apple 2c / //c Power Supply:
Creating the power supply was surprisingly easy, as the 2c uses an internal converter which can take in about anything between 9 and 20 V. However, I wanted to stay as "true" to the original as possible, which uses a 15 V, 1.2 A supply. To get this voltage, I used a Toshiba PA3755U-1ACA (Toshiba part numbers: P000567170, P000519840) which takes in 100-240 V AC and outputs 15 V @ 5 A (75 W) - that is quite a bit more than needed by the 2c, but it came in at 8.98 € on eBay (with shipping) - so that was quite OK! Also, I needed a connector to hook up the supply to the Apple. Luckily, Apple used a common DIN-style connector, so I just needed to buy a female DIN plug connector with 7 pins - and that's it. The wiring schematics came from an old scan of the Apple reference manual which I attached here. After connecting everything together, it just worked :)!

Handbook:

Another shot of the original plug:

Self-soldered-mess before cleaning up and sealing

Apple 2c working

The obligatory "Hello World" in BASIC 🙂

2.) Apple Monitor:
After the 2c was back in operation, I tested the monitor, which worked perfectly out of the box. I just needed to grab a Cinch cable from my sound system, connect the Apple 2c to the monitor and it just worked - for about 20 minutes. Then the monitor went up in smoke and failed. I opened up the case and followed the stench to a nicely blown interference suppression capacitor:

As the fuse was blown as well and another capacitor was sitting on the same board, I figured I should replace them all. The "big" one was a 0.47 µF / 250 VAC, the smaller a 0.1 µF / 250 VAC, both X2 rated. The fuse was a 250 V, 315 mA, "T" rated ("träge" / "slow") fuse.

After I had replaced everything and wired it up again, I dared a small test: it worked!

I did some additional cleaning as well as a good round of testing, and it seemed to be working very well. I figured out that I could jump directly to the BASIC interpreter by pressing CONTROL + RESET and had some PRINT "Hello World" fun again ;). And with that, the whole thing was ready to be given back to its owner :).

Nexus 4 Upgrade to CM 14.1 / Android 7.1 Nougat + Updates

[UPDATED 10.12.2016]
A little bit problematic, but... it works in the end.

Warning: Bleeding Edge!
It will bootloop, you'll need to format your phone, and it will break eggs, PCs, glass and burn down your house.
Comes with no support what-so-ever from my side.
You have been warned!

1.) Get all the files
CM: https://download.cyanogenmod.org/?device=mako (Latest version was: https://download.cyanogenmod.org/get/jenkins/187896/cm-14.1-20161202-NIGHTLY-mako.zip)
GAPPS: http://opengapps.org/ (ARM, 7.1 GAPPS, Picco Version)
TWRP: https://twrp.me/devices/lgnexus4.html (Download from: https://dl.twrp.me/mako)

2.) Flash TWRP
- Press Vol Down and Power Button to shut down the Phone.
- Press Vol Down and Power Button again to boot to the Bootloader
- Upload TWRP via fastboot: fastboot flash recovery twrp.img

3.) Flash and Install
- Boot to Recovery
- Delete all Files from your Phone
- Upload CM and GAPPS file via MTP to sdcard
- Flash CM, then GAPPS
- Clean Cache

4.) "Debug"
You would get a bootloop if you rebooted now. According to http://androidforums.com/threads/twrp-bootloop-fix-after-update-ota.922585/ you need to go to the Terminal (or adb shell) and enter the following commands:
dd if=/dev/zero of=/dev/block/platform/msm_sdcc.1/by-name/fota
(enter) - and then
dd if=/dev/zero of=/dev/block/platform/msm_sdcc.1/by-name/misc
(enter)
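Both wipes can be done in one go from TWRP's terminal or an adb shell - the same commands as above, with a small existence check first (the by-name paths are specific to the Nexus 4 / msm_sdcc.1):

```shell
# In TWRP's terminal, or from the PC via: adb shell
# Check the partition links exist before zeroing anything
ls -l /dev/block/platform/msm_sdcc.1/by-name/fota \
      /dev/block/platform/msm_sdcc.1/by-name/misc

# Zero both partitions - this clears the pending OTA state that causes the bootloop
dd if=/dev/zero of=/dev/block/platform/msm_sdcc.1/by-name/fota
dd if=/dev/zero of=/dev/block/platform/msm_sdcc.1/by-name/misc
```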

5.) Reboot
And you're good to go. However, somehow the update functions are broken, so... check out the forums and bug reports:

http://forum.xda-developers.com/nexus-4/orig-development/official-cyanogenmod-14-1-nexus-4-t3507532/page7
https://www.cmxlog.com/14.1/mako
https://review.cyanogenmod.org/#/c/173328/

EDIT / REGARDING UPDATES:
Updates seem to be broken because of an error in TWRP which sends Android 7.1 into a bootloop after EVERY update. So if you're updating via regular "fullsized" images from get.cm, you need to run the 4th section / Debug EVERY time you update! However, you can get around that if you use CyanDelta Updater (https://play.google.com/store/apps/details?id=com.cyandelta). Just install CyanDelta Updater on your phone, choose your CURRENT cm.zip (i.e. the one you installed your phone with, in this tutorial the cm-14.1-20161202-NIGHTLY-mako.zip) and let it index that. After this, it will look for newer CM versions on the net and only download the delta, i.e. the changes to your current image. This will most likely shrink your 330+ MB download to about 40 MB (which is nice!) - and after that, you can install the update via the same app (it will actually ask you; just give it root rights and let it do its job - the next reboot will take a while, but it works! :)). In any case, the TWRP bootloop problem is solved with that, update downloads are smaller and everyone's happy ;).

IMPORTANT: Best strategy before attempting ANY kind of update would be to make a full backup via TWRP and move it to a secure location (i.e. your pc harddrive) in case you brick your phone and need to restore it! At least one backup after your initial install (with all apps and stuff) should be made. It makes life so much easier in case something goes wrong :).
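To actually move such a backup off the phone, adb can pull it while TWRP is running - a sketch; the path below is TWRP's usual default backup location and may differ on your device:

```shell
# While booted into TWRP (adb works there), copy all backups to the PC
adb pull /sdcard/TWRP/BACKUPS ./nexus4-twrp-backups

# To restore later, push the folder back and use TWRP's Restore menu
adb push ./nexus4-twrp-backups /sdcard/TWRP/BACKUPS
```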

[Ubuntu / Proxmox] Hosting NFSv3 Server on Ubuntu for Proxmox Server

So, yesterday evening y0sh came to me with the following problem: "I got a really nice Proxmox server with loads of CPU and RAM - and nearly no storage left. And I got a small little Atom server with loads of storage - but not very useful as a VM host - and yes, they are within the same network." Ok, so: Let's create an NFSv3 server on the Atom system and mount it as disk storage within Proxmox 🙂

# Atom Server (Ubuntu)
# Install NFS v3 Server
sudo apt-get install nfs-kernel-server rpcbind
# Create Shared Directories
sudo mkdir -p /var/nfsshare
sudo chmod -R 777 /var/nfsshare
# Configure Server
sudo vi /etc/exports
# Insert the line into exports, with the IP Address of your NFS Client / Proxmox Server
/var/nfsshare 192.168.1.111(rw,sync,no_root_squash)
# save and exit, fine tuning:
sudo vi /etc/default/nfs-kernel-server
# change the line RPCMOUNTDOPTS to
RPCMOUNTDOPTS="--manage-gids --no-nfs-version 4"
# to use nfs v3 ...
# reload the nfs exports...
sudo exportfs -r
# ...or restart the server
sudo /etc/init.d/nfs-kernel-server restart
# show all shared directories
sudo exportfs -v
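Before touching Proxmox, it can't hurt to check the export from any Linux client on the network - a sketch, assuming the Atom server has the IP 192.168.1.10 (adjust to yours) and the client has the NFS client tools (nfs-common) installed:

```shell
# On the client: list what the server exports
showmount -e 192.168.1.10

# Test-mount the share via NFSv3 and write a file
sudo mkdir -p /mnt/nfstest
sudo mount -t nfs -o vers=3 192.168.1.10:/var/nfsshare /mnt/nfstest
touch /mnt/nfstest/hello
ls -l /mnt/nfstest

# Clean up again
sudo umount /mnt/nfstest
```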

# Mount NFS in Proxmox
Go to Datacenter, Storage, Add, NFS
IP:
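The same mount can also be done from the Proxmox shell instead of the GUI - a sketch using pvesm; the storage name, server IP and content types below are examples:

```shell
# Add the NFS export as a storage called "atom-nfs" (name and IP are examples)
pvesm add nfs atom-nfs \
    --server 192.168.1.10 \
    --export /var/nfsshare \
    --content images,backup \
    --options vers=3

# Verify the new storage shows up and is active
pvesm status
```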

[Art] real-time operator

real-time operator was a project of Melanie Windl, an artist from the University of Mainz, Germany, on which I helped as technical advisor and programmer. The idea of the project was to live-record sounds from the staircase of the Tokyo Wondersite and replay them using different filters on multiple balloons (using exciters). The project was realised using three Raspberry Pi Model B+, some USB sound cards and PureData. You can find more about the project here: https://atelier-windl.com/portfolio/real-time-operator-2/

Festival
real-time operator
09.01.-07.02.2016
Tokyo Wonder Site Hongo
2-4 16 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan


A short video showing the project can be found here

[RPi] The cheapest Raspberry Pi Cluster Ever Made

As soon as the Pi Zero came out, I started thinking about clusters again. I wanted to create a big - but at the same time cheap - cluster.
Yes, a Pi Zero is not nearly as fast as an RPi 2. And yes, there are some problems with this idea - especially the fact that the Pi Zero is not as common as, say, an RPi Model B v2.0 - but as this is more about science and just "doing it", I tried it anyway.

TL;DR: Yes, it works - and better than you might think :)!

So, the basic problem with the Pi Zero is its interfaces: Yes, we got USB, but no Ethernet port. So the basic approach would be to buy a 5€ Pi as well as an about 8€ AX88772 Ethernet interface and some USB OTG adapters - and we would end up with about 15€+ for each member of the cluster. Well, that is reasonable - but bulky and "not sexy".

0. Building mpich
I used an old appliance image I created from a Minibian Wheezy image (https://minibianpi.wordpress.com/) earlier this year - for the 1.) section on the RPi Model B pre 2.0 and RPi Model A+. For the 2.) section, I used a special appliance image I made from a Minibian Jessie image. However, I will document the needed changes here, to get it running from any source. I recommend the Minibian Jessie image as starting point, with these changes:


apt-get update
apt-get install -y raspi-config keyboard-configuration
raspi-config
# Default Configuration and Expand Filesystem using raspi-config
# Choose Finish and confirm Yes to reboot the device

apt-get install -y rpi-update sudo
apt-get -y dist-upgrade
reboot

rpi-update
reboot

# Create Default User pi
adduser pi
# Enter Password as wanted, i.e. raspberry
# Add user to default groups
usermod -a -G pi,adm,dialout,cdrom,audio,video,plugdev,games,users pi
# Add sbin Paths to pi
echo 'export PATH="$PATH:/sbin:/usr/sbin:/usr/local/sbin"' >> /home/pi/.bashrc
# Add user to sudo
visudo
# Add under
# # User privilege specification
# root ALL=(ALL:ALL) ALL
pi ALL=(ALL:ALL) ALL
# Save and Exit
reboot

# Disable root login
sudo passwd -l root

... or just use a default RPi Jessie image instead.

Building MPICH 3 was quite easy:


# Update and Install Dependencies, then reboot
sudo apt-get update
sudo apt-get -y dist-upgrade
sudo apt-get install -y build-essential
sudo reboot

# Make MPICH 3.2
cd ~
wget http://www.mpich.org/static/downloads/3.2/mpich-3.2.tar.gz
tar -xvzf mpich-3.2.tar.gz
cd mpich-3.2
# This will take some time
sudo ./configure --disable-fortran
# This will take several cups of tea ;)
sudo make
sudo make install

# Create SSH on Master, distribute to Slaves

cd ~
ssh-keygen -t rsa -C "raspberrypi"

The default location should be /home/pi/.ssh/id_rsa if you're using the standard user pi. Then use this command to distribute the key to all your "slave machines":
cat ~/.ssh/id_rsa.pub | ssh pi@IP_OF_SLAVES "mkdir .ssh;cat >> .ssh/authorized_keys"
( This was taken from http://www.southampton.ac.uk/~sjc/raspberrypi/ - he is the original father of the RPi clusters and his work already inspired me years ago - so please read and support his work :)! Additional info can be found at http://westcoastlabs.blogspot.co.uk/2012/06/parallel-processing-on-pi-bramble.html )
You could also just
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
into your own authorized_keys file, shut down your Master Pi after that and clone the card several times for all your clients. This way, you would only need to do the work once - however, you should then remove the key pair in ~/.ssh/ on the clones, so that only your Master Pi can command the slaves.
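With more than one slave, the key distribution above can be wrapped in a small loop - a sketch; the IP list is an example and it assumes the user pi exists on every slave:

```shell
# IPs of all slave machines (example addresses - adjust to your network)
SLAVES="192.168.7.2 192.168.7.3 192.168.7.4"

for ip in $SLAVES; do
    # Create .ssh if missing and append the master's public key
    cat ~/.ssh/id_rsa.pub | \
        ssh "pi@$ip" "mkdir -p .ssh && cat >> .ssh/authorized_keys"
done
```

You will be asked for each slave's password once; after that, the master can log in without one.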

1. First Try: Serial (did work, but not chosen)

(Image: ppplink)

The first idea was to use the serial line of the Pi Zero for IP communication:
I wanted to have a Master Pi (Model B) with an Ethernet port for network connectivity and connect up to 4 Pi Zeros to it via serial. And as the RPis only have one serial port, I would use a Serial-to-SPI converter to keep it small and simple. But as it turns out, I could not find the MAX14830 (https://www.maximintegrated.com/en/products/interface/controllers-expanders/MAX14830.html) for sale on the net, so I got two 2-port MAX3109 Serial-to-SPI converters (https://www.maximintegrated.com/en/products/interface/controllers-expanders/MAX3109.html).
While they were on their way to my home, I wanted to already test some basic stuff using an RPi Model B pre 2.0 and an RPi Model A+.

As we only had one serial port, we could only drive the one RPi Model A+ with that. I will use the B pre 2.0 as Master, and the A+ as Slave. First, we connected both RPis via serial (both shut down and unplugged!). Just connect the serial TX of the RPi B to the serial RX of the RPi A+ and vice versa. Then connect a ground pin of the RPi B to the RPi A+. That's it.
Then we powered on the RPis and made some changes:
(Some ideas taken from MagPi 41: https://www.raspberrypi.org/magpi/)

Guest:

# I actually prepared the whole sdcard of the RPi A+ Guest while having that SD Card in the RPi B - because of the networking connection :).
# Install ppp
sudo apt-get install ppp -y
# Change Hostname
sudo vi /etc/hostname
sudo vi /etc/hosts
# Add DNS Server
echo 'nameserver 8.8.8.8' | sudo tee --append /etc/resolv.conf
# Remove Serial Console
cd /boot
sudo cp cmdline.txt old_cmdline
sudo vi cmdline.txt
# Normal should read about
dwc_otg.lpm_enable=0 console=ttyAMA0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline fsc
# Remove console=ttyAMA0,115200 from that line, save and quit

# Disable Serial Console by using sudo raspi-config and the option, to be sure

# Increase Clock on the Serial Line, so drive the Serial Line at not only 115200 baud, but 1 MBit/s (taken from: http://www.thedevilonholiday.co.uk/raspberry-pi-increase-uart-speed/)
echo 'init_uart_clock=64000000' | sudo tee --append /boot/config.txt

# Add the following line to /etc/rc.local before exit 0
pppd /dev/ttyAMA0 1000000 10.0.5.2:10.0.5.1 noauth local defaultroute nocrtscts xonxoff persist maxfail 0 holdoff 1 &

# and shutdown
sudo shutdown -h now

After that, insert that SD Card into the A+.

Host:

# Now insert the real SD Card for the B into the B and power it on.
# Install ppp
sudo apt-get install ppp -y
# Enable ipv4 Forward for networking access
echo 'net.ipv4.ip_forward=1' | sudo tee --append /etc/sysctl.conf
# Remove Serial Console
cd /boot
sudo cp cmdline.txt old_cmdline
sudo vi cmdline.txt
# Normal should read about
dwc_otg.lpm_enable=0 console=ttyAMA0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline fsc
# Remove console=ttyAMA0,115200 from that line, save and quit

# Disable Serial Console by using sudo raspi-config and the option, to be sure

# Increase Clock on the Serial Line, so drive the Serial Line at not only 115200 baud, but 1 MBit/s (taken from: http://www.thedevilonholiday.co.uk/raspberry-pi-increase-uart-speed/)
echo 'init_uart_clock=64000000' | sudo tee --append /boot/config.txt

# Reboot
reboot

# After having rebooted the RPi B, boot up the RPi A+ as well.
# Wait a little bit, then...

# Start PPP Connection
sudo pppd /dev/ttyAMA0 1000000 10.0.5.1:10.0.5.2 proxyarp local noauth nodetach nocrtscts xonxoff passive persist maxfail 0 holdoff 1

In a new Putty window, you can now ping 10.0.5.2 - your RPi A+ - and SSH into it.

After that, I could use MPI with both machines, B and A+ - by entering their IP addresses into the machinefile and executing the cpi example to crunch some digits of Pi.
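For reference, the machinefile/cpi step looks roughly like this - a sketch with the example IPs from this section, assuming MPICH was built and installed as described in 0.):

```shell
# Build a machinefile with one host per line (the PPP link IPs from above)
cat > ~/machinefile <<EOF
10.0.5.1
10.0.5.2
EOF

# Run the cpi example bundled with the MPICH sources on both hosts,
# one process each (-n 2)
mpiexec -f ~/machinefile -n 2 ~/mpich-3.2/examples/cpi
```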

But after all - it was quite inefficient and slow. So I tried to think of something better... And then LadyAda came with her Christmas present to me:
https://learn.adafruit.com/turning-your-raspberry-pi-zero-into-a-usb-gadget/ethernet-gadget - that was the moment my jaw dropped and I thought "YES! That's it!".

2. Second Try: PiZero on Virtual Ethernet (Solution)

(Image: usblink)

Now my preferred solution: As the USB port of the Pi Zero is a real OTG port, you can repurpose it as a serial or even virtual Ethernet port. And hell, Lady Ada struck again :)! So to sum it up shortly:
I built my MPICH as mentioned in 0) on a Minibian Jessie image (SD card running on an RPi B). Then I installed her special kernel:


cd ~
wget http://adafruit-download.s3.amazonaws.com/gadgetmodulekernel_151224a.tgz -O gadgetkernel.tgz
tar -xvzf gadgetkernel.tgz

sudo mv /boot/kernel.img /boot/kernelbackup.img
sudo mv tmp/boot/kernel.img /boot

sudo mv tmp/boot/overlays/* /boot/overlays
sudo mv tmp/boot/*dtb /boot
sudo cp -R tmp/boot/modules/lib/* /lib

# Load virtual ethernet module on boot
echo 'g_ether' | sudo tee --append /etc/modules

# Add settings to network interfaces
echo '
allow-hotplug usb0
iface usb0 inet static
address 192.168.7.2
netmask 255.255.255.0
network 192.168.7.0
broadcast 192.168.7.255
gateway 192.168.7.1' | sudo tee --append /etc/network/interfaces

# Add DNS Server
echo 'nameserver 8.8.8.8' | sudo tee --append /etc/resolv.conf

# Some additional tweaks:
Add

# Turn HDMI Off
/usr/bin/tvservice -o
# Turn HDMI Back On
#/usr/bin/tvservice -p

# Turn ACT LED Off on Pi Zero
echo none | sudo tee /sys/class/leds/led0/trigger
echo 1 | sudo tee /sys/class/leds/led0/brightness

to your /etc/rc.local before exit 0 to turn off the HDMI Interface on boot,
as well as the LED of the Pi Zero to use less energy. Found on:
http://www.midwesternmac.com/blogs/jeff-geerling/raspberry-pi-zero-conserve-energy and http://www.midwesternmac.com/blogs/jeff-geerling/controlling-pwr-act-leds-raspberry-pi

This was enough to create a Pi Zero slave image.
Shut down the RPi now with

sudo shutdown -h now

remove the power and insert the SD card into your Pi Zero.

On the Master machine, I made the following changes:

# Add settings to network interfaces
echo '
allow-hotplug usb0
iface usb0 inet static
address 192.168.7.1
netmask 255.255.255.0
network 192.168.7.0
broadcast 192.168.7.255' | sudo tee --append /etc/network/interfaces

# Allow Ipv4 Forward
echo 'net.ipv4.ip_forward=1' | sudo tee --append /etc/sysctl.conf

# Install iptables
sudo apt-get install -y iptables

# Define NATing rules
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
sudo iptables -A FORWARD -i eth0 -o usb0 -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -i usb0 -o eth0 -j ACCEPT

# Save NAT rules / load iptables on interface up
sudo touch /etc/iptables_masq.rules
sudo chown pi:pi /etc/iptables_masq.rules
sudo iptables-save > /etc/iptables_masq.rules

Add
pre-up iptables-restore < /etc/iptables_masq.rules
under the eth0 section of your network interfaces file (sudo vi /etc/network/interfaces), i.e.:

auto eth0
iface eth0 inet dhcp
pre-up iptables-restore < /etc/iptables_masq.rules

( Info taken from: http://serverfault.com/questions/405628/routing-traffic-on-ubuntu-to-give-raspberry-pi-internet-access )

#After that, I shutdown the RPi via
sudo shutdown -h now
#removed power from it.

Then I attached the Pi Zero to the USB hub of the Pi B with a Micro USB cable, using the Micro USB OTG port on the Pi Zero. Next, I powered up the Pi B - and both booted.

I pinged 192.168.7.2 - which was the IP of the Pi Zero - and it answered. Now I only had to use cat ~/.ssh/id_rsa.pub | ssh pi@192.168.7.2 "mkdir .ssh;cat >> .ssh/authorized_keys" from section 0 to get the SSH key created in step 0 onto the Pi Zero, and could use that to automatically log in to the Pi Zero.
With the IPs of the RPi B and the Pi Zero in the machinefile of MPICH, I could then use both my RPis with higher speed and nearly zero costs for cabling and power :)!
The best part: I don't need an additional power supply for the Pi Zero - nor network adapters, RJ45 cabling or a switch - only one USB A to USB Micro cable per Pi Zero - and maybe a big, active USB hub ;)!

Now I need to get more Pi Zeros - I plan on using a Model B as Master with a big active USB hub to support 4 Pi Zeros - or a Model B+ with a REALLY BEEFY USB supply to power them all from the same RPi - but that would come down to trying it out... And I only got one Pi Zero - so I need some more time (or some sponsors?) to get more RPi Zeros and see whether this approach scales ;)!

Best thing: This can also be used to try the awesome work of http://blog.hypriot.com/ and build a Docker cluster from it - cool, ain't it?

Merry Christmas :)!

Yours,

Nico