Add icons to Jetpack Social Menu

I have been looking around the net quite a lot for a way to add icons to the "social menu" in WordPress.

At first I thought this menu was a feature of the theme I am using, Independent Publisher 2. I only found a GitHub repo for the version 1 theme - tried all the hacks there - and finally found out that version 2 was bought by WordPress.com and customized. So none of the hacks available for version 1 even worked. Bummer.

I really wanted to finally have icons for Mastodon, Hackster, Keybase or the RSS feed - so I looked into the file system - and lo and behold, I found the path which actually does all the "heavy lifting":

wp-content/plugins/jetpack/modules/theme-tools/social-menu

Warning: Thanks to dear Stacy for the update: as of Jetpack 13.7, these files are now located in:

wp-content/plugins/jetpack/jetpack_vendor/automattic/jetpack-classic-theme-helper/src/social-menu

Turns out, this menu is actually generated by the WordPress Jetpack plugin and its Social Menu module.

Adding to its icon library is very simple (even though it is not documented...):

  • Look up SVG icons, maybe from a free website like https://simpleicons.org/
  • Download the SVG file and open it in Notepad or another editor; it will look like this:
<svg role="img" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"><title>Mastodon</title><path d="M23.268 5.313c-.35-2.578-2.617-4.61-5.304-[...]2.96-1.498 1.13 0 2.043.395 2.74 1.164.675.77 1.012 1.81 1.012 3.12z"/></svg>
  • You will need to change the "svg" element to "symbol", set a name/id, and remove the role attribute, the xmlns attribute and the title. It will then look like this:
<symbol id="icon-mastodon" viewBox="0 0 24 24"><path d="M23.268 5.313c-.35-2.578-2.617-4.61-5.304-5.004C17.51.242 15.792 0 11.813 0h-.03c-3.98 0-[...]2 1.81 1.012 3.12z"/></symbol>
  • Add this new object into the social-menu.svg before the closing </defs></svg> tag and save the file
  • Open icon-functions.php and add some entries to the $social_links_icons array. The binding is basically URL path/match => icon ID in social-menu.svg. So to add e.g. the Keybase.io, Mastodon (on chaos.social) and Hackster.io icons, I added:
        $social_links_icons = array(
            // ... existing entries stay as they are ...
            'keybase.io'   => 'keybase',
            'chaos.social' => 'mastodon',
            'hackster.io'  => 'hackster',
        );
  • Save and close the file. If you now add a new custom element/external link containing e.g. keybase.io in the URL to your social bar, it will show up with the newly added Keybase icon.
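If you prefer doing this on the shell, here is a minimal sketch of the same edits (paths assume the Jetpack 13.7+ layout mentioned above; keep backups, as plugin updates will overwrite these files and you will have to re-apply the changes):

cd wp-content/plugins/jetpack/jetpack_vendor/automattic/jetpack-classic-theme-helper/src/social-menu
cp social-menu.svg social-menu.svg.bak          # backup before editing
cp icon-functions.php icon-functions.php.bak
nano social-menu.svg                            # paste the new <symbol> right before </defs></svg>
nano icon-functions.php                         # extend the $social_links_icons array as shown above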

USB C power for the Nvidia Jetson Nano 4 GB dev board

The best way to power a Jetson Nano 4 GB dev board is a center-positive, 5 V, at least 4 A barrel-connector power adapter. However, these are often bulky and not the best travel companions - while USB C power bricks are becoming more common and USB C sockets are getting built into nearly every device (maybe yours too, Apple?).

So I set out to build a USB C power adapter for the Jetson board.

Using an inexpensive USB C "trigger" board combined with two 5V@3A step-down converters, this actually worked.

The trick is setting the USB C trigger to request 20 V, using the two 5 V converters in parallel to step the 20 V down to 5 V - and then feeding the resulting voltage, again in parallel, to the barrel plug, like so.

For the curious among you asking why I did not just set the trigger to 5 V and use it on its own: I tried that first, but it did not work. It was not able to provide enough current for the Jetson in "MAXN mode" - it constantly ran into overcurrent protection messages when pushed too hard.
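For reference, the power mode I am talking about is selected with NVIDIA's nvpmodel tool on a stock JetPack install - a quick sketch:

sudo nvpmodel -q      # show the currently active power mode
sudo nvpmodel -m 0    # mode 0 is MAXN on the Jetson Nano, mode 1 is the 5 W profile
sudo jetson_clocks    # optionally pin clocks to maximum for a worst-case power draw test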

I am happy with the result and shortened the wires after testing, putting everything into a neat small form factor.

With this change I can finally replace my old Jetson Nano power supply with something smaller than this chunky unit which I was gifted back in the day by the awesome Morlac :).

labSentinel 2

About a year ago, I wrote the labSentinel project for my Nvidia Jetson AI Specialist certification. The basic idea of the project is to be able to supervise old lab equipment which does not have any kind of log output or interface other than a graphical user interface running on a Windows 3.11 / 95 / NT - maybe even XP - system. I solved this by attaching a video grabber to a Jetson Nano and grabbing the screen output of the experiment computer "out-of-band". I then trained good and bad system states via Nvidia's inference tools and finally got the system to report via MQTT as soon as something went wrong. (As a "test system" I designed a flashy GUI application to mimic the old interfaces - specifically thinking of a lab power supply with multiple outputs - with the ability to simulate errors.) (https://developer.nvidia.com/embedded/community/jetson-projects#labsentinel / https://github.com/nmaas87/labsentinel)

While the project did work, there was still a lot left to be desired:

  • The system captured the complete screen in full size. Running inference on a 1024x768 or even higher resolution picture is not efficient and has a high failure rate.
  • Training, testing and improving the model was time-consuming and did not yield the precision and results I was hoping for.
  • The system could differentiate between "good" and "error" states - however, if an error occurred, I would have loved to get more information by "reading the GUI" and its output. For example, in the lab power supply use case, getting the specific voltages of the different lines to see which line failed or what is wrong - maybe even with the possibility to cross-check whether the detected error is an error in the first place.
  • While the Nvidia Jetson Nano Development Board is an awesome tool for development, it is not hardened enough for / suited to a lab or even factory floor environment.

These were all points I wanted to address, but as time was lacking, I did not take up the project again - until, at the start of this year, Advantech and Edge Impulse launched their Advantech Edge AI Challenge 2022. They wanted to hear about specific use cases and how to solve them with factory-hardened Jetson products (e.g. Advantech's AIR-020 series) and Edge Impulse Studio.

Well, that reminded me of the first labSentinel - and I thought I'd give it a try. As luck would have it, I was actually one of the two lucky guys picked to realize their project. Advantech sent me one of their AIR-020X boards (review is here :)) and I was good to go:

Let me introduce you to labSentinel 2:

Rebuilt from the ground up, it solves the above-mentioned issues:

  • The actual GUI window is found and extracted from the full-size desktop screenshots via OpenCV - and resized to 320x320 pixels to neatly fit the inference model
  • All model training, testing and optimization is done with Edge Impulse, which makes handling a breeze
  • If an error is detected, an included OCR module using Tesseract can extract text from predesignated / labeled areas of the non-resized GUI and send this information along with the MQTT alert (a quick way of watching these alerts is sketched right after this list)
  • The AIR-020X board is more than robust enough for all normal lab and factory floors
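Just as an illustration (not part of the project itself): you can watch these MQTT alerts from any machine on the network with the stock mosquitto clients - broker address and topic below are placeholders, the real names are defined in the project's configuration:

sudo apt install mosquitto-clients
mosquitto_sub -h broker.example.local -t 'labsentinel/#' -v   # print every alert together with its topic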

All source code is freely available, with a demo project and documentation, on GitHub ( https://github.com/nmaas87/labSentinel2 ) - and there is also a video instruction on how to use it ( https://youtu.be/KEN_HT20exs ).

Thanks again to Gary Lin (Advantech) as well as Louis Moreau and David Tischler (Edge Impulse) for their support :)!

Update: I added a review of the Advantech AIR-020X and got balenaOS working on it.

WD My Cloud Mirror Gen2 with Debian 11 and Linux Kernel 5.15 LTS

Intro

Since 2017 I have been using a Western Digital My Cloud Mirror Gen 2 which I bought on Amazon's Black Friday (or similar) - because the included 2x 8 TB WD Red drives were even cheaper with the NAS than standalone. Using the NAS has been quite OK; especially the included Docker engine and Plex support were nice to have - the backdoor included in older firmware versions, not so much. Recently WD replaced the old My Cloud OS 3 with their new "My Cloud OS 5" - and made things worse for a lot of people. As I don't want any more surprises - and do want more control over my hardware - I decided to finally go down the road and get Debian 11 with an LTS (5.15) kernel running on the hardware. This is how it went.

Warning

Warning: these are just my notes on how to convert a My Cloud OS 3 / My Cloud Mirror Gen 2 device to a "real" Debian system. You will need to take your device fully apart, solder wires and void the warranty. Additionally, you will lose all your data and may even brick the hardware if something goes wrong. I take no responsibility whatsoever, nor can I give support. You're on your own now.

Step 0: Get Serial Console Access

Without a serial console, you will not be able to do anything here. You will need to completely disassemble the NAS and will lose all warranty. The bare motherboard will look like this. On the far right side you will see the pins of the UART interface you will need to solder to.

When you're done with that, connect your 3v3 TTL UART USB device like this:

... and connect to it at 115200 baud with PuTTY, Tera Term Pro or any other terminal software (do not connect the 3v3 pin :)). It would be wise to start without the hard drives installed.
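From a Linux machine, the same connection can be opened with e.g. picocom or screen - a small sketch, assuming the adapter shows up as /dev/ttyUSB0 (check dmesg):

dmesg | tail                       # find the device node of the USB serial adapter
picocom -b 115200 /dev/ttyUSB0     # or: screen /dev/ttyUSB0 115200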

Step 1: Flashing U-Boot

The current U-Boot on the NAS is flawed; you need to replace it. I will be quoting CyberPK here, who did an awesome job explaining everything:

We have to prepare a USB drive formatted as FAT32, extract the U-Boot image from the link onto it and connect it to USB port #2.

Connect the device to the serial adapter, power on the device and keep pressing '1' (one) during boot until you can see the 'Marvell>>' command prompt, then press Ctrl+C.

From here on we will start to change and break stuff. But if I could give you one tip before you start: please execute printenv once and copy and paste all environment variables and everything U-Boot spits out somewhere safe. It could save your hardware one day. Thanks, Nico out! Now, back to the commands:

usb start
bubt u-boot-a38x-GrandTeton_2014T3_PQ-nand.bin nand usb
reset

This will reboot the device. Access the command prompt again and add the following environment variables, a modified version of the ones provided by bodhi in this post:

setenv set_bootargs_stock 'setenv bootargs root=/dev/ram console=ttyS0,115200'

setenv bootcmd_stock 'echo Booting from stock ... ; run set_bootargs_stock; printenv bootargs; nand read.e 0xa00000 0x500000 0x500000;nand read.e 0xf00000 0xa00000 0x500000;bootm 0xa00000 0xf00000'

setenv bootdev 'usb'

setenv device '0:1'

setenv load_image_addr '0x02000020'

setenv load_initrd_addr '0x2900000'

setenv load_image 'echo loading Image ...; fatload $bootdev $device $load_image_addr /boot/uImage'

setenv load_initrd 'echo loading uInitrd ...; fatload $bootdev $device $load_initrd_addr /boot/uInitrd'

setenv usb_set_bootargs 'setenv bootargs "console=ttyS0,115200 root=LABEL=rootfs rootdelay=10 $mtdparts earlyprintk=serial init=/bin/systemd"'

setenv bootcmd_usb 'echo Booting from USB ...; usb start; run usb_set_bootargs; if run load_image; then if run load_initrd; then bootm $load_image_addr $load_initrd_addr; else bootm $load_image_addr; fi; fi; usb stop'

setenv bootcmd 'setenv fdt_skip_update yes; setenv usbActive 0; run bootcmd_usb; setenv usbActive 1; run bootcmd_usb; setenv fdt_skip_update no; run bootcmd_stock; reset'

saveenv

reset

(I also modified this code to use fatload instead of ext2load.)

With this, our NAS is ready.

Step 2: Build a kernel and rootfs

  • On your current Linux machine, get yourself a copy / git clone of Heisath's wdmc2-kernel repo
  • Get all dependencies installed according to that repo; I did this on a Debian 11 machine
  • Replace the file content of wdmc2-kernel/dts/armada-375-wdmc-gen2.dts with the content of the real and improved dts for the WDMCMG2 (original from this link, copy available here) - but keep the file name armada-375-wdmc-gen2.dts
  • Replace the file content of wdmc2-kernel/config/linux-5.15.y.config with the file from here (please know this config isn't perfect, but it will get you running. You can always file a PR and help me out ;))
  • Start the build process in wdmc2-kernel with ./build.sh
  • Mark: Linux Kernel, Clean Kernel sources, Debian Rootfs, Enable ZRAM on rootfs
  • Kernel -> Kernel 5.15 LTS
  • Build initramfs -> Yes
  • Debian -> Bullseye
  • Fstab -> usb
  • Rootpw -> whateverYouWant
  • Hostname -> whateverYouWant
  • Locales -> no changes, accept (or whatever you want)
  • Default locale for system -> en_US.UTF-8 (or whatever you want)
  • Tzdata -> Your region
  • Now your kernel and rootfs will be built

While this is ongoing, get yourself a nice USB 2.0 or USB 3.0 stick prepared with the following layout (a parted/mkfs sketch follows after this list):

  • partition table: msdos
  • 1st partition: 192 MB, FAT32, label set to boot, boot flag enabled
  • 2nd partition: rest, ext4, label set to rootfs
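A sketch of that layout with parted and mkfs - /dev/sdX is a placeholder, double-check with lsblk before running anything, as this wipes the stick:

parted --script /dev/sdX mklabel msdos \
  mkpart primary fat32 1MiB 193MiB set 1 boot on \
  mkpart primary ext4 193MiB 100%
mkfs.vfat -F 32 -n boot /dev/sdX1    # 1st partition: FAT32, label "boot"
mkfs.ext4 -L rootfs /dev/sdX2        # 2nd partition: ext4, label "rootfs"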

When the kernel is done compiling and your USB stick is prepared, copy all the files (sda is the name of my USB stick):

  • mkdir /mnt/boot /mnt/root
  • mount /dev/sda1 /mnt/boot
  • mount /dev/sda2 /mnt/root
  • mkdir /mnt/boot/boot
  • cp wdmc2-kernel/output/boot/uImage-5.15.* /mnt/boot/boot/uImage
  • cp wdmc2-kernel/output/boot/uRamdisk /mnt/boot/boot/uInitrd
  • tar -xvzf wdmc2-kernel/output/bullseye-rootfs.tar.gz --directory=/mnt/root/
  • rm -rf /mnt/root/etc/fstab
  • cp /mnt/root/etc/fstab.usb /mnt/root/etc/fstab
    // within /mnt/root/etc/fstab:
    // change all /dev/sdb to /dev/sdc if both drive slots of the NAS are used <- this!
    // change all /dev/sdb to /dev/sda if no drive slots of the NAS are used
  • umount /mnt/boot /mnt/root

Step 3: First boot and getting things running

Insert the USB stick into the #2 slot of the NAS. Leave the drives out, boot it up for the first time and watch it via the serial terminal. Log in at the end with root and your chosen password.

If it boots, you can shut it down again with shutdown -P now, unplug power, insert the drives and reboot.

First thing after the first boot with drives: build your own initramfs / ramdisk for your current setup:

  • cd /root/
  • ./build_initramfs.sh
  • cp initramfs/uRamdisk /boot/boot/uInitrd

Second, install MDADM for the RAID:

  • apt update
  • apt install mdadm
  • mkdir /mnt/HD
  • edit your /etc/fstab and add a mount point for your md/RAID. I used the old drives with my old data on them like this (depending on which mdX it comes up as...; a short mdadm sketch follows below)
/dev/md0        /mnt/HD         ext4    defaults,noatime,nodiratime,commit=600,errors=remount-ro        0       1
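If the array does not come up on its own, or you want to pin its name, a short mdadm sketch (standard Debian tooling, nothing NAS-specific):

cat /proc/mdstat                                 # see which /dev/mdX the old RAID was assembled as
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # persist the array definition so the name stays stable across reboots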
 

A lot of good knowledge about Ramdrives can be found here.

I would advise doing steps 1 (Folder2RAM), 2 (Kernel Options) and 4 (Logrotate) - option 3 did not work out for me.

To get the drives to sleep at some point, we need to reconfigure mdadm:

dpkg-reconfigure mdadm
// monthly check ok 
// daily degradation check ok
// monitoring disable

... and get hdparm working

apt install hdparm hd-idle

# hdparm config, add the following in /etc/hdparm.conf:

/dev/sda {
#        apm = 127
#        acoustic_management = 127
        spindown_time = 120
#       spindown_time = 4
        write_cache = on
}

/dev/sdb {
#        apm = 127
#        acoustic_management = 127
        spindown_time = 120
#       spindown_time = 4
        write_cache = on
}

# Spindown Time means: 120 * 5 sec = 600 sec / 60 sec = 10 min
# apply it after saving the file with:
/usr/lib/pm-utils/power.d/95hdparm-apm resume

We can check the status of the drives with smartctl

smartctl -i -n standby /dev/sda
smartctl -i -n standby /dev/sdb

To get fan control working

apt install wget
wget -O mcm-fancontrol-master.tar.gz https://github.com/nmaas87/mcm-fancontrol/archive/refs/heads/master.tar.gz
tar -xvzf mcm-fancontrol-master.tar.gz
cd mcm-fancontrol-master/
cp fan-daemon.py /usr/sbin/
chmod +x /usr/sbin/fan-daemon.py
cp fan-daemon.service /etc/systemd/system/
systemctl enable fan-daemon
systemctl start fan-daemon

(You can change the low and high temperature in /usr/sbin/fan-daemon.py to get the fan to kick in later, and also set DEBUG = True if you want to see some details in systemctl status fan-daemon.)

MTD utils can be useful, just mentioning them here:

apt install -y mtd-utils
cat /proc/cmdline
cat /proc/mtd

Samba ...

apt install samba --no-install-recommends
# change /etc/samba/smb.conf to your liking and setup your SMB
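For reference, a minimal share definition pointing at the RAID mount from above - share name, path and user are placeholders, adjust to your setup:

cat >> /etc/samba/smb.conf <<'EOF'

[nas]
   path = /mnt/HD
   browseable = yes
   read only = no
   valid users = youruser
EOF
smbpasswd -a youruser        # set a Samba password for the (existing) Unix user
systemctl restart smbd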

Plex ...

# Plex 
apt update
apt install apt-transport-https ca-certificates curl gnupg2
curl https://downloads.plex.tv/plex-keys/PlexSign.key | apt-key add -
echo deb https://downloads.plex.tv/repo/deb public main | tee /etc/apt/sources.list.d/plexmediaserver.list
apt update
apt install plexmediaserver
systemctl status plexmediaserver

Well, that's it.

Thanks a lot to all awesome contributors in the net:

Companion repo with files: https://github.com/nmaas87/WDMCMG2

gpsTime

I think there is nothing more pleasing than having extremely precise measurements at your fingertips. Like time. While in the past it was quite problematic to measure time accurately (not talking about sundials, but... why not? ;)) - mankind has created one precise time source as a byproduct (read: "waste product") of its quest for accurate navigation: GNSS in its different flavours like GPS, GLONASS, Galileo, BeiDou and others.

Tapping into this time source and providing it to your local computer network via NTP has been done by countless people and is an extremely rewarding task. Is it necessary? Maybe not. Is it really cool? Yes. And now it is even easier, as you don't need to configure it yourself but can use balenaHub and the preconfigured gpsTime project.

We do not waste time on fancy logos 😉

Basically you just need an RPi B+ (2/3/4), a micro SD card, a power supply and a 3v3 TTL level GPS module with PPS output. The rest is done by going to the balenaHub entry shown above, creating a free account, flashing balenaOS onto your SD card, booting the RPi with internet access for the first time and letting it pull the needed containers. Afterwards you can use the RPi offline and still enjoy your precise time source.
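A quick way to verify the new time source from another machine on the LAN - the IP below is a placeholder for the Pi's address, and the client package name differs between distros:

sudo apt install ntpdate     # or ntpsec-ntpdate, depending on your distro
ntpdate -q 192.168.1.50      # query-only: prints offset and delay against the Pi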

A Watterott CQM-M8Q breakout and a good old RPi 2B+ are more than powerful enough

More details can be found in the GitHub repo and you can work on and improve that project to your heart's content. I am probably going to do a PiAndMore talk about it - and use the project myself as a building block for precise timing in some support equipment.

BeagleLogic

To get your BeagleBone Black (BBB) to work as a logic analyzer, you can use BeagleLogic:

1.) Download latest version of BeagleLogic
from: https://beaglelogic.readthedocs.io/en/latest/beaglelogic_system_image.html
Released on 2017-07-13, sha256sum = be67e3b8a21c054cd6dcae7c50e9e518492d5d1ddaa83619878afeffe59c99bd
https://goo.gl/RiXGBs
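It does not hurt to verify the download against that checksum before flashing - the file name below is a placeholder for whatever the image is called on your disk:

sha256sum beaglelogic-image.img.xz
# expected: be67e3b8a21c054cd6dcae7c50e9e518492d5d1ddaa83619878afeffe59c99bd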

2.) Burn it onto sdcard with Etcher

3.) Put SDCard into BeagleBone Black

4.) Hold "Boot" Button, plug in BBB with Mini USB Cable to your PC

5.) After 10 seconds or so, release "Boot" Button (this will just boot from SDcard with this trick)

6.) Wait some time and login to 192.168.7.2 via SSH
Username: debian
password: temppwd

7.) Test if BeagleLogic works
ls -l /dev/beaglelogic
There should be an entry /dev/beaglelogic as the answer; if not, check the BeagleLogic website

8.) Change uEnv.txt
sudo vi /boot/uEnv.txt
#go to following lines:
##enable Generic eMMC Flasher:
##make sure, these tools are installed: dosfstools rsync
#cmdline=init=/opt/scripts/tools/eMMC/init-eMMC-flasher-v3.sh

remove # from #cmdline=init=/opt/scripts/tools/eMMC/init-eMMC-flasher-v3.sh

#save and exit

9.) Shutdown
sudo shutdown now

10.) Boot again, this time with flasher
Hold "Boot" Button, plug in BBB with Mini USB Cable to your PC, after 10 seconds, release the button.
During the flash process of the eMMC, the 4 LEDs will put on a "Knight Rider"-like light show.
This will take some minutes. The BBB will shutdown afterwards (all LEDs off).

11.) Remove SDCard, replug Mini USB and boot without holding any buttons

12.) Wait some time and login to 192.168.7.2 via SSH
Username: debian
password: temppwd

13.) Open your webbrowser and go to: http://192.168.7.2:4000/#

Next:
- Upgrade
- this enables you to also use the BeagleLogic as an "external analyzer" and control it from e.g. a Windows system running a special version of PulseView 🙂
- More infos: https://goo.gl/L9QUrt
Connect to Ethernet / Internet
cd /opt/BeagleLogic/
git config http.sslVerify false
git pull
sudo apt update
sudo ./install.sh --upgrade
sudo reboot now

Useful infos:
https://beagleboard.org/getting-started#update
http://beagleboard.org/latest-images
https://beaglelogic.readthedocs.io/en/latest/beaglelogic_system_image.html
https://beaglelogic.readthedocs.io/en/latest/install.html

Rogue One: My first field pentesting

Earlier that year - very early, actually - a technician of an international company called me for some advice. She/he had a problem with the local networking staff and their "modus operandi" regarding network security. The company had a big assembly line and very powerful, automated machinery - which made the leaky security all the more troublesome. My job was to exploit one of those security holes and demonstrate said problems as clearly and simply as possible - so that they would finally get fixed.

The first stage of the whole test was the usual: reconnaissance. In this case it was very easily achieved, as my contact handed me parts of the firewall ruleset as well as access to their office LAN. The first thing that lit up like a Christmas tree: they actually had the production and office networks separated by a firewall - which is good. The bad part: they dropped everything. Everything except every kind of ICMP packet. Well. Damn.

The second stage was to create an exploit for that happy little mishap: my contact wanted to be able to bridge the office and production networks and access them through the - according to the networking department - watertight firewall. The exploit needed to be able to run on a Windows 7 machine as well. With that in mind, I went through different ICMP tunnels: HANS and Dhaval Kapil's icmptunnel were the first ones to be dropped from the list, as they did not satisfy all constraints. In the end, I chose icmptunnel, or ptunnel for short. With a bit of manual patching, I could get it to compile and work again on Windows, thanks to the efforts of Mike Miller.
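To give an idea of how such a ptunnel setup is driven - addresses, ports and the password below are placeholders, not what was used on site:

# on the proxy inside the production network (the later "Rogue Pi"):
sudo ptunnel -x SECRET
# on the client in the office network: forward local port 8022 through ICMP
# to SSH (port 22) on a host behind the proxy:
sudo ptunnel -p 10.0.0.5 -lp 8022 -da 192.168.10.20 -dp 22 -x SECRET
ssh -p 8022 user@localhost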

For testing I recreated the network and firewall using a Cisco 1841 router and a Cisco 3560 switch. As I needed to integrate ptunnel into the production network, I wanted it to look as innocent and inconspicuous as possible: so I used a Raspberry Pi 3 and put it into a DIN rail case - then I outfitted it with a Power over Ethernet adapter so it could get network as well as power over a single network connection.

The tests worked flawlessly and I even squeezed enough speed out of the ICMP tunnel to get some remote desktop sessions working.

 

On to stage three: Attack.

This stage turned out to be way cooler than expected: due to certain circumstances, we met at night - zero dark thirty, you could say - and sneaked through the production line, past workers who did not take much notice of my presence. I inserted the "Rogue Pi" into a closet next to a Siemens Human-Machine Interface and plugged it into the network switch.

Then we left again. Back in the office, I tried to connect to my little helper and was immediately rewarded with a working ICMP tunnel - now transferring an SSH connection as payload. From that moment on, I could connect to a dozen different systems from different vendors in that production network. Last but not least, as a "visual" demo, we created a little batch script to start the connection and connect to the remote desktop interface / Human-Machine Interface of a very heavy and very unsecured press - leaving it under our control.

At this point, said connection was only opened in a "read/view only" mode so that we could not harm anyone, even by accident. We had to bear in mind that this multi-hundred-ton press was now at the mercy of our fingertips and we did not want to wreak havoc under any circumstances - so, if you're conducting field exercises with real "heavy hardware", find a way to interact with it safely before you engage any connection to it.

With this preparation, the technician was able to run the demo in front of the higher-ups and finally got the attention, permission and support needed to bring security to a higher standard.

So that effort paid off in the end for the production security of that company - and rewarded me with my first - and hopefully not last - field pentest :).

 

Odroid U3 Kernel Upgrade + Docker

I wrote this back in January 2017. Since then I have not had much time to work on the Odroid - however, user hexdump just came up with a new repo supporting the Odroid U3 with kernel 5.4.x - you can find the overview of his awesome work here and the repo with complete releases (e.g. Ubuntu Bionic or Debian Buster, e.g. odroid_u3-armv7l-debian.img.gz) here.

I am using a trusty old Odroid U3 which I acquired years ago. With its Samsung Exynos 4412 Cortex-A9 quad core at 1.7 GHz, 1 MB L2 cache and 2 GB of RAM, this little puppy was a real beast - compared to the Raspberry Pi 1 at that time. However, Hardkernel dropped the support - again - which left users stuck with very old kernel versions. Thanks to some users and the fact that all needed support for the Exynos is now included in the current kernel - well, we can build our own. This write-up is the distilled result of days of work and a lot of research - and of the work of other people which I found on the net (to whom I will try to give proper credit at the right locations :)).

EDIT: I updated the kernel configuration gist for my kernel config + Docker on 10.02.2017. Thanks to an e-mail from Tobias Jakobi I found the pieces I had missed about adding the kernel-internal fan service to the config. This works now - however, I still like my tweaked program a bit better, as it cools the system more aggressively; the kernel default one is a lot quieter, but runs in the 80s °C, while mine stays at 70 °C under max load.

It is important to note that these instructions - especially when it comes down to installing stuff - are written for usage with eMMC memory, NOT THE SD CARD! Also, there be dragons and something could go wrong - so please, as usual, advance at your own pace and risk! 🙂

0.) Get a serial interface for 1.8V
Important: the UART is 1.8V LVTTL ONLY! If you connect 3.3V or 5V, you'll blow the U3! I used a regular 5V TTL USB adapter together with a Sparkfun BiDirectional level converter: https://www.sparkfun.com/products/12009 With that set to 1.8V on the UART side of the U3, it worked flawlessly at the usual 115200 baud.

Pinout:
http://odroid.com/dokuwiki/doku.php?id=en:u3_hardware

_____UART____
|Pin 4 - GND|
|Pin 3 - RXD|
|Pin 2 - TXD|
|Pin 1 - VCC|
|___________|
1.8V LVTTL

1.) Build U-Boot
A lot of stuff is taken from here, thanks a lot for your great work, SnakeBite!
We assume you're working as root, as all this stuff will need root rights :).

# update your packages
apt-get update
# needed for building u-boot
apt-get install device-tree-compiler
# get ODROID signed u-boot
wget http://odroid.in/guides/ubuntu-lfs/boot.tar.gz
tar xzf boot.tar.gz
# get patched u-boot & build for the U3
git clone https://github.com/tobiasjakobi/u-boot
cd u-boot
make odroid_config
make
#copy fresh u-boot to ODROID directory
cp u-boot-dtb.bin ../boot/u-boot.bin
cd ../boot
## install on SDCard - not what we want, just as a remark for me
#bash sd_fusing.sh /dev/mmcblk0

Copy the needed files (u-boot.bin, E4412_S.bl1.HardKernel.bin, bl2.signed.bin, E4412_S.tzsw.signed.bin) to your PC, then reboot your Odroid U3 into fastboot by connecting the UART to the U3 and aborting the boot. After that, you can issue the fastboot command on the UART console. The U3 will now wait for a file transfer over the Micro USB port, which you'll need to connect to your PC. Also, for the sake of an easy upgrade, use a Linux PC (more infos here: http://odroid.com/dokuwiki/doku.php?id=en:u3_building_u-boot ).

# install needed programs
sudo apt-get update
sudo apt-get install android-tools-adb android-tools-fastboot
# and - being in the right folder, start the transfer
# u-boot.bin install
sudo fastboot flash bootloader u-boot.bin
# bl1.bin install
sudo fastboot flash fwbl1 bl1.HardKernel
# bl2.bin install
sudo fastboot flash bl2 bl2.HardKernel
# tzsw.bin install
sudo fastboot flash tzsw tzsw.HardKernel
# If installation is done, you can reboot your ODROID-U3 with fastboot.
sudo fastboot reboot

You should now have a more recent U-Boot install.

Old: U-Boot 2010.12-svn (May 12 2014 - 15:05:46) for Exynox4412
New: U-Boot 2016.11-rc3-g8a65327 (Jan 07 2017 - 23:00:56 +0100)

By the way, we needed to download this boot.tar.gz, because it contains the keys needed to sign our new U-Boot install. More Infos about U-Boot and Keys: https://github.com/dsd/u-boot/blob/master/doc/README.odroid

The installation of a more recent U-Boot version was necessary to facilitate booting the to-be-built new kernel zImage with bootz.

1b.) eMMC recovery in case something goes wrong:
http://forum.odroid.com/viewtopic.php?f=53&t=969
DL the tool [ exynos4412_emmc_recovery_from_sd_20140629.zip ]

  1. Prepare a microSD card and flash the attached image.
  2. Insert microSD into U2/U3, disconnect eMMC
  3. Turn on U2/U3 and wait for a few seconds and blue LED will blink.
  4. Plug your eMMC module into U2/U3
    4b - wait 10 seconds!
  5. Plug micro-USB cable into U2/U3 and connect other side to your PC USB host or ODROID's USB host port. (This is a trigger to start the recovery)
  6. After recovery process (only a few seconds), the blue LED will turn off automatically.
  7. Finish. Install the OS on your eMMC as usual.

2.) Building Next Kernel for Odroid U3 with eMMC
And now to start the real work:

apt-get update
apt-get install live-boot u-boot-tools
cd ~
git clone --depth 1 git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git linux_odroid
cd linux_odroid
# we could make an default config, but this is not needed, we take rglinuxtech config in the next step
# make exynos_defconfig
# Odroid Config Kernel 4.4 from http://rglinuxtech.com/?p=1656
curl -o .config http://pastebin.com/raw/NveRajaZ
# Or you can use my Config which enables Docker as well (Gist at the End of the page)
curl -o .config https://gist.githubusercontent.com/nmaas87/81818c1db9dc292a4c21125bd2602658/raw/7e4e14fa15d7c68b177f31b9e2348d62c52cf83c/u3_docker_config
make menuconfig
make prepare modules_prepare
make -j4 zImage modules dtbs
make modules_install
cp arch/arm/boot/dts/exynos4412-odroidu3.dtb /media/boot/exynos4412-odroidu3_next.dtb
cp arch/arm/boot/zImage /media/boot/zImage_next
cp .config /boot/config-`cat include/config/kernel.release`
update-initramfs -c -k `cat include/config/kernel.release`
mkimage -A arm -O linux -T ramdisk -C none -a 0 -e 0 -n uInitrd -d /boot/initrd.img-`cat include/config/kernel.release` /boot/uInitrd-`cat include/config/kernel.release`
cp /boot/uInitrd-`cat include/config/kernel.release` /media/boot/
cd /media/boot/
vi boot.txt
# now we have to rework the boot.txt / config
# comment out the old values and set in the new ones
# please do NOT copy blindly, you need to adjust the zImage, uInitrd and eyxnos4412***.dtb file names according to your system!
setenv initrd_high "0xffffffff"
setenv fdt_high "0xffffffff"
#setenv bootcmd "fatload mmc 0:1 0x40008000 zImage; fatload mmc 0:1 0x42000000 uInitrd; bootm 0x40008000 0x42000000"
setenv bootcmd "fatload mmc 0:1 0x40008000 zImage_next; fatload mmc 0:1 0x42000000 uInitrd-4.10.0-rc2-next-20170106-v7; fatload mmc 0:1 0x44000000 exynos4412-odroidu3_next.dtb; bootz 0x40008000 0x42000000 0x44000000"
#setenv bootargs "console=tty1 console=ttySAC1,115200n8 root=/dev/mmcblk0p2 rootwait ro mem=2047M"
setenv bootargs "console=tty1 console=ttySAC1,115200n8 root=/dev/mmcblk1p2 rootwait ro mem=2047M"
boot

#After you have done that, write the commands to the boot.scr file
mkimage -C none -A arm -T script -d boot.txt boot.scr
# sync and reboot and it should work
sync
reboot now

With this, I really upgraded my system from kernel 3.8.13 from 2015 to the most recent 4.10.0-rc2 next kernel 🙂

Old: Linux odroid 3.8.13.30 #1 SMP PREEMPT Fri Sep 4 23:45:57 BRT 2015 armv7l armv7l armv7l GNU/Linux
New: Linux odroid 4.10.0-rc2-next-20170106-v7 #3 SMP PREEMPT Mon Jan 9 19:17:32 CET 2017 armv7l armv7l armv7l GNU/Linux

2b.) FAN does not work, warning!
The CPU fan somehow does not work right out of the box, so we will now enable it manually. [EDIT: with the new kernel config it works out of the box, but you can still decide to use this software to get more aggressive cooling :)]

# Fan to full speed
echo 255 > /sys/devices/platform/pwm-fan/hwmon/hwmon0/pwm1
# Read out current temperature in °C
cat /sys/devices/virtual/thermal/thermal_zone0/temp

To get things working again, I forked and updated the odroidu2 fan tool. Install it via:

git clone --depth 1 https://github.com/nmaas87/odroidu2-fan-service.git
cd odroidu2-fan-service
make
# install it as upstart service, i.e. < Ubuntu 16.04
make usi
# install it as systemd, i.e. Ubuntu 16.04 / Xenial
make systemd
reboot

Useful Commands:

# Read max CPU Speed:
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
# Get current CPU Speed:
cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq
# Torture Test:
openssl speed -multi 4

2c.) Upgrade to Xenial
As I upgraded to Xenial with do-release-upgrade, I had some problems:

Authentication problem:
"It was not possible to authenticate some packages. This may be a transient network problem. You may want to try again later. See below for a list of unauthenticated packages."
Create /etc/update-manager/release-upgrades.d/unauth.cfg with:

[Distro]
AllowUnauthenticated=yes

After upgrade, remove this file.
from: http://askubuntu.com/questions/425355/error-authenticating-some-packages-while-upgrade

After that, apt-get clean did not work:
apt-get clean
W: Problem unlinking the file apt-fast - Clean (21: Is a directory)

Solution was:

rm -rf /var/cache/apt/archives/apt-fast

from: http://askubuntu.com/questions/765274/error-problem-unlinking-in-apt-get-clean

2d.) MAC address changes every reboot:
One solution, which did not work, was the following:

rm /etc/smsc95xx_mac_addr

from: http://forum.odroid.com/viewtopic.php?f=7&t=1070

What worked better was to really set the MAC address to a static value:
add in /etc/network/interfaces

auto eth0
iface eth0 inet dhcp
hwaddress ether bb:aa:ee:cc:dd:ff

from: http://forum.odroid.com/viewtopic.php?f=111&t=8198

2e.) Control the CPU speeds via cpufrequtils:

apt-get install cpufrequtils
vi /etc/default/cpufrequtils

ENABLE="true"
GOVERNOR="ondemand"
MAX_SPEED=1704000
MIN_SPEED=200000

However, I chose "performance" as GOVERNOR and a MIN_SPEED=800000

from: http://forum.odroid.com/viewtopic.php?f=65&t=2795

2f.) Install Docker
If you chose my .config with Docker enabled, you can install Docker with a quick
curl -sSL https://get.docker.com/ | sh
Thanks a lot to the guys over at Hypriot - I took their RPi kernel configs as an example and merged those with the U3 configs to get to this result. And yes, AUFS is still missing, but... it is ok 😉
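A quick sanity check after the install (just a suggestion; needs network access to pull the image):

docker info                     # the daemon should answer without errors
docker run --rm hello-world     # pulls and runs the tiny multi-arch hello-world image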

Additional stuff:
- Gist of my Kernel Config: https://gist.github.com/nmaas87/81818c1db9dc292a4c21125bd2602658

Following sites helped:
- https://blogs.s-osg.org/install-ubuntu-run-mainline-kernel-odroid-xu4/
- http://rtisat.blogspot.de/search/label/odroid-u3
- https://github.com/umiddelb/armhf/wiki/How-To-compile-a-custom-Linux-kernel-for-your-ARM-device
- http://rglinuxtech.com/?p=1622
- http://rglinuxtech.com/?p=1656
- http://forum.odroid.com/viewtopic.php?f=81&t=9342

Restoring an Apple //c / 2c and Monitor

Normally, I tend to work on freelance projects in the art sector, e.g. for exhibitions (like seen here) or props for different cosplays / costumes. Another job I am really into is the repair of different electronics and computer-related things at the RepairCafé in Trier, which I do on a voluntary basis / for free. This time, however, a friend approached me with a commission: repair an old computer he had found - and wanted to see working again :). So I ended up doing just that.

"State of Decay":
The Apple 2c and monitor came in quite good condition, however without any accessories - not even the power supply "brick" which was needed to operate the 2c. So my first order of business was to create a power supply from scratch, then test the monitor.

1.) Apple 2c / //c Power Supply:
Creating the power supply was surprisingly easy, as the 2c uses an internal converter which can take in about anything between 9 and 20 V. However, I wanted to stay as "true" to the original as possible, which uses a 15 V, 1.2 A supply. To get this voltage, I used a Toshiba PA3755U-1ACA (Toshiba part numbers: P000567170, P000519840) which takes in 100-240 V AC and outputs 15 V @ 5 A (75 W) - that is quite a bit more than needed by the 2c, but it came in at 8.98 € on eBay (with shipping) - so that was quite ok! Also, I needed a connector to hook up the supply to the Apple. Luckily, Apple used a common DIN-style connector, so I just needed to buy a female DIN plug connector with 7 pins - and that's it. The wiring schematics came from an old scan of the Apple Reference Manual which I attached here. After connecting everything together, it just worked :)!

Handbook:

Another shot of the original plug:

Self-soldered-mess before cleaning up and sealing

Apple 2c working

The obligatory "Hello World" in basic 🙂

2.) Apple Monitor:
After the 2c was back in operation, I tested the monitor, which worked perfectly out of the box. I just needed to grab a Cinch (RCA) cable from my sound system, connect the Apple 2c to the monitor, and it just worked - for about 20 minutes. Then the monitor went up in smoke and failed. I opened up the case and followed the stench to a nicely blown interference suppression capacitor:

As the fuse was blown as well and another capacitor was sitting on the same board, I figured I should replace them all. The "big" one was a 0.47 uF / 250 VAC part, the smaller a 0.1 uF / 250 VAC, both X2 rated. The fuse was a 250 V, 315 mA, "T" rated ("träge" / slow-blow) fuse.

After I replaced everything and wired it up again, I dared a small test: it worked!

I did some additional cleaning as well as a good round of testing and it seemed to be working very well. I figured out that I could jump directly to the BASIC interpreter by pressing CONTROL + RESET and had some PRINT "Hello World" fun again ;). And with that, the whole thing was ready to be given back to its owner :).

Nexus 4 Upgrade to CM 14.1 / Android 7.1 Nougat + Updates

[UPDATED 10.12.2016]
A little bit problematic, but... works in the end.

Warning: Bleeding Edge!
Will bootloop, you'll need to format your phone, it will break eggs, PCs, glass and burn down your house.
Comes with no support whatsoever from my side.
You have been warned!

1.) Get all the files
CM: https://download.cyanogenmod.org/?device=mako (Latest version was: https://download.cyanogenmod.org/get/jenkins/187896/cm-14.1-20161202-NIGHTLY-mako.zip)
GAPPS: http://opengapps.org/ (ARM, 7.1 GAPPS, Picco Version)
TWRP: https://twrp.me/devices/lgnexus4.html (Download from: https://dl.twrp.me/mako)

2.) Flash TWRP
- Press Vol Down and Power Button to shut down the Phone.
- Press Vol Down and Power Button again to boot to the Bootloader
- Upload TWRP via fastboot: fastboot flash recovery twrp.img

3.) Flash and Install
- Boot to Recovery
- Delete all Files from your Phone
- Upload CM and GAPPS file via MTP to sdcard
- Flash CM, then GAPPS
- Clean Cache

4.) "Debug"
You would get a bootloop if you rebooted now. According to http://androidforums.com/threads/twrp-bootloop-fix-after-update-ota.922585/ you need to go to the terminal (or adb shell) and enter the following commands:
dd if=/dev/zero of=/dev/block/platform/msm_sdcc.1/by-name/fota
(enter) - and then
dd if=/dev/zero of=/dev/block/platform/msm_sdcc.1/by-name/misc
(enter)

5.) Reboot
And you're good to go. However, somehow the update functions are bricked, so... check out the forums and bug reports:

http://forum.xda-developers.com/nexus-4/orig-development/official-cyanogenmod-14-1-nexus-4-t3507532/page7
https://www.cmxlog.com/14.1/mako
https://review.cyanogenmod.org/#/c/173328/

EDIT / REGARDING UPDATES:
Updates seem to be bricked because of some error in TWRP which sends Android 7.1 into a bootloop after EVERY update. So if you're updating via regular "full-sized images" from get.cm, you need to run the 4th section / Debug EVERY time you update! However, you can get around that if you use CyanDelta Updater (https://play.google.com/store/apps/details?id=com.cyandelta). Just install CyanDelta Updater on your phone, choose your CURRENT cm.zip (i.e. the one you installed your phone with, in this tutorial cm-14.1-20161202-NIGHTLY-mako.zip) and let it index that. After this, it will try to find newer CM versions on the net and only download the delta, i.e. the changes to your current image. This will most likely shrink your 330+ MB download to about a 40 MB download (which is nice!) - and after that, you can just install the update via the same app (actually it will ask you; just give it root rights and let it do its job. The next reboot will take a while, but it works! :)). In any case, the TWRP bootloop problem is solved with that, update downloads are smaller and everyone's happy ;).

IMPORTANT: The best strategy before attempting ANY kind of update is to make a full backup via TWRP and move it to a secure location (e.g. your PC hard drive) in case you brick your phone and need to restore it! At least one backup should be made after your initial install (with all apps and stuff). It makes life so much easier in case something goes wrong :).
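If you want that TWRP backup safely on your PC, a small sketch using adb - the folder names under BACKUPS contain your device serial and the backup date, so adjust accordingly:

adb pull /sdcard/TWRP/BACKUPS/ ./nexus4-twrp-backups/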