Advantech AIR-020X Review

Normally, I don't get review units. That is because I only run this small weblog, along with some conference talks - and most companies would probably be better off sending their units to someone with the reach of Linus Tech Tips, or similar.

On the other hand, when I do get the chance to do a review, it can be a bit worrisome for the company as well, as I am a very honest person. I have been working in tech for some time now and had the honor of building things that went to space - and came back to tell the tale. I know what I want in a unit - and what could be a problem.

With this out of the way: I was one lucky winner of the Advantech Edge AI Challenge 2022 and got an AIR-020X-S9A1 unit at no charge to realize my labSentinel 2 project. Through that project I learned a bit about the box and thought it would not be a bad idea to share my impressions with the readers of my blog - and also with Advantech, so that they can improve upon their product. This review is not paid for and reflects my own thoughts; I received the mentioned unit for my project, but the review was not part of that deal. With that said, let's get started.

The hardware

The AIR-020X comes very well packaged, with its own foam jacket which will save it from all but the most horrible abuse by postal services. Not that it would matter: the roughly 14 cm x 12 cm x 4.5 cm compact unit weighs in at nearly 850 g and is built sturdy and robust - like a tank:

The most obvious part of the unit is its heatsink, which it puts to good use - but more on that topic later. Along with the computer itself comes a printed Chinese quick start guide and a short USB A to Micro B cable, which will be needed to factory reset and reflash the unit.

All in all, the AIR-020X is an impressive unit: it includes an Nvidia Jetson Xavier NX module with 8 GB RAM, 16 GB onboard eMMC, 128 GB M.2 flash, 2x RS232/422/485, 1x CANbus, 1x DIO ("GPIO"), 2x 1 Gbit Ethernet, 1x full-size mPCIe with nano SIM holder, 1x 4K HDMI output, 2x USB 3.0 Type A and 1x USB Type C. The unit is powered by a 12-24 V DC power supply, which is an optional accessory.

Being an industrial unit, it uses an industrial-type connector for power, an HT5.08 2-pole type:

As this connector is not part of the base package either, and the USB C port accepts no Power Delivery (nor does it work in DisplayPort mode), powering up the unit after receiving it becomes a bit harder. Finding a usable power supply within the wide 12-24 V range (e.g. from an old laptop) is fairly easy, but without the connector it is a dead end until the next delivery arrives. It would be useful to include at least one connector with the base unit. The USB cable is a nice addition but could be left out (even though it is of very high quality) - along with the Chinese manual. Both could be replaced by a small card with direct links to the English and Chinese PDF versions of the manual.

Opening up the unit reveals the internals - but not without a fight:

The screws are firmly secured to the structure with blue Loctite - a touch I cannot recommend enough for the vibration resistance of the overall unit - but the screws themselves are made of extremely soft metal, so that even with the correct screwdriver I stripped nearly all of them and had real trouble removing them. This problem seems to affect all the external black screws; the internal silver ones were of much higher quality. I fixed the issue by replacing the black screws with new ones and have not had a problem with them since.

The internal structure is very well laid out, raising the M.2 drive onto a standoff to keep it a bit further away from the heat source - the Xavier NX module, which sits on the other side of the PCB and is sandwiched directly against the big heatsink.

Also very welcome is the addition of the two Raspberry Pi style camera connectors, although they are a bit hidden behind the serial console cables. I understand that the unit should be as closed as possible for use in factories, but I would have loved to see two small slits (possibly even with IP/EMC gaskets to keep those entry points protected) so that cameras on the outside of the case can be attached easily.

The mPCIe slot gives the system an additional expansion option, e.g. for UMTS or LoRaWAN modules, and the internal CR2032 cell for the RTC is a small but valuable detail.

The AIR-020X has mounting points on both sides of the system for additional wall mounting rails. Looking at these mounting points and the obvious use of the AIR-020 series in lab and factory settings, a DIN rail mount as an available accessory would be very useful for mounting this small computer directly in an electrical cabinet.

The software

Booting up the system greets one with a very familiar picture: Ubuntu 18.04 is running on the machine in the form of a tailored version of Nvidia JetPack. Advantech's version uses the eMMC of the Xavier NX module only to start the bootloader, while the actual data is kept on the M.2. This is a great idea for the longevity of the eMMC on the (currently hard to find) Xavier NX module - but it comes at the cost of additional customization on top of the PCB, the included hardware, the drivers and the other changes Advantech already made compared to an Nvidia developer board with the same module.
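This split is easy to see on the running system - a minimal sketch (the device names are assumptions, check your own output):

lsblk -o NAME,SIZE,MOUNTPOINT   # the 16 GB eMMC (mmcblk0) vs. the 128 GB M.2 drive holding /
cat /proc/cmdline               # shows which device the kernel was told to use as rootfs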

This is a problem I also learned the hard way: I realized that the board was delivered with L4T 32.5.2 - not the current 32.7.x (JetPack 4.6.1) - so I updated it by hand, only to have the board bootloop. This was the moment I took a closer look at Advantech's online presence and the manual - just to learn that the recovery process was neither described, nor was the image available for download. I got the needed recovery file as well as the documentation (which also included vital information on how to use the DIO (GPIO), RS422 and CANbus interfaces) and was able to restore the board to working order. Obviously there were multiple problems with this: first, the online available manual should contain all needed information regarding settings, ports, recovery, etc. - secondly, the current (and maybe even older) images also need to be available online on their website, with checksums, to be able to deploy these images safely.
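For reference, the generic Jetson recovery flow looks roughly like the sketch below; Advantech's recovery package may wrap it in its own script and use its own board config, so treat the exact command as an assumption and follow their documentation where it differs. The unit is put into force recovery mode and flashed over the USB A to Micro B cable from a Linux host:

lsusb | grep -i nvidia     # in recovery mode the module shows up as an "NVIDIA Corp. APX" device
sudo ./flash.sh jetson-xavier-nx-devkit-emmc mmcblk0p1     # run from inside the extracted recovery/L4T package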

I also voiced my concerns regarding the high-impact security issues / CVEs found in 32.5.2 - which would make the use of the AIR-020 series an absolute liability in a production environment. I am glad to report that Advantech reacted to these concerns by providing a beta version of a new JetPack 4.6.1 image. A short time afterwards, Advantech added some information to their wiki:

On the download page you can find the AIR020A2AIM20UIV00004 entry for the Jetson NX JetPack 4.6.1 from 2022-07-20. This links to a Dropbox folder containing the latest image (AIR020A2AIM20UIV00004_194.tar.gz / 2022-09-16).

With this latest image I was able to upgrade the AIR-020X to JetPack 4.6.1 and even do an apt upgrade to L4T 32.7.2, at the time the latest L4T. However, this did not go as planned: after doing the upgrade and rebooting the device, it got caught in a bootloop. The bootloop kept repeating for about 10 minutes until the device mysteriously started working again and came back up without issues. Obviously this is not a graceful upgrade, and the fact that it was reproducible raised some concerns.
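For those wanting to retrace the upgrade: on JetPack 4.6.x the L4T point releases come in via NVIDIA's apt repository, so the upgrade itself is just the usual routine (assuming the repository is enabled in /etc/apt/sources.list.d/nvidia-l4t-apt-source.list):

cat /etc/nv_tegra_release     # shows the currently installed L4T release, e.g. R32, REVISION: 7.2
sudo apt update
sudo apt upgrade              # pulls the newer nvidia-l4t-* packages
sudo reboot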

I am glad to report that Advantech has provided the latest image - which eliminates several security issues. However, the needed changes to the manual, the provision of the recovery images (now via Dropbox?) and the secure delivery of security updates to the unit remain open points. Maybe Advantech should consider using balena.io to handle these issues?

Verdict

The Advantech AIR-020X is an extremely capable unit in a small form factor, sturdily built and highly reliable. Even with the latest JetPack 4.6.1 and abuse of the previously unavailable 20 W mode, I could not get this unit to heat up too much in my testing with labSentinel 2. There is still enough headroom available to use it in any kind of environment, which makes it a perfect choice for labs and factories - if Advantech can tackle the presented issues, especially the ones regarding timely and secure availability of security patches and software updates. This also means availability of these images, fast adoption after the release of official Nvidia updates, and all needed documentation in one manual for public download. With these exceptions and some small kinks, Advantech is very close to building the perfect unit for their envisioned use case. I really hope they can close that last (security/software/manual) gap in an otherwise nearly perfect piece of hardware - and with that create a recommendable product.

Edit: balenaOS

I got balenaOS working on the device - see here.

100 Euro External 13.5'' / 2256x1504 Display with 2x Mini HDMI Input

Working with telemetry, scientific data, programming, writing papers or just casually hacking the Gibson requires one thing: a lot of screen real estate. To allow for that "bit of extra space", I built myself an external 13.5'' / 2256x1504 display. It comes with 2x Mini HDMI inputs and is powered over USB.

The front is protected by perspex, while the back is just scrap material from a PCB stencil transport. The black straps hold down the eDP cable running to the display.

The whole project comes in at just a touch over 100 Euros, which makes it probably one of the most affordable external high-resolution displays you can get for your money.

The used display is a NE135FBM-N41 (here on Aliexpress for 70.12 Euro).

After talking with one of the sellers on Aliexpress I got them to re-program their dual Mini HDMI controllers for this display - the controller can be had together with the required eDP cable (here on Aliexpress for 30.07 Euro). It all plugs together nicely and works out of the box. You will just need to get yourself some Mini HDMI to HDMI adapters and build a case. All in all a very worthwhile project. Have fun!

Add icons to Jetpack Social Menu

I have been looking around the net quite a lot to find a solution on how to add icons to the "social menu" in WordPress:

At first I thought this menu was a feature of the theme I am using, Independent Publisher 2. I only found a Github repo for the version 1 theme - tried all the hacks there - and finally found out that version 2 was bought by WordPress.com and customized. So all the hacks available for version 1 did not even work. Bummer.

I really wanted to finally have icons for Mastodon, Hackster, Keybase or the RSS feed - so I looked into the file system - and lo and behold, I found the path which actually does all the "heavy lifting":

wp-content/plugins/jetpack/modules/theme-tools/social-menu

Turns out, this menu is actually generated by the WordPress Jetpack plugin and its Social Menu module.

Adding to its icon library is very simple (even though not documented...):

  • Look up SVG icons, maybe from a free website like https://simpleicons.org/
  • Download the SVG file and open it in Notepad or another editor; it will look something like this:
<svg role="img" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"><title>Mastodon</title><path d="M23.268 5.313c-.35-2.578-2.617-4.61-5.304-[...]2.96-1.498 1.13 0 2.043.395 2.74 1.164.675.77 1.012 1.81 1.012 3.12z"/></svg>
  • You will need to change the svg element to a symbol element, set a name/id, and remove the role attribute, the xmlns attribute and the title. It will then look like this:
<symbol id="icon-mastodon" viewBox="0 0 24 24"><path d="M23.268 5.313c-.35-2.578-2.617-4.61-5.304-5.004C17.51.242 15.792 0 11.813 0h-.03c-3.98 0-[...]2 1.81 1.012 3.12z"/></symbol>
  • Add this new object into the social-menu.svg before the closing </defs></svg> tag and save the file
  • Open icon-functions.php and add some entries to the $social_links_icons array. The mapping is basically URL path/match => icon ID in social-menu.svg. So to add e.g. the Keybase.io, Mastodon (on chaos.social) and Hackster.io icons I added:
			'keybase.io'       => 'keybase',
			'chaos.social'       => 'mastodon',
			'hackster.io'       => 'hackster',
		);
  • Save and close the file. If you now add a new custom element/external link to your social bar, e.g. containing keybase.io in the URL, it will show up with the newly added Keybase icon.

USB C power for the Nvidia Jetson Nano 4 GB dev board

The best way to power a Jetson Nano 4 GB dev board is a center-positive barrel connector power adapter delivering 5 V at at least 4 A. However, these are often bulky and not the best travel companions - while USB C power bricks are becoming more common and the relevant USB C sockets are getting built into nearly every device (maybe yours too, Apple?).

So I set out to build a USB C power adapter for the Jetson board.

Using an inexpensive USB C "trigger" board combined with two 5 V @ 3 A step-down converters, this actually worked.

The trick is setting the USB C trigger to request 20 V, using the 5 V converters in parallel to step the 20 V down to 5 V, and then feeding the resulting output - again in parallel - into the barrel plug, like so.
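Rough numbers to show why this works: the Jetson wants at most about 5 V x 4 A = 20 W at the barrel jack. Drawn from the 20 V PD rail, that is only around 1 A plus conversion losses, which almost any USB C power brick can deliver - and the two paralleled 5 V / 3 A converters offer up to 30 W of output headroom.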

For the curious among you now asking why I did not just set the trigger to 5 V and use it on its own: I tried this first, but it did not work. It was not able to provide enough current to support operation of the Jetson in "MAXN mode" - it constantly came up with overcurrent protection messages when pushed too hard.

I am happy with the result and shortened the wires after testing, putting everything into a neat small form factor.

With this change I can finally replace my old Jetson Nano power supply with something smaller than this chunky unit which I was gifted back in the day by the awesome Morlac :).

An active GNSS antenna for the CAM-M8Q breakout

I have been using multiple CAM-M8Q breakouts by Watterott and really love these units. They are small, reasonably priced and have the advantage of an integrated chip antenna. However, this is also their small shortcoming: while the antenna is good enough for most outdoor jobs, you can run into sensitivity issues when deploying it indoors - if not set up next to a window. Luckily, the module has two additional u.fl connectors for RFin and RFout, meaning you can use an external antenna.

To accomplish this, you just need to move the resistor from R3 to position R1 - as outlined in the schematics:

Overview of the CAM-M8Q, copyright Watterott

With this, you could attach a passive antenna, but an active one will not work, as no power is supplied by the module. However, you can add this power insert yourself with an inductor and a capacitor.

I did this with some SMD components, but instead of adding the insert "behind" the u.fl connector, I placed it between the two pads of the R1 jumper position - so it becomes part of the module.

This worked perfectly and the reception is great.

As an antenna I am using the Navilock NL-202AA - I have not received any Galileo signal (even though it should be possible), but other than that I am very happy with the solution.

Thanks again to Mr. Watterott for pointing me to this StackExchange post, which contained the solution for the power insert.

labSentinel 2

About a year ago, I wrote the labSentinel project for my Nvidia Jetson AI Specialist certification. The basic idea of the project is to supervise old lab equipment which does not possess any kind of log output or interface other than a graphical user interface running on a Windows 3.11 / 95 / NT - maybe even XP - system. I solved this by attaching a video grabber to a Jetson Nano and grabbing the screen output of the experiment computer "out-of-band". I then trained good and bad system states with Nvidia's inference tools and finally got the system to report via MQTT as soon as something went wrong. (As a "test system" I designed a flashy GUI application to mimic the old interfaces - specifically a lab power supply with multiple outputs - with the ability to simulate errors.) (https://developer.nvidia.com/embedded/community/jetson-projects#labsentinel / https://github.com/nmaas87/labsentinel)

While the project did work, there was still a lot left to be desired:

  • The system did capture the complete screen in full size. Running inference on a 1024x768 or even higher resolution picture is not efficient and has a high failure rate.
  • Training, testing and improving the model was time consuming and did not yield the precision and results I was hoping for.
  • The system could differentiate between "good" and "error" states - however, if an error occurred, I would have loved to get more information by "reading the GUI" and its output. For example, in the lab power supply use case, getting the specific voltages of the different lines to see which line failed or what is wrong - maybe even with the possibility to cross-check whether the detected error is an error in the first place.
  • While the Nvidia Jetson Nano Development Board is an awesome tool for development, it is not hardened enough for / suited to a lab or even factory floor environment.

These were all points I wanted to address, but as time was lacking, I did not take up the project again - until, at the start of this year, Advantech and Edge Impulse launched their Advantech Edge AI Challenge 2022. They wanted to know about specific use cases and how to solve them with factory-hardened Jetson products (e.g. Advantech's AIR-020 series) and Edge Impulse Studio.

Well, that reminded me of the first labSentinel - and I thought I'd give it a try. As luck would have it, I was one of the two lucky people picked to realize their project. Advantech sent me one of their AIR-020X boards (review is here :)) and I was good to go:

Let me introduce you to labSentinel 2:

Built from the ground up, it solves the above-mentioned issues:

  • The actual GUI window is found and extracted from the full-size desktop screenshots via OpenCV - and resized to 320x320 pixels to neatly fit the inference model
  • All model training, testing and optimization is done with Edge Impulse, which makes handling a breeze
  • If an error is detected, an included OCR module using Tesseract can extract text from predesignated / labeled areas of the non-resized GUI and send this information along with the MQTT alert (see the sketch after this list)
  • The AIR-020X board is more than robust enough for all normal lab and factory floors
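To make the pipeline a bit more tangible, here is a rough sketch of the individual steps using command-line equivalents - the actual project does this in Python, and the crop coordinates, broker name and topic are made up for illustration:

# 1. cut the detected GUI window out of a full-desktop grab (the project does this with OpenCV)
ffmpeg -i desktop.png -vf "crop=640:480:100:80" gui.png
# ... and produce the 320x320 version for the inference model
ffmpeg -i gui.png -vf "scale=320:320" gui_320.png
# 2. on an "error" classification, OCR a labeled region of the un-resized window
convert gui.png -crop 200x40+300+120 +repage line1_voltage.png
tesseract line1_voltage.png stdout
# 3. publish the alert plus the extracted text via MQTT
mosquitto_pub -h broker.local -t lab/alerts -m '{"state":"error","line1":"0.02 V"}'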

All source code is freely available with a demo project and documentation on Github ( https://github.com/nmaas87/labSentinel2 ), along with a video tutorial on how to use it ( https://youtu.be/KEN_HT20exs ).

Thanks again to Gary Lin (Advantech) as well as Louis Moreau and David Tischler (Edge Impulse) for their support :)!

Update: I added a Review to the Advantech AIR-020X and got balenaOS working on it.

WD My Cloud Mirror Gen2 with Debian 11 and Linux Kernel 5.15 LTS

Intro

Since 2017 I have been using a Western Digital My Cloud Mirror Gen 2 which I bought on Amazon's Black Friday (or similar) - because the included 2x 8 TB WD Red drives were even cheaper with the NAS than standalone. Using the NAS has been quite OK; the included Docker engine and Plex support in particular were nice to have - the backdoor included in older versions, not so much. Recently WD replaced the old My Cloud OS 3 with their new "My Cloud OS 5" - and made things worse for a lot of people. As I wanted no more surprises - and more control over my hardware - I decided to finally go down the road of getting Debian 11 with an LTS (5.15) kernel running on the hardware. This is how it went.

Warning

Warning: these are just my notes on how to convert a My Cloud OS 3 / My Cloud Mirror Gen 2 device to a "real" Debian system. You will need to take your device fully apart, solder wires and lose the warranty. Additionally, you will lose all your data and may even brick the hardware if something goes wrong. I take no responsibility whatsoever, nor can I give support. You're on your own now.

Step 0: Get Serial Console Access

Without a serial console, you will not be able to do anything here. You will need to completely disassemble the NAS and will lose all warranty. The bare motherboard will look like this. On the far right side you will see the pins for the UART interface you will need to solder to.

When you're done with that, connect your 3v3 TTL UART USB device like this:

... and connect to it at 115200 baud with PuTTY, TeraTerm Pro or any other terminal software (do not connect the 3v3 pin :)). It would be wise to start without the hard drives installed.

Step 1: Flashing U-Boot

The current U-Boot on the NAS is flawed, so you need to replace it. I will quote CyberPK here, who did an awesome job explaining everything:

We have to prepare a USB drive formatted as FAT32, extract the U-Boot from the link onto it and connect it to USB port #2.

Connect the device to the serial adapter, power on the device and keep pressing '1' (one) during boot until you can see the 'Marvell>>' command prompt
press ctrl+c
then

We will start here to change stuff and break stuff. But if I could give you one tip before you start: please execute printenv once. Copy and paste all env variables and everything U-Boot spits out. It could save your hardware one day. Thanks, Nico out!

usb start
bubt u-boot-a38x-GrandTeton_2014T3_PQ-nand.bin nand usb
reset

This will reboot the device. Access the command prompt again and add the following environment variables, a modified version of the ones provided by bodhi in this post:

setenv set_bootargs_stock 'setenv bootargs root=/dev/ram console=ttyS0,115200'

setenv bootcmd_stock 'echo Booting from stock ... ; run set_bootargs_stock; printenv bootargs; nand read.e 0xa00000 0x500000 0x500000;nand read.e 0xf00000 0xa00000 0x500000;bootm 0xa00000 0xf00000'

setenv bootdev 'usb'

setenv device '0:1'

setenv load_image_addr '0x02000020'

setenv load_initrd_addr '0x2900000'

setenv load_image 'echo loading Image ...; fatload $bootdev $device $load_image_addr /boot/uImage'

setenv load_initrd 'echo loading uInitrd ...; fatload $bootdev $device $load_initrd_addr /boot/uInitrd'

setenv usb_set_bootargs 'setenv bootargs "console=ttyS0,115200 root=LABEL=rootfs rootdelay=10 $mtdparts earlyprintk=serial init=/bin/systemd"'

setenv bootcmd_usb 'echo Booting from USB ...; usb start; run usb_set_bootargs; if run load_image; then if run load_initrd; then bootm $load_image_addr $load_initrd_addr; else bootm $load_image_addr; fi; fi; usb stop'

setenv bootcmd 'setenv fdt_skip_update yes; setenv usbActive 0; run bootcmd_usb; setenv usbActive 1; run bootcmd_usb; setenv fdt_skip_update no; run bootcmd_stock; reset'

saveenv

reset

(I also modified this code to use fatload instead of ext2load.)

With this, our NAS is ready.

Step 2: Build a kernel and rootfs

  • On your current Linux machine, get yourself a copy / git clone of Heisath's wdmc2-kernel repo
  • Get all dependencies installed according to the repo; I did this on a Debian 11 machine
  • Replace the file content of wdmc2-kernel/dts/armada-375-wdmc-gen2.dts with the content of the real and improved dts for the WDMCMG2 (original from this link, copy available here) - but keep the file name armada-375-wdmc-gen2.dts
  • Replace the file content of wdmc2-kernel/config/linux-5.15.y.config with the file from here (please know this config ain't perfect, but it will get you running. You can always file a PR and help me out ;))
  • Start the build process in wdmc2-kernel with ./build.sh
  • Mark: Linux Kernel, Clean Kernel sources, Debian Rootfs, Enable ZRAM on rootfs
  • Kernel -> Kernel 5.15 LTS
  • Build initramfs -> Yes
  • Debian -> Bullseye
  • Fstab -> usb
  • Rootpw -> whateverYouWant
  • Hostname -> whateverYouWant
  • Locales -> no changes, accept (or whatever you want)
  • Default locale for system -> en_US.UTF-8 (or whatever you want)
  • Tzdata -> Your region
  • Now your kernel and rootfs will be built

While this is ongoing, prepare a nice USB 2.0 or USB 3.0 stick with the following layout (one way to do this with parted is sketched right after the list):

  • partition table: msdos
  • 1st partition: 192 MB, FAT32, label set to boot, boot flag enabled
  • 2nd partition: rest, ext4, label set to rootfs
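For example (assuming the stick shows up as /dev/sda - double-check with lsblk before running anything destructive):

parted -s /dev/sda mklabel msdos
parted -s /dev/sda mkpart primary fat32 1MiB 193MiB
parted -s /dev/sda set 1 boot on
parted -s /dev/sda mkpart primary ext4 193MiB 100%
mkfs.vfat -F 32 -n boot /dev/sda1
mkfs.ext4 -L rootfs /dev/sda2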

When the kernel is done compiling and your USB stick is ready, copy over all the files (sda is the name of my USB stick):

  • mkdir /mnt/boot /mnt/root
  • mount /dev/sda1 /mnt/boot
  • mount /dev/sda2 /mnt/root
  • mkdir /mnt/boot/boot
  • cp wdmc2-kernel/output/boot/uImage-5.15.* /mnt/boot/boot/uImage
  • cp wdmc2-kernel/output/boot/uRamdisk /mnt/boot/boot/uInitrd
  • tar -xvzf wdmc2-kernel/output/bullseye-rootfs.tar.gz --directory=/mnt/root/
  • rm -rf /mnt/root/etc/fstab
  • cp /mnt/root/etc/fstab.usb /mnt/root/etc/fstab
    // within /mnt/root/etc/fstab:
    // change all /dev/sdb to /dev/sdc if both drive slots on the NAS are used <- this!
    // change all /dev/sdb to /dev/sda if no drives are installed in the NAS
  • umount /mnt/boot /mnt/root

Step 3: First boot and getting things running

Insert the USB stick into USB port #2 of the NAS. Leave the drives out for now, boot it up for the first time and watch it via the serial terminal. Log in at the end with root and your chosen password.

If it boots, you can shut it down again with shutdown -P now, unplug power, insert the drives and reboot.

First thing after the first boot with the drives installed: build your own initramfs / ramdisk from your current setup:

  • cd /root/
  • ./build_initramfs.sh
  • cp initramfs/uRamdisk /boot/boot/uInitrd

Second, install MDADM for the RAID:

  • apt update
  • apt install mdadm
  • mkdir /mnt/HD
  • edit your /etc/fstab and add a mount point for your md RAID. I reused the old drives with my old data on them like this (the device depends on which mdX the array comes up as - see the sketch below):
/dev/md0        /mnt/HD         ext4    defaults,noatime,nodiratime,commit=600,errors=remount-ro        0       1
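If the array does not come up on its own, something along these lines should assemble it and tell you which mdX name it got (adjust to your setup):

mdadm --assemble --scan                            # assemble any arrays found on the drives
cat /proc/mdstat                                   # shows the array and its /dev/mdX name
mdadm --detail --scan >> /etc/mdadm/mdadm.conf     # persist the array definition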
 

A lot of good knowledge about Ramdrives can be found here.

I would advise doing steps 1 (Folder2RAM), 2 (Kernel Options) and 4 (Logrotate) - option 3 did not work out for me.

To get the drives to sleep at some point, we need to reconfigure mdadm:

dpkg-reconfigure mdadm
// monthly check: ok
// daily degradation check: ok
// monitoring: disable

... and get hdparm working

apt install hdparm hd-idle

# hdparm config
# add the following to /etc/hdparm.conf:

/dev/sda {
#        apm = 127
#        acoustic_management = 127
        spindown_time = 120
#       spindown_time = 4
        write_cache = on
}

/dev/sdb {
#        apm = 127
#        acoustic_management = 127
        spindown_time = 120
#       spindown_time = 4
        write_cache = on
}

# Spindown Time means: 120 * 5 sec = 600 sec / 60 sec = 10 min
# apply it after saving the file with:
/usr/lib/pm-utils/power.d/95hdparm-apm resume

We can check the status of the drives with smartctl

smartctl -i -n standby /dev/sda
smartctl -i -n standby /dev/sdb

To get fan control working

apt install wget
wget -O mcm-fancontrol-master.tar.gz https://github.com/nmaas87/mcm-fancontrol/archive/refs/heads/master.tar.gz
tar -xvzf mcm-fancontrol-master.tar.gz
cd mcm-fancontrol-master/
cp fan-daemon.py /usr/sbin/
chmod +x /usr/sbin/fan-daemon.py
cp fan-daemon.service /etc/systemd/system/
systemctl enable fan-daemon
systemctl start fan-daemon

(You can change low temp and high temp in /usr/sbin/fan-daemon.py to get the fan to kick in later, and also set DEBUG = True if you want to see some details in systemctl status fan-daemon.)

mtd-utils can be useful, just mentioning them here:

apt install -y mtd-utils
cat /proc/cmdline
cat /proc/mtd

Samba ...

apt install samba --no-install-recommends
# change /etc/samba/smb.conf to your liking and setup your SMB

Plex ...

# Plex 
apt update
apt install apt-transport-https ca-certificates curl gnupg2
curl https://downloads.plex.tv/plex-keys/PlexSign.key | apt-key add -
echo deb https://downloads.plex.tv/repo/deb public main | tee /etc/apt/sources.list.d/plexmediaserver.list
apt update
apt install plexmediaserver
systemctl status plexmediaserver

Well, that's it.

Thanks a lot to all awesome contributors in the net:

Companion repo with files: https://github.com/nmaas87/WDMCMG2

[Win10] Long Path / Filenames

Having to discuss this topic in 2022 is something I would not have dreamed of - but still, we're here to address the elephant in the room: yes, Windows 10 does support long path names - no, it does not enable them by default.

You need to enable it via Group Policy (e.g. through AD) or the registry.

  • launch regedit with admin rights
  • navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem
  • add a new DWORD (32-bit) named LongPathsEnabled with value 1
  • reboot

Alternatively, you can achieve the same result by enabling the corresponding setting in Group Policy (Computer Configuration > Administrative Templates > System > Filesystem > Enable NTFS long path).

However, this will only help with Explorer and "new" applications; some old apps can still suffer from issues (and we're not even going to talk about potential WSL/WSL2/Docker issues when mounting paths...).

Another interesting task is finding files / paths which are a little over the limit. There is a nice tool called TLPD for this - HOWEVER (warning), I need to highlight that only version 4.6 is considered OK ( https://sourceforge.net/projects/tlpd/files/v4.6/ ). The latest version 4.6.0.1 is infected with some kind of Trojan - about half of all scanners on VirusTotal confirm this. So if you want to use this tool, please only download the 4.6 version - and for good measure scan it before use, just to be sure nobody plays around with the files in the future...

Taming the RAK5146 / SX1303

It's been a while - but good things come to those who wait ;).

Trying to work out a new system you're unfamiliar with can be quite a challenge. In my case I got my first LoRaWAN concentrator along with some CubeCell HTCC-AB01 boards and tried to get them to work. It turned out to be quite hit and miss. On the one hand, the firmware support for the RAK5146 with USB, GPS and LBT was not really ready yet - on the other hand, the CubeCell Arduino code has a fatal flaw with the preamble size, so those boards cannot join a LoRa network if used in the EU868 band (the perfect fix by 1rabbit is linked as well!).

In the end, as I wanted to get this working as well as possible, I bought myself the RAK2287 Pi Hat and started modifying it. I was quite sure that the I2C signals should be available somewhere on that board - as well as 5V and 3v3, along with the raw PPS signal of the GPS module within the RAK5146. I was right and could bridge the PPS signal to an unused RPi GPIO pin.

PPS Pin bridged to GPIO 04
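With the PPS line bridged to GPIO 4 (BCM numbering - an assumption based on the caption above), the kernel PPS support on the RPi can be enabled and verified roughly like this:

echo "dtoverlay=pps-gpio,gpiopin=4" | sudo tee -a /boot/config.txt
sudo reboot
sudo apt install pps-tools
sudo ppstest /dev/pps0        # should report one assert per second once the GPS has a fix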

Using the I2C signal lines, ground and 3v3, I added an I2C sensor interface (call it an ugly QWIIC connector ;)).

PPS hack and "poor man's QWIIC connector" added

I installed the latest UDP Packet Forwarder package by Xose - and everything has been working perfectly since.

I even added brocaar's Packet Multiplexer and started running a local ChirpStack instance on my home server. Now my sensors feed their data directly into my local InfluxDBv2 and Grafana - but at the same time my gateway is still available for TTNv3 users to receive their data. It's awesome, and with that I even receive my private data during WAN outages. Nice!

As an added "bonus", my gpsTime project is running on the same RPi, using the GPS time of the RAK5146 and its PPS signal to act as an extremely precise GPS timeserver in my network - and an additional BME280 serves as the "room sensor", because adding another battery-operated device is really not needed if you already have more than enough CPU power (in the form of the RPi ;)). The whole device fits behind the TV.
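For the timeserver part, the gist is feeding both the GPS time and the PPS signal into chrony - a minimal sketch, assuming gpsd publishes the time via shared memory and the PPS device from above is /dev/pps0 (offsets are illustrative):

# additions to /etc/chrony/chrony.conf
refclock SHM 0 refid GPS precision 1e-1 offset 0.2 delay 0.2
refclock PPS /dev/pps0 refid PPS lock GPS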

"Not Great, Not Terrible"

All in all, the project was very successful. I am working on some new ideas regarding the sensors, but this is pending my KiCad 6 skills and deliveries of new RAK hardware currently on the way ;).

I still keep all the info in the balena forums, so head over if you want more details.

LoRaWAN with RAKwireless RAK5146: The SX1303 in the field, a rocky start

Setting the scene

Working with the guys over at balena comes with some perks - including the peer pressure of getting into new technologies and trying them out. My go-to person this time was balena Developer Advocate Marc Pous - who is not only no stranger to soldering - but also deeply rooted in the LoRaWAN field (and he is also doing balenaHub projects for LoRaWAN applications 1, 2).

I heard about LoRaWAN a while back, when I was working with Sigfox - but never really gave it a try, other than trying to run a base station for a certain satellite using LoRa for its communication.

This changed through Marc's constant LoRa presentations and work - I just gave in and wanted to try it. I already had three projects at hand: one was getting some BME280s into a network (because using WiFi with ESP8266s did not work anymore, thanks to the 2.4 GHz spectrum hell induced by too many neighbours with too many WiFi APs cranked to 11). This project would be the "first hello world", possibly leading to being allowed to deploy a network in my company to take care of our sensors - and lastly, strapping LoRa technology to a rocket and seeing whether we can manage the roughly 250 km+ downrange.

But things start small, so I wanted to get into the field with best-in-class technology and the latest available products (especially with ultra-long-range communication like in my last use case in mind). Luckily, the new Semtech SX1303 came to market 8 months ago and RAKwireless decided to build their first concentrator based on it, the RAKwireless WisLink RAK5146 - and I wanted to be an "early adopter".

Wrong expectations

When getting into a new product, I am used to finding a certain amount of good documentation, software and support. I must admit that I was really spoiled by my latest endeavors using Paul Stoffregen's excellent Teensy series (loyal customer since the Teensy 2.0 in 2010 ;)), some new STM32s and also NVIDIA's Jetson lineup.

So I went into this whole project with wrong expectations, as RAKwireless had made a ton of useful stuff available already for their older systems like the RAK2287 - and I thought everything would be in place for the new system as well, which was wrong.

I started the project more than a month ago, and at that time not even real firmware was available. The only project I found was this GitHub repo - which would not even install for my RAK5146 USB because someone forgot the chmod +x on the install script. That gave a bad first impression of how well tested this official project was - and the impression was confirmed: I had to figure out on my own how to get it working with TTN, because the important step (after the installation) of changing the server address in the configuration file was not included in the setup instructions, so my gateway never connected.

Also, no RPi Hat was available - which led me to look for my own solution, which I found in a USB WWAN adapter card stacked onto the RPi.

One last thing that was confusing, especially when trying to figure out how to connect everything, was the pinout and block diagram. Keep in mind, there are 6 different configurations of the RAK5146:

  • SPI (always without LBT)
    • with GPS
    • without GPS
  • USB
    • with LBT
      • with GPS
      • without GPS
    • without LBT
      • with GPS
      • without GPS

Both the block diagram and the pinout for all these variants are handled in a single graphic. While the block diagram is manageable, the pinout is just confusing. Which pins are actually active on the card in the USB version with LBT and GPS? What is going on with the SPI pins? Are they not routed at all?

I tried to use my new TTNv3 gateway with Heltec CubeCells - and while I got it working, data transmission was unreliable, with huge packet loss.

I must admit, I went into this project with wrong expectations - because I saw the RAK5146 freely available in the RAKwireless shop, I assumed it would be another product like the RAK2287 - but that was wrong. It seems to be a product that is primarily marketed towards big OEM customers - and not at hackers at heart.

Moving forwards

To try to get these issues resolved, I wrote a bunch of PRs and issues on the GitHub repo and documented my findings on the balena forums. Luckily, those comments did not fall on deaf ears and the situation improved:

RAKwireless worked on enhancing the documentation of the RAK5146 - but sadly the block diagram and pinout are still in the same state. Also, no quickstart guide was added.

Thanks to Taylor, a firmware package has now been made available, which can be downloaded from the RAK5146 page.

Documentation on using it is not available yet, so please stick to the RAK2287 quickstart guide. You can set it up as "RAK Gateway LoRa Concentrator" for TTN as shown there - but you need to edit the packet-forwarder config afterwards (see the main menu of gateway-config) and replace "server_address": "router.us.thethings.network" with your The Things Network router (e.g. eu1.cloud.thethings.network for TTNv3 in Europe). You should then restart the gateway / packet forwarder. When it comes to adding it to the TTNv3 stack, the quickstart guide is a bit outdated. You can still find out the EUI of your gateway using gateway-version and use this to add it to TTNv3, but please be careful not to choose the frequency plan "Europe 863-870 MHz (SF9 for RX2 - recommended)" - take the RX2 SF12 option instead to improve the signal quality.
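In practice the server change boils down to a one-line edit in the packet forwarder's configuration plus a restart. The path and service name below are assumptions based on the RAK2287-style install - check where your installer actually put the config:

# path and service name are assumptions - adjust to your installation
sudo sed -i 's/router.us.thethings.network/eu1.cloud.thethings.network/' /opt/ttn-gateway/packet_forwarder/lora_pkt_fwd/global_conf.json
sudo systemctl restart ttn-gateway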

I still have problems with bad performance - and the helpful Jose Marcelino decided to send me an RPi Hat to replace my contraption, which should improve quality. However, after waiting a month I still have not received the Hat and could therefore not test it yet.

Xose Pérez also tried to help, and I am grateful to both of them for trying to resolve the situation and get the RAK5146 working in the hands of power users as well.

After all, it looks like an extremely capable platform - and I would really like to see it performing accordingly.

What is next?

Even though the start was a bit rocky, I want to continue working on this project - after all, my room sensors need some LoRa, as do my other projects - and once these issues are resolved, the RAK5146 looks like a good offering. I will report back on my blog with updates - and wait for the RPi Hat, which will hopefully resolve the issues - or help me pin down the problem, if it lies somewhere else.