RT-Thread IoT-Contest 2023 with NXP i.MX RT1060 - Getting started

The folks over at RT-Thread decided to run an IoT contest using hardware from different vendors and their RT-Thread OS. I was lucky enough to get chosen for a networking project with the NXP i.MX RT1060 EVKB, so I decided to document my journey, starting with getting RT-Thread up and running. Here we go! 🙂

Getting started

First of all, it's important to know that there are two versions of the MIMXRT1060 board - the EVK and the EVKB. The EVK seems to be an older version which also includes a camera sensor module, while the EVKB is the more recent one. RT-Thread's example on Github targets the EVK version, which means things like blinking the user LED do not work out of the box, but we will take care of this later.

NXP i.MX RT1060 EVKB files and documentation

You'll need to create a free user account on NXP's website to download the user manual and schematic, otherwise the last two links will not work.

RT-Thread Setup

  • Main RT-Thread Github Page for the NXP i.MX RT1060: https://github.com/RT-Thread/rt-thread/tree/master/bsp/imxrt/imxrt1060-nxp-evk
  • If not yet done, download and install Git
  • RT-Thread Studio: https://www.rt-thread.io/studio.html
    • Just download and install (e.g. D:\RT-ThreadStudio)
  • envTool: https://github.com/RT-Thread/env-windows/releases
    • Pick the latest env-windows-v*.7z and download
    • Unzip the included folder (e.g. D:\RT-ThreadStudioEnv\env-windows-v1.3.5)
    • Start env.exe
    • Click on the Hamburger Menu, Settings, Integration
    • Click on "Register" and "Save settings", close the settings menu and the envTool
    • More detailed info for the envTool can be found: https://github.com/RT-Thread/rt-thread/blob/master/documentation/env/env.md
  • RT-Thread Github Repo: https://github.com/RT-Thread/rt-thread
    • Clone the repo ( git clone https://github.com/RT-Thread/rt-thread.git ) to your system, (e.g. to D:\RT-ThreadGithub)
  • First project
    • Start RT-Thread Studio
    • File -> Import -> RT-Thread Bsp Project into Workspace
    • Bsp Location (within the Github Repo): D:\RT-ThreadGithub\rt-thread\bsp\imxrt\imxrt1060-nxp-evk
    • Project Name: whatever you want, I chose blinky
    • Chip Name: MIMXRT1060
    • Debugger: DAP-LINK
    • click finish
    • This will lead to an error, pointing you to the workspace folder (e.g. mine is D:\RT-ThreadStudio\workspace.metadata) where a .log file resides. Open it up and scroll to the end. At this point in development there seems to be a problem with the initial project generation, shown by this error: `!MESSAGE D:\RT-ThreadGithub\rt-thread\bsp\imxrt\imxrt1060-nxp-evk>scons --dist-ide --project-path=D:\RT-ThreadStudio\workspace/blinky --project-name=blinky`. To fix this, navigate with the Windows Explorer to the folder `D:\RT-ThreadGithub\rt-thread\bsp\imxrt\imxrt1060-nxp-evk`. Within the folder, hold Shift, right-click and choose `ConEmu Here` - the envTool will open up. Just copy and paste the complete scons command (`scons --dist-ide --project-path=D:\RT-ThreadStudio\workspace/blinky --project-name=blinky`) into the envTool window and press Enter. It should now run through successfully:
    user@system D:\RT-ThreadGithub\rt-thread\bsp\imxrt\imxrt1060-nxp-evk
    > scons --dist-ide --project-path=D:\RT-ThreadStudio\workspace/blinky --project-name=blinky
    scons: Reading SConscript files ...
    Newlib version: 4.1.0
    make distribution....
    => imxrt1060-nxp-evk
    => start dist handle
    => copy imxrt bsp library
    => copy bsp drivers
    => copy bsp peripherals
    => components
    => include
    => libcpu
    => src
    => tools
    Update configuration files...
    suggest to use command scons --dist [--target=xxx] [--project-name=xxx] [--project-path=xxx]
    dist project successfully!
After this step, click Finish again in the still-open import dialog in RT-Thread Studio; it should now work and generate the new project.
  • In the Project Explorer, navigate through the project name to applications\ and open the main.c file.
  • Replace

    /* defined the LED pin: GPIO1_IO9 */
    #define LED0_PIN               GET_PIN(1, 9)
    
    int main(void)
    {
    #ifndef PHY_USING_KSZ8081
        /* set LED0 pin mode to output */
        rt_pin_mode(LED0_PIN, PIN_MODE_OUTPUT);
    
        while (1)
        {
            rt_pin_write(LED0_PIN, PIN_HIGH);
            rt_thread_mdelay(500);
            rt_pin_write(LED0_PIN, PIN_LOW);
            rt_thread_mdelay(500);
        }
    #endif
    }

    with

    /* defined the LED pin: GPIO1_IO8 */
    #define LED0_PIN               GET_PIN(1, 8)
    
    int main(void)
    {
    #ifndef PHY_USING_KSZ8081
        /* set LED0 pin mode to output */
        rt_pin_mode(LED0_PIN, PIN_MODE_OUTPUT);
    
        while (1)
        {
            rt_pin_write(LED0_PIN, PIN_HIGH);
            rt_thread_mdelay(500);
            rt_pin_write(LED0_PIN, PIN_LOW);
            rt_thread_mdelay(500);
        }
    #endif
    }

    and save the file. The user LED on the EVKB board is not on IO pin 9, but on pin 8. Also, this pin is shared with the Ethernet controller - so if we enable Ethernet later, the user LED will not work anymore.

  • Click on the hammer icon ("Build 'Debug'") and it should compile the new software.
  • Click on the downward green arrow ("Flash Download") to download the program to the hardware board. The User LED should now be flashing.
  • There can be multiple issues during download:
    • A window with "J-Link Emulator selection" pops up and asks for a connection method. This means that RT-Thread Studio is trying to program via a Segger J-Link, which is the wrong flash tool for the EVKB. If this comes up, click No on the J-Link screen, then use the little black arrow attached to the Flash Download icon to check that "DAP-LINK" is selected. Afterwards, try downloading again.
    • "pyocd.core.exceptions.TargetSupportError: Target type 'mimxrt1060' not recognized." If this error arises it can mean two things:
      • You did not enter the chip name correctly. Check that the target in the error message is really mimxrt1060 and contains no spelling mistakes. If it does, go to the cogwheel icon ("Debug Configuration"), open the Debugger tab and correct the chip name in the Device name field. Click OK to save and try again.
      • Scroll up through the error log and you might see the path of the PyOCD software, e.g. RealThread\PyOCD\0.1.3. This means you're running the default PyOCD 0.1.3, which has some bugs that prevent downloading to flash. Directly next to the "Flash Download" icon is the "SDK Manager"; open it up and scroll down to "Debugger_Support_Packages", "PyOCD". Choose the latest version (e.g. 0.2.0) and click "Install packages". You can then select the old version(s) you have installed and click "Delete packages". Afterwards close the SDK Manager. This should fix the issue.
  • If there are no issues with the download, you can also "Open a Terminal" (computer screen icon next to "Flash Download") and start it with the correct settings (e.g. 115200 baud and the correct serial port; these should be chosen automatically if you already flashed a program before). You should see the RT-Thread msh console running on your EVKB and be able to send "help" to get an overview of the device:

    
    \ | /
    - RT -     Thread Operating System
    / | \     5.0.1 build May 28 2023 14:25:59
    2006 - 2022 Copyright by RT-Thread team
    msh >help
    RT-Thread shell commands:
    clear            - clear the terminal screen
    version          - show RT-Thread version information
    list             - list objects
    help             - RT-Thread shell help.
    ps               - List threads in the system.
    free             - Show the memory usage in the system.
    pin              - pin [option]
    reboot           - reset system
    
    msh >
  • To get a little bit further into a project, I replaced the pin definitions and main() in main.c with the following code:

    // wrong definition, GPIO1_IO9 is ethernet leds on EVKB
    // #define LED0_PIN               GET_PIN(1, 9)
    
    // D8 (GPIO01-08) is the user led
    #define LED0_PIN               GET_PIN(1, 8)
    // SW5 (GPIO5-00) is the user button
    #define SW5_PIN                GET_PIN(5, 0)
    
    int main(void)
    {
    #ifndef PHY_USING_KSZ8081
        // set LED0 pin mode to output
        rt_pin_mode(LED0_PIN, PIN_MODE_OUTPUT);
        // set SW5 pin mode to pullup
        rt_pin_mode(SW5_PIN, PIN_MODE_INPUT_PULLUP);
    
        while (1)
        {
            if (!rt_pin_read(SW5_PIN)) {
                rt_pin_write(LED0_PIN, PIN_HIGH);
            } else {
                rt_pin_write(LED0_PIN, PIN_LOW);
            }
            /*
            rt_pin_write(LED0_PIN, PIN_HIGH);
            rt_thread_mdelay(500);
            rt_pin_write(LED0_PIN, PIN_LOW);
            rt_thread_mdelay(500);
            */
        }
    #endif
    }

    This couples the LED to the state of the user switch (SW5): if it's pressed, the LED turns on; if it's not pressed, the LED stays off. Just save, re-compile and re-download.

Bugs

As you can see, some bugs were already found along the way:

  • RT-Thread Studio error upon trying to import a project
  • RT-Thread Studio failing to choose the correct debugger even though it was selected on creation/import of the project
  • pyocd error upon download of firmware to the NXP MCU due to old pyocd version shipped with RT-Thread Studio
  • The menuconfig tooling integrated as "RT-Thread Settings" within RT-Thread Studio having no effect on the project

I hope that these issues get resolved soon - but with the information above you should be able to get started. I will see you in the next post - probably walking through the project I made :).

balenaOS on the Advantech AIR-020X

Last week I posted the review of the Advantech AIR-020X, which I used to create the labSentinel 2 system.

I remarked that the hardware was great; however, the software support and update capability of the system was severely lacking for an "industrial floor, always on" type of machine. Luckily, that's exactly what balena was created for.

Even better, their environment already supports Nvidia Jetson devices - including Nvidia Jetson Xavier NX modules. Since the AIR-020X is essentially a really nice carrier board (and housing) for this module, I went to work.

Spoiler Alert - it works!

Installing balenaOS on the AIR-020X

1.) I set up an Ubuntu 20.04 LTS machine, installed npm and set up jetson-flash. A rough sketch of those setup steps is shown below.
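
For reference, the preparation on the Ubuntu machine looked roughly like this - a minimal sketch assuming balena's jetson-flash repository on Github and an npm-based install; package names and paths may differ on your setup:

# install git, node.js and npm (Ubuntu 20.04 LTS)
sudo apt update
sudo apt install -y git nodejs npm

# fetch balena's jetson-flash tool and install its dependencies
git clone https://github.com/balena-os/jetson-flash.git
cd jetson-flash
npm install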

2.) I went to https://www.balena.io/os and downloaded the latest NVIDIA JETSON XAVIER NX DEVKIT EMMC image (2.107.10) in the development version.

3.) Unzip the downloaded file and get your AIR-020X into recovery mode. This means opening the bottom of the case by unscrewing the 4 Phillips head screws, connecting the Micro USB port of the AIR-020X to your Ubuntu host computer and applying power to the AIR-020X - but do not press the power switch yet.

4.) There is a foil/recovery switch next to the Micro USB connector and LAN port. Press and hold this switch and at the same time press the power-on button of the unit for about 4 seconds.

5.) On Ubuntu, run lsusb | grep Nvidia - this should return a line similar to this:

Bus 003 Device 005: ID 0955:7023 NVIDIA Corp. APX

Important is the ending "APX", which means the unit is in recovery mode.

6.) Now you can start the flash process

user@balenaTest:~/jetson-flash$ ./bin/cmd.js -f ./jetson-xavier-nx-devkit-emmc-2.107.10-v14.4.4.img -m jetson-xavier-nx-devkit-emmc

The -f value points to the unzipped image file, while -m tells jetson-flash that we are dealing with a Xavier NX system and want to install balenaOS on the internal eMMC module.

7.) This will now start the process which will take some minutes and also ask you for your sudo password. At the end you should see something like this:

[ 255.8670 ] Flashing completed

[ 255.8670 ] Coldbooting the device
[ 255.8696 ] tegrarcm_v2 --ismb2
[ 255.9454 ]
[ 255.9502 ] tegradevflash_v2 --reboot coldboot
[ 255.9530 ] Bootloader version 01.00.0000
[ 255.9984 ]
*** The target t186ref has been flashed successfully. ***
Reset the board to boot from internal eMMC.

8.) As soon as you reboot the device, you will be greeted with the balenaOS logo and can use it as any other balenaOS device.

Adding the AIR-020X to a fleet

If you want to use it in a fleet, I would recommend creating a new fleet with the device type Nvidia Jetson Xavier. This is important for sample projects to work correctly: it's basically the same thing as the more specialized "jetson-xavier-nx-devkit-emmc" device type, but most demo projects only implement the former :).

To join the freshly installed device to your new fleet, download and install the balena CLI, log in to your balenaCloud account and run balena scan to find your AIR-020X on the network:

-
  host:          56e1ef3.local
  address:       192.168.178.112
  osVariant:     development
  dockerInfo:
    Containers:        1
    ContainersRunning: 1
    ContainersPaused:  0
    ContainersStopped: 0
    Images:            1
    Driver:            overlay2
    SystemTime:        2023-03-10T14:05:52.568438957Z
    KernelVersion:     4.9.253-l4t-r32.7
    OperatingSystem:   balenaOS 2.107.10
    Architecture:      aarch64
  dockerVersion:
    Version:    20.10.17
    ApiVersion: 1.41

After that, you can easily join this device with

.\balena join 192.168.178.112
? Select fleet <yourFleetNameToSelect>
? Check for updates every X minutes 10
[Success] Device successfully joined balena-cloud.com!

... and voilà, it's online!

What does work?

Testing GPIO pins with a multimeter

The AIR-020X has a lot of custom GPIO chips, 2x RS485/RS232 interfaces, 1x CANbus interface, a second network interface and even an NVMe SSD. Luckily, everything just works out of the box.

- HDMI works
- USB works
- onboard network card (dmesg + dhcp test, gets ip / works)
[   29.231807] eqos 2490000.ether_qos eth0: Link is Up - 1Gbps/Full - flow control rx/tx

- 2nd network card (dmesg + dhcp test, get ip / works)
[  104.307175] igb 0004:05:00.0 enP4p5s0: igb: enP4p5s0 NIC Link is Up 1000 Mbps Full Duplex, Flow 
- NVMe is recognized (lsblk)
nvme0n1      259:0    0 119.2G  0 disk
|-nvme0n1p1  259:1    0    96G  0 part
|-nvme0n1p2  259:2    0    64M  0 part
|-nvme0n1p3  259:3    0    64M  0 part
|-nvme0n1p4  259:4    0   448K  0 part
|-nvme0n1p5  259:5    0   448K  0 part
|-nvme0n1p6  259:6    0    63M  0 part
|-nvme0n1p7  259:7    0   512K  0 part
|-nvme0n1p8  259:8    0   256K  0 part
|-nvme0n1p9  259:9    0   256K  0 part
|-nvme0n1p10 259:10   0   300M  0 part
`-nvme0n1p11 259:11   0  22.8G  0 part
- CAN bus interface is auto-loaded on boot (see ifconfig -a; a quick bring-up sketch follows after this list)
can0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          NOARP  MTU:16  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:10
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
          Interrupt:63
- GPIO/DIO works, but bit3 sadly does not
( more info: http://ess-wiki.advantech.com.tw/view/File:AIR-020-nVidia_GPIO.docx )
GPIO bit      AIR-020X   AIR-020T   AIR-020N
GPIO bit1     393        269        38
GPIO bit2     421        425        149
GPIO bit3     265        411        65
GPIO bit4     424        264        168
GPIO bit5     418        476        202
GPIO bit6     436        396        246
GPIO bit7     417        337        169
GPIO bit8     268        338        194

# set bit 1 as GPIO pin
echo 393 > /sys/class/gpio/export
# get value 0=low, 1=high
cat /sys/class/gpio/gpio393/value
# set direction out or in
echo out > /sys/class/gpio/gpio393/direction
# get direction
cat /sys/class/gpio/gpio393/direction
out
# set value on out pin
echo 1 > /sys/class/gpio/gpio393/value

test:
# 265, bit3 did not work on export
echo 393 > /sys/class/gpio/export
echo 421 > /sys/class/gpio/export
echo 265 > /sys/class/gpio/export
echo 424 > /sys/class/gpio/export
echo 418 > /sys/class/gpio/export
echo 436 > /sys/class/gpio/export
echo 417 > /sys/class/gpio/export
echo 268 > /sys/class/gpio/export

echo out > /sys/class/gpio/gpio393/direction
echo out > /sys/class/gpio/gpio421/direction
echo out > /sys/class/gpio/gpio265/direction
echo out > /sys/class/gpio/gpio424/direction
echo out > /sys/class/gpio/gpio418/direction
echo out > /sys/class/gpio/gpio436/direction
echo out > /sys/class/gpio/gpio417/direction
echo out > /sys/class/gpio/gpio268/direction

echo 1 > /sys/class/gpio/gpio393/value
echo 1 > /sys/class/gpio/gpio421/value
echo 1 > /sys/class/gpio/gpio265/value
echo 1 > /sys/class/gpio/gpio424/value
echo 1 > /sys/class/gpio/gpio418/value
echo 1 > /sys/class/gpio/gpio436/value
echo 1 > /sys/class/gpio/gpio417/value
echo 1 > /sys/class/gpio/gpio268/value
- COM ports, running as RS-232 or RS-485 (not tested, but recognized; a simple loopback test sketch follows after this list)
( more info: http://ess-wiki.advantech.com.tw/view/AIR-020-RS-485 )
root@56e8bf3:/# ls /dev/ | grep ttyTH
ttyTHS0 <- COM1
ttyTHS1 <- COM2
ttyTHS4
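
Since the list above only notes that CAN and the COM ports are recognized, here is a minimal sketch of how one could actually exercise them from a shell - assuming can-utils is available in your container/host and that, for the COM test, TX and RX of COM1 (ttyTHS0) are wired together as a loopback; the bitrate and baud rate are just example values:

# CAN: configure can0 for 500 kbit/s, bring it up and send/receive a test frame
ip link set can0 type can bitrate 500000
ip link set can0 up
candump can0 &            # print received frames in the background
cansend can0 123#DEADBEEF # send a test frame with ID 0x123

# COM1 (ttyTHS0): simple loopback test (TX wired to RX)
stty -F /dev/ttyTHS0 115200 raw -echo
cat /dev/ttyTHS0 &        # print whatever comes back
echo "hello AIR-020X" > /dev/ttyTHS0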

More info about the hardware can be found in the Advantech Wiki.

GPU Demos

Last but not least I want to point you towards the nice balena Jetson tutorial which can be found here.

It will help you get started with the Jetson samples that are hosted here.

In the end I was able to also get CUDA acceleration to work and see this smoke demo:

Nothing like some GPU accelerated smoke

With that I am closing this post. It was surprisingly easy to get this device to work - the only thing left would be to get it to boot and run from its internal NVMe storage, but other than that it's a nice tool for GPU workloads like Edge Impulse.

Advantech AIR-020X Review

Normally, I do not get review units. That is because I only run this small weblog, along with some conference talks - and most companies would probably be better off sending their units to someone with the reach of Linus Tech Tips, or similar.

On the other hand, when I do get the chance to do a review, it can be a bit worrisome for the companies as well, as I am a very honest person. I have been working in tech for some time now and had the honor of building things that went to space - and came back to tell the tale. I know what I want in a unit - and what could be a problem.

With this out of the way: I was one of the lucky winners of the Advantech Edge AI Challenge 2022 and got an AIR-020X-S9A1 unit at no charge to realize my labSentinel 2 project. Working on the project, I learned a bit about the box and thought it would not be a bad idea to share my impressions with the readers of my blog - and with Advantech, so that they can improve their product. This review is not paid for and reflects my own thoughts; I received the unit for my project, and the review was not part of that deal. With that out of the way, let's get started.

The hardware

The AIR-020X comes very well packaged, in its own foam jacket that will protect it from all but the most horrible abuse by postal services. Not that it would matter much: the compact unit of roughly 14 cm x 12 cm x 4.5 cm weighs in at nearly 850 g and is built sturdy and robust - like a tank:

The most obvious part of the unit is its heatsink, which it puts to good use - but more on that topic later. Along with the computer itself comes a printed Chinese starting guide and a short USB A to Micro B cable, which is needed to factory reset and reflash the unit.

All in all, the AIR-020X is an impressive unit, including an Nvidia Jetson Xavier NX module with 8 GB RAM, 16 GB onboard eMMC, 128 GB M.2 flash, 2x RS232/422/485, 1x CANbus, 1x DIO ("GPIO"), 2x 1 Gbit Ethernet, 1x full-size mPCIe with nano-SIM holder, 1x 4K HDMI output, 2x USB 3.0 Type A and 1x USB Type C. The unit is powered by a 12-24 V DC power supply, which is an optional accessory.

Being an industrial unit, it uses an industrial type connector for power, which is an HT5.08 2 pole type:

As this connector is not part of the base package and the USB C connector does not accept Power Delivery (nor does it work in DisplayPort mode), it becomes a bit harder to power up the unit after receiving it. Finding a usable power supply within the sizable voltage range of 12-24 V (e.g. from an old laptop) is fairly easy, but without the connector it is a dead end until the next delivery arrives. It would be useful to at least include one connector with the base unit. The USB cable is a nice addition, but could be left out (even though it is of very high quality) - along with the Chinese manual. Both could be replaced with a small card with direct links to the English and Chinese PDF versions of the manual.

Opening up the unit reveals the internals - but not without a fight:

The screws are firmly fixed to the structure with blue Loctite - a touch I cannot recommend enough for the vibration resistance of the overall unit - but the screws themselves are made of extremely soft metal, so that, even using the correct screwdriver, I stripped nearly all of them and had real trouble getting them out. This problem seems to affect all the external black screws; the internal silver ones were of much higher quality. In my case I fixed the issue by replacing the screws with new ones and never had a problem with them again.

The internal structure is very well laid out, raising the M.2 drive onto a standoff to keep it a bit further away from the heat source: the Xavier NX module sits on the other side of the PCB and is sandwiched directly against the big heatsink.

Also very welcome is the addition of the two Raspberry Pi style camera connectors, although they are a bit hidden behind the serial console cables. I understand that the unit should be as closed as possible for use in factories, but I would have loved to see two small slits (possibly even with some IP/EMC gaskets to keep those entry points shielded and protected) so that cameras on the outside of the case can be attached easily.

The mPCIe slot gives the system an additional expansion option for e.g. UMTS or LoRaWAN modules, and the internal CR2032 cell for the RTC is another small but valuable detail.

The AIR-020X has mounting points available on both sides for additional wall-mounting rails. Looking at these mounting points and the obvious use of the AIR-020 series in lab and factory settings, a DIN rail mount as an available accessory could prove very useful for mounting this small computer directly into an electrical cabinet.

The software

Booting up the system greets one with a very familiar picture: Ubuntu 18.04 is running on the machine in the form of a tailored version of Nvidia JetPack. This Advantech version only uses the eMMC of the Xavier NX module to start the bootloader, while the actual data is kept on the M.2 drive. This is a great idea for the longevity of the eMMC on the (currently hard to find) Xavier NX module - but it comes with the drawback of requiring additional customization beyond "only" the PCB, included hardware, drivers and other changes Advantech made compared to an Nvidia developer board for the same module.

This was a problem I also learned about the hard way: I realized that the board was delivered with L4T 32.5.2 - not the then-current 32.7.x (JetPack 4.6.1) - so I updated it by hand, only to have the board bootloop. This was the moment I took a closer look at Advantech's online presence and the manual - just to learn that the recovery process was not described and the recovery image was not available for download. I eventually got the needed recovery file as well as the documentation (which also included vital information on how to use the DIO (GPIO), RS422 and CANbus interfaces) and was able to restore the board to working order. Obviously there were multiple problems with this: first, the publicly available manual should contain all needed information regarding settings, ports, recovery, etc.; secondly, the current (and ideally also previous) images need to be available online on their website - with checksums, so these images can be deployed safely.

I also voiced my concerns regarding the high-impact security issues / CVEs found in 32.5.2, which would make the use of the AIR-020 series an absolute liability in a production environment. I am glad to report that Advantech reacted to these concerns by providing a beta version of a new JetPack 4.6.1 image. A short time later, Advantech also added some information to their wiki:

On the download page you can find the AIR020A2AIM20UIV00004 entry for the Jetson NX JetPack 4.6.1 from 2022-07-20. This links to a Dropbox folder containing the latest image (AIR020A2AIM20UIV00004_194.tar.gz / 2022-09-16).

With this latest image I was able to upgrade the AIR-020X to JetPack 4.6.1 and even run an apt upgrade to get to L4T 32.7.2, the latest L4T at the time. However, this did not go as planned: after doing the upgrade and rebooting the device, it got caught in a bootloop. The bootloop kept repeating for about 10 minutes until the device mysteriously started working again and came back up without issues. Obviously this is not a graceful upgrade, and the fact that it was reproducible raised some concerns.

I am glad to report that Advantech has provided the latest image, which eliminates several security issues. However, the needed changes to the manual, the way recovery images are provided (now via Dropbox?) and the secure delivery of security updates to the unit remain open issues. Maybe Advantech should think about using balena.io to handle these?

Verdict

The Advantech AIR-020X is an extremely capable unit in a small form factor, sturdily built and highly reliable. Even with the latest JetPack 4.6.1 and abuse of the formerly unavailable 20 watt mode, I could not get this unit to heat up too much in my testing with labSentinel 2. There is still enough thermal headroom to use it in any kind of environment, which makes it a perfect choice for labs and factories - if Advantech can tackle the presented issues, especially those regarding the timely and secure availability of security patches and software updates. This also means public availability of the images, fast adaptation after the release of official Nvidia updates and all needed documentation in one publicly downloadable manual. With these exceptions and some small kinks, Advantech is very close to building the perfect unit for their envisioned use case. I really hope they can close that last (security/software/manual) gap on otherwise nearly perfect hardware - and with that create a recommendable product.

Edit: balenaOS

I got balenaOS working on the device - see here.

An active GNSS antenna for the CAM-M8Q breakout

I have been using multiple CAM-M8Q breakouts by Watterott and really love these units. They are small, reasonably priced and have the advantage of an integrated chip antenna. However, that is also their small shortcoming: while the antenna is good enough for most outdoor jobs, you can run into sensitivity issues when deploying it indoors - unless it is set up next to a window. Luckily, the module has two additional u.fl connectors for RFin and RFout, meaning you can use an external antenna.

To accomplish this, you just need to move the resistor from position R3 to position R1 - as outlined in the schematics:

Overview over the CAM-M8Q, copyright by Watterott

With this, you could attach a passive antenna, but an active one will not work, as no power is supplied by the module. However, you can add this power insert (a bias tee) yourself with an inductor and a capacitor.
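
The principle behind the power insert is simple: the inductor feeds the DC supply voltage towards the antenna while presenting a high impedance at the GPS L1 frequency, and the series capacitor passes the RF signal while blocking the DC from entering the module's RF input. As a rough sanity check with purely illustrative component values (not necessarily the ones used here):

f = 1.575 GHz (GPS L1)
L = 47 nH  -> Z_L = 2*pi*f*L     ≈ 465 Ohm (high compared to the 50 Ohm RF path)
C = 100 pF -> Z_C = 1/(2*pi*f*C) ≈ 1 Ohm   (practically transparent for the RF signal)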

I did this with some SMD components, but did not add the insert "behind" the u.fl connector; instead I placed it between the two pads that R1 would normally use, so the insert becomes part of the module itself.

This worked perfectly and the reception is great

As an antenna I am using the Navilock NL-202AA - I have not received any Galileo signal (even though it should be possible), but other than that I am very happy with the solution.

Thanks again to Mr. Watterott for pointing me to this StackExchange post, which contained the solution for the power insert.

labSentinel 2

Nearly a year ago, I wrote the labSentinel project for my Nvidia Jetson AI Specialist certification. The basic idea of the project is to supervise old lab equipment which does not possess any kind of log output or interface other than a graphical user interface, running on a Windows 3.11 / 95 / NT - maybe even XP - system. I solved this by attaching a video grabber to a Jetson Nano and grabbing the screen output of the experiment computer "out-of-band". I then trained good and bad system states via Nvidia's inference tools and finally got the system to report via MQTT as soon as something went wrong. (As a "test system" I designed a flashy GUI application to mimic the old interfaces - specifically thinking about a lab power supply with multiple outputs - with the ability to simulate errors.) (https://developer.nvidia.com/embedded/community/jetson-projects#labsentinel / https://github.com/nmaas87/labsentinel)

While the project did work, there was still a lot left to be desired:

  • The system captured the complete screen in full size. Running inference on a 1024x768 or even higher-resolution picture is not efficient and has a high failure rate.
  • Training, testing and improving the model was time-consuming and did not yield the precision and results I was hoping for.
  • The system could differentiate between "good" and "error" states - however, if an error occurred, I would have loved to get more information by "reading the GUI" and its output. For example, in the lab power supply use case, this means getting the specific voltages of the different lines to see which line failed or what is wrong - maybe even with the possibility to cross-check whether the detected error is an error in the first place.
  • While the Nvidia Jetson Nano development board is an awesome tool for development, it is not hardened enough for, or suited to, a lab or even factory floor environment.

These were all points I wanted to address, but as time was lacking, I did not take up the project again - until, at the start of this year, Advantech and Edge Impulse launched their Advantech Edge AI Challenge 2022. They wanted to know about specific use cases and how to solve them with factory-hardened Jetson products (e.g. Advantech's AIR-020 series) and Edge Impulse Studio.

Well, that reminded me of the first labSentinel - and I thought I'd give it a try. As luck would have it, I actually was one of the two lucky guys who were picked to be able to realize their project. Advantech sent me one of their AIR-020X boards (review is here :)) and I was good to go:

Let me introduce you to labSentinel 2:

Built from the ground up, it solves the above-mentioned issues:

  • The actual GUI window is found and extracted from the full-size desktop screenshot via OpenCV 2 - and resized to 320x320 pixels to neatly fit the inference model
  • All model training, testing and optimization is done with Edge Impulse, which makes handling a breeze
  • If an error is detected, an included OCR module using Tesseract can extract text from predesignated / labeled areas of the non-resized GUI and send this information along with the MQTT alert
  • The AIR-020X board is more than robust enough for all normal lab and factory floors

All source code is freely available with a demo project and documentation on Github ( https://github.com/nmaas87/labSentinel2 ) and also a video instruction on how to use it ( https://youtu.be/KEN_HT20exs )

Thanks again to Gary Lin (Advantech) as well as Louis Moreau and David Tischler (Edge Impulse) for their support :)!

Update: I added a review of the Advantech AIR-020X and got balenaOS working on it.

gpsTime

I think there is nothing more pleasing than having extremely precise measurements at your fingertips. Like time. While in the past it was quite problematic to measure time accurately (not talking about sundials, but... why not? ;)), mankind has created a precise time source as a byproduct (read: "waste product") of accurate navigation: GNSS in its different flavors like GPS, GLONASS, Galileo, BeiDou and others.

Tapping into this time source and providing it to your local computer network via NTP has been done by countless people and is an extremely rewarding task. Is it necessary? Maybe not. Is it really cool? Yes. And now it is even easier, as you don't need to configure it yourself but can use balenaHub and the preconfigured gpsTime project.

We do not waste time on fancy logos 😉

Basically you just need an RPi B+ (2/3/4), a micro SD card, a power supply and a 3V3 TTL-level GPS module with PPS output. The rest is done by going to the balenaHub entry shown above, creating a free account, flashing balenaOS onto your SD card, booting the RPi with internet access for the first time and letting it pull the needed containers. Afterwards you can use the RPi offline and still enjoy your precise time source.
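
To check from another machine on your network that the Pi is really answering NTP requests, a one-shot query is enough - a small sketch assuming the Pi got the (example) address 192.168.178.50 and that the ntpdate utility is installed on the client:

# query the Pi's NTP server without changing the local clock
ntpdate -q 192.168.178.50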

A Watterott CAM-M8Q breakout and a good old RPi 2B+ are more than powerful enough

More details can be found in the Github repo, and you can tweak and improve the project to your heart's content. I am probably going to do a PiAndMore talk about it - and use the project myself as a building block for precise timing in some support equipment.