An active GNSS antenna for the CAM-M8Q breakout

I have been using multiple CAM-M8Q breakouts by Watterott and really love these units. They are small, reasonably priced and have the advantage of an integrated chip antenna. However, this is also their one small shortcoming: while the antenna is good enough for most outdoor jobs, you can run into sensitivity issues when deploying it indoors - if it is not set up next to a window. Luckily, the module has two additional U.FL connectors for RFin and RFout, meaning you can use an external antenna.

To accomplish this, you just need to move the resistor from position R3 to position R1 - as outlined in the schematics:

Overview of the CAM-M8Q, copyright by Watterott

With this, you could attach a passive antenna, but an active one will not work, as no power is supplied by the module. However, you can add this power insert yourself with an inductor and a capacitor.
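For reference, such a power insert is essentially a bias tee. A rough sketch of the usual topology - the component values are my assumption based on typical GNSS application circuits, not taken from the Watterott schematics:

          antenna supply (e.g. VCC_RF)
                   |
                   L   RF choke, e.g. ~27-47 nH
                   |
  antenna ---------+------| |------ RF input
                           C   DC block, e.g. ~22-100 pF

The inductor feeds DC towards the antenna while blocking RF, and the capacitor passes RF while keeping the DC away from the receiver input.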

I did this with some SMD components, but did not add the insert "behind" the U.FL connector; instead I placed it between the two jumper pads that R1 would be using, so it becomes a permanent part of the module.

This worked perfectly, and the reception is great.

As an antenna I am using the Navilock NL-202AA. I have not received any Galileo signals yet (even though it should be possible), but other than that I am very happy with the solution.

Thanks again to Mr. Watterott for pointing me to this StackExchange post, which contained the solution for the power insert.

labSentinel 2

Nearly a year ago, I wrote the labSentinel project for my Nvidia Jetson AI Specialist certification. The basic idea of the project is to be able to supervise old lab equipment which does not possess any kind of log output or interface other than a graphical user interface running on a Windows 3.11 / 95 / NT - maybe even XP - system. I solved this by attaching a video grabber to a Jetson Nano and grabbing the screen output of the experiment computer "out-of-band". I then trained good and bad system states with Nvidia's inference tools and finally got the system to report via MQTT as soon as something went wrong. (As a "test system" I designed a flashy GUI application to mimic the old interfaces - specifically thinking about a lab power supply with multiple outputs - with the ability to simulate errors.) (https://developer.nvidia.com/embedded/community/jetson-projects#labsentinel / https://github.com/nmaas87/labsentinel)

While the project did work, there was still a lot left to be desired:

  • The system captured the complete screen in full size. Running inference on a 1024x768 or even higher-resolution picture is not efficient and has a high failure rate.
  • Training, testing and improving the model was time-consuming and did not yield the precision and results I was hoping for.
  • The system could differentiate between "good" and "error" states - however, if an error occurred, I would have loved to get more information by "reading the GUI" and its output. For example, in the lab power supply use case, getting the specific voltages of the different lines to see which line failed or what is wrong - maybe even with the possibility to cross-check whether the detected error is an error in the first place.
  • While the Nvidia Jetson Nano Development Board is an awesome tool for development, it is not hardened enough for a lab or even factory floor environment.

These were all points I wanted to address, but as time was lacking, I did not take up the project again - until the start of this year, when Advantech and Edge Impulse started their Advantech Edge AI Challenge 2022. They wanted to know about specific use cases and how to solve them with factory-hardened Jetson products (e.g. Advantech's AIR-020 series) and Edge Impulse Studio.

Well, that reminded me of the first labSentinel - and I thought I'd give it a try. As luck would have it, I actually was one of the two lucky people picked to realize their project. Advantech sent me one of their AIR-020X boards (review is here :)) and I was good to go:

Let me introduce you to labSentinel 2:

Rebuilt from the ground up, it solves the above-mentioned issues:

  • The actual GUI window is found and extracted from the full-size desktop screenshots via OpenCV - and resized to 320x320 pixels to neatly fit the inference model
  • All model training, testing and optimization is done with Edge Impulse, which makes handling a breeze
  • If an error is detected, an included OCR module using Tesseract can extract text from predesignated / labeled areas on the non-resized GUI and send this information along with the MQTT alert
  • The AIR-020X board is more than robust enough for all normal lab and factory floors

All source code is freely available, with a demo project and documentation, on GitHub ( https://github.com/nmaas87/labSentinel2 ), and there is also a video instruction on how to use it ( https://youtu.be/KEN_HT20exs ).

Thanks again to Gary Lin (Advantech) as well as Louis Moreau and David Tischler (Edge Impulse) for their support :)!

Update: I added a review of the Advantech AIR-020X and got balenaOS working on it.

WD My Cloud Mirror Gen2 with Debian 11 and Linux Kernel 5.15 LTS

Intro

Since 2017 I have been using a Western Digital My Cloud Mirror Gen 2, which I bought on Amazon's Black Friday (or similar) - because the included 2x 8 TB WD Reds were even cheaper with the NAS than standalone. Using the NAS has been quite OK; especially the included Docker engine and Plex support were nice to have - the backdoor included in older firmware versions, not so much. Recently WD replaced the old My Cloud OS 3 with their new "My Cloud OS 5" - and made things worse for a lot of people. As I don't want any more surprises - and want more control over my hardware - I decided to finally go down the road of getting Debian 11 with an LTS (5.15) kernel running on the hardware. This is how it went.

Warning

Warning: these are just my notes on how to convert a My Cloud OS 3 / My Cloud Mirror Gen 2 device to a "real" Debian system. You will need to take your device fully apart, solder wires and void the warranty. Additionally, you will lose all your data - and even brick the hardware if something goes wrong. I take no responsibility whatsoever, nor can I give support. You're on your own now.

Step 0: Get Serial Console Access

Without a serial console, you will not be able to do anything here. You will need to completely disassemble the NAS and will lose all warranty. The bare motherboard will look like this. On the far right you will see the pins for the UART interface you will need to solder to.

When you're done with that, connect your 3v3 TTL UART USB device like this:

... and connect to it at 115200 baud with PuTTY, TeraTerm Pro or any other terminal software (do not connect the 3v3 pin :)). It would be wise to start without the hard drives installed.
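On Linux, a plain terminal program does the job as well - assuming the adapter enumerates as /dev/ttyUSB0 (check dmesg):

screen /dev/ttyUSB0 115200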

Step 1: Flashing U-Boot

The current U-Boot on the NAS is flawed; you need to replace it. I will follow CyberPK here, who did an awesome job explaining everything:

We have to prepare a USB drive formatted as FAT32, extract the U-Boot image from the link onto it, and connect it to USB port #2.

Connect the device to the serial adapter, power on the device and keep pressing '1' (one) during boot until you can see the 'Marvell>>' command prompt, then press Ctrl+C.

We will start to change stuff and break stuff here. But if I could give you one tip before you start: please execute printenv once, then copy and paste all env variables and everything U-Boot spits out. It could save your hardware one day. Thanks, Nico out!

usb start
bubt u-boot-a38x-GrandTeton_2014T3_PQ-nand.bin nand usb
reset

This will reboot the device. Access the command prompt again and add the following envs, a modified version of the ones provided by bodhi in this post:

setenv set_bootargs_stock 'setenv bootargs root=/dev/ram console=ttyS0,115200'

setenv bootcmd_stock 'echo Booting from stock ... ; run set_bootargs_stock; printenv bootargs; nand read.e 0xa00000 0x500000 0x500000;nand read.e 0xf00000 0xa00000 0x500000;bootm 0xa00000 0xf00000'

setenv bootdev 'usb'

setenv device '0:1'

setenv load_image_addr '0x02000020'

setenv load_initrd_addr '0x2900000'

setenv load_image 'echo loading Image ...; fatload $bootdev $device $load_image_addr /boot/uImage'

setenv load_initrd 'echo loading uInitrd ...; fatload $bootdev $device $load_initrd_addr /boot/uInitrd'

setenv usb_set_bootargs 'setenv bootargs "console=ttyS0,115200 root=LABEL=rootfs rootdelay=10 $mtdparts earlyprintk=serial init=/bin/systemd"'

setenv bootcmd_usb 'echo Booting from USB ...; usb start; run usb_set_bootargs; if run load_image; then if run load_initrd; then bootm $load_image_addr $load_initrd_addr; else bootm $load_image_addr; fi; fi; usb stop'

setenv bootcmd 'setenv fdt_skip_update yes; setenv usbActive 0; run bootcmd_usb; setenv usbActive 1; run bootcmd_usb; setenv fdt_skip_update no; run bootcmd_stock; reset'

saveenv

reset

(This code was also modified by me to use fatload instead of ext2load.)

With this, our NAS is ready.

Step 2: Build a kernel and rootfs

  • On your current Linux machine, get yourself a copy / git clone of Heisath's wdmc2-kernel repo
  • Get all dependencies installed according to the repo's documentation; I did this on a Debian 11 machine
  • Replace the file content of wdmc2-kernel/dts/armada-375-wdmc-gen2.dts with the content of the real and improved DTS for the WDMCMG2 (original from this link, copy available here) - but keep the file name armada-375-wdmc-gen2.dts
  • Replace the file content of wdmc2-kernel/config/linux-5.15.y.config with the file from here (please know this config ain't perfect, but it will get you running. You can always file a PR and help me out ;))
  • Start the build process in wdmc2-kernel with ./build.sh
  • Mark: Linux Kernel, Clean Kernel sources, Debian Rootfs, Enable ZRAM on rootfs
  • Kernel -> Kernel 5.15 LTS
  • Build initramfs -> Yes
  • Debian -> Bullseye
  • Fstab -> usb
  • Rootpw -> whateverYouWant
  • Hostname -> whateverYouWant
  • Locales -> no changes, accept (or whatever you want)
  • Default locale for system -> en_US.UTF-8 (or whatever you want)
  • Tzdata -> Your region
  • Now your kernel and rootfs will be built

While this is ongoing, get yourself a nice USB 2.0 or USB 3.0 stick prepared as follows (example commands after the list):

  • partition table: msdos
  • 1st partition: 192 MB, FAT32, label set to boot, boot flag enabled
  • 2nd partition: rest, ext4, label set to rootfs
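On a Linux machine, that preparation could look like this - assuming the stick shows up as /dev/sda (double-check with lsblk first, these commands are destructive!):

parted --script /dev/sda \
  mklabel msdos \
  mkpart primary fat32 1MiB 193MiB \
  set 1 boot on \
  mkpart primary ext4 193MiB 100%
# create the filesystems with the labels U-Boot and the fstab expect
mkfs.vfat -F 32 -n boot /dev/sda1
mkfs.ext4 -L rootfs /dev/sda2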

When the kernel is done compiling and your USB stick is ready, copy all the files (sda is the name of my USB stick):

  • mkdir /mnt/boot /mnt/root
  • mount /dev/sda1 /mnt/boot
  • mount /dev/sda2 /mnt/root
  • mkdir /mnt/boot/boot
  • cp wdmc2-kernel/output/boot/uImage-5.15.* /mnt/boot/boot/uImage
  • cp wdmc2-kernel/output/boot/uRamdisk /mnt/boot/boot/uInitrd
  • tar -xvzf wdmc2-kernel/output/bullseye-rootfs.tar.gz --directory=/mnt/root/
  • rm -rf /mnt/root/etc/fstab
  • cp /mnt/root/etc/fstab.usb /mnt/root/etc/fstab
    // within /mnt/root/etc/fstab:
    // change all /dev/sdb to /dev/sdc if both drive slots of the NAS are used <- this!
    // change all /dev/sdb to /dev/sda if no drive slots of the NAS are used
  • umount /mnt/boot /mnt/root

Step 3: First boot and getting things running

Insert the USB stick into USB port #2 of the NAS. Leave the drives out for now, boot it up for the first time and watch it via the serial terminal. Log in at the end with root and your chosen password.

If it boots, you can shut it down again with shutdown -P now, unplug power, insert the drives and reboot.

First thing after the first boot with the drives installed: build your own initramfs / ramdisk from your current setup:

  • cd /root/
  • ./build_initramfs.sh
  • cp initramfs/uRamdisk /boot/boot/uInitrd

Second, install mdadm for the RAID:

  • apt update
  • apt install mdadm
  • mkdir /mnt/HD
  • edit your /etc/fstab and add a mount point for your md/RAID. I used the old drives with my old data on them like this (depending on which mdX it comes up as - see the note below):
/dev/md0        /mnt/HD         ext4    defaults,noatime,nodiratime,commit=600,errors=remount-ro        0       1
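If the array comes up under a different name (md126, md127, ...), you can record it in mdadm.conf so it keeps a stable name - a sketch, double-check with cat /proc/mdstat:

mdadm --assemble --scan
# persist the detected array layout
mdadm --detail --scan >> /etc/mdadm/mdadm.conf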

A lot of good knowledge about RAM drives can be found here.

I would advise doing steps 1 (Folder2RAM), 2 (Kernel Options) and 4 (Logrotate) - option 3 did not work out for me.

To get the drives to sleep at some point, we need to reconfigure mdadm:

dpkg-reconfigure mdadm
// monthly check ok 
// daily degradation check ok
// monitoring disable

... and get hdparm working

apt install hdparm hd-idle

# hdparm config - add the following to /etc/hdparm.conf:

/dev/sda {
#        apm = 127
#        acoustic_management = 127
        spindown_time = 120
#       spindown_time = 4
        write_cache = on
}

/dev/sdb {
#        apm = 127
#        acoustic_management = 127
        spindown_time = 120
#       spindown_time = 4
        write_cache = on
}

# spindown_time = 120 means: 120 * 5 sec = 600 sec = 10 min
# apply it after saving the file with:
/usr/lib/pm-utils/power.d/95hdparm-apm resume

We can check the status of the drives with smartctl:

smartctl -i -n standby /dev/sda
smartctl -i -n standby /dev/sdb

To get fan control working:

apt install wget
wget -O mcm-fancontrol-master.tar.gz https://github.com/nmaas87/mcm-fancontrol/archive/refs/heads/master.tar.gz
tar -xvzf mcm-fancontrol-master.tar.gz
cd mcm-fancontrol-master/
cp fan-daemon.py /usr/sbin/
chmod +x /usr/sbin/fan-daemon.py
cp fan-daemon.service /etc/systemd/system/
systemctl enable fan-daemon
systemctl start fan-daemon

(You can change the low and high temperature values in /usr/sbin/fan-daemon.py to make the fan kick in later, and also set DEBUG = True if you want to see some details in systemctl status fan-daemon.)

mtd-utils can be useful, just mentioning them here:

apt install -y mtd-utils
cat /proc/cmdline
cat /proc/mtd

Samba ...

apt install samba --no-install-recommends
# change /etc/samba/smb.conf to your liking and set up your SMB shares
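A minimal share definition for the RAID mount might look like this - the share name and user are placeholders of mine, adjust to taste:

# append to /etc/samba/smb.conf
[nas]
   path = /mnt/HD
   read only = no
   valid users = yourUser

# create the Samba user and restart the service
smbpasswd -a yourUser
systemctl restart smbd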

Plex ...

# Plex 
apt update
apt install apt-transport-https ca-certificates curl gnupg2
curl https://downloads.plex.tv/plex-keys/PlexSign.key | apt-key add -
echo deb https://downloads.plex.tv/repo/deb public main | tee /etc/apt/sources.list.d/plexmediaserver.list
apt update
apt install plexmediaserver
systemctl status plexmediaserver

Well, that's it.

Thanks a lot to all the awesome contributors on the net:

Companion repo with files: https://github.com/nmaas87/WDMCMG2

Presentations with Markdown: revealjs

Being an active contributor to PiAndMore and other conferences, I happen to give quite a few presentations a year. In the past I was using the good old Microsoft PowerPoint - which has its strengths, but also its drawbacks. Positioning text and graphics was never to my taste (I use LaTeX, btw) - so I set out to find a new way to create presentations - and found revealjs back in 2018.

What is revealjs? Basically: write your presentations in Markdown, show them in a web browser - or export them as PDF. TL;DR? Navigate through the demo.

However, using revealjs on its own was cumbersome: I was missing a live preview - and while all of this is available in the freemium service slides.com, I do not want to be dependent on an online connection - nor share every presentation with the world (some might involve sensitive data... so no).

That was when I started to use hacker-slides - a small Go implementation for all OS types, with a live preview and local/offline usage. It was near perfect, apart from issues like problems with carriage returns and similar characters at some points (usage under Windows...) and some other quirks (I lost some presentations when I opened too many at the same time and edited different presentations in different tabs). It was also the first project where I changed some Go code for my local copy. However, the final nail in the coffin was that the project is not really maintained anymore.

Enter vscode-reveal - it works in VS Code or Codium, has a live preview and all the features you need. Your basic, local, revealjs-powered, operating-system-independent presentation-making machine.

I have used it for the latest PiAndMore - and I am not going back to anything else (at least for the time being) - so maybe you want to give it a try?

[Win10] Random ports blocked while using Docker / WSL / HyperV

I have been using the Windows Subsystem for Linux (WSL) and Docker on my laptop for a long time. And during the last DockerCon, WSL 2 was released, to which I switched instantly - a move I did not regret.

(Note: upgrading to WSL 2 and the native Docker for WSL 2 version will cost you your containers and Docker images - there is even a Thanos meme going around - so I have to give this fair warning ;))

However: said laptop started acting strange, as suddenly local MariaDB instances and Apache2 ceased to work, and even some Node.js projects on port 9000. None of these ports were taken directly by any application, but somehow they did not work anymore. It turns out that a faulty Hyper-V update led to the hypervisor reserving far too many ports across the board.
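You can actually inspect those reservations - and see whether your ports fall into one of the excluded ranges - with:

netsh int ipv4 show excludedportrange protocol=tcp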

Luckily there is a solution to correct this issue as shown here by Christopher Pietrzykowski.

To make it easy and fast: open up a PowerShell or cmd prompt as an admin user and enter

netsh int ipv4 show dynamicport tcp
netsh int ipv6 show dynamicport tcp

If it comes up with start port 1025 and a huge number of reserved ports, you are experiencing the same problem. Enter these commands to realign the start port to 49152 for both IPv4 and IPv6:

netsh int ipv4 set dynamic tcp start=49152 num=16384
netsh int ipv6 set dynamic tcp start=49152 num=16384

After a reboot, everything should be fixed again 🙂

Upgrade SonarQube 6.7 to 7.9

That was surprisingly easy: I just swapped the "sonarqube:6.7-community" image entry for "sonarqube:7.9-community" in my docker-compose.yml and restarted the Docker container. Upon boot, the container kept restarting due to an error:

ERROR: [2] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

This could be resolved by executing sudo sysctl -w vm.max_map_count=262144 on my Ubuntu 18.04 LTS Docker host. After another restart of the container, it worked and I could start the update using a web browser at http://IP:9000/setup

(Also add vm.max_map_count=262144 to /etc/sysctl.conf so the setting survives a reboot.)
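In shell form, something like this should do (the tee call is just one way to append the line):

echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

(If your setup also trips over the file descriptor check [1], the ulimits section of the docker-compose service definition is the usual lever.)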

SonarQube 6.7 Community with Postgres 9.6 in Docker on Ubuntu

This is a very quick install of SonarQube on Ubuntu 18.04 LTS. I presume you have Docker CE 18.09 and docker-compose 1.24 (current at the time) installed.

# create folders for sonarqube files and postgres
sudo mkdir -p /var/sonarqube/{conf,data,logs,extensions}
sudo chown -R 999:999 /var/sonarqube
sudo mkdir -p /var/sonarqube/postgres
# make folder for all Docker files in home
mkdir ~/sonarqube
cd ~/sonarqube
# create docker-compose.yml with following content
version: '3.1'
services:
  db:
    image: postgres:9.6-alpine
    restart: unless-stopped
    volumes:
      - /var/sonarqube/postgres:/var/lib/postgresql/data
    environment:
     - POSTGRES_USER=sonar
     - POSTGRES_PASSWORD=sonar

  sonarqube:
    image: sonarqube:6.7-community
    ports:
      - 9000:9000
      - 9092:9092
    restart: unless-stopped
    volumes:
      - /var/sonarqube/conf:/opt/sonarqube/conf
      - /var/sonarqube/data:/opt/sonarqube/data
      - /var/sonarqube/logs:/opt/sonarqube/logs
      - /var/sonarqube/extensions:/opt/sonarqube/extensions
    environment:
      - SONARQUBE_HOME=/opt/sonarqube
      - SONARQUBE_JDBC_USERNAME=sonar
      - SONARQUBE_JDBC_PASSWORD=sonar
      - SONARQUBE_JDBC_URL=jdbc:postgresql://db/sonar
# launch 
docker-compose up -d

You can then access your SonarQube instance at http://<ServerIP>:9000 with the credentials admin/admin.


Migrate SonarQube from MySQL to PostgreSQL

For checking the quality of my private programming code, I have been using the free edition of SonarQube for multiple years. It is actually a cool project with one massive flaw: it allowed MySQL as the database - but not MariaDB. This struggle kept me a bit at bay, as I had parted with MySQL years ago - with this one exception.

Some days ago I then stumbled upon the announcement that SonarQube would not fix this long-standing issue, but instead part completely from MySQL. So the only choice would be to migrate to Oracle, MS SQL or PostgreSQL.

They even provided a tool called mysql-migrator for this purpose. However, it did not work for me - it kept complaining that it could not detect the schema version, etc.

Long story short: if you want to (or, in my case, must...) switch from MySQL to PostgreSQL, use pgloader, which is available as an Ubuntu package - all infos here.

I just installed a fresh PostgreSQL 9.6 in Docker using the postgres:9.6-alpine image and fired it up.
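Spinning up that container could look roughly like this - the container name, published port and database name are my choices, adjust to your environment:

docker run -d --name sonar-postgres \
  -e POSTGRES_USER=sonar \
  -e POSTGRES_PASSWORD=sonar \
  -e POSTGRES_DB=sonar \
  -p 5432:5432 \
  postgres:9.6-alpine

Then the actual migration with pgloader: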

pgloader mysql://<mysqluser>:<mysqlpassword>@<mysqlserverip>:3306/sonar pgsql://<psqluser>:<psqlpassword>@<psqlserverip>/sonar

Due to all things Docker, I had to play around a bit with the correct IPs, ports and permissions to get the tool working, but once that was said and done, everything worked fine:

                    table name       read   imported     errors      total time
------------------------------  ---------  ---------  ---------  --------------
               fetch meta data        180        180          0          0.357s
                Create Schemas          0          0          0          0.001s
              Create SQL Types          0          0          0          0.004s
                 Create tables        106        106          0          2.842s
                Set Table OIDs         53         53          0          0.015s
------------------------------  ---------  ---------  ---------  --------------
            sonar.active_rules       1993       1993          0          0.116s
  sonar.active_rule_parameters        268        268          0          0.107s
             sonar.ce_activity          1          1          0          0.074s
      sonar.ce_scanner_context          0          0          0          0.040s
           sonar.ce_task_input          0          0          0          0.025s
     sonar.analysis_properties          0          0          0          0.157s
      sonar.duplications_index          0          0          0          0.020s
                  sonar.events        349        349          0          0.173s
                  sonar.groups          2          2          0          0.330s
             sonar.group_roles         12         12          0          0.433s
                sonar.ce_queue          0          0          0          0.033s
                  sonar.issues       7508       7508          0          1.546s
 sonar.ce_task_characteristics          0          0          0          0.029s
       sonar.default_qprofiles          9          9          0          0.162s
                sonar.es_queue          0          0          0          0.143s
            sonar.file_sources        500        500          0          1.733s
        sonar.loaded_templates         13         13          0          1.498s
                 sonar.metrics        246        246          0          1.564s
           sonar.organizations          1          1          0          1.744s
           sonar.org_qprofiles         26         26          0          1.722s
   sonar.perm_templates_groups          4          4          0          1.724s
            sonar.groups_users          3          3          0          1.283s
sonar.perm_tpl_characteristics          0          0          0          1.673s
                sonar.projects        542        542          0          1.850s
     sonar.internal_properties          2          2          0          1.384s
           sonar.issue_changes        501        501          0          1.522s
         sonar.manual_measures          0          0          0          1.306s
           sonar.notifications          0          0          0          1.297s
    sonar.organization_members          2          2          0          1.504s
           sonar.project_links          0          0          0          1.532s
    sonar.permission_templates          1          1          0          1.453s
       sonar.project_qprofiles          0          0          0          1.483s
    sonar.perm_templates_users          0          0          0          1.417s
        sonar.qprofile_changes       2001       2001          0          1.616s
                 sonar.plugins         13         13          0          1.366s
     sonar.qprofile_edit_users          0          0          0          1.545s
        sonar.project_branches          3          3          0          1.356s
 sonar.quality_gate_conditions          4          4          0          1.353s
          sonar.rules_metadata       1763       1763          0          1.311s
        sonar.project_measures      35940      35940          0          1.792s
          sonar.rules_profiles         26         26          0          1.503s
       sonar.schema_migrations        495        495          0          1.369s
                   sonar.users          3          3          0          1.497s
             sonar.user_tokens          5          5          0          1.489s
              sonar.properties          8          8          0          0.941s
    sonar.qprofile_edit_groups          0          0          0          0.924s
           sonar.quality_gates          1          1          0          0.834s
                   sonar.rules       1866       1866          0          1.069s
        sonar.rules_parameters        278        278          0          1.021s
       sonar.rule_repositories         21         21          0          0.999s
               sonar.snapshots        280        280          0          1.047s
              sonar.user_roles          0          0          0          0.960s
      sonar.webhook_deliveries          0          0          0          0.942s
------------------------------  ---------  ---------  ---------  --------------
       COPY Threads Completion          4          4          0          3.049s
                Create Indexes        127        127          0         22.308s
        Index Build Completion        127        127          0          1.460s
               Reset Sequences         33         33          0          0.051s
                  Primary Keys         51         51          0          0.051s
           Create Foreign Keys          0          0          0          0.000s
               Create Triggers          0          0          0          0.001s
              Install Comments          0          0          0          0.000s
------------------------------  ---------  ---------  ---------  --------------
             Total import time      54690      54690          0          8.412s


[VMWare] Get and upgrade ESXi 6.5 "offline" - without paid license

As I wanted to have a very recent version of ESXi, I went to VMware's website and checked out their Products / Free Products / vSphere Hypervisor section. This, however, only presented me with an ESXi 6.5.0a ISO from 02.02.2017 - too old. However, you'll get the much-needed free license there - so the visit pays off :).

So to get the latest version and updates, you need to go to http://vmware.com/go/evaluate-vsphere-en - where you are presented with the 6.5.0 U1 ISO from 27.07.2017 - a lot better. With said image you can then install your server. Even if you had an old 6.5.0a install, you could download the VMware vSphere Hypervisor (ESXi) Offline Bundle from that site, which will upgrade your old 6.5.0 installation to U1.

After that, you'll need to check out the very useful VMware ESXi Patch Tracker at https://esxi-patches.v-front.de/ESXi-6.5.0.html. There you can see which patches are needed to get your ESXi host to the latest version (in my case I only needed to apply the 2017-10-05 patch series to get from U1 to latest). Now switch over to https://my.vmware.com/group/vmware/patch#search and look for ESXi 6.5.0 patches. I found my needed ESXi650-201710001 patch with release date 05.10.2017 - and downloaded it. From the ESXi Patch Tracker I also knew that the image profile of said update is called ESXi-6.5.0-20171004001-standard and uses build number 6765664. I then enabled SSH on the ESXi host, shut down all VMs, put the host into maintenance mode and uploaded ESXi650-201710001.zip to a folder called ESXiUpdate that I created on my datastore datastore01.

After that, I could execute said update via SSH with the following command:

esxcli software profile update --depot="[datastore01]ESXiUpdate/ESXi650-201710001.zip" --profile ESXi-6.5.0-20171004001-standard

As you can see, you need to provide the path to the patch file as well as the image profile name we found earlier via the ESXi Patch Tracker. After the successful installation, a reboot is needed.

As soon as the machine has booted again, log in and check whether the build number now matches the update's build number. If it does, disable maintenance mode, restart the VMs and you're good to go.
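A quick way to check the build number from the SSH session:

vmware -v
# should now report e.g. "VMware ESXi 6.5.0 build-6765664"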

If further patches need to be applied, keep SSH access enabled, leave the VMs off and maintenance mode on, and just keep uploading and applying the updates :).

More infos about the esxcli commands can be found here - and you can still use the free ESXi 6.5 license which you acquired in the first steps of this weblog - even with the most recent patch (luckily!).

And now, get those machines patched ;)!