Here is my presentation for the Raspberry Pi Introduction @ PiAndMore 12 1/4 (23.01.2021).
A video recording of the talk can be found here.
IT systems electronics technician (IT-Systemelektroniker) & Master of Science; IT security, networks, embedded systems; Docker Campus Ambassador and Raspberry Pi geek
KVM virtualization is great; however, usable tools to create and manage those VMs are rare. The best tool for the job, virt-manager, is only available for Linux machines. But what if you also want to manage those VMs from Windows 10? WSL2 to the rescue: just install WSL2 as shown in the excellent Microsoft guide, install e.g. a Debian GNU/Linux instance and launch into it.
You should update the instance to the latest version first:
sudo apt update && sudo apt upgrade -y && sudo apt dist-upgrade -y
then you can install virt-manager
sudo apt install -y virt-manager ssh-askpass
The last thing you need is an X server on your Windows machine, e.g. Xming or MobaXterm (which bundles Xming); install and launch it. Then set up X forwarding in your WSL2 instance by entering one of the following (they are alternatives - use whichever resolves your Windows host correctly):
export DISPLAY="$(grep nameserver /etc/resolv.conf | sed 's/nameserver //'):0"
export DISPLAY="$(sed -n 's/nameserver //p' /etc/resolv.conf):0"
export DISPLAY=$(ip route|awk '/^default/{print $3}'):0.0
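This export only lasts for the current shell; if you want it set in every new WSL2 session, you can append it to your ~/.bashrc (assuming bash is your login shell):
# persist the DISPLAY setting for future WSL2 shells
cat >> ~/.bashrc <<'EOF'
export DISPLAY=$(ip route|awk '/^default/{print $3}'):0.0
EOF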
after that, you can launch virt-manager by entering
virt-manager
and configure it to connect to your KVM instance via SSH.
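For reference, you can also point virt-manager at the remote host straight from the command line; the user and hostname below are placeholders:
# open virt-manager already connected to the libvirt daemon on the KVM host via SSH
virt-manager -c 'qemu+ssh://youruser@your-kvm-host/system'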
I updated an older Ubuntu 18.04 LTS system to the latest LTS and had (among other things) Docker and KVM installed. KVM is actually quite nice if you "just need" a small VM (pfsense ;)). I actually prefer Proxmox and ESXi, but hey, the right tool for the right job.
After the upgrade to 20.04, KVM did not work anymore and I got a lot of lvm2 errors during apt update / apt upgrade runs, so a short Google search later I found this. I was a bit nervous, but the fix hurt neither my KVM nor my Docker instances:
sudo apt purge lvm2 && sudo apt install lvm2
(The fix is deleting and reinstalling lvm2)
After reinstalling lvm2, I could successfully execute a virsh list and got my list of running KVM machines back:
 Id   Name      State
-------------------------
 1    pfsense   running
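If you want to double-check that libvirt itself came back cleanly after the lvm2 reinstall, the usual commands help:
# confirm the libvirt daemon is up (on current Ubuntu releases the unit is called libvirtd)
systemctl status libvirtd
# list all defined VMs, including stopped ones
virsh list --all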
I have been using the Windows Subsystem for Linux (WSL) and Docker on my laptop for a long time. During the last DockerCon, WSL 2 was released and I switched instantly - which I did not regret.
(Note: Upgrading to WSL 2 and the native Docker backend for WSL 2 will cost you your existing containers and Docker images - there is even a Thanos meme going around - so I have to give this fair warning ;))
However: said laptop started acting strangely, as suddenly local MariaDB and Apache2 instances ceased to work, as did some Node.js projects on port 9000. None of these ports were taken directly by any application, yet binding to them no longer worked. It turned out that a faulty Hyper-V update led to the hypervisor reserving far too many ports across the board.
Luckily there is a solution to correct this issue as shown here by Christopher Pietrzykowski.
To make it easy and fast: open up a PowerShell or cmd prompt as an admin user and enter
netsh int ipv4 show dynamicport tcp
netsh int ipv6 show dynamicport tcp
If it comes up with a start port of 1025 and a huge number of reserved ports, you are experiencing the same problem. Enter these commands to realign the start port to 49152 for both IPv4 and IPv6:
netsh int ipv4 set dynamic tcp start=49152 num=16384
netsh int ipv6 set dynamic tcp start=49152 num=16384
after a reboot, everything should be fixed again 🙂
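If you want to see exactly which ranges Hyper-V has reserved (useful to confirm the diagnosis before or after the fix), Windows can list the excluded port ranges:
netsh int ipv4 show excludedportrange protocol=tcp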
I stumbled across this feature during my bachelor studies:
echo "Hello World" > /dev/tcp/127.0.0.1/5000
echo "Hello World" > /dev/udp/127.0.0.1/5000
This works from a regular bash shell - the /dev/tcp and /dev/udp paths are handled by bash itself - but not every bash build has the feature enabled. You can also cat from these ports and use DNS addresses instead of IPs. It's neat to just get a byte out :).
And if you need something more sophisticated, be sure to use the good old netcat ("nc").
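A minimal sketch of both sides on one machine (netcat flag syntax differs between nc variants):
# terminal 1: listen on TCP port 5000 with netcat (traditional netcat syntax)
nc -l -p 5000
# terminal 2: push a string through bash's built-in /dev/tcp pseudo-path
echo "Hello World" > /dev/tcp/127.0.0.1/5000
# reading works too, e.g. a tiny HTTP request via file descriptor 3
exec 3<>/dev/tcp/example.com/80
printf 'HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n' >&3
cat <&3
exec 3>&-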
If you happen to write a lot of Python scripts and just want to check which of the added imports are actually needed - and do not want to use an IDE - just check out https://pypi.org/project/importchecker/ - it comes down to a quick
easy_install importchecker
and afterwards you can check your scripts by using
importchecker myScript.py
It will only output the imports that are NOT needed, which you can then remove by hand.
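A quick sketch with a made-up script: 'sys' is imported but never used, so importchecker should flag it:
# myScript.py (hypothetical example): 'sys' is imported but never referenced
cat > myScript.py <<'EOF'
import os
import sys
print(os.getcwd())
EOF
importchecker myScript.py   # should report the unused 'sys' import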
That was surprisingly easy: I just swapped the "sonarqube:6.7-community" image entry for "sonarqube:7.9-community" in my docker-compose.yml and recreated the Docker container. On boot, however, the container kept restarting due to an error:
ERROR: [2] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
This could be resolved by executing sudo sysctl -w vm.max_map_count=262144 on my Ubuntu 18.04 LTS Docker host. After another restart of the container, it worked and I could start the update using a web browser under http://IP:9000/setup
(Also add vm.max_map_count=262144 to /etc/sysctl.conf so the setting survives a reboot.)
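On the Docker host that comes down to (the tee line assumes the key is not already present in /etc/sysctl.conf):
# raise the limit immediately
sudo sysctl -w vm.max_map_count=262144
# make it survive reboots
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p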
In order to supersede the aging Microchip ATMEGA328P as the de facto standard for Commercial off-the-shelf (COTS) On-Board Computers (OBCs) with a more powerful system for different kinds of high-speed sensor and image acquisition applications, we developed the advanced processors, encryption, and security experiment (apex). The platform, consisting of a newly developed OBC built from COTS components, was flight-tested during the ATEK/MAPHEUS-8 sounding rocket campaign. The main advantages of the apex OBC lie in the speed and simplicity of the design while maintaining operational security through a redundant master-master microcontroller system as well as dual flash storage within each master. Additionally, a single-board computer with a containerized and failure-resistant Operating System (OS) (balenaOS) was included to allow the use of a high-definition camera and other more compute-intensive tasks. The bench and flight tests were performed successfully and already showed feasible ways to further improve operational performance.
Published by the "American Institute of Physics" (AIP) in "Review of Scientific Instruments" (ISSN: 0034-6748 / DOI: 10.1063/1.5118855)
Download (Paper): DLR Elib
Watch Video of flight on Youtube
Release on balena.io Blog (09.10.2019)
Release on Hackaday (11.10.2019)
Release on thenewstack.io (03.11.2019)
balena in Space Talk (11.06.2021)
2nd generation apex Mk.2/Mk.3 studies
This is a very quick install for SonarQube on Ubuntu 18.04 LTS. I presume you have the latest Docker CE 18.09 and docker-compose 1.24 installed.
# create folders for sonarqube files and postgres
sudo mkdir -p /var/sonarqube/{conf,data,logs,extensions}
sudo chown -R 999:999 /var/sonarqube
sudo mkdir -p /var/sonarqube/postgres
# make folder for all Docker files in home
mkdir ~/sonarqube
cd ~/sonarqube
# create docker-compose.yml with following content
version: '3.1'
services:
  db:
    image: postgres:9.6-alpine
    restart: unless-stopped
    volumes:
      - /var/sonarqube/postgres:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=sonar
      - POSTGRES_PASSWORD=sonar
  sonarqube:
    image: sonarqube:6.7-community
    ports:
      - 9000:9000
      - 9092:9092
    restart: unless-stopped
    volumes:
      - /var/sonarqube/conf:/opt/sonarqube/conf
      - /var/sonarqube/data:/opt/sonarqube/data
      - /var/sonarqube/logs:/opt/sonarqube/logs
      - /var/sonarqube/extensions:/opt/sonarqube/extensions
    environment:
      - SONARQUBE_HOME=/opt/sonarqube
      - SONARQUBE_JDBC_USERNAME=sonar
      - SONARQUBE_JDBC_PASSWORD=sonar
      - SONARQUBE_JDBC_URL=jdbc:postgresql://db/sonar
# launch
docker-compose up -d
You can then access your SonarQube instance on http://<ServerIP>:9000 with the credentials admin/admin.
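If the page does not come up right away, SonarQube may still be starting; you can watch the logs of the service defined in the compose file above:
# follow the SonarQube container logs during the first boot (run inside ~/sonarqube)
docker-compose logs -f sonarqube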
For checking the quality of my private programming code, I have been using the free edition of SonarQube for multiple years. It is actually a cool project with one massive flaw: they allowed MySQL as the database backend - but not MariaDB. This bothered me quite a bit, as I parted ways with MySQL years ago - with this one exception.
Some days ago I stumbled upon the announcement that SonarQube will not fix this long-standing issue, but will instead drop MySQL support completely. So the only choice is to migrate to Oracle, MS SQL or PostgreSQL.
They even provide a tool called mysql-migrator for this purpose. However, it did not work for me: it kept complaining that it could not detect the schema version.
Long story short: if you want (or, in my case, must...) switch from MySQL to PostgreSQL, use pgloader, which is available as an Ubuntu package - all the info is here.
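Installing it is a one-liner on Ubuntu:
# pgloader comes straight from the Ubuntu repositories
sudo apt install -y pgloader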
I just set up a fresh PostgreSQL 9.6 in Docker using the postgres:9.6-alpine image, fired it up and then migrated the data with the following command:
pgloader mysql://<mysqluser>:<mysqlpassword>@<mysqlserverip>:3306/sonar pgsql://<psqluser>:<psqlpassword>@<psqlserverip>/sonar
With everything running in Docker, I had to play around a bit with the correct IPs, ports and permissions to get the tool working, but once that was done, everything worked fine:
table name read imported errors total time
------------------------------ --------- --------- --------- --------------
fetch meta data 180 180 0 0.357s
Create Schemas 0 0 0 0.001s
Create SQL Types 0 0 0 0.004s
Create tables 106 106 0 2.842s
Set Table OIDs 53 53 0 0.015s
------------------------------ --------- --------- --------- --------------
sonar.active_rules 1993 1993 0 0.116s
sonar.active_rule_parameters 268 268 0 0.107s
sonar.ce_activity 1 1 0 0.074s
sonar.ce_scanner_context 0 0 0 0.040s
sonar.ce_task_input 0 0 0 0.025s
sonar.analysis_properties 0 0 0 0.157s
sonar.duplications_index 0 0 0 0.020s
sonar.events 349 349 0 0.173s
sonar.groups 2 2 0 0.330s
sonar.group_roles 12 12 0 0.433s
sonar.ce_queue 0 0 0 0.033s
sonar.issues 7508 7508 0 1.546s
sonar.ce_task_characteristics 0 0 0 0.029s
sonar.default_qprofiles 9 9 0 0.162s
sonar.es_queue 0 0 0 0.143s
sonar.file_sources 500 500 0 1.733s
sonar.loaded_templates 13 13 0 1.498s
sonar.metrics 246 246 0 1.564s
sonar.organizations 1 1 0 1.744s
sonar.org_qprofiles 26 26 0 1.722s
sonar.perm_templates_groups 4 4 0 1.724s
sonar.groups_users 3 3 0 1.283s
sonar.perm_tpl_characteristics 0 0 0 1.673s
sonar.projects 542 542 0 1.850s
sonar.internal_properties 2 2 0 1.384s
sonar.issue_changes 501 501 0 1.522s
sonar.manual_measures 0 0 0 1.306s
sonar.notifications 0 0 0 1.297s
sonar.organization_members 2 2 0 1.504s
sonar.project_links 0 0 0 1.532s
sonar.permission_templates 1 1 0 1.453s
sonar.project_qprofiles 0 0 0 1.483s
sonar.perm_templates_users 0 0 0 1.417s
sonar.qprofile_changes 2001 2001 0 1.616s
sonar.plugins 13 13 0 1.366s
sonar.qprofile_edit_users 0 0 0 1.545s
sonar.project_branches 3 3 0 1.356s
sonar.quality_gate_conditions 4 4 0 1.353s
sonar.rules_metadata 1763 1763 0 1.311s
sonar.project_measures 35940 35940 0 1.792s
sonar.rules_profiles 26 26 0 1.503s
sonar.schema_migrations 495 495 0 1.369s
sonar.users 3 3 0 1.497s
sonar.user_tokens 5 5 0 1.489s
sonar.properties 8 8 0 0.941s
sonar.qprofile_edit_groups 0 0 0 0.924s
sonar.quality_gates 1 1 0 0.834s
sonar.rules 1866 1866 0 1.069s
sonar.rules_parameters 278 278 0 1.021s
sonar.rule_repositories 21 21 0 0.999s
sonar.snapshots 280 280 0 1.047s
sonar.user_roles 0 0 0 0.960s
sonar.webhook_deliveries 0 0 0 0.942s
------------------------------ --------- --------- --------- --------------
COPY Threads Completion 4 4 0 3.049s
Create Indexes 127 127 0 22.308s
Index Build Completion 127 127 0 1.460s
Reset Sequences 33 33 0 0.051s
Primary Keys 51 51 0 0.051s
Create Foreign Keys 0 0 0 0.000s
Create Triggers 0 0 0 0.001s
Install Comments 0 0 0 0.000s
------------------------------ --------- --------- --------- --------------
Total import time 54690 54690 0 8.412s