Episode 23: Burkhard On Qt Embedded Systems
Welcome to Episode 23 of my newsletter on Qt Embedded Systems!
In October, I gave 6 legacy code workshops and 2 license compliance workshops, and started a new coaching project. I started the Win Without Pitching Bootcamp last week and gave the talk Hexagonal Architecture: The Standard for Qt Embedded Applications at Meeting Embedded 2021. It has been a bit too much. So, please accept my apologies for being one week late with my newsletter.
Enjoy reading and take care - Burkhard 💜
My Talk
I gave the talk Hexagonal Architecture: The Standard for Qt Embedded Applications at Meeting Embedded 2021. I argue that the hexagonal architecture (a.k.a. the ports-and-adapters architecture) should be the standard architecture for Qt embedded systems. I show an implementation of the architecture for a simple harvester terminal. The architecturally significant requirement for this walking skeleton is showing the engine speed in the HMI.
My Thoughts on Self-Learning Car Navigation
When I bought my BMW 1 seven years ago, I added an optional infotainment system. It was the basic version for €2,500 without built-in LTE modem and without Apple CarPlay or Android Auto. These features were only available in the full version for €4,000. My system had built-in navigation, FM radio and hands-free telephone (via Bluetooth).
In each of these 7 years, I drove 30 times on average to the nearby mountains for hiking or cross-country skiing. I have a dozen favourite areas that I reach via 5 routes. Each area may have 5-10 parking places. I know the routes to my favourite areas by heart. Only the last 10-20km may differ, because I want to start from a different parking place. The parking places often don't have an address that the car navigation knows. I often navigate "the last valley" with a combination of the (printed) hiking map and the phone.
The Nav app stores the last 20 destinations. However, it doesn't store the route to these destinations. When I select one of the destinations for the next trip, the Nav app will give me its best routes, which rarely match mine. After 7 years, I know the traffic situation and road conditions a lot better than the Nav app. Moreover, quickest or shortest isn't the most important criterion after an 8-10 hour hike. I prefer the most comfortable route even if it takes 10 minutes longer. I'd love a button "Take me home the same way I got here".
A self-learning Nav app would record the routes to the destinations. If I go to the same or a nearby destination, the Nav app will suggest the recorded route as the first choice. If there are considerably faster or shorter routes, it can suggest them as alternatives. Alternative routes that largely overlap with recorded routes are preferred. The Nav app would also record the user's choices between alternatives and use them as weights for the next selection. This helps improve the heuristics for fuzzy terms like "nearby destination", "considerably faster or shorter" and "largely overlapping".
For the last 1.5 years, a bottleneck road on one of my standard routes has been blocked. The shortest detour is 20km around a mountain. As the built-in Nav app doesn't have an LTE modem, it doesn't know about the road blockage (the phone does!). When I reached the roadblock the first time, I took the detour (signposted to the next town but not to my final destination). Of course, the Nav app wanted to get me back to the blocked road for the next 10 minutes - until the distance to the destination was shorter over the detour.
A helpful Nav app would have asked whether there is a roadblock or whether I just missed the turn. Depending on the answer, it would have calculated a detour without further intervention or taken me back. The next time I take the route with the roadblock, the Nav app would ask upfront whether the blockage is still there. It could suggest a detour in advance, which would be shorter than the detour starting directly at the roadblock. Or, it could learn this detour from my changed driving habits.
Of course, my phone knows the detour and is connected via Bluetooth with the Nav app for hands-free telephony and music streaming. The Nav app could use the Bluetooth link to request roadblocks and other traffic information on the chosen route. When I find a parking place on the phone at home, I'd like to use the same link to flick the GPS coordinates to the car's Nav app. Such functionality would have been available if I had shelled out €4,000 for the full version. Not to forget: every map update costs €120.
Currently, there is no good solution with a fair price on the market. Nav apps in cars, on phones and on navigation devices lack basic self-learning features.
A good solution - for the user - would be an app running on a phone and mirroring its display to the car display with Apple CarPlay or Android Auto. In contrast to the displays of phones or navigation devices, most car displays are easily readable in daylight.
This new Nav app gets the same sensor data from the car that the built-in Nav app gets. This would, for example, allow it to understand road signs (e.g., detours, signposts) through the car's camera. It gets up-to-date maps, the current traffic situation and road conditions through the phone's Internet connection. The app fuses all this information with its self-learning features to suggest the best routes, where the user's driving habits define what best means. This comes at no extra cost for users. It's all included in the car's price.
We arrive at such a self-learning HMI by listening to users. We must understand the common usage scenarios and make them easier or automate them with the help of available information. Our guiding questions should be: What steps do users perform to reach an objective? What information is needed to simplify or automate these steps? Where do we get this information?
Reading
Jürgen Bocklage-Ryannel (ApiGear): How to scale QML based projects - an architecture vision (video)
The radio application of an in-car infotainment system sends commands to the radio tuner service. The user triggers commands like switching to another station, scanning for stations or increasing the volume. The radio tuner service performs these actions on the radio tuner hardware.
Jürgen suggests
to define the interface between application and service in a special interface definition language and
to generate the code for the interface and its implementation with different communication mechanisms.
The second point needs some explanation. The generated code depends on where the service runs (see my talk A Successful Architecture for Qt Embedded Systems, minute 26:39 in the video or slide 11 in the presentation).
The service can run in the same process as the application. It can run in the same thread or a separate thread of this process.
The service can run in a separate process on the same network node and communicate with the application via Qt Remote Objects or D-Bus.
The service can run on a different node and communicate with the application via Qt Remote Objects, MQTT or CoAP.
The service can run on the microcontroller of hybrid SoCs like the i.MX8 and communicate with the application on the microprocessor via RPMsg.
Generating the code from interface definitions brings a couple of advantages.
We can generate code for testing, for simulation and for the product - using different means of communication. Writing this code by hand would be repetitive, tedious and error-prone. Generated code is correct by construction.
The application and the service are often developed by different teams. The interface helps synchronise the two teams. Each team can use a simulator or a mock of the other team's part for testing. The interface not only decouples the application and the service but also the teams (Conway's law in action).
I used this approach for two harvester terminals - on a lower layer. Terminals and the other ECUs in a harvester are separated by a "natural" interface: the CAN bus. They use the J1939 protocol for communication. The two harvester OEMs had an Excel table or an XML document specifying all the J1939 messages. We generated the code for decoding and encoding the J1939 messages from these files and enjoyed the same advantages as above.
Jürgen bets his startup ApiGear on this approach. I'd say that this is a pretty good bet!
Kamel Bouhara (Bootlin): Another system update adventure with RAUC, Barebox & Yocto Project
When VW first released its ID.3, the software wasn't quite ready. They parked 25,000 cars in parking lots all around Wolfsburg. As the software for OTA updates wasn't ready either, they had to update 25,000 cars manually. What a disaster!
The lesson for the rest of us: If one thing must work reliably on an embedded system, it's the OTA update. It enables OEMs to install new features or bug fixes quickly and at no cost. They don't have to send a technician to a harvester in the field or call cars back into the garage for software updates. This saves them and their customers good money.
The people at Bootlin have us covered when it comes to OTA updates. They have written very detailed posts about three OTA update solutions: RAUC, SWUpdate and Mender. They explain step by step how to set up the bootloader, the Linux kernel and the Yocto recipes for the client (the embedded Linux system), and how to set up the server (e.g., hawkBit or Mender). Here are the links to the three posts.
Kamel Bouhara: Another system update adventure with RAUC, Barebox & Yocto Project
Thomas Petazzoni: Building a Linux system for the STM32MP1: remote firmware updates (SWUpdate)
Martin Bond (Feabhas): Multi-Part Series about CMake
Martin is writing a multi-part series about CMake - so far with four parts. The series is a well-paced introduction to cross-compiling C and C++ programs for embedded devices using CMake. The focus is on microcontrollers, but the posts are also relevant for microprocessors.
Part 1 - Dark Arts
Martin walks us through a minimal CMakeLists.txt file for the application and a toolchain file for an STM microcontroller. He also gives the CMake commands for shadow builds.
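To make that concrete, here is a minimal sketch of such a project file with the shadow-build commands as comments. The target name App, the source file and the toolchain file name are my assumptions, not Martin's exact files.

```cmake
# CMakeLists.txt - a minimal application project (names are placeholders)
cmake_minimum_required(VERSION 3.16)
project(App LANGUAGES C CXX)

add_executable(App src/main.cpp)

# Shadow (out-of-source) build: all generated files go into a separate build directory.
#   cmake -B build -DCMAKE_TOOLCHAIN_FILE=/path/to/toolchain-stm32.cmake
#   cmake --build build
```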
The toolchain file defines the C and C++ cross-compilers, the preprocessor options, the compiler options and the linker options. As the options are the same for all projects we want to build, we use add_compile_definitions, add_compile_options and add_link_options.
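A toolchain file along those lines might look like the following sketch. The arm-none-eabi compilers, the Cortex-M4 flags and the STM32F407xx define are assumptions for illustration, not Martin's exact settings.

```cmake
# toolchain-stm32.cmake - cross-compilers plus options shared by every project
set(CMAKE_SYSTEM_NAME Generic)
set(CMAKE_SYSTEM_PROCESSOR arm)

set(CMAKE_C_COMPILER arm-none-eabi-gcc)
set(CMAKE_CXX_COMPILER arm-none-eabi-g++)
# Build a static library in the compiler check, as bare-metal linking needs a linker script.
set(CMAKE_TRY_COMPILE_TARGET_TYPE STATIC_LIBRARY)

add_compile_definitions(STM32F407xx)    # preprocessor options (hypothetical define)
add_compile_options(-mcpu=cortex-m4 -mthumb -ffunction-sections -fdata-sections)
add_link_options(-mcpu=cortex-m4 -mthumb -Wl,--gc-sections)
```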
Martin suggests using these add_* commands in the application's CMakeLists.txt file as well. I think there is a better solution. As these options are project-specific or even configuration-specific, we should use their target_* counterparts instead.
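In the application's CMakeLists.txt, this could look roughly like the sketch below, with hypothetical warning flags and a hypothetical linker script.

```cmake
# Project-specific and configuration-specific options stay on the target.
target_compile_definitions(App PRIVATE USE_HAL_DRIVER)
target_compile_options(App PRIVATE -Wall -Wextra -Werror)
target_link_options(App PRIVATE -T${CMAKE_SOURCE_DIR}/ldscripts/App.ld -Wl,-Map=App.map)
```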
Part 2 - Release and Debug Builds
In contrast to Ninja, GNU make doesn't support build configurations. Hence, we must call CMake for each configuration separately to generate the corresponding Makefiles. For the debug configuration, we would call CMake with the options -B build/debug -DCMAKE_BUILD_TYPE=DEBUG.
If compile definitions, link options or similar settings depend on the build configuration, we must use generator expressions instead of normal CMake variables. A CMake run consists of two separate steps: the configure step and the generate step. CMake variables are evaluated in the configure step, whereas generator expressions are evaluated in the generate step. The build configuration is only known in the generate step but not in the configure step.
Martin uses generator expressions to clean up the toolchain file and application CMakeLists.txt file from Part 1. For example, -Og is the optimisation level for the debug build, whereas -O3 is the level for the release build. This gives rise to the generator expressions $<$<CONFIG:DEBUG>:-Og> and $<$<CONFIG:RELEASE>:-O3>, respectively.
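In the toolchain file, this could look roughly like the following sketch; the extra -g3 debug flag is my assumption.

```cmake
# Pick the optimisation level per configuration. The generator expressions are
# evaluated in the generate step, where the build configuration is known.
add_compile_options(
  $<$<CONFIG:DEBUG>:-Og>
  $<$<CONFIG:DEBUG>:-g3>
  $<$<CONFIG:RELEASE>:-O3>
)
```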
The add_custom_command command allows us to run commands at different points during the build of a target: PRE_BUILD, PRE_LINK and POST_BUILD. Martin uses the POST_BUILD point to generate the hex file from the binary file. We could also use this command to generate special source and header files.
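Here is a sketch of both uses: the POST_BUILD form for the hex file and the OUTPUT form for a generated source file. The objcopy call and the make_version.cmake script are assumptions for illustration, not Martin's exact commands.

```cmake
# Run objcopy after every successful build of App to produce the hex file.
add_custom_command(TARGET App POST_BUILD
  COMMAND arm-none-eabi-objcopy -O ihex $<TARGET_FILE:App> App.hex
  COMMENT "Creating App.hex from the ELF binary"
)

# The OUTPUT form generates source files; here with a hypothetical generator script.
add_custom_command(OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/version.c
  COMMAND ${CMAKE_COMMAND} -P ${CMAKE_SOURCE_DIR}/cmake/make_version.cmake
  DEPENDS ${CMAKE_SOURCE_DIR}/cmake/make_version.cmake
  COMMENT "Generating version.c"
)
target_sources(App PRIVATE ${CMAKE_CURRENT_BINARY_DIR}/version.c)
```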
Part 3 - Source File Organisation
Martin introduces two libraries: system and middleware. The top-level CMakeLists.txt file has an add_subdirectory command for each library. The CMakeLists.txt files in the subdirectories use the add_library command to define the source files of the library.
As the middleware library is not used in every application, its add_subdirectory command is only processed if the custom option USE_RTOS is set to ON. The option -DUSE_RTOS=ON is added to the CMake call on the command line.
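A sketch of how the top-level CMakeLists.txt could wire this up; the application target App and the source file names are my assumptions.

```cmake
option(USE_RTOS "Build the middleware library with RTOS support" OFF)

add_executable(App src/main.cpp)        # hypothetical application target
add_subdirectory(system)                # always part of the build
target_link_libraries(App PRIVATE system)

if(USE_RTOS)                            # only with -DUSE_RTOS=ON on the command line
  add_subdirectory(middleware)
  target_link_libraries(App PRIVATE middleware)
endif()

# system/CMakeLists.txt defines the library and its sources, for example:
#   add_library(system STATIC src/startup.c src/syscalls.c)
```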
The section Configuring File Dependencies contains a noteworthy tip for the case where the build target depends on files that are not part of the project. Linker scripts for microcontrollers are an example.
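One way to express such a dependency (my sketch, not necessarily Martin's tip) is the LINK_DEPENDS target property, which relinks the target whenever the listed file changes.

```cmake
# Relink App whenever the linker script changes, even though it is not a source file.
set_target_properties(App PROPERTIES
  LINK_DEPENDS ${CMAKE_SOURCE_DIR}/ldscripts/App.ld
)
```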
Part 4 - Windows 10 Host
By default, CMake generates build files for Visual Studio's compilers and NMake. Martin shows how we must change the CMakeLists.txt files and the toolchain file so that CMake generates Makefiles, which are passed to GNU make from MinGW or Cygwin.
WSL2 and Docker are Martin's preferred solutions for cross-compilation. For both, the host development toolchain and the cross-compilation toolchain must be installed in the (Linux) container. Then, VS Code allows us to "store and edit the code on [the] Windows filesystem but run build commands in the WSL2 or Docker container".