Burkhard on Qt Embedded Systems: No. 5
Welcome to Episode 5 of my newsletter on Qt embedded systems. Bavaria is now on day 41 of the Corona lockdown. I used the slight easing of the restrictions on Monday this week to go hiking in the mountains. And, nature didn't disappoint: fire salamanders, marmot families playing in the spring sun and flowers galore. It felt so good to be outside again. I hope you can also enjoy some revitalising normality, however short. Most importantly, stay safe and take care of yourself and your loved ones. - Burkhard 💜
My Blog Posts
Docker Builds from QtCreator.
The post is also featured on the official Qt Develop Blog. I use this QtCreator-Docker integration in my daily work for developing the operator terminal of a metal sheet bender.
The common way to cross-build Qt applications for an embedded Linux system from QtCreator is to install the Yocto SDK, source the environment script and start QtCreator in this special environment. I described this approach in the post CMake Cross-Compilation Based on Yocto SDK. This approach works well as long as you can work in a single build environment.
When you want to build the same Qt application for your development PC, things start to get murky. You can hack the cross-build environment such that you can run native builds, too. Clever hacking won't help you, however, when you upgrade to a new Yocto version in a year or two while still maintaining the older version. It won't help you either when you migrate your system to a new SoC. A different Yocto version means a different build environment and often a different version of the Linux distribution on your build PC.
Ideally, you would like to hide the build environment completely from QtCreator. On a development PC, QtCreator calls CMake to create the Makefiles, to build and to install the application. Instead of calling CMake in the environment of your development PC, you want to call CMake in the build environment. The build environment becomes a black box with an interface for CMake calls. This black box is realised by a Docker container. The CMake calls are then executed inside the container.
In the post, I walk you through the solution step by step. The solution integrates QtCreator with a Docker container holding a cross-build environment. QtCreator has no idea which cross-build environment is inside the container. When you are done setting up a Docker-ised kit in QtCreator, you can press Ctrl+R (Run) to build your Qt application for the target embedded system, deploy it to the target and run it there. It behaves exactly as on your local development PC.
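In practice, the integration boils down to pointing QtCreator's kit at a small wrapper script instead of the native cmake binary. A minimal sketch, assuming a hypothetical image name yocto-sdk and workspace layout (the post has the real details):

```shell
#!/bin/sh
# Hypothetical wrapper "cmake-docker.sh": configure QtCreator's kit to use this
# script as its CMake executable. The image name "yocto-sdk" and the mounted
# workspace path are illustrative assumptions.
#
# -v mounts the sources and build tree at the same path inside the container,
# -w runs cmake in the current build directory, and "$@" forwards QtCreator's
# original CMake arguments unchanged.
exec docker run --rm \
  -v "$HOME/workspace:$HOME/workspace" \
  -w "$PWD" \
  yocto-sdk cmake "$@"
```

Because the script forwards all arguments, QtCreator's configure, build and install steps all run unchanged inside the container.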
FOSS Qt Releases Delayed by up to 12 Months?
Three weeks ago, Olaf Schmidt-Wischhöfer - one of the four board members of the KDE Free Qt Foundation - caused a bit of a ballyhoo with the following statement on the KDE mailing list (emphasis mine).
"[…] last week, [The Qt Company] suddenly informed both the KDE e.V. board and the KDE Free QT Foundation that the economic outlook caused by the Corona virus puts more pressure on them to increase short-term revenue. As a result, they are thinking about restricting ALL Qt releases to paid license holders for the first 12 months."
The Qt Company issued a non-denial denial within a day: the statement does "not reflect the views or plans of The Qt Company".
As KDE and The Qt Company are negotiating a new SLA for the open-source version of Qt 6, this may just be sabre rattling. The Qt Company may have threatened with a 12-month delay of FOSS Qt releases, because KDE demands "immediate Free Software releases of Qt" (also in Olaf's email).
I discuss the ramifications of a possible 12-month delay and suggest a better sales strategy in my post.
So far, these statements sound like negotiation tactics from both sides. An announcement on 28 April seems to corroborate this interpretation. The Qt Company announced lower prices for the Qt for Application Development license and the unbundling of Qt extensions like Qt Design Studio, M2M protocols, Qt Charts, Qt Safe Render, etc. In the section Less Love for FOSS Qt Users of this newsletter, I suggested a better pricing strategy for Qt. It seems that The Qt Company is starting to move in the right direction.
Professional CMake: A Practical Guide (6th Edition) by Craig Scott.
Craig updated his CMake book to cover version 3.17. He added a brand new chapter about Working With Qt (Chapter 30). I had the pleasure of reviewing the new Qt chapter. It is full of great advice and well thought-out examples - like the rest of the book. Although I have gained quite a bit of experience with CMake, I learned a few new tricks.
Calling find_package for each Qt module separately may find Qt modules from different versions. It's better to call it once for all modules.
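In CMake terms (the module list is illustrative):

```cmake
# One find_package call pins all Qt modules to the same Qt installation.
find_package(Qt5 5.12 REQUIRED COMPONENTS Core Gui Widgets)

# Instead of the error-prone per-module variant:
#   find_package(Qt5Core REQUIRED)
#   find_package(Qt5Widgets REQUIRED)  # may resolve to a different Qt version
```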
If you switch on CMAKE_AUTOMOC, CMake generates a file mocs_compilation.cpp, which includes all source files generated by moc. This file can quickly become huge, take a long time to compile, and become the bottleneck of your build. Craig gives a solution with qt5_wrap_cpp and a custom target, where CMake generates a separate moc source file for each moc header.
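Craig's recipe involves a custom target; the core of the idea can be sketched like this (target and header names are illustrative):

```cmake
# Sketch: per-header moc files instead of AUTOMOC's single collated
# mocs_compilation.cpp, so the generated files compile in parallel.
set(CMAKE_AUTOMOC OFF)

# qt5_wrap_cpp generates one moc_*.cpp per listed header.
qt5_wrap_cpp(moc_sources
    mainwindow.h
    device.h
)

add_executable(app
    main.cpp mainwindow.cpp device.cpp
    ${moc_sources}
)
target_link_libraries(app PRIVATE Qt5::Widgets)
```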
Qt provides the CMake macros qt5_create_translation and qt5_add_translation to generate the .ts files and to compile the .ts files into .qm files, respectively. Of course, Craig gives an example CMakeLists.txt file showing how to build and deploy translations.
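A minimal sketch, with illustrative file and directory names:

```cmake
# qt5_create_translation runs lupdate to refresh the .ts files from the
# sources and then lrelease to compile them into binary .qm files.
find_package(Qt5 REQUIRED COMPONENTS LinguistTools)

qt5_create_translation(qm_files
    ${CMAKE_SOURCE_DIR}/src          # sources scanned for tr() strings
    app_de.ts app_fr.ts              # translation files to update and compile
)

# Make sure the .qm files are actually built, and deploy them.
add_custom_target(translations ALL DEPENDS ${qm_files})
install(FILES ${qm_files} DESTINATION share/app/translations)
```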
Qt comes with the deployment tools macdeployqt, windeployqt and androiddeployqt for macOS, Windows and Android. The best solution for Linux is to use the CMake install commands (CMake 3.14 or newer) or, for older versions, the workaround with QtCreatorDeployment.txt as described here.
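For Linux, a plain install-based deployment can look like this (target name and paths are illustrative):

```cmake
# Since CMake 3.14, install(TARGETS) picks sensible default destinations
# (e.g., ${CMAKE_INSTALL_BINDIR} for executables).
include(GNUInstallDirs)
install(TARGETS app)
install(DIRECTORY resources/ DESTINATION ${CMAKE_INSTALL_DATADIR}/app)
```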
As you probably know by now, CMake is the default build system for Qt 6. This new chapter prepares you very well for the future. As in my review of the 5th edition, my verdict is: Just buy the book! It's worth every penny.
Boot2Qt Embedded Qt5 Image and Toolchain Zeus Release by Boundary Devices.
Boundary Devices manufacture i.MX-based single-board computers (SBCs) and system-on-modules (SoMs). On 21 April, they released the Boot2Qt images and toolchains (SDKs) for the i.MX6, i.MX7 and i.MX8 - built with Yocto 3.0 "Zeus".
You can burn the image to an SD card or to the eMMC on the board. You can build your applications against the SDK and install them on the i.MX. This is good enough for trying out things. When you get serious about product development, you will build the image and SDK yourself - including your applications. The post has good instructions for both the impatient and the serious.
Qt Virtual Tech Con 2020.
This year's Qt World Summit was cancelled because of the COVID-19 pandemic. The Qt Company offers a virtual conference on 12-13 May instead. The conference is free. This saves you the usual conference fee of 600-800 Euros and the travel costs. The agenda looks promising, too. There are talks about embedded Linux (Yocto), Qt on MCUs, QML, 3D, Qt 6, licensing and more. So, there are no excuses not to attend 😉
Open Source Summit + Embedded Linux Conference North America 2020.
Because of the COVID-19 pandemic, this conference will be a virtual event this year from 29 June to 2 July. The fee has dropped from the usual 800-1200 USD down to a 50-USD steal. The conference offers 230 sessions with live speaker Q&A. LinuxGizmos has cherry-picked a short list of embedded-focused talks: deep learning packages for the edge, MIPI DSI as display interface, OpenAMP for communication between Cortex-A and Cortex-M on hybrid SoCs, IoT networks, etc.
Mastering Embedded Linux (Parts 1-4) by George Hilliard.
In an amazing four-part series, George shows how to turn a Raspberry Pi Zero W into a Wifi access point. He uses Buildroot to build the embedded Linux image for the Pi - with six shell commands. The posts are extremely well-written and very easy to follow along. The reward is a DIY Wifi access point 😃
Part 1 - Concepts. The first part lays out the basic concepts of an embedded Linux system like microprocessor, RAM, storage (SD, eMMC, NOR/NAND flash), bootloaders, Boot ROM, kernel and user space. If the Boot ROM supports USB, it lets you download a Linux image onto your device. This neat trick makes your device nearly unbrickable. Once you have Linux running on a board, the rest is business as usual. In George's words: "every software component running on [embedded] Linux is nearly identical to the version you'd run on a desktop!"
Part 2 - Hardware. The second part presents three options for getting started with embedded Linux systems.
Buy it - single board computers. This is the easiest and quickest way to get started - especially for "softies" like me. George discusses the pros and cons of two low-cost SBCs: the Raspberry Pi Zero and the Orange Pi Zero Plus2. Although the Orange Pi has more connectivity options and an eMMC instead of a failure-prone SD card, he chooses the Raspberry Pi Zero for this series. The Raspberry Pi has much better documentation and community support.
Hack it - repurposed hardware. Using an IP camera as an example, George gives a quick tour of how to reverse engineer Linux devices. The idea is to run your own applications or even your own Linux system on devices like IP cameras and Internet routers.
Build it - custom boards. Yes, this is what you think. You assemble your own PCB. The "simplest" way is to put a SoM on a PCB. The hardcore way is to put a processor and some peripherals on the PCB. This approach yields truly low-cost devices. George built a Linux-powered business card for less than 1.50 USD.
Part 3 - Buildroot. The third part explains how to build a Linux image for the Raspberry Pi Zero W with Buildroot, how to burn it on an SD card and how to run it on the Pi. The whole process takes only six commands.
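The exact commands are in George's post; with Buildroot's standard workflow they look roughly like this (/dev/sdX is a placeholder for your SD card device):

```shell
# Rough sketch of the six-command Buildroot workflow (George's exact commands
# may differ slightly).
git clone https://git.buildroot.net/buildroot
cd buildroot
make raspberrypi0w_defconfig   # select the Pi Zero W board configuration
make                           # builds toolchain, kernel and rootfs image
sudo dd if=output/images/sdcard.img of=/dev/sdX bs=1M
sync
```

The first make configures everything; the second one can take an hour or more, as Buildroot builds the cross-toolchain, the kernel and all packages from source.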
Buildroot is an alternative to Yocto. It uses GNU make for the build process, whereas Yocto comes with its own build system (BitBake). From my own experience, I know that the learning curve for Yocto is much steeper than for Buildroot.
In a build configuration like raspberrypi0w_defconfig, you define which packages go into the Linux image. For each package, you can enable or disable certain features. Each package provides a GNU Makefile for building. Buildroot provides over 2000 packages.
Part 4 - Adding features. In the fourth part, you learn how to turn your Raspberry Pi into a Wifi access point. You need not write a single line of code. You only add two packages, a kernel driver and some configuration and init files to the Linux image from part 3. With the help of the terminal UI menuconfig, you tell Buildroot to include the packages hostapd and dnsmasq and the kernel module brcmfmac in the Linux image.
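In Buildroot's .config, the menuconfig choices for the two packages correspond to symbols like these (real Buildroot symbols; the exact set George enables may differ):

```
# Buildroot configuration fragment for the two access-point packages.
BR2_PACKAGE_HOSTAPD=y        # the access-point daemon
BR2_PACKAGE_DNSMASQ=y        # DHCP and DNS for the connected clients

# brcmfmac is a Linux kernel module for the Pi's Broadcom Wifi chip. It is
# enabled in the kernel configuration (e.g., via make linux-menuconfig),
# not in Buildroot's own .config.
```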
Then, you figure out the right configuration and init files directly on the Raspberry Pi or on a Linux desktop system. Once your setup works, you add the files as an overlay to Buildroot. An overlay is a directory tree that Buildroot adds to the root file system after finishing the build.
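A minimal sketch of such an overlay, with a placeholder hostapd.conf (not a complete access-point configuration) and the matching Buildroot option:

```shell
# Build an overlay tree: everything below overlay/ is copied verbatim into
# the root file system after the build. File contents are illustrative.
mkdir -p overlay/etc
cat > overlay/etc/hostapd.conf <<'EOF'
interface=wlan0
ssid=MyPiAccessPoint
wpa=2
EOF

# Point Buildroot at the overlay directory (normally set via menuconfig;
# written to a local fragment file here for illustration).
echo 'BR2_ROOTFS_OVERLAY="overlay"' >> buildroot.config.fragment

ls overlay/etc
```

On the next make, Buildroot copies overlay/etc/hostapd.conf to /etc/hostapd.conf in the image.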
If you followed along, you can connect your computer or phone with your brand new Wifi access point running on a Raspberry Pi. How cool is that!
Patterns for Managing Source Code Branches by Martin Fowler.
The authors of the book Accelerate (see my review below) present empirical evidence that using version control, continuous integration (CI), trunk-based development and test automation leads to better software delivery performance. Martin describes continuous integration as a composite pattern made up of basic branching patterns (Mainline and Healthy Branch) and trunk-based development (a.k.a. Mainline Integration) with high-frequency integrations.
The Continuous Integration pattern is a tool-independent recipe for implementing CI in your team. It also explains why CI boosts your team's performance.
Branching is defined by three base patterns.
Source Branching defines what a branch is.
The Mainline is the master branch in git or the trunk in svn that holds the source code shared by the team.
A Healthy Branch is a defect-free branch containing only commits that have passed thorough testing. The mainline must be healthy. Other branches should be healthy before you integrate them into the mainline.
You can achieve healthy branches only with self-testing code. This practice means that you write a "comprehensive suite of automated tests" in addition to the production code. Test-Driven Development (TDD) is an excellent way of writing self-testing code.
The base patterns are the building blocks for the integration patterns.
Mainline Integration is also known as trunk-based development. You work on a local copy of the master branch and commit one or more changes to the local master branch. Finally, you merge your local master branch back into the mainline. While merging you may have to resolve some conflicts.
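The pattern can be played through with plain git in a throwaway sandbox; the bare origin.git repository stands in for the shared mainline, and all names are illustrative:

```shell
# Toy demonstration of mainline integration: "origin.git" plays the shared
# mainline, "work" is the developer's local copy.
set -e
rm -rf cd-demo && mkdir cd-demo
git init --bare cd-demo/origin.git
git clone cd-demo/origin.git cd-demo/work 2>/dev/null

# Identity for the demo commits (repo-local, throwaway values).
git -C cd-demo/work config user.email dev@example.com
git -C cd-demo/work config user.name Dev

# A small, self-contained change on the local copy of the mainline...
echo "small change" > cd-demo/work/change.txt
git -C cd-demo/work add change.txt
git -C cd-demo/work commit -m "small, self-tested change"

# ...integrated back into the mainline right away (high-frequency integration).
git -C cd-demo/work push origin HEAD 2>/dev/null
git --git-dir=cd-demo/origin.git log --oneline
```

With real teammates, a pull (and possibly a conflict resolution) would precede the push; with commits this small, those merges stay trivial.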
With Feature Branching, you create an extra branch from your local master branch. When your feature is complete after a couple of commits to the feature branch, you merge the feature branch back into the mainline - resolving possible conflicts.
You have probably heard that Mainline Integration is good and Feature Branching is bad. Neither of these two patterns is good or bad per se. They are good if you merge your branch into mainline frequently - best after every commit. They are bad if you commit many changes locally before you merge them into the mainline. Low-frequency integration leads to difficult conflict resolutions with long debugging sessions. High-frequency integration of small chunks of work gives you quick feedback and allows you to find the problem by a simple diff.
Continuous Integration is Mainline Integration with high-frequency integrations. Every healthy commit is integrated back into the mainline. Healthy commits typically represent parts of a feature.
Note that Martin's post is work in progress. He will regularly add new patterns. So, it makes sense to check back in regularly or to follow Martin on Twitter. His handle is @martinfowler.
C++ Standards Support in GCC and C++ Support in Clang. The articles provide tables showing which C++11, C++14, C++17 and C++20 features are supported by which GCC or Clang version. For example, GCC 7 supports structured bindings, if and switch with initialisation, inline variables and almost all other C++17 features. Qt 6 will be based on C++17. So, make sure that your Qt embedded projects use at least GCC 7 or Clang 5.
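To prepare, you can let CMake enforce both the standard and a capable compiler; a sketch using the version numbers from the articles:

```cmake
# Require C++17 without compiler-specific extensions.
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF)

# Fail early on compilers with incomplete C++17 support.
if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU"
   AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 7)
    message(FATAL_ERROR "GCC >= 7 is required for C++17.")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Clang"
       AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 5)
    message(FATAL_ERROR "Clang >= 5 is required for C++17.")
endif()
```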
Accelerate - Building and Scaling High Performing Technology Organizations by Nicole Forsgren, Jez Humble and Gene Kim. In my review, I focus on Chapters 1, 2 and 4 of Part I: What We Found. Chapter 1 gives the context of the study. Chapter 2 explains how to measure the performance of software development teams. Chapter 4 provides the main results of the study (see Figure 4.2).
The remaining chapters of Part I delve into the capabilities shown in Chapter 4 to have a strong positive influence on team and organisation performance. These capabilities include architecture, integration of infosec, lean management and a sustainable work pace.
In Part II: The Research, the authors explain the methodology of their study: the science behind the book. Part III: Transformation shows how to use the findings from Part I to transform an organisation. The single chapter High Performance Leadership and Management of Part III is written by different authors, Steve Bell and Karen Whitley Bell, two pioneers of Lean IT.
Accelerate (Chapter 1)
In 2001, the Agile Manifesto laid out four values and twelve principles for developing software in better ways: faster, with higher quality and with more respect for people. The signatories of the Agile Manifesto are practitioners who deduced these values and principles from their daily work in software projects. Many of us know from experience that teams following these principles perform better than teams ignoring them. Nevertheless, we often end up in organisations that ignore the Agile values and principles - mostly with bad consequences for businesses and people.
So far, we had only anecdotal evidence. The book Accelerate provides empirical evidence for which practices lead to high-performing teams and organisations. The authors have collected over 23,000 survey responses from over 2,000 organisations, which cover all sizes and industries. They explain their approach in Part II: The Research - for everyone to refute or to confirm. With their statistical models, the authors can predict the performance of a team by looking at how well teams implement practices like deployment and test automation, continuous integration, loosely coupled architecture and empowered teams.
Measuring Software (Chapter 2)
Common performance measures (e.g., lines of code, hours worked, Scrum velocity) suffer from two flaws: "First they focus on outputs rather than outcomes. Second, they focus on individual or local measures rather than team or global ones."
Short excursion: As a solo consultant, I find the first sentence very interesting. It distinguishes hourly billing (output focused) from value-based billing (outcome focused). I can use the performance measures and the best practices improving performance to write value-based proposals. They help me to estimate the ROI or value of a project in a proposal and to measure the value during the project.
Back to defining a performance measure: The authors suggest four criteria for software delivery performance.
Product delivery lead time is the time "it takes to go from code committed to code successfully running in production". The survey participants could choose between seven durations ranging from less than one hour to more than six months.
Deployment frequency denotes how often the software is deployed into production. Again the participants had seven options ranging from several times per day over once per month to fewer than once every six months.
Mean time to restore is the time the team needs to fix a bug in a product and deploy the fix. The options are the same as for the lead time.
Change fail percentage gives the percentage of changes that require a fix later.
The study shows that the shorter the lead time and the mean time to restore, the higher the deployment frequency and the lower the change failure rate, the higher the performance of the team is (see tables 2.2 and 2.3).
The study also disproves the wide-spread dogma in the software industry that teams can only go faster if they compromise quality. In the words of the authors: "Astonishingly, these results demonstrate that there is no trade-off between improving performance and achieving higher levels of stability and quality. Rather, high performers do better at all of these measures."
This finding should also put an end to the tedious discussion about technical debt. We must not take on any technical debt, that is, sacrifice quality for speed. We can have both speed and quality at the same time.
Higher software delivery performance also leads to higher organisational performance (see Figure 2.4). Higher organisational performance in turn leads to higher returns on investment (ROI) and to higher resilience against economic downturns.
Technical Practices (Chapter 4)
Continuous Delivery (CD) enables teams "to get changes of all kinds [...] into production or into the hands of users safely, quickly, and sustainably." CD is based on five principles:
Build quality in.
Work in small batches.
Computers perform repetitive tasks; people solve problems.
Relentlessly pursue continuous improvement.
Everyone is responsible.
The CD principles suggest that we take many small steps, that we inspect the outcome of each step and adapt the next step when the outcome doesn't match our expectations. Small steps minimise the costs of missteps. Like Scrum and XP, continuous delivery introduces multiple inspect-and-adapt cycles or feedback loops such that we can deliver "high-quality software [...] more frequently and more reliably".
The authors identify nearly a dozen capabilities that have a strong influence on continuous delivery (see Figure 4.1). These capabilities include automated deployment, automated testing, continuous integration, trunk-based development, loosely-coupled architecture and empowered teams. These capabilities match pretty well with the capabilities in The Joel Test or The Matthew Test. The more CD capabilities a team has, the better it does on Continuous Delivery.
The results of the book can be summarized in the following statements (see Figure 4.2).
The better a team does on Continuous Delivery,
the better it does on Software Delivery performance.
the less rework it must do.
the better the organisational culture (e.g., less pain, less burnout).
the stronger the identification with the organisation is.
The better teams do on Software Delivery Performance, the better the whole organisation performs.
One question must not remain unanswered: How much influence does Continuous Delivery have on software quality?
Measuring software quality is similarly tricky as measuring team performance. Test coverage, for example, doesn't tell you much about software quality. The authors settled on measuring quality by the percentage of time spent on new work, on unplanned work or rework, and on other work (e.g., meetings). The rationale is that teams spend more time on unplanned work or rework when the change failure rate is higher and the mean time to restore is longer.
The results are revealing (see Figure 4.4). High-performing teams spend 30% of their time on other work, 21% on unplanned work and 49% on new work. For low-performing teams, the split is 35%, 27% and 38%, respectively. All in all, high-performing teams spend 11 percentage points more of their time on new work than low-performing teams. That is an additional half day per week for new work!