Episode 9: Burkhard on Qt Embedded Systems
Welcome to Episode 9 of my newsletter on Qt Embedded Systems!
In August, I evaluated the C++ dependency analyser CppDepend. I hoped that the tool would help me to find structural problems in the code of my current project. It was a letdown.
Exclusively for this newsletter, I wrote down my experience of how to leverage usage rights to create intellectual property like productised services and even products. In the Reading section, you'll find a post about advisory retainers (subscriptions to your expertise), which are suitable for open-ended projects with vague goals - quite the opposite of productised services.
Moreover, the Reading section contains a description of where I am stuck with my Internet radio project on the Raspberry Pi, tips from Matthew Eshleman on how to prevent bugs from creeping into your code base, some tips about Git branches, and an article on how deep learning helps with weeds.
Enjoy reading and stay safe - Burkhard 💜
My Blog Posts
CppDepend: A C++ Dependency Analyser.
I am currently restructuring some legacy software. As I know the software pretty well by now, it took only a couple of hours to uncover more than 50 dependency cycles. I can't be sure that I found all cycles. Finding dependency cycles between classes in a code base should be an ideal task for a tool. A quick Internet search revealed two tools: CppDepend and SourceTrail. I had read some positive remarks about CppDepend from the C++ community on Twitter. So, I decided to evaluate CppDepend.
The evaluation results were sobering. CppDepend doesn't provide a ready-made query for finding dependency cycles. Due to the lacking documentation, it took me about a day of trial and error to figure out a query on my own. The query found 5 of the more than 50 known dependency cycles. The most likely reason for missing 90% of the cycles is that CppDepend parses the code base erroneously.
CppDepend didn't do any better at computing the recursive incoming and outgoing call graph of a function. The built-in functions returned only the directly called or calling functions. I find call graphs very useful for understanding unknown code. You'll find dozens of hand-drawn call graphs for each project in my notebooks. CppDepend wasted another chance to save me time.
My conclusion should be clear. I won't use CppDepend. I plan to look at other tools in the future. Candidates are SourceTrail, Understand from SciTools (🙏 Nishin K. Vasu) and even doxygen (🙏 David Sugar). Feel free to point out other tools for C++ dependency analysis to me.
My Thoughts on Leveraging Usage Rights for IP Creation
When you write a piece of code, you have the copyright to this code according to German and EU law. Neither employers nor customers (collectively called clients) can take the copyright away from you. You should always add your copyright notice at the beginning of a file. Nobody is allowed to remove this notice, although clients will try.
Usage rights are different from copyright. Usage rights control who is allowed to use the software you have written in which countries for how long. When you sign an employment contract, you typically waive the usage rights. You grant your employer perpetual exclusive usage rights in return for your monthly salary.
Especially solo and small service providers give up usage rights too easily. They even think that it's impossible to keep usage rights if they work on a time-and-material basis (a.k.a. hourly rate). Of course, you won't get usage rights for your customer's complete software. However, most of the time it's possible to define parts for which you can keep the usage rights. Let me give you three examples.
Widget libraries. For nearly every project, you will write "special" UI widgets: buttons, on-screen keyboards and keypads, menu hierarchies, custom input dialogs for parameters, gauges, and more. It turns out that the widgets are not so special after all and that you could use them in the next project as well.
Communication libraries. Operator terminals communicate with machines using protocols like J1939, OPC UA and Modbus. The terminals must decode the incoming messages, show them in the GUI or send them to cloud servers. They also encode user inputs into messages and send them to the machine. Such communication libraries can be reused in other projects.
I18n converters. The translation of the texts shown in the GUI into many different languages is handled by translation agencies. They rarely use QtLinguist to enter the translations. You need to write converters from Qt's ts-format to the formats used by the agencies. Again, these converters are useful in other projects as well.
Having identified reusable parts of the software enables you to give customers pricing options. For example:
Option 1: The customer and you have non-exclusive perpetual usage rights. You charge an hourly rate of R for N hours of work.
Option 2: The customer has exclusive usage rights for three years. After three years, the customer and you have non-exclusive perpetual usage rights. You charge an hourly rate of 1.15 * R for the same N hours of work as in Option 1.
Option 3: The customer has exclusive perpetual usage rights. You charge an hourly rate of 1.4 * R for the same N hours as in Options 1 and 2.
The multipliers for the hourly rate are fictitious. You must adapt them to the benefit the customer will gain from having partial or full exclusivity of the reusable part. It also depends on how often you can sell the reusable parts to other customers. Trust me, purchasers love to have pricing options (a.k.a. discounts) 😉
Usage rights give you leverage in negotiating your fees. They also enable you to build up your own intellectual property (IP). You can create productised services or even products over time. You spend time and money once to create IP and then you sell it multiple times with little or no extra effort. Your income becomes independent of the time you put in.
Qt Embedded Systems - Part 3: Updating from Thud to Dunfell (Postponed)
Some of you may be waiting for the 3rd part of my series on Qt Embedded Systems. I wanted to update my system step by step from Yocto Thud to Dunfell. I am sorry to let you down for a second month in a row 😢 Here is a description of the technical problems I am facing and some ideas on how to solve them.
I tried to get the radio application working on a Linux image built with Warrior. The application is displayed properly on an HDMI monitor, but the touch display connected to the Raspberry Pi 3B via a DSI ribbon cable is ignored. Additionally, the Pi's 3.5 mm audio jack is dead. This developer discussion confirms my observation and hints at a solution for Yocto releases after June 2020. This solution may get the Internet radio working for HDMI monitors, but it doesn't for DSI-connected touch displays.
The above solution adds the line dtoverlay=vc4-kms-v3d,noaudio,audio,audio1 to the config.txt file on the boot partition of the Pi's SD card. The file contains property definitions in the format <name>=<value> - very much like .ini files. You find the documentation of config.txt here. The Pi's GPU reads this configuration file "before the ARM CPU and Linux are initialised". It's a plain text file, which you can change from your desktop computer by mounting the Pi's SD card. The video options for HDMI, LCD touch display and composite video (TV) are described in the subsection Video options.
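As a sketch, a config.txt with the line from the solution above might look like this (the surrounding properties are illustrative examples of the <name>=<value> format, not a tested configuration):

```
# config.txt on the boot partition - read by the GPU before Linux boots.
# Lines are plain <name>=<value> properties, comments start with '#'.

# The overlay line from the solution discussed above:
dtoverlay=vc4-kms-v3d,noaudio,audio,audio1

# Illustrative examples of further properties (values are hypothetical):
gpu_mem=128
disable_overscan=1
```

Because it is a plain text file on a FAT partition, you can edit it from any desktop computer without booting the Pi.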
You can access the configuration values from a terminal on the Pi with the command vcgencmd get_config <name>. I checked the relevant properties (ignore_lcd, display_default_lcd, disable_touchscreen). They are all set so that the Pi should display the application on the touch display. But it doesn't.
A look into the /dev directory on the Pi reveals that there is only one framebuffer device fb0. The device fb0 seems to be used for HDMI displays. I must figure out how to create a second frame buffer device fb1 for the touch display or how to use fb0 for the touch display.
Following the advice from the post Display Troubleshooting, I found out that the Pi detects the touch display correctly, but the boot command line lacks the settings for the width and height of the framebuffer. If I replace dtoverlay=vc4-kms-v3d by dtoverlay=rpi-ft5406, both tests pass but the radio application is still not shown on the touch display. The post Device trees, overlays, and parameters may be useful in understanding overlays.
That's the state of affairs. I'll keep digging and hope to find a solution soon.
Stopping Bugs: Seven Layers of Defense by Matthew Eshleman (Cove Mountain Software).
This is another excellent article by Matthew. I learn a lot from his articles. He is definitely worth following on Twitter: @EshlemanMatthew.
In this post, Matthew gives seven tips on how to avoid bugs outright or how to find them quickly when they happen.
Layer 1: Requirements and Specifications. Write requirements without ambiguities and communicate them clearly.
Layer 2: Architecture and Design. The main question: How easily do architecture and design accommodate new and changed requirements?
Layer 3: The Compiler. Fix all compiler warnings, use static assertions and use static analysis tools. Run this analysis during CI.
Layer 4: Unit Tests. Cover your code with unit tests - best written using TDD. Run the unit tests during CI.
Layer 5: Asserts. Add assertions to the code in the form of design by contract (pre-conditions, invariants and post-conditions) and leave the assertions in your production code.
Layer 6: QA Automated Tests. Run automated blackbox tests of your embedded system. Although rarely used, these tests are the magic ingredient to release more frequently with higher quality.
Layer 7: QA Manual Testing. The QA team runs manual blackbox tests of the embedded system - similar to what the users would do.
Layer Oops: The Customer. We reach this layer when all our defenses have failed. Find out why the error fell through all defenses and prevent similar errors from occurring again.
Here are my thoughts. By investing a little bit more time in Layers 1 and 2, we reap huge returns. Clarifying a requirement takes a couple of minutes, whereas changing the implementation takes hours if not days. Getting the architecture wrong or having no architecture at all will lead to a partial or full re-implementation of the embedded system, if not the failure of the project.
@patti_gallardo asked on Twitter: "Which software development opinion have you changed your mind about as you became more experienced?"
@bjorn_fahller's answer: "TDD. I've always liked having automated tests, but letting the test drive the design was something I couldn't grok. I believed in careful upfront design, implementation and then tests to find bugs. Letting tests guide design and document behaviour is so much better."
I went through a similar transformation as Björn. TDD has become my most important and most effective layer of defense against errors. That's Layer 4 above with the mandatory use of TDD.
TDD will give us lots of high-quality unit tests. It will force us to clarify requirements, because we cannot write tests for ambiguous requirements. TDD helps us find simpler and easier-to-test interfaces and allows us to evolve our architecture and design through refactoring. We need hardly any assertions, because the exceptional and error cases are checked by unit tests. The rich suite of unit tests guides the QA team in writing system tests and makes the system more easily testable. In short, all the other layers benefit from using TDD.
9 useful tricks of git branch by Srebalaji Thirumalai.
Here are my favourite tricks from the list. I use the first three tricks fairly regularly. The fourth trick is new to me.
git checkout - switches to the previous branch like cd - switches to the previous directory.
git branch -d <branch> deletes the local branch and git push origin <branch> -d the corresponding remote branch.
git branch -a lists all local and remote branches, whereas git branch -r lists only the remote branches.
git branch --contains <commit> lists all branches containing the given commit.
Srebalaji's list contains some more tricks that will be useful from time to time.
Projects vs Retainers and Pricing Retainer Fees Without Knowing the Scope (video) by Jonathan Stark.
You are a solo consultant. A client wants to secure (retain) your expertise for an unknown period of time and has only a vague idea of what you'll do. The project is open-ended without a clearly defined outcome. This rules out value pricing. However, the project is ideally suited for an "advisory retainer [...] that gives your clients unlimited access to your expertise on a subscription basis". The access to your expertise is unlimited but not to your time.
A simple retainer setup could be that you guarantee your client 40 hours of your time per month. You specify in broad strokes what you do in this time. For example, you could perform code reviews, customise board-support packages or put legacy code under test. The client pays the fee for the 40 hours monthly in advance. The client is responsible for fully using the 40 hours. The retainer runs for at least six months and automatically renews on a monthly basis as long as the fee is paid.
Another setup could be to allow a development team to ask you 12 questions per month. You guide the team in finding good solutions to their problems. You don't do the actual work yourself.
AI for AG: Production machine learning for agriculture by Chris Padwick (Blue River Technology).
Traditionally, herbicides or weedkillers are sprayed on plants and weeds indiscriminately. Herbicides must be strong enough to kill weeds and weak enough not to damage plants. Assuming that weeds grow between the plant rows and avoiding spraying plants twice on turns helps a little bit. The ultimate goal is to decide for each plant in real time whether it is a weed or not. Weed is sprayed, crop is not.
This is exactly what Blue River Technology is doing and why John Deere bought the company for 305 million USD in 2017. Their spraying equipment uses high-resolution cameras, computer vision and deep learning to discriminate between weeds and plants (see the photo with the green boxes around plants and red boxes around weeds). The sprayer focuses one of its nozzles exactly on the weed and sprays it with a dose just strong enough to kill it. All this happens in real time - while the machine is moving forward at a steady pace.
Smart spraying helps to reduce the amount of herbicides by up to 90%. It can also be used for reducing the amount of fertiliser. The sprayer inverts its logic and sprays the fertiliser on the plants.
The weed classifier (a convolutional neural network) runs on an NVIDIA Jetson AGX Xavier, which is more powerful than Blue Gene, IBM's supercomputer of 2007. Chris gives a detailed explanation of why and how PyTorch, which was originally developed by Facebook and is now open-sourced, helps his team achieve its goals. Chris's major reason for using PyTorch is that his team can run production and research workflows at the same time.