Episode 31: Better Built By Burkhard
Electronica 2022 Round-Up
Trade shows in Munich are my favourites, because they are at my front door. The fairground is just a 40-minute drive on the “Autobahn” A94 from Mühldorf am Inn westbound towards Munich. This year there were two exhibitions in Munich: BAUMA and Electronica. Unfortunately, I missed BAUMA due to a COVID infection (not too pleasant, but short, and I am fully recovered by now). So, I was all the happier to make it to Electronica. Here are my impressions.
Demos: From Boring to Exciting
My favourite demo was the microbial fuel cell from Plant-e at the NXP booth. The two “power plants” supply the sensor, Sigfox modem and LTE-M modem with electricity. The modems send sensor data to the cloud. This setup works in remote places, as long as there are plants doing photosynthesis.
Thanks for reading Better Built By Burkhard! Subscribe for free to receive new posts and support my work.
The standard demos were battery charging stations for EVs. I saw two dozen of them, a few accompanied by EVs. After the first three, it got a bit boring. Charging stations are already a commodity. Driven by automotive companies, the electronics industry focuses on cars, cars and cars. Instead, it should look at the much bigger market of mobility and come up with innovative mobility solutions.
The Podbike, shown at the STM booth, is one such solution. It is a 4-wheeled e-bike with a cabin. The Podbike is ideal for the last mile(s). My wife and I could run all errands in our hometown. For our hiking trips, we could use the train and then a Podbike to cycle the 5-10km to the start of the trail. The postman could use a Podbike. Once you start thinking about it, you’ll discover many more possibilities.
Still Searching for the Special BSP
In the previous newsletter, I needled Variscite with the question: How do you differentiate your BSP from your competitors’? Electronica was my chance to talk to some more SoM and SoC makers: Avnet, Phytec and Renesas.
Avnet claim “Made in Germany”, touch display support and high availability of SoMs as their advantage. I know the last one to be true. When one of my recent customers was hit by the chip shortage, Avnet were the only one able to provide thousands of iMX8M SoMs. They were a life-saver for my customer.
Avnet showed an impressive array of touch displays. They provide well-tested Linux drivers for the touch displays. Their customers don’t have to work with the Chinese manufacturers of the displays to figure out how to adapt the Linux drivers. The touch displays just work for their customers. Great!
“Made in Germany” is important for many of my German customers. For me, “Made in Europe” or “Made in the US” is fine. Getting useful information and quick help from Chinese SoM makers can be challenging. It can make or break a project.
Phytec’s speciality is camera modules. Phytec add the drivers for their cameras to the BSP and ensure that the cameras work well with OpenCV and Halcon (commercial). Then, it’s up to you to implement an application for, say, telling ripe tomatoes apart from green ones. That’s your core business and not Phytec’s.
I came across the Renesas R-Car family of SoCs more than ten years ago when talking with Mitsubishi about an infotainment system for Volvo Cars. I often thought that the R-Car E, M and H SoCs would be an excellent alternative to the NXP iMX6 and iMX8 SoCs ubiquitous on agricultural and construction machines. This thought intensified when Renesas was able to deliver SoCs during the chip shortage and NXP was not. I couldn’t even find a supplier for development kits.
Traditionally, Renesas sold its R-Car SoCs only to automotive companies in high volumes. Agricultural and construction machines had volumes too low and were assigned to the industrial business unit inside Renesas. This business unit was not allowed to sell R-Car SoCs but only industrial-grade SoCs, which don’t satisfy automotive requirements (e.g., the temperature range from -40°C to +85°C).
All these restrictions will be a thing of the past from December 2022. Renesas will sell its R-Car SoCs to businesses outside the automotive industry. You can treat yourself to an R-Car H3e development kit at Shimafuji, maybe as a Christmas present.
New Members of the iMX Family
The new family member at the low end is the i.MX 8ULP optimised for ultra-low power applications (see also the fact sheet). It sports one or two Cortex-A35 cores, one Cortex-M33 core, a 3D GPU with OpenGL ES 3.1, OpenCV and Vulkan support, and NXP’s new security subsystem EdgeLock. The new Energy Flex power management system on top of a 28nm FD-SOI manufacturing process makes the 8ULP extremely power efficient. Connectivity is good: 2x USB, 10/100 ETH, FlexCAN, 4x UART, 4x I2C, 2x SPI, etc. Ports can be freely assigned to the A35 cores or the M33 cores.
Thanks to its improved security, ultra-low power consumption, 3D GPU and DSP, the i.MX 8ULP seems well-suited for portable and wearable medical devices. The Avnet website shows the i.MX 8ULP in pre-production. I reckon that the modules will be available in the first half of 2023. The module is on my shopping list.
The addition at the high end is the i.MX 93. It comes with one or two Cortex-A55 cores, one Cortex-M33 core, a neural processing unit (NPU), the EdgeLock secure enclave and ample connectivity (2x Gb Ethernet, 2x USB-C, 2x CAN-FD, 8x UART, 8x I2C, 8x SPI, etc.). The graphics capabilities are disappointing. There is no OpenGL ES or Vulkan acceleration, as the i.MX 93 comes only with a 2D GPU. The Avnet engineers have very early samples of the i.MX 93. I guess that the first modules could be available in Q4 2023. More will come in the first half of 2024.
Video over CAN XL
I associate CAN with speeds less than 256 Kbps. Yes, that’s roughly as much as my first ISDN modem for connecting to the Internet had in the 90s. So, I was amazed to see video playing smoothly over CAN XL at the Bosch booth. CAN XL allows speeds up to 20 Mbps. You can use the same cabling as for CAN and CAN FD, but you’ll need new transceivers on your boards.
You won’t see CAN XL in agricultural and construction machines any time soon, as the J1939 standard just allowed CAN FD (1-2 Mbps) as a transport layer. The excavator terminal, which I am helping to build at the moment, still runs on a CAN network. The terminal could do CAN FD, but most other ECUs can’t.
I wonder whether CAN XL may reach the market too late. Image data is already sent over Ethernet with at least 10 Mbps. You could send the CAN data over the existing Ethernet cables as well. An alternative to all variants of CAN could be Single-Pair Ethernet (SPE). SPE cables and connectors are a lot smaller and more flexible than those for classic Ethernet. SPE reaches transmission speeds of 1Gbps up to 40m and 10Mbps up to 1000m. Which technology will dominate in 5-10 years: CAN XL, SPE, classic Ethernet or something else?
The Key Principles of Continuous Delivery
The five key principles of Continuous Delivery are: Work in small steps, build quality in, automate repetitive tasks, improve continuously and everyone is responsible. When you unleash the positive feedback loops of all principles together, you can avoid the cost explosion of changes or flatten it. The key drivers are TDD and continuous integration.
You can actually move fast and improve the quality. Continuous Delivery makes that happen by providing fast and frequent feedback many times per hour and many more times per day.
Can We Use Trunk-Based Development for Legacy Software?
Trunk-Based Development (TBD) requires that developers run all unit tests before they integrate their changes directly into the main branch (a.k.a. trunk). There are no branches other than the main branch. Developers integrate frequently, ideally multiple times per day. If all unit tests pass, it’s highly unlikely that the developers have broken anything.
By definition, legacy software has no tests. That’s why you need to adapt TBD a little bit.
You apply TDD to every piece of code you change and to the neighbouring code you need to understand for the change.
You create short-lived feature branches for your changes. Your colleagues review your changes before integration. You could also do pair or mob programming.
You break down user stories into stories that don’t take longer than 2 days.
The adaptations are aimed at giving developers more feedback faster - the secret sauce of Continuous Delivery. TBD is one of the three core drivers of minimum viable CD.
Casting a negative float to an unsigned int
I published the original post over 9 years ago, when I stumbled over the problem for the first time. Like my other two posts about floating-point numbers (Comparing Two Floating-Point Numbers and Is a C++ Float Variable Ever Equal to 0.0f?), it has stayed in or close to the top-10 of my most popular posts.
When I ran into the same problem last week and re-shared the post, it quickly became my first viral post on LinkedIn. So far, it has garnered more than 100K impressions, 800 reactions, 60 comments and 25 reposts. And, I got over 250 new connections. Pretty good for such an old post!
Contrary to quite a few comments on LinkedIn, the cast is not “moronic” or “meaningless”. The floating-point number should have never been negative. The J1939 standard mandates that all numbers must be transformed into unsigned integers for transmission over CAN.
Here is how negative numbers sneaked in. The outside temperature for a harvester can be assumed to range from -45°C to +85°C. Otherwise, the electronics stops working. The temperature is measured in steps of 0.1°C. Then, -27.4°C would be represented as the non-negative integer 176:
(-27.4 + 45.0) * (1.0 / 0.1) = 17.6 * 10.0 = 176.0 = 176
The integer representations range from 0 to 1300, all non-negative. J1939 messages pack the temperature values into an 11-bit unsigned integer. This saves 21 bits over a float. And casting a non-negative float into an unsigned integer is perfectly OK:
auto uTemp = static_cast<quint16>((fTemp + 45.0) * 10.0);
Now the inevitable happens. Someone ignores the J1939 standard, leaves out the offset and “just” uses a signed integer:
auto iValue = static_cast<quint16>(fValue * 10.0);
The unit tests for negative values pass on the PCs with Intel architecture. Disaster strikes on the ARM-based ECUs. You see a lot of zeros where there should be negative values.
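A defensive conversion rules out the undefined cast altogether. The sketch below is my own illustration, not code from the original post: the hypothetical helper toJ1939Temperature() clamps the reading to the sensor range before applying the J1939 offset and scaling, so the float passed to static_cast is always non-negative on Intel and ARM alike.

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical helper (not from the post): encode a temperature in °C into
// the 11-bit unsigned J1939 representation (0..1300). Clamping to the sensor
// range first guarantees a non-negative float, so the cast to an unsigned
// integer is well-defined on every architecture.
std::uint16_t toJ1939Temperature(float celsius)
{
    const float clamped = std::clamp(celsius, -45.0f, 85.0f);
    const float scaled = (clamped + 45.0f) * 10.0f; // offset + 0.1°C resolution
    return static_cast<std::uint16_t>(scaled + 0.5f); // round to nearest
}
```

For example, toJ1939Temperature(-27.4f) yields 176, matching the calculation above.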
Around the Web
Qt Marketplace License Abandoned
A reader of my post Using Qt 5.15 and Qt 6 under LGPLv3 pointed out that Qt Charts is not available on the Qt Marketplace any more. I checked and double-checked Qt Marketplace. None of the Qt modules Charts, CoAP, MQTT or Design Studio Bridge (marked yellow in my post) are available under the Qt Marketplace license any more. This is true for both Qt 5 and Qt 6. In other words, these modules are only available under Qt Commercial. They are included in Qt for Device Creation Professional.
Minimum Viable CD
Minimum viable Continuous Delivery defines a minimum set of “behaviors and abilities that must be met in every context to qualify as ‘continuous delivery’”.
Use continuous integration.
The application pipeline is the only way to deploy to any environment.
The pipeline decides the releasability of changes; its verdict is definitive.
Artifacts created by the pipeline always meet the organization’s definition of deployable.
Artifacts are immutable; no human changes them after commit.
All feature work stops when the pipeline is red.
The test environment is production-like.
The application configuration deploys together with the artifact.
The website provides links to other useful information and to more than 150 people who could be worth following.
Nicolas Carlo: Comparing 2 approaches of refactoring untested code (Part 3)
In Part 1 of this 3-part series, Carlo presents Testing First as the approach to refactor legacy code.
Write high-level, integrated tests that will involve an actual MySQL database and HTTP calls [the equivalent of talking to hardware on embedded systems].
Refactor the Business Logic (Domain) out of the I/O (infrastructure).
Pull down most of the integrated tests to write isolated tests on the Domain code instead.
When I read Testing First, I immediately thought of TDD. But Step 1 suggests doing quite the opposite: writing system tests. After finishing Step 3, you might have some unit tests. Carlo recommends a top-down approach: starting with system tests, moving down to tests of multiple and single components and finishing with unit tests.
Writing the first few system tests may be fairly quick, but it gets harder with every test. The costs for writing tests will soon explode, similar to the costs for changing legacy code. And, why should writing tests be simple for code that is hard to change?
The reason is obvious: It’s legacy code! By definition, it has no tests or hardly any. The developers didn’t write the code with testability in mind. Both controllability and observability are low. Therefore, developers will struggle heavily to write useful tests.
I’d use James Grenning’s crash-to-pass algorithm (a.k.a. The Legacy Code Change Recipe) to get a class under test. Once under test, you move forward with TDD. The crash-to-pass algorithm forces you to replace the dependencies with test doubles (dummies, fakes, mocks, etc.). For example, the function originally retrieving data from a database returns suitable data stored in a member variable. You can control the output of this function and observe its inputs. You get high testability.
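Here is a minimal sketch of the seam that this approach produces; the class and function names are my own invention, not from Grenning’s recipe. The legacy database call is extracted into a virtual member function, and the test double overrides it with canned data held in a member variable:

```cpp
#include <utility>
#include <vector>

// Hypothetical legacy class: averageTemperature() originally queried a
// database directly. Extracting the query into a virtual member function
// creates the seam needed to substitute a test double.
class TemperatureLog
{
public:
    virtual ~TemperatureLog() = default;

    double averageTemperature()
    {
        const auto values = loadTemperatures(); // formerly a direct database query
        double sum = 0.0;
        for (double value : values)
            sum += value;
        return values.empty() ? 0.0 : sum / values.size();
    }

protected:
    // The seam: production code overrides this with the real database query.
    virtual std::vector<double> loadTemperatures() = 0;
};

// The test double returns canned data stored in a member variable. Its
// output is fully controllable, which gives the class high testability.
class FakeTemperatureLog : public TemperatureLog
{
public:
    explicit FakeTemperatureLog(std::vector<double> values)
        : m_values(std::move(values)) {}

protected:
    std::vector<double> loadTemperatures() override { return m_values; }

private:
    std::vector<double> m_values;
};
```

A unit test then constructs a FakeTemperatureLog with known values and checks averageTemperature() without any database at all.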
You can apply the crash-to-pass algorithm to any class, no matter whether it is at the top, in the middle or at the bottom. You can choose the hotspots in your software, the classes with the worst dependencies, with the worst code smells and with the most changes. You always work on the most important parts of your code. It may take a couple of hours to get a complicated class under test, but you literally feel the pain points and you start seeing possible solutions. As developers we must get our hands dirty with code. High-level blackbox tests delay the feedback from the code.
In Part 2, Carlo introduces another approach: Refactoring First before writing tests.
Refactor the Business Logic (Domain) out of the I/O (infrastructure) using safe refactorings.
Write isolated tests on the Domain code.
I have no problem with such refactorings. I use them all the time, for example, for applying the pimpl pattern to a class. IDEs have automated refactorings like extracting a method. You can execute the refactorings of Martin Fowler’s seminal book Refactoring in a mechanical way guided by the compiler or the syntax checker of your IDE. Christian Clausen gives very simple rules for how and when to refactor in his book Five Lines of Code. For example, if a function has more than five lines of code, you extract code into separate functions.
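As a tiny illustration of such a mechanical refactoring (my own example, not from Clausen’s book), an overlong formatting function gives up its range check to an extracted helper; the behaviour stays identical at every step:

```cpp
#include <string>

// Hypothetical example of an extract-function refactoring: the range check
// that used to live inside formatTemperature() is pulled out into its own
// function, leaving each function with a single, short job.
double clampToSensorRange(double celsius)
{
    if (celsius < -45.0)
        return -45.0;
    if (celsius > 85.0)
        return 85.0;
    return celsius;
}

std::string formatTemperature(double celsius)
{
    return std::to_string(clampToSensorRange(celsius)) + " °C";
}
```

Each extraction is small enough that the compiler catches any slip immediately, which is what makes these refactorings safe even without tests.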
In Part 3, Carlo compares the two approaches. His conclusion: “Most of the time, I recommend people to Refactor First.” My conclusion would be slightly different: use both safe and test-guided refactorings in the refactoring step of TDD.
Some thoughts on "Nicolas Carlo: Comparing 2 approaches of refactoring untested code"
The mentioned measures "write high-level tests first" and "start refactoring immediately" actually fall short in legacy-code refactoring.
Before thinking about changing code, it is imperative to gain an understanding of the software's function.
A good source for a better understanding can of course be existing requirements.
But practice often shows that, with "grown" code, the requirements were no longer updated along with the code changes.
To achieve a better understanding of legacy code, techniques such as those described in "The Legacy Code Programmer's Toolbox" by Jonathan Boccara can be used (including his "10 techniques to understand legacy code").
After analysing the code, an attempt can be made to make minor changes to the existing code.
As you point out in the article, this can be approached in the context of TDD with fakes, stubs and spies.
In my opinion, however, a more systematic approach would be to make the first changes using the Mikado Method (The Mikado Method by Ellnestam & Brolund).
This way, a target picture can be formulated and the way to get there can be well documented.
Incidentally, this method does not contradict TDD but supports it very well.
Some characteristics of the methods mentioned above
Write high-level tests first
- Flaky tests (as described in the article) are in my opinion completely worthless, because every time a test fails, you have to investigate whether the failure is caused by the test or by the code.
- Complicated to create because of the dependencies.
- Usually, complete parts of the application or the library are needed to create the test.
- Dependencies on the outside world (database, UI, etc.) are required. These dependencies generally have "state", which makes writing tests even more time-consuming.
- Deploying the environment is made easier by Docker, but this makes the tests even more time-consuming to maintain. Different states of the database have to be prepared before the tests, and storing the test scenarios in another repository slows down the test processes.
- Due to incomplete requirements, it is difficult to guarantee the completeness of the tests.
- Carries the risk of introducing errors.
- Refactoring individual classes is too little as a measure. Possible structural problems can be overlooked by focusing on individual classes.
- The creation of fakes, stubs and spies in large projects poses problems for maintenance, but also for the motivation of the developers.