Episode 20: Burkhard on Qt Embedded Systems
Welcome to Episode 20 of my newsletter on Qt Embedded Systems!
1 June 2021 marked my 8th anniversary as a solo consultant. Two years ago, I shared my Lessons from Six Years as a Solo Consultant: don't bill by the hour, make potential customers call you, and offer productised services. My post got featured on Hacker News. My website was hit by 10,000 clicks in one hour and broke down. Most of the 149 comments told me, in more or less friendly terms, that I had no clue what I was talking about.
In the two years since then, I have increased my hourly rates significantly by offering price options, created a productised service that has earned me over 50K Euros this year, sold an advisory retainer and figured out by trial and error which projects I can value-price. I dare to share my pricing experience in this newsletter and hope for a lively and constructive discussion. So, hit the reply button and let me know your experiences and questions.
Enjoy reading and stay safe - Burkhard 💜
My Blog Posts
My Talk “A Successful Architecture for Qt Embedded Systems” at Qt Day Italy 2021
This brief post provides the video and slides of my talk at Qt Day Italy. The talk lasted 65 minutes and was followed by a lively 25-minute discussion. 91 people attended my talk 🙏
My Thoughts on Pricing Services
In spring 2019, I came off my biggest project ever. I had billed nearly 4000 hours at 90 Euros per hour in 2.5 years: a huge financial success. Together with a half-time novice developer, I had developed a harvester terminal from nothing to release. I was developer, system architect, coach, team builder, project manager, product manager, UX designer and tester all in one person. I was very close to a burnout and I was furious.
The previous generation of the terminal had been built by a consultancy. It took them four person-years longer than me to build their terminal from scratch. As the quality of their terminal was subpar, my customer replaced them with me. Their hourly rate was only 15% lower than mine. It was clear that I had to change something!
I want fair compensation for the knowledge and expertise I bring to projects, that is, I want higher fees. As my knowledge is decoupled from the hours worked, my fee should ideally be decoupled from the hours worked as well. In short, I want to work less and earn at least as much as before. I'll tell you what worked for me and what didn't.
Hourly Billing
I resort to hourly billing if the potential customer is unwilling or unable to specify a minimum viable product in a separate, paid project before the typically much bigger development project, that is, if the requirements are unclear. I used to offer engagements of several months (mostly 3 to 6), during which I'd work 160 hours per month at a fixed hourly rate. Payment was due 15 or even 30 days after the end of the month in which I did the work. I don't do this any more.
I work at most 120 hours per month, or 30 hours per week. This gives me one day per week to run my business: to do sales, marketing and accounting, to create and execute productised services, and to prepare and give talks. If I work 40 hours per week, I can't do these things and my future business will suffer.
I require payment in advance. This spares me the pointless end-of-month discussions about why this or that task took so long. If customers are not happy with the results, they may hold back part of the payment. It doesn't help to remind customers that they pay for my time and not for the results. I preempt these discussions by setting up a Kanban or Scrum process, so customers always know the exact progress and decide what to do next.
My contracts contain a clause allowing me to terminate the project when payments are late. With pre-payment, I notice a late payment after a week and lose at most a week of work. With post-payment, I'd lose two months of work. I used this clause once, when a customer was unwilling to pay. It was my best option to exit a death-march project.
Pre-payment gives rise to some price options. If the customer pays for the full project duration in advance, they get a discount. If they only pay monthly in advance, they get no discount. For a 6-month project with 120 hours per month, I might offer three price options ("you" refers to the customer):
Option 1: The total fee is €93,600 with monthly instalments of €15,600 due on the first day of the month.
Option 2: The total fee is €88,920 with two instalments of €44,460 due on 2021-08-01 and on 2021-11-01. You get a 5% discount.
Option 3: The total fee is €84,240 due on 2021-08-01. You get a 10% discount.
The hourly rates would be €130.00, €123.50 and €117.00 for Options 1, 2 and 3, respectively. I wouldn't mention the hourly rates in my offer, because the savings look more impressive for the total fees. By offering three options instead of a single one, you increase your chances of acceptance significantly. The customer can say "No" to one or even two options and can still say "Yes" to another option. Customers often choose the middle option. That's basic pricing psychology.
People love discounts. So, presenting pricing options as discounts plays to this love. And bonus: Accounting departments are often obligated to choose a discounted option.
If the customer is sceptical of pre-payment, I may add an Option 0 or replace Option 1 with Option 0: the total fee is €104,000 with monthly instalments of €17,333.33 due on the last day of the month. This normally eases reservations about advance payment.
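For those who want to check the numbers: here is a minimal sketch in C++ that derives the totals and effective hourly rates from the figures above (a base rate of €130 per hour, 120 hours per month, 6 months, and discounts of 0%, 5% and 10%).

    #include <cstdio>

    int main()
    {
        const double baseRate = 130.0;     // hourly rate of Option 1
        const int hoursPerMonth = 120;
        const int months = 6;
        const double baseTotal = baseRate * hoursPerMonth * months;   // EUR 93,600

        const double discounts[] = {0.00, 0.05, 0.10};   // Options 1, 2 and 3
        for (int i = 0; i < 3; ++i) {
            const double total = baseTotal * (1.0 - discounts[i]);
            const double effectiveRate = total / (hoursPerMonth * months);
            std::printf("Option %d: total %.2f EUR, effective rate %.2f EUR/h\n",
                        i + 1, total, effectiveRate);
        }
        return 0;
    }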
Other reasons for pricing options could be on-site work, urgency, availability and usage rights. My normal fee includes up to 25% on-site work at the customer's premises. If the customer wants me to work 25-50% on-site, I raise my fee modestly (e.g., by 10%). As I regard working more than 50% on-site as ineffective and as it disturbs my work-life balance, I raise my fee significantly in that case (e.g., by 35%).
If a project is very urgent (Customer: "We need help by yesterday!"), I can offer different price options: a higher price for the customer's desired start date, a lower price for the latest possible start date, and a mid price for a date in between. If I happen to be the only available person who can solve the problem at hand and who also speaks German, I can charge more for that.
Usage rights lend themselves naturally to pricing options. For years, I granted my customers exclusive and indefinite usage rights for the software I wrote. I gave away huge value to my customers for free. The high-price option would include exclusive and indefinite usage rights, the mid-price option exclusive usage rights for, say, three years, and the low-price option non-exclusive usage rights.
With the low- and mid-price options, I keep usage rights from the beginning or gain them after a certain time. This allows me to build productised services or even products from my software. Of course, the customer and I must agree up front on which parts of the software we share the usage rights to. Value pricing is much better suited to shared software parts than hourly billing.
Lessons so far:
I only offer a lower price if I get something in return from the customer.
I always offer three price options, as it increases the chance of acceptance.
I combine multiple reasons to create the three price options.
Value Pricing
Ronald J. Baker defines value pricing in his book Implementing Value Pricing (Amazon) as follows: "Everything we buy as consumers we know the price upfront. The billable hour violates this basic economic law [...] No customer buys time - it measures efforts, not results. Customers demand to make a price/value assessment before they purchase. [...] Therefore, value pricing can be defined as the maximum amount a given customer is willing to pay for a particular service, before the work begins."
With hourly billing, customers don't know the price for a set of features in advance, because it's unclear how long the implementation will take. They pay for the hours spent by the developers (no matter whether the developers are freelancers or permanent employees) and not for the results. Hence, customers cannot assess in advance whether they get a good return (value) on their investment (price paid).
The risk is fully on the customer's side. Customers buy a pig in a poke. That's most likely the main reason why customers try so hard to beat down the hourly rate. As they don't really know what they are buying, they have difficulty comparing the rates of two freelancers.
When I value-price a service, I give the customer the price in advance and I guarantee a result. This completely changes the dynamics of price negotiations. The risk is now with me, but I also get more control over the price. And I get total freedom in how I implement things.
In what Jonathan Stark (Ditching Hourly) calls the why conversation, I try to figure out the value of my service to the customer, that is, what they would be willing to pay for it. I basically try to talk the customer out of hiring me, so they convince themselves whether it's worth hiring me or not. Prices are not part of the why conversation. Pretty nifty what Jonathan concocted!
In addition to the value, I need to find out during the why conversation what kind of service the customer wants: a custom project billed by the hour or value-priced, a productised service, an advisory retainer, a workshop or a training. If both sides decide to work together, I can often write a proposal. Sometimes, I need some more calls or emails for clarification.
Don't worry, I haven't gone totally mad: I don't make a value-priced offer for a 12-month custom project without knowing the requirements. I'd offer a project to create the system architecture, to gather the requirements for a minimum viable product (MVP), to review the architecture of legacy software, or to build a small prototype as a proof of concept. I know my effort and hence my costs for such projects very well. So, I can set a price somewhere between my costs and the customer's value. If the price is too close to or below my costs, I won't make an offer. If the price is too close to or above the value, the customer will reject my offer.
All these projects have well-defined requirements and I can execute them with less than 4 weeks of effort. This significantly reduces the financial risk of a fixed-price project, which every value-priced project is. It also helps that the price in a value-priced project is considerably higher than in a fixed-price project.
Creating an architecture, an MVP or a PoC lays the groundwork for building the product: the realisation phase of the project leading to the product release. Customers need many hands in the realisation phase to write, test and release the code. That's where thousands of hours are spent and billed.
As someone who loves writing code, I hate to admit that writing code is less valuable to customers than creating an architecture, MVP or PoC. This value difference is best explained by the difference between efficiency and effectiveness. Efficiency focuses on doing things right. TDD, refactoring, CI, CD and pair programming enable us to write higher-quality code faster, that is, to write code more efficiently.
Effectiveness concentrates on doing the right things. A successful architecture enables a customer to evolve their product in unforeseen directions over 15 years and avoids a complete re-implementation in three years. An MVP helps a customer focus on the right requirements. A PoC tells a customer which technology to use and which not. This knowledge is invaluable to customers. Value comes from effectiveness, not efficiency.
Ronald J. Baker puts it this way: "Efficiency means focus on costs. But [value pricing] should focus on effectiveness. Effectiveness focuses on opportunities to produce revenue, to create markets, and to change the economic characteristics of existing products and markets."
If I work as a developer during the realisation phase, I'd bill by the hour. If I coach the development team, if I work as the product owner or Scrum master, or if I do a combination of these roles, I'd offer an advisory retainer to the customer.
I can identify some high-value parts of the software and offer the implementation of these parts with a value price. Interestingly, the hexagonal architecture helps with identifying such parts. Developing a port with one or more adapters is a good candidate for value pricing. My prime example is the port with its adapters for machine communication on an operator terminal.
Over the years, I have implemented a CAN adapter for two harvesters and an e-bike, an Ethernet adapter for a metal-sheet bending machine, a Bluetooth adapter for a medical cleaning robot, and some more adapters. Unfortunately, I didn't think of retaining the usage rights for any of these adapters - except for one. I tried for the Bluetooth adapter, but the customer preferred to pay a 20% higher fee instead. Once I have the usage rights for a machine communication adapter or a cloud communication adapter, I'll turn them into productised services.
Lessons so far:
Value comes from doing the right things (effectiveness) and not from doing things right (efficiency).
I use value pricing only for custom projects with well-defined requirements.
I can execute value-priced projects with less than 4 weeks of effort.
To Be Continued
As this post and this newsletter are getting too long and my Sunday is getting shorter and shorter, I'll discuss my pricing experiences with productised services, advisory retainers, workshops and trainings in my next newsletter or maybe in my next two newsletters 😉
Reading & Watching
Arne Mertz: Docker4c: portable C++ development environment
Setting up your C++ development environment can be a daunting task. You must install your IDE, build tool (e.g., CMake), compiler, debugger, package manager, static analysis tools, test tool and many other tools. Every member of your team must do the same. Inevitably, builds work for one team member but not for another.
Arne has built a Docker container, docker4c (GitHub), that provides a ready-made C++ development environment. The "4c" in docker4c stands for the Clang compiler, CMake, the Conan package manager and the CLion IDE. The first three Cs are installed in the container. The fourth C, the CLion IDE, should be installed outside the container. The container also includes tools like GCC, clang-tidy, Cppcheck, sanitisers, Valgrind and perf. You can and should add the tools you need for development.
I agree with Arne that the IDE should not be part of the container. His reason is that different people use different IDEs. My reason is that the IDE and the container run on different computers. The IDE could run on your laptop and the container on a build server (possibly in the cloud).
CLion and VSCode support such remote development with a Docker container on the same computer or on a remote computer; QtCreator does not. In the Webinar: Building Embedded Applications from QtCreator with Docker, I show a workaround for cross-building and running Qt applications from QtCreator with the build environment encapsulated in a Docker container. QtCreator and the container must be on the same computer.
Arne's blog Simplify C++ is one of my favourite resources when I need to understand modern C++ features. So, I am happy that Arne is back blogging after a two-year hiatus 💜
Juan Manuel Garrido de Paz: Ports and Adapters Pattern (Hexagonal Architecture)
Juan Manuel has dedicated his complete website to the Ports and Adapters Pattern (a.k.a. Hexagonal Architecture) created by Alistair Cockburn. He has a detailed description of the pattern, an implementation guide, an instructive interview with Alistair Cockburn (Part 1 and Part 2) and a Spanish translation of Alistair's original article. Juan Manuel's website is an excellent starting point to learn about hexagonal architecture.
In my talk A Successful Architecture for Qt Embedded Systems, I describe how to use hexagonal architecture for Qt embedded systems (see video 12:02 - 30:16). In this post, I'll add a couple of points I learned from Juan Manuel's description.
Hexagonal architecture distinguishes two types of adapters and, hence, two types of ports.
"A driver adapter uses a [...] port interface". The GUI, CLI, cloud connector and tests are driver adapters.
"A driven adapter implements a [...] port interface". The machine communication, customer database and mocks are driven adapters.
You use the Composition Root pattern (see also my next post) to create the right adapters and to connect them. The driver adapters depend on the driven adapters and take the driven adapters as arguments to their constructors (dependency injection). Of course, the constructors depend on the port interfaces and not on the concrete adapters. The main() function is the natural composition root of the application.
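Here is a minimal sketch in C++ of how such a composition root could look. The names (IMachinePort, CanAdapter, Gui) are hypothetical and only illustrate the wiring; a real terminal has more ports and adapters. Swapping CanAdapter for another adapter only changes main(); Gui stays untouched.

    #include <iostream>

    // Driven port: implemented by driven adapters.
    class IMachinePort
    {
    public:
        virtual ~IMachinePort() = default;
        virtual void sendCommand(int commandId) = 0;
    };

    // Driven adapter: talks to the machine, e.g., over CAN.
    class CanAdapter : public IMachinePort
    {
    public:
        void sendCommand(int commandId) override
        {
            std::cout << "Sending CAN frame for command " << commandId << "\n";
        }
    };

    // Driver adapter: drives the application and depends only on the port interface.
    class Gui
    {
    public:
        explicit Gui(IMachinePort &machine) : m_machine(machine) {}
        void onStartButtonClicked() { m_machine.sendCommand(1); }
    private:
        IMachinePort &m_machine;
    };

    // The composition root: main() creates the concrete adapters and wires them up.
    int main()
    {
        CanAdapter can;     // driven adapter
        Gui gui(can);       // constructor injection through the port interface
        gui.onStartButtonClicked();
        return 0;
    }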
In my talk, I just mentioned the pros of hexagonal architecture and didn't mention the cons - due to time constraints. Juan Manuel fixes my little omission.
The most important advantage of hexagonal architecture is its inherent testability. By default, you should provide a test adapter for each driver port and a mock adapter for each driven port. Test and mock adapters make it easy to test the business logic. Every adapter can be tested in isolation from the other adapters and from the business logic. Hexagonal architecture makes it easy to write unit, integration, acceptance and system tests.
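Here is a minimal sketch of what this looks like in practice. SpeedLimiter (the business logic), IMachinePort (a driven port) and MockMachinePort are hypothetical names; the test function plays the role of the test adapter for the driver port.

    #include <cassert>
    #include <vector>

    // Hypothetical driven port for machine communication.
    class IMachinePort
    {
    public:
        virtual ~IMachinePort() = default;
        virtual void setSpeed(int rpm) = 0;
    };

    // Mock adapter for the driven port: records what the business logic sends.
    class MockMachinePort : public IMachinePort
    {
    public:
        void setSpeed(int rpm) override { sentSpeeds.push_back(rpm); }
        std::vector<int> sentSpeeds;
    };

    // Hypothetical business logic: limits the speed before sending it to the machine.
    class SpeedLimiter
    {
    public:
        explicit SpeedLimiter(IMachinePort &machine) : m_machine(machine) {}
        void requestSpeed(int rpm) { m_machine.setSpeed(rpm > 2200 ? 2200 : rpm); }
    private:
        IMachinePort &m_machine;
    };

    // The test drives the business logic and checks what reached the mock adapter.
    int main()
    {
        MockMachinePort mock;
        SpeedLimiter limiter(mock);
        limiter.requestSpeed(2500);
        assert(mock.sentSpeeds.size() == 1 && mock.sentSpeeds[0] == 2200);
        return 0;
    }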
Being able to provide multiple adapters for the same port gives hexagonal architecture enormous flexibility. Adapters enable you to switch between different technologies. For example, you can replace CAN by Ethernet or BroadR-Reach for the communication with the machine - without touching the rest of the application. The application can select the adapter at runtime, guided by a configuration file (see the sketch after the list below). This flexibility can help you in at least three scenarios:
Different machine models use different means of communication. The configuration file of the operator terminal specifies the proper adapter.
The previous scenario typically evolves over time, as new machine generations come with hopefully better communication technologies.
You can start with a software simulator of the machine. The terminal and machine applications run on the same PC and use inter-process communication (e.g., Qt Remote Objects). This gives you time to decide about the right communication technology later. It also gives you a simulator in the office.
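As announced above, here is a minimal sketch of how the composition root could select the driven adapter at runtime. The port and adapter classes are hypothetical and re-declared so that the sketch stands on its own; in a real application, the adapter type would come from the terminal's configuration file.

    #include <memory>
    #include <string>

    class IMachinePort
    {
    public:
        virtual ~IMachinePort() = default;
        virtual void sendCommand(int commandId) = 0;
    };

    class CanAdapter : public IMachinePort
    {
    public:
        void sendCommand(int) override { /* write a CAN frame */ }
    };

    class EthernetAdapter : public IMachinePort
    {
    public:
        void sendCommand(int) override { /* send an Ethernet message */ }
    };

    class SimulatorAdapter : public IMachinePort
    {
    public:
        void sendCommand(int) override { /* forward to the machine simulator */ }
    };

    // Factory used by the composition root to pick the adapter for this machine model.
    std::unique_ptr<IMachinePort> createMachineAdapter(const std::string &type)
    {
        if (type == "can")
            return std::make_unique<CanAdapter>();
        if (type == "ethernet")
            return std::make_unique<EthernetAdapter>();
        return std::make_unique<SimulatorAdapter>();
    }

    int main()
    {
        // "can" would be read from the configuration file of the operator terminal.
        auto machine = createMachineAdapter("can");
        machine->sendCommand(1);
        return 0;
    }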
Like most patterns, hexagonal architecture introduces additional complexity, some indirection and a small performance hit at startup. I think these cons are far outweighed by the pros.
When should you use hexagonal architecture? Juan Manuel's answer: For small projects (e.g., smartphone apps), hexagonal architecture may be too heavy-weight. "For medium/large projects, which are supposed to have a long life cycle, and are supposed to be modified many times during their lifetime, using Hexagonal Architecture will be worth it in the long-term."
The "medium/large projects" describe pretty much the products I am building. I would answer the question even more affirmative than Juan Manuel. You should have a very very good reason not to use hexagonal architecture for Qt embedded systems.
Mark Seemann: Composition Root
Hexagonal architecture creates its adapters in the composition root and uses constructor injection to pass the driven adapters to the constructors of the driver adapters. Of course, the constructors use the port interfaces as their arguments and not the concrete adapters. This is a nice application of the Composition Root pattern.
Qt applications can use signal-slot connections in addition to constructor injection for the communication between adapters. As signals and slots are syntactic sugar for callback functions, they are a case of function injection. I used both constructor and function injection to break the dependency cycles in a Qt application - not knowing at the time that I was refactoring the application's main() into a composition root.
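Here is a minimal sketch of function injection with signals and slots in the composition root. MachineProxy and StatusView are hypothetical classes; the point is that neither class knows the other - main() creates both and injects the callback with QObject::connect().

    #include <QCoreApplication>
    #include <QDebug>
    #include <QObject>

    class MachineProxy : public QObject
    {
        Q_OBJECT
    public:
        void simulateSpeed(double rpm) { emit speedChanged(rpm); }
    signals:
        void speedChanged(double rpm);
    };

    class StatusView : public QObject
    {
        Q_OBJECT
    public slots:
        void showSpeed(double rpm) { qDebug() << "Speed (rpm):" << rpm; }
    };

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);

        MachineProxy machine;   // e.g., part of a driven adapter for machine communication
        StatusView view;        // e.g., part of a driver adapter showing machine data

        // Function injection: the composition root connects the signal to the slot.
        QObject::connect(&machine, &MachineProxy::speedChanged,
                         &view, &StatusView::showSpeed);

        machine.simulateSpeed(2200.0);  // calls StatusView::showSpeed(2200.0)
        return 0;
    }

    #include "main.moc"  // required because the Q_OBJECT classes live in main.cpp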
Mark gives a detailed description of the Composition Root pattern in his book Dependency Injection Principles, Practices and Patterns. The chapter describing the pattern is freely available.
Liz Keogh: TDD All the Things (Video, 5:06:02 - 5:44:55)
In her talk at the First International Test-Driven Development (TDD) Conference, Liz applied TDD and refactoring not to code but to people. Yes, to people! This unique perspective made her talk stand out from the other talks. Alex Bunardzic, the creator of the conference, summarised this well: "Liz starts from people and drives to the code. [Others] start from code and drive to the people."
When you implement a new feature using TDD, you'll iterate through these three steps until you are done (a minimal code sketch follows the list).
Amplify the positives. You make your existing code better (but not perfect) by refactoring it. Refactoring emphasises the strengths of your code.
Describe desired behaviour. You write a test, which describes the expected behaviour of the code. The test fails (the red TDD state).
Change the behaviour. You write the one or two lines of code that make the test pass (the green TDD state).
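Here is a minimal sketch of one such iteration in plain C++ - a hypothetical clampSpeed() function tested with assert() instead of a test framework.

    #include <algorithm>
    #include <cassert>

    // Step 3 (green): the one or two lines of code that make the test pass.
    int clampSpeed(int rpm, int maxRpm)
    {
        return std::min(rpm, maxRpm);
    }

    // Step 2 (red): the test describing the desired behaviour.
    // It fails until clampSpeed() is implemented.
    void testSpeedIsClampedToMaximum()
    {
        assert(clampSpeed(2500, 2200) == 2200);
        assert(clampSpeed(1800, 2200) == 1800);
    }

    int main()
    {
        testSpeedIsClampedToMaximum();
        // Step 1 (amplify the positives): with the test green, refactor the
        // existing code, e.g., move clampSpeed() into the class where it belongs.
        return 0;
    }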
Corresponding to these three TDD steps, Liz presents "3 models of feedback to help people change their behaviour". The obvious example is that you coach a team to adopt TDD for development.
One slice of bread. You commend people on doing the right things, even on small things. If developers write tests first, thank them with a "Well done!". If someone refuses to write tests, find another person who does and praise this person. You reinforce habits with positive feedback (TDD's green state).
Sandwich model done well. The "feedback sandwich" surrounds the description of the desired outcome with positive feedback (amplifying the positives). You redirect developers to the expected behaviour (TDD's red state), like writing tests first. You do not write the tests for the developers, but let them do it themselves. They change their behaviour of their own accord to reach green again.
Radical candour. You drop the nice wrapping of the sandwich model and tell the developer directly what the desired behaviour is. You may even take the developer through a TDD cycle (e.g., using pair programming) to demonstrate what you want. You can use radical candour only when you have done the sandwich many times before. This kind of feedback "must come from a place of care".
Liz summarises her feedback models aptly: "It's the same as TDD. Anchor [and amplify] what you value, describe the behaviour that you want and then the [person's behaviour] changes."
At 53 years old, I still need a reminder from time to time (thanks Liz 🙏) that many people cannot be persuaded by rational arguments to change their behaviour. The discussion about (not) getting vaccinated against Covid-19 is an excellent example (see Ezra Klein's op-ed What if the Unvaccinated Can't Be Persuaded in The New York Times). Rational arguments are the fast and direct way to changed behaviour. Sometimes they work, often they don't.
The more promising approach is to appeal to people's emotions - and that's what Liz's feedback models do. They create a psychologically safe working environment, which "is the #1 indicator for high-performing teams". They are also an application of systems thinking: you can only guide people in the desired direction by changing the context in which they act. You can raise the benefits of changing or the costs of not changing.
Always remember: people only change their behaviour of their own accord.
You can follow Liz on Twitter or on her website.
GeePaw Hill: On (Not) Using Mocking Frameworks
I like GeePaw for not mincing words. In this post, he argues (convincingly, I think) why the use of mocking frameworks or "automockers" is a bad idea.
Automockers make it "easy to write pseudo-tests", which test the mock and not the real code.
The code of automockers is hard to understand, and so are the error messages. Hence, people tend to copy and paste mock code. And "we all know what happens when we do this".
Automockers "make it easy to write (possibly pseudo-) tests" against legacy code instead of refactoring the code.
Automockers are often used to test behaviour, say, of a communication protocol or of a firmware updater. "Behavior-testing, as opposed to result-testing, is highly fragile-under-change. Refactoring breaks tests much more often when we test behavior."
You can mitigate the first three issues by writing the mock code by hand. The fourth issue is inherent to behavioural tests; it won't go away by handcrafting the mock. GeePaw is against using mocking frameworks, not against using mocks, although mocks should be used sparingly. In chapter "10. The Mock Object" of his book Test-Driven Development for Embedded C, James Grenning gives a step-by-step explanation of how to use TDD and a mock to implement a flash driver.
In a recent Twitter thread, GeePaw made an interesting remark (emphasis mine): "5) I avoid fakes - test-doubles, mocks, spys, auto-mocks, there are million words for a million flavors - in general, tho they're occasionally unavoidable. The techniques of hexagonal architecture have helped me with this a great deal."
Unfortunately, GeePaw doesn't elaborate on how hexagonal architecture helps him avoid fakes, so I'll try to shed some light on it. One of the adapters plugged into a port of the hexagonal architecture is typically a mock, so that you can test the business logic against this mock. As the adapter inherits the port interface (the Target in the Adapter pattern), the mock is just another implementation of the port interface. Another advantage of hexagonal architecture is that the adaptees can be tested separately - without the adapter and certainly without the business logic.
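For illustration, here is a minimal sketch of a handwritten test double - an in-memory fake, no framework needed - that is just another implementation of a hypothetical driven port ISettingsPort. The test checks the stored result rather than verifying a call sequence, in line with GeePaw's preference for result-testing over behaviour-testing.

    #include <cassert>
    #include <map>
    #include <string>

    // Hypothetical driven port for persisting settings.
    class ISettingsPort
    {
    public:
        virtual ~ISettingsPort() = default;
        virtual void save(const std::string &key, int value) = 0;
        virtual int load(const std::string &key) const = 0;
    };

    // Handwritten fake: just another adapter implementing the port,
    // backed by an in-memory map instead of a real database or file.
    class InMemorySettings : public ISettingsPort
    {
    public:
        void save(const std::string &key, int value) override { m_values[key] = value; }
        int load(const std::string &key) const override { return m_values.at(key); }
    private:
        std::map<std::string, int> m_values;
    };

    // Hypothetical business logic using the port.
    class BrightnessController
    {
    public:
        explicit BrightnessController(ISettingsPort &settings) : m_settings(settings) {}
        void setBrightness(int percent) { m_settings.save("brightness", percent); }
        int brightness() const { return m_settings.load("brightness"); }
    private:
        ISettingsPort &m_settings;
    };

    // Result-testing: the test checks the stored result, not the call sequence.
    int main()
    {
        InMemorySettings settings;
        BrightnessController controller(settings);
        controller.setBrightness(75);
        assert(controller.brightness() == 75);
        return 0;
    }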
You can follow GeePaw on Twitter, on his website or via his newsletter.