Writing a unit test from scratch for an embedded software project is almost
always an exercise in frustration, patience, and determination. This is because
of both the constraints and the breadth of embedded software: a single package
typically combines hardware drivers, an operating system, high-level application
code, and communication protocols and stacks, and it is usually maintained by a
single team. Given these complexities, the number of dependencies of a single
file can quickly grow out of control.
Hello Tyler. I just want to thank you for this amazing resource. I spent one week setting this up on my project and I am happy to share that my project now has 98% coverage. I am still wrapping my head around mocks, and will hopefully reach 100% coverage soon.
Thank you for such a great post. It’s very helpful.
I think it’s better to use a simple, tiny test framework such as uCUnit to run tests on a real board. With minimal resources, we can make hardware tests simpler and automated.
I was thinking of integrating a board with a UART console into a test-runner PC. May I ask for a few suggestions? Should I build a piece of software that monitors the UART console output to tell whether a round of tests succeeded?
I have been doing this stuff for years with great success. These topics in particular are how I’ve come to think of the bottom two layers of the Agile test pyramid – unit tests and integration tests (the top layer is UI/user-driven end-to-end).
I’ve learned that the most benefit from the unit-test layer comes from taking it to the extreme – minimal code under test; APIs / classes exercised in isolation using appropriate techniques (DI, SRP to make units more testable, fakes/mocks/stubs, etc.) to manage dependencies. This ensures very light, fast tests that will be run frequently enough to be beneficial during development cycles.
Anything beyond this scope is integration testing – even if it is the unit-test suite cross-compiled for execution on the target. The target hardware, cross-compiler, and source (RTOS, libs, etc.) are still being combined such that the execution environment requires addressing those dependencies directly (target configuration, target communication, target/host test-runner scaffolding, etc.). Moreover, the increased cost to run (hardware, execution time relative to pure software) means the tests will execute less frequently (say, only on demand or as part of CI, rather than with every recompile).
“Integration testing” is a large spectrum (small-subset-of-the-system to full-system), and the cost of building a fully automated, CI-enabled framework for the target environment can be substantial. Determining the makeup of the integration platform is critical – what elements will be integrated, how will stimulus be provided / behavior be analyzed, how much “real” hardware is required, etc. Creativity and the 80-20 rule apply here to approach integration testing with a “what is possible, given this subset-system?” attitude instead of focusing on the limitations.
All layers of the pyramid are essential (and not substitutes for each other), and all embedded software engineers should develop skill in applying appropriate techniques (see: your post) at each layer.