23 replies
October 2019

kcon

Nice blog post, Tyler! One C unit test framework I enjoyed using at Vaunt which addresses the issue you mentioned of state leaking from one unit test to the next is Criterion (https://github.com/Snaipe/Criterion). It seamlessly runs each unit test in a separate process to help isolate them from each other, and it also supports pure C tests so no need for wrapping includes in externs. :grinning:

October 2019

tyler

One C unit test framework I enjoyed using at Vaunt which addresses the issue you mentioned of state leaking from one unit test to the next is Criterion

Thanks for the tip! It looks quite good and would definitely help out with the state bubbling into other tests.

February 2020

Thekenu

Hey Tyler, the examples rely on an x86 port of littlefs. I was thinking, isn’t this functionally equivalent to a “fake”? It’s a piece of code that compiles on x86 and has the same interface as what we use on the embedded target.

1 reply
March 2020 ▶ Thekenu

tyler

You are right. The x86 port of littlefs can be considered a fake. The port is mostly contained within the emubd module, which translates littlefs API calls into host filesystem calls.
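To make the idea concrete, here is a minimal sketch of what a block-device fake can look like. The real emubd is more involved (it backs storage with the host filesystem); the names below are illustrative, and this version just uses a RAM buffer in place of flash.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE  512
#define BLOCK_COUNT 16

// The "flash" is just host memory; no hardware required.
static uint8_t s_storage[BLOCK_COUNT][BLOCK_SIZE];

// Same shape of interface a real block-device driver would expose.
int fake_bd_read(uint32_t block, uint32_t off, void *buf, uint32_t len) {
    assert(block < BLOCK_COUNT && off + len <= BLOCK_SIZE);
    memcpy(buf, &s_storage[block][off], len);
    return 0;
}

int fake_bd_prog(uint32_t block, uint32_t off, const void *buf, uint32_t len) {
    assert(block < BLOCK_COUNT && off + len <= BLOCK_SIZE);
    memcpy(&s_storage[block][off], buf, len);
    return 0;
}
```

The filesystem under test links against these functions instead of the real driver, so the whole stack above the block device runs unmodified on the host.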

March 2020

Thekenu

Thanks for the clarification! Can you also give recommendations on how to organize the embedded target directory alongside the test directory?

For the test directory, you suggested something like:

test
|— mock
|— fake
|— src
|— header_overrides
|— makefiles
|— (etc)

Should the embedded target code live in a neighbouring directory to test? E.g.

MyProj
|— embedded
| |— mutex // e.g. mutex/mutex.h from your examples
| |— src
| |— inc // e.g. kv_store.h from your examples
|
|— Test

Should there be a top-level Makefile under MyProj? How can we elegantly handle using two different compilers in a project (e.g. arm-none-eabi-gcc for the embedded build and a native x86 compiler for CppUTest)?

1 reply
March 2020

tyler

The way I’ve seen it done in the past is to have one Makefile for the ARM build and another for the unit test build. This then requires:

cd embedded && make -j4
# and
cd test && make -j4

If you did want a top-level Makefile to control all of these sub-Makefiles, something like the following would work, but I don’t usually do this; it makes wrangling all of the options and argument handling more difficult and confusing.

# Without .PHONY, make would see the existing test/ and embedded/
# directories and consider these targets already up to date.
.PHONY: test embedded

test:
    $(MAKE) -C test/

embedded:
    $(MAKE) -C embedded/

If you do opt for separate sub-directories and separate Makefiles, I usually use something like Invoke to wrap everything, using the strategies discussed in another post of mine.

August 2020

danielhep

I’m doing testing with CppUTest, but I’m encountering an issue with the device drivers, which try to call some assembly instructions and fail since we’re on an x86 machine rather than the MCU. What is the proper way around this? I need to keep the driver headers included since I use some of the typedefs in them.

1 reply
September 2020 ▶ danielhep

tyler

Hey @danielhep! Great question.

What I’ve used in the past is to create a header_overrides directory alongside my mocks, stubs, and fakes directories. For reference, this section mentions it (briefly): Embedded C/C++ Unit Testing Basics | Interrupt.

Place any header in that folder, matching the structure of the include path you are trying to override, and then put that directory before any of the include paths you are trying to override. This ensures the compiler picks up your overridden header before the MCU one.

For example, if you are trying to override a header with a fake, and the header is included in the form of

#include "stm32/inc/stm32f4xx.h"

then make this the structure within header_overrides

$ tree header_overrides
header_overrides
└── stm32
    └── inc
        └── stm32f4xx.h
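The search-order requirement can then be enforced in the test Makefile. A sketch (the paths below are illustrative, not from the post’s example project):

```makefile
# Order matters: header_overrides is searched first, so
# "stm32/inc/stm32f4xx.h" resolves to the override copy
# instead of the real SDK header.
CPPFLAGS += \
  -Itest/header_overrides \
  -Ivendor/stm32_sdk
```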

EDIT: I missed your last sentence:

I need to keep the driver headers included since I use some of the typedefs in them.

My solution has always been to redefine the typedefs in the override header. If things change, the unit tests will just break but that’s usually fine. The worst would be if the overridden header changed and things didn’t break.
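A minimal override header might then carry just the typedefs (and fake register structs) that the code under test needs. Everything below is an illustrative stand-in, not the vendor’s actual definitions:

```c
// test/header_overrides/stm32/inc/stm32f4xx.h
// Fake replacement for the vendor header: no inline assembly, no
// memory-mapped register addresses, just the types the code uses.
#ifndef STM32F4XX_OVERRIDE_H
#define STM32F4XX_OVERRIDE_H

#include <stdint.h>

// Redefined to match the vendor header; if the real definitions
// drift, the unit tests should break loudly, which is the point.
typedef enum { DISABLE = 0, ENABLE = 1 } FunctionalState;
typedef enum { RESET = 0, SET = 1 } FlagStatus;

// A peripheral register block can be faked as a plain struct in RAM.
typedef struct {
    volatile uint32_t MODER;
    volatile uint32_t ODR;
} GPIO_TypeDef;

#endif // STM32F4XX_OVERRIDE_H
```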

September 2020

hankrof

Hi, I found that the acronym for test driven development is incorrect:

Through proper use of unit tests, and especially while using practices from Test Driven Development “(TTD)”

which should be “TDD”.

1 reply
September 2020 ▶ hankrof

tyler

Woops, thanks for the report. Just pushed a fix. https://github.com/memfault/interrupt/commit/b64d3b0309b61f94894d04359f2aadf955a0500f

April 2021

easilok

Hello.

First, thanks for your articles on this subject. I’m an embedded developer who recently found that I’m missing unit testing to write better code without bugs or the painful task of debugging.

I found your articles, and they are well explained, with good examples, and I was convinced.

However, I would like to ask a small question. I found your way of organizing files easy to understand and an excellent approach. However, on the CppUTest site they point to a repo (GitHub - jwgrenning/cpputest-starter-project: gcc cpputest starter project, with instructions to help get your legacy code into cpputest for the first time) from the author of the renowned book on this subject. I found that file organization a little harder to understand, and it does not seem to use separate makefiles for different tests, as you do.

I know I’m just starting out, and I’m reading that book, so maybe I’ll understand more later, but can you tell me what advantages that structure has? Or do you find your way better?

Sorry for the long post, and reactivation of this topic.
Thanks for all your help.

1 reply
April 2021 ▶ easilok

tyler

I’m glad you liked the post! And no worries on commenting on an old post, this post continues to be updated as I find new information, as do all of our posts here. Comments are always encouraged.

I know I’m just starting out, and I’m reading that book, so maybe I’ll understand more later, but can you tell me what advantages that structure has? Or do you find your way better?

The example repo you linked assumes that you’ll only need one compilation script and one compiled unit which runs all of your tests, which I’ve found to only handle the simplest of cases.

Let’s say you are testing a bunch of different pieces of your system, and we have this analytics module that is used in a number of places throughout the code base. In other words, our project is more than a single .c file.

You are going to want to stub out the analytics module in most of your tests, probably fake it out in a few of them, maybe use a mock in a few more, and actually use the real analytics.c subsystem in a single unit test. Since you can only compile in a single stub/fake/mock/real implementation for a given module, you need to choose which one you include for which test.

To satisfy each of these requirements, you need to compile three different binaries and run them each separately. That is the primary reason for using multiple Makefiles.
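As a sketch, two of those per-test Makefiles might differ only in which analytics source they pull in (the paths and file names below are illustrative):

```makefile
# makefiles/Makefile_kv_store.mk — most tests compile in a stub
SRC_FILES += \
  src/kv_store.c \
  test/stub/stub_analytics.c

# makefiles/Makefile_analytics.mk — the one test of the real module
SRC_FILES += \
  src/analytics.c
```

Each Makefile produces its own test binary, and the test runner builds and runs all of them.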

I prefer the structure I’ve used in the Interrupt post example, as it’s the structure I’ve used at previous companies and used in complex projects with over 100 tests and 100+ modules as well.

1 reply
April 2021 ▶ tyler

easilok

Thank you for your quick answer.

That’s what I was thinking. I like your structure because it creates a kind of sandbox for each test: you can test each .c module with whatever stubs, fakes, and mocks are enough to test that module, without any kind of interference. The other structure seems like it will always be a kind of integration test; you can test any function, but most of them with all the other modules in the compilation.

I’ll dive more into this subject, but thanks for all your help.

October 2021

RyanEdward

@tyler At the beginning you made the distinction between testing ‘embedded software’ and ‘firmware hardware and drivers’. By ‘embedded software’ do you mean a project that involves some sort of kernel e.g. ChibiOS or linux? By ‘firmware hardware and drivers’ do you mean bare metal?
I’ve read recent Memfault posts that use Renode to test. Would you use unit test frameworks like CMocka for ‘embedded software’ and Renode’s Robot framework for ‘firmware and hardware drivers’? I’m a bit confused as to what should be used when.
Thanks

1 reply
October 2021

t444

Hi, thank you for the post.

How do you test static functions?

1 reply
October 2021 ▶ RyanEdward

tyler

By ‘embedded software’ do you mean a project that involves some sort of kernel e.g. ChibiOS or linux? By ‘firmware hardware and drivers’ do you mean bare metal?
I’ve read recent Memfault posts that use Renode to test. Would you use unit test frameworks like CMocka for ‘embedded software’ and Renode’s Robot framework for ‘firmware and hardware drivers’?

It’s a good question, and is one that is generally a gray area. I’ll do my best to answer.

ChibiOS and Linux would be very hard items to test themselves, as they are built upon event loops, interrupts, likely close hardware interaction, and when you include a single source file of ChibiOS, you likely need to pull in the entire project.

What I typically mean by embedded software is the layers on top of the hardware and OS.

The modules that you are unit testing should also have a small number of dependencies. They should take in inputs and spit out outputs, and that is what is tested.

By firmware, hardware, and drivers, yes, I typically mean bare metal in these situations. I have found this type of code very difficult to unit test, and it’s usually more work than it’s worth. I’d probably even push for flashing specific firmware builds for on-device validation tests (simple ones, though).

With Renode, it does open up the possibility to test lower level bits of your stack! I would still say it’s probably not worth it. Once an OS, driver, or hardware interaction module works, it works. It doesn’t contain more bugs, and the bugs that do remain always seem to be timing bugs and/or undocumented “features” from the manufacturer, which Renode or QEMU might not even help out much with.

With Renode or QEMU, I’d focus more on integration tests, which pull in almost the entire firmware and run it as a whole with a few of the lower-level modules mocked or faked out (the communication stack, for example, could be made to work over USB instead).

CMocka for ‘embedded software’ and Renode’s Robot framework for ‘firmware and hardware drivers’?

Yes, that sounds right.

October 2021

tyler

Oops, I think I meant to add this to the post!

I typically define something like the following and then use this instead of static in my codebase for those functions that need to be unit tested.

#if UNIT_TESTING
#define TEST_STATIC 
#else
#define TEST_STATIC static
#endif

Then, for unit testing, TEST_STATIC becomes a no-op and your functions become global, but for normal firmware builds, static is inserted.
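As a sketch of how that reads at a call site (the checksum function is a made-up example, and the unit test build would pass `-DUNIT_TESTING=1`):

```c
#include <stdbool.h>

// Normally this macro lives in a project-wide header; only the unit
// test build defines UNIT_TESTING.
#if UNIT_TESTING
#define TEST_STATIC
#else
#define TEST_STATIC static
#endif

// File-scope (static) in the firmware build, but globally visible to
// a unit test when compiled with -DUNIT_TESTING=1.
TEST_STATIC bool is_checksum_valid(unsigned char sum) {
    return sum == 0;
}
```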

1 reply
October 2021 ▶ tyler

t444

Thank you for the response. And what do you think about including the C file of the module under test directly into the file with the test cases? For example:
https://github.com/whot/unit-tests-for-static-functions

1 reply
October 2021 ▶ t444

tyler

Thanks for the link! I have never seen anyone try it this way.

Honestly though, it looks like a huge hack (and the repo itself says it is one), and I am pretty confident you’ll fight it more than it helps when you work on larger projects.

The strategy I mentioned was used at Pebble when testing our firmware, and the project itself was hundreds of files. It worked well, and I wouldn’t try to fix something that isn’t broken.

October 2021

t444

Thank you for the response. I think so too, but it is interesting to hear different points of view. The strategy used at Pebble looks pretty good.

January 2022

giusmod

Nice article. Unfortunately, my development machine is Windows and I’m used to working with it.
I tried to compile CppUTest with MinGW/CMake/GNU Make for Windows and it worked; I now have the two .a libraries.

I tried to use MakefileWorker.mk, but it seems it isn’t compatible with standard Windows tools; it uses uname, mktemp, and so on. Any chance of porting that Makefile so it works with a standard Windows command line?
Any suggestions on how to use CppUTest under Windows without using Cygwin?

September 2022

KaDw

We have a library that is a sort of Hardware Abstraction Layer for different families of MCUs (at least they are all Cortex-M devices). Basically, the code is a bunch of SDK function calls from the MCU manufacturer with very occasional logic (checking whether a parameter is NULL, etc.).

I’m thinking about how we can test it to make adding new platform support easier. Unit tests are not the best fit. Now suppose we want to support a new platform: manually testing whether each peripheral works is a bit of a daunting task, but I think it is the only way to do it. I suppose this part of the code won’t change much over time.

April 2023

theJoel

Great intro, @tyler. I’m really hoping to lean into automated unit tests in our new project.

One problem I’m seeing in my initial dabbling (Ceedling/Unity) is build errors around datatype mismatches between the 32-bit MCU (Cortex-M4) embedded code and the x64 test code. Clearly a uint32_t is the same size on all architectures… but (e.g.) time_t is not.

I also have a [probably unfounded] concern that the gcc-7.x used for the target hardware will do something subtly different than gcc-12.x. Should I attempt to use an older gcc for the test code?

Is there a generic way to build the test code as a 32-bit application, or would you deal with specific issues – i.e. sizeof(time_t) – as they arise?
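For the 32-bit idea, I was considering something like this in the test Makefile (assuming the host toolchain has multilib support installed, e.g. the gcc-multilib package on Debian/Ubuntu):

```makefile
# Compile and link the unit tests as 32-bit so pointer sizes, long,
# and time_t behave more like they do on the Cortex-M4 target.
CFLAGS  += -m32
LDFLAGS += -m32
```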

Kind regards,
+Joel.