Pocket article: Debug vs. Release Builds Considered Harmful | Interrupt

Separate “debug” and “release” builds are very common in embedded development. Typically the notion is improved debug capabilities (less aggressive compiler optimizations, more debugging information like logs) vs. highly optimized and hardened production release builds. I’m here to describe disadvantages to this practice, and why it might make sense to consolidate to a single build!


This is a companion discussion topic for the original entry at https://interrupt.memfault.com/blog/debug-vs-release-builds

I respectfully disagree with your advice to avoid separate “Debug” and “Release” builds at all costs, and I disagree that there are few cases where this is actually required. Instead, I would argue that there are very few cases where having the same build for debug and production release is warranted.

I speak from a background of safety-critical software, but these points hold true for all embedded software:
Software goes through several different stages of testing from feature development to production release. Each of these stages requires different hooks and compiler features to support different types of testing: cycle count profiling for worst-case performance proofs, coverage accumulation for branch coverage proofs, alternative library linking for procedural fault injection, precondition and postcondition assertions, and execution history tracing all require different compilation flags and linkages. You cannot build one single binary that allows you to do all these various kinds of testing.
Different binaries are required for testing in an emulator, on a hardware test bench (fixtured jigs), and on target hardware.
If a software defect can result in loss of life, none of these are optional. We typically have several different “integration branches” as software progresses through the different stages of testing before release. Even if your software is not safety-critical, there’s no reason why we should hold ourselves to a lower standard.
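To make the point about stage-specific compilation flags concrete, here is a minimal sketch of how such hooks might be selected at compile time. The macro names (ENABLE_CYCLE_PROFILING, ENABLE_FAULT_INJECTION), the profile_samples buffer, and test_malloc() are hypothetical, and the cycle counter register is Arm Cortex-M specific; this is just one common way to wire such hooks up, not taken from the article.

```c
/* Sketch: stage-specific test hooks selected at compile time. A real
 * project would pass the macros per test stage, e.g. via
 * -DENABLE_CYCLE_PROFILING in that stage's build configuration. */
#include <stddef.h>
#include <stdint.h>

#if defined(ENABLE_CYCLE_PROFILING)
/* On Arm Cortex-M, the DWT cycle counter (0xE0001004) provides a
 * free-running cycle count for worst-case execution time measurements. */
#define DWT_CYCCNT (*(volatile uint32_t *)0xE0001004u)
extern uint32_t profile_samples[16];
#define PROFILE_MARK(slot) do { profile_samples[(slot)] = DWT_CYCCNT; } while (0)
#else
#define PROFILE_MARK(slot) do { } while (0)
#endif

#if defined(ENABLE_FAULT_INJECTION)
/* Route heap allocations through a test double the harness can force to
 * fail, so error paths get exercised deterministically. */
void *test_malloc(size_t size);
#define MALLOC(sz) test_malloc(sz)
#else
#include <stdlib.h>
#define MALLOC(sz) malloc(sz)
#endif
```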


Great points, you are correct! There are plenty of situations where you need a very different build; what I was getting at is avoiding shipping a different build than the one you’ve tested.

The examples you provide are great:

  • “cycle count profiling for worst-case performance proofs”: usually I’ve found I need to use the exact same compiler settings as in the final production build to get valid data, but often I’ll want to slice out a small subsystem for detailed testing, so this makes sense to me!
  • “coverage accumulation for branch coverage proofs”: I think this is usually going to be performed off-target, on a host build (i.e. a unit test build), though I’ve seen it done on target too
  • “alternative library linking for procedural fault injection”: this is a great technique, underutilized! (there’s a small linker --wrap sketch after this list)
  • “precondition and postcondition assertions”: you’re 100% correct, in safety-critical systems runtime asserts will be treated much differently than in non-safety-critical ones.
  • “execution history tracing”: this one is interesting: I think I’d normally want to capture trace data on my production build, so the data matches the production use cases (i.e., same compiler settings). There are trace tools like J-Trace or the Lauterbach Trace32 that can provide this trace data without requiring modifications to the target (assuming the trace pins are free on the test board)
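On the fault-injection point, one common way to do “alternative library linking” with GCC/GNU ld is the --wrap linker option. The --wrap/__wrap_/__real_ mechanism itself is a standard linker feature; the flash_write() function and the g_inject_flash_write_error flag below are made up for illustration.

```c
/* fault_inject_flash.c -- hypothetical wrapper for a flash driver call.
 * Build the fault-injection variant with:  -Wl,--wrap=flash_write
 * The linker then routes all calls to flash_write() here; the original
 * implementation remains reachable as __real_flash_write(). */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

int __real_flash_write(uint32_t addr, const void *data, size_t len);

/* The test harness sets this to force the next write to fail. */
volatile bool g_inject_flash_write_error = false;

int __wrap_flash_write(uint32_t addr, const void *data, size_t len)
{
    if (g_inject_flash_write_error) {
        g_inject_flash_write_error = false;
        return -1;  /* simulate a write failure to exercise error paths */
    }
    return __real_flash_write(addr, data, len);
}
```

Because the wrapper is selected purely at link time, the object files under test can be the exact same ones that go into the production image.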

What I’ve often seen is shipping a production build after only testing other build variants through beta, internal field test, etc. (or with only minimal validation on the production build), and then hitting surprise cases that weren’t exercised before.

I’ve never felt the need to have several distinct build variants. Normally the compiler setting is “optimize for speed” and accompanying automatic tests are developed. When step debugging, -O1 is my best friend, just for the moment. Thanks, Noah, for this great post!
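If you only need friendlier stepping in one spot, GCC also lets you drop the optimization level for a single function (or a whole file via #pragma GCC optimize) without creating a separate build variant. A minimal sketch, where debug_me() is just a hypothetical function being stepped through:

```c
/* GCC-specific: compile just this function at -O0 so locals stay live and
 * single-stepping follows the source, while the rest of the build keeps
 * its production optimization settings. */
__attribute__((optimize("O0")))
static int debug_me(int input)
{
    int intermediate = input * 3;  /* watchable in the debugger at -O0 */
    return intermediate + 1;
}
```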

I agree with the first two points; however, there are two things that I think can make the last point more of a judgment call:

  1. If you’re using libraries, NDEBUG is pretty standard, and you may not have another way (short of modifying the library itself) to enable/disable the debug features. A good example of this is the Eigen library.
  2. Asserts, for us, are treated as runtime code checks, i.e. we don’t use them for errors that can legitimately occur at runtime. The key place where they become valuable is being able to check runtime code correctness in performance-critical sections. In the relaxed constraints of the debug domain, we can typically allow for slower code, and thus more extensive checking (see the NDEBUG sketch after this list). The obvious caveat here, which you have already alluded to, is that if you have any timing-related bugs in the code, the different execution time can make debugging trickier.
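For reference, this is the standard NDEBUG behaviour from &lt;assert.h&gt;: the check runs in builds where NDEBUG is not defined and compiles to nothing when it is (typically the release configuration). The sample_at() helper and its parameters are made up for illustration.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Bounds-checked accessor: the assert() is active only when NDEBUG is not
 * defined, so the debug build pays for the check and the release build
 * does not. */
static int16_t sample_at(const int16_t *samples, size_t len, size_t idx)
{
    assert(samples != NULL && idx < len);
    return samples[idx];
}
```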

So if you do have a need for multiple builds, it is really helpful to make sure that you run testing on all of them.