Device Firmware Update Cookbook | Interrupt

Great post. Another possible design for the updater is a Trojan horse style: you integrate your new Loader into the binary of the Updater itself, for instance using srecord's `srec_cat` as mentioned before.
With this, the updater runs through automatically. You can also perform additional operations for a particular Loader, since the Updater and Loader are more tightly coupled.

It's a nice design to consider, but I'd really prefer the ESP8266 Arduino core's main design concept (using Section B for receiving and accumulating the newer binary … on completion, the device's next boot performs the firmware update), for two reasons:

I have an application that communicates with a gateway over an RF link. The communication protocol is already secured, and I'd really like to reuse the same protocol architecture for receiving the binaries. That was the first reason.
The second reason: the device's communication with the gateway can be interrupted for any number of reasons: the gateway's Internet connection timing out, an interrupted power supply, etc. So I need my application to receive packets, store them in another section of flash memory (B), then reboot and perform the real update on the next boot.

So it’s a fail-safe design.
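A minimal sketch of that staged, fail-safe flow (all names here are hypothetical: `slot_b` is a RAM buffer standing in for Section B, and the header is written only after the last packet arrives, so a reboot mid-transfer leaves an invalid slot the bootloader simply ignores):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* slot_b simulates the staging section ("Section B") with a RAM buffer. */
#define SLOT_B_SIZE 1024u
static uint8_t slot_b[SLOT_B_SIZE];

/* Header sealed only after every packet has been stored. */
typedef struct {
  uint32_t magic;   /* marks a complete image */
  uint32_t length;  /* payload length in bytes */
  uint32_t crc32;   /* CRC32 over the payload */
} staged_header_t;

#define STAGED_MAGIC 0xB007FEEDu

static uint32_t crc32_simple(const uint8_t *data, size_t len) {
  uint32_t crc = 0xFFFFFFFFu;
  for (size_t i = 0; i < len; i++) {
    crc ^= data[i];
    for (int b = 0; b < 8; b++)
      crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
  }
  return ~crc;
}

/* Store one received packet at its offset within the staged payload. */
static void stage_packet(uint32_t offset, const uint8_t *buf, size_t len) {
  memcpy(&slot_b[sizeof(staged_header_t) + offset], buf, len);
}

/* Transfer complete: seal the slot. This is the last write, so the slot
 * only ever looks valid once the whole image is present. */
static void stage_finalize(uint32_t total_len) {
  staged_header_t hdr = {
      .magic = STAGED_MAGIC,
      .length = total_len,
      .crc32 = crc32_simple(&slot_b[sizeof(staged_header_t)], total_len),
  };
  memcpy(slot_b, &hdr, sizeof(hdr));
}

/* What the bootloader checks on the next boot before committing the update. */
static bool staged_image_valid(void) {
  staged_header_t hdr;
  memcpy(&hdr, slot_b, sizeof(hdr));
  if (hdr.magic != STAGED_MAGIC) return false;
  if (hdr.length > SLOT_B_SIZE - sizeof(hdr)) return false;
  return crc32_simple(&slot_b[sizeof(hdr)], hdr.length) == hdr.crc32;
}
```

On real hardware the `memcpy` calls become flash writes, but the ordering is the key property: the header (and therefore validity) is committed last.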

Do you think I'd need an Updater or a Loader in my design?
Or is it enough to have a simple bootloader that checks the status of the stored binary, validates it, and then decides whether to do DFU or run the application itself?

Thank you for your helpful posts :slight_smile:

Hi @HamzaHajeir, yes in the event you have enough room in your flash for 2 copies of your firmware, A/B updates are great. Most projects I’ve worked on in the past were code space constrained however, so using a Loader was a more pragmatic design in that case.


I see, thank you for your reply.

I'd really set a criterion of having at least 2x the flash space of the total application code.

At a very advanced point, would you recommend transferring only parts of the binary (not the whole image)? That seems very risky in terms of C++ optimizations and underlying framework upgrades…

Say, at a minimum: I store the system and the application in different sectors of memory, so at some point I can transfer just the application, or both. Would that be any good, or would it become unmanageable?

In the event of power loss, RAM will lose its state. To detect those events we use a magic value here as well.

I’m sure you know that there is no guarantee that RAM will reset to 0 or any other defined state on power loss. In the event of a short power loss, it’s quite possible that the shared memory magic value will be preserved, but part of the state will be corrupted, possibly leading to unpredictable behavior (especially as the complexity of the shared memory state increases).
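One way to catch that partial corruption is to guard the whole struct with a CRC in addition to the magic value. A sketch, with hypothetical field names (the real `shared_memory` layout is the article's):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical layout: the CRC field must come last so it can cover
 * every field before it. */
typedef struct {
  uint32_t magic;
  uint32_t boot_count;
  uint32_t flags;
  uint32_t crc;  /* CRC32 over all preceding fields */
} shared_memory_t;

#define SHARED_MEMORY_MAGIC 0x0BADCAFEu

static shared_memory_t shared_memory;

static uint32_t crc32_of(const uint8_t *data, size_t len) {
  uint32_t crc = 0xFFFFFFFFu;
  for (size_t i = 0; i < len; i++) {
    crc ^= data[i];
    for (int b = 0; b < 8; b++)
      crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
  }
  return ~crc;
}

/* Call after every update to the shared state. */
static void shared_memory_seal(void) {
  shared_memory.crc = crc32_of((const uint8_t *)&shared_memory,
                               offsetof(shared_memory_t, crc));
}

/* A surviving magic word with a corrupted payload now fails validation. */
static bool shared_memory_valid(void) {
  return shared_memory.magic == SHARED_MEMORY_MAGIC &&
         shared_memory.crc == crc32_of((const uint8_t *)&shared_memory,
                                       offsetof(shared_memory_t, crc));
}
```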

Fortunately, most microcontrollers provide a way to detect if a power loss has occurred. On the STM32F429 used in this post, it’s RCC_CSR.PORRSTF (“Power On Reset ReSeT Flag”). I’d love to see shared_memory_init start with

if (RCC->CSR & RCC_CSR_PORRSTF) {
  // Power loss has occurred.
  // Clear magic to ensure we initialize shared_memory.
  shared_memory.magic = 0;
  __DMB();  // Possibly not necessary.
  // Clear reset flags.
  RCC->CSR |= RCC_CSR_RMVF;
}

I would also consider making all of the fields in shared_memory volatile to prevent a smart compiler from omitting write operations if it knows the value will never be read (by the current program).
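Qualifying the fields could look like this (a sketch; the field names are assumptions, not the article's):

```c
#include <stdint.h>

/* `volatile` forces the compiler to emit every store, even ones it can
 * prove are never read back by *this* program: the reader is the next
 * program that runs after reset. */
typedef struct {
  volatile uint32_t magic;
  volatile uint32_t flags;
} shared_memory_t;

/* In the real project this lives in a dedicated noinit RAM section via
 * the linker script; a plain global keeps the snippet self-contained. */
static shared_memory_t shared_memory;

static void shared_memory_set_flag(uint32_t flag) {
  shared_memory.flags |= flag; /* this store cannot be optimized away */
}
```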

Thanks for the note @dreiss. You are completely correct and I added your code snippet and a note to the article.

After “playing” a bit with Renode (thanks to your other article!), looking at the function call log, and seeing calls to libc functions there, I wondered: would it be feasible, and possible at all (some convincing of the linker needed…), to share the C and C++ library code between the different parts? For example, keep the C library only in the Loader area, at a guaranteed fixed address, with applications built so that they also assume the library code is at that address. That way, the image size to transfer for DFU would be reduced somewhat. The app would always have to use the same toolchain version, I guess. I haven't looked at how big that code actually is in MCU implementations, or whether it's worthwhile.

Yes! Believe it or not, but we did this at Pebble as we were under extreme code size pressures. IIRC the easiest way to implement it is to create a shim library that redirects the function calls to your fixed addresses, though I suspect you could also do this with a NOLOAD section in your linker script.
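A sketch of the shim-library idea (all names hypothetical; the “fixed address” is simulated with a plain global here, where a real build would pin the table with a dedicated linker-script section such as `__attribute__((section(".rom_exports")))`):

```c
#include <stddef.h>
#include <string.h>

/* Table of function pointers the Loader exports at a known location. */
typedef struct {
  void *(*memcpy_fn)(void *, const void *, size_t);
  size_t (*strlen_fn)(const char *);
} rom_table_t;

/* Loader side: populate the table with its own libc copies. */
static const rom_table_t rom_table = {
    .memcpy_fn = memcpy,
    .strlen_fn = strlen,
};

/* App side: thin shims redirect calls into the Loader's implementations,
 * so the app image ships without its own copy of these functions. */
static size_t app_strlen(const char *s) { return rom_table.strlen_fn(s); }
static void *app_memcpy(void *dst, const void *src, size_t n) {
  return rom_table.memcpy_fn(dst, src, n);
}
```

The app only needs the table's address and layout at link time, which is why both sides must agree on the toolchain/ABI, as noted above.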


For the STM32F1 series, the bootloader may appear to work, especially in Renode, but interrupts will not work. Say the image is placed at 0x8002800, which puts the vector table at 0x8002860. You need to set the ORIGIN in the linker file to 0x8002800 - 0x60 (shifting the image header back), so that the vector table itself lands at the aligned address 0x8002800.

The address used for SCB_VTOR (Vector Table Offset Register) must meet the Cortex-M alignment requirement: the table must be aligned to the next power of two greater than or equal to its size (number of exception entries × 4 bytes). On an STM32F1 with its full set of interrupts, that works out to 0x200 (512 bytes), so the address must be a multiple of 0x200. In the case above, 0x8002800 is divisible by 0x200 and is a valid address. On the other hand, 0x8002860 is not divisible by 0x200 and is therefore not a valid address for the VTOR.

It's also worth noting that the actual size of the vector table depends on the microcontroller's configuration and the number of implemented exception handlers, so other devices may have a different vector table size and different alignment requirements.
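That alignment rule can be checked mechanically. A sketch (the entry count of 84, i.e. 16 system exceptions plus 68 IRQs, is an assumption for a larger STM32F1 part):

```c
#include <stdint.h>

/* Smallest power of two >= x. */
static uint32_t next_pow2(uint32_t x) {
  uint32_t p = 1u;
  while (p < x) p <<= 1;
  return p;
}

/* ARMv7-M rule: the vector table must be aligned to the next power of
 * two >= (number of exception entries * 4 bytes), minimum 128 bytes. */
static uint32_t vtor_alignment(uint32_t num_exceptions) {
  uint32_t align = next_pow2(num_exceptions * 4u);
  return align < 0x80u ? 0x80u : align;
}

static int vtor_addr_ok(uint32_t addr, uint32_t num_exceptions) {
  return (addr % vtor_alignment(num_exceptions)) == 0u;
}
```

For 84 entries, 84 × 4 = 336 rounds up to 512 (0x200), matching the numbers above: 0x8002800 passes, 0x8002860 does not.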

Good stuff!
I did find an error in your code. The code below can only set the flag, not clear it:

void shared_memory_set_dfu_requested(bool yes) {
    // BUG: `yes` is ignored, so the flag can never be cleared.
    shared_memory.flags |= DFU_REQUESTED;
}

It should be something like this:

void shared_memory_set_dfu_requested(bool yes) {
  if (yes) {
    shared_memory.flags |= DFU_REQUESTED;
  } else {
    shared_memory.flags &= ~DFU_REQUESTED;
  }
}

Cheers,

Herman