A Practical Guide to BLE Throughput | Interrupt

Bluetooth Low Energy (BLE) was first added to smartphones in 2011 as part of the iPhone 4S. Since then it has become the de-facto way for smartphones to communicate with external devices. While BLE was initially intended to send small amounts of information back and forth, today many applications stream large amounts of data, such as sensor data for tracking steps, binaries for firmware updates, and even audio. For these types of applications, the speed of transfer is very important.


This is a companion discussion topic for the original entry at https://interrupt.memfault.com/blog/ble-throughput-primer

Very nice overview! Thanks. Do you know where I could find a similar write-up focused on reliability? I presume that longer PDUs are more likely to encounter a bit error, and then resend the whole thing? Do you know what the retry logic is? Retry timeout and how many retries until connection abort? Is there FEC other than the new BLE 5 feature you mentioned? Thanks.

Very nice overview!

Thanks, glad to hear you enjoyed it!

Do you know where I could find a similar write-up focused on reliability?

hrm, I’m not aware of any write-ups in particular, but all the gory details can be found in the Bluetooth Core Specification that is available here.

I presume that longer PDUs are more likely to encounter a bit error, and then resend the whole thing? Do you know what the retry logic is? Retry timeout and how many retries until connection abort?

The Bluetooth Link Layer is “reliable” (every LE packet must be ack’d by the remote peer before another packet is sent).

Unack’d packets will continue to be resent until the LE “Supervision Timeout” expires. This is configurable at connection time and can be any value from 100 ms up to 32 seconds, but a typical timeout is usually around 6 seconds or so. If no packets have been received in that amount of time, the connection will time out (with a CONNECTION TIMEOUT (0x08) disconnect reason).
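To make that concrete: in the HCI LE Create Connection command, the supervision timeout is expressed in units of 10 ms, with the spec-mandated range 0x000A (100 ms) to 0x0C80 (32 s). A quick sketch of the conversion (the helper name is mine, not an API):

```python
def supervision_timeout_units(timeout_ms: int) -> int:
    """Convert a desired supervision timeout in ms to the 10 ms units
    used by the HCI LE Create Connection command."""
    units = timeout_ms // 10
    if not 0x000A <= units <= 0x0C80:  # 100 ms .. 32 s per the spec
        raise ValueError("supervision timeout out of range (100 ms - 32 s)")
    return units

print(supervision_timeout_units(6000))  # a typical 6 s timeout -> 600 units
```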

Is there FEC other than new BLE 5 feature you mentioned?

hrm, not that I can think of. During a connection, devices may send LL_CHANNEL_MAP_IND messages to control which LE data RF channels are in use. Many LE chips do this to selectively stop using channels experiencing high error rates in a particular environment.
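For reference, the ChM field carried in LL_CHANNEL_MAP_IND is a 5-byte little-endian bitmask, one bit per data channel index 0–36 (the top three bits are reserved). A small sketch of decoding it (the function name is just for illustration):

```python
def used_channels(chm: bytes) -> list:
    """Decode the 5-byte ChM field of LL_CHANNEL_MAP_IND into the
    list of data channel indices (0-36) still in use."""
    assert len(chm) == 5
    bits = int.from_bytes(chm, "little")
    return [ch for ch in range(37) if bits & (1 << ch)]

# All 37 data channels enabled:
print(len(used_channels(bytes([0xFF, 0xFF, 0xFF, 0xFF, 0x1F]))))  # 37
```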

Thanks, do you think it would be fair to assume that using a smaller PDU would transport the same amount of data more reliably, at the expense of lower throughput? My logic is that a longer PDU is more likely to encounter a bit error, so more likely to hit enough successive failures to trigger CONNECTION_TIMEOUT. A smaller PDU requires more sends to transport the same amount of data, but each success resets the timeout/retry logic. Or is there lower-level logic that would contradict this assumption?
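The intuition checks out if you model it: assuming independent bit errors at a fixed bit error rate (a simplification, since real interference is bursty), the probability a packet is corrupted grows with its length:

```python
def packet_error_rate(ber: float, payload_bytes: int) -> float:
    """Probability that at least one bit of a packet is corrupted,
    assuming independent bit errors at rate `ber`."""
    bits = payload_bytes * 8
    return 1.0 - (1.0 - ber) ** bits

# At BER = 1e-4, a 251-byte PDU fails roughly 18% of the time,
# while a 27-byte PDU fails only about 2% of the time:
print(round(packet_error_rate(1e-4, 251), 3))
print(round(packet_error_rate(1e-4, 27), 3))
```

So yes, shorter PDUs retransmit less data per failure and each ack resets the supervision timer, at the cost of more per-packet overhead.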

Hi @tim @chrisc

BLE 5 has three different PHY configurations:

  • LE 1M PHY
  • LE 2M PHY
  • LE Coded PHY

The first two are uncoded, meaning there is no FEC. The GFSK modulation with a modulation index of 0.5, as per the specification, is capable of reaching the receiver with the throughput BLE offers at that range.

The LE Coded PHY is for the LE long range feature, up to 1 km. In that case, convolutional FEC coding adds error-correction bits to the actual payload, resulting in lower throughput for the actual payload.
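Putting numbers on that: the uncoded PHYs run at 1 and 2 Msym/s with 1 bit per symbol, while the Coded PHY keeps a 1 Msym/s radio but spends 2 or 8 symbols per bit (S=2 and S=8), so the raw bit rate drops to 500 and 125 kb/s. A quick sketch of what that means for time on air (payload bits only, ignoring headers, IFS gaps and acks):

```python
# Raw over-the-air bit rates for the BLE 5 PHYs (radio bit rate,
# not application throughput).
PHY_BIT_RATE_KBPS = {
    "LE 1M": 1000,        # 1 Msym/s, uncoded (1 bit/symbol)
    "LE 2M": 2000,        # 2 Msym/s, uncoded
    "LE Coded S=2": 500,  # 1 Msym/s, 2 symbols per bit of FEC
    "LE Coded S=8": 125,  # 1 Msym/s, 8 symbols per bit of FEC
}

def airtime_us(payload_bytes: int, kbps: int) -> float:
    """Time on air for just the payload bits at a given PHY rate."""
    return payload_bytes * 8 * 1000 / kbps  # microseconds

print(airtime_us(251, PHY_BIT_RATE_KBPS["LE 2M"]))       # 1004.0 us
print(airtime_us(251, PHY_BIT_RATE_KBPS["LE Coded S=8"]))  # 16064.0 us
```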

Thanks a lot for such a detailed overview! There is just one thing I don’t get yet: DLE vs. ATT MTU. In the Optimizing Throughput example you gave, you showed how to increase throughput by choosing a connection interval aligned with the LL packet transfer time, reaching 5 full LL packets of 251 bytes each per connection interval. But if the maximum ATT MTU is 512 bytes, that means we can only utilize 2 full LL packets and 10 bytes of another LL packet per GATT characteristic in one connection interval. So if I understand correctly, the full LL throughput utilization in your example can only be achieved with several GATT characteristics?
Thanks again.
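To sketch the arithmetic behind this question: assuming the 4-byte L2CAP header sits inside the LL payload and DLE allows 251-byte LL payloads, a single maximum-size ATT PDU fragments like so (the constants and helper are just for illustration):

```python
import math

LL_MAX_PAYLOAD = 251  # bytes of LL payload with Data Length Extension
L2CAP_HEADER = 4      # bytes (length + channel ID)

def ll_fragments(att_pdu_bytes: int) -> list:
    """Split one ATT PDU (carried in one L2CAP PDU) into the LL
    payload sizes it occupies on air."""
    total = att_pdu_bytes + L2CAP_HEADER
    n = math.ceil(total / LL_MAX_PAYLOAD)
    sizes = [LL_MAX_PAYLOAD] * (n - 1)
    sizes.append(total - LL_MAX_PAYLOAD * (n - 1))
    return sizes

print(ll_fragments(512))  # [251, 251, 14]
```

So a 512-byte ATT PDU fills two LL packets plus a small third one, which is why several outstanding ATT operations (or characteristics) are needed to keep all 5 LL slots of the interval busy.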