How to transfer pictures over the Sigfox network?

Yesterday, in a communication about the Securitas Direct deal, a small phrase woke up the whole Sigfox community:

It announced, in a roundabout way, the arrival of 600bps support for Europe (this is already the standard speed in North America) to support picture transfer over the LPWAN network. That said, even at 600bps, transferring a picture is a bit complex… let’s see what we can do with this:

The complete text, translated into English, is the following:

Cellnex Telecom and Sigfox will multiply by six the current capacity of its 0G network of Internet of Things (IoT) to provide it with more features, among which are the ability to transmit images, send audio messages and optimize the reception of messages issued in movement. They will also extend their coverage to Portugal, becoming the official operator of the network in both countries.

Multiplying the capacity by 6 may correspond to a change from 100bps to 600bps in the communication. By reducing the communication time by a factor of 6, we can also expect a better reception rate in movement, as indicated. For details, you can take a look at my previous post about the impact of mobility on the Sigfox network.

That said, even at a 600bps communication rate, transferring a picture is still a big deal. There are two different approaches:

  • Try to transfer 1 picture immediately
  • Try to transfer 1 picture within a day

Here is the big difference: in the first case you can manage an immediate alert, confirming it or not with a really low-quality image. In the second case you can expect better quality, possibly for post-event investigation.

Use case

Let’s consider the following picture, which a camera in an IoT device could take and have to report:

This picture is 640×480 pixels, full color, and its JPEG size is 63,394 bytes.

Transmit 1 picture immediately

With a 1% duty cycle at 600bps, you can transmit 432 bytes over the Sigfox network before having to stop transmitting for an hour: 36 messages of 12 bytes each. This number of bytes is too low for transmitting any usable picture.

We could consider sending a picture an inaccessible goal, but we can play with a second parameter: the replication factor (N). The replication factor is the number of times a message is repeated on the Sigfox network. It improves the QoS, but on a high-density network it is not mandatory. By changing this setting from N=3 (1 transmission + 2 repeats) to N=1 (1 transmission), the number of bytes you can transmit in one shot while respecting the duty cycle becomes 432 bytes × 3 = 1296 bytes.
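The duty-cycle budget can be sketched numerically. This is a rough model, not the official Sigfox frame format: it assumes ~13 bytes of per-frame overhead on top of the 12-byte payload, which is enough to reproduce the 72 / 432 / 1296-byte figures.

```python
# Rough duty-cycle budget model. The ~13 B of per-frame overhead is an
# assumption of this sketch, not an official Sigfox figure.

DUTY_CYCLE_S = 36            # 1% of one hour
OVERHEAD_BYTES = 13          # assumed header/sync/CRC overhead per frame

def payload_budget(bitrate_bps: int, n: int, payload_bytes: int = 12) -> int:
    """User-payload bytes transmittable per hour at `bitrate_bps`
    with replication factor `n` (each message is sent n times)."""
    frame_bits = (payload_bytes + OVERHEAD_BYTES) * 8
    messages = (DUTY_CYCLE_S * bitrate_bps) // (frame_bits * n)
    return messages * payload_bytes

print(payload_budget(100, 3))   # 72
print(payload_budget(600, 3))   # 432
print(payload_budget(600, 1))   # 1296
```

Doubling the bit rate doubles the hourly budget, and dropping repeats from N=3 to N=1 triples it, which is exactly the 432 → 1296 jump described above.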

After a few quick tests, the best compression solution that is also easy to use is WebP. This choice is not an ideal one, as the memory and CPU power needed to compress a WebP image do not really fit IoT hardware. Based on different discussions I have had, a solution based on SPIHT could be more promising. That said, WebP illustrates well what can be done.

With this compression, and by reducing the size of the picture, we can reach the 1.2KB goal. The following image is a 198×149 picture with a binary size of 1209 bytes: (shown here in PNG for WordPress display, so you need to trust me)

This image can be transmitted in less than 36 seconds of air time over the Sigfox network, respecting the 1% duty cycle, at 600bps with N=1.
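To actually send such an image, it has to be split across many 12-byte uplinks and reassembled on the backend. A minimal sketch, assuming a 1-byte sequence number of my own devising (not part of any Sigfox specification) so the receiver can reorder frames:

```python
# Split a compressed image into 12-byte Sigfox uplink payloads.
# The leading 1-byte sequence number is an assumption of this sketch;
# it wraps at 256, which is fine here since we send only 110 frames.

def to_frames(image: bytes, payload_size: int = 12) -> list[bytes]:
    chunk = payload_size - 1                      # room left after the index
    n_frames = (len(image) + chunk - 1) // chunk
    return [bytes([i % 256]) + image[i * chunk:(i + 1) * chunk]
            for i in range(n_frames)]

def reassemble(frames: list[bytes]) -> bytes:
    ordered = sorted(frames, key=lambda f: f[0])  # valid while < 256 frames
    return b"".join(f[1:] for f in ordered)

frames = to_frames(b"\x42" * 1209)                # the 1209-byte WebP example
print(len(frames))                                # 110 frames of <= 12 bytes
```

With 11 useful bytes per frame instead of 12, the 1209-byte image needs 110 uplinks rather than 101, so any real framing scheme eats a little into the duty-cycle budget.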

We can also see what we can get with N=3; the duty-cycle-compliant image size is 432 bytes. Here is what we get (426 bytes) with a 96×72 pixel picture:

Even if these pictures are not really nice, you can easily identify whether a malicious person has entered the room.

I assume there are many possible improvements, such as reducing the number of colors and spending more time on the compression algorithm. The best would be a streaming solution that improves the picture quality frame after frame…

Transmit 1 picture for later investigations

The second use case requires a higher-quality picture transmitted over a longer period. Using WebP compression at good quality, the original image is about 16KB (15,668 bytes).

Transferring such a picture over the Sigfox network requires:

  • 100bps / N=3 => 217 hours
  • 600bps / N=3 => 36 hours
  • 600bps / N=1 => 12 hours
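These durations follow directly from the hourly payload budgets (72, 432 and 1296 bytes per hour, under the frame-overhead assumption used earlier); a quick sketch:

```python
# Hours of duty-cycled transmission needed for the 15,668-byte WebP image,
# dividing by the hourly payload budgets quoted above.

IMAGE_BYTES = 15_668

def hours_needed(budget_bytes_per_hour: int) -> float:
    return IMAGE_BYTES / budget_bytes_per_hour

for label, budget in [("100bps / N=3", 72),
                      ("600bps / N=3", 432),
                      ("600bps / N=1", 1296)]:
    print(f"{label}: {hours_needed(budget):.1f} hours")
    # 217.6, 36.3 and 12.1 hours respectively
```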

Waiting 12 hours to get a high-quality color image is acceptable in most cases, provided you were able to get the low-definition picture first and then request the high-definition one from the device.

It means that with 600bps, the N=1 factor is something that needs to come along with it. In my opinion, 36 hours is a bit too long.


The network capacity improvement to 600bps is clearly an enabler for image transfer, and even really low-quality images can be used for security purposes. The N=1 factor is also, in my opinion, a required capability for such use cases.

The use of larger frames (over a 12B payload) could also be a solution to improve the quantity of data transmitted within the duty-cycle time frame. Larger frames would also allow keeping N=2 for QoS with an equivalent bandwidth. To be more precise, the current Sigfox protocol only has about 48% efficiency (payload / total length). 24B to 64B payloads could bring it to 83%.
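The efficiency figures can be checked with the same ~13-byte per-frame overhead assumed earlier (an approximation on my part, which reproduces both quoted percentages: 12/25 ≈ 48% and 64/77 ≈ 83%):

```python
# Payload efficiency = payload / total frame length, assuming ~13 B of
# per-frame overhead (an approximation, not an official Sigfox figure).

OVERHEAD_BYTES = 13

def efficiency(payload_bytes: int) -> float:
    return payload_bytes / (payload_bytes + OVERHEAD_BYTES)

print(f"{efficiency(12):.0%}")   # 48%
print(f"{efficiency(64):.0%}")   # 83%
```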

User payload per 1% DC    12B frames    64B frames
100bps / N=3                     72B          128B
600bps / N=3                    432B          768B
600bps / N=2                    648B         1152B
600bps / N=1                   1296B         2304B

You can see the impact of such an evolution. Even if it is a bit more intrusive for the Sigfox protocol, it could be another enabler for such use cases.

Now, we have to wait for an official offer from Sigfox for this feature. This is the more problematic part, as Sigfox currently proposes innovations only to a few VIP clients, with too long a time to market for smaller businesses; something difficult to understand for the ecosystem, and one that really limits the ecosystem's innovation rate.

2 thoughts on “How to transfer pictures over the Sigfox network?”

  1. WebP is optimized for a scenario where the image is compressed once (by a powerful computer), to later be viewed millions of times by web browsers. This optimization makes sense in the web context, but producing WebP images on IoT devices might be a big challenge.

    A test showed that creating a WebP image from a 300×300, 264 KB uncompressed image required 40MB of RAM and several minutes of CPU time on a desktop PC. That’s far from the power available in most microcontroller-based IoT devices.

    Possible solutions:
    * Maybe webp isn’t the way to go, but there are other image formats that offer high compression, for example

    * An IoT device that has a camera probably supports hardware compression (H.264 being the most common format). These compression functions can work on still images as well, so maybe it is just a matter of asking the hardware to deliver a compressed still image, relieving the microcontroller from performing the compression.
