Debunking an LPWAN / IoT comparison

The LPWAN comparison artwork

Recently, on LinkedIn, I reacted to a publication that looks like this one. I'm used to reacting to LPWAN publications when they compare technologies the way this one does: comparing apples with eggs, usually meaningless. This one particularly interested me because most of its content is nonsense and scientifically questionable. I'll detail it in this blog post.

It's really interesting to look at the way it was made, the way the author published it and reacts to it on LinkedIn, and the objective it serves: capturing people in a world where the truth is adapted to make you believe that only one of the technologies serves all possible use-cases and that all the others are the worst in existence. The purpose is to sell you books and services. This really looks like the way flat-earth believers, radio-wave-danger believers and other such groups recruit adepts and then sell goods to them. It's really funny to watch and discuss.

As the author of the original document above considers his slide "art" that you can't use, copy or crop (even though he published it online on a social network), I made my own simplified version of it, to avoid feeding the promotion this guy is looking for when he trolls the social networks. The curves you see are exact copies of the original ones. These data seem to come from university work and need to be debunked. I simply don't name the highlighted technologies other than Sigfox and LoRaWAN, because those are the ones the slide tries to discredit, and we will see how wrong that is.

I do not identify the original author of that "artistic work" because I consider the scientific quality of that "work" so bad that it discredits this person, his students and the associated university too much, and I don't want to discredit these people directly. I took the same approach on LinkedIn, but the author immediately identified himself to start his promotion.

As I did not have access to the full study or the sources of these graphs, I can't tell whether the quality of the initial work is bad or whether the context of the experiment is explained somewhere. Maybe the original document reaches different conclusions, so I'll try not to judge the original work too harshly. I'll judge what the author of the slide gave us: a single slide with pseudo-scientific information and a fake conclusion. Apparently, if you are ready to pay for the book / register…, you can get more details, something I do not want to do, to avoid feeding the troll.

Overall analysis

overall view of the slide

What does that slide basically tell us? This is an LPWAN comparison, and it is supposed to be really serious as it comes from a university. Knowing the author, "German" is also an important keyword: most of the time in his discussions, he presents Germany as the source of everything concerning LPWAN. I don't want to debate that; the point is that all of this is supposed to give a lot of credit to the publication. At the bottom, you have some university logos and a reference to a professor.

This is a work apparently made by a group of students. I'm also a teacher and I've done a lot of work with students; if the author conducted the project with the students and supervised them, honestly it should not be presented this way. If the work was made by a university laboratory, then, as you will see in the analysis below, I'm really worried about the university's work and I don't see how it could have been published without a peer-review rejection.

You can also see that this slide has been presented at a conference, another supposed proof of the quality of the content… When you see all these elements, with the only link pointing to a paid publication, this should already be a warning for you. Here we have all the elements of manipulation. When you see that you have to buy something to get the details, you should understand that we are not talking about science here but business. Business is not bad, but when you see "The winner is" with no context given at all, your fake-science alarm should immediately ring!

The purpose of the graphics is just to build trust: they are technical enough that 95% of readers won't understand them, yet they address concepts simple enough that most people won't realize they don't really understand them. They are the pseudo-scientific proof of the conclusion "The winner is …".

While there is no information about the context of the experiment, there is one extra detail about the LoRaWAN gateway used. This information is in fact not interesting at all for analyzing the context, but it sends a message of "transparency" to the reader. It's interesting to read something about ADR; we will see it in detail later. This contextual detail is supposed to make you more confident, as I think 95% of readers don't know what it means.

On the other hand, these details are interesting. Look at how scammers operate: they deliberately write low-quality emails to phish you, because they don't want knowledgeable people to respond. Knowledgeable people won't pay the scammer in the end, so the scammer would lose time, and a knowledgeable victim would demand a refund later once he sees the quality of the content. So scammers prefer to filter people by letting you understand it's a scam. Here we have the same approach: there is just enough information for the 5% of people who understand; they will immediately know the whole content is pure bullshit and walk away. The others, maybe you, who don't have the skills to detect this, will be curious and look at the details.

Let's see what shocks the IoT & LPWAN experts on such a slide

All the technical details that set off the fake-science detector

Here is the same slide, but for each point that rings my fake-science detector, you see a circle and a letter. As you can see, it's not really possible to trust this slide. The high density of unscientific, wrong information makes it remarkable. This is also how the author operates, and I really respect the way these documents are constructed. On one hand, 95% of readers may think the information is potentially true; on the other hand, there are so many wrong things that the experts will react to the social-media post. This is exactly what the author is looking for: he wants to create buzz and benefit from the other experts' networks. For every response, the author will post links and other slides like this one, with the same level of technical quality. In the end, this generates traffic to his shop, and he expects to sell books.

What is also interesting, as with flat-earth believers, is that many people want to believe what this guy says; whether it's wrong or true is not the question. They want to believe it because LPWAN IoT is so disruptive for people who have been doing 3GPP for decades that some of them would have to learn a lot to stay state of the art. A simpler world, where things are stable and no new skills have to be learned, is really comfortable. This is how you find adepts.

Now, I'll detail the different points to show how what is displayed on this slide is nonsense from a technical point of view.

The overall idea of the document is to claim that Sigfox, LoRaWAN and Tech #2 lose so many packets, even at short range, that they can't be used, and that only Tech #4 is a viable solution.

The first problem (J) with this study is the use of ADR for LoRaWAN. ADR is Adaptive Data Rate: the Spreading Factor and the transmit power of the device are adapted based on previous communications. As any LoRaWAN user should know, you must not use ADR on a mobile device. How it works: the network server requests the device to reduce its transmission capability when conditions are favorable. In an experiment like this one, you start at distance 0 where conditions are really, really favorable, so the network requests the device to reduce power and SF. At the next measurement point, you are at a larger distance with parameters calculated for the short distance… We don't know the device settings, and we don't know how many messages were sent at each position, so it's impossible to see the exact impact, but this choice is at the very least a big mistake in such an experiment.
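To make the ADR problem concrete, here is a minimal sketch of an ADR-like decision (not the actual LoRaWAN network-server algorithm; the 10 dB margin is my assumption): the network tunes the SF for the *last* position, so a device that then moves away transmits with settings that no longer fit the link.

```python
# Minimal sketch (not the LoRaWAN spec algorithm) of why ADR hurts a
# moving device: the network tunes the SF for the *last* known position.

# Demodulation SNR floors per spreading factor (typical LoRa values, dB)
SNR_FLOOR = {7: -7.5, 8: -10.0, 9: -12.5, 10: -15.0, 11: -17.5, 12: -20.0}
ADR_MARGIN = 10.0  # safety margin kept by the network server (assumption)

def adr_choose_sf(measured_snr_db: float) -> int:
    """Pick the fastest SF whose floor still fits under SNR minus margin."""
    for sf in range(7, 13):  # SF7 is fastest but least sensitive
        if measured_snr_db - ADR_MARGIN >= SNR_FLOOR[sf]:
            return sf
    return 12  # worst case: keep the most robust setting

# At distance 0 the uplink SNR is excellent, so ADR picks SF7...
sf_near = adr_choose_sf(10.0)
# ...then the device moves away and the SNR drops to -15 dB: SF7 can no
# longer be demodulated (-15 < -7.5), the frame is lost, and the loss
# gets blamed on "range" instead of on the experimental setup.
link_ok_far = -15.0 >= SNR_FLOOR[sf_near]
print(sf_near, link_ok_far)
```

With a static device this adaptation is exactly what you want; with a moving device it guarantees losses right after every favorable position.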

RSSI vs DISTANCE

RSSI vs DISTANCE

This first graph shows the RSSI (Received Signal Strength Indication), the signal strength in dBm. As the communication goes from device to network, this signal should be measured on the network side. It could be on the device side, but there is no information about this on the slide. Before trusting an RSSI value, you need to know that it is not always accurate: not all devices are calibrated the same way, and sometimes no real unit can be associated with the RSSI because no calibration was performed. For Sigfox and Tech #4 we can consider the values accurate because those networks use high-quality gateways. But for LoRaWAN and Tech #2, nothing is certain. A note on this would have been important. On a standard LoRaWAN gateway, this is sometimes a setting you can adjust.

Transmission link budget

The RSSI is basically the signal level at reception; it results from several contributions:

  • The device TX power (initial signal strength)
  • The device antenna gain / loss
  • The loss related to the transmission medium and distance
  • The receiver antenna gain
  • The receiver internal loss
  • The measurement bias

From this we can understand several things. First, if we want to compare RSSI across technologies, we need to make sure the different technologies use the same TX power and the same antenna gains on RX and TX… And if you achieve all of this, you are just comparing the transmission attenuation, also called Free Space Path Loss (FSPL). To make it simple, on Earth, this is just a function of the distance and the frequency.
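The FSPL statement above can be checked numerically. This is the standard free-space formula (not something from the slide); 868 MHz is the usual EU ISM band for Sigfox and LoRaWAN:

```python
import math

# Free Space Path Loss in dB:
#   FSPL = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)
# with d in meters and f in Hz. Only distance and frequency appear:
# the technology itself plays no role.
C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m: float, freq_hz: float) -> float:
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / C))

print(round(fspl_db(1, 868e6), 1))       # ~31 dB after the first meter
print(round(fspl_db(10, 868e6), 1))      # ~51 dB at 10 m
print(round(fspl_db(30_000, 868e6), 1))  # ~120 dB at 30 km
```

Note that doubling the distance always adds the same 6 dB, which is why plotting RSSI against distance tells you about geometry, not about the radio technology.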

So here we have something interesting: this graph tries to compare something that is directly related to the distance with the distance itself, and the technologies are barely part of the equation… The only things that change between technologies are the frequency for Tech #4 (but the bands are really close, so the impact is limited) and the device transmit power, which also benefits Tech #4. What I mean is that by doing this comparison, you are not comparing the technologies; you are comparing the device antenna gain, the network antenna gain… and that's it.

Now let's see the details:

  • N – The average RSSI: we do not know how many frames were captured for this average. I sometimes use this method to measure the improvement made on my device antennas, and I know you need a minimum of about 20 messages to get a correct average, because RSSI variability from one message to the next is high. Did they do that? Or just 2?
  • C – The distance: it's interesting to see that the distance is limited to 15 km. LoRaWAN is well known for covering 20 to 80 km without any problem, and Sigfox 40-80 km. So why not use a longer distance? You could tell me the experiment shows Sigfox & LoRaWAN stop below this distance, but as the facts prove these data are wrong, we can assume that this way you will never see the limit of Tech #4, which may well be below the commonly seen Sigfox & LoRaWAN limits. The other interesting point is the distance calculation. Sigfox and Tech #4 are operated networks, so you don't really know whether there is one antenna or several around, and you don't know exactly where they are. Knowing the distance and the context would have been interesting information.
  • E – LoRaWAN: there are many ways to communicate over LoRaWAN; you can use SF7 to SF12, and this really impacts the sensitivity, which ranges from -123 to -136 dBm. As the signal stops being received at -120 dBm on the graph, we can assume they used the worst conditions for the test. Why?
  • D – No data over 9 km for LoRaWAN, Sigfox and Tech #2. For LoRaWAN and Sigfox, we all know that communication works really well beyond these distances. It's impossible to have no communication at this distance in an open field. You can have frame loss, but on average some frames must be received. The only blocking reasons can be geography (like a mountain between the device & the antenna) or a high noise level. Here is an example for LoRaWAN on the Helium network, where you see messages captured 21 to 25 miles (about 40 km) away with RSSI from -104 dBm to -122 dBm.
  • A – Variability at distance 0: here we see a large variability at distance 0. As we have seen, distance is a factor of the RSSI, and when you are close to the source the signal goes really high. To make it simple, attenuation in the air is a logarithmic function of distance (you can run the simulation yourself): you lose about 31 dB after the first meter and 51 dB at 10 meters, but you need about 30 km to lose 120 dB. So the real distance at "point 0" for LoRaWAN and Tech #2 is about 10 m, which makes sense as these are technologies you can deploy privately, so you control the distance. But for Sigfox, it seems the real distance from the base station is more like 1 km than 0 km to get such a signal, and when the device then moves, we don't really know how to interpret the distance axis. For Tech #4, a -120 dBm reading indicates the antenna is more like 10+ km away, certainly not at 0, or it indicates that the RSSI information is simply not valid.
  • B – RSSI evolution: here we have different points at different distances. We can consider this the distance from point 0, but as we don't know where point 0 was for Sigfox and Tech #4, the distance evolution has potentially not been the same. As an example, if we look at Tech #4 (the champion), at km 6 the RSSI is stronger, which means the distance to the antenna is lower; so at km 6 the device was in reality closer to an antenna. It is not km 6… it's bullshit. We have no information about the experimental conditions. As the distance is really small compared to what these technologies allow, we can suspect various problems not mentioned. In theory, for Sigfox and LoRaWAN, the RSSI should be around -100 dBm at 9 km, not the -120/-130 dBm measured. The reason can be a bad antenna, an impedance mismatch linked to the setup, or possibly a test made inside a city where many obstacles create a longer path and more attenuation. In such a case it should be indicated, and the distance is then not relevant, as it depends on the direction you go.
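The "around -100 dBm at 9 km" figure in the last bullet can be sanity-checked with a rough free-space link budget. The TX power and antenna gains below are plausible assumptions (an EU 868 MHz device at 14 dBm with a modest gateway antenna), not values from the slide:

```python
import math

# Rough free-space link-budget check of the expected RSSI at 9 km.
# 14 dBm TX, 0 dBi device antenna, 2 dBi gateway antenna: assumptions
# for illustration, since the slide gives no such context.
C = 299_792_458.0

def fspl_db(d_m: float, f_hz: float) -> float:
    """Free Space Path Loss in dB (d in meters, f in Hz)."""
    return (20 * math.log10(d_m) + 20 * math.log10(f_hz)
            + 20 * math.log10(4 * math.pi / C))

tx_dbm, tx_gain_dbi, rx_gain_dbi = 14.0, 0.0, 2.0
rssi = tx_dbm + tx_gain_dbi + rx_gain_dbi - fspl_db(9_000, 868e6)
print(round(rssi, 1))  # in the -90s of dBm, nowhere near -120/-130
```

Even with a few extra dB of cable loss and fading margin, free space at 9 km leaves a comfortable margin above the -120/-130 dBm shown on the graph, which is why obstacles or a faulty setup are the more plausible explanations.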

SNR vs DISTANCE

SNR over distance

The SNR (Signal-to-Noise Ratio) is the ratio between the signal and the noise. If the value is positive, you have more signal than noise. Some technologies like LoRaWAN can receive a signal even with a negative SNR; this depends on the SF chosen: -7.5 dB on SF7 vs -20 dB on SF12. (F) The SNR depends on the noise around the receiver, which is not related to the distance, and on the power of the received signal, which is basically the RSSI. As a consequence, making a graph of SNR over distance right after a graph of RSSI over distance is scientific nonsense: on one hand it shows the same thing, and on the other hand, the only thing that can change during the experiment is the noise level around the receiver, which depends on time, not distance. So when you see such a graph, the fake-information bell rings.

  • G – Initial values show us a problem on LoRaWAN. At distance 0, we have RSSI -40 dBm but SNR 0 dB, which basically means the noise level equals the signal level. So during the experiment, the LoRaWAN gateway had a noise level of -40 dBm. This is really high, and as a consequence it can explain why the maximum distance was so bad. But the reason is not the technology; it is the noise around the gateway installed by the person who captured these data… At the opposite end, Tech #2 shows 120 dB of SNR, which basically means the signal strength is 120 dB above the noise; the RSSI was -20 dBm, so the noise would be -140 dBm. If Tech #2 and LoRaWAN are in the same location (distance 0), they apparently do not benefit from the same noise conditions. The two other technologies show 40 dB of SNR, but we know that their positions are unknown. They are technologies deployed by network operators, so the noise conditions are supposed to be good.
  • At 12 km, the Sigfox SNR is stable, indicating a good signal margin of 20 dB. From 9 to 12 km, the additional attenuation at 868 MHz is only about 2.5 dB. This is proof that the reason Sigfox communication stops after 9 km is not the signal strength. It can potentially be the geographical environment, something between the device and the antenna. Beyond 50 km, the SNR would still be positive if we extrapolate the values.
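The arithmetic behind the bullets above is just the definition of SNR in dB: noise floor = RSSI - SNR. The RSSI/SNR pairs below are the ones read off the slide as described in the text:

```python
# Consistency check used above: in dB terms, SNR = RSSI - noise floor,
# so the noise floor around each receiver can be recovered from the graphs.

def noise_floor_dbm(rssi_dbm: float, snr_db: float) -> float:
    return rssi_dbm - snr_db

# LoRaWAN at distance 0: RSSI -40 dBm with SNR 0 dB -> noise at -40 dBm,
# a terribly noisy gateway site.
print(noise_floor_dbm(-40.0, 0.0))
# Tech #2 at distance 0: RSSI -20 dBm with SNR 120 dB -> noise at -140 dBm,
# an implausibly quiet site if both devices really shared "point 0".
print(noise_floor_dbm(-20.0, 120.0))
```

A 100 dB gap in noise floor between two receivers supposedly sitting at the same point is exactly the kind of inconsistency that shows the experimental context was not controlled.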

Frame loss rate

Packet Loss Rate

Let's take a look at the conclusions; they are about packet loss percentage. I basically read this as: over 80%, you have no more communication; this is the limit of the technology. But LPWAN stands for Low Power Wide Area Network, and "wide area" means dozens of kilometers. So these results look surprising at first sight, and basically wrong without more context.

  • H – Packet loss definition: when you deliver a statistical study with a percentage, you at least need to indicate the number of packets transmitted. Were 10 or 100 packets sent? You also need to define what a packet is. LoRaWAN has retry and repeat mechanisms, Sigfox transmits 3 packets for one data frame, Tech #2 sends multiple packets with redundancy for a single message, and Tech #4 has an acknowledgment mechanism. So depending on what was used, and on any retransmissions, we are not comparing the same thing.
  • L – Results: Tech #4 loses no packets, but we saw previously that across all the measurements its distance was stable (same or higher RSSI, same SNR). So what can we conclude, other than that the experiment is fake or badly executed? Same situation = same result. Tech #4 is network-based, so there are many reasons to have multiple cells involved in the test. Tech #2 does not work over 12 km; this can be true, as I don't know the over-distance performance of that technology. The LoRaWAN distance limit shown here is factually wrong, but we identified a high noise level on the gateway side, so the problem lies in the experiment, not the technology. And for Sigfox, we saw that the SNR was strong enough to offer coverage 30-50 km beyond the last point. So the best explanation is that the geography simply cut the communications; we can add that this was done in an area without much Sigfox redundancy, otherwise another base station would have picked up the traffic.
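The definition problem in (H) is easy to quantify. With Sigfox's 3 repeated copies per frame, the same radio conditions give a very different "loss percentage" depending on whether you count raw packets or delivered frames. The 50% single-copy loss rate below is an illustrative assumption, and the independence of copies is a simplification:

```python
# Why "packet loss %" is meaningless without defining a packet:
# Sigfox sends each data frame 3 times, so the frame is lost only if
# all 3 copies are lost (assuming independent losses, a simplification).

def frame_loss(per_copy_loss: float, copies: int) -> float:
    """Frame loss probability with n independently repeated copies."""
    return per_copy_loss ** copies

p = 0.5                  # assume 50% of individual transmissions lost
print(frame_loss(p, 1))  # counting raw packets:          0.5
print(frame_loss(p, 3))  # counting Sigfox frames (x3):   0.125
```

The same link could therefore honestly be reported as either 50% or 12.5% loss; without stating which was measured for each technology, the bar chart compares nothing.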

Conclusion

Thank you for reading this long document. I hope most of you now have a better understanding of RSSI, SNR and packet loss, and also of Sigfox and LoRaWAN. I hope you will be better prepared for the kind of documents shared on social media that have the flavor of scientific publications but are not scientific at all.

This really reminds me of the excellent Netflix documentary about flat-earth believers, where at the end they run an experiment with a laser to get proof of their belief and then use it to convince other people. A scientific experiment can't be reduced to a conclusion in a graph; its value comes from the context of the experiment, and it must be reproduced by someone else, in different places.

We have seen that even if the values above can be explained, they all depend on a really specific experimental context that does not match any of the real conditions you will find in the field. These conditions are multiple: some will look like these ones, but most will be different and will totally change the graphs. There is no winner here; there are just two losers: scientific methodology and the truth.
