Many crypto projects are running decentralized computing, a good complement to the distributed storage and messaging I previously documented. The principle is to deploy a workload on someone else's computer to serve a reliable web service. This is an IaaS or CaaS layer like the ones we can find on Azure, AWS, GCP, OVH… and an important piece of the decentralized cloud being built in web3.
In this blog post, I'll review some of these projects and explain how to join and use the one that seems the most reliable. As there are many different solutions and testing them all would make a single blog post far too long, this review will be split into several posts. The first one is about Flux.
Flux, the most trendy but making the least sense
Flux is a really strong project and the largest of the lot. In my opinion it is successful thanks to all the mechanisms in place to make it profitable for mining, but at the same time it is just not good at what it was built for: delivering computing power.
Why do I say this? The token distribution is quite strange: 50% of the distributed tokens go to proof-of-work mining. I assume GPU miners are happy, but this brings no benefit to distributed computing. This is clearly the useless-idiot part of the tribe: they burn energy mining, which gives the token a value; it basically protects the token value and that's it. All the other operations running the chain could be done with a PoS and run on the Flux computers themselves, creating usage. The other 50% are distributed to those providing the hardware, with a proof-of-stake approach. This is a good way to reduce the available token supply and maintain the token value. You need to stake a lot of tokens to add an instance to the network, and the size of the machine you can add is related to the amount you stake.
| Type | Server Size | Rewards | Public cloud price for equivalent | Real APY |
|---|---|---|---|---|
| Cumulus | 2 cores, 8GB | 5.6 FLUX / block found | 20€ / month | ~7% |
| Nimbus | 4 cores, 32GB | 9.4 FLUX / block found | 27€ / month | ~14% |
| Stratus | 8 cores, 64GB | 22.5 FLUX / block found | 40€ / month | ~13% |
The reward is given to one node per block, per node type, so depending on the number of active nodes you may wait quite a long time for your reward. Rewards are not proportional to your stake but come as a fixed amount per block, so the real reward APY depends on the number of nodes. The current FLUX per day is given in the following analytics. Since you also have a running cost for the server, what matters is the REAL APY. It depends on the token price; I've taken the current price ($1.2).
In my opinion this APY is really low compared to the risk taken by staking $48k; many projects offer a better APY with far less operational complexity. The available capacity is also limited by the reward distribution going down as nodes are deployed: the reward APY decreases with the number of nodes. The good news (for node operators) is that the higher the token price, the slower the number of nodes (and the service offering) grows.
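To make the REAL APY reasoning concrete, here is a minimal sketch of the calculation. The numbers are illustrative: the $48k stake comes from the text (~40,000 FLUX at $1.2 is my assumption of the corresponding amount), the daily reward of 15 FLUX is a placeholder in the order of magnitude of the table above, and the server cost matches the Stratus rental figure.

```python
# Sketch of the "REAL APY" computation: reward income minus server running
# cost, annualized, relative to the locked stake.
# daily_flux_reward would come in practice from the network analytics
# (reward per block * blocks found per day / nodes of that tier).

def real_apy(stake_usd: float,
             daily_flux_reward: float,
             flux_price_usd: float,
             server_cost_usd_month: float) -> float:
    """Annualized yield net of server running costs, relative to the stake."""
    yearly_reward_usd = daily_flux_reward * flux_price_usd * 365
    yearly_cost_usd = server_cost_usd_month * 12
    return (yearly_reward_usd - yearly_cost_usd) / stake_usd * 100

# Stratus-like example: $48k locked, hypothetical ~15 FLUX/day reward,
# 40€ (~$44) monthly server rental, FLUX at $1.2.
apy = real_apy(stake_usd=48_000,
               daily_flux_reward=15,
               flux_price_usd=1.2,
               server_cost_usd_month=44)
print(f"REAL APY ≈ {apy:.1f}%")
```

Note how sensitive the result is to the token price: the reward side scales with it while the server cost is fixed in fiat, which is why the REAL APY collapses if FLUX drops.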
Currently there are 14,000 nodes, mostly Cumulus, with a total of 271TB of RAM. This indicator is the most important one, as RAM can't really be over-allocated, and it lets you estimate the number of physical servers it corresponds to: about 352 servers as commonly configured in a data center, or roughly 35 bays. So Flux is basically equivalent to one room in a standard datacenter. The locked stake is about $158M (132M FLUX), which is 3x to 5x more than what you would need to build equivalent computing power: my cost estimate for such an infrastructure using bare-metal Dell servers is about $20M-$50M.
- Considering 352 servers with 768GB RAM, >16TB storage and 2x 16-thread CPUs, we have less CPU, but Flux counts vCPUs. At the public price of a Dell R350 at 20,850€ each, that is about 8M€; let's assume 2x for a 4-socket version (really conservative) and you get $20M max.
- Considering 12,624 servers with 32GB RAM, 480GB storage and 2x 8c/16t CPUs, we have more of everything. At the public price of a Dell R350 at 3,450€ each, that is about $44M.

As most Flux instances are Cumulus, mostly backed by virtual machines on public clouds, the first assumption is the more relevant.
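The two scenarios above boil down to simple arithmetic; here is a sketch reproducing them (the list prices and the 2x multiplier for the 4-socket variant are the assumptions stated in the text, with EUR and USD treated as roughly 1:1 for simplicity):

```python
# Rough cost estimates for an infrastructure equivalent to the Flux network,
# following the two scenarios described above.

# Scenario 1: few large servers (768GB RAM each)
large_servers = 352
price_large_eur = 20_850            # public Dell R350 price quoted in the text
four_socket_multiplier = 2          # conservative 2x for a 4-socket version
scenario1 = large_servers * price_large_eur * four_socket_multiplier

# Scenario 2: many small servers (32GB RAM each)
small_servers = 12_624
price_small_eur = 3_450             # public Dell R350 price quoted in the text
scenario2 = small_servers * price_small_eur

print(f"Scenario 1 (large servers): ~{scenario1 / 1e6:.0f}M")
print(f"Scenario 2 (small servers): ~{scenario2 / 1e6:.0f}M")
```

Both land well below the $158M of locked stake, which is the point: the network locks several times more capital than the hardware it replaces is worth.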
What does that mean? It means that Flux requires much more investment than a classical public cloud to deliver its service, and in no case can it be competitive in the end. The value of FLUX has been growing, so we can assume the initially locked value was much lower; that could be an argument, but whatever: at the current token value, this level of investment makes the model not scalable. And I let you imagine how much you could get from your bank with $158M locked as a financial guarantee, and what you could build with that.
The cost of using Flux is paid in FLUX, which means you don't know what your service price will be in the future. This is something I usually point out as a big adoption risk. But what is stranger is that the service price is lower than the real underlying hardware cost. When you price a cloud offer, you primarily price it based on RAM used (not easy to over-allocate), then on hardware size. Here, every metric has its own economics (1 core = 3GB RAM = 60GB HDD). As a consequence, you may saturate the memory on the physical machine where a workload is deployed but not its CPU or hard drive. This is wasted hardware and lost potential cash for node providers.
A pitfall of having mostly small Cumulus machines (75% of the network) is the large fragmentation of the resource pool. When building a cloud service, you normally use the largest compute nodes possible to optimize resource usage. As an example, if you rent 1 CPU, 8GB RAM and 30GB storage, you lock an entire Cumulus but use only 50% of its CPU and 10% of its storage, and these leftovers can't be shared with anyone else since the memory is fully allocated. The same can happen with the HDD.
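A small sketch of this fragmentation effect, using the rental example above (the node's 300GB of storage is inferred from "30GB is 10% of storage"; it is not an official Cumulus spec):

```python
# Fragmentation sketch: a single workload can exhaust one resource (RAM)
# on a Cumulus-sized node while leaving the others mostly idle.

node = {"cpu": 2, "ram_gb": 8, "storage_gb": 300}       # Cumulus-sized host
workload = {"cpu": 1, "ram_gb": 8, "storage_gb": 30}    # rented instance

# Percentage of each node resource consumed by the workload.
utilization = {k: workload[k] / node[k] * 100 for k in node}
print(utilization)

# RAM is 100% allocated, so nothing else can be scheduled on this node,
# even though half the CPU and 90% of the storage sit unused.
stranded = {k: node[k] - workload[k] for k in node}
```

On a larger host, the same workload would leave the remaining RAM usable by other tenants; with a fleet of small nodes, the stranded CPU and storage are simply lost.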
The other interesting point is that the equivalent of a Cumulus machine costs $6.3 (5 FLUX) per month, but you won't find hardware available at that price: it costs about $20 to rent it. If you buy the hardware and host it at home, it means about a 10-year return on investment. So it only makes sense for really small instances (10% of a Cumulus), which cost about $2 and reach an equilibrium between running cost and service price. In fact, in this model, the hardware provider's revenue comes from the rewards tied to the locked FLUX, basically speculation, and this is not a viable mechanism forever…
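The ~10-year payback claim follows directly from the gap between the service price and the hardware cost; here is the arithmetic, where the $750 hardware price is a hypothetical figure I picked as consistent with the claim, not a quoted price:

```python
# Payback sketch: revenue from renting out a full Cumulus-equivalent at the
# Flux service price vs. the cost of owning the hardware.
# Electricity and other running costs are ignored here, which would only
# make the payback period longer.

hardware_cost_usd = 750        # assumed purchase price of a small home server
monthly_revenue_usd = 6.3      # Cumulus-equivalent service price (5 FLUX)

payback_years = hardware_cost_usd / (monthly_revenue_usd * 12)
print(f"Payback ≈ {payback_years:.1f} years")
```

At the $20/month public-cloud rental price instead, the same hardware would pay for itself in roughly three years, which shows how far below cost the Flux service price sits.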
As a consequence, a strong FLUX token value is needed to sustain this equilibrium, and in the end it will never be more cost-effective than a centralized cloud solution.
Flux has been made for CaaS, not IaaS, but it does not include the equivalent features you can find in a K8S infrastructure (at least currently). The architecture would fit well with FaaS, but I did not see it implemented. To use Flux you need to create and register a Docker container; this is a standard approach and that's positive. The negative point is that registering your container (App) is a centralized process. If you want to make an app for your private usage, you need to disclose what it does and show it publicly. This limits the potential for private use since, if I understand well, you create a public container that anyone can instantiate. Only 358 Docker images have been validated so far; most are well-known public images, the others are made by anonymous people you need to trust in terms of security, with 15% of the containers made by a single guy, plus 8 Minecraft server versions. Among the running instances, you find about 140 MariaDB instances and about 20 Portainer instances.
This website monitors Flux usage, and apparently it runs about 844 paying apps today (depending on where you look) for about 1,975 FLUX per month. It's a beginning, and I'm happy with this number given the complexity of getting something running on Flux.
I might be stupid, but after an hour I did not succeed in deploying a container. I tried to create a Portainer container, as it is whitelisted, to get more freedom to run my own containers, but it seems to be just a trial with a Portainer definition limited to 100MB of RAM… I'm sure of nothing, but I was stuck with a memory error message when trying to deploy it with 1024MB or more. After an hour of investigation I stopped trying to run something that way. Documentation is quite limited beyond new container creation, and the community does not seem really interested in writing tutorials on how to use it. If you know good resources for deploying a company backend workload on it, let me know in the comments; I will be happy to change my opinion.
In the dashboard you can use the marketplace to deploy applications. This seems simple, but you need the Zelcore application installed on your machine (not available on Linux). I did some tests with my smartphone, and the experience would not be so bad if there were something interesting to deploy from the marketplace; basically I found 7 crypto apps plus a Hello World. That is a bit limited and did not retain my interest.
Greenhouse gas impact
So to conclude, I really love the front page (which you can appreciate on the left) about going green, when you know that 50% of the rewards are spent on highly energy-intensive PoW calculations: at an estimated 8.66 MSol/s and 0.35 Sol/W, you get about 24MW, that is about 210GWh per year, equivalent to 12,500 tCO2 per year.
The service side corresponds to about 352 servers drawing 140kW, or 1.2GWh per year, equivalent to 80 tCO2, for doing basically nothing. Not that it couldn't be used, but there is currently nothing interesting a professional can do with it. I think a low level of usage is normal for new web3 technologies, but at least you should be able to build something with it, like you can on standard solutions such as AWS, Azure or GCP… PoW is basically over-consuming, inflating the footprint of the service by a factor of about 200! Crazy!
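The PoW-versus-service energy comparison can be reproduced from the figures above (hashrate, GPU efficiency and server power draw are the estimates from the text; the conversion to yearly energy assumes continuous 24/7 operation):

```python
# Energy comparison between Flux PoW mining and the compute service itself.

HOURS_PER_YEAR = 8760

# PoW side: network hashrate and assumed GPU efficiency from the text.
hashrate_sol_s = 8.66e6                 # 8.66 MSol/s
efficiency_sol_per_w = 0.35
pow_power_mw = hashrate_sol_s / efficiency_sol_per_w / 1e6      # ~24.7 MW
pow_energy_gwh_year = pow_power_mw * HOURS_PER_YEAR / 1000      # ~217 GWh/yr

# Service side: ~352 servers drawing ~140 kW in total.
service_power_kw = 140
service_energy_gwh_year = service_power_kw * HOURS_PER_YEAR / 1e6  # ~1.2 GWh/yr

ratio = pow_energy_gwh_year / service_energy_gwh_year
print(f"PoW: ~{pow_energy_gwh_year:.0f} GWh/year, "
      f"service: ~{service_energy_gwh_year:.1f} GWh/year, "
      f"ratio ≈ {ratio:.0f}x")
```

The exact ratio depends on the efficiency assumption, but whatever the inputs, the consensus layer dwarfs the useful compute by two orders of magnitude.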
Flux will move to Proof-of-Useful-Work sometime later; the idea is to spend the PoW energy on something useful (like running a machine-learning process) instead of a pure math challenge. What is interesting here is that the PoW computing power is currently about 171x higher than the PoS computing power (a quick & dirty estimate based on power consumed; the real factor should be even higher as GPUs are more efficient at many tasks). I'm not sure the network will quickly find consumers for such computing power, but I hope so. So I'm not expecting it to go green shortly, and in the end it should not reduce the carbon emissions, just turn pure waste into something slightly better.